CMS Computing Model
Comment:
Built expressly for the definition of the resources: the LHCC review of January 2005.
Now it is a "planning document"!?
P. Capiluppi - CSN1 - Roma
31 January 2005
Baseline and "Average"
(From the LHCC Review of Computing Resources for the LHC experiments, D. Stickland, Jan 2005)
 In the Computing Model we discuss an initial baseline
– Best understanding of what we expect to be possible
– We will adjust to take account of any faster-than-expected developments in, for example, grid middleware functionality
– Like all such battle plans, it may not survive unscathed the first engagement with the enemy…
 We calculate specifications for "average" centers
– Tier-1 centers will certainly come in a range of actual capacities (available to CMS)
• Sharing with other experiments…
• Overall T1 capacity is not a strong function of NTier1
– Tier-2 centers will also cover a range of perhaps 0.5-1.5 times these average values
• And will probably be focused on some particular activities (calibration, Heavy-Ion, …) that will also break this symmetry in reality
Definitions in the CMS Computing Model (CM)
The Tier-n are "nominal", or better "average/canonical" Tiers
7 Tier-1s (including a special one at CERN)
25 Tier-2s (of which one special at CERN: ~2-3 canonical Tier-2s)
The first reference year is 2008 (even if this is a bit confused in the Computing Model paper)
The resources must be deployed in the year "reference minus 1" (cost evaluation): "We expect 2007 requirements to be covered by ramp-up needed for 2008"
Assumed scenario:
CM - Event Data Format (summary)

DAQRAW
  Content: Detector data in FED format and the L1 trigger result.
  Purpose: Primary record of the physics event. Input to the online HLT.
  Event size: 1-1.5 MByte
  Events/year: 1.5 × 10^9 (= 10^7 seconds × 150 Hz)
  Data volume: –

RAW
  Content: Detector data after on-line formatting, the L1 trigger result, the result of the HLT selections ("HLT trigger bits"), potentially some of the higher-level quantities calculated during HLT processing.
  Purpose: Input to Tier-0 reconstruction. Primary archive of events at CERN.
  Event size: 1.5 MByte
  Events/year: 3.3 × 10^9 (= 1.5 × 10^9 DAQ events × 1.1 dataset overlaps × 2 copies)
  Data volume: 5.0 PByte

RECO
  Content: Reconstructed objects (tracks, vertices, jets, electrons, muons, etc., including reconstructed hits/clusters).
  Purpose: Output of Tier-0 reconstruction and subsequent re-reconstruction passes. Supports refitting of tracks, etc.
  Event size: 0.25 MByte
  Events/year: 8.3 × 10^9 (= 1.5 × 10^9 DAQ events × 1.1 dataset overlaps × [2 copies of 1st pass + 3 reprocessings/year])
  Data volume: 2.1 PByte

AOD
  Content: Reconstructed objects (tracks, vertices, jets, electrons, muons, etc.). Possible small quantities of very localized hit information.
  Purpose: Physics analysis.
  Event size: 0.05 MByte
  Events/year: 53 × 10^9 (= 1.5 × 10^9 DAQ events × 1.1 dataset overlaps × 4 versions/year × 8 copies per Tier-1)
  Data volume: 2.6 PByte

TAG
  Content: Run/event number, high-level physics objects, e.g. used to index events.
  Purpose: Rapid identification of events for further study (event directory).
  Event size: 0.01 MByte
  Events/year: –
  Data volume: –

Notes: event rate 150 Hz; event size 1.5 MByte; second RAW data copy at the Tier-1s; RECO objects with 2+1 reprocessings/year; the AOD are the primary basis of analyses; the TAG is the events "catalog". A total of ~9.5 PByte per year.
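The yearly volumes in the table follow from simple arithmetic on the trigger rate, the live time, the dataset-overlap factor, the numbers of copies/versions and the event sizes. A minimal sketch (Python; the numbers are taken from the table, the helper function and variable names are ours) that reproduces them:

```python
# Sketch: reproduce the yearly event counts and data volumes of the
# CM event-data-format table (numbers from the slide, not authoritative).

RATE_HZ = 150            # HLT output rate
LIVE_SECONDS = 1e7       # assumed running time per year
OVERLAP = 1.1            # primary-dataset overlap factor

daq_events = RATE_HZ * LIVE_SECONDS                      # 1.5e9 events/year

def volume_pb(events, size_mb):
    """Yearly volume in PByte for a given event count and event size in MByte."""
    return events * size_mb / 1e9                        # 1 PB = 1e9 MB

raw  = daq_events * OVERLAP * 2                          # 2 copies (CERN + Tier-1s)
reco = daq_events * OVERLAP * (2 + 3)                    # 2 copies of 1st pass + 3 reprocessings
aod  = daq_events * OVERLAP * 4 * 8                      # 4 versions/year, 8 copies per Tier-1

print(f"RAW : {raw:.2e} events, {volume_pb(raw, 1.5):.1f} PB")   # ~3.3e9, ~5.0 PB
print(f"RECO: {reco:.2e} events, {volume_pb(reco, 0.25):.1f} PB") # ~8.3e9, ~2.1 PB
print(f"AOD : {aod:.2e} events, {volume_pb(aod, 0.05):.1f} PB")   # ~53e9,  ~2.6 PB
```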
Event Sizes and Rates
(From the LHCC Review of Computing Resources for the LHC experiments, D. Stickland, Jan 2005)
 Raw data size is estimated to be 1.5 MB for the 2×10^33 first full physics run
– Real initial event size more like 1.5 MB
• Expect to be in the range from 1 to 2 MB
• Use 1.5 as the central value
– Hard to deduce when the event size will fall and how that will be compensated by increasing luminosity
 Event rate is estimated to be 150 Hz for the 2×10^33 first full physics run
– Minimum rate for discovery physics and calibration: 105 Hz (DAQ TDR)
– Standard Model (jets, hadronic, top, …): +50 Hz
– An LHCC study in 2002 showed that ATLAS/CMS have ~the same rates for the same thresholds and physics reach
CM - Data Model specifications (or location)
The RAW, RECO and AOD are "split" into O(50) "Primary Datasets"
 Defined by the "trigger matrix" (L1 Trigger + High Level Trigger)
 Max 10% overlap
A second copy of the RAW data is distributed among the Tier-1s
 Only the CERN Tier-0 has the full RAW sample
The RECO (read: DST) are distributed among the Tier-1s
 No Tier-1 has all of them, but each has the share corresponding to its resident RAW data
 2+1 reprocessings per year, 2 at the Tier-1s and 1 at CERN (LHC downtime)
 Including the reprocessing of the simulated data
The AOD are all present at every Tier-1
 4 versions per year, resident on disk
The AOD and RECO can be distributed to every Tier-2
 Half of the "current" AOD and/or the RECO of at most 5 primary datasets
The "non-event" data (calibrations, alignments, etc.) are kept at the Tier-1(2) centers that analyse them and at the CERN Tier-0/Tier-1/Tier-2/Online farm
(A rough sketch of the per-site shares implied by these placement rules follows below.)
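To make the placement rules concrete, here is a minimal back-of-envelope sketch (Python; the function name and the rounded per-sample sizes are ours, derived from the event-format table; real data only, ignoring simulation, calibration and heavy-ion samples) of the custodial volume an "average" Tier-1 would hold per year under these rules:

```python
# Sketch: rough custodial volume at one "average" Tier-1 under the CM rules
# (second RAW copy split 1/N_T1, RECO share matching the resident RAW,
# full AOD at every Tier-1). Per-year, real data only; numbers are rounded
# from the event-format table, not taken directly from any slide.

N_TIER1 = 7                    # Tier-1 centres (including the special one at CERN)

RAW_SECOND_COPY_PB = 2.5       # one full RAW copy: 1.5e9 * 1.1 * 1.5 MB ≈ 2.5 PB
RECO_ONE_PASS_PB   = 0.4       # one RECO pass:     1.5e9 * 1.1 * 0.25 MB ≈ 0.4 PB
AOD_ONE_VERSION_PB = 0.08      # one AOD version:   1.5e9 * 1.1 * 0.05 MB ≈ 0.08 PB

def tier1_custodial_pb(reco_versions_kept=1, aod_versions_kept=1):
    """Rough custodial volume (PB) at one average Tier-1."""
    raw_share  = RAW_SECOND_COPY_PB / N_TIER1            # 1/N of the second RAW copy
    reco_share = RECO_ONE_PASS_PB / N_TIER1 * reco_versions_kept
    aod_full   = AOD_ONE_VERSION_PB * aod_versions_kept  # every Tier-1 hosts all AOD
    return raw_share + reco_share + aod_full

print(f"~{tier1_custodial_pb():.2f} PB per average Tier-1 (1 RECO + 1 AOD version)")
```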
CM - Data Flow
[Data-flow diagram: the Italian Tier-2 sites (Bo, Ba, LNL, Pd, Pi, Rm1) and the CNAF Tier-1, with flows labelled "MC data" and "Reprocessed data".]
CM - Analysis Model (description 1/2)
It will be defined in the C-TDR, and is still evolving
 The P-TDR activity will dictate its initial characteristics
 Because it will evolve over time anyway…
 Grid may bring a "significant change"
 But we start in a traditional way
 Nevertheless analysts are expected to have access to a User Interface at the Tier-2s and/or Tier-3s
 Only some users will submit jobs directly to the Tier-1s (or even to the Tier-2s); the majority will access them via Grid tools
The data are navigable within a "primary dataset":
 AOD → RECO → RAW (vertical streaming)
 Navigation is "protected"
 In any case it does not make sense to navigate from AOD to RECO, only from RECO to RAW
 Indeed, asking the AOD for objects that exist only in the RECO must raise an exception (see the sketch below)
The Event Data Model (framework) is being redefined:
 Also taking ideas from CDF/BaBar
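A minimal sketch of the "protected navigation" rule stated above (Python; the class and method names are illustrative, not the CMS framework API): navigating from a RECO object back to its RAW parent is allowed, while asking an AOD for RECO-only content raises an exception.

```python
# Illustrative sketch of "protected" vertical navigation AOD -> RECO -> RAW.
# Class and attribute names are hypothetical, not the CMS framework API.

class NavigationError(Exception):
    """Raised when an event format is asked for content it does not carry."""

class RawEvent:
    def __init__(self, fed_data):
        self.fed_data = fed_data            # detector data in FED format

class RecoEvent:
    def __init__(self, raw, tracks):
        self._raw = raw                     # RECO keeps a link to its RAW parent
        self.tracks = tracks                # reconstructed objects

    def navigate_to_raw(self):
        return self._raw                    # RECO -> RAW is supported

class AodEvent:
    def __init__(self, tracks):
        self.tracks = tracks                # compact reconstructed objects only

    def navigate_to_reco(self):
        # The AOD does not carry (or link to) RECO-only content: protected navigation.
        raise NavigationError("RECO-level objects are not available from AOD")

reco = RecoEvent(RawEvent(fed_data=b"..."), tracks=["trk1", "trk2"])
aod = AodEvent(tracks=["trk1", "trk2"])
assert reco.navigate_to_raw().fed_data == b"..."
try:
    aod.navigate_to_reco()
except NavigationError as err:
    print("expected:", err)
```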
CM - Analysis Model (description 2/2)
RAW and RECO analysis
 A significant amount of expert analysis
 Trigger and detector studies (including calibrations, alignments, backgrounds)
 Basis for re-reconstruction (new RECO) and for creating sub-samples (new AOD, TAGs, Event Directories, etc.)
 Dominant at the start of the experiment (2007 and 2008?)
 Mainly at the Tier-1s (but also at some specialized Tier-2s)
RECO and AOD analysis
 Significant physics analysis
 90% of all physics analysis can be carried out from AOD data samples
 Less than 10% of analyses should have to refer to RECO
 Mainly at the Tier-2s (but also at the Tier-3s?)
Event Directories & TAGs analysis
 They are part of the "user skims" (or Derived Physics Data), even though they are produced "officially" when the AOD are created
 They reside at the Tier-2s and Tier-3s
CM - Data and Tier-0 activities
The online streams (RAW) arrive in a 20-day input buffer
 Archived on tape at the Tier-0
First reconstruction → RECO
 Archived on tape at the Tier-0
RAW + RECO distributed to the Tier-1s
 1/NTier1 to each Tier-1
AOD distributed to all the Tier-1s
Re-reconstruction at the Tier-0 (LHC downtime)
 RECO and AOD distributed as above
 Time taken: ~4 months
 The remaining 2 months for the first complete reconstruction of the HI data
 Possibly with the contribution of "some" Tier-2s

CMS Tier-0 resources (with efficiency factors):
 CPU scheduled   4588 kSI2K    (eff. 85%)
 Disk             407 TBytes   (eff. 70%)
 Active tape     3775 TBytes   (eff. 100%)
 Tape I/O         600 MB/s     (eff. 50%)
CMS WAN at CERN = ~2×10 Gbps
Tier-0: resource details

2007 performance estimates:
 PerfCPU (performance per CPU)  = 4 kSI2K
 NCPU (number of CPUs per box)  = 2
 PerfDisk (GB per disk)         = 900 GB

Total T0 tape: 3775 TB
 Raw      = 2250 TB
 HIRaw    =  350 TB
 Calib    =  225 TB
 1stReco  =  375 TB
 2ndReco  =  375 TB
 HIReco   =   50 TB
 1stAOD   =   75 TB
 2ndAOD   =   75 TB

Total T0 CPU: 4588 kSI2K (EffSchCPU = 85%)
 Raw 1stReco  = 3750 kSI2K
 Calib        =  150 kSI2K
 Raw 2ndReco  = included above
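The Tier-0 totals above are just the sums of the component estimates, with the CPU sum inflated by the scheduled-CPU efficiency factor. A minimal check (Python; the numbers come from the slide, the variable names are ours):

```python
# Sketch: recompute the Tier-0 totals from the component estimates.
# Numbers are taken from the slide; variable names are ours.

t0_tape_tb = {
    "Raw": 2250, "HIRaw": 350, "Calib": 225,
    "1stReco": 375, "2ndReco": 375, "HIReco": 50,
    "1stAOD": 75, "2ndAOD": 75,
}

t0_cpu_ksi2k = {"Raw 1stReco": 3750, "Calib": 150}    # Raw 2ndReco included above
EFF_SCH_CPU = 0.85                                     # scheduled-CPU efficiency

total_tape = sum(t0_tape_tb.values())                  # 3775 TB
total_cpu = sum(t0_cpu_ksi2k.values()) / EFF_SCH_CPU   # (3750+150)/0.85 ≈ 4588 kSI2K

print(f"Total T0 tape: {total_tape} TB")
print(f"Total T0 CPU : {total_cpu:.0f} kSI2K")
```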
CM - Data and Tier-1 activities
Receives its share of RAW+RECO (custodial data) + all the AOD from the Tier-0
 The custodial data are also archived on tape
Receives the simulated RAW+RECO+AOD from its Tier-2s
 Also archived on tape
 Distributes the simulated AOD to all the Tier-1s
Sends the agreed RECO+AOD to its Tier-2s
Runs the agreed re-reconstruction on its resident RAW/RECO (real + simulated)
 Sends the new AOD to the other Tier-1s and to the Tier-2s, and receives them from the other Tier-1s
Participates in the "calibrations"
Runs the "large scale Physics Stream skims" of the ~10 "Physics Groups" that use the resident data
 The results are sent to the relevant Tier-2s for analysis
 Full pass over the RECO (data + MC) in ~2 days → each group every ~3 weeks
Supports a "limited" (interactive and batch) user access

CMS Tier-1 resources (with efficiency factors):
 CPU scheduled            1199 kSI2K   (eff. 85%)
 CPU analysis              929 kSI2K   (eff. 75%)
 Disk                     1121 TBytes  (eff. 70%)
 Active tape              1837 TBytes  (eff. 100%)
 Data serving I/O rate     800 MB/s
CMS WAN at each Tier-1 = ~10 Gbps
Tier-1: resource details

Total T1 disk: 1121 TB (EffDisk = 70%)
 Raw data                            = 375 TB
 1stReco (current version)           =  63 TB
 2ndReco (old version, 10% on disk)  =  13 TB
 2ndReco Sim (old version, 10%)      =  17 TB
 Simulated Raw (10% on disk)         =  43 TB
 Simulated Reco (10% on disk)        =   9 TB
 1stAOD (data & sim, current vers.)  = 150 TB
 2ndAOD (old version, 10% on disk)   =  30 TB
 Calibration data                    =  38 TB
 HIReco (10% on disk)                =   6 TB
 Analysis Group space                =  43 TB

Data I/O rate ≈ 800 MB/s
 = (full local Sim+Data RECO sample size on tape) / (two days)

Total T1 CPU: 2128 kSI2K [CPU scheduled + CPU analysis]
(EffSchCPU = 85%, EffAnalCPU = 75%)
 Re-Reco data    = 510 kSI2K
 Re-Reco sim     = 510 kSI2K
 Calibration     =  25 kSI2K
 Analysis skims  = 672 kSI2K   (two days per group per Tier-1)
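The ~800 MB/s figure is simply the rate needed to stream the locally resident RECO sample (real + simulated) through a full skim pass in two days, as per the formula above. A minimal sketch of that arithmetic (Python; the ~140 TB sample size used in the example is our own rough assumption, not a number from the slide):

```python
# Sketch: Tier-1 data-serving rate needed to pass the locally resident
# RECO sample (real + simulated) through a full skim in two days.
# The sample size below is our own rough assumption, not a slide number.

TWO_DAYS_S = 2 * 24 * 3600                    # skim pass duration in seconds

def serving_rate_mb_s(sample_size_tb):
    """MB/s needed to read sample_size_tb within two days."""
    return sample_size_tb * 1e6 / TWO_DAYS_S  # 1 TB = 1e6 MB

# e.g. a ~140 TB resident RECO (data+MC) sample gives roughly the quoted rate
print(f"{serving_rate_mb_s(140):.0f} MB/s")   # ~810 MB/s
```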
CM - Data and Tier-2 activities
Serves the analysis of the "local Groups"
 20-50 users, 1-3 groups?
 "Local" not necessarily in the geographical sense
 Provides the storage support for the "local Groups" and for private simulations
 Every 2 days each user analyses 1/10 of the resident AOD and 1/10 of the resident RECO
 Local software development
 User access to the system (User Interface)
Imports the datasets (RECO + AOD + skims) from the Tier-1s
 Once every 3 weeks
Produces and exports the simulation data (sim-RAW + RECO + AOD)
 Not a local responsibility: carried out centrally via GRID
Can analyse and produce the "calibrations"
 Of interest to / responsibility of the community using the Tier-2
Can participate in the reconstruction and analysis of the HI data

CMS Tier-2 resources (with efficiency factors):
 CPU scheduled   250 kSI2K   (eff. 85%)
 CPU analysis    579 kSI2K   (eff. 75%)
 Disk            218 TBytes  (eff. 70%)
CMS WAN at each Tier-2 = ~1 Gbps
CM - Computing Summary
Costs evaluated with "CERN criteria" and projected to 2007
CERN investment: Tier-0 + one Tier-1 + one Tier-2
CMS Italia: Tier-1 (2.9 M€) + 6 Tier-2 (0.6×6 = 3.6 M€) = ~6.5 M€
Annual expenditures (MCHF): 2008: 20; 2009: 20; 2010: 20
CMS Italia: ~1.9 M€/year
CM - Open Issues
Software & framework
 Not included in the Computing Model; it will have to be in the C-TDR
Tools and services at the various Tiers
 Not included in the CM; it will have to be in the C-TDR (and LCG-TDR)
Software (and middleware) development
 Not included in the CM; it will have to be in the C-TDR (and LCG-TDR)
Location and implementation of the needed services
 Not included in the CM; it will have to be in the C-TDR (and LCG-TDR) + MoUs
Service-level agreements at the Tiers
 Not included in the CM; it will have to be in the C-TDR (and LCG-TDR) + MoUs
Personnel
 Not included in the CM; it will have to be in the C-TDR (and LCG-TDR) + MoUs
Role and size of the "Physics Groups"
 Not in the proposed model, at least not explicitly (C-TDR?)
Data flow "between" Tier-1s and "between" Tier-2s
 Not in the proposed model; it will have to be in the C-TDR
The "non-event" data are only marginally addressed…
The data distribution could be different at the start (2007/8)
Etc.
Comments and outlook
Current Tier-1 candidates for CMS:
 {CERN}, USA (FNAL), Italy (CNAF), France (Lyon), Germany (FZK), UK (RAL), Spain (PIC), Taipei, [Russia?]
 Expected contribution shares:
 {CERN 8%}, USA 36%, Italy 20%, France 7%, Germany 6%, UK 5%, Spain 3%, Taipei 1%, [Russia? 14%]
Current Tier-2 candidates:
 USA, ~7 universities + the LHC Physics Center at FNAL
 INFN, 6 sections (±1)
 IN2P3, none?
 DDF, ?
 UK, 3-4 sites?
 Spain, 2-3 sites?
 Others?
The CM that CMS Italia has always proposed is not dissimilar from this one
 A bit more weight on the Tier-2/3s and on Grid: through locally committed human resources and investment in hardware/infrastructure/organization
Test and verification of the CM: not only through the P-TDR!
 Distributed analysis (via LCG) for the P-TDR already now
 LCG service challenges within 2005
 Software development activities and long-term commitments
Timeline
February, CMS:   ex-DC04 DST production completed
April, RRB:      Approval of the MoUs: LCG (phase 2) and experiments
June, CMS:       Submission of the C-TDR (and LCG-TDR) to the LHCC
June?, CMS:      Working prototype of DST analysis
Autumn?, CSN1:   Discussion of the C-TDRs
October, RRB:    More MoUs?
December, CMS:   Submission of the P-TDR
In the meantime the hardware and software infrastructure for production and analysis must be put in place (also through work-around solutions)
C-TDR Status: Schedule
[Gantt-style calendar, Dec 2004 - Jun 2005, mapping the weekly CMS meetings (CMS Weeks, TCMs, SCs, FBs, referee meetings, RRB, CPT week, subdetector weeks, Run Meetings, CMS Annual Review) onto the C-TDR drafting milestones:
 Draft 0: Basic Computing Model, sent to the LHCC; CMS approval of the CM document for the LHCC review; LHCC review of the basic Computing Model / Resources
 Draft 1: Complete outline / authors; C-TDR mini-Workshop #1
 Draft 2: First complete (rough) draft; C-TDR mini-Workshop #2; CMS C-TDR review, part 1: Physics Model (requirements)
 Draft 3: Complete but not polished; CMS C-TDR review, part 2: Computing and Software; CMS management option: keep to the submission schedule or delay?; CMS approval of M&O manpower for the RRB; CMS C-TDR review, part 3: costs, management plan, milestones…
 Draft 4: Final version; editorial work, technical and cost updates; CMS management option: is the C-TDR ready to request approval?; CMS approval of the CMS (& LCG) TDRs and submission to the LHCC (20 Jun)]
Back-up slides
Cost Evolution
This plan is for the "2008" run (first major run)
 Systems must be ramped up in 2006 and 2007
 Established centers (CERN, FNAL, Lyon, RAL) could ramp up latish
 New centers have to ramp up manpower as well, and must not leave it too late
 Some capacity is required in 2007
Subsequent years
 Operations costs (mostly tape)
 Upgrade/maintenance
 Replace 25% of the "units" each year
 3-4 years maximum lifetime of most components
 Moore's law gives a steady upgrade (see the sketch below)
 During the next year, the previous year's data becomes (over the year) mostly staged in rather than resident on disk
Luminosity upgrades need more CPU and more disk
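A minimal sketch of the rolling-replacement policy described above (Python; the starting fleet size, the yearly price/performance gain and the horizon are illustrative assumptions, not figures from the slide): replacing 25% of the boxes each year with new units that benefit from a Moore's-law style improvement keeps every box within a 4-year lifetime and grows the total capacity steadily.

```python
# Sketch: rolling replacement of 25% of the "units" per year, where each new
# unit benefits from a Moore's-law style performance gain. Starting fleet,
# gain per year and horizon are illustrative assumptions, not slide numbers.

START_UNITS = 1000          # boxes installed in year 0 (assumption)
UNIT_PERF0 = 1.0            # relative performance of a year-0 box
PERF_GAIN_PER_YEAR = 1.5    # assumed price/performance improvement factor
REPLACE_FRACTION = 0.25     # 25% of units replaced every year -> 4-year lifetime

def simulate(years):
    # fleet maps purchase year -> number of boxes of that vintage
    fleet = {0: START_UNITS}
    for year in range(1, years + 1):
        to_replace = int(START_UNITS * REPLACE_FRACTION)
        oldest = min(fleet)                     # retire the oldest vintage first
        fleet[oldest] -= to_replace
        if fleet[oldest] <= 0:
            del fleet[oldest]
        fleet[year] = to_replace                # buy the same number of new boxes
        capacity = sum(n * UNIT_PERF0 * PERF_GAIN_PER_YEAR ** y
                       for y, n in fleet.items())
        print(f"year {year}: capacity {capacity:,.0f} (relative units)")

simulate(6)
```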
Tier-2: resource details

Total T2 disk: 218 TB (EffDisk = 70%)
 1stReco (current version, ~5 primary datasets)      = 19 TB
 1stReco Sim (current version, ~5 primary datasets)  = 19 TB
 1stAOD (data & sim, current version)                = 15 TB
 Analysis Group space                                = 40 TB
 Local private simulation data                       = 60 TB

Total T2 CPU: 829 kSI2K [CPU scheduled + CPU analysis]
(EffSchCPU = 85%, EffAnalCPU = 75%)
 Simul          = 128 kSI2K
 Reco Sim       =  71 kSI2K
 HI Reco        =  38 kSI2K
 AOD analyses   = 217 kSI2K
 Reco analyses  = 217 kSI2K
(Analysis sizing: each group in twenty days; all local data in twenty days.)
C-TDR Working Groups (draft)
1. Physics input: Analysis, Data, Event Models
  Event / Data model, streams, data flow, processing, calibration, …
2. Computing Model: Key Features and Top-Level Architecture
  Analysis model, groups, users …
  Role of LCG and Grid components
3. Core Applications Software and Environment
  Architecture of the software, software principles and development process
  Development environment and tools
  Applications framework, persistency, metadata…
  Toolkits: utilities, plug-ins, math libs, graphics, technology choices…
4. Computing Services and System Operations
  Tier-0, Tier-1s, Tier-2s, local systems, networks
  (Multiple) Grids – expectations (e.g. LCG), fallback solutions
  Data management and database systems
  Distributed (job) processing systems
  Final data challenge ("Computing Ready for Real Data")
5. Project Management and Resources
  Size and costs: CPU, disk, tape, network, services, people
  Proposed computing organisation, plans, milestones
  Human resources and communications
  Risk management
(A first-order iteration was done with the "Computing Model" paper.)
Italian conveners???? And contributors? An Italian task force!
Grids
(From the LHCC Review of Computing Resources for the LHC experiments, D. Stickland, Jan 2005)
 We expect, at least initially, to manage data location by CMS decisions and tools
– CMS physicists (services) can determine where to run their jobs
• Minimize requirements on "Resource Brokers"
– CMS with the Tier centers manages local file catalogs
– Minimize requirements for global file catalogs
• Except at the dataset level
 "Local" users or a "limited set of users" submit jobs on a given Tier-2
– Tier-2s don't have to publish globally what data they have, or be open to a wide range of CMS physicists
– But simulation production runs there using grid tools
 For major selection and processing, (most) physicists use a GRID UI to submit jobs to the T1 centers
– Maybe from their institute or Tier-2
– Some local users at the Tier-1s also use the local batch systems
Computing at CERN
(From the LHCC Review of Computing Resources for the LHC experiments, D. Stickland, Jan 2005)
 The online computing at CMS Cessy
 The CMS Tier-0 for primary reconstruction
 A CMS Tier-1 center
– Making use of the Tier-1 archive, but requiring its own drives/stage pools
• Thus it can be cheaper than an offsite Tier-1
 CMS Tier-2 capacity for CERN-based analysis
– Estimated need: the equivalent of 2-3 canonical CMS T2 centers at CERN
 The CMS CERN Tier-1 and Tier-2 centers can share some resources for economy and performance, and provide a very important analysis activity also at CERN
– Have not studied this optimization yet
Tier-3s
Page 37 (Specifications, overview)
 Tier-3 centres are modest facilities at institutes for local use. Such computing is not generally available for any coordinated CMS use but is valuable for local physicists. We do not attempt at this time to describe the uses or responsibilities of Tier-3 computing. We nevertheless expect that significant, albeit difficult to predict, resources may be available via this route to CMS.
Page 50 (Tier-2 roles)
 All Monte Carlo production is carried out at Tier-2 (and Tier-3)
Grids (references in the Computing Model paper)
Page 3 (Executive Summary)
 …GRID Middleware and Infrastructure must make it possible…via GRID middleware…designs of GRID middleware…in local GRID implementations…
Page 24 (RAW event rates)
 …a figure that could be reasonably accommodated by the computing systems that are currently being planned in the context of the LHC Computing Grid (LCG).
Page 32 (Analysis Model)
 …significant changes if/as we become convinced that new methods of, for example, Grid-based analysis are ready for full-scale deployment.
Page 35 (Middleware and software)
 …we do not describe the Grid middleware nor the applications software in detail in this document. … (then a full page on Grid).
 Requirement 33: Multiple GRID implementations are assumed to be a fact of life. They must be supported in a way that renders the details largely invisible to CMS physicists.
 Requirement 34: The GRID implementations should support the movement of jobs and their execution at sites hosting the data, …
Page 36 (Specifications of the CM, overview)
 We expect this ensemble of resources to form the LHC Computing Grid. We use the term LCG to define the full computing available to the LHC (CMS) rather than to describe one specific middleware implementation and/or one specific deployed GRID. We expect to actually operate in a heterogeneous GRID environment but we require the details of local GRID implementations to be largely invisible to CMS physicists (these are described elsewhere, e.g.: LCG-2 Operations [9]; Grid-3 Operations [10]; EGEE [11]; NorduGrid [12]; Open Science Grid [13]). … (then a couple of sentences about Grid in CMS).
Page 45 (Tier-1 reprocessing)
 … we believe this would place unnecessarily high demands on the Grid infrastructure in the early days …
Page 50 (Tier-2 data processing)
 …the ability to submit jobs locally directly or via Grid interfaces, and the ability to submit (Grid) jobs to run at Tier-1 centres …