ATLAS Italia Computing
Overview of ATLAS software and computing
Computing done and planned - INFN share
Milestones report
24-9-2003
L.Perini-CNS1@Lecce
Talk outline
• ATLAS Computing organization: areas being reworked or newly created
• Status and development of software and tools
  – Simulation, Reconstruction, Production Environment
• Data Challenges
  – Computing done in Italy (DC1 etc.) and planned (DC2 etc.)
  – Placed in the context of global ATLAS
• Milestones 2003: report
• Milestones 2004: proposal
LHCC Review of Computing Manpower - 2 Sep. 2003
Computing Organization
• The ATLAS Computing Organization was revised at the beginning of 2003 in order to adapt it to current needs
• Basic principles:
  – Management Team consisting of:
    • Computing Coordinator (Dario Barberis)
    • Software Project Leader (David Quarrie)
  – Small(er) executive bodies
  – Shorter, but more frequent, meetings
  – Good information flow, both horizontal and vertical
  – Interactions at all levels with the LCG project
• The new structure is now in place and working well
• A couple of areas still need some thought (this month)
Dario Barberis: ATLAS Organization
New computing organization
[Organization chart not reproduced]
• Internal organization of some areas being defined this month
Main boards in computing organization
• Computing Management Board (CMB):
• Computing Coordinator (chair)
• Software Project Leader
• TDAQ Liaison
• Physics Coordinator
• International Computing Board Chair
• GRID, Data Challenge & Operations Coordinator
• Planning & Resources Coordinator
• Data Management Coordinator
– Responsibilities: coordinate and manage computing activities. Set
priorities and take executive decisions.
– Meetings: bi-weekly.
Main boards in computing organization
• Software Project Management Board (SPMB):
• Software Project Leader (chair)
• Computing Coordinator (ex officio)
• Simulation Coordinator
• Event Selection, Reconstruction & Analysis Tools Coordinator
• Core Services Coordinator
• Software Infrastructure Team Coordinator
• LCG Applications Liaison
• Calibration/Alignment Coordinator
• Sub-detector Software Coordinators
• Physics Liaison
• TDAQ Software Liaison
– Responsibilities: coordinate the coherent development of software
(both infrastructure and applications).
– Meetings: bi-weekly.
Main boards in computing organization
• ATLAS-LCG Team:
  – Includes all ATLAS representatives in the many LCG committees. Presently 9 people:
    • SC2: Dario Barberis (Computing Coordinator), Daniel Froidevaux (from Physics Coordination)
    • PEB: Gilbert Poulard (DC Coordinator)
    • GDB: Dario Barberis (Computing Coordinator), Gilbert Poulard (DC Coordinator), Laura Perini (Grid Coordinator)
    • GAG: Laura Perini (Grid Coordinator), Craig Tull (Framework-Grid integr.)
    • AF: David Quarrie (Chief Architect & SPL)
    • POB: Peter Jenni (Spokesperson), Torsten Åkesson (Deputy Spokesperson)
    • LHC4: Peter Jenni (Spokesperson), Torsten Åkesson (Deputy Spokesperson), Dario Barberis (Computing Coordinator), Roger Jones (ICB Chair)
  – Responsibilities: coordinate the ATLAS-LCG interactions, improve information flow between “software development”, “computing organization” and “management”. Meetings: weekly.
Organization: work in progress (1)
• Data Challenge, Grid and Operations
  – terms of office of key people coming to an end ~now
  – DC1 operation finished, we need to put in place an effective organization for DC2
  – Grid projects moving from the R&D phase to implementation and eventually production systems
  – we are discussing how to coordinate at high level all activities:
    • Data Challenge organization and executions
    • "Continuous" productions for physics and detector performance studies
    • Contacts with Grid middleware providers
    • Grid Application Interfaces
    • Grid Distributed Analysis
  – we plan to put a new organization in place by September 2003, before the start of DC2 operations
Organization: work in progress (2)
• Event Selection, Reconstruction and Analysis Tools
  – here we aim to achieve a closer integration of the people working on:
    • high-level trigger algorithms
    • detector reconstruction
    • combined reconstruction
    • event data model
    • software tools for analysis
  – "effective" integration in this area was already achieved with the HLT TDR work; now we have to set up a structure to maintain constant contacts and information flow
  – the organization of this area will have to be agreed with the TDAQ and Physics Coordinators (discussions on-going)
  – most of the people involved will have dual reporting lines (same as for detector software people)
  – we plan to put the new organization in place by the September 2003 ATLAS Week
Computing Model Working Group (1)
• Work on the Computing Model was done in several different contexts:
• online to offline data flow
• world-wide distributed reconstruction and analysis
• computing resource estimations
• Time has come to bring all these inputs together coherently
• A small group of people has been put together to start collecting all existing
information and defining further work in view of the Computing TDR, with the
following backgrounds:
• Resources
• Networks
• Data Management
• Grid applications
• Computing farms
• Distributed physics analysis
• Distributed productions
• Alignment and Calibration procedures
• Data Challenges and tests of computing model
Computing Model Working Group (2)
• This group will:
• first assemble existing information and digest it
• act as contact point for input into the Computing Model from all ATLAS
members
• prepare a “running” Computing Model document with up-to-date
information to be used for resource bids etc.
• prepare the Computing Model Report for the LHCC/LCG by end 2004
• contribute the Computing Model section of the Computing TDR (mid-2005)
• The goal is to come up with a coherent model for:
• physical hardware configuration
• e.g. how much disk should be located at experiment hall between the
Event Filter & Prompt Reconstruction Farm
• data flows
• processing stages
• latencies
• resources needed at CERN and in Tier-1 and Tier-2 facilities
Computing Model Working Group (3)
• Group composition:
• Roger Jones (ICB chair, Resources), chairman
• Bob Dobinson (Networks)
• David Malon (Data Management)
• Torre Wenaus (Grid applications)
• Sverre Jarp (Computing farms)
• Paula Eerola (Distributed physics analysis)
• XXX (Distributed productions)
• Richard Hawkings (Alignment and Calibration procedures)
• Gilbert Poulard (Data Challenges and Computing Model tests)
• Dario Barberis & David Quarrie (Computing management, ex officio)
• First report expected in October 2003
• Tests of the Computing Model will be the main part of DC2 operation (2Q 2004)
Simulation in ATLAS (A.Rimoldi)
• Demanding environment
  – People vs things
  – The biggest collaboration ever gathered in HEP
  – The most complete and challenging physics ever handled
• The present simulation in pills:
  – Fast simulation: Atlfast
  – Detailed simulation in Geant3
    • In production for 10 years, but frozen since 1995 and used for the DC productions until now
  – Detailed simulation in Geant4
    • Growing up (and evolving fast) from the subdetector side
    • Detailed test-beam studies (the test beam treated as an 'old times' experiment), for all the technologies represented
    • Physics studies extensively addressed since 2001, for validation purposes
  – Under development:
    • Fast/semi-fast simulation, shower parameterizations
    • Staged detector environment for early studies
    • Optimizations, FLUKA integration…
DC2
• Different concepts about DC2 from the different domains…
• For the Geant4 simulation people the DC2 target is a way to state that:
  – Geant4 is the main simulation engine for ATLAS from now on
  – We have concluded a first physics-validation cycle and found that Geant4 is now better than, or at least comparable to, Geant3
  – We have written enough C++ code to say that the geometry description of ATLAS is at the same level of detail as the one in Geant3
• The application must still be optimized from the point of view of:
  – Memory usage at run time
  – Performance (CPU)
  – Application robustness
DC2 is close
• We have a functional simulation program based on Geant4 available now for the complete detector
  – detector components already collected
  – shifting emphasis from subdetector physics simulations to ATLAS physics simulations after three years of physics validation
• Studies under way:
  – Memory usage minimization
  – Performance optimization
  – Initialization time monitoring/minimization
  – Calorimeter parameterization
  – A new approach to the detector description through the GeoModel
• We are fully integrated within the Athena framework
Complete Simulation Chain
• Events can be generated online or read in
• The geometry layout can be chosen
• Hits are defined for all detectors
• Hits can now be written out (and read back in) together with the HepMC information
• Digitization is being worked out right now
• The pile-up strategy is to be developed in the near future
The plan (for Geant4) at short term
• 1.2.7.1.1.1.2.1   geometry of all subdetectors
  – 1.2.7.1.1.1.2.1.1   shieldings in place (2 weeks, Oct-Nov 03)
  – 1.2.7.1.1.1.2.1.2   cables & services (4 weeks, Oct-Dec 03)
• 1.2.7.1.1.1.2.2   performance tests at different conditions (1 week, Jul 03-Feb 04)
• 1.2.7.1.1.1.2.3   robustness tests for selected event samples (2 weeks, Aug 03-Feb 04)
• 1.2.7.1.1.1.2.4   robustness tests for selected regions
  – 1.2.7.1.1.1.2.4.1   barrel (2 weeks, Sep-Dec 03)
  – 1.2.7.1.1.1.2.4.2   endcap (2 weeks, Sep-Dec 03)
  – 1.2.7.1.1.1.2.4.3   transition region (2 weeks, Sep-Dec 03)
• 1.2.7.1.1.1.2.5   hits for all subdetectors (check and test) (2 weeks, Sep-Dec 03)
• 1.2.7.1.1.1.2.6   persistency
  – 1.2.7.1.1.1.2.6.1   performance tests for all the detector components (1 week, Sep-Nov 03)
  – 1.2.7.1.1.1.2.6.2   performance tests vs. different conditions (2 weeks, Sep-Nov 03)
  – 1.2.7.1.1.1.2.6.3   robustness tests for all the detector components (2 weeks, Sep-Dec 03)
• 1.2.7.1.1.1.2.7   package restructuring for inconsistency with old structures (3 weeks, Oct-Dec 03)
• 1.2.7.1.1.1.2.8   cleaning of the packages area (to attic) (1 week, Nov 03)
• 1.2.7.1.1.1.2.9   revising writing rights (obsolete, new) (1 week, Nov 03)
• 1.2.7.1.1.1.2.10  documentation (4 weeks, Sep-Dec 03)
Emphasis on: tests at different conditions; refinement of the geometry (missing pieces, combined test-beam setup); persistency of hits & digits; pile-up; all in view of DC2.
Early tests starting from September with single-particle beams, in order to evaluate the global performance well before the DC2 startup.
Reconstruction: algorithms in Athena
• Two pattern-recognition algorithms are available for the Inner Detector
  – iPatRec and xKalman
• Two different packages are used to reconstruct tracks in the Muon Spectrometer
  – MuonBox and MOORE
• The initial reconstruction of cell energy is done separately in LAr and TileCal. After that, all reconstruction algorithms see no difference between LArCell and TileCell and use generic CaloCells as input (see the sketch below)
• Jet reconstruction, missing ET
• Several algorithms combine information from the tracking detectors and the calorimeters in order to achieve good rejection factors or identification efficiency
  – e/γ identification, e/π rejection, τ identification, μ back-tracking to the Inner Detector through the calorimeters, …
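To make the generic-CaloCell point concrete, here is a minimal sketch in Python (the real event data model is C++ inside Athena); CaloCell, LArCell, TileCell and simple_cone_jets are illustrative names and a toy algorithm, not the actual ATLAS interfaces.

```python
# Illustrative sketch only: all class and function names are hypothetical.
from dataclasses import dataclass


@dataclass
class CaloCell:
    """Generic calorimeter cell: what downstream reconstruction sees."""
    eta: float
    phi: float
    energy: float  # GeV


class LArCell(CaloCell):
    """Cell reconstructed from the liquid-argon calorimeter."""


class TileCell(CaloCell):
    """Cell reconstructed from the Tile hadronic calorimeter."""


def simple_cone_jets(cells, seed_threshold=5.0, cone=0.4):
    """Toy cone clustering that only uses the generic CaloCell interface."""
    seeds = [c for c in cells if c.energy > seed_threshold]
    jets = []
    for seed in seeds:
        in_cone = [c for c in cells
                   if abs(c.eta - seed.eta) < cone and abs(c.phi - seed.phi) < cone]
        jets.append(sum(c.energy for c in in_cone))
    return jets


# The jet finder neither knows nor cares which concrete cell type it gets:
cells = [LArCell(0.1, 0.2, 12.0), TileCell(0.15, 0.25, 7.5), LArCell(1.8, 2.9, 3.0)]
print(simple_cone_jets(cells))
```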
Atlas week, Sep 2003, Prague
Alexander Solodkov
High Level Trigger algorithm strategy
• Offline model: the Event Loop Manager directs an Algorithm:
  – "Here is an event, see what you can do with it"
• High Level Trigger model: the Steering directs an Algorithm (a sketch of this seeded calling pattern follows the table below):
  – "Here is a seed. Access only relevant event data."
  – Only validate a given hypothesis
  – You may be called multiple times for this one event!
  – Do it all within the LVL2 [EF] latency of O(10 ms) [O(1 s)]

ISSUES                                  | LEVEL 2                           | EVENT FILTER
Data Access                             | Restricted to Regions-of-Interest | Full access to event if necessary
Performance                             | Fast and rough treatment          | Slow and refined approaches
Calibration & Alignment Database Access | No event-to-event access          | Possible event-to-event
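The difference between the two calling patterns can be sketched as follows; this is illustrative pseudocode made runnable in Python, with invented names (OfflineAlg, HLTHypoAlg, RoI), not the real Athena/HLT steering code.

```python
# Hypothetical sketch of the two calling patterns; not real Athena/HLT code.
from dataclasses import dataclass


@dataclass
class RoI:
    """A Region-of-Interest seed handed to an HLT algorithm."""
    eta: float
    phi: float


class OfflineAlg:
    def execute(self, event):
        # Offline: "here is an event, see what you can do with it."
        return [obj for obj in event["all_data"] if obj > 10.0]


class HLTHypoAlg:
    def execute(self, roi, event):
        # HLT: given a seed, access only the data in the RoI and answer a
        # single yes/no hypothesis; may be called many times per event.
        data_in_roi = event["by_region"].get((round(roi.eta, 1), round(roi.phi, 1)), [])
        return any(obj > 10.0 for obj in data_in_roi)


event = {"all_data": [3.0, 12.5, 40.0],
         "by_region": {(0.1, 0.2): [12.5], (1.0, -0.5): [3.0]}}
print(OfflineAlg().execute(event))                    # full-event processing
for seed in [RoI(0.1, 0.2), RoI(1.0, -0.5)]:          # seeded, per-RoI processing
    print(HLTHypoAlg().execute(seed, event))
```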
New test beam reconstruction in Athena
• The Inner Detector (Pixel, SCT), the calorimeter (TileCal) and the whole Muon System are using the latest TDAQ software at the test beam
  – ByteStream files are produced by the DataFlow libraries
  – The format of the ROD fragment in the output ByteStream file is very close to the one used for HLT performance studies
• ByteStream with test beam data is available in Athena now
  – ByteStreamCnvSvc has been able to read the test beam ByteStream since July 2003
  – ROD data decoding is implemented in the same way as in the HLT
  – Converters for MDT and RPC (July 2003) and TileCal (September 2003)
  – The converters are filling the new Muon/TileCal EDM
  – The RDO => RIO conversion already available in Athena is reused at no cost
• Reconstruction of Muon TB data is possible in Athena
  – Muon reconstruction is done by the MOORE package
  – Ntuples are produced for the analysis
• Combined test beam (8-13 Sep 2003)
  – Both MDT and TileCal data are reconstructed in Athena
MOORE MDT segment reconstruction (test beam data)
[Plots not reproduced: MDT segment reconstruction with a 180 GeV beam, showing chamber misalignments and the barrel sagitta]
• Comparisons with Muonbox are possible
• For the full 3-D reconstruction the standard MOORE ntuple can be used
Reconstruction Task Force
• Who
  – Véronique Boisvert, Paolo Calafiura, Simon George (chair), Giacomo Polesello, Srini Rajagopalan, David Rousseau
• Mandate
  – Formed in Feb 03 to perform a high-level re-design and decomposition of the reconstruction and event data model
  – Cover everything between raw data and analysis
  – Look for common solutions for HLT and offline
• Deliverables
  – Interim reports published in April and May
    • Significant constructive feedback
  – Final report any day now
• Interaction
  – Several well-attended open meetings to kick off and present reports
  – Meetings focused on specific design issues to get input and feedback
  – Feedback incorporated into the second interim report
RTF recommendations
• Very brief overview… please read the report
• Modularity, granularity, baseline reconstruction
• Reconstruction top-down design (dataflow)
  – Domains: sub-systems, combined reconstruction and analysis preparation
  – Analysis of algorithmic components, identified common tools
  – Integration of fast simulation
  – Steering
• EDM
  – Common interfaces between algorithms
    • e.g. common classes for tracking subsystems
  – Design patterns to give uniformity to data classes in the combined reconstruction domain
  – Approach to units and transformations
  – Separation of event and non-event data
  – Navigation
Reconstruction Summary
• A complete spectrum of reconstruction algorithms is available in the Athena framework
  – They are used both for HLT and offline reconstruction
  – The same algorithms are being tried for test beam analysis
• Ongoing developments:
  – Cleaner modularization (toolbox)
  – Robustness (noisy/dead channels, misalignments)
  – Extend the reach of the algorithms (e.g. low pT, very high pT)
  – New algorithms
• Implementation of the RTF recommendations in the next releases will greatly improve the quality of the reconstruction software
• Next challenge: summer 2004, a complete ATLAS barrel wedge in the test beam. Reconstruction and analysis using (almost) only ATLAS offline reconstruction.
Development of the new ATLAS Production environment
• Several tools developed so far
  – Especially in the US Grid context
• Productions carried out with different tools in different places
  – A lot of manpower used, little automation, checks and corrections done a posteriori
• Decision to develop a new, coherent system; slides by Alessandro De Salvo follow
  – Meetings in July-August, final restricted one on 12-8 with De Salvo for INFN: system architecture (with reuse), sharing between CERN (+Nordics), INFN and the US
  – For INFN, participation from Milano-CNAF (2 people from EDT, Guido), Napoli (2 people), Roma1 (Alessandro)
Atlas Production System
• Design of an automatic production system to be deployed ATLAS-wide on the time scale of DC2 (spring 2004)
  – Automatic
  – Robust
  – Support for several flavours of GRID and legacy resources
    • LCG
    • US-GRID
    • NG
    • Local batch queues
• Components
  – Production DB
  – Supervisor/Executors (master-slave system; see the sketch below)
  – Data Management System (to be finalized)
  – Production Tools
• To be defined
  – Security & Authorization
  – Continuous parallel QA system
  – Monitoring tools
  – Exact schema of the Production DB
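As a rough illustration of the supervisor/executor (master-slave) idea, here is a hedged Python sketch assuming a hypothetical job table and executor interface; none of the names correspond to the real implementation, whose design was still in progress at this point.

```python
# Hypothetical sketch of the supervisor/executor pattern; all names are invented
# for illustration and do not reflect the real ATLAS production system code.

class Executor:
    """One executor per resource flavour (LCG, US-GRID, NG, local batch)."""
    def __init__(self, flavour):
        self.flavour = flavour

    def submit(self, job):
        # A real executor would translate the job into its resource's dialect
        # (e.g. a grid job description or a local batch submission).
        print(f"[{self.flavour}] submitting job {job['id']}: {job['transformation']}")
        return "done"   # pretend the job succeeded


class Supervisor:
    """Pulls unprocessed jobs from the production DB and feeds its executor."""
    def __init__(self, prod_db, executor):
        self.prod_db = prod_db
        self.executor = executor

    def run_once(self):
        for job in self.prod_db:
            if job["status"] == "defined":
                job["status"] = self.executor.submit(job)


# Toy "production DB": in reality a relational DB with tasks, jobs, partitions.
prod_db = [
    {"id": 1, "transformation": "dc2.g4sim", "status": "defined"},
    {"id": 2, "transformation": "dc2.digit", "status": "defined"},
]
Supervisor(prod_db, Executor("LCG")).run_once()
print(prod_db)
```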
Atlas Production System details (II)
[Schematic of the production-system data and control flow (Luc Goossens); not reproduced. The recoverable content is summarised below.]
• Data model: Task = [job]*, Dataset = [partition]* (see the data-model sketch below)
  – A task (dataset) carries a transformation definition plus a physics signature and a task-level location hint; defining a task requires human intervention
  – A job (partition) carries the transformation definition (executable name, release version, signature), the run info and a job-level location hint
  – Job descriptions live in the production DB; the Data Management System keeps track of the data
• Supervisors (instances 1-4; Kaushik De) pull jobs from the production DB and hand them to executors, one per resource flavour:
  – US Grid executor (Rob Gardner), interfaced through Chimera
  – LCG executor (Alessandro De Salvo), interfaced through the Resource Broker (RB)
  – NG executor (Oxana Smirnova), interfaced through the Resource Broker (RB)
  – Local batch executor (Luc Goossens), interfaced through LSF
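A minimal sketch of the bookkeeping implied by Task = [job]* and Dataset = [partition]*, written as hypothetical Python dataclasses; the real production DB is a relational schema that was still to be defined at this time, and every field name and value below is illustrative.

```python
# Hypothetical data-model sketch; class and field names are illustrative only.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Partition:
    """One output file of a dataset, produced by exactly one job."""
    lfn: str                 # logical file name registered in the DMS
    events: int


@dataclass
class Job:
    """One unit of work, running a transformation on one partition's input."""
    executable: str          # e.g. a simulation transformation
    release: str             # ATLAS software release version
    signature: str           # parameters of the transformation
    location_hint: str = ""  # where the input data already sit


@dataclass
class Dataset:
    """Dataset = [partition]*"""
    name: str
    partitions: List[Partition] = field(default_factory=list)


@dataclass
class Task:
    """Task = [job]*  plus a physics signature."""
    name: str
    physics_signature: str
    jobs: List[Job] = field(default_factory=list)


task = Task("dc2.h4mu.g4sim", "H -> 4 mu",
            jobs=[Job("g4sim.trf", "8.0.0", "example-signature", "CERN")])
dataset = Dataset("dc2.h4mu.simul.hits",
                  partitions=[Partition("dc2.h4mu.simul.hits._00001", 1000)])
print(len(task.jobs), len(dataset.partitions))
```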
ATLAS Computing Timeline
2003
• POOL/SEAL release
• ATLAS release 7 (with POOL persistency)
NOW
• LCG-1 deployment
2004
• ATLAS complete Geant4 validation
• ATLAS release 8
• DC2 Phase 1: simulation production
2005
• DC2 Phase 2: intensive reconstruction (the real challenge!)
• Combined test beams (barrel wedge)
• Computing Model paper
2006
• ATLAS Computing TDR and LCG TDR
• DC3: produce data for PRR and test LCG-n
• Computing Memorandum of Understanding
• Physics Readiness Report
2007
• Start commissioning run
• GO!
High-Level Milestones
• 10 Sept. 2003: Software Release 7 (POOL integration)
• 31 Dec. 2003: Geant4 validation for DC2 complete
• 27 Feb. 2004: Software Release 8 (ready for DC2/1)
• 1 April 2004: DC2 Phase 1 starts
• 1 May 2004: Ready for combined test beam
• 1 June 2004: DC2 Phase 2 starts
• 31 Jul. 2004: DC2 ends
• 30 Nov. 2004: Computing Model paper
• 30 June 2005: Computing TDR
• 30 Nov. 2005: Computing MOU
• 30 June 2006: Physics Readiness Report
• 2 October 2006: Ready for Cosmic Ray Run
DC1 and the INFN part
• DC1-1 done in 1.5 months: finished September 02
  – 10^7 events + 3×10^7 single particles
  – 39 sites
  – 30 TB, 500 kSI2k·months
  – About 3000 CPUs used (at the peak)
  – INFN CPUs: 132 = Roma1 46, CNAF 40, Milano 20, Napoli 16, LNF 10 (SI95 = 2×2000 + 800 + 600 = 5400)
  – INFN about 5% of the resources and a 5% share (but INFN = 10% of ATLAS)
• DC1-2 pile-up done in 1 month: finished at the end of 02
  – 1.2 M events from DC1-1
  – 10 TB and 40 kSI2k·months, same sites "proportionally"
  – INFN resources and share as in DC1-1 (by construction)
Reconstruction for the HLT TDR
• Done on 1.3 M events in 15 days, finished in May 2003
  – 10 sites (Tier1s or similar)
  – 30 kSI2k·months
• Perhaps more CPU went into the various tests than into the final production…
• The CNAF fraction was close to 10%
• Repeated in July and early August on 20 CNAF CPUs
  – reconstruction for physics (A0) then continued (see the August CNAF-ATLAS monitoring)
DC2 in Italy
• Starts in April 2004, ends in November
  – The new ATLAS "production environment" will be used
• INFN researchers involved in its development
• The global ATLAS effort for simulation+reconstruction, in SI2k·months, is about twice that of DC1, assuming Geant4 CPU = Geant3 CPU
  – INFN CPU request: from 4× to 6× DC1 (Geant4 uncertainty); see the sketch after this list
• Besides DC2, computing for physics and detectors (as during DC1)
  – See August at Mi, Na, Rm
• DC2 will see the first large-scale, distributed analysis (Tier3)
• The 2004 needs foresee (table of requests to the Referees):
  – 18 kSI95 (5k existing + 13k new) in the Tier2s (disk: 10.5 TB now + 11 TB new); the new part to be brought forward to 2003
    • At Mi-LCG 120 CPUs (70 new = 6k), at Rm 100 (45 new = 4k), at Na (45 new = 4k)
    • LNF starts with 0.2k + 0.6k new and 0.9 TB of disk
  – From 7k to 15k SI95 at the Tier1 (buffer against Geant4 performance)
  – Addition of 1.5k SI95 and disk to the Tier3 system (now only 700 SI95! and about 1 TB across 8 sections)
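A rough reconstruction of where the 4×-6× range may come from, under the assumptions stated on this slide (DC2 about twice DC1 ATLAS-wide, INFN aiming at its nominal ~10% share after delivering ~5% in DC1); the 1.5× Geant4/Geant3 factor used for the upper bound is my own assumption, not a number from the slides.

```python
# Illustrative arithmetic only; the 2x, 5% and 10% figures are from the slides,
# the 1.5x Geant4/Geant3 factor is an assumed upper bound, not an ATLAS number.
dc1_infn_cpu = 1.0                 # take INFN's DC1 CPU usage as the unit
atlas_scale = 2.0                  # DC2 ~ 2x DC1 ATLAS-wide (Geant4 == Geant3 assumed)
share_correction = 0.10 / 0.05     # aim for a 10% share instead of the 5% done in DC1

low = dc1_infn_cpu * atlas_scale * share_correction   # = 4x DC1
high = low * 1.5                                       # = 6x DC1 if Geant4 is ~1.5x slower
print(low, high)                                       # 4.0 6.0
```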
DC2 in Italy
• It is important that the INFN share never again falls below 10%
• It is important to participate with all the local expertise
  – The setting up and the decisions on the computing and analysis model are happening now
• For 2005 the plan is a modest increase over the 2004 requests in the Tier2s, and a doubling of the Tier3 CPU
  – For the Tier2s: 3 kSI95 and 2 TB of disk (nothing at Mi and Rm)
  – For the Tier3s: 2 kSI95 and 3 TB of disk
• Slides by G. Poulard on the DC situation and the global ATLAS planning follow, to illustrate the various points
DC1 in numbers

Process                      | No. of events | CPU time (kSI2k·months) | CPU-days (400 SI2k) | Volume of data (TB)
Simulation, physics events   | 10^7          | 415                     | 30000               | 23
Simulation, single particles | 3×10^7        | 125                     | 9600                | 2
Lumi02 pile-up               | 4×10^6        | 22                      | 1650                | 14
Lumi10 pile-up               | 2.8×10^6      | 78                      | 6000                | 21
Reconstruction               | 4×10^6        | 50                      | 3750                |
Reconstruction + Lvl1/2      | 2.5×10^6      | (84)                    | (6300)              |
Total                        |               | 690 (+84)               | 51000 (+6300)       | 60
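A quick check of the table's units (my own arithmetic, not part of the original slide): a 400 SI2k CPU is 0.4 kSI2k, so kSI2k·months convert to CPU-days of such machines as sketched below.

```python
# Own arithmetic, assuming 30-day months and 400 SI2k (= 0.4 kSI2k) per CPU.
def ksi2k_months_to_cpu_days(ksi2k_months, cpu_ksi2k=0.4, days_per_month=30):
    return ksi2k_months / cpu_ksi2k * days_per_month

print(ksi2k_months_to_cpu_days(415))   # ~31000, matching the ~30000 CPU-days quoted
print(ksi2k_months_to_cpu_days(690))   # ~52000, matching the ~51000 total
```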
ATLAS DC1 Phase 1: July-August 2002
• 3200 CPUs, 110 kSI95, 71000 CPU-days
• 39 institutes in 18 countries; grid tools used at 11 sites
• 5×10^7 events generated, 1×10^7 events simulated, 3×10^7 single particles
• 30 TB, 35 000 files
[Pie chart not reproduced: contribution to the overall CPU-time (%) per country; the 16 contributors are Australia, Austria, Canada, CERN, Czech Republic, France, Germany, Israel, Italy, Japan, the Nordic countries, Russia, Spain, Taiwan, the UK and the USA]
Primary data (in 8 sites)
• Pile-up:
  – Low luminosity: ~4×10^6 events (~4×10^3 NCU-days)
  – High luminosity: ~3×10^6 events (~12×10^3 NCU-days)
• Total amount of primary data: 59.1 TB
  – Simulation: 23.7 TB (40%)
  – Pile-up: 35.4 TB (60%), of which Lumi02 14.5 TB and Lumi10 20.9 TB
• Primary data per site (TB): Alberta 3.6, BNL 12.1, CNAF 3.6, Lyon 17.9, FZK 2.2, Oslo 2.6, RAL 2.3, CERN 14.7
• Data replication using Grid tools (Magda)
DC2 resources (based on Geant3 numbers)

Process                  | No. of events | Time span (months) | CPU power (kSI2k) | CPU time (kSI2k·months) | Volume of data (TB) | At CERN (TB) | Off site (TB)
Simulation               | 10^7          | 2                  | 260               | 520                     | 24                  | 8            | 16
Pile-up (*) Digitization | 10^7          | 2                  | 175               | 350                     | (75)                | (25)         | (50)
Byte-stream              | 10^7          | 2                  |                   |                         | 18                  | 18           | 12
Total                    | 10^7          | 2                  | 435               | 870                     | 42 (+57)            | 26 (+7)      | 28 (+38)
Reconstruction           | 10^7          | 0.5                | 600               | 300                     | 5                   | 5            | 5

(*) To be kept if no "0" suppression
DC2: July 2003 – July 2004
At this stage the goal includes:
• Full use of Geant4, POOL and the LCG applications
• Pile-up and digitization in Athena
• Deployment of the complete Event Data Model and the Detector Description
• Simulation of full ATLAS and of the 2004 combined test beam
• Test of the calibration and alignment procedures
• Wide use of the GRID middleware and tools
• Large-scale physics analysis
• Computing model studies (document by end 2004)
• Run as much as possible of the production on LCG-1
Task Flow for DC2 data
[Diagram not reproduced: the DC2 processing chain for a sample such as H -> 4 mu, with Athena-POOL (Athena-ROOT) persistency between the stages. A sketch of the chain as a simple pipeline follows.]
• Event generation: Pythia 6 in Athena produces HepMC events (written out via Athena-POOL)
• Detector simulation: Athena/Geant4 reads the HepMC events and produces Hits + MCTruth
• Digitization (pile-up): Athena pile-up + digitization turns the Hits into Digits; an Athena byte-stream step follows
• Reconstruction: Athena turns the Digits into ESD and AOD
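As a hedged illustration of how a production system could chain these stages, here is a toy Python pipeline; the stage names mirror the diagram, but every function is a stand-in rather than a real ATLAS transformation.

```python
# Toy pipeline mirroring the DC2 task flow; every function below is a stand-in,
# not a real ATLAS transformation.

def generate(n_events):
    return {"format": "HepMC", "events": n_events}               # Pythia 6 in Athena

def simulate(gen):
    return {"format": "Hits+MCTruth", "events": gen["events"]}   # Athena/Geant4

def digitize_with_pileup(hits):
    return {"format": "Digits", "events": hits["events"]}        # Athena pile-up + digits

def make_bytestream(digits):
    return {"format": "ByteStream", "events": digits["events"]}

def reconstruct(raw):
    return {"format": "ESD+AOD", "events": raw["events"]}        # Athena reconstruction


# Each arrow in the diagram becomes a stage whose output dataset feeds the next one.
stages = [generate, simulate, digitize_with_pileup, make_bytestream, reconstruct]
data = 1000            # e.g. one H -> 4 mu partition of 1000 events
for stage in stages:
    data = stage(data)
    print(stage.__name__, "->", data["format"])
```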
DC2: Scenario & Time scale
• End-July 03: Release 7
  – Put in place, understand & validate: Geant4; POOL; LCG applications; the Event Data Model; digitization, pile-up and byte-stream; conversion of DC1 data to POOL; large-scale persistency tests and reconstruction
• Mid-November 03: pre-production release
  – Testing and validation
  – Run test-production
• February 1st 04: Release 8 (production)
  – Start final validation
• April 1st 04:
  – Start simulation; pile-up & digitization
  – Event mixing
  – Transfer data to CERN
• June 1st 04: "DC2"
  – Intensive reconstruction on "Tier0"
  – Distribution of ESD & AOD
  – Calibration; alignment
  – Start physics analysis
• July 15th 04:
  – Reprocessing
ATLAS Data Challenges: DC2
• We are building an ATLAS Grid production & analysis system
• We intend to put in place a "continuous" production system
  – If we continue to produce simulated data during summer 2004, we want to keep open the possibility of running another "DC" later (November 2004?) with more statistics
• We plan to use LCG-1, but we will have to live with other Grid flavors and with "conventional" batch systems
• Combined test-beam operation is foreseen as part of DC2
Milestones 2003
• 1 - Completion of the 10% INFN share of the Geant3 simulation for the HLT TDR within DC1 (April 2003)
  – Completed already by February, as shown in the slides presented, but at the 5% level (consistent with the available CPU)
• 2 - Completion of the reconstruction and analysis of the simulated data of the previous point (June 2003)
  – The data were reconstructed by May without the trigger code; they were transferred to CERN and used for a quick analysis before the publication of the HLT TDR. In July and up to early August they were re-reconstructed with the trigger code added.
  – Completed at the 90% level, because the first realistic test of distributed analysis, which we had planned to carry out for the HLT TDR, has been postponed to a date still to be defined.
Milestones 2003 (2)
• 3 - Simulation of 10^6 muon events with GEANT4 and the same layout used for the HLT TDR (June 2003)
  – The Pavia simulation group processed 4.5 M single-muon events at 20 GeV (with an extra subsample at 200 GeV) in testbeam-2002 mode, with an estimated time per event of 0.1 s/event on a Pentium III 1.26 GHz machine. The simulated data were then processed by the muon-system reconstruction programs (Calib and Moore) and compared with the real 2002 test-beam data. An analysis was performed on this event production and an ATLAS internal note has been submitted for publication (ATLAS-COM-MUON-2003-014; four authors: 2 from Pavia, 1 from Cosenza and 1 from CERN). In addition, again at Pavia, 1 M single-muon events and about 2×10^4 Z -> mu mu events plus as many W -> mu nu events were produced with the updated muon system (version P03 of the Amdb_SimRec muon database) in the central region of the muon spectrometer, for robustness tests.
  – Completed at 100% (or more, if that were possible)
Milestones 2003 (3)
• 4 - Repetition of one of the HLT TDR analyses on the muon data generated with GEANT4 (December 2003)
  – As reported in the previous point, the muon data generated with GEANT4 have already been validated through analysis and comparison with real data. The comparison with the GEANT3 results is still foreseen.
• 5 - Insertion of the ATLAS TierX sites into the LCG production system, and test of this insertion with the first productions of ATLAS DC2 (December 2003)
  – ATLAS DC2 has been shifted forward by 7 months with respect to the date foreseen in July 2002, and the experiments' access to LCG-1 is about to happen only now (beginning of September 2003) against a forecast of April-May (about 4 months late).
  – The Tier2s already activated in ATLAS Italia (Milano, Roma1, Napoli) nevertheless intend to install LCG-1 and gain experience with it within 2003. Milano, as a Tier2 already committed to LCG, will install LCG-1 by September and take part in the activities agreed between ATLAS and LCG; Roma1 and Napoli will take part in the LCG-1 tests in a purely ATLAS framework. Once this first phase of tests has been completed successfully, we will propose the official insertion of Roma1 and Napoli into LCG (spring 2004?).
Milestones 2004
• 1 - By May 2004: production-quality software ready for the start of DC2 (Geant4, Athena release 8, LCG production environment)
  – GEANT4:
    • performance optimization with respect to the current factor of 2 relative to GEANT3 (but no fixed target has been set)
    • geometry refinement (cables, services, etc.)
    • finalization of digitization and persistency
  – Data Management (negligible INFN contribution):
    • integration with POOL and with the SEAL dictionary (representation of the ATLAS event model)
    • persistency for ESD, AOD and Tag Data in POOL
    • common geometry model for reconstruction and simulation
    • support for event collections and filtering (but this may slip to July)
  – Production Environment:
    • New production system for ATLAS, automated and accessing in a coherent way the production-metadata DB (now AMI), the file catalogue and the virtual data. Uniform user interface for the whole of ATLAS, interfaced to LCG (INFN responsibility), US-GRID (Chimera), NorduGrid and plain batch.
Milestones 2004 (2)
• 2 - By October 2004: DC2 simulation, reconstruction and possible reprocessing completed
  – Participation of the Tier1 and Tier2s (CNAF, Milano, Napoli, Roma1) in the simulation, pile-up and reconstruction phases, carrying out 10% of global ATLAS in Italy
  – From April the Tier2 sites are all LCG-capable, i.e. the software is installed at all of them and tested in an Italian mini-production (Milano inserted into LCG since before 2004)
  – Analysis in the Tier3s in collaboration with the Tier1 and Tier2s: report by the end of 2004
  – INFN contribution to the Computing TDR
CPU load ATLAS@Napoli
[Monitoring plot not reproduced]
Roma1 Atlas Farm Usage Statistics
[Usage plots not reproduced]
• Farm info/description: https://classis01.roma1.infn.it/atlas-farm