Academic Paper

Distributed computing grid experiences in CMS
Document Type
Periodical
Source
IEEE Transactions on Nuclear Science, 52(4):884-890, Aug. 2005
Subject
Nuclear Engineering
Bioengineering
Distributed computing
Collision mitigation
Detectors
Large-scale systems
Computational modeling
Event detection
Discrete event simulation
Data analysis
Large Hadron Collider
Collaboration
Data flow analysis
data management
data processing
distributed computing
distributed information systems
high energy physics
Language
English
ISSN
0018-9499 (print)
1558-1578 (electronic)
Abstract
The CMS experiment is currently developing a computing system capable of serving, processing, and archiving the large number of events that will be generated when the CMS detector starts taking data. During 2004, CMS undertook a large-scale data challenge to demonstrate the ability of the CMS computing system to cope with a sustained data-taking rate equivalent to 25% of the startup rate. Its goals were: to run CMS event reconstruction at CERN for a sustained period at a 25 Hz input rate; to distribute the data to several regional centers; and to enable data access at those centers for analysis. Grid middleware was utilized to help complete all aspects of the challenge. To continue to provide scalable access to the data from anywhere in the world, CMS is developing a layer of software that uses Grid tools to gain access to data and resources, and that aims to provide physicists with a user-friendly interface for submitting their analysis jobs. This paper describes the data challenge experience with Grid infrastructure and the current development of the CMS analysis system.