Academic Paper

Towards Exascale Computing for High Energy Physics: The ATLAS Experience at ORNL
Document Type
Conference
Source
2018 IEEE 14th International Conference on e-Science (e-Science), pp. 341-342, Oct. 2018
Subject
Computing and Processing
Engineered Materials, Dielectrics and Plasmas
Fields, Waves and Electromagnetics
General Topics for Engineers
Geoscience
Signal Processing and Analysis
Task analysis
Supercomputers
Large Hadron Collider
Resource management
Software
Physics
Computer architecture
Language
English
Abstract
Traditionally, the ATLAS experiment at the Large Hadron Collider (LHC) has used distributed resources provided by the Worldwide LHC Computing Grid (WLCG) to support data distribution, data analysis, and simulation. For example, the experiment runs on a geographically distributed grid of approximately 200,000 cores continuously (250,000 cores at peak), consuming over one billion core-hours per year to process, simulate, and analyze its data; ATLAS's total data volume today exceeds 300 PB. After the early success in discovering a new particle consistent with the long-awaited Higgs boson, ATLAS is continuing the precision measurements necessary for further discoveries. The planned high-luminosity LHC upgrade and the related ATLAS detector upgrades, which are necessary for physics searches beyond the Standard Model, pose a serious challenge for ATLAS computing. Data volumes are expected to increase at higher energy and luminosity, causing storage and computing needs to grow at a much faster pace than flat-budget technology evolution can deliver (see Fig. 1). The need for simulation and analysis will overwhelm the expected capacity of WLCG computing facilities unless the range and precision of physics studies are curtailed.
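As a rough sanity check on the utilization figures quoted in the abstract, the sketch below (not part of the original record) converts the stated core counts into annual core-hours, under our own assumption that the grid runs essentially round the clock:

# Sanity check of the resource figures quoted in the abstract.
# Core counts are taken from the abstract; the assumption of
# continuous 24/7 operation is ours.

HOURS_PER_YEAR = 24 * 365  # 8,760 hours

sustained_cores = 200_000  # continuous usage quoted in the abstract
peak_cores = 250_000       # peak usage quoted in the abstract

# Annual core-hours at sustained load.
core_hours = sustained_cores * HOURS_PER_YEAR
print(f"Sustained: {core_hours / 1e9:.2f} billion core-hours/year")
# -> about 1.75 billion, consistent with "over one billion" above.

print(f"Peak:      {peak_cores * HOURS_PER_YEAR / 1e9:.2f} billion core-hours/year")
# -> about 2.19 billion at the quoted peak core count.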