Optimizing large data transfers for the ALICE experiment in Run 3

Sergiu Weisz, Costin Grigoras, Alice-Florența Șuiu, Latchezar Betev, Mihai Carabaș, Nicolae Țăpuș

2023 22nd RoEduNet Conference: Networking in Education and Research (RoEduNet)

Abstract
The physics programme and scope of HEP experiments naturally grow with time, and with them the computing requirements, both CPU and storage. The four large LHC experiments (ALICE, ATLAS, CMS, and LHCb) are undergoing upgrade cycles: for ALICE and LHCb the upgrade was finalized during LHC Long Shutdown 2 (2019–2021), while for ATLAS and CMS it is foreseen for 2026–2028. The ALICE upgrade involved a complete overhaul of the detector, the data acquisition (DAQ) systems, and the entire software stack. As a consequence, the experiment can take up to 100 times more events than with the previous setup and employs a new online data compression and calibration utility (O2) to reduce the data stream, which is nonetheless 4 times larger than before the upgrade. To manage the new, more complex data paths, new data management software was designed and deployed as part of the Grid software upgrade. This article presents the ALICE data movement system from the O2 data compression farm to the different storage instances around the world, with a focus on optimizing and automating data transfers.