Data-intensive end-user analyses in high energy physics require high data throughput to achieve short turnaround cycles. This poses enormous challenges for storage and network infrastructure, especially given the tremendously increasing amount of data to be processed during High-Luminosity LHC runs. Including opportunistic resources with volatile storage systems in traditional HEP computing facilities makes this situation even more complex. Bringing data close to the computing units is a promising approach to overcoming throughput limitations and improving overall performance. We focus on coordinated distributed caching, scheduling workflows to the hosts that are most suitable in terms of cached files. This allows optimizing overall pro...
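The coordination idea in this abstract can be sketched minimally: each workflow is assigned to the host whose cache already holds the largest share of its input files. The host names, file names, and scoring rule below are hypothetical illustrations, not the actual implementation; a production system (e.g. an HTCondor-based setup) would use far richer rank expressions.

```python
# Illustrative sketch of cache-aware workflow scheduling: send each
# workflow to the host whose local cache covers the largest fraction
# of the workflow's input files. All names here are made up.

def best_host(input_files, host_caches):
    """Return the host caching the largest fraction of input_files."""
    needed = set(input_files)

    def cached_fraction(host):
        # Fraction of required files already present in this host's cache.
        return len(needed & host_caches[host]) / len(needed)

    return max(host_caches, key=cached_fraction)


host_caches = {
    "worker-a": {"run1.root", "run2.root"},
    "worker-b": {"run2.root", "run3.root", "run4.root"},
}
workflow_inputs = ["run2.root", "run3.root"]
print(best_host(workflow_inputs, host_caches))  # -> worker-b
```

In this toy example, worker-b caches both required files while worker-a caches only one, so the workflow is placed on worker-b and reads its inputs locally instead of over the network.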
Best Paper Award. Many scientific experiments are performed using scientific wor...
The projected Storage and Compute needs for the HL-LHC will be a factor up to 10 above what can be a...
Many scientific experiments are performed using scientific workflows, which are becoming more and mo...
High throughput and short turnaround cycles are core requirements for efficient processing of data-i...
With the second run period of the LHC, high energy physics collaborations will have to face increasi...
The heavily increasing amount of data produced by current experiments in high energy particle physic...
To enable data locality, we have developed an approach of adding coordinated caches to existing comp...
Caching can effectively reduce the cost of serving content and improve the use...
Modern data processing increasingly relies on data locality for performance and scalability, whereas...
Current and future end-user analyses and workflows in High Energy Physics demand the processing of g...
Modern High Energy Physics (HEP) requires large-scale processing of extensive amounts of scientific...
Data analysis workflows in High Energy Physics (HEP) read data written in the ROOT columnar for...
With the evolution of the WLCG towards opportunistic resource usage and cross-site data access, new ...
The ATLAS experiment at CERN’s Large Hadron Collider uses the Worldwide LHC Computing Grid, the WLCG...