Efficient distribution of physics data over ATLAS grid sites is one of the most important tasks for user data processing. ATLAS' initial static data distribution model over-replicated some unpopular data and under-replicated popular data, creating heavy disk-space loads while leaving some processing resources underutilized due to low data availability. A new data distribution mechanism, PD2P (PanDA Dynamic Data Placement), was therefore implemented within the production and distributed analysis system PanDA; it reacts dynamically to user data needs, basing dataset distribution principally on user demand. Data deletion is also demand driven, reducing replica counts for unpopular data. This dynamic model has led to substantial improvements in efficient ...
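The policy described above can be illustrated with a small sketch: replicate datasets whose recent demand is high but whose replica count is low, and drop surplus replicas of datasets nobody is accessing. The classes, thresholds, and site-selection rule below are hypothetical simplifications for illustration only, not the actual PD2P implementation.

from dataclasses import dataclass, field

@dataclass
class Dataset:
    name: str
    size_tb: float
    recent_accesses: int                          # user jobs touching this dataset in the last window
    replicas: set = field(default_factory=set)    # names of sites holding a copy

@dataclass
class Site:
    name: str
    free_tb: float

def plan_replications(datasets, sites, hot_threshold=50, max_replicas=5):
    """Replicate datasets that are in demand but thinly replicated."""
    transfers = []
    for ds in datasets:
        if ds.recent_accesses >= hot_threshold and len(ds.replicas) < max_replicas:
            # choose the site with the most free space that does not already hold a copy
            candidates = [s for s in sites
                          if s.name not in ds.replicas and s.free_tb >= ds.size_tb]
            if candidates:
                target = max(candidates, key=lambda s: s.free_tb)
                transfers.append((ds.name, target.name))
                target.free_tb -= ds.size_tb
                ds.replicas.add(target.name)
    return transfers

def plan_deletions(datasets, min_replicas=1):
    """Demand-driven cleanup: drop surplus replicas of datasets with no recent accesses."""
    deletions = []
    for ds in datasets:
        while ds.recent_accesses == 0 and len(ds.replicas) > min_replicas:
            victim = ds.replicas.pop()
            deletions.append((ds.name, victim))
    return deletions

Running plan_replications before plan_deletions on a snapshot of access counts yields a transfer list and a deletion list that a data management system could then execute asynchronously.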
This contribution presents a study on the applicability and usefulness of dynamic data placement met...
The CMS experiment at the LHC accelerator at CERN relies on its computing infrastructure to stay at ...
The ATLAS Experiment at the LHC generates petabytes of data that is distributed among 160 computing ...
ATLAS (A Toroidal LHC Apparatus) is one of several experiments at the Large Hadron Collider (LHC)...
The ATLAS experiment's data management system is constantly tracing file movement operations that oc...
This paper describes a popularity prediction tool for data-intensive data management systems, such a...
For high-throughput computing, the efficient use of distributed computing resources relies on an even...
This paper presents a system to predict future data popularity for data-intensive systems, such as A...
The distributed monitoring infrastructure of the Compact Muon Solenoid (CMS) experiment at the Europ...
The PanDA Production and Distributed Analysis System is the ATLAS workload management system for pro...
During the first two years of data taking, the CMS experiment has collected over 20 PetaBytes of dat...
The Compact Muon Solenoid (CMS) experiment at the European Organization for Nuclear Research (CERN...
Scientific computing has advanced in how it deals with massive amounts of data, since the pr...
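Several of the entries above concern predicting future data popularity. As a rough, purely illustrative sketch (not the system any of these papers describes), per-dataset popularity could be forecast by extrapolating a short history of weekly access counts; the dataset names and the simple linear-trend model below are invented for the example.

import numpy as np

def predict_next_week(weekly_accesses, window=4):
    """Forecast next-week accesses per dataset from a least-squares trend over the last `window` weeks."""
    forecasts = {}
    for name, counts in weekly_accesses.items():
        recent = np.asarray(counts[-window:], dtype=float)
        if len(recent) < 2:
            forecasts[name] = float(recent[-1]) if len(recent) else 0.0
            continue
        x = np.arange(len(recent))
        slope, intercept = np.polyfit(x, recent, deg=1)             # fit a linear trend
        forecasts[name] = max(0.0, slope * len(recent) + intercept)  # extrapolate one step ahead
    return forecasts

# Example with made-up dataset names: growing usage yields a higher forecast than flat usage.
history = {"data15_example.AOD": [2, 5, 9, 14], "mc12_example.NTUP": [3, 3, 2, 3]}
print(predict_next_week(history))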