The ATLAS Experiment at the LHC generates petabytes of data that are distributed among 160 computing sites all over the world and are processed continuously by various central production and user analysis tasks. The popularity of data, typically measured as the number of accesses, plays an important role in resolving data management issues: deleting, replicating, and moving data between tapes, disks and caches. Until now these data management procedures have been carried out in a semi-manual mode; we have therefore focused our efforts on automating them, making use of the historical knowledge about existing data management strategies. In this study we describe the sources of information about data popularity and demonstrate their consistency. Based on the calculat...
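As an illustration of the access-count metric described above, the following Python sketch aggregates hypothetical trace records into per-dataset access counts over a sliding time window. The record layout, dataset names, and window length are assumptions made for illustration only; they do not reflect the actual ATLAS distributed data management tracer schema or policy.

    # Minimal sketch (illustrative only, not the ATLAS DDM tracer schema):
    # dataset popularity approximated as the number of accesses in a time window.
    from collections import Counter
    from datetime import datetime, timedelta

    # Hypothetical access-trace records: (dataset_name, access_time)
    traces = [
        ("data11_7TeV.AOD.r2603", datetime(2012, 3, 1)),
        ("data11_7TeV.AOD.r2603", datetime(2012, 3, 2)),
        ("mc11_7TeV.NTUP.e825",   datetime(2012, 3, 2)),
    ]

    def popularity(traces, window_days=90, now=None):
        """Count accesses per dataset within the last `window_days` days."""
        now = now or datetime.utcnow()
        cutoff = now - timedelta(days=window_days)
        return Counter(name for name, ts in traces if ts >= cutoff)

    if __name__ == "__main__":
        for name, n_accesses in popularity(traces, now=datetime(2012, 3, 3)).most_common():
            print(f"{name}: {n_accesses} accesses")

In a real system such counts would be derived from access traces collected by the data management layer and would feed decisions on deletion, replication, and placement across tapes, disks and caches.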