High throughput computing (HTC) has aided the scientific community in the analysis of vast amounts of data and computational jobs in distributed environments. To manage these large workloads, several systems have been developed to efficiently allocate and provide access to distributed resources. Many of these systems rely on estimates of job characteristics (e.g., job runtime) to characterize workload behavior, yet such estimates are hard to obtain in practice. In this work, we perform an exploratory analysis of the CMS experiment workload using the statistical recursive partitioning method and conditional inference trees to identify patterns that characterize particular behaviors of the workload. We then propose an estimation process to predict job ch...
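As a rough illustration of the recursive-partitioning idea described above, the sketch below fits a regression tree to predict job runtime from a few workload attributes. It uses scikit-learn's ordinary decision tree as a stand-in for conditional inference trees (which are more commonly available in R's partykit); the trace file name and column names (n_events, job_type, site, cpu_time) are hypothetical and not taken from the original work.

# Minimal sketch, assuming a CSV export of job traces with hypothetical columns.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor

jobs = pd.read_csv("cms_jobs.csv")                       # hypothetical trace file
X = pd.get_dummies(jobs[["n_events", "job_type", "site"]])  # one-hot encode categoricals
y = jobs["cpu_time"]                                     # target: observed runtime

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

# Shallow tree with large leaves, so each leaf describes a broad job class.
tree = DecisionTreeRegressor(max_depth=5, min_samples_leaf=50)
tree.fit(X_train, y_train)
print("R^2 on held-out jobs:", tree.score(X_test, y_test))

Inspecting the fitted splits (e.g., via sklearn.tree.export_text) then gives the kind of per-class runtime pattern that a scheduler could consume as a runtime estimate.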
Job schedulers in high energy physics require accurate information about predicted resource consumpt...
The analysis of workload traces from real production parallel machines can aid a wide variety of par...
The physics event reconstruction is one of the biggest challenges for the computing of the LHC exper...
The main goal of a Workload Management System (WMS) is to find and allocate resources for the given ...
Data collected by the Compact Muon Solenoid experiment at the Large Hadron Collider are continuously...
Tier-2 computing sites in the Worldwide Large Hadron Collider Computing Grid (WLCG) host CPU-resourc...
The performance of supercomputer schedulers is greatly affected by the characteristics of...
Estimates of task runtime, disk space usage, and memory consumption, are commonly used by scheduling...
The physics event reconstruction in LHC/CMS is one of the biggest challenges for computing. Among the...
This paper evaluates several main learning and heuristic techniques for application run time predic...
At the Large Hadron Collider (LHC), more than 30 petabytes of data are produced from particle collis...
CRAB3 is a workload management tool used by CMS physicists to analyze data acquired by the Compact M...
As High Performance Computing (HPC) has grown considerably and is expected to grow even more, effect...
The paper is devoted to machine learning methods and algorithms for the supercomputer jobs executio...
When a moldable job is submitted to a space-sharing parallel computer, it must choose whet...