With the increasing adoption of public and private cloud resources to meet the computing-capacity demands of the WLCG, the HEP community has begun studying several benchmarking applications aimed at continuously assessing the performance of virtual machines procured from commercial providers. To characterise the behaviour of these benchmarks, in-depth profiling activities have been carried out. In this document we outline our experience in profiling one specific application, the ATLAS Kit Validation, in an attempt to explain an unexpected distribution in the performance samples obtained on systems based on Intel Haswell-EP processors.
The IT infrastructures of companies and research centres are implementing new technologies to satisf...
Grid computing provides the main resource for data processing of High Energy Physics experiments, an...
Due to the good performance of the LHC accelerator, the ATLAS experiment has seen higher than antici...
In a commercial cloud environment, exhaustive resource profiling is beneficial to cope with the intr...
Benchmarking of CPU resources in WLCG has been based on the HEP-SPEC06 (HS06) suite for over a decad...
The benchmarking and accounting of CPU resources in WLCG has been based on the HEP-SPEC06 (HS06) sui...
HEPScore is a new CPU benchmark created to replace the HEPSPEC06 benchmark that is currently used by...
The LHC is the world's most powerful particle accelerator, colliding protons at centre of mass energ...
The HEPiX Benchmarking Working Group has developed a framework to benchmark the performance of a com...
In order to estimate the capabilities of a computing slot with limited processing time, it is necess...
Keywords: performance analysis, statistical profiling, Xen, OProfile. Virtual Machine (VM) environments (e.g., ...
The demanding computing needs of the CMS experiment require thoughtful planning and management of it...
The LHC is the world’s most powerful particle accelerator, colliding protons at centre of mass energ...
CERN measures computing systems’ performance based on typical usage patterns that happen “in real ...
Empirical performance measurements of computer systems almost always exhibit variability and anomali...