ATLAS, CERN-IT, and CMS embarked on a project to develop a common system for analysis workflow management, resource provisioning, and job scheduling. This distributed computing infrastructure is based on elements of PanDA and of prior CMS workflow tools. After an extensive feasibility study and the development of a proof-of-concept prototype, the project now has a basic infrastructure that supports the analysis use cases of both experiments through common services. In this paper we discuss the state of the current solution and give an overview of all the components of the system.
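To make the idea of an experiment-agnostic analysis service more concrete, the sketch below shows the kind of task description and submission call such a common layer might expose. It is a minimal illustration only: the AnalysisTask fields, the submit() helper, and all dataset and site names are hypothetical assumptions and do not reflect the actual PanDA or CMS client APIs.

```python
# Hypothetical sketch of a common analysis-task description and submission
# call; names and fields are illustrative, not the real PanDA/CRAB interface.
from dataclasses import dataclass


@dataclass
class AnalysisTask:
    """Experiment-agnostic description of a user analysis workflow."""
    dataset: str                   # input dataset name from the experiment catalogue
    executable: str                # entry point of the user's analysis code
    files_per_job: int = 10        # job-splitting granularity
    output_destination: str = ""   # storage element for the job outputs
    experiment: str = "CMS"        # "CMS" or "ATLAS"; selects experiment plugins


def submit(task: AnalysisTask) -> str:
    """Toy submission: a real service would validate the task, provision
    resources, and hand the resulting jobs to the common scheduler."""
    job_count = max(1, 100 // task.files_per_job)  # assume a 100-file dataset
    print(f"Submitting {job_count} jobs for {task.dataset} ({task.experiment})")
    return "task-0001"  # identifier the user would track afterwards


if __name__ == "__main__":
    task_id = submit(AnalysisTask(
        dataset="/SingleMu/Run2012A/AOD",
        executable="analysis.py",
        output_destination="T2_XX_Example",
    ))
    print("Tracked as", task_id)
```

In such a design the same task description would be accepted for either experiment, with the service resolving experiment-specific data discovery, resource provisioning, and scheduling behind the common interface.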