This paper describes the design of a resilient and flexible software architecture that has been developed to satisfy the data processing requirements of a large HEP experiment, CMS, currently being constructed at the LHC machine at CERN. We describe the components of a software framework that allows the integration of physics modules and that can be easily adapted for use in different processing environments, both real-time (online trigger) and offline (event reconstruction and analysis). Features such as the mechanisms for scheduling algorithms, configuring the application, and managing the dependencies among modules are described in detail. In particular, a major effort has been placed on providing a service for managing persistent data and...
In this thesis, we describe the needs for the operation of a Database Management System (DBMS) to st...
The CMS experiment is currently developing a computing system capable of serving, processing and arc...
The CMS experiment expects to manage several Pbytes of data each year during the LHC programme, dist...
CMS is one of the two general-purpose HEP experiments currently under construction for the Large Had...
The Large Hadron Collider restarted in 2015 with a higher centre-of-mass energy of 13 TeV. The insta...
The CMS experiment at the LHC features a two-level trigger system. Events accepted by the first leve...
We report on the status and plans for the event reconstruction software of the CMS experiment. The ...
The Large Hadron Collider at CERN restarted in 2015 with a higher centre-of-mass energy of 13 TeV. T...
The CMS experiment at the LHC uses a two-stage trigger system, with events flowing from the first le...
The first running period of the LHC was a great success. In particular vital for the timely analysis...
The architecture of the Level-1 Trigger Control and Monitoring system for the CMS experiment is pres...
The CMS experiment at the CERN LHC (Large Hadron Collider) relies on a distributed computing infrast...
The implementation of persistency in the Compact Muon Solenoid (CMS) Software Framework uses the cor...
The globally distributed computing infrastructure required to cope with the multi-petabytes datasets...
The demands of future high energy physics experiments towards software and computing have led to pla...