Tape storage is still a cost-effective way to keep large amounts of data over long periods of time, and this is expected to remain the case in the future. The GridKa tape environment is a complex system of many hardware components and software layers. Configuring this system for optimal performance across all use cases is a non-trivial task that requires considerable experience. We present the current status of the GridKa tape environment, report on recent upgrades and improvements, and outline plans to further develop and enhance the system, especially with regard to the future requirements of the HEP experiments and their large data centers. The short-term planning mainly includes the transition from TSM to HPSS as the backend and the effects on the co...
The GridKa Tier 1 data and computing center hosts a significant share of WLCG processing resources. ...
The LHC program has been successful in part due to the globally distributed computing resources used...
The ATLAS collaboration started a process to understand the computing needs for the High Luminosity ...
Tape storage is still a cost-effective way to keep large amounts of data over a long period of time ...
Data growth over several years within HEP experiments requires a wider use of storage systems for WL...
Tape storage remains the most cost-effective system for safe long-term storage of petabytes of data ...
Storage has been identified as the main challenge for the future distributed computing infrastructur...
The ATLAS Experiment is storing detector and simulation data in raw and derived data formats across ...
Optimization of computing resources, in particular storage, the costliest one, is a tremendous chall...
The computing center GridKa is serving the ALICE, ATLAS, CMS and LHCb experiments as one of the bigg...
For over a decade, dCache has been synonymous with large-capacity, fault-tolerant storage using comm...
CERN's tape-based archive system has collected over 70 Petabytes of data during the first run of the...
The distributed Grid computing infrastructure has been instrumental in the successful exploitation o...
This note will summarize the software development and operational experience and improvements of the...