Validating experimental results from articles has finally become the norm at many HPC and systems conferences. Nowadays, more than half of accepted papers pass artifact evaluation and share related code and data. Unfortunately, the lack of a common experimental framework, a common research methodology, and common formats places an increasing burden on evaluators to validate a growing number of ad-hoc artifacts. Furthermore, having too many ad-hoc artifacts and Docker snapshots is almost as bad as not having any (!), since they cannot be easily reused, customized and built upon. While reviewing more than 100 papers during artifact evaluation at HPC conferences, we noticed that many of them use similar experimental setups, benchmarks, models, data ...
Designing, analyzing and optimizing applications for rapidly evolving computer...
This is software documentation for the Collective Knowledge framework v1.15.0. Related resources: ...
Continuing innovation in science and technology is vital for our society and requires ever-increasin...
The original presentation was shared via SlideShare. Validating experimental results from articles ...
Developing novel applications based on deep tech (ML, AI, HPC, quantum, IoT) and deploying them in p...
I started drafting this document at the beginning of the development of the 3rd version of plugin-ba...
This presentation introduces Collective Knowledge Playground - a free, open-source and technology-ag...
14 March 2017, CNRS webinar, Grenoble, France (original slides were shared here). A decade ago my r...
The keynote presentation from the 1st ACM conference on reproducibility and replicability (ACM REP'2...
Based on our interdisciplinary background, we propose to radically change research and development m...
Software and hardware co-design and optimization of HPC systems has become int...
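Several of the entries above refer to the Collective Knowledge (CK) framework. The following is a minimal sketch of its plugin-based Python API, assuming the legacy "ck" package installed from PyPI (pip install ck); the module and action names follow the public CK documentation, and the printed output fields are illustrative rather than exhaustive.

    # Minimal sketch of the Collective Knowledge (CK) Python API.
    # Assumes the legacy "ck" package from PyPI; module/action names
    # follow the public CK documentation and are used here only as
    # an illustration of the plugin-based workflow.

    import ck.kernel as ck

    # Every CK operation goes through a single entry point, ck.access(),
    # which takes a dictionary naming a module (plugin) and an action.
    r = ck.access({'action': 'detect',
                   'module_uoa': 'platform',
                   'out': 'con'})   # print detected platform info to the console
    if r['return'] > 0:
        ck.err(r)                   # standard CK error handling: print and exit

    # List the CK repositories registered on this machine.
    r = ck.access({'action': 'list',
                   'module_uoa': 'repo',
                   'out': 'con'})
    if r['return'] > 0:
        ck.err(r)

The same operations are available from the command line (ck detect platform, ck list repo), which is how CK exposes shared benchmarks, packages and experimental workflows through one uniform interface.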