ReBench is a tool to run and document benchmark experiments. Currently, it is mostly used for benchmarking language implementations, but it can also be used to monitor the performance of many other kinds of applications and programs. If you use this software, please consider citing it, either based on the metadata found in this file or via the main DOI: https://doi.org/10.5281/zenodo.131176
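Since ReBench drives experiments from a YAML configuration file, a minimal sketch may help illustrate the idea. The suite name, executor, and harness command below are hypothetical placeholders, and the exact schema is defined by ReBench's own documentation, so treat this as an illustration rather than a verified configuration.

```yaml
# rebench.conf -- hypothetical minimal experiment definition
default_experiment: Example

benchmark_suites:
  ExampleSuite:                  # hypothetical suite name
    gauge_adapter: RebenchLog    # parses ReBench's own log output format
    command: "harness %(benchmark)s %(iterations)s"   # hypothetical harness
    benchmarks:
      - Bench1
      - Bench2

executors:
  MyVM:                          # hypothetical executor under test
    path: bin
    executable: vm

experiments:
  Example:
    suites:
      - ExampleSuite
    executions:
      - MyVM
```

With such a file in place, the experiment would be started with `rebench rebench.conf`, and ReBench would run each benchmark/executor combination and record the measurements.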
Historically, benchmarks have been used for commercial purposes. A customer develops or selects a be...
Reproducible research relies on well-designed benchmarks. However, evaluation on a single benchmark ...
ReproZip is a tool that simplifies the process of creating reproducible experiments from command-lin...
This is the first official release of ReBench as a "feature-complete" product. Feature-complete here...
Restructure command-line options in help, and use argparse (#73); Add support for Python 3 and PyPy ...
Performance evaluation of database tools and systems is frequently done by using performance benchma...
This release focuses on reducing the noise from the system (#143, #144). For this purpose, it intro...
1 Introduction Benchmarking is an important technique for assessing the performance of persistent ob...
Engineering-related research, such as research on worst-case execution time, uses experimentation to...
added --setup-only option, to run one benchmark for each setup (#110, #115) added ignore_timeout set...
Performance problems in applications should ideally be detected as soon as they occur, i.e., directl...
Abstract. In this article we present the software architecture and implementation of GridBench, an e...
Reproducibility and repeatability are key properties of benchmarks. However, achieving reproducibili...
Compiler-assisted variable size benchmarking for the study of C++ metaprogram compile times. ctbenc...
made user interface more consistent and concise (#83, #85, #92, #101, #102) added concept of iterati...