Reproducibility and repeatability are key properties of benchmarks. However, achieving reproducibility can be difficult. We faced this while applying the microbenchmark MooBench to the resource monitoring framework SPASS-meter. In this paper, we discuss some interesting problems that occurred while trying to reproduce previous benchmarking results. In the process of reproduction, we extended MooBench and made improvements to the performance of SPASS-meter. We conclude with lessons learned for reproducing (micro-)benchmarks.