Using application benchmark call graphs to quantify and improve the practical relevance of microbenchmark suites

  • Grambow, Martin
  • Laaber, Christoph
  • Leitner, Philipp
  • Bermbach, David
Publication date: May 2021
Publisher: PeerJ, Ltd.
Language: English

Abstract

Performance problems in applications should ideally be detected as soon as they occur, i.e., as soon as the causing code change is committed to the repository. To this end, complex and cost-intensive application benchmarks or lightweight but less relevant microbenchmarks can be added to existing build pipelines to safeguard performance goals. In this paper, we show how the practical relevance of microbenchmark suites can be improved and verified based on the application flow during an application benchmark run. We propose an approach to determine the overlap of common function calls between application and microbenchmarks, describe a method which identifies redundant microbenchmarks, and present a recommendation algorithm which reveals...
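The three steps named in the abstract reduce naturally to set operations over dynamic call graphs. The following minimal Go sketch (Go chosen to match the ecosystem of the related work below; CallGraph, overlap, and uncovered are illustrative names, not the paper's API) assumes each benchmark's call graph has already been extracted as a set of fully qualified function names, and shows the overlap computation and the recommendation of functions no microbenchmark reaches.

package main

import (
	"fmt"
	"sort"
)

// CallGraph represents the set of fully qualified function names
// reached during one benchmark run. How these sets are extracted
// (e.g., via profiling or tracing) is outside this sketch.
type CallGraph map[string]bool

// overlap returns the functions exercised by both the application
// benchmark and a single microbenchmark.
func overlap(app, micro CallGraph) []string {
	var common []string
	for fn := range micro {
		if app[fn] {
			common = append(common, fn)
		}
	}
	sort.Strings(common)
	return common
}

// uncovered returns functions reached by the application benchmark
// that no microbenchmark in the suite reaches; these are candidates
// for the recommendation step described in the abstract.
func uncovered(app CallGraph, suite []CallGraph) []string {
	covered := CallGraph{}
	for _, micro := range suite {
		for fn := range micro {
			covered[fn] = true
		}
	}
	var missing []string
	for fn := range app {
		if !covered[fn] {
			missing = append(missing, fn)
		}
	}
	sort.Strings(missing)
	return missing
}

func main() {
	// Hypothetical call graphs; real ones come from instrumented runs.
	app := CallGraph{"db.Parse": true, "db.Encode": true, "db.Flush": true}
	suite := []CallGraph{
		{"db.Parse": true},                    // e.g., BenchmarkParse
		{"db.Parse": true, "db.Encode": true}, // e.g., BenchmarkEncode
	}
	fmt.Println("overlap:", overlap(app, suite[1]))  // [db.Encode db.Parse]
	fmt.Println("uncovered:", uncovered(app, suite)) // [db.Flush]
}

Redundancy detection follows the same set logic: if one microbenchmark's overlap with the application benchmark is a subset of another's, it adds no additional practical relevance and becomes a removal candidate (here, the first microbenchmark relative to the second).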

Related items

Using Microbenchmark Suites to Detect Application Performance Changes
  • Grambow, Martin
  • Kovalev, Denis
  • Laaber, Christoph
  • Leitner, Philipp
  • Bermbach, David
January 2022

Software performance changes are costly and often hard to detect pre-release. Similar to software te...

Dynamically reconfiguring software microbenchmarks: reducing execution time without sacrificing result quality
  • Laaber, Christoph
  • Würsten, Stefan
  • Gall, Harald C.
  • Leitner, Philipp
December 2020

Executing software microbenchmarks, a form of small-scale performance tests predominantly used for l...
