Analyzing parallel programs has become increasingly difficult due to the immense amount of information collected on large systems. Clustering techniques have been proposed to analyze applications. However, while previous works focus on identifying groups of processes with similar characteristics, we target a much finer granularity in application behavior. In this paper, we present a tool that automatically characterizes the different computation regions between communication primitives in message-passing applications. This study shows that some clustering algorithms that are applicable at a coarse grain are no longer adequate at this level. Density-based clustering algorithms applied to the perf...
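The abstract describes applying density-based clustering to performance data gathered for the computation regions delimited by communication calls. The minimal sketch below illustrates that idea with scikit-learn's DBSCAN on synthetic per-region metrics; the choice of completed instructions and IPC as features, and all parameter values, are illustrative assumptions rather than the tool's actual implementation.

```python
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.preprocessing import StandardScaler

# Hypothetical per-region metrics: one row per computation burst between
# communication calls; columns are completed instructions and IPC.
# Real data would come from hardware counters recorded in the trace.
rng = np.random.default_rng(0)
regions = np.vstack([
    rng.normal([1e9, 1.8], [5e7, 0.05], size=(200, 2)),  # compute-bound phase
    rng.normal([2e8, 0.6], [2e7, 0.05], size=(150, 2)),  # memory-bound phase
    rng.normal([5e8, 1.2], [3e8, 0.40], size=(20, 2)),   # scattered bursts
])

# Scale features so a single eps is meaningful across metrics with different units.
X = StandardScaler().fit_transform(regions)

# Density-based clustering: regions in low-density areas are labeled -1 (noise),
# so irregular bursts do not distort the main behavioral groups.
labels = DBSCAN(eps=0.3, min_samples=10).fit_predict(X)

for cluster_id in sorted(set(labels)):
    members = regions[labels == cluster_id]
    name = "noise" if cluster_id == -1 else f"cluster {cluster_id}"
    print(f"{name}: {len(members)} regions, "
          f"mean instructions={members[:, 0].mean():.3e}, "
          f"mean IPC={members[:, 1].mean():.2f}")
```

A density-based method fits this setting because the number of distinct computation behaviors is not known in advance and outlier regions should be reported as noise rather than forced into a cluster.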