Directed acyclic graph (DAG)-aware task scheduling algorithms have been studied extensively in recent years and have achieved significant performance improvements on data-parallel analytics platforms. However, current DAG-aware task scheduling algorithms, among which HEFT and GRAPHENE are notable, pay little attention to the cache management policy, which plays a vital role in in-memory data-parallel systems such as Spark. Conversely, cache management policies designed for Spark perform poorly under DAG-aware task scheduling, leading to cache misses and performance degradation. In this study, we propose a new cache management policy known as Long-Running Stage Set First (LSF), which makes full use of the ...
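The HEFT algorithm named above orders tasks by their "upward rank": a task's cost plus the largest rank among its successors, so that every task is scheduled after all of its predecessors. A minimal sketch, with an illustrative four-task DAG and assumed costs (not taken from the paper):

```python
# Minimal sketch of the HEFT upward-rank ordering on a task DAG.
# The DAG shape and the per-task costs below are illustrative assumptions.

# DAG: task -> list of successor tasks; cost: average execution time per task.
succ = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
cost = {"A": 3, "B": 2, "C": 4, "D": 1}

def upward_rank(t):
    # Rank of a task = its own cost plus the largest rank among its successors
    # (communication costs are omitted here for brevity).
    return cost[t] + max((upward_rank(s) for s in succ[t]), default=0)

# Processing tasks in decreasing upward-rank order guarantees each task
# is scheduled only after all of its predecessors.
order = sorted(succ, key=upward_rank, reverse=True)
print(order)  # ['A', 'C', 'B', 'D']
```

In full HEFT, each task in this order is then placed on the processor giving it the earliest finish time; the sketch shows only the prioritization step.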
Massively parallel processing devices, like Graphics Processing Units (GPUs), have the ability to ac...
Long memory latency and limited throughput become performance bottlenecks of GPGPU applications. The...
Static scheduling is the temporal and spatial mapping of a program to the resources of parallel syst...
In systems with complex many-core cache hierarchy, exploiting data locality can significantly reduce...
Scientific workflows are frequently modeled as Directed Acyclic Graphs (DAGs) ...
Scientific workflows are frequently modeled as Directed Acyclic Graphs (DAG) o...
Most schedulability analysis techniques for multi-core architectures assume a ...
Computational task DAGs are executed on parallel computers by a task scheduling algorithm. Intellige...
This work studies energy-aware real-time scheduling of a set of sporadic Directed Acyclic Graph (DAG...
Effective cache utilization is critical to performance in chip-multiprocessor systems (CMP). Modern ...
Multi-socket Multi-core architectures with shared caches in each socket have become mainstream when ...
We investigate efficient execution of computations, modeled as Directed Acycli...
Many computational solutions can be expressed as DAGs, in which the nodes represent tasks to be exec...
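When a computation is expressed as a DAG of tasks, a valid execution order is any topological order of the nodes. A minimal sketch using Kahn's algorithm, with hypothetical task names:

```python
# Minimal sketch: topological ordering of a task DAG via Kahn's algorithm,
# so each task runs only after every task it depends on. Names are illustrative.
from collections import deque

# deps: task -> list of tasks it depends on.
deps = {"load": [], "clean": ["load"], "train": ["clean"], "report": ["train", "clean"]}

def topo_order(deps):
    indeg = {t: len(d) for t, d in deps.items()}   # unmet-dependency counts
    children = {t: [] for t in deps}               # reverse edges
    for t, d in deps.items():
        for p in d:
            children[p].append(t)
    ready = deque(t for t, n in indeg.items() if n == 0)
    order = []
    while ready:
        t = ready.popleft()
        order.append(t)
        for c in children[t]:                      # releasing t unblocks its children
            indeg[c] -= 1
            if indeg[c] == 0:
                ready.append(c)
    return order

print(topo_order(deps))  # ['load', 'clean', 'train', 'report']
```

Task schedulers build on this ordering by additionally choosing, for each ready task, which processor or core executes it.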
The task-based approach is a parallelization paradigm in which an algorithm is...