Scientific workflows are generally computing- and data-intensive, with large volumes of data generated during their execution. Some of this data should therefore be saved to avoid the expensive re-execution of tasks in case of exceptions. However, cloud-based data storage services come at a cost. In this paper, we extend the risk evaluation model, which assigns different weights to tasks based on their ordering relationships, to decide when to perform a backup or checkpoint after the completion of a task. The proposed method computes and compares the potential loss with and without data backup, achieving a tradeoff between the overhead of checkpointing and that of re-execution after exceptions. We also design the utility function wi...
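To make the tradeoff described above concrete, here is a minimal sketch of such a decision rule: back up a task's output only when the expected loss from re-execution outweighs the certain cost of storage and checkpointing. All names and parameters below (task_weight, failure_prob, storage_cost, and so on) are illustrative assumptions, not the paper's notation.

```python
# Hedged sketch (not the paper's exact model): decide whether to back up a
# task's output by comparing expected loss with and without the backup.

def expected_loss_without_backup(task_weight: float,
                                 reexecution_cost: float,
                                 failure_prob: float) -> float:
    """Expected cost of losing this task's output and re-executing it,
    scaled by the task's weight in the ordering-based risk model."""
    return task_weight * failure_prob * reexecution_cost

def expected_loss_with_backup(storage_cost: float,
                              checkpoint_overhead: float) -> float:
    """Certain cost paid up front: storing the output plus the
    checkpointing overhead itself."""
    return storage_cost + checkpoint_overhead

def should_backup(task_weight: float, reexecution_cost: float,
                  failure_prob: float, storage_cost: float,
                  checkpoint_overhead: float) -> bool:
    # Back up only when the potential loss from re-execution exceeds
    # the certain cost of the backup/checkpoint.
    loss_without = expected_loss_without_backup(
        task_weight, reexecution_cost, failure_prob)
    loss_with = expected_loss_with_backup(storage_cost, checkpoint_overhead)
    return loss_without > loss_with

# Example: a heavily weighted task with expensive re-execution gets backed up.
print(should_backup(task_weight=2.0, reexecution_cost=120.0,
                    failure_prob=0.05, storage_cost=3.0,
                    checkpoint_overhead=1.5))  # True (12.0 > 4.5)
```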
Cloud computing is a virtualized, scalable, ubiquitous, and distributed computing paradigm that prov...
Due to the dynamic nature of the underlying high-performance infrastructures for scientific workflow...
This work deals with scheduling and checkpointing strategies to execute scient...
Data-intensive workflows are generally computing- and data-intensive, with large volumes of data gener...
Selecting appropriate services for task execution in workflows should not only consider budget and d...
In this paper, we aim at optimizing fault-tolerance techniques based on a checkpointing/restart mech...
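The abstract is truncated here, so the exact optimization target is not visible; a classic first-order result often used when tuning checkpoint/restart mechanisms is the Young/Daly approximation of the optimal checkpoint period, sketched below under the assumption of a known platform MTBF and per-checkpoint cost (both are assumed inputs, not values from the paper).

```python
import math

def young_daly_interval(mtbf_seconds: float,
                        checkpoint_cost_seconds: float) -> float:
    """First-order optimal checkpoint period: T_opt ~ sqrt(2 * mu * C),
    where mu is the platform MTBF and C the time to write one checkpoint."""
    return math.sqrt(2.0 * mtbf_seconds * checkpoint_cost_seconds)

# Example: an MTBF of 24 hours and a 60-second checkpoint yield a period
# of roughly 54 minutes between checkpoints.
print(young_daly_interval(24 * 3600, 60) / 60)  # ~53.7 minutes
```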
Scientific workflows are data- and compute-intensive; thus, they may run for days or even weeks...
The robustness of a schedule, with respect to its probability of successful execution, becomes an in...
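As a hedged illustration of such a robustness metric: if task failures are modeled as independent and exponentially distributed, the probability that a schedule completes without failure factors into per-task terms. The failure model and names below are assumptions for illustration, not taken from the cited work.

```python
import math

def schedule_success_probability(task_runtimes: list[float],
                                 failure_rate: float) -> float:
    """P(no failure during any task) = prod_i exp(-lambda * t_i)
    = exp(-lambda * sum_i t_i), assuming independent exponential failures
    with rate lambda on the executing resource."""
    return math.exp(-failure_rate * sum(task_runtimes))

# Example: five 1-hour tasks on a resource failing once per 1000 hours.
print(schedule_success_probability([1.0] * 5, failure_rate=1e-3))  # ~0.995
```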
Scientific workflows are used to model applications of high throughput computation and complex large...
Cloud environments offer low-cost computing resources as a subs...