Abstract. State-based systems with discrete or continuous time are often modelled with the help of Markov chains. In order to specify performance measures for such systems, one can define a reward structure over the Markov chain, leading to the Markov Reward Model (MRM) formalism. Typical examples of performance measures that can be defined in this way are time-based measures (e.g. mean time to failure), average energy consumption, monetary cost (e.g. for repair or maintenance), or combinations of such measures. These measures can also be regarded as targets for system optimization. For that reason, an MRM can be enhanced with an additional control structure, leading to the formalism of Markov Decision Processes (MDPs). In this ...
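As a point of reference for the terminology used in this abstract, the following is a minimal, textbook-style sketch of the two formalisms; the notation is chosen here purely for illustration and is not taken from the paper itself.

```latex
% Markov Reward Model (discrete-time variant): a Markov chain equipped with a state-reward function.
\[
  \mathcal{M} = (S, \mathbf{P}, \pi_0, \rho), \qquad
  \mathbf{P}\colon S \times S \to [0,1], \quad \sum_{s' \in S} \mathbf{P}(s,s') = 1, \qquad
  \rho\colon S \to \mathbb{R}_{\ge 0}.
\]
% A typical performance measure: the expected reward accumulated over the first n steps from state s.
\[
  \mathbb{E}\Bigl[\, \sum_{k=0}^{n-1} \rho(X_k) \;\Big|\; X_0 = s \Bigr].
\]
% Adding a control structure (a set of actions, resolved by a policy/scheduler) yields an MDP.
\[
  \mathcal{D} = (S, \mathit{Act}, \mathbf{P}, \rho), \qquad
  \mathbf{P}\colon S \times \mathit{Act} \times S \to [0,1], \qquad
  \rho\colon S \times \mathit{Act} \to \mathbb{R}_{\ge 0}.
\]
```

Optimizing such a measure then amounts to choosing, in every state, the action that maximizes (or minimizes) its expected value, e.g. by value iteration over the MDP.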
This paper considers model checking of Markov reward models (MRMs), continuous-time Markov chains wi...
Abstract. Bounded parameter Markov Decision Processes (BMDPs) address the issue of dealing with unce...
Markov-reward models, as extensions of continuous-time Markov chains, have received increased attent...
Costs and rewards are important ingredients for many types of systems, modelling critical aspects li...
Costs and rewards are important ingredients for cyberphysical systems, modelling critical aspects li...
A Markov decision process (MDP) relies on the notions of state, describing the current situation of ...
This presentation introduces the Markov Reward Automaton (MRA), an extension of the Markov automaton...
Abstract—We study the convergence of Markov decision processes, composed of a large number of objec...
This thesis attempts to bring together two different approaches to the modeling of event driven syst...
Composite performance and dependability analysis is gaining importance in the design of complex, fau...
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer...
Continuous-time Markov decision processes (CTMDPs) are widely used for the control of queueing syste...
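For orientation, a common textbook formulation of a CTMDP is sketched below; again, the notation is chosen here for illustration and is not taken from the cited work.

```latex
% Continuous-time MDP: actions select transition *rates* rather than probabilities.
\[
  \mathcal{C} = (S, \mathit{Act}, \mathbf{R}), \qquad
  \mathbf{R}\colon S \times \mathit{Act} \times S \to \mathbb{R}_{\ge 0}.
\]
% Taking action a in state s, the sojourn time is exponentially distributed with exit rate E(s,a),
% and the successor state s' is selected with the embedded discrete probability:
\[
  E(s,a) = \sum_{s' \in S} \mathbf{R}(s,a,s'), \qquad
  \Pr\bigl[ s \xrightarrow{a} s' \bigr] = \frac{\mathbf{R}(s,a,s')}{E(s,a)}.
\]
```

In a queueing context, states typically encode queue lengths, actions encode decisions such as admission or routing, and the rates combine arrival and service rates.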