We present a method for designing robust controllers for dynamical systems with linear temporal logic specifications. We abstract the original system by a finite Markov Decision Process (MDP) that has transition probabilities in a specified uncertainty set. A robust control policy for the MDP is generated that maximizes the worst-case probability of satisfying the specification over all transition probabilities in the uncertainty set. To do this, we use a procedure from probabilistic model checking to combine the system model with an automaton representing the specification. This new MDP is then transformed into an equivalent form that satisfies assumptions for stochastic shortest path dynamic programming. A robust version of dynamic p...
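To make the robust dynamic-programming step concrete, the sketch below shows one common way such a computation can be organized: robust value iteration for maximizing the worst-case probability of reaching a set of accepting states, assuming an interval model of the transition-probability uncertainty set and assuming the product of the system MDP with the specification automaton has already been reduced to a reachability problem. This is an illustrative sketch, not the paper's implementation; all identifiers (worst_case_expectation, robust_value_iteration, intervals, accept, succ) are hypothetical.

```python
def worst_case_expectation(succs, intervals, values):
    """Minimize sum_s p(s) * values[s] over probability vectors p with
    intervals[s][0] <= p(s) <= intervals[s][1] and sum_s p(s) = 1.

    Greedy solution of the inner linear program: start every successor at its
    lower bound, then push the remaining probability mass onto the successors
    with the smallest values first. Assumes the interval set is non-empty,
    i.e. sum of lower bounds <= 1 <= sum of upper bounds.
    """
    p = {s: intervals[s][0] for s in succs}
    mass = 1.0 - sum(p.values())
    for s in sorted(succs, key=lambda s: values[s]):  # cheapest successors first
        add = min(intervals[s][1] - p[s], mass)
        p[s] += add
        mass -= add
    return sum(p[s] * values[s] for s in succs)


def robust_value_iteration(states, actions, succ, intervals, accept, eps=1e-8):
    """Maximize, over policies, the worst-case probability of reaching `accept`.

    states   : iterable of (product-)MDP states
    actions  : actions[s] -> actions available in state s
    succ     : succ[s][a] -> list of possible successor states
    intervals: intervals[s][a][s'] -> (lower, upper) transition-probability bounds
    accept   : set of accepting (goal) states, assumed absorbing
    """
    V = {s: (1.0 if s in accept else 0.0) for s in states}
    while True:
        delta, V_new = 0.0, {}
        for s in states:
            if s in accept:
                V_new[s] = 1.0
                continue
            # max over actions of the worst-case expected value of the successor
            best = 0.0
            for a in actions[s]:
                best = max(best, worst_case_expectation(succ[s][a], intervals[s][a], V))
            V_new[s] = best
            delta = max(delta, abs(best - V[s]))
        V = V_new
        if delta < eps:
            return V
```

The greedy inner minimization solves the adversary's linear program over the interval uncertainty set in O(n log n) per state-action pair, which is what makes the robust Bellman backup only marginally more expensive than the nominal one.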
Abstract — We present a method to generate a robot control strategy that maximizes the probability t...
Abstract — We consider the synthesis of control policies for probabilistic systems, modeled by Marko...
Abstract — In this paper, we develop a method to automatically generate a control policy for a dyna...
We present a method for designing a robust control policy for an uncertain system subject to tempora...
Discrete-time stochastic systems are an essential modelling tool for many engineering systems. We co...
Abstract—We consider synthesis of control policies that maximize the probability of satisfying give...
The formal verification and controller synthesis for Markov decision processes that evolve over unco...
Optimal solutions to Markov decision problems may be very sensitive with respect to the state transi...
Abstract—We consider synthesis of controllers that maximize the probability of satisfying given temp...
Abstract. We study the problem of effective controller synthesis for finite-state Markov decision pr...
We study the synthesis of robust optimal control policies for Markov decision processes with transit...