Markov decision processes have become the de facto standard in modeling and solving sequential decision-making problems under uncertainty. This book studies lifting Markov decision processes, reinforcement learning, and dynamic programming to the first-order (or relational) setting.
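Before lifting MDPs to the relational setting, it helps to recall the propositional baseline they generalize. The sketch below is a minimal value-iteration solver for a tiny hand-made two-state, two-action MDP; the transition tensor `P`, reward matrix `R`, and discount `gamma` are illustrative assumptions, not taken from any of the works listed here.

```python
import numpy as np

# Hypothetical 2-state, 2-action MDP: P[s, a, s'] is the transition
# probability, R[s, a] the immediate reward.
P = np.array([
    [[0.8, 0.2], [0.1, 0.9]],   # transitions from state 0
    [[0.5, 0.5], [0.0, 1.0]],   # transitions from state 1
])
R = np.array([
    [1.0, 0.0],                 # rewards in state 0 for actions 0, 1
    [0.0, 2.0],                 # rewards in state 1 for actions 0, 1
])
gamma = 0.9                     # discount factor

def value_iteration(P, R, gamma, tol=1e-8):
    """Dynamic-programming solution of a finite MDP via Bellman backups."""
    V = np.zeros(P.shape[0])
    while True:
        # Q(s, a) = R(s, a) + gamma * sum_{s'} P(s, a, s') V(s')
        Q = R + gamma * (P @ V)
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=1)
        V = V_new

V, policy = value_iteration(P, R, gamma)
```

In state 1, action 1 yields reward 2 and self-loops, so its value converges to 2 / (1 - 0.9) = 20; relational approaches aim to compute such values over whole classes of states at once instead of enumerating them individually.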
In this article, we address the problem of early classification on temporal se...
Classical treatments of problems of sequential mate choice assume that the distribution of the quali...
In this paper we present a new method for reinforcement learning in relational domains. A logical la...
Learning and reasoning in large, structured, probabilistic worlds is at the heart of artificial inte...
Markov Decision Processes (MDPs) are a mathematical framework for modeling sequential decision probl...
The problem of making decisions is ubiquitous in life. This problem becomes even more complex when t...
Stochastic sequential decision-making problems are generally modeled and solved as Markov decision p...
This dissertation considers a particular aspect of sequential decision making under uncertainty in w...
Markov decision processes capture sequential decision making under uncertainty, where an agent must ...
Decision making with adaptive utility provides a generalisation to classical Bayesian decision theor...
This paper deals with cognitive theories behind agent-based modeling of learning and information pro...
Markov Decision Processes (MDPs) are a mathematical framework for modeling seq...
Markov decision processes provide a rigorous mathematical framework for sequential decision making u...
In this work we consider probabilistic approaches to sequential decision making. The ultimate goal i...
This chapter presents an overview of simulation-based techniques useful for solving Markov decision ...