When operating in stochastic, partially observable, multiagent settings, it is crucial to accurately predict the actions of other agents. In my thesis work, I propose methodologies for learning the policies of external agents from their observed behavior, represented as finite state controllers. To perform this task, I adopt Bayesian learning algorithms based on nonparametric prior distributions, which provide the flexibility required to infer models of unknown complexity. These methods are to be embedded in decision-making frameworks for autonomous planning in partially observable multiagent systems.
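To make the object being learned concrete: a finite state controller is a policy with a set of internal nodes, a stochastic action choice at each node, and node transitions driven by observations. The sketch below is purely illustrative and is my own construction, not the thesis's actual model: the class name, the action/observation sets, and the fixed node count are assumptions, and the Dirichlet-multinomial posterior at the end is a conjugate simplification standing in for the nonparametric priors described above (which, unlike this sketch, leave the number of nodes unbounded).

```python
import random


class FiniteStateController:
    """Illustrative finite state controller (FSC): each internal node
    selects an action stochastically, and the next node is drawn from a
    distribution conditioned on the current node and the observation."""

    def __init__(self, num_nodes, actions, observations, seed=0):
        self.rng = random.Random(seed)
        self.nodes = list(range(num_nodes))
        self.actions = list(actions)
        self.observations = list(observations)
        # Action distribution per node: P(a | n), initialized uniform.
        self.action_probs = {n: self._uniform(self.actions) for n in self.nodes}
        # Node transition per (node, observation): P(n' | n, o).
        self.trans_probs = {(n, o): self._uniform(self.nodes)
                            for n in self.nodes for o in self.observations}

    @staticmethod
    def _uniform(items):
        p = 1.0 / len(items)
        return {x: p for x in items}

    def _sample(self, dist):
        r = self.rng.random()
        acc = 0.0
        for x, p in dist.items():
            acc += p
            if r < acc:
                return x
        return x  # guard against floating-point round-off

    def step(self, node, observation):
        """Emit an action from the current node, then transition."""
        action = self._sample(self.action_probs[node])
        next_node = self._sample(self.trans_probs[(node, observation)])
        return action, next_node


def posterior_action_probs(counts, actions, alpha=1.0):
    """Dirichlet-multinomial posterior mean for P(a | n) at one node,
    given observed action counts -- a simplified, parametric stand-in
    for the nonparametric Bayesian inference described in the text."""
    total = sum(counts.get(a, 0) for a in actions) + alpha * len(actions)
    return {a: (counts.get(a, 0) + alpha) / total for a in actions}
```

For example, after observing the other agent take `listen` three times at a node with action set `{listen, open}` and a symmetric prior `alpha=1`, the posterior mean becomes 0.8 for `listen` and 0.2 for `open`. The actual thesis setting is harder: the controller's nodes are latent, so inference must marginalize over node sequences as well as over the (unknown) number of nodes.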
Stochastic planning has gained popularity over classical planning in recent years by offering princi...
We consider reinforcement learning in partially observable domains where the agent can query an expe...
Partially Observable Markov Decision Processes (POMDPs) have been met with great success in planning...
We consider an autonomous agent operating in a stochastic, partially-observable, multiagent environm...
Bayesian methods for reinforcement learning (BRL) allow model uncertainty to be considered explicitl...
Multi-agent systems draw together a number of significant trends in modern technology: ubiquity, dec...
Abstraction plays an essential role in the way the agents plan their behaviours, especially to reduc...
It is often assumed that agents in multiagent systems with state uncertainty have full knowledge of ...
As a growing number of agents are deployed in complex environments for scientific research and hu...
Decentralized partially observable Markov decision processes (Dec-POMDPs) provide a general framewor...
Multi-agent systems draw together a number of significant trends in modern technology: ubiquity, dec...