The contribution of this paper is to introduce heuristics that go beyond safe state abstraction in hierarchical reinforcement learning in order to approximate a decomposed value function. The resulting improvements in time and space complexity for learning and execution may outweigh the loss of hierarchically optimal performance, and they deliver anytime decision making during execution. The heuristics are discussed in relation to HEXQ, an algorithm that partitions an MDP and generates a hierarchy of abstract models using safe state abstraction. The approximation methods are illustrated empirically.
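For reference, the decomposed value function mentioned above has the recursive form used in MAXQ-style decompositions (Dietterich, 2000), which HEXQ adopts at each level of its hierarchy. The sketch below uses Dietterich's notation rather than anything stated in this abstract, so the symbols should be read as illustrative assumptions:

\[
  Q^{\pi}(i,s,a) = V^{\pi}(a,s) + C^{\pi}(i,s,a), \qquad
  V^{\pi}(i,s) =
    \begin{cases}
      Q^{\pi}\bigl(i,s,\pi_i(s)\bigr) & \text{if } i \text{ is a composite subtask},\\
      \sum_{s'} P(s' \mid s, i)\, R(s' \mid s, i) & \text{if } i \text{ is a primitive action},
    \end{cases}
\]

where \(V^{\pi}(a,s)\) is the expected return while child subtask \(a\) executes from state \(s\), and \(C^{\pi}(i,s,a)\) is the expected completion value of parent task \(i\) after \(a\) terminates. Approximating parts of this decomposition, for example the completion terms \(C\), is one way such heuristics can trade hierarchically optimal performance for reduced time and space requirements.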