Summary: Human behavior displays hierarchical structure: simple actions cohere into subtask sequences, which work together to accomplish overall task goals. Although the neural substrates of such hierarchy have been the target of increasing research, they remain poorly understood. We propose that the computations supporting hierarchical behavior may relate to those in hierarchical reinforcement learning (HRL), a machine-learning framework that extends reinforcement-learning mechanisms into hierarchical domains. To test this, we leveraged a distinctive prediction arising from HRL. In ordinary reinforcement learning, reward prediction errors are computed when there is an unanticipated change in the prospects for accomplishing overall task goals...
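The reward prediction errors mentioned above can be made concrete with a minimal sketch. This is an illustrative example, not the paper's implementation: it contrasts the standard temporal-difference (TD) error, driven by progress toward the overall goal, with the HRL-specific "pseudo-reward" prediction error delivered when a subgoal is attained. All variable names (`V_task`, `V_subtask`, the pseudo-reward of 1.0) are hypothetical choices for illustration.

```python
def td_error(reward, value_next, value_current, gamma=0.95):
    """Standard RL reward prediction error: delta = r + gamma * V(s') - V(s)."""
    return reward + gamma * value_next - value_current

# Hypothetical values for two states, evaluated at two levels of the hierarchy.
V_task = {"s0": 0.2, "s1": 0.3}     # value w.r.t. the overall task goal
V_subtask = {"s0": 0.1}             # value w.r.t. the current subgoal

# Suppose the agent reaches a subgoal at s1: no external reward arrives,
# but the subtask level receives a pseudo-reward of 1.0 and terminates
# (so its next-state value is 0).
delta_task = td_error(reward=0.0, value_next=V_task["s1"],
                      value_current=V_task["s0"])
delta_subtask = td_error(reward=1.0, value_next=0.0,
                         value_current=V_subtask["s0"])

# delta_subtask is large and positive even though delta_task is small:
# subgoal attainment generates a prediction error specific to the
# hierarchical level, with no comparable change at the task level.
print(delta_task, delta_subtask)
```

The point of the sketch is the dissociation: an ordinary RL agent would register essentially no error at subgoal attainment (prospects for the overall goal are unchanged), whereas an HRL agent registers a subtask-level error, which is the distinctive prediction the abstract describes.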
How do people decide how long to continue in a task, when to switch, and to which other task? It is ...
Common approaches to Reinforcement Learning (RL) are seriously challenged by large-scale application...
In our everyday lives, we must learn and utilize context-specific information to inform our decision...
We develop a novel, biologically detailed neural model of reinforcement learning (RL) processes in t...
Humans have the fascinating ability to achieve goals in a complex and constantly changing world, sti...
Recent advances in computational reinforcement learning suggest that humans and animals can learn fr...
A longstanding view of the organization of human and animal behavior holds that behavior is hierarch...
To increase the adaptivity of hierarchical reinforcement learning (HRL) and accelerate the learning ...
We can make good decisions by capturing and exploiting the structure of the natural world. It is tho...
Funding Information: Open access funding provided by Swiss Federal Institute of Technology Zurich. T...
Often the world is structured such that distinct sensory contexts signify the same abstract rule set...