The mesostriatal dopamine system is prominently implicated in model-free reinforcement learning, with fMRI BOLD signals in ventral striatum notably covarying with model-free prediction errors. However, latent learning and devaluation studies show that behavior also exhibits hallmarks of model-based planning, and the interaction between model-based and model-free values, prediction errors, and preferences is underexplored. We designed a multistep decision task in which model-based and model-free influences on human choice behavior could be distinguished. By showing that choices reflected both influences we could then test the purity of the ventral striatal BOLD signal as a model-free report. Contrary to expectations, the signal reflected both model-based and model-free prediction errors.
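The hybrid computation the task is designed to tease apart can be illustrated in a few lines. The following is a minimal, hypothetical simulation, not the paper's actual task or fitting procedure: the two-stage structure, the reward probabilities, the parameter names (`alpha`, `lam`, `w`), and the greedy choice rule are all simplifying assumptions. Model-free values are learned by SARSA(λ)-style temporal-difference updates, model-based values are computed from an assumed transition model, and choices follow their weighted mixture, with the weight `w` playing the role of the model-based/model-free balance.

```python
import random

def run_hybrid(n_trials=200, alpha=0.3, lam=0.9, w=0.5, seed=0):
    """Sketch of a hybrid model-based / model-free learner on a toy
    two-step task (illustrative assumptions, not the published model).

    Stage 1 has two actions; each leads commonly (70%) to one of two
    second-stage states and rarely (30%) to the other. Second-stage
    states pay reward with fixed probabilities for simplicity.
    """
    rng = random.Random(seed)
    q_mf = [0.0, 0.0]        # first-stage model-free action values
    q_stage2 = [0.0, 0.0]    # second-stage state values
    trans = [[0.7, 0.3],     # assumed (known) transition model: action -> state
             [0.3, 0.7]]
    p_reward = [0.8, 0.2]    # reward probability per second-stage state

    for _ in range(n_trials):
        # Model-based values: expected second-stage value under the model.
        q_mb = [trans[a][0] * q_stage2[0] + trans[a][1] * q_stage2[1]
                for a in (0, 1)]
        # Weighted mixture of the two valuations (w = model-based weight).
        q_net = [w * q_mb[a] + (1 - w) * q_mf[a] for a in (0, 1)]
        a = 0 if q_net[0] >= q_net[1] else 1   # greedy choice (a softmax
                                               # would be used in fitting)
        s2 = 0 if rng.random() < trans[a][0] else 1
        r = 1.0 if rng.random() < p_reward[s2] else 0.0

        # Model-free TD errors: stage-1 error toward the second-stage
        # value, stage-2 error toward the reward; the reward error also
        # reaches stage 1 via the eligibility trace lam.
        delta1 = q_stage2[s2] - q_mf[a]
        delta2 = r - q_stage2[s2]
        q_mf[a] += alpha * (delta1 + lam * delta2)
        q_stage2[s2] += alpha * delta2

    return q_mf, q_stage2
```

Because rewards are Bernoulli and updates move estimates toward targets in [0, 1], all learned values stay in [0, 1]; varying `w` between 0 and 1 shifts the simulated agent between purely model-free and purely model-based choice, which is the behavioral signature the task is built to detect.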
Daw, N.D., et al. (2011) 'Model-based influences on humans' choices and striatal prediction errors', Neuron.