A standard assumption in neuroscience is that low-effort model-free learning is automatic and continuously used, whereas more complex model-based strategies are only used when the rewards they generate are worth the additional effort. We present evidence refuting this assumption. First, we demonstrate flaws in previous reports of combined model-free and model-based reward prediction errors in the ventral striatum that probably led to spurious results. More appropriate analyses yield no evidence of model-free prediction errors in this region. Second, we find that task instructions generating more correct model-based behaviour reduce rather than increase mental effort. This is inconsistent with cost–benefit arbitration between model-based and...
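To make the contrast at issue concrete, below is a minimal, illustrative sketch (not the authors' analysis) of how model-free and model-based reward prediction errors are often defined in a simplified two-stage task. The task structure, parameter values, and variable names are assumptions chosen purely for illustration.

```python
import numpy as np

# Minimal sketch (not the paper's pipeline): contrasting model-free and
# model-based reward prediction errors in a simplified two-stage task.
# Task structure, parameters and names below are illustrative assumptions.

rng = np.random.default_rng(0)

n_trials = 200
n_first_actions = 2          # two first-stage choices
n_second_states = 2          # two second-stage states
alpha = 0.1                  # learning rate (assumed value)

# Model-free: cached value of each first-stage action, updated by TD errors.
q_mf = np.zeros(n_first_actions)

# Model-based: learned second-stage state values plus a (here, fixed) transition
# model mapping first-stage actions to second-stage states.
v_state = np.zeros(n_second_states)
transition = np.array([[0.7, 0.3],       # P(state | action 0)
                       [0.3, 0.7]])      # P(state | action 1)

mf_rpe = np.empty(n_trials)
mb_rpe = np.empty(n_trials)

for t in range(n_trials):
    a = rng.integers(n_first_actions)                  # random choice, for illustration only
    s = rng.choice(n_second_states, p=transition[a])   # sampled second-stage state
    r = rng.binomial(1, 0.5)                           # dummy binary reward

    # Model-free RPE: reward minus the cached value of the chosen action.
    mf_rpe[t] = r - q_mf[a]
    q_mf[a] += alpha * mf_rpe[t]

    # Model-based RPE: reward minus the value the transition model predicts,
    # i.e. the expectation of state values under P(state | action).
    mb_rpe[t] = r - transition[a] @ v_state
    v_state[s] += alpha * (r - v_state[s])

# In fMRI analyses, signals of this kind are typically entered as parametric
# regressors in a GLM on striatal BOLD.
print(np.corrcoef(mf_rpe, mb_rpe)[0, 1])
```

In practice the two signals tend to be substantially correlated, which is one reason analyses that enter both as regressors must be specified and interpreted with care; this sketch is only meant to show what the two quantities are, not how the reported analyses were run.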