Little is known about how dopamine (DA) neuron firing rates behave in cognitively demanding decision-making tasks. Here, we investigated midbrain DA activity in monkeys performing a discrimination task in which the animal had to use working memory (WM) to report which of two sequentially applied vibrotactile stimuli had the higher frequency. We found that perception was altered by an internal bias, likely generated by deterioration of the representation of the first frequency during the WM period. This bias strongly modulated the phasic DA response during the two stimulation periods, confirming that DA reward prediction errors reflected stimulus perception. In contrast, tonic DA activity during WM was not affected by the bias and did ...
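As a rough, hedged illustration of the kind of computation the abstract implies, the Python sketch below simulates a contraction-like deterioration of the remembered first frequency during the delay and shows how it changes choice accuracy and a simple outcome reward prediction error (RPE). The stimulus set, the bias strength `lam`, the memory noise `sigma`, and the baseline `expected_reward` are all illustrative assumptions, not values or the model from the study.

```python
import numpy as np

# Toy illustration (not the authors' model): during the working-memory delay the
# remembered first frequency f1 drifts toward the mean of the stimulus set
# ("contraction bias"), which changes the subjective comparison with f2 and
# therefore the reward prediction error an RPE-coding neuron would show.
# All parameter values below are illustrative assumptions.

rng = np.random.default_rng(0)
f_set = np.array([14.0, 18.0, 22.0, 26.0, 30.0])  # hypothetical frequency set (Hz)
f_mean = f_set.mean()
lam = 0.3               # assumed strength of the contraction toward the mean
sigma = 2.0             # assumed noise in the memory trace (Hz)
expected_reward = 0.75  # assumed learned value of starting a trial

def trial(f1, f2):
    """One f1-vs-f2 comparison judged from a biased, noisy memory of f1."""
    f1_mem = f1 + lam * (f_mean - f1) + rng.normal(0.0, sigma)
    report_f2_higher = f2 > f1_mem                 # decision from the biased trace
    correct = report_f2_higher == (f2 > f1)
    reward = 1.0 if correct else 0.0
    rpe = reward - expected_reward                 # outcome RPE under this simplification
    return correct, rpe

# Two pairs with the same physical difference (4 Hz); the bias hurts one and
# helps the other, so accuracy and the average RPE diverge between them.
for f1, f2 in [(30.0, 26.0), (26.0, 30.0)]:
    out = [trial(f1, f2) for _ in range(10_000)]
    acc = np.mean([c for c, _ in out])
    mean_rpe = np.mean([r for _, r in out])
    print(f"f1={f1:.0f} Hz, f2={f2:.0f} Hz -> P(correct)={acc:.2f}, mean RPE={mean_rpe:+.2f}")
```

This simplification collapses the RPE to the trial outcome, whereas the study analyzes phasic responses during both stimulation periods; the underlying logic is the same, in that whatever the animal subjectively expects given its biased trace of the first frequency sets the prediction against which each event is evaluated.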