Prediction errors (PE) are a central notion in theoretical models of reinforcement learning, perceptual inference, decision-making and cognition, and prediction error signals have been reported across a wide range of brain regions and experimental paradigms. Here, we attempt to see the forest for the trees and consider the commonalities and differences of reported PE signals in light of recent suggestions that the computation of PE forms a fundamental mode of brain function. We discuss where different types of PE are encoded, how they are generated, and the different functional roles they fulfill. We suggest that while the encoding of PE is a common computation across brain regions, the content and function of these error signals can differ markedly, depending on the neural circuitry in which they arise.
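For concreteness, a minimal sketch of the computation at issue is the standard temporal-difference formulation used in reinforcement learning models of the kind surveyed here (the notation below is the conventional one, not specific to any single study discussed):
\[
\delta_t = r_t + \gamma\, V(s_{t+1}) - V(s_t), \qquad V(s_t) \leftarrow V(s_t) + \alpha\, \delta_t,
\]
where \(r_t\) is the outcome received at time \(t\), \(V(s)\) is the learned value of state \(s\), \(\gamma\) is a temporal discount factor, and \(\alpha\) is a learning rate. The PE \(\delta_t\) quantifies the discrepancy between expected and obtained outcomes and drives the update of the expectation; analogous error terms appear in perceptual (predictive coding) models, where the "outcome" is a sensory input rather than a reward.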