Converging evidence in human electrophysiology suggests that evaluative feedback provided during performance monitoring (PM) elicits two distinctive and successive ERP components: the feedback-related negativity (FRN) and the P3b. Whereas the FRN has previously been linked to reward prediction error (RPE), the P3b has been conceived as reflecting motivational or attentional processes following the early processing of the RPE, including action value updating. However, it remains unclear whether these two consecutive neurophysiological effects depend on the direction of the unexpectedness (better- or worse-than-expected outcomes; signed RPE) or instead only on the degree of unexpectedness irrespective of direction (i.e., unsigned RPE). To add...
Reinforcement learning in humans and other animals is driven by reward prediction errors: deviations...
The reward positivity is a component of the human ERP elicited by feedback stimuli in trial-and-erro...
Reward learning depends on accurate reward associations with potential choices. These associations c...
The feedback-related negativity (FRN) is a well-established electrophysiological correlate of feedba...
Comparisons between expectations and outcomes are critical for learning. Termed prediction errors, t...
The stimulus-preceding negativity (SPN) component reflects the anticipatory phase of reward processi...
The Feedback-Related Negativity (FRN) provides a reliable ERP marker of performance monitoring (PM)....
Reinforcement learning models make use of reward prediction errors (RPEs), the difference between an...
Reward processing is influenced by reward magnitude, as previous EEG studies showed changes in ampli...
The feedback-related negativity (FRN) is a mid-frontal event-related potential (ERP) recorded in var...
Electrophysiological investigations of brain processing of feedback reveal that the anterior cingula...
In reinforcement learning (RL), an agent makes sequential decisions to maximise the reward it can ob...
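Several of the abstracts above contrast signed reward prediction errors (the direction of surprise: better or worse than expected) with unsigned ones (the magnitude of surprise, irrespective of direction). A minimal sketch of this distinction, using a standard delta-rule value update (not taken from any of the cited studies; the function name and learning rate are illustrative):

```python
def update_value(v, reward, alpha=0.1):
    """Update a value estimate v toward the obtained reward.

    Returns the new value plus the signed and unsigned RPE,
    the two quantities the abstracts above contrast.
    """
    signed_rpe = reward - v          # positive: better than expected
    unsigned_rpe = abs(signed_rpe)   # degree of surprise, sign-blind
    v_new = v + alpha * signed_rpe   # learning uses the signed error
    return v_new, signed_rpe, unsigned_rpe

# Example: expected 0.5, obtained 1.0 -> positive signed RPE
v_new, signed, unsigned = update_value(v=0.5, reward=1.0)
```

A component tracking the signed RPE would distinguish the better- and worse-than-expected cases (opposite signs), whereas one tracking the unsigned RPE would respond identically to both.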