A: Exemplary trial sequence of the experimental reversal learning task. Participants were instructed to choose the card that they thought would lead to a monetary reward. After they chose one of the two cards, the chosen card was highlighted and feedback was displayed: either a 10 Eurocent coin for a win outcome, or a crossed-out 10 Eurocent coin for a loss outcome. B: Time series of the underlying reward probability of one of the two stimuli. At any time step, the reward probability of the more rewarding stimulus was set to 0.8 and its punishment probability to 0.2 (and vice versa for the other stimulus). Reward contingencies remained stable for the first 55 trials (pre-reversal phase) and for the last 35 trials (post-reversal phase).
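The reward schedule described above can be sketched as a short simulation. This is an illustrative sketch only, assuming a single reversal and a total of 90 trials (55 pre-reversal + 35 post-reversal, as stated in the caption); the function and parameter names are hypothetical, not from the original study's code.

```python
import random

def simulate_schedule(n_pre=55, n_post=35, p_high=0.8, seed=0):
    """Simulate the probabilistic reversal schedule from the caption:
    one stimulus is rewarded with p=0.8 (punished with p=0.2) during
    the pre-reversal phase; contingencies flip at the reversal point.
    Returns a list of (outcome, reward_probability) tuples for the
    'initially better' stimulus."""
    rng = random.Random(seed)
    outcomes = []
    for t in range(n_pre + n_post):
        # Reward probability of the initially better stimulus:
        # 0.8 before the reversal, 0.2 (= 1 - 0.8) afterwards.
        p_reward = p_high if t < n_pre else 1.0 - p_high
        outcome = "win" if rng.random() < p_reward else "loss"
        outcomes.append((outcome, p_reward))
    return outcomes

schedule = simulate_schedule()
print(len(schedule))  # 90 trials in total
```

A learning agent choosing between the two stimuli would then be scored on how quickly its choices track the reversal at trial 55.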
<p>The sequence of trial events on the Risk Task (A) and Reversal Learning Task (B).</p>
Two experiments were attempted to confirm the facilitative effect of overtraining in the original le...
<p>(<b>A</b>) The timeline of the sequential-sampling task used in Experiment 1. After fixation, two...
Reversal learning paradigms are widely used assays of behavioral flexibility with their probabilisti...
<p>(A): Example trial. Participants continuously fixated a white dot at the center of the screen. Af...
<p>(A) Trial structure and feedback schedule. Participants were presented with an abstract image and...
Serial reversal-learning procedures are simple preparations that allow for a better understanding of...
<p>(A) When each trial begins, one of the two stimuli, or , is presented at random on a screen. The...
<p>(<b>A</b>) Probabilistic Reversal Learning task, showing types of feedback events. (<b>B</b>) RL ...
from simple to complex • Reversal learning illustrates a very simple yet computationally challenging...
This repository contains re-test data for a reversal learning task completed by 150 participants, an...
(a)–(c) show the performance of an agent with a value of model decay determined by state-action pred...
<p>(A) Schematic representation of the behavioral training and testing protocol. The rewarded and un...
<p>In two subsequent gambles on 96 trials, subjects could gamble the safe option (cash in 20 cents) ...
<p><b>A.</b> The schematic diagram of the model. The network is composed of three parts: input layer...