<p><b>A.</b> A comparison of time allocation in the training session with the experimental session. We plotted the proportion of time subjects allocated to the first movement in the experimental session against that in the training session. If time allocation were similar between the experimental and the training sessions, most points would fall symmetrically about the diagonal line. Colors code the reward conditions (blue: equal-reward condition; red: unequal-reward condition) and different symbols code the distance conditions (dot: equal-distance condition; cross: unequal-distance condition). <b>B.</b> The estimated probability of hitting target B (second movement) was plotted against the mean movement time (ms) to target A (th...
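<p>As an illustration of the panel-A construction (hypothetical data and condition labels, not the study's), the sketch below plots the proportion of time allocated to the first movement in the experimental session against that in the training session, adds the diagonal identity line, and applies the colour and marker coding described above.</p>
<pre><code>
# Minimal plotting sketch with hypothetical data; it only illustrates the layout
# described in the caption (scatter vs. identity line, colour = reward condition,
# marker = distance condition).
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
n = 20                                              # hypothetical number of subjects
training = rng.uniform(0.3, 0.7, n)                 # proportion of time, training session
experimental = training + rng.normal(0, 0.05, n)    # proportion of time, experimental session
reward = rng.choice(["equal", "unequal"], n)        # hypothetical reward-condition labels
distance = rng.choice(["equal", "unequal"], n)      # hypothetical distance-condition labels

colors = {"equal": "blue", "unequal": "red"}
markers = {"equal": "o", "unequal": "x"}

fig, ax = plt.subplots()
for r in ("equal", "unequal"):
    for d in ("equal", "unequal"):
        sel = (reward == r) & (distance == d)
        ax.scatter(training[sel], experimental[sel], c=colors[r], marker=markers[d],
                   label=f"{r}-reward, {d}-distance")
ax.plot([0, 1], [0, 1], "k--")                      # diagonal: identical time allocation
ax.set_xlabel("Proportion of time on first movement (training)")
ax.set_ylabel("Proportion of time on first movement (experimental)")
ax.legend(fontsize=8)
plt.show()
</code></pre>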
<p>(A) Initial directional deviation of movements to the learned target (LT), as a function of the n...
<p>LS is identified as the first day with significant performance, i.e. with a significant differenc...
(A) During each trial of the “tokens task,” 15 tokens jump, one every 200 ms, from the central circl...
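<p>As a rough illustration of the trial timing only (not the authors' implementation), the sketch below simulates 15 tokens leaving a central circle one every 200 ms; the target names and the equal jump probability are assumptions made for the example.</p>
<pre><code>
# Sketch of the token-jump timing: 15 tokens, one jump every 200 ms.
import random

N_TOKENS = 15
JUMP_INTERVAL_MS = 200

def simulate_trial(seed=None):
    rng = random.Random(seed)
    counts = {"left": 0, "right": 0}
    timeline = []
    for i in range(N_TOKENS):
        target = rng.choice(["left", "right"])       # assumed 50/50 jump probability
        counts[target] += 1
        timeline.append({
            "time_ms": (i + 1) * JUMP_INTERVAL_MS,   # 200, 400, ..., 3000 ms
            "target": target,
            "left_minus_right": counts["left"] - counts["right"],  # running evidence
        })
    return timeline

for event in simulate_trial(seed=1)[:5]:
    print(event)
</code></pre>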
<p>For each subject and condition, we plotted the mean movement times and dwell time of the unequal-...
<p><b>A</b>) The outline of the training schedule for the recognition of moving bars. <b>B</b>) Repr...
<p>Comparison of the training time and forecasting accuracy (training 40%, testing 60%).</p>
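<p>For concreteness, a minimal sketch of a 40%/60% chronological split is given below; the data, the baseline forecaster, and the error metric are placeholders, not those used in the study.</p>
<pre><code>
# Sketch: train on the first 40% of a series, evaluate forecasting accuracy
# and training time on the remaining 60%.
import time
import numpy as np

series = np.sin(np.linspace(0, 20, 500)) + np.random.default_rng(0).normal(0, 0.1, 500)
split = int(0.4 * len(series))                  # first 40% for training
train, test = series[:split], series[split:]    # remaining 60% for testing

t0 = time.perf_counter()
mean_forecast = train.mean()                    # trivial baseline "model"
training_time = time.perf_counter() - t0

mae = np.mean(np.abs(test - mean_forecast))     # forecasting error on the test set
print(f"training time: {training_time * 1e3:.3f} ms, test MAE: {mae:.3f}")
</code></pre>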
<p>A,B: distance of participants from the centre of exit D2 over time. A and B show the data for dif...
Purpose: We investigated how directional change, distance, and reward affected the speed and accurac...
<p>(A) Average reach trajectory to the far target of each side in each condition for a single partic...
<p>(A) In the graph the percentage of correct responses obtained in the training session (x) and in ...
<p>(Left) On average, Go stimuli took significantly longer to learn than No Go stimuli. (Right) Erro...
<p>(Left) Error rates decreased with practice, and the number of errors was greater for Go stimuli ...
<p><b>(A)</b> Representative participant data showing how a reaching trajectory is gradually updated...
<p>One parameter is the learning rate; another turns the model from a strict policy gradient rule to naive Heb...
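<p>To make the role of these parameters concrete, here is a minimal sketch under assumed notation (eta for the learning rate and a hypothetical mixing parameter lam); it shows how a single parameter can move a weight update from a strict, reward-modulated policy-gradient-like rule to a naive Hebbian rule, and is an illustration rather than the model reported in the figure.</p>
<pre><code>
# Assumed form, for illustration only: lam = 0 gives a reward-modulated
# (policy-gradient-like) update, lam = 1 gives a naive Hebbian update.
import numpy as np

def weight_update(w, x, y, reward, reward_baseline, eta=0.01, lam=0.0):
    """Return updated weights for one trial."""
    modulation = (1.0 - lam) * (reward - reward_baseline) + lam
    return w + eta * modulation * np.outer(y, x)

# Toy usage with hypothetical sizes.
rng = np.random.default_rng(0)
w = rng.normal(0, 0.1, (3, 5))    # post x pre weight matrix
x = rng.normal(size=5)            # presynaptic activity
y = w @ x                         # postsynaptic activity (linear unit)
w = weight_update(w, x, y, reward=1.0, reward_baseline=0.5, eta=0.05, lam=0.2)
print(w.shape)
</code></pre>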