Subjects were randomly assigned to the training or validation set. All training, including tuning of algorithm parameters with 10-fold cross-validation, was performed on the training set.
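A minimal sketch of this setup, assuming scikit-learn, a generic feature matrix X with binary labels, and an SVM with a small C grid as stand-ins for the unspecified algorithms and parameters:

```python
import numpy as np
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVC

# Hypothetical data: 200 subjects x 50 features, binary labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))
y = rng.integers(0, 2, size=200)

# Randomly assign subjects to the training or validation set.
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.3, random_state=0)

# Tune algorithm parameters with 10-fold cross-validation,
# using the training set only.
grid = GridSearchCV(SVC(), {"C": [0.1, 1.0, 10.0]}, cv=10)
grid.fit(X_train, y_train)

# The validation set is used once, for the final performance estimate.
print(grid.best_params_, grid.score(X_val, y_val))
```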
For each participant, the training set was split into three consecutive windows of equal size. In ea...
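The rest of the windowing scheme is truncated above; the fragment below only illustrates the consecutive equal-size split, assuming one participant's training trials are stored in acquisition order in a NumPy array:

```python
import numpy as np

# One participant's training trials, kept in acquisition order:
# 90 trials x 20 features (toy numbers).
train_data = np.arange(90 * 20, dtype=float).reshape(90, 20)

# Split into three consecutive windows of equal size (no shuffling,
# so the temporal order of trials is preserved within each window).
windows = np.split(train_data, 3, axis=0)
for i, window in enumerate(windows):
    print(f"window {i}: trials {30 * i}-{30 * (i + 1) - 1}, shape {window.shape}")
```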
Empirical significance was obtained from the fraction of permutations that showed a correlation h...
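A sketch of that permutation scheme, assuming a Pearson correlation between predicted and observed scores and random permutation of the observed values (the quantities actually permuted in the original analysis are not shown here):

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)

# Toy predicted and observed scores for 40 subjects.
predicted = rng.normal(size=40)
observed = 0.5 * predicted + rng.normal(size=40)

r_obs, _ = pearsonr(predicted, observed)

# Empirical significance: the fraction of permutations whose correlation
# is at least as high as the observed correlation.
n_perm = 10_000
exceed = 0
for _ in range(n_perm):
    r_perm, _ = pearsonr(predicted, rng.permutation(observed))
    if r_perm >= r_obs:
        exceed += 1

print(f"r = {r_obs:.3f}, empirical p = {exceed / n_perm:.4f}")
```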
Results for feature selection, model selection and validation, using the two selection criteria a...
Values for the tuning parameters for each algorithm were selected using 10-fold cross-validation ...
Highest accuracies (in dark red) were reached using only the top 1–3% of voxels active for each condi...
A count matrix undergoes pre-processing, including normalization and filtering. The data is randomly...
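A rough sketch of such a pipeline, with library-size normalization, a detection-rate filter, and a random split chosen purely for illustration; the actual normalization and filtering rules are not specified above:

```python
import numpy as np
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Toy count matrix: 500 samples x 2,000 features, with class labels.
counts = rng.poisson(lam=5.0, size=(500, 2000))
labels = rng.integers(0, 2, size=500)

# Filtering: keep features detected in at least 10% of samples.
keep = (counts > 0).mean(axis=0) >= 0.10
counts = counts[:, keep]

# Normalization: library-size scaling followed by a log transform.
lib_size = counts.sum(axis=1, keepdims=True)
normalized = np.log1p(counts / lib_size * 1e4)

# Random split of the pre-processed matrix into training and test sets.
X_train, X_test, y_train, y_test = train_test_split(
    normalized, labels, test_size=0.2, random_state=0)
print(X_train.shape, X_test.shape)
```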
(a) Graph of -log10 P values for all features derived by comparing progre...
The number of selected reliable samples and the corresponding classification accuracy when probab...
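A small illustration of that idea, assuming "reliable" means the predicted class probability exceeds a threshold; the thresholds and classifier below are placeholders:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 25))
y = (X[:, 0] + rng.normal(size=400) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

proba = model.predict_proba(X_test).max(axis=1)
pred = model.predict(X_test)

# For each probability threshold, report how many "reliable" samples are
# selected and the classification accuracy on that subset.
for threshold in (0.6, 0.7, 0.8, 0.9):
    keep = proba >= threshold
    if keep.any():
        accuracy = (pred[keep] == y_test[keep]).mean()
        print(f"p >= {threshold}: {keep.sum()} samples, accuracy {accuracy:.3f}")
```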
Results of sensitivity analyses across different splits of the training and test sets. We created 1,...
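One way to run such a sensitivity analysis, sketched with scikit-learn's ShuffleSplit; the number of splits, test fraction, and classifier are placeholders rather than the values used above:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import ShuffleSplit

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 40))
y = rng.integers(0, 2, size=300)

# Repeat the train/test split many times and record test accuracy per split;
# n_splits=100 and test_size=0.25 are placeholders, not the values used above.
splitter = ShuffleSplit(n_splits=100, test_size=0.25, random_state=0)
scores = []
for train_idx, test_idx in splitter.split(X):
    model = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
    scores.append(model.score(X[test_idx], y[test_idx]))

print(f"accuracy across splits: {np.mean(scores):.3f} +/- {np.std(scores):.3f}")
```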
(A) Penalized cross-validation error for task-related (blue) and non-task-related (red) RO...
The prediction results compared with other methods on the training dataset using 10-fold cross-va...
First, different models are trained and validated with cross-validation and the best set of param...
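A hedged sketch of that first step, assuming two candidate scikit-learn models with small illustrative parameter grids:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 30))
y = rng.integers(0, 2, size=200)

# Candidate models with illustrative parameter grids.
candidates = {
    "logistic": (LogisticRegression(max_iter=1000), {"C": [0.1, 1.0, 10.0]}),
    "forest": (RandomForestClassifier(random_state=0),
               {"n_estimators": [100, 300]}),
}

# Train and validate each model with cross-validation, keeping the best
# parameter set and its cross-validated score for comparison.
for name, (model, grid) in candidates.items():
    search = GridSearchCV(model, grid, cv=5).fit(X, y)
    print(name, search.best_params_, round(search.best_score_, 3))
```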
We took 75% of the voxels as the training set and the remaining 25% as the validation set. After 20,000 iterations, ...
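A toy version of that split-and-iterate scheme, using plain gradient-descent logistic regression in NumPy as a stand-in for the unspecified model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy voxel data: 1,000 voxels x 10 features, binary targets.
X = rng.normal(size=(1000, 10))
y = (X @ rng.normal(size=10) + rng.normal(scale=0.5, size=1000) > 0).astype(float)

# 75% of voxels for training, the remaining 25% for validation.
order = rng.permutation(len(X))
cut = int(0.75 * len(X))
train_idx, val_idx = order[:cut], order[cut:]

# Logistic regression fitted by plain gradient descent for 20,000 iterations.
w = np.zeros(X.shape[1])
learning_rate = 0.1
for _ in range(20_000):
    p = 1.0 / (1.0 + np.exp(-X[train_idx] @ w))
    gradient = X[train_idx].T @ (p - y[train_idx]) / len(train_idx)
    w -= learning_rate * gradient

val_pred = (X[val_idx] @ w > 0).astype(float)
print("validation accuracy:", (val_pred == y[val_idx]).mean())
```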
The “inner” cross-validation is used for model selection based on their acc...
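The standard way to express such an inner/outer scheme in scikit-learn is to nest a grid search inside an outer cross-validation; the estimator and parameter grid below are illustrative assumptions:

```python
import numpy as np
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(150, 20))
y = rng.integers(0, 2, size=150)

# "Inner" cross-validation: selects the parameters by their accuracy
# within each outer training fold.
inner = GridSearchCV(
    SVC(), {"C": [0.1, 1.0, 10.0], "kernel": ["linear", "rbf"]},
    cv=5, scoring="accuracy")

# "Outer" cross-validation: estimates how well the whole selection
# procedure generalizes to data the inner loop never saw.
outer_scores = cross_val_score(inner, X, y, cv=5)
print("outer-fold accuracies:", np.round(outer_scores, 3))
```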