<p>Note. <i>r</i> = correlation between scores predicted by the model and scores for that tweet from raters in the hold-out (validation) sample. Model <i>R</i> and RMSE are the average values over the out-of-sample cases for each of the 25 bootstrapped samples used to train each model on the training dataset.</p><p>Correlations between model predictions and validation data, model <i>R</i>, and model RMSE for the training models.</p>
<p>For each method, the accuracy, the sensitivity, the specificity and the Matthews correlation coef...
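The caption above lists accuracy, sensitivity, specificity, and the Matthews correlation coefficient. As a point of reference, a minimal sketch of how these are computed from confusion-matrix counts; the counts in the usage line are illustrative, not from the table:

```python
import math

def classification_metrics(tp, fp, tn, fn):
    """Accuracy, sensitivity, specificity, and Matthews correlation
    coefficient (MCC) from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)   # true positive rate
    specificity = tn / (tn + fp)   # true negative rate
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc = (tp * tn - fp * fn) / denom if denom else 0.0
    return accuracy, sensitivity, specificity, mcc

# Illustrative counts only:
acc, sens, spec, mcc = classification_metrics(tp=40, fp=10, tn=45, fn=5)
```

MCC is often preferred over plain accuracy for the kind of comparison described here because it stays informative when the two classes are imbalanced.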
<p>Series 1 shows the observations of training set without the abnormal observation. Series 3 shows ...
<p>The models used include VARMA and SVR, whereas the features used include the raw social dynamics ...
<p>Results for feature selection, model selection and validation, using the two selection criteria a...
Results of sensitivity analyses across different splits of the training and test sets. We created 1,...
<p>Results for feature selection, model selection and validation, using the two selection criteria a...
For each validation set the following metrics were calculated: RMSE, Pearson’s correlation coefficie...
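The validation metrics named above include Pearson's correlation coefficient. A minimal sketch of that computation (standard definition, not the authors' code):

```python
import math

def pearson_r(x, y):
    """Pearson's correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)
```

A perfectly linear relationship, e.g. `pearson_r([1, 2, 3], [2, 4, 6])`, yields 1.0.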
Statistics of cross validated models trained with tiE energy terms versus experimental dTm values, w...
<p>RMSE: root mean square error (kJ/mol), MAE: mean absolute error (kJ/mol), MAE (%): MAE as a perce...
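The caption above defines RMSE and MAE in kJ/mol and MAE (%) as a percentage. A minimal sketch of these error metrics, assuming MAE (%) is normalized by the observed range of the target values (one common convention; the original may normalize differently):

```python
import math

def rmse(y_true, y_pred):
    """Root mean square error (same units as y, e.g. kJ/mol)."""
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

def mae(y_true, y_pred):
    """Mean absolute error (same units as y)."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def mae_percent(y_true, y_pred):
    """MAE as a percentage of the observed range of y (assumed convention)."""
    rng = max(y_true) - min(y_true)
    return 100.0 * mae(y_true, y_pred) / rng
```

Because RMSE squares the residuals before averaging, it penalizes large outliers more heavily than MAE, which is why tables of this kind usually report both.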
<p>Average SD for the coefficient of determination (<i>R</i><sup>2</sup>) obtained for each class by the predictive mod...
<p>Comparison of RMSE for the prediction model on the target NM_002559 with HEGS-based features and ...
<p>Empirical significance was obtained from the fraction of permutations that showed a correlation h...
<p><i>RMSE (a)</i>: root mean square error of the training set; <i>RMSE (b)</i>: root mean square er...
<p>*Values are significantly different from random prediction (One-Sample t-test, p < 0.01, One-Samp...
Comparison results of different network models: A is training accuracy of model, B is validation acc...