1. All field-collected data are double entered, with automated checks to prevent invalid values from being entered.
2. The two versions of the double-entered data are compared using an R script, and mismatches are corrected.
3. A pull request is submitted to the data repository (e.g., GitHub), which triggers data checks run by the continuous integration system (e.g., Travis CI).
4. If the system detects any issues, the update is reviewed again and corrections are made to the pull request, automatically triggering the data checks to run again.
5. Once the new data pass all automated checks, a data manager reviews the changes and merges the new data into the main data repository.
6. Addition of the new data triggers the continuous integration...
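The double-entry comparison in step 2 can be sketched as a small script that compares two independently entered copies of the same data sheet cell by cell and reports mismatches for correction. This is a minimal illustration only: the file contents, column names, and function name are hypothetical, and the workflow described above uses an R script rather than Python.

```python
# Compare two independently entered versions of the same data sheet and
# report every cell where the entries disagree. Columns and values here
# are invented examples.
import csv
import io

def find_mismatches(entry_a: str, entry_b: str) -> list:
    """Return (row, column, value_a, value_b) for every cell that differs."""
    rows_a = list(csv.DictReader(io.StringIO(entry_a)))
    rows_b = list(csv.DictReader(io.StringIO(entry_b)))
    mismatches = []
    for i, (ra, rb) in enumerate(zip(rows_a, rows_b), start=1):
        for col in ra:
            if ra[col] != rb.get(col):
                mismatches.append((i, col, ra[col], rb.get(col)))
    return mismatches

entry_a = "plot,species,weight\n1,DM,42\n2,PP,17\n"
entry_b = "plot,species,weight\n1,DM,42\n2,PP,11\n"
print(find_mismatches(entry_a, entry_b))  # → [(2, 'weight', '17', '11')]
```

In practice, each reported mismatch would be resolved against the original paper data sheet before the corrected file is submitted as a pull request.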
This talk highlights the workflow automation process at Flanders Institute for Biomechanical Experim...
Creating software releases is one of the more tedious occupations in the life of a software develope...
Abstract Information integration and workflow technologies for data analysis have always been major ...
Over the past decade, biology has undergone a data revolution in how researchers collect data and th...
Experiments with adapting concepts from software development to research data management --- an inho...
One approach to continuously achieve a certain data quality level is to use an integration pipeline ...
Data analysis, statistical research, and teaching statistics have at least one thing in common: t...
Cleaning data (i.e., making sure data contains no errors) can take a large part of a project’s lifet...
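Automated validity checks of the kind such cleaning pipelines run on every update can be sketched as a simple rule table applied to each record. The column names, allowed codes, and value ranges below are hypothetical examples, not drawn from any of the cited projects.

```python
# Minimal sketch of automated data checks: every value in a record must
# satisfy a per-column rule. The rules here are invented for illustration.
RULES = {
    "species": lambda v: v in {"DM", "PP", "OT"},    # allowed species codes
    "weight": lambda v: 0 < float(v) < 500,          # plausible weight range
}

def check_record(record: dict) -> list:
    """Return a list of human-readable errors for one data record."""
    errors = []
    for column, rule in RULES.items():
        value = record.get(column)
        try:
            ok = value is not None and rule(value)
        except ValueError:
            ok = False
        if not ok:
            errors.append(f"invalid {column}: {value!r}")
    return errors

print(check_record({"species": "DM", "weight": "42"}))  # → []
print(check_record({"species": "XX", "weight": "-3"}))  # → two errors
```

Run in a continuous integration job, a non-empty error list would fail the build and block the pull request until the data are corrected.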
Collecting and refining research data or writing software is a part of many researchers' daily routi...
In large software development companies, software systems are built from several modules. In s...
deploy2zenodo is a shell script to deploy your data to zenodo. You can use it in a CI pipeline as an...
What's Changed

New Features and Enhancements
import: Expose --force flag by @daavoo in https://git...
Automatic and repeatable builds are established software engineering practices for achieving cont...