Datasets used in the work "Automating Code-Related Tasks Through Transformers: The Impact of Pre-training"
Training a deep learning model on source code has gained significant traction recently. Since such m...
This dataset supports the publication: 'Dynamic Transformer for Efficient Machine Translation on...
In today’s world, which is full of innovations in various fields, the role of Information Technologi...
Resources related to the research work "Automating Code Review Activities by Large-Scale Pre-Trainin...
Patches an issue with the serialization of the TrainingArguments. If you use this software, please cit...
Datasets for the paper "Bridging Pre-trained Models and Downstream Tasks for Source Code Understandi...
Transformer architecture has widespread applications, particularly in Natural Language Processing an...
We provide the datasets for reproducing the experiments in "Code Execution with Pre-trained Language...
Code and data necessary for reproducing the results from 'Assessing the Quality of Source Code Iden...
Recent years have seen the successful application of large pre-trained models to code representation...
The programming exercises were automatically generated by the Digital Teaching Assistant (DTA) syste...
Source code and data for "Toward accurate interpretable predictions of materials properties within t...
This is the fine-tuning dataset used in the paper Impact of Code Language Models on Automated Progra...
Transformers are the current state-of-the-art of natural language processing in many domains and are...
- Pretrained models of protein sequences. See https://github.com/microsoft/protein-sequence-models f...