This is a collection of pre-processed Wikidata JSONs that were used in the creation of the CSQA dataset (Ref: https://arxiv.org/abs/1801.10314). Please refer to https://amritasaha1812.github.io/CSQA/download/ for more details.
This data dump of Wikidata is published to allow fair and replicable evaluation of KGQA systems with...
Collection of pre-processed data of the abstracts in publications on exoplanets, as collected by NAS...
These datasets are used to support the results of the paper "Datasets for Creating and Querying Pers...
This dataset is a dump of Wikidata in JSON format, produced on January 4, 2016 by the Wikimedia Foun...
This dataset consists of the complete revision history of every instance of the 100 most important clas...
WDumper is a third-party tool that enables users to create custom dumps of Wikidata. Here we create some t...
LiterallyWikidata is a collection of Knowledge Graph Completion benchmark datasets extracted from Wi...
WikiDBs (https://wikidbs.github.io/) is a corpus of relational databases built from Wikidata (https:...
The Wikidata knowledge base provides a public infrastructure for machine-readable metadata about com...
We maintain a wiki comparison dataset (which we used to call a wiki segmentation dataset) to show a ...
Wikidata-CS (Commonsense) datasets associated with the submission entitled 'Commonsense Knowledge in...
An introduction into how Wikidata can be used as a semantic platform for the life sciences and beyon...
The World Wide Web contains a vast amount of information. This feature makes it a very useful part o...
Recent advances in sequencing technology have created unprecedented opportunities for biological res...