TUBITAK-ARDEB [117E977]. This work is supported by TUBITAK-ARDEB under grant number 117E977. End-to-end data-driven approaches lead to rapid development of language generation and dialogue systems. Despite the need for large amounts of well-organized data, these approaches jointly learn multiple components of the traditional generation pipeline without requiring costly human intervention. End-to-end approaches also enable the use of loosely aligned parallel datasets in system development by relaxing the degree of semantic correspondence between training data representations and text spans. However, their potential in Turkish language generation has not yet been fully exploited. In this work, we apply sequence-to-sequence (Seq2Seq) neural m...
Neural networks have been shown to successfully solve many natural language processing tasks previou...
39th European Conference on Information Retrieval, ECIR 2017, 8-13 April 2017. Modeling syntactic in...
Language model pre-training architectures have demonstrated to be useful to learn language represent...
In the last decades, data-to-text (D2T) systems that directly learn from data have gained a lot of a...
Named entity recognition (NER) is an extensively studied task that extracts and classifies named ent...
2018-08-01. Recurrent neural networks (RNN) have been successfully applied to various Natural Language...
In Natural Language Processing (NLP), it is important to detect the relationship between two sequenc...
We present a comparison of word-based and character-based sequence-to-sequence models for data-to-te...
Recurrent neural networks (RNNs) are exceptionally good models of distributions over natural languag...
We introduce the problems of data-to-text generation and the current state of the art, i.e. pretrain...
Most people need textual or visual interfaces in order to make sense of Semantic Web data. In this t...
Ozdemir O, Akin ES, Velioglu R, Dalyan T. A comparative study of neural machine translation models f...
Text in English; Abstract: English and Turkish. Includes bibliographical references (leaves 43-44). x, ...
Social media has become a rich data source for natural language processing tasks with its worldwide ...
Entity Linking, a vital component of Natural Language Processing (NLP), aims to link named entities ...