The widespread usage of latent language representations via pre-trained language models (LMs) suggests that they are a promising source of structured knowledge. However, existing methods focus on only a single object per subject-relation pair, even though multiple objects are often correct. To overcome this limitation, we analyze these representations for their potential to yield materialized multi-object relational knowledge. We formulate the problem as a rank-then-select task. For ranking candidate objects, we evaluate existing prompting techniques and propose new ones incorporating domain knowledge. Among the selection methods, we find that choosing objects with a likelihood above a learned relation-specific threshold gives a 49.5% F1 sc...
Many data, such as social networks, movie preferences or knowledge bases, are mu...
Relational structure extraction covers a wide range of tasks and plays an important role in natural ...
Information Extraction (IE) aims at mapping texts into a fixed structure representing the key informat...
Large Language Models (LLMs) have achieved remarkable success in many formal language oriented tasks...
Recent work has demonstrated the positive impact of incorporating linguistic representations as addi...
Pre-trained language models (LMs) have advanced the state-of-the-art for many semantic tasks and hav...
Neural models for distantly supervised relation extraction (DS-RE) encode each sentence in an entity...
Recent progress in pretraining language models on large textual corpora led to a surge of improvemen...
Sentence-level relation extraction (RE) aims at identifying the relationship between two entities in...
Making an informed choice of pre-trained language model (LM) is critical for performance, yet enviro...
Machine Learning is often challenged by insufficient labeled data. Previous methods employing implic...
Despite the recent success of large pretrained language models (LMs) on a variety of prompting tasks...
A primary criticism towards language models (LMs) is their inscrutability. This paper presents evide...
Incorporating factual knowledge into pre-trained language models (PLM) such as BERT is an emerging t...
We propose the LLMs4OL approach, which utilizes Large Language Models (LLMs) for Ontology Learning (...