A major drawback of modern neural OpenIE systems and benchmarks is that they prioritize high coverage of information in extractions over compactness of their constituents. This severely limits the usefulness of OpenIE extractions in many downstream tasks. The utility of extractions can be improved if extractions are compact and share constituents. To this end, we study the problem of identifying compact extractions with neural-based methods. We propose CompactIE, an OpenIE system that uses a novel pipelined approach to produce compact extractions with overlapping constituents. It first detects constituents of the extractions and then links them to build extractions. We train our system on compact extractions obtained by processing existing ...
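To make the idea of compact extractions with overlapping constituents concrete, here is a small hypothetical illustration (the sentence and tuples are invented for exposition, not output of the actual CompactIE system):

```python
# Hypothetical illustration of compact OpenIE extractions that share
# constituents; the sentence and tuples are invented examples, not
# output of the actual CompactIE system.
from collections import Counter

sentence = "Bill Gates, co-founder of Microsoft, lives in Seattle."

# A single long extraction covers the sentence but is not compact:
long_extraction = ("Bill Gates, co-founder of Microsoft", "lives in", "Seattle")

# Compact extractions break it into short tuples whose overlapping
# constituent ("Bill Gates") links them together:
compact_extractions = [
    ("Bill Gates", "is co-founder of", "Microsoft"),
    ("Bill Gates", "lives in", "Seattle"),
]

# Find constituents shared across extractions:
counts = Counter(arg for subj, rel, obj in compact_extractions
                 for arg in (subj, obj))
shared = [c for c, n in counts.items() if n > 1]
print(shared)  # ['Bill Gates']
```

The shorter, linked tuples are what make extractions easier to reuse downstream, compared with one long unsegmented argument.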
Funding Information: We acknowledge support from the FWF DACH project I3827-N36, COST action CA18234...
Open information extraction (OpenIE) is a novel paradigm that produces structured information from u...
Various tasks in natural language processing (NLP) suffer from lack of labelled training data, which...
Most existing data is stored in unstructured textual formats, which makes its subsequent processi...
The goal of open information extraction (OIE) is to extract facts from natural language text, and to...
State-of-the-art neural methods for open information extraction (OpenIE) usually extract triplets (o...
Open Information Extraction (OIE) is the task of extracting tuples of the form...
Large and performant neural networks are often overparameterized and can be drastically reduced in s...
Presented on February 12, 2018 at 11:15 a.m. in the Krone Engineered Biosystems Building, Room 1005....
The recent trend in deep neural networks (DNNs) research is to make the networks more compact. The m...
Natural language text, which exists in unstructured format, has a vast amount of knowledge about the...
Confluence is a novel non-Intersection over Union (IoU) alternative to Non-Maxima Suppression (NMS) ...
The explosion of data has made it crucial to analyze the data and distill important information effe...
Hardware accelerators for neural network inference can exploit common data properties for performanc...
Over the past couple of decades, we have witnessed a huge explosion in data generation from almost ever...