Dual-encoder-based dense retrieval models have become the standard in IR. They employ large Transformer-based language models, which are notoriously inefficient in terms of resources and latency. We propose Fast-Forward indexes -- vector forward indexes which exploit the semantic matching capabilities of dual-encoder models for efficient and effective re-ranking. Our framework enables re-ranking at very high retrieval depths and combines the merits of both lexical and semantic matching via score interpolation. Furthermore, to mitigate the limitations of dual-encoders, we tackle two main challenges: first, we improve computational efficiency by either pre-computing representations, avoiding unnecessary computations altogether, or ...
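As a rough illustration of the interpolation idea described above, the sketch below re-ranks first-stage candidates by combining their lexical scores with dot-product scores computed against pre-computed document vectors looked up from a forward index. The index contents, scores, function names, and the fixed weight `alpha` are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

# Hypothetical forward index: maps document IDs to pre-computed dense vectors
# produced offline by the document encoder.
doc_index = {
    "d1": np.array([0.1, 0.3, 0.5]),
    "d2": np.array([0.4, 0.2, 0.1]),
    "d3": np.array([0.2, 0.6, 0.3]),
}

def rerank(query_vec, lexical_scores, alpha=0.5):
    """Re-rank first-stage (lexical) candidates via score interpolation.

    query_vec:      dense query representation from the query encoder
    lexical_scores: dict mapping candidate doc IDs to first-stage scores
    alpha:          interpolation weight between lexical and semantic scores
    """
    interpolated = {}
    for doc_id, lex_score in lexical_scores.items():
        # The semantic score is a dot product with the pre-computed document
        # vector from the forward index -- no document-encoder forward pass
        # is needed at query time.
        sem_score = float(np.dot(query_vec, doc_index[doc_id]))
        interpolated[doc_id] = alpha * lex_score + (1 - alpha) * sem_score
    return sorted(interpolated.items(), key=lambda x: x[1], reverse=True)

# Example usage with made-up scores and a made-up query vector; in practice
# alpha is tuned on validation data and scores may need to be normalized.
query_vec = np.array([0.3, 0.1, 0.4])
lexical_scores = {"d1": 12.4, "d2": 9.1, "d3": 10.7}
print(rerank(query_vec, lexical_scores, alpha=0.7))
```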