In recent years, large pre-trained transformers have led to substantial gains in performance over traditional retrieval models and feedback approaches. However, these results are primarily based on the MS MARCO/TREC Deep Learning Track, with its very particular setup, and our understanding of why and how these models work better is fragmented at best. We analyze effective BERT-based cross-encoders versus traditional BM25 ranking for the passage retrieval task, where the largest gains have been observed, and investigate two main questions. On the one hand, what is similar? To what extent does the neural ranker already encompass the capacity of traditional rankers? Is the gain in performance due to a better ranking of the same documents ...
In the field of information retrieval, passages related to a query are usually easy to obtain, and the pas...
The recent availability of increasingly powerful hardware has caused a shift from traditional inform...
The advent of contextualised language models has brought gains in search effectiveness, not just whe...
The emergence of BERT in 2018 has brought a huge boost to retrieval effectiveness in many tasks acros...
Neural ranking methods based on large transformer models have recently gained significant attention ...
Neural approaches that use pre-trained language models are effective at various ranking tasks, such ...
Due to the growing amount of available information, learning to rank has become an important researc...
The availability of massive data and computing power allowing for effective data-driven neural appro...
As information retrieval researchers, we not only develop algorithmic solutions to hard problems, bu...
Supervised machine learning models and their evaluation strongly depend on the quality of the under...
Neural ranking models use shallow or deep neural networks to rank search results in response to a qu...
Deep pretrained transformer networks are effective at various ranking tasks, such as question answer...
[Background]: The advent of bidirectional encoder representations from transformers (BERT) language...
This second campaign of the TREC Deep Learning Track was an opportunity for us to experiment with de...