The recent availability of increasingly powerful hardware has driven a shift in information retrieval (IR) from traditional term-matching approaches, which remained the state of the art for several decades, to large pre-trained neural language models. These neural rankers achieve substantial performance improvements: their capacity and extensive pre-training give them a degree of natural language understanding, allowing them to go beyond term matching and estimate relevance based on the semantics of queries and documents. However, these improvements do not come without cost. In this thesis, we focus on two fundamental challenges of neural ranking models, specifically, on...
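The contrast between lexical and semantic matching can be illustrated with a minimal sketch. The embeddings below are toy hand-crafted vectors, purely hypothetical, standing in for what a dense bi-encoder would produce; the point is only that synonymous terms can share no surface tokens yet lie close in embedding space:

```python
import math

def term_overlap(query: str, doc: str) -> int:
    """Count shared terms -- a crude stand-in for lexical rankers such as BM25."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def cosine(u, v) -> float:
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Toy "embeddings": synonyms point in similar directions despite zero term overlap.
emb = {
    "car": [1.0, 0.1],
    "automobile": [0.95, 0.15],
    "banana": [0.0, 1.0],
}

query, doc = "car", "automobile"
print(term_overlap(query, doc))              # 0: the lexical matcher misses the synonym
print(cosine(emb[query], emb[doc]) > 0.9)    # True: the semantic matcher catches it
```

A real neural ranker learns such representations from data rather than having them specified by hand, but the failure mode of pure term matching on vocabulary mismatch is exactly the one shown here.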
Deep neural models revolutionized the research landscape in the Information Retrieval (IR) domain. N...
Perhaps the applied nature of information retrieval research goes some way to explain the community'...
Recent work has shown that inducing a large language model (LLM) to generate explanations prior to o...
The availability of massive data and computing power allowing for effective data driven neural appro...
Neural networks with deep architectures have demonstrated significant performance improvements in co...
As information retrieval researchers, we not only develop algorithmic solutions to hard problems, bu...
Neural ranking methods based on large transformer models have recently gained significant attention ...
Recent developments of machine learning models, and in particular deep neural networks, have yielded...
Dual-encoder-based dense retrieval models have become the standard in IR. They employ large Transfor...
Neural ranking models use shallow or deep neural networks to rank search results in response to a qu...
LEarning TO Rank (LETOR) is a research area in the field of Information Retrieval (IR) where machine...
Despite the impressive improvements achieved by unsupervised deep neural networks in computer vision...
Neural approaches that use pre-trained language models are effective at various ranking tasks, such ...
One challenge with neural ranking is the need for a large amount of manually-labeled relevance judgm...
Due to the growing amount of available information, learning to rank has become an important researc...