Improving Efficient Neural Ranking Models with Cross-Architecture Knowledge Distillation
Abstract
Retrieval and ranking models are the backbone of many applications such as web search, open-domain QA, or text-based recommender systems. The latency of neural ranking models at query time is largely dependent on the architecture and deliberate choices by their designers to trade off effectiveness for higher efficiency. This focus on low query latency by a rising number of efficient ranking architectures makes them feasible for production deployment. In machine learning, an increasingly common approach to closing the effectiveness gap of more efficient models is to apply knowledge distillation from a large teacher model to a smaller student model. We find that different ranking architectures tend to produce output scores in different magnitudes. Based on this finding, we propose a cross-architecture training procedure with a margin-focused loss (Margin-MSE) that adapts knowledge distillation to the varying score output distributions of different BERT and non-BERT passage ranking architectures. We apply the teachable information as additional fine-grained labels to existing training triples of the MSMARCO-Passage collection. We evaluate our procedure of distilling knowledge from state-of-the-art concatenated BERT models to four different efficient architectures (TK, ColBERT, PreTT, and a BERT CLS dot product model). We show that across our evaluated architectures our Margin-MSE knowledge distillation significantly improves re-ranking effectiveness without compromising their efficiency. Additionally, we show that our general distillation method improves nearest-neighbor-based index retrieval with the BERT dot product model, offering competitive results with specialized and much more costly training methods. To benefit the community, we publish the teacher-score training files in a ready-to-use package.
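The abstract's central idea is that the student should match the teacher's score margin between a relevant and a non-relevant passage, rather than the teacher's absolute scores, so that architectures with differently scaled outputs can still be distilled. A minimal PyTorch sketch of such a margin-focused loss, with hypothetical tensor names for the student and precomputed teacher scores, could look like this:

```python
import torch
import torch.nn as nn


class MarginMSELoss(nn.Module):
    """Margin-MSE sketch: compare the student's score margin (pos - neg)
    against the teacher's score margin for the same training triple.

    Comparing margins instead of raw scores makes the loss invariant to
    the absolute magnitude of each architecture's output distribution.
    """

    def __init__(self):
        super().__init__()
        self.mse = nn.MSELoss()

    def forward(self, student_pos, student_neg, teacher_pos, teacher_neg):
        # All inputs are 1-D tensors of per-query scores for a batch of
        # (query, relevant passage, non-relevant passage) triples.
        return self.mse(student_pos - student_neg, teacher_pos - teacher_neg)


# Hypothetical usage: s_* are the student's scores for the batch,
# t_* are the teacher scores read from the published training files.
# loss = MarginMSELoss()(s_pos, s_neg, t_pos, t_neg)
```

Because only the difference between the positive and negative scores enters the loss, a student whose scores live in, say, [0, 1] can still learn from a teacher whose scores span a much wider range.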