arXiv:2204.01500

Which Tricks are Important for Learning to Rank?

Published on Apr 4, 2022

Abstract

Nowadays, state-of-the-art learning-to-rank (LTR) methods are based on gradient-boosted decision trees (GBDT). The most well-known algorithm is LambdaMART, which was proposed more than a decade ago. Recently, several other GBDT-based ranking algorithms have been proposed. In this paper, we conduct a thorough analysis of these methods in a unified setup. In particular, we address the following questions. Is direct optimization of a smoothed ranking loss preferable to optimizing a convex surrogate? How should surrogate ranking losses be constructed and smoothed? To address these questions, we compare LambdaMART with the YetiRank and StochasticRank methods and their modifications. We also improve the YetiRank approach to allow for optimizing specific ranking loss functions. As a result, we gain insights into learning-to-rank approaches and obtain a new state-of-the-art algorithm.
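For context on the baseline families compared above, here is a minimal sketch of how GBDT-based rankers of the LambdaMART and YetiRank kind are typically trained with off-the-shelf libraries. It assumes the catboost and lightgbm packages are installed; the toy data, parameter values, and variable names are illustrative, and this is not the paper's improved algorithm.

```python
# Minimal sketch (not from the paper): training two GBDT-based rankers
# from the families discussed in the abstract on a toy query-grouped dataset.
import numpy as np
from catboost import CatBoostRanker, Pool
from lightgbm import LGBMRanker

rng = np.random.default_rng(0)

# Toy data: 100 documents over 10 queries, 5 features, graded relevance labels 0-4.
X = rng.normal(size=(100, 5))
y = rng.integers(0, 5, size=100)
query_id = np.repeat(np.arange(10), 10)  # documents must be grouped by query

# YetiRank as implemented in CatBoost (one of the compared methods).
yetirank = CatBoostRanker(loss_function="YetiRank", iterations=100, verbose=False)
yetirank.fit(Pool(X, label=y, group_id=query_id))

# LambdaMART-style ranking via LightGBM's lambdarank objective.
lambdamart = LGBMRanker(objective="lambdarank", n_estimators=100)
lambdamart.fit(X, y, group=np.bincount(query_id))  # group = docs per query, in order

print(yetirank.predict(X)[:5], lambdamart.predict(X)[:5])
```

Both libraries expect documents grouped by query (CatBoost via a group_id column, LightGBM via a list of per-query group sizes); the paper's comparison concerns how the ranking loss inside such boosting loops is constructed and smoothed.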
