Differentiable Unbiased Online Learning to Rank

Harrie Oosterhuis and Maarten de Rijke
Published in Proceedings of the 27th ACM International Conference on Information and Knowledge Management (CIKM ’18), 2018. [pdf, code]

Online Learning to Rank (OLTR) methods optimize rankers based on user interactions. State-of-the-art OLTR methods are built specifically for linear models, and their approaches do not extend well to non-linear models such as neural networks. We introduce an entirely novel approach to OLTR that constructs a weighted differentiable pairwise loss after each interaction: Pairwise Differentiable Gradient Descent (PDGD). PDGD breaks away from the traditional approach that relies on interleaving or multileaving and extensive sampling of models to estimate gradients. Instead, its gradient is based on inferring preferences between document pairs from user clicks, and it can optimize any differentiable model. We prove that the gradient of PDGD is unbiased w.r.t. user document pair preferences. Our experiments on the largest publicly available Learning to Rank (LTR) datasets show considerable and significant improvements under all levels of interaction noise. PDGD outperforms existing OLTR methods both in terms of learning speed and final convergence. Furthermore, unlike previous OLTR methods, PDGD also allows non-linear models to be optimized effectively. Our results show that using a neural network leads to even better performance at convergence than a linear model. In summary, PDGD is an efficient and unbiased OLTR approach that provides a better user experience than previously possible.
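
To make the pairwise idea above concrete, below is a minimal sketch in Python of a PDGD-style update for a linear scoring model. The function names (`pdgd_update`, `infer_click_preferences`), the simplified preference heuristic, and the unit debiasing weight are illustrative assumptions, not the paper's exact algorithm; in particular, the actual method samples the displayed ranking from the model and computes an unbiasedness weight for each document pair, as specified in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def scores(w, X):
    """Linear scoring model: one score per document (any differentiable model works)."""
    return X @ w

def infer_click_preferences(clicks):
    """Infer rank-position preferences from clicks on a displayed ranking:
    a clicked result is treated as preferred over the unclicked results
    directly around it (a simplified stand-in for the paper's heuristic)."""
    prefs = []
    clicked = [i for i, c in enumerate(clicks) if c]
    unclicked = [i for i, c in enumerate(clicks) if not c]
    for i in clicked:
        for j in unclicked:
            if j < i or j == i + 1:  # unclicked above, or the one just below
                prefs.append((i, j))
    return prefs

def pdgd_update(w, X, ranking, clicks, lr=0.1):
    """One hypothetical PDGD-style step: weighted pairwise gradient ascent on
    log P(d_i > d_j) = log( exp(s_i) / (exp(s_i) + exp(s_j)) )."""
    s = scores(w, X)
    grad = np.zeros_like(w)
    for i, j in infer_click_preferences(clicks):
        di, dj = ranking[i], ranking[j]              # document indices at ranks i, j
        p_ij = 1.0 / (1.0 + np.exp(s[dj] - s[di]))   # P(d_i preferred over d_j)
        rho = 1.0  # debiasing weight rho(d_i, d_j, R) from the paper; omitted here
        # For a linear model: d/dw log P(d_i > d_j) = (1 - P) * (x_i - x_j)
        grad += rho * (1.0 - p_ij) * (X[di] - X[dj])
    return w + lr * grad

# Toy usage: 5 documents with 3 features, displayed in score order, one click.
X = rng.normal(size=(5, 3))
w = np.zeros(3)
ranking = np.argsort(-scores(w, X))  # PDGD samples this ranking; a plain sort here for brevity
clicks = [False, True, False, False, False]
w = pdgd_update(w, X, ranking, clicks)
print("updated weights:", w)
```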

Download the paper here.

Code is available here.

Recommended citation:

H. Oosterhuis, M. de Rijke. "Differentiable Unbiased Online Learning to Rank." In Proceedings of the 27th ACM International Conference on Information and Knowledge Management. ACM, 2018.