Probabilistic Multileave Gradient Descent

Harrie Oosterhuis, Anne Schuth and Maarten de Rijke
Published in European Conference on Information Retrieval (ECIR ’16), 2016. [pdf, code]

Online learning to rank methods aim to optimize ranking models based on user interactions. The dueling bandit gradient descent (DBGD) algorithm can effectively optimize linear ranking models solely from user interactions. We propose an extension of DBGD, called probabilistic multileave gradient descent (P-MGD), that builds on probabilistic multileaving, a recently proposed online evaluation method that is both highly sensitive and unbiased. We demonstrate that P-MGD significantly outperforms state-of-the-art online learning to rank methods in terms of online performance, without sacrificing offline performance, while also learning faster.
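To give a flavour of the approach, the sketch below shows one update step of a multileave gradient descent loop in the style described above. It is a minimal illustration, not the paper's implementation: the number of candidates, step sizes, and the `infer_preferences` callback (which stands in for the probabilistic multileaved comparison inferred from clicks) are all hypothetical placeholders.

```python
import numpy as np

def sample_unit_vector(dim, rng):
    """Draw a random direction uniformly from the unit sphere."""
    v = rng.normal(size=dim)
    return v / np.linalg.norm(v)

def pmgd_update(w, infer_preferences, n_candidates=9, delta=1.0, alpha=0.01, rng=None):
    """One illustrative P-MGD-style update step (a sketch, not the paper's code).

    `infer_preferences` abstracts the probabilistic multileaved comparison:
    given the current ranker and the candidate rankers, it is assumed to
    return the indices of candidates the user preferred, inferred from
    clicks on the multileaved result list.
    """
    rng = rng or np.random.default_rng()
    # Generate candidate rankers by perturbing the current weight vector.
    directions = [sample_unit_vector(len(w), rng) for _ in range(n_candidates)]
    candidates = [w + delta * u for u in directions]

    # The multileaved comparison tells us which candidates beat the current ranker.
    winners = infer_preferences(w, candidates)

    if winners:
        # Move towards the mean of the winning perturbation directions.
        mean_direction = np.mean([directions[i] for i in winners], axis=0)
        w = w + alpha * mean_direction
    return w
```

Because multileaving compares many candidate rankers in a single result list shown to the user, each interaction can contribute evidence about several directions at once, which is where the faster learning reported in the paper comes from.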

Download the paper here.

Code is available here.

Recommended citation:

H. Oosterhuis, A. Schuth, M. de Rijke. "Probabilistic Multileave Gradient Descent." In European Conference on Information Retrieval. Springer, Cham, 2016.