Optimal Baseline Corrections for Off-Policy Contextual Bandits

Shashank Gupta, Olivier Jeunen, Harrie Oosterhuis, and Maarten de Rijke
Published in Proceedings of the 18th ACM Conference on Recommender Systems (RecSys ’24), 2024. [pdf, code]

The off-policy learning paradigm allows recommender systems and general ranking applications to be framed as decision-making problems, where we aim to learn decision policies that optimize an unbiased offline estimate of an online reward metric. With unbiasedness comes potentially high variance, and prevalent methods exist to reduce estimation variance. These methods typically make use of control variates, either additive (i.e., baseline corrections or doubly robust methods) or multiplicative (i.e., self-normalisation).
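
To illustrate the two flavours of control variates (this is a minimal sketch, not code from the paper), the snippet below shows the vanilla IPS estimator, an additively corrected (baseline) variant, and the multiplicatively corrected self-normalised variant. The weights `w` are assumed to be importance ratios of the target policy over the logging policy, and `r` the logged rewards.

```python
import numpy as np

def ips(w, r):
    # Vanilla inverse propensity scoring (IPS): unbiased but potentially high-variance.
    # w: importance weights pi_target(a|x) / pi_logging(a|x); r: observed rewards.
    return np.mean(w * r)

def baseline_corrected_ips(w, r, beta):
    # Additive control variate (baseline correction): subtracting beta * (w - 1)
    # keeps the estimate unbiased because E[w] = 1 under the logging policy.
    return np.mean(w * r - beta * (w - 1.0))

def snips(w, r):
    # Multiplicative control variate (self-normalisation): divide by the empirical
    # mean of the weights instead of relying on E[w] = 1 holding exactly in-sample.
    return np.sum(w * r) / np.sum(w)
```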

Our work unifies these approaches by proposing a single framework built on their equivalence in learning scenarios. The foundation of our framework is the derivation of an equivalent baseline correction for all of the existing control variates. Consequently, our framework enables us to characterize the variance-optimal unbiased estimator and provide a closed-form solution for it. This optimal estimator brings significantly improved performance in both evaluation and learning, and minimizes data requirements. Empirical observations corroborate our theoretical findings.
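
For intuition only, the sketch below estimates a variance-minimising coefficient for the additive correction from logged samples, using the classical control-variate formula beta* = Cov(w·r, w) / Var(w) for the estimator mean(w·r − beta·(w − 1)). This is an assumed illustration of the underlying idea, not the paper's closed-form derivation for the evaluation and learning settings.

```python
import numpy as np

def optimal_additive_baseline(w, r):
    # Sample-based estimate of the variance-minimising beta for the corrected
    # estimator mean(w * r - beta * (w - 1)): beta* = Cov(w * r, w) / Var(w).
    # Classical control-variate coefficient, shown here purely for illustration.
    cov_wr_w = np.cov(w * r, w)[0, 1]
    return cov_wr_w / np.var(w, ddof=1)

# Hypothetical usage on logged data (w: importance weights, r: rewards):
# beta_hat = optimal_additive_baseline(w, r)
# value_hat = np.mean(w * r - beta_hat * (w - 1.0))
```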

Download the paper here.

Code is available here.

Recommended citation:

Gupta, S., Jeunen, O., Oosterhuis, H., & de Rijke, M. (2024, October). Optimal Baseline Corrections for Off-Policy Contextual Bandits. In Proceedings of the 18th ACM Conference on Recommender Systems (RecSys ’24).