Document Type
Conference Proceeding
Publication Date
3-15-2020
Abstract
Offline evaluations of recommender systems attempt to estimate users’ satisfaction with recommendations using static data from prior user interactions. These evaluations provide researchers and developers with first approximations of the likely performance of a new system and help weed out bad ideas before presenting them to users. However, offline evaluation cannot accurately assess novel, relevant recommendations, because the most novel items were previously unknown to the user, so they are missing from the historical data and cannot be judged as relevant.
We present a simulation study to estimate the error that such missing data causes in commonly used evaluation metrics in order to assess its prevalence and impact. We find that missing data in the rating or observation process causes the evaluation protocol to systematically mis-estimate metric values, and in some cases erroneously determine that a popularity-based recommender outperforms even a perfect personalized recommender. Substantial breakthroughs in recommendation quality, therefore, will be difficult to assess with existing offline techniques.
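The sketch below is not the authors' simulation code (see the scripts linked under Comments); it is a minimal, illustrative Python example of the general effect the abstract describes. It assumes a stylized world in which each user truly likes a few broadly popular "head" items and a personal set of niche favourites, and in which the observation process is popularity-biased, so head likes usually appear in the historical data while niche likes rarely do. All item counts, probabilities, and names (e.g., `precision_at_k`, `favourites`) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

n_users, n_items, k = 2000, 1000, 10
head = np.arange(20)           # a small set of broadly appealing "popular" items
tail = np.arange(20, n_items)  # long-tail items from which personal favourites come

users = []
for _ in range(n_users):
    # Each user truly likes some of the popular head items...
    liked_head = head[rng.random(head.size) < 0.5]
    # ...and a personal set of niche favourites they would rate even higher.
    favourites = rng.choice(tail, size=20, replace=False)
    # Popularity-biased observation: head likes are usually logged, niche likes rarely.
    obs_head = liked_head[rng.random(liked_head.size) < 0.9]
    obs_fav = favourites[rng.random(favourites.size) < 0.1]
    users.append({
        "true_relevant": np.concatenate([liked_head, favourites]),
        "observed_relevant": np.concatenate([obs_head, obs_fav]),
        "favourites": favourites,
    })

# Popularity recommender: the k most frequently observed items across all users.
obs_counts = np.bincount(
    np.concatenate([u["observed_relevant"] for u in users]), minlength=n_items)
pop_recs = np.argsort(-obs_counts)[:k]

def precision_at_k(recs_fn, truth_key):
    """Mean precision@k, judging recommendations against the chosen relevance set."""
    hits = [np.intersect1d(recs_fn(u), u[truth_key]).size for u in users]
    return np.mean(hits) / k

def perfect(u):   # "perfect" personalized recommender: a user's top-k true favourites
    return u["favourites"][:k]

def popular(u):   # popularity baseline: same head items for everyone
    return pop_recs

for truth_key, label in [("observed_relevant", "offline protocol (observed judgments)"),
                         ("true_relevant", "oracle (true judgments)")]:
    print(label)
    print(f"  perfect personalized precision@{k}: {precision_at_k(perfect, truth_key):.3f}")
    print(f"  popularity baseline  precision@{k}: {precision_at_k(popular, truth_key):.3f}")
```

Under these assumptions, the offline protocol scores the popularity baseline well above the perfect personalized recommender (because the personalized recommendations are mostly missing from the observed data), while judging against the true relevance sets reverses the ranking; this is only a toy demonstration of the kind of mis-estimation the study quantifies.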
Copyright Statement
This is an author-produced, peer-reviewed version of this conference proceeding. The final, definitive version of this document can be found online in the proceedings of the 2020 Conference on Human Information Interaction and Retrieval (CHIIR ’20), published by the Association for Computing Machinery. Copyright restrictions may apply. doi: 10.1145/3343413.3378004
Publication Information
Tian, Mucun and Ekstrand, Michael D. (2020). "Estimating Error and Bias in Offline Evaluation Results". 2020 Conference on Human Information Interaction and Retrieval (CHIIR ’20), 392-396. https://dx.doi.org/10.1145/3343413.3378004
Comments
For the corresponding computer script, please see:
https://doi.org/10.18122/cs_scripts.8.boisestate