Idea
Recommender systems aim to mitigate the inherent choice overload problem in today’s digital world by providing personalized recommendations of items to users. These recommendations are computed from previous user behavior, that is, implicit feedback (e.g., items purchased, viewed, or listened to) or explicit feedback (e.g., ratings given to an item). To this end, recommender systems research has primarily focused on improving the prediction accuracy of recommendation algorithms. Recently, however, we have observed a shift towards more user-centric evaluation methods, as accuracy-driven development of recommender systems has not been able to capture all aspects relevant to a user’s satisfaction with a given system. Kaminskas and Bridge find that the field’s focus has broadened to also include a range of ‘‘beyond-accuracy’’ objectives in evaluation.
In this master’s thesis, we aim to perform a systematic literature review of evaluation methods for recommender systems. We are particularly interested in multi-method (or mixed-method) evaluation approaches that combine evaluation strategies to gain a deeper understanding of user satisfaction.