Recommender system

Recommender systems or recommendation systems (sometimes replacing "system" with a synonym such as platform or engine) are a subclass of information filtering system that seek to predict the 'rating' or 'preference' that a user would give to an item.[1][2]

Recommender systems have become extremely common in recent years and are applied in a wide variety of areas. The most popular applications are probably movies, music, news, books, research articles, search queries, social tags, and products in general. However, there are also recommender systems for experts,[3][4] collaborators,[5] jokes, restaurants, financial services,[6] life insurance, persons (online dating), and Twitter followers.[7]

Overview

Recommender systems typically produce a list of recommendations in one of two ways – through collaborative or content-based filtering.[8] Collaborative filtering approaches build a model from a user's past behavior (items previously purchased or selected and/or numerical ratings given to those items) as well as similar decisions made by other users. This model is then used to predict items (or ratings for items) that the user may have an interest in.[9] Content-based filtering approaches utilize a series of discrete characteristics of an item in order to recommend additional items with similar properties.[10] These approaches are often combined (see Hybrid recommender systems).

The differences between collaborative and content-based filtering can be demonstrated by comparing two popular music recommender systems – Last.fm and Pandora Radio. Last.fm creates a "station" of recommended music by observing which bands and individual tracks the user listens to and comparing those against the listening behavior of other users; it plays tracks that do not appear in the user's library but are often played by other users with similar interests (collaborative filtering). Pandora instead uses the properties of a song or artist (a subset of the 400 attributes provided by the Music Genome Project) to seed a station that plays music with similar properties; user feedback refines the station's results, deemphasizing certain attributes when a user "dislikes" a particular song and emphasizing other attributes when a user "likes" a song (content-based filtering).

Each type of system has its own strengths and weaknesses. In the above example, Last.fm requires a large amount of information on a user in order to make accurate recommendations. This is an example of the cold start problem, and is common in collaborative filtering systems.[11] While Pandora needs very little information to get started, it is far more limited in scope (for example, it can only make recommendations that are similar to the original seed).

Recommender systems are a useful alternative to search algorithms, since they help users discover items they might not have found on their own. Interestingly, recommender systems are often implemented using search engines that index non-traditional data.

Montaner provides the first overview of recommender systems, from an intelligent agents perspective.[12] Adomavicius provides a new overview of recommender systems.[13] Herlocker provides an additional overview of evaluation techniques for recommender systems,[14] and Beel et al. discuss the problems of offline evaluations.[15] Beel et al. also provide a literature survey on research paper recommender systems.[16]

Recommender systems are an active research topic in the data mining and machine learning fields. Conferences that address recommender system research include RecSys, SIGIR, and KDD.

Approaches

Collaborative filtering

One approach to the design of recommender systems that has wide use is collaborative filtering.[17] Collaborative filtering methods are based on collecting and analyzing a large amount of information on users' behaviors, activities or preferences, and predicting what users will like based on their similarity to other users. A key advantage of the collaborative filtering approach is that it does not rely on machine-analyzable content and is therefore capable of accurately recommending complex items such as movies without requiring an "understanding" of the item itself. Many algorithms have been used to measure user similarity or item similarity in recommender systems, for example the k-nearest neighbor (k-NN) approach[18] and the Pearson correlation, as first implemented by Allen.[19]
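As an illustrative sketch (not the implementation from [19]), the Pearson correlation between two users can be computed over the items both have rated; the user names, item ids, and ratings below are made up:

```python
import math

def pearson_similarity(ratings_a, ratings_b):
    """Pearson correlation between two users over their co-rated items.

    ratings_a, ratings_b: dicts mapping item id -> rating.
    Returns 0.0 when fewer than two items are co-rated (an illustrative choice).
    """
    common = set(ratings_a) & set(ratings_b)
    if len(common) < 2:
        return 0.0
    mean_a = sum(ratings_a[i] for i in common) / len(common)
    mean_b = sum(ratings_b[i] for i in common) / len(common)
    num = sum((ratings_a[i] - mean_a) * (ratings_b[i] - mean_b) for i in common)
    den_a = math.sqrt(sum((ratings_a[i] - mean_a) ** 2 for i in common))
    den_b = math.sqrt(sum((ratings_b[i] - mean_b) ** 2 for i in common))
    if den_a == 0 or den_b == 0:
        return 0.0
    return num / (den_a * den_b)

alice = {"m1": 5, "m2": 3, "m3": 4}
bob   = {"m1": 4, "m2": 2, "m3": 3}   # same preference pattern, shifted down
carol = {"m1": 1, "m2": 5, "m3": 2}   # opposite pattern
print(pearson_similarity(alice, bob))    # close to 1.0
print(pearson_similarity(alice, carol))  # negative
```

Because Pearson correlation mean-centers each user's ratings, two users with the same preference pattern score near 1 even when their rating scales differ, which is one reason it is often preferred over raw distance for user similarity.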

Collaborative filtering is based on the assumption that people who agreed in the past will agree in the future, and that they will like similar kinds of items as they liked in the past.

When building a model from a user's behavior, a distinction is often made between explicit and implicit forms of data collection.

Examples of explicit data collection include asking a user to rate an item on a sliding scale, to rank a collection of items from favorite to least favorite, to choose the better of two presented items, or to create a list of items that he or she likes.

Examples of implicit data collection include observing the items a user views in an online store, analyzing item/user viewing times,[20] keeping a record of the items a user purchases online, obtaining a list of items a user has listened to or watched, and analyzing the user's social network to discover similar likes and dislikes.

The recommender system compares the collected data to similar and dissimilar data collected from others and calculates a list of recommended items for the user. Several commercial and non-commercial examples are listed in the article on collaborative filtering systems.

One of the most famous examples of collaborative filtering is item-to-item collaborative filtering (people who buy x also buy y), an algorithm popularized by Amazon.com's recommender system.[21]
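The "people who buy x also buy y" idea can be sketched as cosine similarity between items over the sets of users who bought them. This is a minimal illustration with invented shopping baskets, not Amazon's actual algorithm:

```python
import math
from collections import defaultdict

def item_similarities(baskets):
    """Cosine similarity between items based on co-purchase counts.

    baskets: list of sets, each the set of items one user bought.
    Returns a dict mapping (item_x, item_y) -> similarity.
    """
    buyers = defaultdict(set)          # item -> set of user indices
    for user, basket in enumerate(baskets):
        for item in basket:
            buyers[item].add(user)
    sims = {}
    items = sorted(buyers)
    for x in items:
        for y in items:
            if x == y:
                continue
            overlap = len(buyers[x] & buyers[y])
            sims[(x, y)] = overlap / math.sqrt(len(buyers[x]) * len(buyers[y]))
    return sims

baskets = [{"book", "lamp"}, {"book", "lamp"}, {"book", "pen"}, {"pen"}]
sims = item_similarities(baskets)
# "lamp" is a stronger companion for "book" than "pen" is
print(sims[("book", "lamp")], sims[("book", "pen")])
```

Precomputing these item-to-item similarities offline is what makes the approach scale: at recommendation time the system only looks up neighbors of the items in a user's history.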

Collaborative filtering approaches often suffer from three problems: cold start, scalability, and sparsity.[22]

A particular type of collaborative filtering algorithm uses matrix factorization, a low-rank matrix approximation technique.[23][24][25]
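A minimal sketch of this idea: factor the observed ratings into user and item factor vectors with stochastic gradient descent, then predict missing entries via dot products. The data, rank, and hyperparameters below are illustrative:

```python
import random

def factorize(ratings, n_users, n_items, k=2, steps=2000, lr=0.01, reg=0.02):
    """Low-rank factorization of a sparse rating matrix via SGD.

    ratings: list of (user, item, rating) triples.
    Returns (P, Q): user and item factor matrices, so the predicted
    rating for (u, i) is the dot product of P[u] and Q[i].
    """
    random.seed(0)
    P = [[random.uniform(0, 0.5) for _ in range(k)] for _ in range(n_users)]
    Q = [[random.uniform(0, 0.5) for _ in range(k)] for _ in range(n_items)]
    for _ in range(steps):
        for u, i, r in ratings:
            pred = sum(P[u][f] * Q[i][f] for f in range(k))
            err = r - pred
            for f in range(k):
                pu, qi = P[u][f], Q[i][f]
                P[u][f] += lr * (err * qi - reg * pu)   # gradient step with
                Q[i][f] += lr * (err * pu - reg * qi)   # L2 regularization
    return P, Q

# 3 users x 3 items; the (2, 2) entry is unobserved and gets predicted.
ratings = [(0, 0, 5), (0, 1, 3), (0, 2, 4),
           (1, 0, 4), (1, 1, 2), (1, 2, 4),
           (2, 0, 1), (2, 1, 1)]
P, Q = factorize(ratings, n_users=3, n_items=3)
predicted = sum(P[2][f] * Q[2][f] for f in range(2))
print(round(predicted, 2))
```

Real systems (e.g. the Netflix Prize solutions) add per-user and per-item bias terms and tune the rank and regularization on held-out data.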

Collaborative filtering methods are classified as memory-based and model-based. A well-known example of a memory-based approach is the user-based algorithm,[26] while a well-known model-based approach is the Kernel-Mapping Recommender.[27]

Content-based filtering

Another common approach when designing recommender systems is content-based filtering. Content-based filtering methods are based on a description of the item and a profile of the user’s preference.[28] In a content-based recommender system, keywords are used to describe the items and a user profile is built to indicate the type of item this user likes. In other words, these algorithms try to recommend items that are similar to those that a user liked in the past (or is examining in the present). In particular, various candidate items are compared with items previously rated by the user and the best-matching items are recommended. This approach has its roots in information retrieval and information filtering research.

To abstract the features of the items in the system, an item presentation algorithm is applied. A widely used algorithm is the tf–idf representation (also called vector space representation).
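A minimal sketch of the tf–idf representation over a toy vocabulary (real systems apply tokenization, stemming, and smoothing on top of this; the item ids and tokens are invented):

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """tf-idf vector space representation of item descriptions.

    docs: dict mapping item id -> list of tokens.
    Returns a dict mapping item id -> {term: weight}.
    """
    n = len(docs)
    df = Counter()                       # document frequency per term
    for tokens in docs.values():
        df.update(set(tokens))
    vectors = {}
    for item, tokens in docs.items():
        tf = Counter(tokens)
        vectors[item] = {
            t: (count / len(tokens)) * math.log(n / df[t])
            for t, count in tf.items()
        }
    return vectors

docs = {
    "movie_a": ["space", "opera", "space", "battle"],
    "movie_b": ["romantic", "comedy", "battle"],
    "movie_c": ["space", "documentary"],
}
vecs = tfidf_vectors(docs)
# "opera" appears in only one document, so it outweighs the common term "space"
print(vecs["movie_a"]["opera"] > vecs["movie_a"]["space"])
```

The idf factor is what pushes distinctive terms up and ubiquitous terms down, which is exactly what a content-based recommender needs when matching item descriptions.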

To create a user profile, the system mostly focuses on two types of information: (1) a model of the user's preferences, and (2) a history of the user's interactions with the recommender system.

Basically, these methods use an item profile (i.e. a set of discrete attributes and features) characterizing the item within the system. The system creates a content-based profile of users based on a weighted vector of item features. The weights denote the importance of each feature to the user and can be computed from individually rated content vectors using a variety of techniques. Simple approaches use the average values of the rated item vectors, while more sophisticated methods use machine learning techniques such as Bayesian classifiers, cluster analysis, decision trees, and artificial neural networks to estimate the probability that the user will like the item.
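The simple averaging approach can be sketched as follows: the user profile is a rating-weighted sum of item feature vectors, and candidates are scored by cosine similarity. The features, the neutral-rating midpoint of 3, and the item names are all illustrative choices:

```python
import math

def build_profile(rated_items, ratings):
    """User profile as the rating-weighted sum of item feature vectors.

    rated_items: dict item -> feature vector (dict feature -> value).
    ratings: dict item -> rating on a 1-5 scale, centered at 3 so that
             disliked items push features negative (illustrative choice).
    """
    profile = {}
    for item, features in rated_items.items():
        weight = ratings[item] - 3          # center ratings around neutral
        for f, v in features.items():
            profile[f] = profile.get(f, 0.0) + weight * v
    return profile

def cosine(a, b):
    common = set(a) & set(b)
    num = sum(a[f] * b[f] for f in common)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return num / (na * nb) if na and nb else 0.0

rated = {
    "film1": {"action": 1.0, "scifi": 1.0},   # rated 5: liked
    "film2": {"romance": 1.0},                # rated 1: disliked
}
profile = build_profile(rated, {"film1": 5, "film2": 1})
candidates = {
    "film3": {"action": 1.0, "scifi": 0.5},
    "film4": {"romance": 1.0, "comedy": 0.5},
}
best = max(candidates, key=lambda c: cosine(profile, candidates[c]))
print(best)  # film3 matches the liked action/sci-fi features
```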

Direct feedback from a user, usually in the form of a like or dislike button, can be used to assign higher or lower weights on the importance of certain attributes (using Rocchio classification or other similar techniques).
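A Rocchio-style update can be sketched as moving the profile toward the mean of liked item vectors and away from the mean of disliked ones. The weights alpha, beta, and gamma below are conventional illustrative defaults, not prescribed values:

```python
def rocchio_update(profile, liked, disliked, alpha=1.0, beta=0.75, gamma=0.25):
    """Rocchio-style profile update from like/dislike feedback.

    profile, and each vector in liked/disliked, is a dict feature -> weight.
    New profile = alpha*profile + beta*mean(liked) - gamma*mean(disliked).
    """
    features = set(profile)
    for v in liked + disliked:
        features |= set(v)

    def mean(vectors, f):
        return sum(v.get(f, 0.0) for v in vectors) / len(vectors) if vectors else 0.0

    return {
        f: alpha * profile.get(f, 0.0)
           + beta * mean(liked, f)
           - gamma * mean(disliked, f)
        for f in features
    }

profile = {"jazz": 0.5}
updated = rocchio_update(profile,
                         liked=[{"jazz": 1.0, "blues": 1.0}],
                         disliked=[{"metal": 1.0}])
print(updated)  # jazz reinforced, blues introduced, metal pushed negative
```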

A key issue with content-based filtering is whether the system is able to learn user preferences from users' actions regarding one content source and use them across other content types. When the system is limited to recommending content of the same type as the user is already using, the value from the recommendation system is significantly less than when other content types from other services can be recommended. For example, recommending news articles based on browsing of news is useful, but would be much more useful when music, videos, products, discussions etc. from different services can be recommended based on news browsing.

As previously detailed, Pandora Radio is a popular example of a content-based recommender system that plays music with characteristics similar to those of a song provided by the user as an initial seed. There are also many content-based recommender systems aimed at providing movie recommendations; examples include Rotten Tomatoes, Internet Movie Database, Jinni, Rovi Corporation, Jaman and See This Next.

Hybrid recommender systems

Recent research has demonstrated that a hybrid approach, combining collaborative filtering and content-based filtering, can be more effective in some cases. Hybrid approaches can be implemented in several ways: by making content-based and collaborative-based predictions separately and then combining them; by adding content-based capabilities to a collaborative-based approach (and vice versa); or by unifying the approaches into one model (see [13] for a complete review of recommender systems). Several studies have empirically compared the performance of hybrid methods with pure collaborative and content-based methods and demonstrated that hybrid methods can provide more accurate recommendations than pure approaches. These methods can also be used to overcome some of the common problems in recommender systems, such as cold start and the sparsity problem.
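The first of these strategies, combining separately computed predictions, can be sketched as a simple linear blend; the weight is an illustrative choice that would normally be tuned on held-out data:

```python
def weighted_hybrid(cf_score, cb_score, weight_cf=0.6):
    """Weighted hybrid: linear blend of collaborative and content-based scores.

    weight_cf is an illustrative blending weight; in practice it is often
    tuned on held-out data or adapted per user (e.g. lowered for new users
    with few ratings, for whom collaborative scores are unreliable).
    """
    return weight_cf * cf_score + (1 - weight_cf) * cb_score

# For a brand-new user, trust the content-based score more.
print(weighted_hybrid(cf_score=0.2, cb_score=0.9, weight_cf=0.1))  # ≈ 0.83
```

Adapting the weight by user is one simple way a hybrid mitigates the cold start problem: the content-based side carries new users until enough ratings accumulate.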

Netflix is a good example of the use of hybrid recommender systems. They make recommendations by comparing the watching and searching habits of similar users (i.e. collaborative filtering) as well as by offering movies that share characteristics with films that a user has rated highly (content-based filtering).

A variety of techniques have been proposed as the basis for recommender systems: collaborative, content-based, knowledge-based, and demographic techniques. Each of these techniques has known shortcomings, such as the well known cold-start problem for collaborative and content-based systems (what to do with new users with few ratings) and the knowledge engineering bottleneck[29] in knowledge-based approaches. A hybrid recommender system is one that combines multiple techniques together to achieve some synergy between them.

The term hybrid recommender system is used here to describe any recommender system that combines multiple recommendation techniques together to produce its output. There is no reason why several techniques of the same type could not be hybridized; for example, two different content-based recommenders could work together, and a number of projects have investigated this type of hybrid: NewsDude, which uses both naive Bayes and kNN classifiers in its news recommendations, is just one example.[30]

Seven hybridization techniques have been identified:[30]

Weighted: the scores of several recommendation techniques are combined numerically.
Switching: the system chooses among recommendation techniques and applies the selected one.
Mixed: recommendations from several different recommenders are presented together.
Feature combination: features derived from different knowledge sources are combined and given to a single recommendation algorithm.
Feature augmentation: one recommendation technique is used to compute a feature or set of features, which is then part of the input to the next technique.
Cascade: recommenders are given strict priority, with the lower-priority ones breaking ties in the scoring of the higher ones.
Meta-level: one recommendation technique is applied and produces some sort of model, which is then the input used by the next technique.

Beyond accuracy

Typically, research on recommender systems is concerned with finding the most accurate recommendation algorithms. However, there are a number of other factors that are also important, such as diversity,[32] recommender persistence,[33] privacy,[39] user demographics,[40] trust,[43] and the labelling of recommendations.[44]

Mobile recommender systems

One growing area of research in recommender systems is mobile recommender systems. With the increasing ubiquity of internet-accessing smartphones, it is now possible to offer personalized, context-sensitive recommendations. This is a particularly difficult area of research, as mobile data is more complex than the data recommender systems usually deal with: it is heterogeneous and noisy, it exhibits spatial and temporal auto-correlation, and it raises validation and generality problems.[45] Additionally, mobile recommender systems suffer from a transplantation problem – recommendations may not apply in all regions (for instance, it would be unwise to recommend a recipe in an area where not all of the ingredients may be available).

One example of a mobile recommender system is one that offers potentially profitable driving routes for taxi drivers in a city.[45] This system takes as input data in the form of GPS traces of the routes that taxi drivers took while working, which include location (latitude and longitude), time stamps, and operational status (with or without passengers). It then recommends a list of pickup points along a route that will lead to optimal occupancy times and profits. This type of system is obviously location-dependent, and as it must operate on a handheld or embedded device, the computation and energy requirements must remain low.

Another example of mobile recommendation is the system that Bouneffouf et al. (2012) developed for professional users. It takes as input the user's GPS traces and agenda to suggest suitable information depending on the user's situation and interests. The system uses machine learning techniques and a reasoning process to adapt the mobile recommender dynamically to the evolution of the user's interests. The authors called their algorithm hybrid-ε-greedy.[46]

Mobile recommendation systems have also been successfully built using the Web of Data as a source of structured information. A good example of such a system is SMARTMUSEUM.[47] The system uses semantic modelling, information retrieval and machine learning techniques to recommend content matching the user's interests, even when the evidence of those interests is initially vague and based on heterogeneous information.

Risk-aware recommender systems

The majority of existing approaches to recommender systems focus on recommending the most relevant content to users using contextual information, and do not take into account the risk of disturbing the user in specific situations. However, in many applications, such as recommending personalized content, it is also important to consider the risk of upsetting the user by pushing recommendations in certain circumstances, for instance during a professional meeting, early in the morning, or late at night. Therefore, the performance of the recommender system depends in part on the degree to which it has incorporated this risk into the recommendation process.

Risk definition

"The risk in recommender systems is the possibility to disturb or to upset the user which leads to a bad answer of the user".[48]

In response to these challenges, the authors of [48] have developed a dynamic risk-sensitive recommendation system called DRARS (Dynamic Risk-Aware Recommender System), which models context-aware recommendation as a bandit problem. The system combines a content-based technique with a contextual bandit algorithm. The authors have shown that DRARS improves on the Upper Confidence Bound (UCB) policy, the best currently available algorithm, by computing an optimal exploration value that maintains a trade-off between exploration and exploitation based on the risk level of the user's current situation. They conducted experiments in an industrial context with real data and real users, and showed that taking into account the risk level of users' situations significantly increased the performance of the recommender system.
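The general idea, shrinking the exploration term of a UCB-style rule as the situation becomes riskier, can be sketched as follows. This is an illustrative toy, not the DRARS algorithm itself; the arm statistics and risk scale are invented:

```python
import math

def risk_aware_ucb(counts, means, total, risk_level, c=1.4):
    """Pick an arm with a UCB rule whose exploration term shrinks with risk.

    counts[i]: times arm i was played; means[i]: its average reward;
    total: total plays. risk_level in [0, 1]: 1 = a critical situation
    (e.g. a meeting), where exploration is scaled down so the system
    sticks to proven, safe recommendations.
    """
    exploration = c * (1.0 - risk_level)
    best, best_score = None, float("-inf")
    for i, n in enumerate(counts):
        if n == 0:
            return i                      # play untried arms first
        score = means[i] + exploration * math.sqrt(math.log(total) / n)
        if score > best_score:
            best, best_score = i, score
    return best

counts = [10, 2]
means = [0.5, 0.4]
# Low risk: the rarely tried arm 1 wins thanks to its exploration bonus.
print(risk_aware_ucb(counts, means, total=12, risk_level=0.0))
# High risk: exploration is suppressed and the proven arm 0 wins.
print(risk_aware_ucb(counts, means, total=12, risk_level=1.0))
```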

The Netflix Prize

Main article: Netflix Prize

One of the key events that energized research in recommender systems was the Netflix Prize. From 2006 to 2009, Netflix sponsored a competition, offering a grand prize of $1,000,000 to the team that could take an offered dataset of over 100 million movie ratings and return recommendations that were 10% more accurate than those offered by the company's existing recommender system. This competition energized the search for new and more accurate algorithms. On 21 September 2009, the grand prize of US$1,000,000 was given to the BellKor's Pragmatic Chaos team using tiebreaking rules.[49]

The most accurate algorithm in 2007 used an ensemble method of 107 different algorithmic approaches, blended into a single prediction:[50]

Predictive accuracy is substantially improved when blending multiple predictors. Our experience is that most efforts should be concentrated in deriving substantially different approaches, rather than refining a single technique. Consequently, our solution is an ensemble of many methods.

The Netflix project produced many benefits for the web. Some teams took their technology and applied it to other markets. Some members of the team that finished in second place founded Gravity R&D, a recommendation engine that is active in the RecSys community.[49][51] 4-Tell, Inc. created a Netflix-project-derived solution for ecommerce websites.

A second contest was planned, but was ultimately canceled in response to an ongoing lawsuit and concerns from the Federal Trade Commission.[38]

Performance measures

Evaluation is important for assessing the effectiveness of recommendation algorithms. Commonly used metrics are the mean squared error and root mean squared error; the latter was used in the Netflix Prize. Information retrieval metrics such as precision, recall and DCG are useful for assessing the quality of a recommendation method. Recently, diversity, novelty and coverage have also come to be considered important aspects of evaluation.[52] However, many of the classic evaluation measures are highly criticized,[53] and the results of so-called offline evaluations often do not correlate with actually assessed user satisfaction.[54] The authors of the latter study conclude that "we would suggest treating results of offline evaluations [i.e. classic performance measures] with skepticism".
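Two of the metrics mentioned above can be sketched in a few lines; the rating and recommendation lists are made up:

```python
import math

def rmse(predicted, actual):
    """Root mean squared error over parallel lists of ratings."""
    return math.sqrt(sum((p - a) ** 2 for p, a in zip(predicted, actual)) / len(actual))

def precision_at_k(recommended, relevant, k):
    """Fraction of the top-k recommended items that are actually relevant."""
    top_k = recommended[:k]
    return sum(1 for item in top_k if item in relevant) / k

print(rmse([4.0, 3.0, 5.0], [3.0, 3.0, 4.0]))                      # ≈ 0.816
print(precision_at_k(["a", "b", "c", "d"], {"a", "c", "e"}, k=3))  # ≈ 0.667
```

RMSE evaluates rating prediction, while precision-style metrics evaluate the ranked list a user actually sees; a system can do well on one and poorly on the other, which is part of the criticism of relying on a single offline number.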

Multi-criteria recommender systems

Multi-criteria recommender systems (MCRS) can be defined as recommender systems that incorporate preference information on multiple criteria. Instead of developing recommendation techniques based on a single criterion value (the overall preference of user u for item i), these systems try to predict a rating for unexplored items of u by exploiting preference information on multiple criteria that affect this overall preference value. Several researchers approach MCRS as a multi-criteria decision making (MCDM) problem, and apply MCDM methods and techniques to implement MCRS.[55]
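One simple way such systems combine criteria is an aggregation function, e.g. a weighted sum of per-criterion ratings; the criteria and weights below are illustrative (in practice the weights are often learned per user from past ratings):

```python
def aggregate_overall(criteria_ratings, weights):
    """Aggregation-based MCRS sketch: overall rating as a weighted sum of
    per-criterion ratings. The weights are illustrative; real systems
    typically learn them per user, e.g. by regression on past ratings."""
    return sum(weights[c] * r for c, r in criteria_ratings.items())

# Hotel example: a user who cares mostly about cleanliness.
weights = {"cleanliness": 0.6, "location": 0.3, "value": 0.1}
print(aggregate_overall({"cleanliness": 5, "location": 3, "value": 2}, weights))  # ≈ 4.1
```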

See also

References

  1. Francesco Ricci, Lior Rokach and Bracha Shapira, Introduction to Recommender Systems Handbook, Recommender Systems Handbook, Springer, 2011, pp. 1-35
  2. "Facebook, Pandora Lead Rise of Recommendation Engines - TIME". TIME.com. 27 May 2010. Retrieved 1 June 2015.
  3. Buettner, Ricardo (2014). A Framework for Recommender Systems in Online Social Network Recruiting: An Interdisciplinary Call to Arms. 47th Annual Hawaii International Conference on System Sciences. Big Island, Hawaii: IEEE. pp. 1415–1424. doi:10.13140/RG.2.1.2127.3048.
  4. H. Chen, A. G. Ororbia II, C. L. Giles ExpertSeer: a Keyphrase Based Expert Recommender for Digital Libraries, in arXiv preprint 2015
  5. H. Chen, L. Gou, X. Zhang, C. Giles Collabseer: a search engine for collaboration discovery, in ACM/IEEE Joint Conference on Digital Libraries (JCDL) 2011
  6. Alexander Felfernig, Klaus Isak, Kalman Szabo, Peter Zachar, The VITA Financial Services Sales Support Environment, in AAAI/IAAI 2007, pp. 1692-1699, Vancouver, Canada, 2007.
  7. Pankaj Gupta, Ashish Goel, Jimmy Lin, Aneesh Sharma, Dong Wang, and Reza Bosagh Zadeh, WTF: The who-to-follow system at Twitter, Proceedings of the 22nd international conference on World Wide Web
  8. Hosein Jafarkarimi; A.T.H. Sim and R. Saadatdoost A Naïve Recommendation Model for Large Databases, International Journal of Information and Education Technology, June 2012
  9. Prem Melville and Vikas Sindhwani, Recommender Systems, Encyclopedia of Machine Learning, 2010.
  10. R. J. Mooney and L. Roy (1999). Content-based book recommendation using learning for text categorization. In Workshop Recom. Sys.: Algo. and Evaluation.
  11. Andrew I. Schein, Alexandrin Popescul, Lyle H. Ungar, David M. Pennock (2002). Methods and Metrics for Cold-Start Recommendations. Proceedings of the 25th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 2002). New York City, New York: ACM. pp. 253–260. ISBN 1-58113-561-0. Retrieved 2008-02-02.
  12. Montaner, M.; Lopez, B.; de la Rosa, J. L. (June 2003). "A Taxonomy of Recommender Agents on the Internet". Artificial Intelligence Review 19 (4): 285–330. doi:10.1023/A:1022850703159.
  13. Adomavicius, G.; Tuzhilin, A. (June 2005). "Toward the Next Generation of Recommender Systems: A Survey of the State-of-the-Art and Possible Extensions". IEEE Transactions on Knowledge and Data Engineering 17 (6): 734–749. doi:10.1109/TKDE.2005.99.
  14. Herlocker, J. L.; Konstan, J. A.; Terveen, L. G.; Riedl, J. T. (January 2004). "Evaluating collaborative filtering recommender systems". ACM Trans. Inf. Syst. 22 (1): 5–53. doi:10.1145/963770.963772.
  15. Beel, J.; Langer, S.; Genzmehr, M.; Gipp, B. (October 2013). "A Comparative Analysis of Offline and Online Evaluations and Discussion of Research Paper Recommender System Evaluation" (PDF). Proceedings of the Workshop on Reproducibility and Replication in Recommender Systems Evaluation (RepSys) at the ACM Recommender System Conference (RecSys).
  16. Beel, J.; Langer, S.; Genzmehr, M.; Gipp, B. (October 2013). "Research Paper Recommender System Evaluation: A Quantitative Literature Survey" (PDF). Proceedings of the Workshop on Reproducibility and Replication in Recommender Systems Evaluation (RepSys) at the ACM Recommender System Conference (RecSys).
  17. John S. Breese, David Heckerman, and Carl Kadie (1998). Empirical analysis of predictive algorithms for collaborative filtering. In Proceedings of the Fourteenth conference on Uncertainty in artificial intelligence (UAI'98).
  18. Sarwar, B.; Karypis, G.; Konstan, J.; Riedl, J. (2000). "Application of Dimensionality Reduction in Recommender System – A Case Study".
  19. Allen, R.B. (1990). "User Models: Theory, Method, Practice". International J. Man-Machine Studies.
  20. Parsons, J.; Ralph, P.; Gallagher, K. (July 2004). "Using viewing time to infer user preference in recommender systems". AAAI Workshop in Semantic Web Personalization, San Jose, California.
  21. Collaborative Recommendations Using Item-to-Item Similarity Mappings
  22. Sanghack Lee and Jihoon Yang and Sung-Yong Park, Discovery of Hidden Similarity on Collaborative Filtering to Overcome Sparsity Problem, Discovery Science, 2007.
  23. I. Markovsky, Low-Rank Approximation: Algorithms, Implementation, Applications, Springer, 2012, ISBN 978-1-4471-2226-5
  24. Takács, G.; Pilászy, I.; Németh, B.; Tikk, D. (March 2009). "Scalable Collaborative Filtering Approaches for Large Recommender Systems" (PDF). Journal of Machine Learning Research 10: 623–656
  25. Rennie, J.; Srebro, N. (2005). Luc De Raedt, Stefan Wrobel, ed. Fast Maximum Margin Matrix Factorization for Collaborative Prediction (PDF). Proceedings of the 22nd Annual International Conference on Machine Learning. ACM Press.
  26. Breese, John S.; Heckerman, David; Kadie, Carl (1998). Empirical Analysis of Predictive Algorithms for Collaborative Filtering (PDF) (Report). Microsoft Research.
  27. "Kernel-Mapping Recommender system algorithms". Information Sciences 208: 81–104. doi:10.1016/j.ins.2012.04.012. Retrieved 1 June 2015.
  28. Peter, Brusilovsky (2007). The Adaptive Web. p. 325. ISBN 978-3-540-72078-2.
  29. Rinke Hoekstra, The Knowledge Reengineering Bottleneck, Semantic Web – Interoperability, Usability, Applicability 1 (2010) 1 ,IOS Press
  30. Robin Burke, Hybrid Web Recommender Systems, pp. 377-408, in The Adaptive Web, Peter Brusilovsky, Alfred Kobsa, Wolfgang Nejdl (Eds.), Lecture Notes in Computer Science, Vol. 4321, Springer-Verlag, Berlin, Germany, May 2007, ISBN 978-3-540-72078-2.
  31. Alexander Felfernig and Robin Burke. Constraint-based Recommender Systems: Technologies and Research Issues, Proceedings of the ACM International Conference on Electronic Commerce (ICEC'08), Innsbruck, Austria, Aug. 19-22, pp. 17-26, 2008.
  32. Ziegler, C.N., McNee, S.M., Konstan, J.A. and Lausen, G. (2005). "Improving recommendation lists through topic diversification". Proceedings of the 14th international conference on World Wide Web. pp. 22–32.
  33. Joeran Beel, Stefan Langer, Marcel Genzmehr, Andreas Nürnberger (September 2013). "Persistence in Recommender Systems: Giving the Same Recommendations to the Same Users Multiple Times". In Trond Aalberg and Milena Dobreva and Christos Papatheodorou and Giannis Tsakonas and Charles Farrugia. Proceedings of the 17th International Conference on Theory and Practice of Digital Libraries (TPDL 2013) (PDF). Lecture Notes of Computer Science (LNCS) 8092. Springer. pp. 390–394. Retrieved 1 November 2013.
  34. Cosley, D.; Lam, S.K.; Albert, I.; Konstan, J.A.; Riedl, J. (2003). "Is seeing believing?: how recommender system interfaces affect users' opinions". Proceedings of the SIGCHI conference on Human factors in computing systems. pp. 585–592.
  35. Pu, P.; Chen, L.; Hu, R. (2012). "Evaluating recommender systems from the user's perspective: survey of the state of the art". User Modeling and User-Adapted Interaction (Springer): 1–39.
  36. Rise of the Netflix Hackers Archived January 24, 2012, at the Wayback Machine.
  37. "Netflix Spilled Your Brokeback Mountain Secret, Lawsuit Claims". WIRED. 17 December 2009. Retrieved 1 June 2015.
  38. "Netflix Prize Update". Netflix Prize Forum. 2010-03-12.
  39. Naren Ramakrishnan, Benjamin J. Keller, Batul J. Mirza, Ananth Y. Grama, George Karypis (2001). "Privacy Risks in Recommender Systems". IEEE Internet Computing (Piscataway, NJ: IEEE Educational Activities Department) 5 (6): 54–62. doi:10.1109/4236.968832. ISBN 1-58113-561-0.
  40. Joeran Beel, Stefan Langer, Andreas Nürnberger, Marcel Genzmehr (September 2013). "The Impact of Demographics (Age and Gender) and Other User Characteristics on Evaluating Recommender Systems". In Trond Aalberg and Milena Dobreva and Christos Papatheodorou and Giannis Tsakonas and Charles Farrugia. Proceedings of the 17th International Conference on Theory and Practice of Digital Libraries (TPDL 2013) (PDF). Springer. pp. 400–404. Retrieved 1 November 2013.
  41. Konstan, J.A.; Riedl, J. (2012). "Recommender systems: from algorithms to user experience". User Modeling and User-Adapted Interaction (Springer): 1–23.
  42. Ricci, F.; Rokach, L.; Shapira, B.; Kantor, P.B. (2011). "Recommender systems handbook". Recommender Systems Handbook (Springer): 1–35.
  43. Montaner, Miquel; López, Beatriz; de la Rosa, Josep Lluís (2002). "Developing trust in recommender agents". Proceedings of the first international joint conference on Autonomous agents and multiagent systems: part 1. pp. 304–305.
  44. Beel, Joeran, Langer, Stefan, Genzmehr, Marcel (September 2013). "Sponsored vs. Organic (Research Paper) Recommendations and the Impact of Labeling". In Trond Aalberg and Milena Dobreva and Christos Papatheodorou and Giannis Tsakonas and Charles Farrugia. Proceedings of the 17th International Conference on Theory and Practice of Digital Libraries (TPDL 2013) (PDF). pp. 395–399. Retrieved 2 December 2013.
  45. Yong Ge, Hui Xiong, Alexander Tuzhilin, Keli Xiao, Marco Gruteser, Michael J. Pazzani (2010). An Energy-Efficient Mobile Recommender System (PDF). Proceedings of the 16th ACM SIGKDD Int'l Conf. on Knowledge Discovery and Data Mining. New York City, New York: ACM. pp. 899–908. Retrieved 2011-11-17.
  46. Bouneffouf, Djallel (2012), "Following the User's Interests in Mobile Context-Aware Recommender Systems: The Hybrid-e-greedy Algorithm", Proceedings of the 2012 26th International Conference on Advanced Information Networking and Applications Workshops (PDF), Lecture Notes in Computer Science, IEEE Computer Society, pp. 657–662, ISBN 978-0-7695-4652-0, archived from the original (PDF) on May 14, 2014
  47. Tuukka Ruotsalo, Krister Haav, Antony Stoyanov, Sylvain Roche, Elena Fani, Romina Deliai, Eetu Mäkelä, Tomi Kauppinen, Eero Hyvönen (2013). "SMARTMUSEUM: A Mobile Recommender System for the Web of Data". Web Semantics: Science, Services and Agents on the World Wide Web (Elsevier) 20: 657–662. doi:10.1016/j.websem.2013.03.001.
  48. Bouneffouf, Djallel (2013), DRARS, A Dynamic Risk-Aware Recommender System (Ph.D.), Institut National des Télécommunications
  49. Lohr, Steve. "A $1 Million Research Bargain for Netflix, and Maybe a Model for Others". The New York Times.
  50. R. Bell, Y. Koren, C. Volinsky (2007). "The BellKor solution to the Netflix Prize" (PDF).
  51. Bodoky, Thomas. "Mátrixfaktorizáció one million dollars". Index.
  52. Lathia, N., Hailes, S., Capra, L., Amatriain, X.: Temporal diversity in recommender systems. In: Proceeding of the 33rd International ACMSIGIR Conference on Research and Development in Information Retrieval, SIGIR 2010, pp. 210–217. ACM, New York
  53. Turpin, Andrew H, Hersh, William (2001). "Why batch and user evaluations do not give the same results". Proceedings of the 24th annual international ACM SIGIR conference on Research and development in information retrieval. pp. 225–231.
  54. Beel, Joeran; Genzmehr, Marcel; Langer, Stefan; Nürnberger, Andreas; Gipp, Bela (2013-01-01). "A Comparative Analysis of Offline and Online Evaluations and Discussion of Research Paper Recommender System Evaluation". Proceedings of the International Workshop on Reproducibility and Replication in Recommender Systems Evaluation. RepSys '13 (New York, NY, USA: ACM): 7–14. doi:10.1145/2532508.2532511. ISBN 9781450324656.
  55. Lakiotaki, K.; Matsatsinis; Tsoukias, A. "Multicriteria User Modeling in Recommender Systems". IEEE Intelligent Systems 26 (2): 64–76. doi:10.1109/mis.2011.33.

Further reading

Books

Kim Falk (2015). Practical Recommender Systems. ISBN 9781617292705.

Scientific articles

External links

This article is issued from Wikipedia (version of Sunday, April 10, 2016). The text is available under the Creative Commons Attribution/Share Alike license, but additional terms may apply for the media files.