
Tuesday, January 31, 2017

Week 5: One-Class Collaborative Filtering

Title: One-Class Collaborative Filtering, Rong Pan, Yunhong Zhou, Bin Cao, Nathan N. Liu, Rajan Lukose, Martin Scholz, Qiang Yang, ICDM 2008.

Background: On sites such as Amazon and Netflix, users can express their preferences by explicitly providing ratings on a numeric scale. In many other applications, however, we can only observe implicit user behaviors such as clicking, purchasing, or bookmarking. This creates ambiguity because we cannot interpret the true meaning of the missing (negative) examples: not clicking does not necessarily indicate a lack of interest; the user may simply be unaware that the item exists. Collaborative filtering with only positive examples and ambiguous negative examples is called One-Class Collaborative Filtering (OCCF), and this paper aims to address it.

Methods: Two intuitive strategies are "all missing as negative" (AMAN) and "all missing as unknown" (AMAU). AMAN treats all missing data as negative examples, which biases the recommendations because some missing entries may actually be positive. AMAU treats all missing data as unknown, which leads to a trivial solution in which every missing value is predicted as positive.

To balance the two extremes and improve recommendation performance, this paper proposes two frameworks: (1) weighted low-rank approximation and (2) negative example sampling. The main idea of weighted low-rank approximation is to approximate the user-item matrix with a low-rank matrix, assigning different weights according to our confidence in the positive and so-called "negative" examples. To do so, the paper presents a weighted alternating least squares (wALS) method. As shown in Figure 1, its objective function is a weighted Frobenius loss with regularization to avoid overfitting, where R is the user-item matrix, U and V are the low-rank factor matrices, and W is the weighting matrix. The wALS algorithm (Figure 2) alternately fixes one of U and V and solves for the other.
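Since the figures are not reproduced in this post, here is a hedged reconstruction of the objective from the description above (a standard weighted Frobenius loss; the exact placement of the regularization term in the paper may differ slightly):

    L(U, V) = \sum_{i,j} W_{ij} \left( R_{ij} - U_{i\cdot} V_{j\cdot}^{T} \right)^{2} + \lambda \left( \lVert U \rVert_{F}^{2} + \lVert V \rVert_{F}^{2} \right)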

Figure 1. Objective function of wALS.
Figure 2. Algorithm of wALS.
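To make the alternating updates concrete, the following is a minimal NumPy sketch (the function name and defaults are illustrative, not from the paper). Each update solves the weighted ridge problem obtained by setting the gradient of the objective above to zero, with the regularization scaled by the row's total weight:

import numpy as np

def wals(R, W, k=10, lam=0.1, n_iters=20, seed=0):
    """Minimal wALS sketch. R: (m, n) binary user-item matrix;
    W: (m, n) nonnegative weight matrix."""
    rng = np.random.default_rng(seed)
    m, n = R.shape
    U = rng.normal(scale=0.01, size=(m, k))
    V = rng.normal(scale=0.01, size=(n, k))
    I_k = np.eye(k)
    for _ in range(n_iters):
        # Fix V; solve a weighted ridge regression for each user row U[i].
        for i in range(m):
            Wi = W[i]
            A = (V * Wi[:, None]).T @ V + (lam * Wi.sum() + 1e-8) * I_k
            U[i] = np.linalg.solve(A, (V * Wi[:, None]).T @ R[i])
        # Fix U; symmetric update for each item row V[j].
        for j in range(n):
            Wj = W[:, j]
            A = (U * Wj[:, None]).T @ U + (lam * Wj.sum() + 1e-8) * I_k
            V[j] = np.linalg.solve(A, (U * Wj[:, None]).T @ R[:, j])
    return U, V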
They provide three weighting schemes, sketched in code below. The first is uniform weighting, which assumes that every missing entry has the same chance of being negative and assigns all of them one weight delta in [0,1]. The second is user-oriented weighting, which assumes that if a user has more positive ratings, his or her missing entries are more likely to be true negatives. The third is item-oriented weighting, which posits that if an item is positively rated less often, its missing entries represent negatives with higher probability.
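A sketch of how the three weighting matrices could be built, assuming a binary R where observed positives keep weight 1; the scalar delta and the normalizations are illustrative knobs rather than the paper's exact constants:

import numpy as np

def make_weights(R, scheme="uniform", delta=0.1):
    """Build W for wALS; positives keep weight 1, missing entries are
    down-weighted according to the chosen scheme (illustrative scaling)."""
    m, n = R.shape
    W = np.ones((m, n))
    missing = (R == 0)
    if scheme == "uniform":
        # Every missing entry is equally likely to be negative.
        W[missing] = delta
    elif scheme == "user":
        # More positives from a user -> her missing entries more likely negative.
        user_pos = R.sum(axis=1, keepdims=True)           # shape (m, 1)
        W = np.where(missing, delta * user_pos / max(user_pos.max(), 1), W)
    elif scheme == "item":
        # Fewer positives on an item -> its missing entries more likely negative.
        item_pos = R.sum(axis=0, keepdims=True)           # shape (1, n)
        W = np.where(missing, delta * (m - item_pos) / m, W)
    return W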

wALS has two disadvantages: high computational cost and imbalanced classes. The authors therefore incorporate a negative example sampling approach to shrink the training set. The framework is illustrated in Figure 3. In phase I, sets of "negative" examples are sampled from the missing data, generating a series of reduced matrices much smaller than the original. The sampling strategies are uniform random sampling, user-oriented sampling, and item-oriented sampling. In phase II, wALS approximates each reduced matrix, and the resulting models are combined to obtain the final approximation. This strategy is referred to as the sampling ALS ensemble (sALS-ENS).
Figure 3. Ensemble of negative sampling based wALS for OCCF.
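A compact sketch of the two-phase procedure, reusing the wals helper above. Only uniform sampling is shown (the user- and item-oriented variants would sample missing entries with non-uniform probabilities), and the simple averaging of predictions in phase II is an assumption about the combination step:

import numpy as np

def sals_ens(R, n_models=5, neg_ratio=1.0, k=10, seed=0):
    """Phase I: sample 'negatives' from missing entries; Phase II: fit
    wALS on each sampled training set and average the predictions."""
    rng = np.random.default_rng(seed)
    pos_i, pos_j = np.nonzero(R)
    miss_i, miss_j = np.nonzero(R == 0)
    n_neg = min(int(neg_ratio * len(pos_i)), len(miss_i))
    preds = np.zeros(R.shape)
    for t in range(n_models):
        idx = rng.choice(len(miss_i), size=n_neg, replace=False)
        W = np.zeros(R.shape)
        W[pos_i, pos_j] = 1.0                  # keep all positive examples
        W[miss_i[idx], miss_j[idx]] = 1.0      # sampled "negative" examples
        # Zero weight excludes the unsampled missing entries from training,
        # mimicking the paper's reduced matrices.
        U, V = wals(R, W, k=k, seed=seed + t)
        preds += U @ V.T
    return preds / n_models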

Experiments: Two datasets are used: a Yahoo! News click dataset and a social bookmarking dataset. Mean average precision (MAP) and half-life utility (HLU) are the evaluation measures. Baseline methods cover both extremes: for AMAN, traditional CF approaches such as ALS (ALS-AMAN), SVD, and user-based and item-based similarity algorithms; for AMAU, ranking items by popularity and a one-class classification method. Results show that the sALS-ENS and user-oriented wALS approaches consistently outperform the competing approaches.
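For reference, a hedged sketch of the two evaluation measures on a single user's ranked list (the half-life parameter alpha = 5 and the binary-relevance encoding are illustrative defaults, not the paper's exact setup):

import numpy as np

def average_precision(ranked_relevance):
    """AP for one user; ranked_relevance is a 0/1 array in ranked order."""
    rel = np.asarray(ranked_relevance, dtype=float)
    if rel.sum() == 0:
        return 0.0
    precision_at_hit = np.cumsum(rel) / np.arange(1, len(rel) + 1)
    return float((precision_at_hit * rel).sum() / rel.sum())

def half_life_utility(ranked_relevance, alpha=5.0):
    """HLU for one user; a hit at rank r contributes 2^(-(r-1)/(alpha-1))."""
    rel = np.asarray(ranked_relevance, dtype=float)
    ranks = np.arange(1, len(rel) + 1)
    return float((rel / 2 ** ((ranks - 1) / (alpha - 1))).sum())

# MAP is the mean of average_precision over all users; HLU is typically
# reported normalized by each user's maximum achievable utility.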

Figure 4. Performance of different methods as a function of the ratio of sampled negative examples to positive examples.

