In this lesson, we will continue learning about collaborative filtering from the previous lesson. I will present some simple code, implementing and explaining it as we go. These are the contents of this video. First, I want to talk about CF considering the user bias. Next, I want to talk about CF performance improvement. Then we can talk about user-based CF versus item-based CF. Lastly, let's talk about the performance metrics of recommender systems (RSs).

First, let's talk about CF considering the user bias. Here is our users' rating data. There are four different people here, and one gives four stars while another gives one star for the same movies. We can see each user's rating tendency, that is, the user bias. Without considering the user bias, we simply predict the rating of user a for item i as a similarity-weighted average:

p(a, i) = Σ_u w(a, u) · r(u, i) / Σ_u |w(a, u)|,

where a is the target user, u ranges over the neighbors, and n is the number of neighbors. Here p(a, i) stands for the expected rating of user a for item i, w(a, u) stands for the similarity between users a and u, and r(u, i) stands for the rating of user u for item i. To take the bias into account, we update the prediction with each user's average rating:

p(a, i) = r̄(a) + Σ_u w(a, u) · (r(u, i) − r̄(u)) / Σ_u |w(a, u)|,

where r̄(a) is the overall rating average of user a and r̄(u) is the overall rating average of user u.

We can implement CF considering the user bias like this. First, we need to calculate each user's rating mean and the rating bias, that is, the deviation of each rating from that mean. Then the CF_knn_bias method looks like this. First, we check that the movie exists in the rating matrix; that part is very similar to what we have implemented before. The next part handles the case when the neighbor size is not specified, that is, when neighbor_size equals 0: we calculate the predicted value as a weighted average of the bias-adjusted ratings and add the mean rating of the current user back to the prediction. When the neighbor size is specified, we make a prediction only if at least two users have rated the movie. We take the length of the similarity scores, convert them to arrays so that we can use argsort, get the indices of the most similar users, and then take the ratings of the top neighbors matching the neighbor size.
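As a rough sketch of the bias-adjusted KNN prediction just described (the toy data, the cosine-similarity step, and all names such as `cf_knn_bias` and `rating_bias` are my own illustration, not the lecture's exact code):

```python
import numpy as np
import pandas as pd
from numpy.linalg import norm

# Toy user-item rating matrix (NaN = not rated); values are illustrative.
rating_matrix = pd.DataFrame(
    [[4, 5, np.nan, 3],
     [1, 2, 1, np.nan],
     [4, np.nan, 5, 4],
     [np.nan, 1, 2, 1]],
    index=list("ABCD"), columns=[1, 2, 3, 4])

rating_mean = rating_matrix.mean(axis=1)              # each user's average rating
rating_bias = rating_matrix.sub(rating_mean, axis=0)  # deviation from the user's mean

# Cosine similarity between users, computed on the bias matrix (NaN treated as 0).
filled = rating_bias.fillna(0).values
sims = filled @ filled.T / (norm(filled, axis=1)[:, None] * norm(filled, axis=1)[None, :])
user_similarity = pd.DataFrame(sims, index=rating_matrix.index,
                               columns=rating_matrix.index)

def cf_knn_bias(user, item, neighbor_size=0):
    """Predict `user`'s rating for `item`, correcting for each user's bias."""
    if item not in rating_bias.columns:
        return rating_mean[user]                      # fall back to the user's mean
    sim_scores = user_similarity[user].drop(user)
    item_bias = rating_bias[item].drop(user)
    # Keep only neighbors who actually rated the item.
    rated = item_bias.dropna().index
    sim_scores, item_bias = sim_scores[rated], item_bias[rated]
    if neighbor_size > 0 and len(sim_scores) > 1:
        # argsort sorts ascending, so the most similar users are at the tail.
        order = np.argsort(sim_scores.values)
        top = order[-neighbor_size:]
        sim_scores, item_bias = sim_scores.iloc[top], item_bias.iloc[top]
    if sim_scores.abs().sum() == 0:
        return rating_mean[user]
    # Weighted average of bias-adjusted ratings, plus the user's own mean.
    return rating_mean[user] + (sim_scores * item_bias).sum() / sim_scores.abs().sum()

print(cf_knn_bias("A", 3, neighbor_size=2))
```

On real data the same loop is run over every (user, item) pair in a test set and the predictions are scored with RMSE.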
At the end of this method, we return the prediction. Now, let's check the performance of the KNN-bias model with a K value of 30: the RMSE is less than 1; more precisely, it is about 0.94. Let's improve the performance of this model further. One idea is significance weighting. A similarity is more significant when the two users have rated many items in common. However, if this count is used directly as a weight, the RMSE value changes significantly. Therefore, we instead use significance as a threshold for filtering neighbors. In this example, we build two binary rating matrices, with 1 where a rating exists and 0 otherwise, and calculate the co-rating counts from their product. To implement KNN-bias CF with significance weighting, we follow the same manner as before. The difference is the significance value, and we need to define the significance level as a constant. Then we can calculate the similarity scores with all of these ideas, in the same way as we have done, and implement the rest in the same manner as before. Finally, the method returns the prediction value. We set the significance level to 3 and the minimum number of ratings to 2. Let's look at the result of KNN-bias CF with significance weighting and a K value of 30: it is about 0.94.

User-based CF and item-based CF are different implementations of collaborative filtering. The difference is whether the criterion for calculating the similarity is a user or an item. User-based collaborative filtering finds neighboring users with similar tastes and recommends items those users rated well. In contrast, item-based CF calculates the similarity between items to predict the user's rating for a specific item. In this example, we have a full matrix like this, with items from 1 to 4 and users from A to D. The green lines show a comparison between two users for user-based CF. With item-based CF, we compare two items across the users.
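The co-rating count and significance threshold described above can be sketched like this (the toy matrix and the names `rating_binary1`, `counts`, and `SIG_LEVEL` are illustrative assumptions):

```python
import numpy as np
import pandas as pd

# Toy rating matrix where 0 means "not rated"; values are illustrative.
rating_matrix = pd.DataFrame(
    [[4, 5, 0, 3, 4],
     [1, 2, 1, 0, 2],
     [4, 0, 5, 4, 5],
     [0, 1, 2, 1, 0]],
    index=list("ABCD"), columns=range(1, 6))

# Binary "has rated" matrix; multiplying it by its transpose counts,
# for every user pair, how many items both users have rated.
rating_binary1 = (rating_matrix > 0).astype(float).values
rating_binary2 = rating_binary1.T
counts = pd.DataFrame(rating_binary1 @ rating_binary2,
                      index=rating_matrix.index, columns=rating_matrix.index)

SIG_LEVEL = 3  # neighbors must share at least this many co-rated items

def significant_neighbors(user):
    """Return the neighbors of `user` whose co-rating count meets SIG_LEVEL."""
    common = counts[user].drop(user)
    return common[common >= SIG_LEVEL].index.tolist()

print(significant_neighbors("A"))
```

Filtering by a count threshold like this keeps the similarity weights themselves unchanged, which is why it avoids the RMSE degradation that using the raw count as a weight would cause.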
Here is a simple implementation of these two collaborative filtering methods. The first part, the pre-processing, is done in the same way, and then we implement item-based CF. We check with a condition whether the current movie is in the training set or not. Then we get the rating values and the indexes of the movies from the training set, and remove the missing values. If the movie is not in the training set, we set a default rating value of 3. The method returns the mean rating. Let's look at the result of this method: it is about 1.01.

Now let's talk about the performance metrics of RSs. For continuous values, as I said, we can use RMSE. It is based on the MSE, the mean squared error. You can also use other similar metrics. One is MAD, the mean absolute deviation, which has a very simple equation. The other is MSE itself; as I said, MSE is very similar to RMSE. These are metrics based on the errors between the predicted values and the true values. The confusion matrix can also be used as an index for RSs. There are four kinds of situations: true positive, false positive, false negative, and true negative. We can build the confusion matrix from whether the system recommended an item or not, and the ground truth of whether the user was satisfied or not. The system is good when it recommends items in the satisfied situation, and also when it does not recommend items in the not-satisfied situation. With the ground truth and the predictions, we can easily calculate all of the confusion matrix values. There are four representative indexes. One is accuracy, the proportion of correct decisions, (TP + TN) / total. Precision is about how precise the model's positive predictions are, TP / (TP + FP). Recall is about how many of the actual positives the model can find, TP / (TP + FN). The F1 measure is the harmonic mean of precision and recall.

I can sum up with this slide. We have considered the user bias in collaborative filtering, and we have implemented some performance improvements for CF.
I have explained the difference between user-based CF and item-based CF. Lastly, I introduced some performance metrics for RSs. Thank you.
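To make the metrics discussed above concrete, here is a minimal sketch of RMSE, MAD/MAE, and the confusion-matrix indexes (the function names and the toy labels are my own, not the lecture's code):

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root mean squared error between true and predicted ratings."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

def mad(y_true, y_pred):
    """Mean absolute deviation (also called mean absolute error)."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return np.mean(np.abs(y_true - y_pred))

def classification_metrics(recommended, satisfied):
    """Accuracy, precision, recall, and F1 from binary recommend/satisfy labels."""
    recommended, satisfied = np.asarray(recommended), np.asarray(satisfied)
    tp = np.sum((recommended == 1) & (satisfied == 1))  # recommended and satisfied
    fp = np.sum((recommended == 1) & (satisfied == 0))  # recommended, not satisfied
    fn = np.sum((recommended == 0) & (satisfied == 1))  # missed a satisfied item
    tn = np.sum((recommended == 0) & (satisfied == 0))  # correctly not recommended
    accuracy = (tp + tn) / len(recommended)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1

# Tiny hypothetical example labels for illustration.
acc, prec, rec, f1 = classification_metrics(
    recommended=[1, 1, 0, 0, 1, 0], satisfied=[1, 0, 0, 1, 1, 0])
print(acc, prec, rec, f1)
```

RMSE and MAD score the continuous rating predictions, while the confusion-matrix indexes score the binary recommend/do-not-recommend decision.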