Stronger Privacy for Federated Collaborative Filtering With Implicit Feedback

Lorenzo Minto (Brave Software), Moritz Haller (Brave Software), Hamed Haddadi (Brave Software), Benjamin Livshits (Brave Software, Imperial College London) | Machine Learning, Privacy

Recommender systems are commonly trained on centrally-collected user interaction data like views or clicks. This practice, however, raises serious privacy concerns regarding the recommender's collection and handling of potentially sensitive data. Several privacy-aware recommender systems have been proposed in recent literature, but comparatively little attention has been given to systems at the intersection of implicit feedback and privacy. To address this shortcoming, we propose a practical federated recommender system for implicit data under user-level local differential privacy (LDP). The privacy-utility trade-off is controlled by parameters ε and k, regulating the per-update privacy budget and the number of ε-LDP gradient updates sent by each user, respectively. To further protect the user's privacy, we introduce a proxy network to reduce the fingerprinting surface by anonymizing and shuffling the reports before forwarding them to the recommender. We empirically demonstrate the effectiveness of our framework on the MovieLens dataset, achieving a Hit Ratio at 10 (HR@10) of up to 0.68 on 50,000 users with 5,000 items. Even on the full dataset, we show that it is possible to achieve reasonable utility with HR@10 > 0.5 without compromising user privacy.
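To illustrate the kind of mechanism the abstract describes, the sketch below shows one common way a client could produce an ε-LDP gradient report and cap its total contribution at k reports. This is an assumption-laden illustration, not the paper's exact construction: the function names (`ldp_gradient_report`, `user_reports`), the L1 clipping bound `clip_norm`, and the use of the Laplace mechanism are all illustrative choices made here.

```python
import numpy as np


def ldp_gradient_report(grad, clip_norm, epsilon, rng=None):
    """Return an epsilon-LDP perturbed copy of one gradient update.

    Sketch only: clip the update to a bounded L1 norm, then add Laplace
    noise calibrated to the worst-case L1 distance between any two
    clipped updates (2 * clip_norm). The paper's mechanism may differ.
    """
    rng = rng or np.random.default_rng()
    # Bound any single update's contribution.
    l1 = np.abs(grad).sum()
    if l1 > clip_norm:
        grad = grad * (clip_norm / l1)
    # Laplace noise with per-coordinate scale 2 * clip_norm / epsilon.
    noise = rng.laplace(0.0, 2.0 * clip_norm / epsilon, size=grad.shape)
    return grad + noise


def user_reports(gradients, k, clip_norm, epsilon):
    """Cap the user at k reports; by sequential composition the total
    user-level privacy budget is then bounded by k * epsilon."""
    return [ldp_gradient_report(g, clip_norm, epsilon) for g in gradients[:k]]
```

In such a setup, each noised report would be sent through the proxy network described above, so the recommender receives anonymized, shuffled updates rather than attributable per-user gradients.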

View paper
