Academic article

Fighting Filterbubbles with Adversarial Training
Document Type
Conference
Source
Proceedings of the 2nd International Workshop on Fairness, Accountability, Transparency and Ethics in Multimedia, pp. 20–22.
Subject
adversarial learning
natural language processing
news recommendation
representation learning
Language
English
Abstract
Recommender engines play a role in the emergence and reinforcement of filter bubbles. When such a system learns that a user prefers content from a particular site, the user becomes less likely to be exposed to different sources or opinions and, ultimately, more likely to develop extremist tendencies. We trace the roots of this phenomenon to the way the recommender engine represents news articles: the vector features modern systems extract from the plain text of news articles are already highly predictive of the associated news outlet. We propose a new training scheme based on adversarial machine learning to tackle this issue. Our preliminary experiments show that the features extracted this way are significantly less predictive of the news outlet and thus offer a way to reduce the risk of new filter bubbles forming.
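The adversarial training scheme the abstract alludes to can be illustrated with a minimal sketch. The paper's actual method operates on a neural text encoder; the toy NumPy version below (synthetic two-dimensional "features", a linear adversary, all names and step sizes hypothetical) only demonstrates the core loop: alternately fit an adversary that predicts the news outlet from the features, then update the features against the adversary's gradient so the outlet becomes hard to recover.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for article features: column 0 leaks the outlet identity,
# column 1 carries outlet-independent signal. (Illustrative data only,
# not the paper's corpus or model.)
n = 200
outlet = rng.integers(0, 2, size=n)        # binary "news outlet" label
X = rng.normal(size=(n, 2))
X[:, 0] += 3.0 * outlet                    # the leak the adversary can exploit

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_adversary(F, y, steps=300, lr=0.5):
    """Fit a logistic-regression adversary predicting the outlet y from features F."""
    w, b = np.zeros(F.shape[1]), 0.0
    for _ in range(steps):
        p = sigmoid(F @ w + b)
        g = p - y                          # dL/dz for the cross-entropy loss
        w -= lr * F.T @ g / len(y)
        b -= lr * g.mean()
    return w, b

def adversary_accuracy(F, y, w, b):
    return float(((sigmoid(F @ w + b) > 0.5) == y).mean())

w, b = train_adversary(X, outlet)
acc_before = adversary_accuracy(X, outlet, w, b)   # high: features betray the outlet

# Adversarial updates: repeatedly refit the adversary, then move the features
# a small step *up* its loss gradient, erasing whatever signal it relies on.
Z = X.copy()
for _ in range(200):
    w, b = train_adversary(Z, outlet)
    p = sigmoid(Z @ w + b)
    Z += 0.3 * np.outer(p - outlet, w)     # gradient ascent on the adversary's loss

w, b = train_adversary(Z, outlet)
acc_after = adversary_accuracy(Z, outlet, w, b)    # much closer to chance
```

In the paper's setting the same adversarial signal would be backpropagated into the text encoder rather than applied to fixed feature vectors, but the measure of success is the same: after training, an adversary can no longer predict the news outlet from the representation.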

Online Access