Journal Article

Semantics-Aware Autoencoder
Document Type
Periodical
Source
IEEE Access, 7:166122-166137, 2019
Subject
Aerospace
Bioengineering
Communication, Networking and Broadcast Technologies
Components, Circuits, Devices and Systems
Computing and Processing
Engineered Materials, Dielectrics and Plasmas
Engineering Profession
Fields, Waves and Electromagnetics
General Topics for Engineers
Geoscience
Nuclear Engineering
Photonics and Electrooptics
Power, Energy and Industry Applications
Robotics and Control Systems
Signal Processing and Analysis
Transportation
Deep learning
Recommender systems
Task analysis
Computational modeling
Engines
Companies
Tools
Autoencoder neural network
cold start problem
deep learning
explanation
knowledge graph
recommender system
Language
English
ISSN
2169-3536
Abstract
Recommender systems are widely adopted in today's services, such as e-commerce websites and multimedia streaming platforms. They help users find what they are looking for by suggesting relevant items based on their past preferences. Deep learning models are very effective at solving the recommendation problem, and many deep learning architectures have been proposed over the years. Although these models outperform many state-of-the-art algorithms, their main drawback is a lack of interpretability: explaining why a specific item has been recommended to a user is difficult because the model itself is not interpretable. Accuracy alone is no longer enough, since users also expect a useful explanation for the suggested items. In this paper, we present SemAuto, a novel approach based on an autoencoder neural network that makes it possible to semantically label neurons in the hidden layers, thus paving the way to the model's interpretability and, consequently, to the explanation of a recommendation. We evaluated our semantics-aware approach against other state-of-the-art algorithms to demonstrate its recommendation accuracy. Furthermore, we performed an extensive A/B test with real users to evaluate the explanations we generate.
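The core idea described in the abstract — an autoencoder whose hidden neurons are tied to knowledge-graph features of the items, so their activations can be read as an explanation — can be illustrated with a minimal sketch. This is not the authors' implementation; the toy data, mask construction, and training loop below are assumptions made purely for illustration: each hidden neuron is connected only to items that exhibit its feature, and that connectivity mask is re-applied after every gradient step.

```python
import numpy as np

# Hypothetical toy data: 4 items, 3 knowledge-graph features.
# item_features[i, f] = 1 if item i is linked to feature f in the KG.
item_features = np.array([
    [1, 0, 1],
    [1, 1, 0],
    [0, 1, 1],
    [0, 0, 1],
], dtype=float)

rng = np.random.default_rng(0)
n_items, n_feats = item_features.shape

# Masked weights: each hidden neuron connects only to items carrying
# its feature -- this is what makes the neuron "semantic".
W_enc = rng.normal(0, 0.1, (n_items, n_feats)) * item_features
W_dec = rng.normal(0, 0.1, (n_feats, n_items)) * item_features.T

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(r):
    h = sigmoid(r @ W_enc)        # hidden activations = feature relevance
    return h, sigmoid(h @ W_dec)  # reconstructed ratings

# One user's (normalized) ratings over the 4 items.
ratings = np.array([1.0, 0.8, 0.0, 0.2])

# A few steps of plain gradient descent on the squared reconstruction
# error, re-applying the connectivity mask after each update.
lr = 0.5
for _ in range(500):
    h, out = forward(ratings)
    err = out - ratings
    g_out = err * out * (1 - out)          # dE/d(pre-activation of output)
    g_dec = np.outer(h, g_out)             # gradient for decoder weights
    g_h = (g_out @ W_dec.T) * h * (1 - h)  # backprop into hidden layer
    g_enc = np.outer(ratings, g_h)         # gradient for encoder weights
    W_dec -= lr * g_dec * item_features.T
    W_enc -= lr * g_enc * item_features

h, out = forward(ratings)
# The hidden activations can now be read as the user's affinity for each
# KG feature, which is the basis for generating an explanation.
print(h)
```

After training, ranking the hidden activations gives the features most relevant to the user, which is the kind of signal the paper uses to explain recommendations; the real system would of course operate on a full user-item matrix and a real knowledge graph rather than this toy setup.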