Academic Journal Article

Revisiting Supervised Word Embeddings
Document Type
Article
Source
Journal of Information Science and Engineering. Vol. 38 Issue 2, p413-427. 15 p.
Subject
supervised word embeddings
topic models
supervised learning
supervised topic models
word vectors
Language
English
ISSN
1016-2364
Abstract
Word embeddings play a crucial role in a variety of applications. However, most previous work focuses on word embeddings that are either non-discriminative or hardly interpretable. In this work, we investigate a novel approach, referred to as SWET, which learns supervised word embeddings using topic models from labeled corpora. SWET inherits the interpretability of topic models and the discriminativeness of supervised inference from labels. More importantly, SWET enables us to directly exploit a large class of existing unsupervised and supervised topic models to learn supervised word embeddings. Extensive experiments show that SWET outperforms unsupervised approaches by a large margin and is highly competitive with supervised baselines.