Journal Article

Graph Receptive Transformer Encoder for Text Classification
Document Type
Periodical
Source
IEEE Transactions on Signal and Information Processing over Networks, vol. 10, pp. 347-359, 2024
Subject
Signal Processing and Analysis
Computing and Processing
Communication, Networking and Broadcast Technologies
Text categorization
Context modeling
Training
Task analysis
Transformers
Computational modeling
Semantics
BERT
graph convolutional networks (GCNs)
graph neural networks (GNNs)
inductive
text classification
transductive
transformers
Language
English
ISSN
2373-776X
2373-7778
Abstract
By employing attention mechanisms, transformers have made great improvements in nearly all NLP tasks, including text classification. However, the context of a transformer's attention mechanism is limited to single sequences, and its fine-tuning stage can utilize only inductive learning. Focusing on broader contexts by representing texts as graphs, previous works have generalized transformer models to graph domains to employ attention mechanisms beyond single sequences. However, these approaches either require exhaustive pre-training stages, learn only transductively, or can learn inductively without utilizing pre-trained models. To address these problems simultaneously, we propose the Graph Receptive Transformer Encoder (GRTE), which combines graph neural networks (GNNs) with large-scale pre-trained models for text classification in both inductive and transductive fashions. By constructing heterogeneous and homogeneous graphs over given corpora and not requiring a pre-training stage, GRTE can utilize information from both large-scale pre-trained models and graph-structured relations. Our proposed method retrieves global and contextual information in documents and generates word embeddings as a by-product of inductive inference. We compared the proposed GRTE with a wide range of baseline models through comprehensive experiments. Compared to the state of the art, we demonstrate that GRTE improves model performance and offers computational savings of up to ~100×.
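To make the combination described in the abstract concrete, the sketch below illustrates one generic way to couple a pre-trained transformer encoder with graph convolutional layers for document classification. It is not the authors' GRTE implementation: the model name (bert-base-uncased), the use of [CLS] embeddings as node features, the two-layer GCN, and the hidden dimension are all illustrative assumptions, and PyTorch, PyTorch Geometric, and Hugging Face Transformers are assumed to be available.

```python
# Illustrative sketch only (not the authors' GRTE code): a pre-trained
# transformer supplies contextual node features, and GCN layers propagate
# information over a graph built from the corpus.
import torch
import torch.nn.functional as F
from torch import nn
from torch_geometric.nn import GCNConv
from transformers import AutoModel


class TransformerGCNClassifier(nn.Module):
    def __init__(self, pretrained_name="bert-base-uncased",
                 hidden_dim=256, num_classes=2):
        super().__init__()
        # Pre-trained encoder provides contextual document embeddings.
        self.encoder = AutoModel.from_pretrained(pretrained_name)
        enc_dim = self.encoder.config.hidden_size
        # GCN layers aggregate information from neighboring graph nodes.
        self.gcn1 = GCNConv(enc_dim, hidden_dim)
        self.gcn2 = GCNConv(hidden_dim, num_classes)

    def forward(self, input_ids, attention_mask, edge_index):
        # One node per document; use the [CLS] embedding as its feature.
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        x = out.last_hidden_state[:, 0]            # (num_nodes, enc_dim)
        # Message passing over edges constructed from the corpus graph.
        x = F.relu(self.gcn1(x, edge_index))
        return self.gcn2(x, edge_index)            # class logits per node
```

In such a setup, the transformer contributes contextual information learned during large-scale pre-training, while the graph layers inject corpus-level relations between documents (or words), which is the general idea the abstract attributes to combining GNNs with pre-trained models.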