Academic paper

The Transformer Neural Network Architecture for Part-of-Speech Tagging
Document Type
Conference
Source
2021 IEEE Conference of Russian Young Researchers in Electrical and Electronic Engineering (ElConRus), pp. 536-540, Jan. 2021
Subject
Communication, Networking and Broadcast Technologies
Components, Circuits, Devices and Systems
Computing and Processing
Engineered Materials, Dielectrics and Plasmas
Fields, Waves and Electromagnetics
General Topics for Engineers
Photonics and Electrooptics
Power, Energy and Industry Applications
Robotics and Control Systems
Signal Processing and Analysis
Keywords
Training
Neural networks
Computer architecture
Tagging
Nonhomogeneous media
Task analysis
Speech processing
part-of-speech tagging
natural language processing
the Transformer
neural network
Language
ISSN
2376-6565
Abstract
Part-of-speech tagging (POS tagging) is one of the most important tasks in natural language processing. The process determines the part of speech of each word in a given sentence and assigns it an appropriate tag. The resulting tag sequence can be used on its own or as part of more complex tasks, such as dependency and constituency parsing. POS tagging is a sequence-to-sequence task, and multilayer bidirectional LSTM networks are commonly used for it. Such networks are rather slow to train and to process large amounts of data because they compute each timestep of the input sequence sequentially. This paper focuses on developing an accurate POS tagging model based on the original Transformer neural network architecture.
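The abstract's contrast between the LSTM's sequential computation and the Transformer's parallelism rests on scaled dot-product attention, in which every token attends to all others at once. The following is a minimal pure-Python sketch of that operation (an illustration, not code from the paper); the function and variable names are our own:

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    e = [math.exp(x - m) for x in xs]
    s = sum(e)
    return [v / s for v in e]

def attention(Q, K, V):
    """Scaled dot-product attention over a whole sentence.

    Q, K, V: lists of d-dimensional vectors, one per token.
    Unlike an LSTM, no timestep depends on the previous one,
    so all rows of the output could be computed in parallel.
    """
    d = len(Q[0])
    out = []
    for q in Q:
        # Similarity of this query to every key, scaled by sqrt(d).
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in K]
        w = softmax(scores)
        # Weighted average of all value vectors.
        out.append([sum(wj * v[i] for wj, v in zip(w, V))
                    for i in range(d)])
    return out
```

For example, with two one-hot token vectors as queries, keys, and values, each output row is a convex combination of the value vectors, weighted toward the token most similar to the query.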