Academic Paper
The Transformer Neural Network Architecture for Part-of-Speech Tagging
Document Type
Conference
Author
Source
2021 IEEE Conference of Russian Young Researchers in Electrical and Electronic Engineering (ElConRus), pp. 536-540, Jan. 2021
Subject
Language
ISSN
2376-6565
Abstract
Part-of-speech tagging (POS tagging) is one of the most important tasks in natural language processing. It consists of determining the part of speech of each word in a given sentence and assigning it an appropriate tag. The resulting tag sequence can be used either on its own or as part of more complex tasks, such as dependency and constituency parsing. POS tagging belongs to the class of sequence-to-sequence tasks, and multilayer bidirectional LSTM networks are commonly used for it. Such networks are rather slow to train and to process large amounts of data, because each timestep of the input sequence must be computed sequentially. This paper focuses on developing an accurate POS tagging model based on the original Transformer neural network architecture.
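The core idea the abstract alludes to, that self-attention processes all tokens of a sentence in parallel rather than timestep by timestep, can be illustrated with a minimal sketch. This is not the paper's model: the embeddings, weights, dimensions, and tag inventory below are all hypothetical toy values, and only a single attention head with a per-token tag projection is shown.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Hypothetical toy setup: a 4-token sentence, model width 8, 3 POS tags.
tokens = ["the", "cat", "sat", "down"]
d_model, n_tags = 8, 3
X = rng.standard_normal((len(tokens), d_model))  # stand-in token embeddings

# Single-head scaled dot-product self-attention (one encoder sub-layer):
# Q, K, V are linear projections of the whole sentence at once.
W_q = rng.standard_normal((d_model, d_model))
W_k = rng.standard_normal((d_model, d_model))
W_v = rng.standard_normal((d_model, d_model))
Q, K, V = X @ W_q, X @ W_k, X @ W_v
attn = softmax(Q @ K.T / np.sqrt(d_model))  # every token attends to every token
H = attn @ V                                # contextualized token representations

# A per-token linear projection to tag scores; argmax picks one tag per word,
# which is exactly the "one tag for each word" output POS tagging requires.
W_out = rng.standard_normal((d_model, n_tags))
tag_ids = (H @ W_out).argmax(axis=-1)
print(tag_ids.shape)  # one predicted tag id per input token
```

Unlike an LSTM, nothing here iterates over token positions: the attention matrix and the tag scores are computed for the whole sentence in a few matrix products, which is the source of the training-speed advantage the abstract mentions.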