Academic Paper

Improving Transformers using Faithful Positional Encoding
Document Type
Working Paper
Subject
Computer Science - Machine Learning
Language
English
Abstract
We propose a new positional encoding method for the Transformer neural network architecture. Unlike the standard sinusoidal positional encoding, our approach rests on a solid mathematical foundation and is guaranteed not to lose information about the positional order of the input sequence. We show that the new encoding systematically improves prediction performance on time-series classification tasks.
Comment: arXiv admin note: text overlap with arXiv:2305.17149
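
For context, the "standard sinusoidal positional encoding" the abstract contrasts against is the one introduced in "Attention Is All You Need" (Vaswani et al., 2017). The following is a minimal NumPy sketch of that baseline only; it is not the faithful encoding proposed in the paper, whose construction is not given in this record.

```python
# Minimal sketch of the STANDARD sinusoidal positional encoding (the baseline
# the abstract refers to), not the paper's proposed faithful encoding.
import numpy as np

def sinusoidal_positional_encoding(seq_len: int, d_model: int) -> np.ndarray:
    """Return a (seq_len, d_model) matrix of sinusoidal position embeddings."""
    positions = np.arange(seq_len)[:, np.newaxis]            # (seq_len, 1)
    dims = np.arange(0, d_model, 2)[np.newaxis, :]           # (1, d_model // 2)
    angles = positions / np.power(10000.0, dims / d_model)   # broadcast to (seq_len, d_model // 2)

    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angles)   # even dimensions use sine
    pe[:, 1::2] = np.cos(angles)   # odd dimensions use cosine
    return pe

# Example: encodings for a length-50 sequence with model dimension 128,
# typically added element-wise to the token embeddings before the first layer.
if __name__ == "__main__":
    pe = sinusoidal_positional_encoding(seq_len=50, d_model=128)
    print(pe.shape)  # (50, 128)
```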