Academic Article

Natural Language Processing based Machine Translation for Hindi-English using GRU and Attention
Document Type
Conference
Source
2022 International Conference on Applied Artificial Intelligence and Computing (ICAAIC), pp. 965-969, May 2022
Subject
Communication, Networking and Broadcast Technologies
Components, Circuits, Devices and Systems
Computing and Processing
Signal Processing and Analysis
Keywords
Training
Location awareness
Deep learning
Target recognition
Computational modeling
Neural networks
Pipelines
Translation
Machine
Words
Text
Localization
Subfield
Target
Meanings
Phrases
Software
Language
Abstract
Machine translation (MT) is a branch of computational linguistics that studies the translation of text or speech from one language to another. At its most rudimentary level, machine translation performs a mechanical substitution of words in one language with words in another; this alone rarely yields a good translation, because a good translation must find the closest match between the complete sentence and the target language. Not every term in one language has an equivalent word in another, and many words have multiple meanings. Unlike conventional phrase-based translation systems, which consist of many small sub-components that are tuned individually, neural machine translation (NMT) seeks to design and train a single, large, end-to-end neural network that reads a sentence and produces an accurate translation in the target language. Neural machine translation is a relatively new technique in language translation and localization that trains translation models using deep neural networks and artificial intelligence. The main advantage of this approach is that a single system can be trained directly on source and target text, eliminating the pipeline of specialized components required for statistical machine translation and allowing training and translation to be performed with one end-to-end model.
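The GRU-and-attention architecture named in the title and the end-to-end encoder-decoder training described in the abstract can be illustrated with a short sketch. The PyTorch implementation, vocabulary sizes, and dimensions below are illustrative assumptions and are not taken from the paper.

```python
# A minimal sketch (not the paper's code) of a GRU encoder-decoder with
# additive (Bahdanau-style) attention for Hindi-English translation.
# Vocabulary sizes, dimensions, and the <sos> token id are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    def __init__(self, src_vocab, emb_dim=256, hid_dim=512):
        super().__init__()
        self.embed = nn.Embedding(src_vocab, emb_dim)
        self.gru = nn.GRU(emb_dim, hid_dim, batch_first=True)

    def forward(self, src):
        # src: (batch, src_len) of source-token ids
        outputs, hidden = self.gru(self.embed(src))
        return outputs, hidden  # (batch, src_len, hid), (1, batch, hid)

class Attention(nn.Module):
    def __init__(self, hid_dim=512):
        super().__init__()
        self.score = nn.Linear(hid_dim * 2, hid_dim)
        self.v = nn.Linear(hid_dim, 1, bias=False)

    def forward(self, dec_hidden, enc_outputs):
        # dec_hidden: (batch, hid); enc_outputs: (batch, src_len, hid)
        src_len = enc_outputs.size(1)
        dec_rep = dec_hidden.unsqueeze(1).repeat(1, src_len, 1)
        energy = torch.tanh(self.score(torch.cat((dec_rep, enc_outputs), dim=2)))
        weights = F.softmax(self.v(energy).squeeze(2), dim=1)   # (batch, src_len)
        context = torch.bmm(weights.unsqueeze(1), enc_outputs)  # (batch, 1, hid)
        return context, weights

class Decoder(nn.Module):
    def __init__(self, tgt_vocab, emb_dim=256, hid_dim=512):
        super().__init__()
        self.embed = nn.Embedding(tgt_vocab, emb_dim)
        self.attn = Attention(hid_dim)
        self.gru = nn.GRU(emb_dim + hid_dim, hid_dim, batch_first=True)
        self.out = nn.Linear(hid_dim, tgt_vocab)

    def forward(self, tgt_tok, hidden, enc_outputs):
        # tgt_tok: (batch, 1) previous target token; hidden: (1, batch, hid)
        context, _ = self.attn(hidden[-1], enc_outputs)
        rnn_in = torch.cat((self.embed(tgt_tok), context), dim=2)
        output, hidden = self.gru(rnn_in, hidden)
        return self.out(output.squeeze(1)), hidden  # logits over target vocab

# Toy forward pass (illustrative only).
if __name__ == "__main__":
    enc, dec = Encoder(src_vocab=8000), Decoder(tgt_vocab=6000)
    src = torch.randint(0, 8000, (4, 12))          # 4 Hindi sentences, 12 tokens each
    enc_outputs, hidden = enc(src)
    tgt_tok = torch.zeros(4, 1, dtype=torch.long)  # assumed <sos> token id 0
    logits, hidden = dec(tgt_tok, hidden, enc_outputs)
    print(logits.shape)                            # torch.Size([4, 6000])
```

In a full training loop, the decoder would typically be unrolled one target token at a time with teacher forcing, and greedy or beam-search decoding would be used at inference; these details are standard NMT practice and are not specified in the abstract.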