Academic Paper
English-Twi Parallel Corpus for Machine Translation
Document Type
Working Paper
Author
Azunre, Paul; Osei, Salomey; Addo, Salomey; Adu-Gyamfi, Lawrence Asamoah; Moore, Stephen; Adabankah, Bernard; Opoku, Bernard; Asare-Nyarko, Clara; Nyarko, Samuel; Amoaba, Cynthia; Appiah, Esther Dansoa; Akwerh, Felix; Lawson, Richard Nii Lante; Budu, Joel; Debrah, Emmanuel; Boateng, Nana; Ofori, Wisdom; Buabeng-Munkoh, Edwin; Adjei, Franklin; Ampomah, Isaac Kojo Essel; Otoo, Joseph; Borkor, Reindorf; Mensah, Standylove Birago; Mensah, Lucien; Marcel, Mark Amoako; Amponsah, Anokye Acheampong; Hayfron-Acquah, James Ben
Source
Subject
Language
Abstract
We present a parallel machine translation training corpus for English and Akuapem Twi comprising 25,421 sentence pairs. We used a transformer-based translator to generate initial translations into Akuapem Twi, which were then verified and corrected where necessary by native speakers to eliminate any occurrence of translationese. In addition, 697 higher-quality crowd-sourced sentences are provided for use as an evaluation set for downstream Natural Language Processing (NLP) tasks. The typical use case for the larger human-verified dataset is further training of machine translation models for Akuapem Twi. The higher-quality crowd-sourced dataset of 697 sentences is recommended as a test set for English-to-Twi and Twi-to-English machine translation models. Furthermore, the Twi portion of the crowd-sourced data may also be used for other tasks, such as representation learning, classification, etc. We fine-tune the transformer translation model on the training corpus and report benchmarks on the crowd-sourced test set.
Comment: 9 pages paper, Accepted at African NLP workshop @EACL 2021