Academic Journal

An Enhanced Phrase Matching Method Based on Cross-Attention.
Document Type
Article
Source
Mathematical Problems in Engineering. 4/12/2023, p1-11. 11p.
Subject
*LANGUAGE models
*DATA fusion (Statistics)
*PEARSON correlation (Statistics)
*TERMS & phrases
*NATURAL languages
*MULTISENSOR data fusion
Language
English
ISSN
1024-123X
Abstract
Text matching is a core problem in natural language understanding: it analyzes and judges the semantic relevance or similarity between two texts. Past work on text matching has focused mainly on long texts, with little attention to phrase optimization, yet phrase matching also has essential application scenarios. Compared with long texts, phrases cover less semantic content and are more prone to polysemy; because of this weak expressive ability, phrases are hard to match accurately. On the Kaggle patent phrase matching dataset, the few words per phrase and the repeated occurrence of phrases under different patent classification numbers make accurate matching especially difficult. In the data-processing stage, this work proposes aggregating related targets for data fusion, expanding the background semantic information and enhancing the expressive ability of phrases. In the training stage, this work adds a cross-attention network to the model so that the additional related targets are better used and learned; the cross-attention network makes the model attend to the most relevant information. The proposed method was evaluated on pretrained language models such as BERT-for-patent, DeBERTa, and RoBERTa. The results show that it improves the evaluation metric, the Pearson correlation, by 2–4 points over the general method that uses no additional information or extra network layers, enhancing performance in short-text matching. [ABSTRACT FROM AUTHOR]
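The record does not include the paper's code. As a rough illustration of the two ideas named in the abstract, the sketch below implements scaled dot-product cross-attention (phrase-token embeddings attending over fused related-target embeddings) and the Pearson correlation metric in NumPy. All names, shapes, and dimensions here are illustrative assumptions, not the authors' actual implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(query, context, d_k):
    """Phrase tokens (queries) attend over related-target tokens (keys/values).

    query:   (m, d) phrase-token embeddings
    context: (n, d) embeddings of fused related targets
    Returns a (m, d) context-enriched phrase representation.
    """
    scores = query @ context.T / np.sqrt(d_k)   # (m, n) similarity scores
    weights = softmax(scores, axis=-1)          # attention over context tokens
    return weights @ context                    # weighted sum of context values

rng = np.random.default_rng(0)
phrase  = rng.normal(size=(3, 8))   # 3 phrase tokens, embedding dim 8 (assumed)
targets = rng.normal(size=(5, 8))   # 5 fused related-target tokens (assumed)
enriched = cross_attention(phrase, targets, d_k=8)
print(enriched.shape)               # (3, 8)

# The evaluation metric: Pearson correlation between predicted and
# gold similarity scores (toy values, not from the paper).
pred = np.array([0.00, 0.25, 0.50, 1.00])
gold = np.array([0.00, 0.50, 0.50, 0.75])
pearson = np.corrcoef(pred, gold)[0, 1]
print(round(pearson, 3))
```

In the paper's setting the queries and context would come from a pretrained encoder (e.g., DeBERTa) rather than random vectors, and the cross-attention layer would be trained end to end with the rest of the model.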