Academic Paper

Therapeutic Prediction task on Electronic Health Record using DeBERTa
Document Type
Conference
Source
2022 IEEE 9th Uttar Pradesh Section International Conference on Electrical, Electronics and Computer Engineering (UPCON), pp. 1-6, Dec. 2022
Subject
Communication, Networking and Broadcast Technologies
Components, Circuits, Devices and Systems
Computing and Processing
Engineered Materials, Dielectrics and Plasmas
Fields, Waves and Electromagnetics
General Topics for Engineers
Geoscience
Photonics and Electrooptics
Power, Energy and Industry Applications
Robotics and Control Systems
Signal Processing and Analysis
Computational modeling
Bit error rate
Training data
Predictive models
Transformers
Data models
Natural language processing
Natural language processing (NLP)
Electronic health record (EHR)
BERT
Masked language modeling
Next sentence prediction
Language
English
ISSN
2687-7767
Abstract
Several deep learning models have previously been used to perform tasks on electronic health records (EHR), such as prediction, with fairly good results, but they require a large amount of training data. Many rare disorders, such as CTX and Type II glycogen storage disease, are caused by metabolic defects, which explains why there are few patients with these diseases. The amount of data needed is very high, and the healthcare system cannot provide it in most situations. Because most prior work uses only patients with multiple visits, a large percentage of EHR data, namely data from patients with a single visit, is discarded, which creates a problem for therapy prediction. To address the limited-dataset issue while maintaining the performance of the prediction task, this paper includes EHR data from patients with a single visit. We use two pre-trained models for this purpose: BERT (Bidirectional Encoder Representations from Transformers) and its extension DeBERTa (Decoding-enhanced BERT with disentangled attention), which we then fine-tune using electronic medical records. This paper uses the models BERT and DeBERTa to show the effectiveness of the suggested approach. BERT and DeBERTa outperform other baseline models on several evaluation metrics, including the Jaccard score, AUC, and F1. We also conclude that DeBERTa performed better than BERT.
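As a concrete illustration of the fine-tuning setup the abstract describes, the sketch below loads a pre-trained DeBERTa checkpoint with a multi-label classification head and scores its predictions with the Jaccard, F1, and AUC metrics named above. It uses the Hugging Face Transformers and scikit-learn libraries; the checkpoint name, the number of therapy labels, and the helper functions are illustrative assumptions, not details taken from the paper.

```python
import torch
from sklearn.metrics import f1_score, jaccard_score, roc_auc_score
from transformers import AutoModelForSequenceClassification, AutoTokenizer

NUM_LABELS = 50  # hypothetical number of candidate therapies

# Load a pre-trained DeBERTa checkpoint with a fresh multi-label head;
# problem_type selects a BCE-with-logits loss during fine-tuning.
tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "microsoft/deberta-base",
    num_labels=NUM_LABELS,
    problem_type="multi_label_classification",
)
model.eval()

def predict(notes):
    """Return per-therapy probabilities for a batch of clinical notes."""
    batch = tokenizer(notes, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        logits = model(**batch).logits
    return torch.sigmoid(logits).numpy()

def evaluate(y_true, y_prob, threshold=0.5):
    """Score multi-label predictions with the metrics from the abstract."""
    y_pred = (y_prob >= threshold).astype(int)
    return {
        "jaccard": jaccard_score(y_true, y_pred, average="samples"),
        "f1": f1_score(y_true, y_pred, average="samples"),
        # macro AUC assumes every therapy label appears at least once in y_true
        "auc": roc_auc_score(y_true, y_prob, average="macro"),
    }
```

Fine-tuning itself would proceed on (note, therapy-set) pairs with a standard training loop or the Transformers Trainer; the evaluation helper then mirrors the Jaccard, F1, and AUC comparison against baselines reported in the abstract.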