Academic Paper
MedFuseNet: Fusion of Multi-Modal Data for Improved Cervical Cancer Diagnostic Accuracy
Document Type
Conference
Author
Source
2025 3rd International Conference on Intelligent Data Communication Technologies and Internet of Things (IDCIoT), pp. 1138-1144, Feb. 2025
Subject
Language
Abstract
Timely identification of cervical cancer is essential for effective therapy and improved patient outcomes. Conventional imaging methods, although fundamental, frequently fail to capture the detailed characteristics of cancer pathology because they rely on isolated data sources. MedFuseNet addresses these difficulties with an advanced architecture that integrates multi-modal data to significantly improve diagnostic accuracy and dependability. The model combines high-resolution medical imaging with patient-specific clinical data, establishing a comprehensive analytical foundation for diagnosis. Our dataset comprises 4,049 annotated cervical cell images spanning a spectrum from normal to malignant, each enriched with detailed clinical parameters. MedFuseNet employs a hybrid architecture that integrates CNNs for image data and RNNs for sequential clinical data, enabling a thorough analysis of the different data sources. This methodological integration allows MedFuseNet to surpass conventional single-source models, achieving an accuracy of 98.5% alongside significant gains in precision (97.9%) and recall (98.1%). These substantial diagnostic improvements highlight the promise of multi-modal data fusion in medical imaging, paving the way for more sophisticated, AI-driven diagnostic instruments that may transform early cancer diagnosis and treatment approaches.
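To make the described CNN-plus-RNN fusion concrete, the following is a minimal sketch of such a multi-modal classifier. The layer sizes, the GRU choice for the recurrent branch, the feature-concatenation fusion strategy, the input shapes, and the number of classes are all assumptions for illustration; the abstract does not specify these details of MedFuseNet.

```python
# Minimal sketch of a CNN + RNN multi-modal fusion classifier, loosely following
# the architecture described in the abstract. Layer sizes, the fusion strategy
# (feature concatenation), and input/output dimensions are assumptions, not
# details taken from the paper.
import torch
import torch.nn as nn

class MultiModalFusionNet(nn.Module):
    def __init__(self, clinical_feat_dim=16, num_classes=2):
        super().__init__()
        # CNN branch: extracts features from a cervical cell image (3 x 128 x 128 assumed).
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),          # -> (batch, 64)
        )
        # RNN branch: encodes a sequence of clinical measurements per patient.
        self.rnn = nn.GRU(input_size=clinical_feat_dim, hidden_size=64, batch_first=True)
        # Fusion head: concatenated image + clinical features -> class logits.
        self.classifier = nn.Sequential(
            nn.Linear(64 + 64, 128), nn.ReLU(), nn.Dropout(0.3),
            nn.Linear(128, num_classes),
        )

    def forward(self, image, clinical_seq):
        img_feat = self.cnn(image)                    # (batch, 64)
        _, h_n = self.rnn(clinical_seq)               # h_n: (1, batch, 64)
        clin_feat = h_n.squeeze(0)                    # (batch, 64)
        fused = torch.cat([img_feat, clin_feat], dim=1)
        return self.classifier(fused)

# Example forward pass with random tensors standing in for real data.
model = MultiModalFusionNet()
images = torch.randn(4, 3, 128, 128)     # batch of 4 RGB cell images
clinical = torch.randn(4, 10, 16)        # 4 patients, 10 time steps, 16 clinical features
logits = model(images, clinical)
print(logits.shape)                      # torch.Size([4, 2])
```

Concatenating the two feature vectors before the classifier is only one possible fusion scheme; attention-based or gated fusion would slot into the same structure by replacing the `torch.cat` step.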