Journal Article

A CNN-Model to Classify Low-Grade and High-Grade Glioma From MRI Images
Document Type
Periodical
Source
IEEE Access, vol. 11, pp. 46283-46296, 2023
Subject
Aerospace
Bioengineering
Communication, Networking and Broadcast Technologies
Components, Circuits, Devices and Systems
Computing and Processing
Engineered Materials, Dielectrics and Plasmas
Engineering Profession
Fields, Waves and Electromagnetics
General Topics for Engineers
Geoscience
Nuclear Engineering
Photonics and Electrooptics
Power, Energy and Industry Applications
Robotics and Control Systems
Signal Processing and Analysis
Transportation
Tumors
Convolutional neural networks
Magnetic resonance imaging
Feature extraction
Support vector machines
Brain modeling
Surgery
Low and high-grade glioma grading
convolutional neural networks
MRI images
Language
English
ISSN
2169-3536
Abstract
Glioma is the most common brain tumor in the world. Identifying its grade (level of severity), which is crucial for treatment planning, is highly demanding in a clinical environment. Computer-aided methods have been explored for glioma grade identification; among them, deep learning-based methods, owing to their automatic feature engineering, have achieved notable outcomes. In this study, convolutional neural networks (CNNs) have been explored and utilized for glioma grade classification, i.e., low grade (grade I-II) versus high grade (grade III-IV). A CNN-based model that is lightweight in terms of layers, size, and learnable parameters is proposed. Experiments were carried out on publicly available benchmark datasets, i.e., BraTS-2017, BraTS-2018, and BraTS-2019. A locally developed dataset from Bahawal Victoria Hospital, Bahawalpur, Pakistan, was also employed to cross-validate the outcomes. Additionally, the effectiveness of the proposed model was compared with state-of-the-art pretrained CNN models, i.e., ResNet18, SqueezeNet, and AlexNet. On the benchmark datasets, the proposed model achieved the highest standard evaluation measures, with accuracy, specificity, and sensitivity of 97.85%, 98.88%, and 99.88%, respectively. On the locally developed dataset, these measures were 98.89%, 99.28%, and 99.77%, which is the best performance compared with recent state-of-the-art related studies.
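
Illustrative sketch
The abstract does not specify the architecture or training details, so the following is only a minimal sketch of the kind of lightweight CNN binary classifier and evaluation measures it describes, not the authors' published model. The layer sizes, the 224x224 single-channel input resolution, and the names LightweightGliomaCNN and sensitivity_specificity are illustrative assumptions; sensitivity and specificity are computed from the standard confusion-matrix definitions.

# Minimal sketch (assumed architecture, not the paper's): a small CNN for
# binary glioma grading (low grade vs. high grade) on 2-D MRI slices.
import torch
import torch.nn as nn

class LightweightGliomaCNN(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        # Three small conv blocks plus global pooling keep the parameter count low.
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.BatchNorm2d(16), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.BatchNorm2d(32), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.BatchNorm2d(64), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),   # global average pooling instead of large FC layers
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.classifier(torch.flatten(x, 1))

def sensitivity_specificity(preds: torch.Tensor, labels: torch.Tensor):
    # Label 1 is taken as high grade (the positive class).
    tp = ((preds == 1) & (labels == 1)).sum().item()
    tn = ((preds == 0) & (labels == 0)).sum().item()
    fp = ((preds == 1) & (labels == 0)).sum().item()
    fn = ((preds == 0) & (labels == 1)).sum().item()
    sensitivity = tp / (tp + fn) if (tp + fn) else 0.0
    specificity = tn / (tn + fp) if (tn + fp) else 0.0
    accuracy = (tp + tn) / max(tp + tn + fp + fn, 1)
    return accuracy, sensitivity, specificity

if __name__ == "__main__":
    model = LightweightGliomaCNN()
    slices = torch.randn(4, 1, 224, 224)   # batch of 4 single-channel MRI slices
    preds = model(slices).argmax(dim=1)
    print(sensitivity_specificity(preds, torch.tensor([0, 1, 1, 0])))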