Academic Paper

Explainability of Neural Networks for Symbol Detection in Molecular Communication Channels
Document Type
Periodical
Source
IEEE Transactions on Molecular, Biological and Multi-Scale Communications, 9(3):323-328, Sep. 2023
Subject
Communication, Networking and Broadcast Technologies
Bioengineering
Computing and Processing
Signal Processing and Analysis
Artificial neural networks
Symbols
Detectors
Receivers
Standards
Mathematical models
Channel models
Explainable AI
individual conditional expectation
local interpretable model-agnostic explanation
machine learning
molecular communication
neural network
testbed
Language
English
ISSN
2372-2061
2332-7804
Abstract
Recent molecular communication (MC) research suggests machine learning (ML) models for symbol detection, circumventing the infeasibility of deriving end-to-end channel models. However, these ML models are applied as black boxes, lacking proof that the underlying neural networks (NNs) correctly detect incoming symbols. This paper studies approaches to the explainability of NNs for symbol detection in MC channels. Based on MC channel models and real testbed measurements, we generate synthesized data and train an NN model to detect binary transmissions in MC channels. Using the local interpretable model-agnostic explanation (LIME) method and individual conditional expectation (ICE), the findings in this paper demonstrate the analogy between the trained NN and the standard peak and slope detectors.
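As a rough illustration of the two explainability methods named in the abstract, the sketch below applies ICE and a LIME-style local surrogate to a toy detector. Everything here is an assumption for demonstration: `toy_detector` is a hypothetical logistic stand-in for the paper's trained NN (it simply weights the peak sample of a received signal window), and the window size, kernel width, and perturbation scheme are illustrative choices, not the authors' setup.

```python
import numpy as np

# Hypothetical stand-in for a trained symbol-detection NN: a logistic
# model that relies on the centre ("peak") sample of a received signal
# window. Illustration only -- not the paper's actual network.
def toy_detector(X):
    w = np.zeros(X.shape[1])
    w[X.shape[1] // 2] = 4.0                     # weight the peak sample
    return 1.0 / (1.0 + np.exp(-(X @ w - 2.0)))

def ice_curves(model, X, feature, grid):
    """ICE: for each row of X, sweep one input feature over `grid`
    while holding all other features fixed, recording the model output."""
    curves = np.empty((X.shape[0], grid.size))
    for j, v in enumerate(grid):
        Xv = X.copy()
        Xv[:, feature] = v
        curves[:, j] = model(Xv)
    return curves

def lime_weights(model, x0, n=500, sigma=0.3, seed=1):
    """LIME-style local surrogate: perturb x0, then fit a linear model
    weighted by proximity to x0; the coefficients indicate which input
    samples the black-box detector relies on near x0."""
    rng = np.random.default_rng(seed)
    Z = x0 + rng.normal(0.0, sigma, size=(n, x0.size))
    y = model(Z)
    # Square-root proximity weights implement weighted least squares.
    sw = np.sqrt(np.exp(-np.sum((Z - x0) ** 2, axis=1) / (2 * sigma**2)))
    A = np.hstack([Z, np.ones((n, 1))])          # intercept column
    coef, *_ = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)
    return coef[:-1]                             # drop the intercept

rng = np.random.default_rng(0)
X = rng.normal(0.5, 0.2, size=(5, 9))            # 5 received windows, 9 samples each
grid = np.linspace(0.0, 1.0, 11)                 # sweep the peak sample
curves = ice_curves(toy_detector, X, feature=4, grid=grid)
coefs = lime_weights(toy_detector, X[0])
```

For this toy model, the ICE curves rise monotonically as the peak sample grows, and the LIME coefficient for the peak sample dominates the others, which is the kind of evidence the paper uses to argue that a trained NN behaves like a standard peak detector.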