Academic paper

DFP-Net: An explainable and trustworthy framework for detecting deepfakes using interpretable prototypes
Document Type
Conference
Source
2023 IEEE International Joint Conference on Biometrics (IJCB), pp. 1-9, Sep. 2023
Subject
Computing and Processing
Signal Processing and Analysis
Deepfakes
Visualization
Forensics
Decision making
Prototypes
Predictive models
Deepfakes detection
DFP-Net
Interpretable prototypes
Explainable AI
FaceForensics++
Language
English
ISSN
2474-9699
Abstract
The rise of deepfake videos poses a serious threat to the authenticity of visual media: they have the potential to manipulate public opinion, mislead individuals or groups, and damage reputations. Traditional deepfake detection methods rely on deep learning models that lack transparency and interpretability. To gain the confidence of forensic experts in AI-based deepfake detectors, we present DFP-Net, a novel framework for detecting deepfakes using interpretable and explainable prototypes. Our method leverages prototype-based learning to generate a set of representative images that capture the essential features of genuine and deepfake images. These prototypes are then used to explain the model's decision-making process and to provide insight into the features most relevant for deepfake detection. We then use these prototypes to train a classification model that detects deepfakes accurately and with high interpretability. To further improve interpretability, we also apply the Grad-CAM technique to generate heatmaps that highlight the image regions contributing most to the model's decision. These heatmaps explain the reasoning behind the model's predictions and reveal the visual cues that distinguish deepfakes from real images. Experimental results on the large-scale FaceForensics++, Celeb-DF, and DFDC-P datasets demonstrate that our method achieves state-of-the-art performance in deepfake detection. Moreover, the interpretability and explainability of our method make it more trustworthy to forensic experts by allowing them to understand how the model works and makes its predictions.
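The two explanation mechanisms the abstract names, prototype-based scoring and Grad-CAM heatmaps, can be sketched in a few lines of NumPy. This is a hypothetical illustration, not the DFP-Net implementation: the ProtoPNet-style log activation, the embedding sizes, and all variable names are assumptions chosen for the toy demo, while the Grad-CAM combination step (gradient-weighted sum of feature maps, clipped at zero) follows the standard Selvaraju et al. formulation.

```python
import numpy as np

# --- 1. Prototype-based classification (illustrative sketch) ---------------
# Each class owns representative prototype embeddings; an input is scored by
# its similarity to every prototype, and those similarities double as the
# explanation ("this image activates fake-prototype 3 most strongly").

def prototype_scores(embedding, prototypes):
    """ProtoPNet-style log activation from squared distances (assumption)."""
    d2 = np.sum((prototypes - embedding) ** 2, axis=1)
    return np.log((d2 + 1.0) / (d2 + 1e-4))

# --- 2. Grad-CAM heatmap ---------------------------------------------------
# Grad-CAM weights each feature map by the spatially averaged gradient of the
# class score, sums the weighted maps, and clips negative values.

def grad_cam(activations, gradients):
    """activations, gradients: (C, H, W) arrays from the last conv layer."""
    weights = gradients.mean(axis=(1, 2))             # global-average-pool grads
    cam = np.maximum((weights[:, None, None] * activations).sum(axis=0), 0.0)
    return cam / cam.max() if cam.max() > 0 else cam  # normalize to [0, 1]

# Toy demo with synthetic tensors (no real model or images involved).
rng = np.random.default_rng(0)
prototypes = rng.normal(size=(4, 8))                  # 2 "real" + 2 "fake" prototypes
labels = ["real", "real", "fake", "fake"]
query = prototypes[2] + 0.05 * rng.normal(size=8)     # near a "fake" prototype
scores = prototype_scores(query, prototypes)
print(labels[int(np.argmax(scores))])                 # nearest prototype is "fake"

heatmap = grad_cam(rng.normal(size=(16, 7, 7)), rng.normal(size=(16, 7, 7)))
print(heatmap.shape)                                  # (7, 7) map over the image
```

In a real detector the query embedding would come from a CNN backbone and the prototypes would be learned (and projected onto training patches); here random vectors stand in for both so the scoring and heatmap arithmetic can run on their own.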