Academic Paper

Adversarial Attacks Against LipNet: End-to-End Sentence Level Lipreading
Document Type
Conference
Source
2020 IEEE Security and Privacy Workshops (SPW), May 2020, pp. 15-19
Subject
Computing and Processing
Visualization
Privacy
Social networking (online)
Neural networks
Iterative methods
Security
Task analysis
Carlini-Wagner attacks
fast gradient sign method
LipNet
Language
English
Abstract
Visual adversarial attacks inspired by Carlini-Wagner targeted audiovisual attacks can fool the state-of-the-art Google DeepMind LipNet model into subtitling arbitrary target sentences with over 99% similarity. We explore several methods of visual adversarial attacks, including the vanilla fast gradient sign method (FGSM), the $L_{\infty}$ iterative fast gradient sign method, and the $L_{2}$ modified Carlini-Wagner attacks. The feasibility of these attacks raises privacy and false-information threats, as video transcriptions are used to recommend and inform people worldwide, including on social media.
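For context on the attack family the abstract names, below is a minimal sketch of the vanilla FGSM and the $L_{\infty}$ iterative FGSM against a generic differentiable model in PyTorch. The names `model`, `x`, and `target` are illustrative assumptions, not the authors' code, and the cross-entropy loss is a placeholder: the paper's attacks operate on LipNet's video frame sequences, whose sentence-level output is trained with a CTC loss.

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, target, epsilon=0.01, targeted=False):
    """One-step FGSM: perturb the input by epsilon along the sign of
    the loss gradient (an L-infinity-bounded perturbation)."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), target)
    loss.backward()
    # A targeted attack steps *against* the gradient to move toward the
    # chosen label; an untargeted attack steps with it to raise the loss.
    step = -epsilon if targeted else epsilon
    x_adv = x + step * x.grad.sign()
    # Keep pixel values in the valid [0, 1] range.
    return x_adv.clamp(0.0, 1.0).detach()

def iterative_fgsm(model, x, target, epsilon=0.01, alpha=0.002,
                   steps=10, targeted=False):
    """L-infinity iterative FGSM: repeat small FGSM steps, projecting
    back into the epsilon-ball around the original input each time."""
    x_orig = x.clone().detach()
    x_adv = x_orig.clone()
    sign = -1.0 if targeted else 1.0
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), target)
        loss.backward()
        x_adv = x_adv + sign * alpha * x_adv.grad.sign()
        # Project onto the epsilon-ball, then clip to valid pixels.
        x_adv = x_orig + (x_adv - x_orig).clamp(-epsilon, epsilon)
        x_adv = x_adv.clamp(0.0, 1.0)
    return x_adv.detach()
```

The $L_{2}$ Carlini-Wagner attack the abstract also mentions differs in structure: rather than taking fixed-size sign steps inside an $L_{\infty}$ ball, it directly optimizes a box-constrained perturbation that minimizes $L_{2}$ distortion while satisfying a targeted misclassification objective.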