Journal Article

Improving the Security of Audio CAPTCHAs With Adversarial Examples
Document Type
Periodical
Source
IEEE Transactions on Dependable and Secure Computing, 21(2):650-667, Apr. 2024
Subject
Computing and Processing
CAPTCHAs
Perturbation methods
Security
Image recognition
Training
Internet
Task analysis
Audio CAPTCHA
reCAPTCHA
adversarial examples
generative adversarial networks
Language
English
ISSN
1545-5971
1941-0018
2160-9209
Abstract
CAPTCHAs (completely automated public Turing tests to tell computers and humans apart) have been the main protection against malicious automated attacks on public systems for many years. Audio CAPTCHAs, one of the most important CAPTCHA forms, provide an effective test for visually impaired users. However, in recent years most existing audio CAPTCHAs have been successfully attacked by machine-learning-based audio recognition algorithms, demonstrating their insecurity. In this article, a generative adversarial network (GAN)-based method is proposed to generate adversarial audio CAPTCHAs. The method uses a generator to synthesize noise, a discriminator to make the noise similar to the target, and a threshold function to limit the size of the perturbation; the synthesized perturbation is then combined with the original audio to produce the adversarial audio CAPTCHA. The experimental results demonstrate that adding adversarial examples greatly reduces the recognition accuracy of automatic recognition models and improves the robustness of different types of audio CAPTCHAs. We also explore ensemble learning strategies to improve the transferability of the proposed adversarial audio CAPTCHAs. Finally, a user study is conducted to investigate the effect of adversarial CAPTCHAs on human users.
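The abstract describes a three-part pipeline: a generator that synthesizes noise, a discriminator that pushes the perturbed audio toward the target, and a threshold that bounds the perturbation before it is added to the original audio. The sketch below is a minimal, hypothetical PyTorch rendering of that pipeline for illustration only; the network architectures, the threshold value `epsilon`, the loss terms, and all hyperparameters are assumptions, not the authors' implementation.

```python
# Illustrative sketch of a GAN-based adversarial audio CAPTCHA pipeline.
# All architectural and training details are assumptions, not from the paper.
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Synthesizes an additive perturbation for a raw audio waveform."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=15, padding=7), nn.ReLU(),
            nn.Conv1d(16, 16, kernel_size=15, padding=7), nn.ReLU(),
            nn.Conv1d(16, 1, kernel_size=15, padding=7), nn.Tanh(),
        )

    def forward(self, audio):              # audio: (batch, 1, samples)
        return self.net(audio)             # perturbation in [-1, 1]

class Discriminator(nn.Module):
    """Scores how close a waveform is to the target (unperturbed) audio."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=15, stride=4), nn.LeakyReLU(0.2),
            nn.Conv1d(16, 32, kernel_size=15, stride=4), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(32, 1),
        )

    def forward(self, audio):
        return self.net(audio)             # raw logit

def make_adversarial(generator, audio, epsilon=0.05):
    """Threshold the synthesized noise, then add it to the original audio."""
    perturbation = torch.clamp(generator(audio), -epsilon, epsilon)
    return torch.clamp(audio + perturbation, -1.0, 1.0)

if __name__ == "__main__":
    # One illustrative training step; losses and optimizers are placeholders.
    g, d = Generator(), Discriminator()
    opt_g = torch.optim.Adam(g.parameters(), lr=1e-4)
    opt_d = torch.optim.Adam(d.parameters(), lr=1e-4)
    bce = nn.BCEWithLogitsLoss()

    audio = torch.randn(4, 1, 16000).clamp(-1, 1)   # stand-in for CAPTCHA audio
    adv = make_adversarial(g, audio)

    # Discriminator: original audio -> real, adversarial audio -> fake.
    d_loss = bce(d(audio), torch.ones(4, 1)) + bce(d(adv.detach()), torch.zeros(4, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: fool the discriminator so the perturbed audio stays close to the target.
    g_loss = bce(d(adv), torch.ones(4, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

In practice, the generator objective would also include a term that degrades the output of an automatic speech recognition model, which is what makes the perturbed CAPTCHA adversarial; that component is omitted here because the abstract does not specify how it is formulated.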