Academic Paper

How many dimensions are required to find an adversarial example?
Document Type
Conference
Source
2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pp. 2353-2360, Jun. 2023
Subject
Computing and Processing
Engineering Profession
Computer vision
Computational modeling
Perturbation methods
Conferences
Toy manufacturing industry
Pattern recognition
Standards
Language
English
ISSN
2160-7516
Abstract
Past work exploring adversarial vulnerability has focused on situations where an adversary can perturb all dimensions of model input. On the other hand, a range of recent works consider the case where an adversary can perturb either (i) a limited number of input parameters or (ii) a subset of the modalities in a multimodal problem. In both of these cases, adversarial examples are effectively constrained to a subspace $V$ in the ambient input space $\mathcal{X}$. Motivated by this, in this work we investigate how adversarial vulnerability depends on $\dim(V)$. In particular, we show that the adversarial success of standard PGD attacks with $\ell_p$-norm constraints behaves like a monotonically increasing function of $\varepsilon \left( \frac{\dim(V)}{\dim \mathcal{X}} \right)^{\frac{1}{q}}$, where $\varepsilon$ is the perturbation budget and $\frac{1}{p} + \frac{1}{q} = 1$, provided $p > 1$ (the case $p = 1$ presents additional subtleties, which we analyze in some detail). This functional form can be easily derived from a simple toy linear model, and as such our results lend further credence to arguments that adversarial examples are endemic to locally linear models on high-dimensional spaces.
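As a rough sketch of where this functional form can come from (assuming, for illustration, that $V$ is spanned by a subset of input coordinates and that the weights of the toy linear model have entries of comparable magnitude; the paper's own derivation may differ in detail): consider a linear score $f(x) = w^{\top} x$ on $\mathcal{X} = \mathbb{R}^{d}$ and a perturbation $\delta \in V$ with $\|\delta\|_{p} \le \varepsilon$. By the duality of the $\ell_p$ and $\ell_q$ norms (with $\frac{1}{p} + \frac{1}{q} = 1$),
$$\max_{\delta \in V,\ \|\delta\|_{p} \le \varepsilon} w^{\top}\delta \;=\; \varepsilon \, \|P_{V} w\|_{q},$$
where $P_{V}$ is the projection onto the attacked coordinates. If all entries of $w$ have magnitude roughly $c$, then $\|P_{V} w\|_{q} \approx c \, \dim(V)^{1/q}$ and $\|w\|_{q} \approx c \, \dim(\mathcal{X})^{1/q}$, so
$$\varepsilon \, \|P_{V} w\|_{q} \;\approx\; \varepsilon \left( \frac{\dim(V)}{\dim \mathcal{X}} \right)^{\frac{1}{q}} \|w\|_{q},$$
i.e. the attainable change in the model output, and hence the attack's success, is governed by $\varepsilon \left( \frac{\dim(V)}{\dim \mathcal{X}} \right)^{\frac{1}{q}}$.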