Evading Black-box Classifiers Without Breaking Eggs

Edoardo Debenedetti, Nicholas Carlini and Florian Tramèr

IEEE Conference on Secure and Trustworthy Machine Learning (SaTML) 2024 (Best paper award runner-up)

Previously presented at ICML 2023 Workshop on Adversarial Machine Learning (Oral presentation)



Abstract

Decision-based evasion attacks repeatedly query a black-box classifier to generate adversarial examples. Prior work measures the cost of such attacks by the total number of queries made to the classifier. We argue this metric is flawed. Most security-critical machine learning systems aim to weed out "bad" data (e.g., malware, harmful content, etc.). Queries to such systems carry a fundamentally asymmetric cost: queries detected as "bad" come at a higher cost because they trigger additional security filters, e.g., usage throttling or account suspension. Yet, we find that existing decision-based attacks issue a large number of "bad" queries, which likely renders them ineffective against security-critical systems. We then design new attacks that reduce the number of bad queries by 1.5–7.3×, but often at a significant increase in total (non-bad) queries. We thus pose it as an open problem to build black-box attacks that are more effective under realistic cost metrics.
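
The asymmetric cost metric can be illustrated with a small sketch (not the paper's implementation): a wrapper around a hard-label black-box classifier that tallies total queries and "bad" queries separately, so an attack's realistic cost can be read off from the latter. The classifier interface and the choice of which label counts as "bad" are illustrative assumptions.

    # Minimal sketch of asymmetric query accounting (illustrative only).
    from dataclasses import dataclass
    from typing import Callable

    import numpy as np


    @dataclass
    class QueryCounter:
        """Wraps a black-box decision function and tallies query costs."""
        classify: Callable[[np.ndarray], int]  # hypothetical hard-label oracle
        bad_label: int = 1                     # label the defender wants to catch
        total_queries: int = 0
        bad_queries: int = 0

        def __call__(self, x: np.ndarray) -> int:
            label = self.classify(x)
            self.total_queries += 1
            if label == self.bad_label:
                # A "bad" query: the sample was flagged, which in a deployed
                # system could trigger throttling or account suspension.
                self.bad_queries += 1
            return label

    # Usage: an attack calls the wrapped model as usual; under the asymmetric
    # cost metric, the attacker's effective cost is dominated by
    # `model.bad_queries` rather than `model.total_queries`.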


BibTeX
@inproceedings{DCT24,
  author    = {Debenedetti, Edoardo and Carlini, Nicholas and Tram{\`e}r, Florian},
  title     = {Evading Black-box Classifiers Without Breaking Eggs},
  booktitle = {IEEE Conference on Secure and Trustworthy Machine Learning (SaTML)},
  year      = {2024},
  note      = {arXiv preprint arXiv:2306.02895},
  url       = {https://arxiv.org/abs/2306.02895}
}