Adversarial machine learning

Machine learning (ML) algorithms are becoming ubiquitous; they're used in applications ranging from chess playing and weather prediction to cancer diagnosis and self-driving cars. In this project we first try to understand how robust ML algorithms are in the face of an adversary. Specifically, we study whether an adversary can fool ML classifiers in practical settings without arousing the suspicion of a human. For instance, we showed that it is possible to 3D-print a pair of eyeglasses that, when worn by an adversary, can cause a state-of-the-art face-recognition algorithm to identify the adversary as someone else, including as a specific person of the adversary's choosing. We then leverage what we learn about ML algorithms' weaknesses to design ML algorithms that are more resistant to attack.
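As a concrete illustration of the kind of attack we study, the sketch below shows a targeted evasion attack in the style of projected gradient descent against an off-the-shelf image classifier. It is a minimal sketch, not the method from any one of our papers: the model, epsilon, step size, and iteration count are arbitrary illustrative choices.

# Minimal sketch of a targeted adversarial-example attack (PGD-style).
# Assumptions: PyTorch/torchvision; `image` is a 1x3xHxW tensor in [0, 1];
# the model choice is illustrative, and input normalization is omitted.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights="IMAGENET1K_V1").eval()

def targeted_pgd(image, target_class, eps=8/255, step=1/255, iters=40):
    adv = image.clone().detach()
    target = torch.tensor([target_class])
    for _ in range(iters):
        adv.requires_grad_(True)
        loss = F.cross_entropy(model(adv), target)
        grad, = torch.autograd.grad(loss, adv)
        # Targeted attack: step *down* the loss for the target class,
        # then project back into the eps-ball around the original image.
        adv = (adv - step * grad.sign()).detach()
        adv = image + (adv - image).clamp(-eps, eps)
        adv = adv.clamp(0, 1)
    return adv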

Video demonstrating targeted impersonation: Mahmood impersonates Ariel against VGG10. The video shows that the face recognizer isn't confused by non-adversarial eyeglasses, including large, bright ones, but that adversarial eyeglasses, generated specifically to fool the recognizer into classifying Mahmood as Ariel, succeed overwhelmingly at doing so. Targeted impersonation is achieved via the method described in “Adversarial generative nets: neural network attacks on state-of-the-art face recognition” (see below).
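The eyeglass attacks additionally confine the perturbation to the region covered by the eyeglass frames, so the rest of the face is untouched. A minimal sketch of that masking step, assuming a hypothetical glasses_mask tensor (1 on the frames, 0 elsewhere) and a gradient computed as in the sketch above:

# Sketch of confining the perturbation to the eyeglass frames.
# `glasses_mask` is a hypothetical 1x3xHxW tensor: 1 on the frames,
# 0 elsewhere; `grad` is the loss gradient w.r.t. `adv`, as above.
import torch

def masked_step(adv, image, grad, glasses_mask, step=1/255):
    adv = adv - step * grad.sign() * glasses_mask
    # Pixels outside the mask are reset to the original face image.
    return torch.where(glasses_mask.bool(), adv.clamp(0, 1), image)

The physically realized attacks in our papers also optimize the eyeglasses for printability and for robustness to changes in pose, which this sketch omits.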

Publications

new!  Training robust ML-based raw-binary malware detectors in hours, not months.   [PDF, BibTeX]
Keane Lucas, Weiran Lin, Lujo Bauer, Michael K. Reiter, and Mahmood Sharif.
In Proceedings of the 31st ACM SIGSAC Conference on Computer and Communications Security, October 2024. © authors

Group-based robustness: A general framework for customized robustness in the real world.   [PDF, BibTeX, talk video]
Weiran Lin, Keane Lucas, Neo Eyal, Lujo Bauer, Michael K. Reiter, and Mahmood Sharif.
In Proceedings of the 31st Network and Distributed System Security Symposium, February 2024. Internet Society. © authors  DOI:10.14722/ndss.2024.24084

RS-Del: Edit distance robustness certificates for sequence classifiers via randomized deletion.   [PDF, BibTeX]
Zhuoqun Huang, Neil G. Marchant, Keane Lucas, Lujo Bauer, Olga Ohrimenko, and Benjamin I. P. Rubinstein.
In Advances in Neural Information Processing Systems 36 (NeurIPS 2023), 2023.

Adversarial training for raw-binary malware classifiers.   [PDF, BibTeX, talk video and slides]
Keane Lucas, Samruddhi Pai, Weiran Lin, Lujo Bauer, Michael K. Reiter, and Mahmood Sharif.
In Proceedings of the 32nd USENIX Security Symposium, August 2023. USENIX.

Constrained Gradient Descent: a powerful and principled evasion attack against neural networks.   [PDF, BibTeX, talk video, talk slides]
Weiran Lin, Keane Lucas, Lujo Bauer, Michael K. Reiter, and Mahmood Sharif.
In Proceedings of the 39th International Conference on Machine Learning, ICML 2022, July 2022.

Malware makeover: breaking ML-based static analysis by modifying executable bytes.   [PDF, BibTeX, slides, talk video, code]
Keane Lucas, Mahmood Sharif, Lujo Bauer, Michael K. Reiter, and Saurabh Shintre.
In Proceedings of the ACM Asia Conference on Computer and Communications Security, June 2021. © authors  DOI:10.1145/3433210.3453086

n-ML: Mitigating adversarial examples via ensembles of topologically manipulated classifiers.   [PDF, BibTeX, project page]
Mahmood Sharif, Lujo Bauer, and Michael K. Reiter.
arXiv preprint 1912.09059, December 2019.

Optimization-guided binary diversification to mislead neural networks for malware detection.   [PDF, BibTeX, project page]
Mahmood Sharif, Keane Lucas, Lujo Bauer, Michael K. Reiter, and Saurabh Shintre.
arXiv preprint 1912.09064, December 2019.

A general framework for adversarial examples with objectives.   [PDF, BibTeX, project page]
Mahmood Sharif, Sruti Bhagavatula, Lujo Bauer, and Michael K. Reiter.
ACM Transactions on Privacy and Security, 22 (3). June 2019. (Revised version of arXiv preprint 1801.00349.) © authors  DOI:10.1145/3317611

On the suitability of Lp-norms for creating and preventing adversarial examples.   [PDF, BibTeX, project page]
Mahmood Sharif, Lujo Bauer, and Michael K. Reiter.
In Proceedings of The Bright and Dark Sides of Computer Vision: Challenges and Opportunities for Privacy and Security (in conjunction with the 2018 IEEE Conference on Computer Vision and Pattern Recognition), June 2018. © IEEE

Adversarial Generative Nets: Neural network attacks on state-of-the-art face recognition.   [PDF, BibTeX, project page]
Mahmood Sharif, Sruti Bhagavatula, Lujo Bauer, and Michael K. Reiter.
arXiv preprint 1801.00349, December 2017.

Accessorize to a crime: Real and stealthy attacks on state-of-the-art face recognition.   [PDF, BibTeX, talk video, project page]
Mahmood Sharif, Sruti Bhagavatula, Lujo Bauer, and Michael K. Reiter.
In Proceedings of the 23rd ACM SIGSAC Conference on Computer and Communications Security, October 2016.  DOI:10.1145/2976749.2978392


Parts of this work have been supported by the MURI Cyber Deception grant under ARO award W911NF-17-1-0370; the National Security Agency; the National Science Foundation; gifts from Google and NVIDIA; gifts from NATO and Lockheed Martin through Carnegie Mellon CyLab; a CyLab Presidential Fellowship; a NortonLifeLock Research Group Fellowship; and a DoD National Defense Science and Engineering Graduate Fellowship.