Training robust ML-based raw-binary malware detectors in hours, not months. [PDF, BibTeX]
Keane Lucas, Weiran Lin, Lujo Bauer, Michael K. Reiter, and Mahmood Sharif.
In Proceedings of the 31st ACM SIGSAC Conference on Computer and Communications Security, October 2024. © authors
Group-based robustness: A general framework for customized robustness in the real world. [PDF, BibTeX, talk video]
Weiran Lin, Keane Lucas, Neo Eyal, Lujo Bauer, Michael K. Reiter, and Mahmood Sharif.
In Proceedings of the 31st Network and Distributed System Security Symposium, February 2024. Internet Society. © authors DOI:10.14722/ndss.2024.24084
RS-Del: Edit distance robustness certificates for sequence classifiers via randomized deletion. [PDF, BibTeX]
Zhuoqun Huang, Neil G. Marchant, Keane Lucas, Lujo Bauer, Olga Ohrimenko, and Benjamin I. P. Rubinstein.
In Advances in Neural Information Processing Systems 36 (NeurIPS 2023), 2023.
Adversarial training for raw-binary malware classifiers. [PDF, BibTeX, talk video and slides]
Keane Lucas, Samruddhi Pai, Weiran Lin, Lujo Bauer, Michael K. Reiter, and Mahmood Sharif.
In Proceedings of the 32nd USENIX Security Symposium, August 2023. USENIX.
Constrained Gradient Descent: A powerful and principled evasion attack against neural networks. [PDF, BibTeX, talk video, talk slides]
Weiran Lin, Keane Lucas, Lujo Bauer, Michael K. Reiter, and Mahmood Sharif.
In Proceedings of the 39th International Conference on Machine Learning, ICML 2022, July 2022.
Malware makeover: Breaking ML-based static analysis by modifying executable bytes. [PDF, BibTeX, slides, talk video, code]
Keane Lucas, Mahmood Sharif, Lujo Bauer, Michael K. Reiter, and Saurabh Shintre.
In Proceedings of the ACM Asia Conference on Computer and Communications Security, June 2021. © authors DOI:10.1145/3433210.3453086
n-ML: Mitigating adversarial examples via ensembles of topologically manipulated classifiers. [PDF, BibTeX, project page]
Mahmood Sharif, Lujo Bauer, and Michael K. Reiter.
arXiv preprint 1912.09059, December 2019.
Optimization-guided binary diversification to mislead neural networks for malware detection. [PDF, BibTeX, project page]
Mahmood Sharif, Keane Lucas, Lujo Bauer, Michael K. Reiter, and Saurabh Shintre.
arXiv preprint 1912.09064, December 2019.
A general framework for adversarial examples with objectives. [PDF, BibTeX, project page]
Mahmood Sharif, Sruti Bhagavatula, Lujo Bauer, and Michael K. Reiter.
ACM Transactions on Privacy and Security, 22 (3). June 2019. (Revised version of arXiv preprint 1801.00349.) © authors DOI:10.1145/3317611
On the suitability of Lp-norms for creating and preventing adversarial examples. [PDF, BibTeX, project page]
Mahmood Sharif, Lujo Bauer, and Michael K. Reiter.
In Proceedings of The Bright and Dark Sides of Computer Vision: Challenges and Opportunities for Privacy and Security (in conjunction with the 2018 IEEE Conference on Computer Vision and Pattern Recognition), June 2018. © IEEE
Adversarial Generative Nets: Neural network attacks on state-of-the-art face recognition. [PDF, BibTeX, project page]
Mahmood Sharif, Sruti Bhagavatula, Lujo Bauer, and Michael K. Reiter.
arXiv preprint 1801.00349, December 2017.
Accessorize to a crime: Real and stealthy attacks on state-of-the-art face recognition. [PDF, BibTeX, talk video, project page]
Mahmood Sharif, Sruti Bhagavatula, Lujo Bauer, and Michael K. Reiter.
In Proceedings of the 23rd ACM SIGSAC Conference on Computer and Communications Security, October 2016. DOI:10.1145/2976749.2978392