n-ML: Mitigating adversarial examples via ensembles of topologically manipulated classifiers. [PDF, BibTeX]
Mahmood Sharif, Lujo Bauer, and Michael K. Reiter.
arXiv preprint 1912.09059, December 2019.
Optimization-guided binary diversification to mislead neural networks for malware detection. [PDF, BibTeX]
Mahmood Sharif, Keane Lucas, Lujo Bauer, Michael K. Reiter, and Saurabh Shintre.
arXiv preprint 1912.09064, December 2019.
A general framework for adversarial examples with objectives. [PDF, BibTeX]
Mahmood Sharif, Sruti Bhagavatula, Lujo Bauer, and Michael K. Reiter.
ACM Transactions on Privacy and Security, 22(3), June 2019. (Revised version of arXiv preprint 1801.00349.) © authors DOI:10.1145/3317611
On the suitability of Lp-norms for creating and preventing adversarial examples. [PDF, BibTeX]
Mahmood Sharif, Lujo Bauer, and Michael K. Reiter.
In Proceedings of The Bright and Dark Sides of Computer Vision: Challenges and Opportunities for Privacy and Security (in conjunction with the 2018 IEEE Conference on Computer Vision and Pattern Recognition), June 2018. © IEEE
Adversarial Generative Nets: Neural network attacks on state-of-the-art face recognition. [PDF, BibTeX]
Mahmood Sharif, Sruti Bhagavatula, Lujo Bauer, and Michael K. Reiter.
arXiv preprint 1801.00349, December 2017.
Accessorize to a crime: Real and stealthy attacks on state-of-the-art face recognition. [PDF, BibTeX]
Mahmood Sharif, Sruti Bhagavatula, Lujo Bauer, and Michael K. Reiter.
In Proceedings of the 23rd ACM SIGSAC Conference on Computer and Communications Security, October 2016. DOI:10.1145/2976749.2978392