A curated list of awesome adversarial machine learning resources, inspired by awesome-computer-vision.
- Breaking Linear Classifiers on ImageNet, A. Karpathy, 2015
- Breaking things is easy, N. Papernot & I. Goodfellow, 2016
- Intriguing properties of neural networks, C. Szegedy et al., ICLR 2014
- Explaining and Harnessing Adversarial Examples, I. Goodfellow et al., ICLR 2015
- Adversarial Examples in the Physical World, A. Kurakin et al., arXiv 2016
- Adversarial Examples for Generative Models, J. Kos et al., arXiv 2017
- The Limitations of Deep Learning in Adversarial Settings, N. Papernot et al., EuroS&P 2016
- Practical Black-Box Attacks against Deep Learning Systems using Adversarial Examples, N. Papernot et al., arXiv 2016
- Deep Neural Networks are Easily Fooled: High Confidence Predictions for Unrecognizable Images, A. Nguyen et al., CVPR 2015
- DeepFool: a simple and accurate method to fool deep neural networks, S.-M. Moosavi-Dezfooli et al., CVPR 2016
- Do Statistical Models Understand the World?, I. Goodfellow, 2015
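To make the core attack behind several of the listed papers concrete, here is a minimal NumPy sketch of the fast gradient sign method from Explaining and Harnessing Adversarial Examples (Goodfellow et al., ICLR 2015), applied to a toy logistic-regression classifier. The model weights, input point, and epsilon below are illustrative assumptions, not values from any paper.

```python
import numpy as np

def fgsm_linear(x, y, w, b, eps):
    """Fast gradient sign method for a logistic-regression classifier.

    Labels y are in {-1, +1} and the loss is
    L(x) = log(1 + exp(-y * (w @ x + b))), so the gradient with respect
    to x is -y * sigmoid(-y * (w @ x + b)) * w. FGSM perturbs the input
    by eps in the direction of the sign of that gradient.
    """
    margin = y * (np.dot(w, x) + b)
    grad = -y * (1.0 / (1.0 + np.exp(margin))) * w
    return x + eps * np.sign(grad)

# Toy classifier and a correctly classified point (illustrative values).
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([1.0, 0.5])   # w @ x + b = 1.5 > 0, predicted class +1
y = 1

x_adv = fgsm_linear(x, y, w, b, eps=1.0)
# An L-infinity perturbation of size eps flips the prediction.
print(np.sign(np.dot(w, x_adv) + b))  # → -1.0
```

For linear models the gradient sign is just `-y * sign(w)`, which is why, as the paper argues, even very simple classifiers are fooled by small per-coordinate perturbations.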
## License
To the extent possible under law, Yen-Chen Lin has waived all copyright and related or neighboring rights to this work.