Epsilon is 0.3 for linf training and 1.5 for l2 training.
Note that in testing mode, the target label used to create the adversarial example is the model's most confident prediction, not the ground truth. When the initial prediction is already wrong, the attack therefore targets an incorrect label, so the measured test robustness can be higher than the training robustness.
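To make the train/test target distinction concrete, here is a minimal numpy sketch with a toy linear classifier; all names (`W`, `train_target`, `test_target`) are hypothetical stand-ins, not identifiers from this repo:

```python
import numpy as np

# Toy linear classifier: logits = W @ x (hypothetical stand-in for the network).
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 4))   # 3 classes, 4 input features
x = rng.normal(size=4)
true_label = 2

logits = W @ x
train_target = true_label             # training: attack w.r.t. the ground truth
test_target = int(np.argmax(logits))  # testing: attack w.r.t. the top prediction
```

If the model already misclassifies `x`, the two targets differ, which is exactly the case where test robustness can end up higher than train robustness.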
The maximum epsilon is set to 4 (l2 norm) in this part.
python >= 3.5
torch == 1.0
torchvision == 0.2.1
numpy >= 1.16.1
matplotlib >= 3.0.2
Standard training:
python main.py --data_root [data directory]
linf training:
python main.py --data_root [data directory] -e 0.3 -p 'linf' --adv_train
l2 training:
python main.py --data_root [data directory] -e 1.5 -p 'l2' --adv_train
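The `--adv_train` flag above enables PGD-style adversarial training. A minimal numpy sketch of the linf attack step is shown below, assuming pixel values in [0, 1]; `pgd_linf` and `grad_fn` are hypothetical names, not functions from this repo:

```python
import numpy as np

def pgd_linf(x, grad_fn, epsilon=0.3, alpha=0.05, steps=20):
    """Iterated gradient-sign ascent projected onto an l-inf ball (sketch).

    grad_fn(x_adv) must return the gradient of the loss w.r.t. x_adv.
    """
    x_adv = x + np.random.uniform(-epsilon, epsilon, size=x.shape)
    for _ in range(steps):
        g = grad_fn(x_adv)
        x_adv = x_adv + alpha * np.sign(g)                 # ascent step on the loss
        x_adv = np.clip(x_adv, x - epsilon, x + epsilon)   # project onto the epsilon ball
        x_adv = np.clip(x_adv, 0.0, 1.0)                   # stay in the valid pixel range
    return x_adv

# Toy check: loss = sum((x - 0.8)^2), so its gradient is 2 * (x - 0.8).
x = np.full(8, 0.5)
x_adv = pgd_linf(x, lambda z: 2 * (z - 0.8), epsilon=0.3)
```

In the real training loop the perturbed batch replaces (or augments) the clean batch before the parameter update, which is why adversarial training is so much slower than standard training (see the timings at the end).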
Change the epsilon (-e) and perturbation-type (-p) arguments if you want to do linf testing instead.
python main.py --todo test --data_root [data directory] -e 0.314 -p 'l2' --load_checkpoint [your_model.pth]
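The test command reports accuracy on attacked inputs. A self-contained numpy sketch of that measurement is below; `robust_accuracy`, `predict`, and `attack` are hypothetical names chosen for illustration, not this repo's API:

```python
import numpy as np

def robust_accuracy(predict, attack, xs, ys):
    """Fraction of (x, y) pairs still classified correctly after the attack.

    predict(x) -> logits, attack(x) -> perturbed input.
    """
    correct = sum(int(np.argmax(predict(attack(x))) == y) for x, y in zip(xs, ys))
    return correct / len(xs)

# Toy check with an identity "attack" and a fixed linear model; the labels are
# the model's own predictions, so accuracy must come out as 1.0.
rng = np.random.default_rng(1)
W = rng.normal(size=(3, 4))
xs = rng.normal(size=(5, 4))
ys = [int(np.argmax(W @ x)) for x in xs]
acc = robust_accuracy(lambda x: W @ x, lambda x: x, xs, ys)
```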
Change the settings in visualize.py and visualize_attack.py if you want to do linf visualization.
visualize gradient to input:
python visualize.py --load_checkpoint [your_model.pth]
visualize adversarial examples with larger epsilon
python visualize_attack.py --load_checkpoint [your_model.pth]
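The gradient-to-input visualization amounts to computing the loss gradient with respect to the input pixels and rescaling it for display. visualize.py does this with autograd; the numpy sketch below uses central differences instead, purely as a self-contained illustration (`input_gradient_map` is a hypothetical name):

```python
import numpy as np

def input_gradient_map(x, loss_fn, eps=1e-4):
    """Central-difference gradient of loss_fn w.r.t. x, normalized to [0, 1]
    so it can be shown with matplotlib's imshow."""
    g = np.zeros_like(x, dtype=float)
    for i in range(x.size):
        d = np.zeros_like(x, dtype=float)
        d.flat[i] = eps
        g.flat[i] = (loss_fn(x + d) - loss_fn(x - d)) / (2 * eps)
    g = np.abs(g)                 # magnitude only, sign is not displayed
    return g / (g.max() + 1e-12)  # rescale to [0, 1]

# Toy check: for loss = sum(z^2) the gradient is 2z, so the largest input
# coordinate produces the brightest pixel.
sal = input_gradient_map(np.array([0.1, 0.5, 0.9]), lambda z: float(np.sum(z ** 2)))
```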
Standard training: 0.64 s / 100 iterations
Adversarial training: 16 s / 100 iterations
measured with a batch size of 64 on an NVIDIA GeForce GTX 1080.