# Adversarial Attacks on Images by Relighting

Finding artificial relightings that fool image classifiers, implemented in PyTorch.

## Project Description

- survey the available image augmentation techniques that simulate different illumination conditions
- adapt the most suitable frameworks to attack the classifier (a minimal sketch follows this list)
- evaluate the robustness of the targeted model against this type of input perturbation
- analyze whether adversarial training helps improve the model's robustness
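
As a rough illustration of the general idea, and not the repository's actual attack implementation, the sketch below optimises a simple per-channel gain/offset relighting of an input image so that a pretrained classifier misclassifies it. The function name `relighting_attack` and its parameters are hypothetical; the real attacks live in the `Attacks` package.

```python
import torch
import torch.nn.functional as F

def relighting_attack(model, image, label, steps=50, lr=0.05, budget=0.3):
    """Optimise a crude per-channel relighting (gain and offset) that fools `model`.

    `image` is a (1, 3, H, W) tensor in [0, 1]; `label` holds the true class index.
    """
    gain = torch.ones(1, 3, 1, 1, requires_grad=True)     # multiplicative light change
    offset = torch.zeros(1, 3, 1, 1, requires_grad=True)  # additive light change
    optimizer = torch.optim.Adam([gain, offset], lr=lr)

    for _ in range(steps):
        relit = torch.clamp(image * gain + offset, 0.0, 1.0)
        loss = -F.cross_entropy(model(relit), label)       # maximise classification loss
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        with torch.no_grad():                               # keep the relighting plausible
            gain.clamp_(1.0 - budget, 1.0 + budget)
            offset.clamp_(-budget, budget)

    return torch.clamp(image * gain + offset, 0.0, 1.0).detach()
```

The `budget` projection step keeps the relighting within a visually plausible range, analogous to the epsilon constraint in pixel-space attacks.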

## Repo Structure

- `Attacks`: all of our adversarial attack algorithms
- `Classifiers`: all classifiers that we attack in our experiments
- `Data`: the data we train the classifiers on
- `Dep`: deprecated notebooks and scripts
- `Experiments`: notebooks that execute our adversarial attack experiments
- `Relighters`: model implementations of all relighters

## Contributors