By Xuan Li, Xiao Feng, Changran Huang, Xiangying Wei
Transferred from https://github.com/eziaowonder/Lambda-
Link to paper, YouTube

This project implements Lambda-ResNet and analyzes its performance on a small dataset, ImageNette, a subset of ImageNet. Using the same training configuration throughout, we compare training and validation loss on this dataset across different levels of label noise and different model architectures.
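For reference, the lambda layer (Bello, "LambdaNetworks: Modeling Long-Range Interactions Without Attention", ICLR 2021) replaces self-attention by building a small linear map from keys and values and applying it to every query. Below is a minimal, content-only PyTorch sketch; the class name, hyperparameters, and the omission of position lambdas are our own simplifications, and the actual implementation used here lives in `layers`.

```python
import torch
import torch.nn as nn

class LambdaLayer(nn.Module):
    """Content-only lambda layer (simplified; position lambdas omitted)."""
    def __init__(self, dim, dim_k=16, heads=4):
        super().__init__()
        assert dim % heads == 0, "dim must be divisible by heads"
        self.heads = heads
        dim_v = dim // heads
        self.to_q = nn.Conv2d(dim, dim_k * heads, 1, bias=False)
        self.to_k = nn.Conv2d(dim, dim_k, 1, bias=False)
        self.to_v = nn.Conv2d(dim, dim_v, 1, bias=False)
        self.norm_q = nn.BatchNorm2d(dim_k * heads)
        self.norm_v = nn.BatchNorm2d(dim_v)

    def forward(self, x):
        b, c, hh, ww = x.shape
        n = hh * ww
        q = self.norm_q(self.to_q(x)).reshape(b, self.heads, -1, n)  # (b, h, k, n)
        k = self.to_k(x).reshape(b, -1, n).softmax(dim=-1)           # (b, k, n), softmax over positions
        v = self.norm_v(self.to_v(x)).reshape(b, -1, n)              # (b, v, n)
        content_lambda = torch.einsum('bkn,bvn->bkv', k, v)          # summarizes the whole feature map
        out = torch.einsum('bhkn,bkv->bhvn', q, content_lambda)      # apply the lambda to every query
        return out.reshape(b, c, hh, ww)

# Example: drop-in replacement for the 3x3 conv inside a ResNet bottleneck.
# x = torch.randn(2, 256, 14, 14); LambdaLayer(256)(x).shape == (2, 256, 14, 14)
```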
FullGrad of LambdaResNet50 (pre-trained on ImageNet, left) and ResNet50 (pre-trained, right)
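The saliency maps above come from the Grad-CAM/FullGrad code in `grad_cam`. For orientation only, here is a minimal vanilla Grad-CAM built from forward/backward hooks; it is not the FullGrad method used for the figure, and every name in it is ours rather than the repository's.

```python
import torch
import torch.nn.functional as F
from torchvision import models

class GradCAM:
    """Minimal vanilla Grad-CAM (illustration only; not the repo's FullGrad code)."""
    def __init__(self, model, target_layer):
        self.model = model.eval()
        self.activations = None
        self.gradients = None
        target_layer.register_forward_hook(self._save_activation)
        target_layer.register_full_backward_hook(self._save_gradient)

    def _save_activation(self, module, inp, out):
        self.activations = out.detach()

    def _save_gradient(self, module, grad_in, grad_out):
        self.gradients = grad_out[0].detach()

    def __call__(self, x, class_idx=None):
        logits = self.model(x)
        if class_idx is None:
            class_idx = logits.argmax(dim=1)
        self.model.zero_grad()
        logits[torch.arange(len(x)), class_idx].sum().backward()
        weights = self.gradients.mean(dim=(2, 3), keepdim=True)      # global-average-pool the gradients
        cam = F.relu((weights * self.activations).sum(dim=1, keepdim=True))
        cam = F.interpolate(cam, size=x.shape[-2:], mode="bilinear", align_corners=False)
        return cam / (cam.amax(dim=(2, 3), keepdim=True) + 1e-8)     # normalize to [0, 1]

# Example: heatmap for a pre-trained ResNet50 on a dummy image.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
cam = GradCAM(model, model.layer4[-1])
heatmap = cam(torch.randn(1, 3, 224, 224))
```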
- `data`: The ImageNette data class and image-augmentation transforms.
- `figure`: Grad-CAM figures and their original images (from ImageNette and the Grad-CAM repo).
- `grad_cam`: The Grad-CAM code (adapted from the Grad-CAM repo).
- `layers`: Lambda layer and SE attention block (a sketch of the SE block follows this list).
- `model`: The ResNet with lambda layer.
- `models`: Our trained models.
- `reports`: Our analysis paper.
- `KaggleTraining.ipynb`: The original code we use for training our model.
- `train.py`: Standalone training script (see usage below).
- `test.ipynb`: Some tests during implementation.
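As a rough illustration of the SE attention block listed under `layers`, here is a standard squeeze-and-excitation module in PyTorch; the class name and the reduction ratio of 16 are assumptions and may not match the code in this repository.

```python
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-Excitation: reweight channels by a learned global descriptor."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)          # squeeze: global average pooling
        self.fc = nn.Sequential(                     # excitation: bottleneck MLP + sigmoid gate
            nn.Linear(channels, channels // reduction, bias=False),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels, bias=False),
            nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                                 # scale each channel by its gate
```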
- Kaggle or Colab (recommended)
- Using `KaggleTraining.ipynb`: in Kaggle or Colab, search `imagenette` to obtain the dataset, then run all the sections.
- `python train.py`: you can modify the `Configs` inside this file (see the illustrative sketch below).
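Purely as an illustration of what editing `Configs` might involve, the dataclass below sketches one plausible shape for it; every field name and default value here is a guess, so check `train.py` itself before changing anything.

```python
from dataclasses import dataclass

@dataclass
class Configs:
    # Hypothetical training settings; the real Configs in train.py may differ.
    model: str = "lambda_resnet50"   # e.g. a ResNet variant with or without lambda/SE layers
    noise_level: int = 0             # percentage of noisy labels (e.g. 0, 5, 25, 50)
    epochs: int = 20
    batch_size: int = 64
    lr: float = 1e-3
    img_size: int = 224
    data_dir: str = "../input/imagenette/imagenette"
```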
Training runs are available in wandb.
You can simply search `imagenette` in Kaggle if you use `KaggleTraining.ipynb`. Alternatively, you can refer to the original repo. The dataset is laid out as follows:
```
../input/imagenette/imagenette
├── train
│   ├── n01440764
│   ├── n02102040
│   ├── n02979186
│   ├── n03000684
│   ├── n03028079
│   ├── n03394916
│   ├── n03417042
│   ├── n03425413
│   ├── n03445777
│   └── n03888257
├── train_noisy_imagenette.csv
├── val
│   ├── n01440764
│   ├── n02102040
│   ├── n02979186
│   ├── n03000684
│   ├── n03028079
│   ├── n03394916
│   ├── n03417042
│   ├── n03425413
│   ├── n03445777
│   └── n03888257
└── val_noisy_imagenette.csv
```
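Given this layout, a minimal way to load the clean-label split with torchvision is sketched below; the repository's own data class in `data` additionally handles the `*_noisy_imagenette.csv` label files, and its augmentations may differ.

```python
import torch
from torchvision import datasets, transforms

root = "../input/imagenette/imagenette"

# Basic ImageNet-style preprocessing; the augmentations in `data` may differ.
train_tf = transforms.Compose([
    transforms.RandomResizedCrop(224),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
val_tf = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

# ImageFolder reads clean labels from the wnid sub-directories; the
# *_noisy_imagenette.csv files supply the noisy labels instead.
train_ds = datasets.ImageFolder(f"{root}/train", transform=train_tf)
val_ds = datasets.ImageFolder(f"{root}/val", transform=val_tf)

train_loader = torch.utils.data.DataLoader(train_ds, batch_size=64, shuffle=True, num_workers=2)
val_loader = torch.utils.data.DataLoader(val_ds, batch_size=64, shuffle=False, num_workers=2)
```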