This is the official code for the TMLR paper "Can We Break Free from Strong Data Augmentations in Self-Supervised Learning?" by Shruthi Gowda, Elahe Arani, and Bahram Zonooz.
Schematic of the SSL method with prior-knowledge integration. The SSL module can incorporate any SSL method — Contrastive, Asymmetric, or Feature Decorrelation-based — and one method from each category is selected for this study. The prior network extracts implicit semantic knowledge and supervises the SSL module network to learn better representations. The network produced by the SSL module is then used for inference. This approach is expected to improve the quality of the learned features and enhance the generalization capability of the resulting network.
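The prior-supervision idea described above can be sketched as follows. This is a minimal illustrative example, not the paper's actual implementation: the encoder architecture, the choice of an asymmetric (negative cosine similarity) SSL term, the MSE distillation term, and the weighting factor `alpha` are all assumptions made here for clarity.

```python
# Hypothetical sketch: an SSL module whose representations are additionally
# supervised by a frozen prior network (names and losses are illustrative).
import torch
import torch.nn as nn
import torch.nn.functional as F


class Encoder(nn.Module):
    """Toy encoder standing in for the SSL/prior backbones."""

    def __init__(self, dim: int = 32, out: int = 16):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 64), nn.ReLU(), nn.Linear(64, out))

    def forward(self, x):
        return self.net(x)


def ssl_with_prior_loss(ssl_net, prior_net, x1, x2, alpha: float = 0.5):
    """Combine a generic two-view SSL loss with a distillation term that
    aligns the SSL features with the (frozen) prior network's features."""
    z1, z2 = ssl_net(x1), ssl_net(x2)
    # Asymmetric-style SSL term: negative cosine similarity between views,
    # with a stop-gradient on the second view.
    ssl_loss = -F.cosine_similarity(z1, z2.detach(), dim=-1).mean()
    # Prior network provides implicit semantic knowledge; it is not trained.
    with torch.no_grad():
        p = prior_net(x1)
    prior_loss = F.mse_loss(z1, p)
    return ssl_loss + alpha * prior_loss
```

In this sketch, only `ssl_net` receives gradients; the prior network acts purely as a teacher, which matches the supervision role described in the schematic.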
- python==3.8.0
- torch==1.10.0
- torchvision==0.8.0
Hyper-parameters for different SSL techniques and datasets:
If you find the code useful in your research, please consider citing our paper:
@article{gowda2024can,
  title={Can We Break Free from Strong Data Augmentations in Self-Supervised Learning?},
  author={Gowda, Shruthi and Arani, Elahe and Zonooz, Bahram},
  journal={arXiv preprint arXiv:2404.09752},
  year={2024}
}
This project is licensed under the terms of the MIT license.