Releases
v1.2.37
Changes
Added the VICReg model (special thanks to Boris Albar @b-albar). See the VICReg example; a sketch of the objective follows after this list.
The memory bank now works with distributed training (a gather sketch is included after this list).
Sped up all code working with runs and datasets.
Added helpers for predictions.
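The new VICReg model trains with the variance-invariance-covariance criterion from the Bardes et al. paper listed under Models. The snippet below is a minimal sketch of that objective in plain PyTorch, included only as an illustration of the loss this release adds; it is not the library's own implementation, and the 25/25/1 weights follow the paper's defaults.

```python
# Minimal sketch of the VICReg objective in plain PyTorch (illustrative only).
import torch
import torch.nn.functional as F

def vicreg_loss(z_a, z_b, inv_w=25.0, var_w=25.0, cov_w=1.0, eps=1e-4):
    n, d = z_a.shape
    # Invariance: pull the two views' embeddings together.
    inv = F.mse_loss(z_a, z_b)
    # Variance: hinge that keeps each dimension's std above 1 on both branches.
    std_a = torch.sqrt(z_a.var(dim=0) + eps)
    std_b = torch.sqrt(z_b.var(dim=0) + eps)
    var = 0.5 * (torch.mean(F.relu(1.0 - std_a)) + torch.mean(F.relu(1.0 - std_b)))
    # Covariance: decorrelate dimensions by penalizing off-diagonal covariance.
    z_a_c = z_a - z_a.mean(dim=0)
    z_b_c = z_b - z_b.mean(dim=0)
    cov_a = (z_a_c.T @ z_a_c) / (n - 1)
    cov_b = (z_b_c.T @ z_b_c) / (n - 1)
    off_diag = lambda m: m - torch.diag(torch.diag(m))
    cov = off_diag(cov_a).pow(2).sum() / d + off_diag(cov_b).pow(2).sum() / d
    return inv_w * inv + var_w * var + cov_w * cov
```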
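For the memory bank under distributed training, the usual pattern is to gather embeddings from all processes before writing them into the bank, so every rank stores the full global batch rather than only its local shard. The helper below is a hypothetical sketch of that pattern using torch.distributed; the function name and signature are illustrative and not part of the library API.

```python
# Hypothetical sketch: fill a ring-buffer memory bank under distributed training.
import torch
import torch.distributed as dist

@torch.no_grad()
def enqueue(bank: torch.Tensor, ptr: int, batch: torch.Tensor) -> int:
    """Write a batch into `bank` (size, dim) and return the new write pointer."""
    if dist.is_available() and dist.is_initialized():
        # Collect this batch from every rank so the bank sees all samples.
        gathered = [torch.zeros_like(batch) for _ in range(dist.get_world_size())]
        dist.all_gather(gathered, batch)
        batch = torch.cat(gathered, dim=0)
    size = bank.shape[0]
    idx = (torch.arange(batch.shape[0], device=bank.device) + ptr) % size
    bank[idx] = batch  # wrap-around write keeps the most recent embeddings
    return int((ptr + batch.shape[0]) % size)
```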
Models
Barlow Twins: Self-Supervised Learning via Redundancy Reduction, 2021
Bootstrap your own latent: A new approach to self-supervised Learning, 2020
DCL: Decoupled Contrastive Learning, 2021
DINO: Emerging Properties in Self-Supervised Vision Transformers, 2021
MAE: Masked Autoencoders Are Scalable Vision Learners, 2021
MSN: Masked Siamese Networks for Label-Efficient Learning, 2022
MoCo: Momentum Contrast for Unsupervised Visual Representation Learning, 2019
NNCLR: Nearest-Neighbor Contrastive Learning of Visual Representations, 2021
SimCLR: A Simple Framework for Contrastive Learning of Visual Representations, 2020
SimSiam: Exploring Simple Siamese Representation Learning, 2020
SMoG: Unsupervised Visual Representation Learning by Synchronous Momentum Grouping, 2022
SwAV: Unsupervised Learning of Visual Features by Contrasting Cluster Assignments, 2020
VICReg: Variance-Invariance-Covariance Regularization for Self-Supervised Learning, 2022