This repo contains the PyTorch implementation of our paper **Graph Contrastive Clustering** (GCC):
Huasong Zhong*, Jianlong Wu*, Chong Chen, Jianqiang Huang, Minghua Deng, Liqiang Nie, Zhouchen Lin, Xian-Sheng Hua
- Accepted at ICCV 2021.
- Different from basic contrastive clustering, which only assumes that an image and its augmentation should share similar representations and cluster assignments, we lift instance-level consistency to cluster-level consistency with the assumption that samples in one cluster, together with their augmentations, should all be similar.
- Motivation of GCC. (a) Existing contrastive-learning-based clustering methods mainly focus on instance-level consistency, which maximizes the correlation between self-augmented samples and treats all other samples as negatives. (b) GCC incorporates the category information to perform contrastive learning at both the instance and the cluster level, which better minimizes the intra-cluster variance and maximizes the inter-cluster variance.
- Framework of the proposed Graph Contrastive Clustering. GCC has two heads with shared CNN parameters. The first head is a representation graph contrastive (RGC) module, which helps to learn clustering-friendly features. The second head is an assignment graph contrastive (AGC) module, which leads to a more compact cluster assignment.
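As a rough illustration of the two-head design and the graph-based positive set described above, here is a minimal PyTorch sketch. The MLP backbone, the dimensions, and the simplified loss are illustrative stand-ins for exposition only, not the repo's actual code:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoHeadNet(nn.Module):
    """Sketch of GCC's two-head design: a shared backbone feeding a
    representation head (RGC) and a cluster-assignment head (AGC).
    The MLP backbone is a toy stand-in for the CNN used in the repo."""
    def __init__(self, in_dim=512, feat_dim=128, n_clusters=10):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU())
        self.feat_head = nn.Linear(256, feat_dim)       # RGC head
        self.cluster_head = nn.Linear(256, n_clusters)  # AGC head

    def forward(self, x):
        h = self.backbone(x)
        z = F.normalize(self.feat_head(h), dim=1)       # unit-norm features
        p = self.cluster_head(h).softmax(dim=1)         # soft assignments
        return z, p

def graph_contrastive_loss(z, cluster_ids, tau=0.5):
    """Simplified graph contrastive objective: samples sharing a
    (pseudo-)cluster label form the positive neighborhood in the
    graph; all other samples are negatives. A hypothetical
    simplification, not the repo's exact RGC/AGC losses."""
    sim = z @ z.t() / tau                                # similarity logits
    n = z.size(0)
    self_mask = torch.eye(n, dtype=torch.bool)
    # adjacency of the cluster graph: same cluster, excluding self-pairs
    pos = (cluster_ids.unsqueeze(0) == cluster_ids.unsqueeze(1)) & ~self_mask
    sim = sim.masked_fill(self_mask, float('-inf'))      # drop self-pairs
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    log_prob = log_prob.masked_fill(~pos, 0.0)           # keep positives only
    # average log-likelihood over each sample's positive neighbors
    return -(log_prob.sum(1) / pos.sum(1).clamp(min=1)).mean()
```

Relative to standard instance-level InfoNCE, the only change is that the positive mask covers every same-cluster sample rather than just the augmented view, which is the cluster-level consistency idea above.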
Install dependencies:

```shell
pip install -r requirements.txt
```
Train the end-to-end model on CIFAR-10, CIFAR-20, ImageNet-10, ImageNet-Dogs, Tiny-ImageNet, or STL-10:

```shell
CUDA_VISIBLE_DEVICES=0 python end2end.py --config_env configs/env.yml --config_exp configs/end2end/end2end_cifar10.yml
CUDA_VISIBLE_DEVICES=0 python end2end.py --config_env configs/env.yml --config_exp configs/end2end/end2end_cifar20.yml
CUDA_VISIBLE_DEVICES=0 python end2end.py --config_env configs/env.yml --config_exp configs/end2end/end2end_imagenet10.yml
CUDA_VISIBLE_DEVICES=0 python end2end.py --config_env configs/env.yml --config_exp configs/end2end/end2end_imagenet_dogs.yml
CUDA_VISIBLE_DEVICES=0 python end2end.py --config_env configs/env.yml --config_exp configs/end2end/end2end_tiny_imagenet.yml
CUDA_VISIBLE_DEVICES=0 python end2end.py --config_env configs/env.yml --config_exp configs/end2end/end2end_stl10.yml
```
Evaluate a trained model with the corresponding test script:

```shell
CUDA_VISIBLE_DEVICES=0 python test_end2end.py --config_env configs/env.yml --config_exp configs/end2end/end2end_cifar10.yml
CUDA_VISIBLE_DEVICES=0 python test_end2end.py --config_env configs/env.yml --config_exp configs/end2end/end2end_cifar20.yml
CUDA_VISIBLE_DEVICES=0 python test_end2end.py --config_env configs/env.yml --config_exp configs/end2end/end2end_imagenet10.yml
CUDA_VISIBLE_DEVICES=0 python test_end2end.py --config_env configs/env.yml --config_exp configs/end2end/end2end_imagenet_dogs.yml
CUDA_VISIBLE_DEVICES=0 python test_end2end.py --config_env configs/env.yml --config_exp configs/end2end/end2end_tiny_imagenet.yml
CUDA_VISIBLE_DEVICES=0 python test_end2end.py --config_env configs/env.yml --config_exp configs/end2end/end2end_stl10.yml
```
Fine-tune with self-labeling on CIFAR-10, CIFAR-20, or STL-10:

```shell
CUDA_VISIBLE_DEVICES=0 python selflabel.py --config_env configs/env.yml --config_exp configs/selflabel/selflabel_cifar10.yml
CUDA_VISIBLE_DEVICES=0 python selflabel.py --config_env configs/env.yml --config_exp configs/selflabel/selflabel_cifar20.yml
CUDA_VISIBLE_DEVICES=0 python selflabel.py --config_env configs/env.yml --config_exp configs/selflabel/selflabel_stl10.yml
```
- Our results are 12.9%, 10.5%, and 4.1% higher than those of the second-best method, DRC, on CIFAR-10, CIFAR-100, and STL-10, respectively. The results show some run-to-run variance, so better numbers may be obtained over multiple runs.
Dataset | Loss | ACC (%) | NMI (%) | ARI (%) | Download link
---|---|---|---|---|---
CIFAR-10 | RGC+AGC | 85.9 | 77.2 | 73.4 | Download
CIFAR-100 | RGC+AGC | 48.1 | 48.1 | 31.8 | Download
STL-10 | RGC+AGC | 78.0 | 68.6 | 63.5 | Download
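For reference, the ACC numbers above are the standard clustering accuracy: the best one-to-one matching between predicted clusters and ground-truth classes, found with the Hungarian algorithm. A minimal sketch (`clustering_accuracy` is a hypothetical helper for illustration, not a function from this repo):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def clustering_accuracy(y_true, y_pred):
    """Clustering ACC: accuracy under the best one-to-one mapping
    between predicted cluster ids and ground-truth class labels,
    computed with the Hungarian algorithm."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    k = int(max(y_true.max(), y_pred.max())) + 1
    # counts[p, t] = number of samples in predicted cluster p with true label t
    counts = np.zeros((k, k), dtype=np.int64)
    for t, p in zip(y_true, y_pred):
        counts[p, t] += 1
    # Hungarian algorithm minimizes cost, so negate the counts via max - counts
    row, col = linear_sum_assignment(counts.max() - counts)
    return counts[row, col].sum() / y_true.size
```

NMI and ARI in the table can be computed with `sklearn.metrics.normalized_mutual_info_score` and `sklearn.metrics.adjusted_rand_score` on the same label arrays.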
- Set `root_dir` in `configs/env.yml` to the directory containing the downloaded models, then run the test scripts.
If you use GCC in your research or wish to refer to the baseline results published in this paper, please use the following BibTeX entry.
```bibtex
@article{zhong2021graph,
  title={Graph Contrastive Clustering},
  author={Zhong, Huasong and Wu, Jianlong and Chen, Chong and Huang, Jianqiang and Deng, Minghua and Nie, Liqiang and Lin, Zhouchen and Hua, Xian-Sheng},
  journal={arXiv preprint arXiv:2104.01429},
  year={2021}
}
```