
[18] MoCo v3: An Empirical Study of Training Self-Supervised Vision Transformers #18

Open
Dongwoo-Im opened this issue Jan 5, 2023 · 0 comments


Links

One-line summary

  • This paper proposes MoCo v3, which improves on MoCo v2+ by adopting BYOL's prediction head and a large batch size. In addition, the authors run a range of experiments with ViT as the backbone and show that a fixed random patch projection trick improves training stability.
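The fixed random patch projection trick amounts to freezing the randomly initialized patch embedding layer so it never receives gradient updates. A minimal sketch assuming PyTorch, where `patch_proj` is a hypothetical stand-in for a ViT patch embedding (not the paper's actual code):

```python
import torch
import torch.nn as nn

# Illustrative ViT stem: a Conv2d that projects 16x16 patches to 768-dim tokens.
patch_proj = nn.Conv2d(3, 768, kernel_size=16, stride=16)

# The stability trick: freeze the randomly initialized patch projection
# so its weights stay fixed throughout self-supervised training.
for p in patch_proj.parameters():
    p.requires_grad = False

x = torch.randn(2, 3, 224, 224)                    # batch of two 224x224 RGB images
tokens = patch_proj(x).flatten(2).transpose(1, 2)  # (B, 196, 768) patch tokens
```

Downstream, only the transformer blocks and heads are trained; the frozen random projection still produces usable patch tokens.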

Reason for selection

  • The paper's extensive experiments with a ViT backbone seemed likely to help with our project.