
[20] SDMP: A Simple Data Mixing Prior for Improving Self-Supervised Learning #20

Dongwoo-Im opened this issue Feb 4, 2023 · 1 comment


Dongwoo-Im commented Feb 4, 2023

Links

One-line summary

  • The paper proposes a data mixing augmentation for self-supervised learning, similar to Mixup, CutMix, and ResizeMix. Applied to MoCo-v3 (contrastive learning) and DINO (knowledge distillation), both of which use a transformer backbone, it showed performance improvements.
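As context for the summary above, a minimal sketch of Mixup-style data mixing (a generic convex combination of two images, not the paper's exact SDMP implementation; `alpha` and the helper name are assumptions for illustration):

```python
import numpy as np

def mixup(x1, x2, alpha=1.0, rng=None):
    """Mixup-style mixing: a convex combination of two inputs.

    Generic sketch only, not SDMP's exact procedure. `alpha` is the
    Beta-distribution parameter controlling how strongly the samples mix.
    """
    rng = np.random.default_rng() if rng is None else rng
    lam = rng.beta(alpha, alpha)  # mixing coefficient lam in [0, 1]
    return lam * x1 + (1.0 - lam) * x2, lam

# Toy "images": constant arrays make the mixing easy to inspect.
a = np.zeros((4, 4, 3))
b = np.ones((4, 4, 3))
mixed, lam = mixup(a, b, alpha=1.0, rng=np.random.default_rng(0))
```

CutMix and ResizeMix differ only in *how* the two images are combined (pasting a rectangular region, or a resized copy, instead of pixel-wise blending); the self-supervised twist in the paper is using mixed samples as an additional training signal rather than for label smoothing.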

Why I chose it

  • Chosen to see how data mixing is applied to self-supervised learning.
  • That said, the paper does not seem to fundamentally address why data mixing helps. It is also unfortunate that the GitHub repo contains only the MoCo-v3 code, with no DINO code.

notion link
