Congratulations on the great work. I have some questions on the backbone used.
Was it pretrained and frozen for feature extraction, or was it fine-tuned in a supervised fashion starting from the self-supervised pretrained weights (it looks like the latter)?
If fine-tuned, did you unfreeze all layers or only a few? Did you do an ablation on how many layers to unfreeze?
Did you try using the frozen features and see how they perform with respect to localization? It would be helpful if you could shed some light on these points.
Why did you choose SimMIM; for example, why not MAE? Did you try both and find that SimMIM works better?
I am sorry if I am asking any redundant questions. It would be helpful to have some insights into these aspects.
Thanks a lot, again!
We unfreeze all layers by default and have not tried freezing a subset of them.
Nonetheless, we employ a layer-wise learning rate decay strategy for masked image modeling (MIM) pre-trained models, a technique commonly used when fine-tuning MIM models. This strategy assigns a smaller learning rate to the shallower layers and a larger learning rate to the deeper ones, following the formula `lr = base_lr * decay_rate ** (num_layers - layer_depth)`, where `decay_rate` is less than or equal to 1.
By adjusting `decay_rate`, we can potentially achieve an effect similar to freezing some layers.
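For illustration, here is a minimal sketch of how such layer-wise learning rate decay can be set up in PyTorch. The helper `get_layer_depth`, the depth convention (deepest layer gets `base_lr`), and the example hyperparameters are assumptions for this sketch, not the exact implementation used in this repo.

```python
import torch

def build_param_groups(model, base_lr, decay_rate, num_layers, get_layer_depth):
    """Group parameters by layer depth and assign each group a decayed learning rate.

    get_layer_depth(name) is a user-supplied function mapping a parameter name to
    its layer index (1 = shallowest block, num_layers = deepest); the mapping from
    parameter names to depths is model-specific.
    """
    groups = {}
    for name, param in model.named_parameters():
        if not param.requires_grad:
            continue
        depth = get_layer_depth(name)
        # lr = base_lr * decay_rate ** (num_layers - layer_depth):
        # shallow layers get a smaller lr when decay_rate < 1.
        lr = base_lr * decay_rate ** (num_layers - depth)
        groups.setdefault(depth, {"params": [], "lr": lr})
        groups[depth]["params"].append(param)
    return list(groups.values())

# Hypothetical usage:
# param_groups = build_param_groups(backbone, base_lr=1e-4, decay_rate=0.9,
#                                   num_layers=24, get_layer_depth=my_depth_fn)
# optimizer = torch.optim.AdamW(param_groups, weight_decay=0.05)
```

Setting `decay_rate` close to 0 drives the shallow layers' learning rates toward zero, which approximates freezing them, while `decay_rate = 1` recovers a single uniform learning rate.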
We have not yet evaluated the performance of frozen features within the DETR framework.
In a previous study (paper), we examined the use of frozen features for downstream dense tasks and compared different pre-training methods. We found that the performance of MIM frozen features was subpar, but this could be a result of poor classification ability. We plan to evaluate their localization performance later.
We use the Swin Transformer as the backbone, and SimMIM provides pre-trained Swin Transformer checkpoints.
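As a rough sketch, loading a SimMIM-pretrained Swin checkpoint into a detector's backbone might look like the following; the file name, the `"model"` key, the `"encoder."` prefix, and the `detector.backbone` attribute are assumptions about how the released checkpoints are packaged and may differ in practice.

```python
import torch

# Illustrative only: path and state-dict layout are assumptions.
ckpt = torch.load("simmim_pretrain_swin_base.pth", map_location="cpu")
state_dict = ckpt.get("model", ckpt)

# Keep only the encoder (backbone) weights and drop the MIM decoder/head.
backbone_state = {k[len("encoder."):]: v for k, v in state_dict.items()
                  if k.startswith("encoder.")}

# `detector.backbone` is a placeholder for the Swin backbone inside the detection model.
missing, unexpected = detector.backbone.load_state_dict(backbone_state, strict=False)
print("missing keys:", len(missing), "unexpected keys:", len(unexpected))
```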