Guys, thank you for your paper and code.
The question is: is there any possibility of efficient end-to-end training, i.e., training the backbone (feature extractor) as well?
Would you suggest an efficient training scheme such as the following (a rough sketch is included below):
train the backbone first -> freeze the backbone -> train the g-cmvae -> unfreeze the backbone -> finetune everything with a smaller learning rate?
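To make the proposed schedule concrete, here is a minimal PyTorch sketch of the three stages. The `Backbone` and `GCMVAE` classes, the loaders, the loss functions, and the learning rates are all placeholders for illustration, not the actual components or hyperparameters from this repository:

```python
import torch
from torch import nn


def set_requires_grad(module: nn.Module, flag: bool) -> None:
    """Freeze or unfreeze all parameters of a module."""
    for p in module.parameters():
        p.requires_grad = flag


def run_stage(model, optimizer, loader, loss_fn, epochs):
    """Generic training loop for one stage of the schedule."""
    model.train()
    for _ in range(epochs):
        for x, y in loader:
            optimizer.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()
            optimizer.step()


backbone = Backbone()      # hypothetical feature extractor
model = GCMVAE(backbone)   # hypothetical g-cmvae wrapping the backbone

# Stage 1: pretrain the backbone alone (e.g. on a supervised proxy task).
opt = torch.optim.Adam(backbone.parameters(), lr=1e-3)
run_stage(backbone, opt, pretrain_loader, pretrain_loss, epochs=10)

# Stage 2: freeze the backbone and train only the g-cmvae part.
set_requires_grad(backbone, False)
trainable = [p for p in model.parameters() if p.requires_grad]
opt = torch.optim.Adam(trainable, lr=1e-3)
run_stage(model, opt, train_loader, vae_loss, epochs=20)

# Stage 3: unfreeze the backbone and finetune everything with a smaller lr.
set_requires_grad(backbone, True)
opt = torch.optim.Adam(model.parameters(), lr=1e-5)
run_stage(model, opt, train_loader, vae_loss, epochs=5)
```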