In this GitHub repo, we used a pre-trained eta encoder with frozen weights. We found that using a pretrained eta encoder helps the attention mechanism better capture subtle differences between source contrasts. The pretrained eta-encoder weights are incorporated into the HACA3 weights.
I see that the pretrained weights are indeed provided.
However, pretrained weights cannot always be used, especially when training HACA3 on our own dataset.
I see that training code is provided for every other module (beta encoder, theta encoder, decoder, patchifier, attention module), but not for the eta encoder. The contrastive loss for the eta encoder also seems to be missing from the implementation (please correct me if I am wrong).
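For context, a contrastive loss for a contrast encoder typically pulls together embeddings of two views of the same source contrast and pushes apart embeddings of different contrasts. HACA3's actual eta-encoder training objective is not in the repo, so the sketch below is only an assumption: a generic NT-Xent-style loss in PyTorch, where `z1` and `z2` are hypothetical eta-encoder embeddings of two augmented views of the same images.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.1) -> torch.Tensor:
    """NT-Xent-style contrastive loss (a sketch, not HACA3's actual objective).

    z1, z2: (N, D) embeddings of two views of the same N samples;
    z1[i] and z2[i] form the positive pair, all other rows are negatives.
    """
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    z = torch.cat([z1, z2], dim=0)                     # (2N, D)
    sim = z @ z.t() / temperature                      # cosine similarities, (2N, 2N)
    # Mask out self-similarity so a sample cannot match itself.
    mask = torch.eye(sim.shape[0], dtype=torch.bool, device=sim.device)
    sim = sim.masked_fill(mask, float("-inf"))
    n = z1.shape[0]
    # Row i's positive is row i+n (and vice versa).
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(sim.device)
    return F.cross_entropy(sim, targets)
```

Training the eta encoder with such a loss would additionally require a positive-pair sampling strategy (e.g. two crops of the same contrast), which is also unspecified in the repo.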
It would be great if this exclusion of the eta-encoder training implementation could be explicitly mentioned in the README.md. Furthermore, would it be possible for you to provide code / an implementation that includes training of the eta encoder?
Once again, thanks for your response. Looking forward to hearing back.
Hi,
I see that the eta encoder is not being trained. Is this correct? If yes, is this intentional? Thanks