I noticed that in your code you do not apply a gradient reversal layer in the forward pass through the domain classifier.
I think this may be wrong according to the Domain Separation Networks paper.
The loss of the domain classifier and that of the encoder are in conflict with each other: we want the domain classifier to be good at its job, yet still unable to reliably predict the domain from the encoded representation. That is where the GRL comes into play. In this code, the same effect is achieved by backpropagating a task loss that is the sum of the (weighted) losses of the encoders, the decoder, and the representation difference, with the (weighted) loss of the domain classifier subtracted from it.
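To see why subtracting the weighted domain loss is equivalent to a GRL, here is a minimal numerical sketch (a toy scalar parameter with made-up quadratic losses, not the repo's actual code): the gradient of `task_loss - w * domain_loss` matches the gradient you would get by negating the domain classifier's gradient with a GRL of coefficient `w`.

```python
def domain_loss(theta):
    # toy stand-in for the domain-classifier loss
    return (theta - 2.0) ** 2

def task_loss(theta):
    # toy stand-in for the combined encoder/decoder/difference losses
    return (theta + 1.0) ** 2

def grad(f, theta, eps=1e-6):
    # central finite difference; exact up to float noise for quadratics
    return (f(theta + eps) - f(theta - eps)) / (2 * eps)

w = 0.5      # weight on the domain-classifier loss
theta = 0.3  # toy shared-encoder parameter

# Formulation used in this code: subtract the weighted domain loss.
g_subtracted = grad(lambda t: task_loss(t) - w * domain_loss(t), theta)

# GRL formulation: the domain gradient is sign-flipped before it
# reaches the encoder, then added with weight w.
g_grl = grad(task_loss, theta) + w * (-grad(domain_loss, theta))

print(abs(g_subtracted - g_grl) < 1e-6)  # → True: the two agree
```

Either way, the encoder is pushed in the direction that makes the domain classifier worse, which is the adversarial behavior the paper's GRL is meant to produce.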