
About gradient reversal layer #1

Open

OleNet opened this issue Jun 19, 2018 · 1 comment


@OleNet

OleNet commented Jun 19, 2018

I noticed that in your code, you do not apply a gradient reversal layer in the forward pass of the domain classifier.
I think this may be wrong according to the Domain Separation Networks paper.

@farnazj
Owner

farnazj commented Jun 20, 2018

The loss of the domain classifier and that of the encoder are in conflict with each other: it is desirable that the domain classifier be good at its job, while the encoder should produce representations whose domain the classifier cannot reliably predict. That is where the GRL comes into play. In the code, this is achieved by backpropagating a task loss that is the sum of the weighted losses of the encoders, the decoder, and the representation difference, with the weighted loss of the domain classifier subtracted from it.
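For reference, an explicit gradient reversal layer is usually implemented as a custom autograd function that acts as the identity in the forward pass and multiplies the incoming gradient by a negative factor in the backward pass. Below is a minimal PyTorch sketch (illustrative only, not code from this repository; the names `grad_reverse` and `lambd` are my own):

```python
import torch

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; scales the gradient by -lambd in the backward pass."""

    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reverse (and scale) the gradient flowing back toward the encoder.
        # The second return value is the gradient for lambd, which is not needed.
        return grad_output.neg() * ctx.lambd, None

def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)

# Typical usage: insert between the shared encoder and the domain classifier,
# e.g. domain_logits = domain_classifier(grad_reverse(shared_features, 0.25))
```

The loss subtraction described above flips the same sign on the gradient reaching the shared encoder, which is how the code achieves the GRL effect without an explicit reversal layer in the forward pass.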
