Question about loss #6
Hi @XiaoxxWang, our PPC loss takes the form of InfoNCE, which is in essence a cross-entropy loss. In our context, you can understand PPC as follows: for each pixel, we aim to identify its assigned (positive) prototype among a set of negative prototypes.
Thanks for your reply, but I still don't fully understand the PPC loss. The code defines it as the cross-entropy of `proto_logits` with `proto_targets`. `proto_logits` is the product of the features and the prototypes, but what does `proto_targets` mean? It does not seem to be the ground truth, which does not agree with the InfoNCE loss.
@XiaoxxWang Just read the code more carefully... For InfoNCE, you also have the ground truth: you know which sample is positive and which ones are negative, but this ground truth is obtained for free (from the prototype assignment, not from manual labels).
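To make the two comments above concrete, here is a minimal PyTorch sketch of InfoNCE written as cross-entropy over prototype logits. The names `features`, `prototypes`, and `temperature` are illustrative; only `proto_logits` and `proto_targets` come from the thread, and this is not the repository's exact code:

```python
import torch
import torch.nn.functional as F

def ppc_loss(features, prototypes, proto_targets, temperature=1.0):
    """Sketch of a PPC-style loss: InfoNCE implemented as cross-entropy.

    features:      (N, D) L2-normalized pixel embeddings
    prototypes:    (K, D) L2-normalized prototypes
    proto_targets: (N,)   index of each pixel's assigned (positive) prototype,
                          known "for free" from the assignment step
    """
    # Cosine similarity between every pixel and every prototype.
    proto_logits = features @ prototypes.t()  # (N, K)
    # InfoNCE reduces to cross-entropy: the assigned prototype plays the
    # role of the correct class; all other prototypes act as negatives.
    return F.cross_entropy(proto_logits / temperature, proto_targets)
```

So `proto_targets` is not a manual segmentation label; it is the index of the positive prototype, which is why the loss is both contrastive in spirit and cross-entropy in implementation.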
I also have a question: the PPC code does not seem to include the temperature coefficient. Or is the temperature set to 1 by default, which differs from the 0.1 given in the paper?
The temperature coefficient in the paper is set to 0.1; however, in the source code it is omitted. In other words, it defaults to 1 in the source code. Does this have any influence on performance?
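For reference, the temperature only enters as a divisor on the logits before the softmax inside the cross-entropy; a smaller temperature sharpens the distribution over prototypes. Continuing the hypothetical `proto_logits`/`proto_targets` names from the sketch above:

```python
import torch.nn.functional as F

# tau = 0.1 as stated in the paper; tau = 1 matches the released code,
# where no division appears (dividing by 1 is a no-op).
loss_paper = F.cross_entropy(proto_logits / 0.1, proto_targets)
loss_code  = F.cross_entropy(proto_logits, proto_targets)
```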
Hi, I'm interested in your work. After reading the paper, I'm confused: the paper describes the PPC loss as a contrastive learning strategy, but according to the code it is implemented with a cross-entropy loss. Hope to receive your reply, thanks.