
rank_hard_loss_layer.cpp #2

Open
Robert0812 opened this issue Sep 30, 2015 · 6 comments
@Robert0812

How much does tloss2 contribute in the forward pass? I notice that you only have the term D(x, x-), rather than D(x+, x-), in your ICCV paper.

@kli-casia

I have the same problem

@xiaolonw (Owner) commented Dec 6, 2015

Just see it as an easy way to increase training samples.
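For later readers, here is a minimal sketch of one plausible reading of that remark (the names sqdist and triplet_pair_loss, the example values, and the exact form of tloss2 are illustrative assumptions, not taken from the repo): tloss1 is the hinge on D(x, x-) as in the paper, while tloss2 repeats the hinge with the anchor and positive swapped so that D(x+, x-) also appears, and each sampled triplet then contributes two constraints instead of one.

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdio>
#include <vector>

// Squared Euclidean distance between two feature vectors.
double sqdist(const std::vector<double>& a, const std::vector<double>& b) {
  double d = 0.0;
  for (std::size_t i = 0; i < a.size(); ++i)
    d += (a[i] - b[i]) * (a[i] - b[i]);
  return d;
}

// One reading of the two hinge terms: tloss1 is the usual triplet hinge on
// the anchor x; tloss2 repeats it with the anchor and positive swapped, so
// the same triplet (x, xp, xn) yields two constraints instead of one.
double triplet_pair_loss(const std::vector<double>& x,
                         const std::vector<double>& xp,
                         const std::vector<double>& xn,
                         double margin) {
  double tloss1 = std::max(0.0, sqdist(x, xp) - sqdist(x, xn) + margin);
  double tloss2 = std::max(0.0, sqdist(xp, x) - sqdist(xp, xn) + margin);
  return tloss1 + tloss2;
}

int main() {
  std::vector<double> x = {0.1, 0.9}, xp = {0.2, 0.8}, xn = {0.9, 0.1};
  std::printf("loss = %.4f\n", triplet_pair_loss(x, xp, xn, 1.0));
  return 0;
}
```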

@kli-casia

@xiaolonw that makes sense, thank you Xiaolong

@icodingc commented Jun 7, 2016

@xiaolonw
rank_hard_loss.cpp, line 228.
If loss = max(0, ||x - x1||^2 - ||x - x2||^2 + margin),
my understanding is that the gradients should be computed as:
dloss/dx = 2(x2 - x1)
dloss/dx1 = -2(x - x1)
dloss/dx2 = 2(x - x2)
But the code does not match my understanding. Am I misunderstanding something? Could you please explain?
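For what it's worth, those formulas are correct for a plain squared Euclidean distance; a small finite-difference sketch can confirm the dloss/dx term (the names and example values here are illustrative, not from rank_hard_loss.cpp, and the hinge is assumed active so the max() does not clip):

```cpp
#include <cstddef>
#include <cstdio>
#include <vector>

// Hinge argument for squared Euclidean distance; the max() is assumed active.
double loss(const std::vector<double>& x, const std::vector<double>& x1,
            const std::vector<double>& x2, double margin) {
  double d1 = 0.0, d2 = 0.0;
  for (std::size_t i = 0; i < x.size(); ++i) {
    d1 += (x[i] - x1[i]) * (x[i] - x1[i]);
    d2 += (x[i] - x2[i]) * (x[i] - x2[i]);
  }
  return d1 - d2 + margin;
}

int main() {
  std::vector<double> x = {0.3, -0.1}, x1 = {0.2, 0.4}, x2 = {-0.5, 0.1};
  const double margin = 1.0, eps = 1e-6;
  for (std::size_t i = 0; i < x.size(); ++i) {
    std::vector<double> xp = x;
    xp[i] += eps;  // perturb one coordinate of x
    double numeric = (loss(xp, x1, x2, margin) - loss(x, x1, x2, margin)) / eps;
    double analytic = 2.0 * (x2[i] - x1[i]);  // dloss/dx = 2(x2 - x1)
    std::printf("dloss/dx[%zu]: numeric %.4f, analytic %.4f\n",
                i, numeric, analytic);
  }
  return 0;
}
```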

@xiaolonw (Owner) commented Jun 7, 2016

@icodingc

That is because we are using cosine distance: after the normalization layer, ||x|| = 1 and ||x1|| = 1. Thus loss = max(0, (2 - 2 x·x1) - (2 - 2 x·x2) + margin).
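A minimal sketch of that normalized form, assuming unit-length features (the helper names and example values are illustrative, not from the repo): with ||x|| = ||x1|| = ||x2|| = 1 we get ||x - xi||^2 = 2 - 2 x·xi, so the Euclidean and cosine hinges coincide, and the gradients with respect to the normalized features become dloss/dx = 2(x2 - x1), dloss/dx1 = -2x, dloss/dx2 = 2x, which is why the backward pass differs from the plain Euclidean case.

```cpp
#include <cmath>
#include <cstddef>
#include <cstdio>
#include <vector>

// Dot product of two feature vectors.
double dot(const std::vector<double>& a, const std::vector<double>& b) {
  double s = 0.0;
  for (std::size_t i = 0; i < a.size(); ++i) s += a[i] * b[i];
  return s;
}

// L2-normalize a vector in place (stand-in for the normalization layer).
void normalize(std::vector<double>& v) {
  double n = std::sqrt(dot(v, v));
  for (double& e : v) e /= n;
}

int main() {
  std::vector<double> x  = {0.3, -0.1, 0.7};
  std::vector<double> x1 = {0.2,  0.4, 0.1};
  std::vector<double> x2 = {-0.5, 0.1, 0.3};
  normalize(x); normalize(x1); normalize(x2);
  const double margin = 0.5;

  // With unit vectors, ||x - xi||^2 = 2 - 2 x.xi, so both forms agree.
  double euclid = (2 - 2 * dot(x, x1)) - (2 - 2 * dot(x, x2)) + margin;
  double cosine = 2 * dot(x, x2) - 2 * dot(x, x1) + margin;
  std::printf("euclidean hinge %.6f, cosine hinge %.6f\n", euclid, cosine);

  // When the hinge is active, gradients w.r.t. the normalized features are:
  //   dloss/dx  = 2 * (x2 - x1)
  //   dloss/dx1 = -2 * x
  //   dloss/dx2 =  2 * x
  return 0;
}
```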

@icodingc commented Jun 7, 2016

Thank you, Xiaolong.
