No batchnorm layer in your source? #19
Comments

Sarthak Yadav opened the issue:
I am trying to use ResNet-50, but your src has no files corresponding to the batch_norm layers, so the compiled binary has no concept of "batch_norm". How can one build it with batch_norm support? Porting the latest Caffe batch_norm code into this tree obviously throws a lot of errors.
Kevin Lin replied:
Since this Caffe is an old version, it does not support batch normalization. One possible solution is to add our loss functions to a newer Caffe; then you can modify the last few layers of the deep network and learn binary codes.
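In case it helps others: porting a custom loss layer into a newer Caffe tree usually amounts to copying the layer sources over and declaring the layer's parameters in caffe.proto before rebuilding. A minimal sketch, assuming a hypothetical BinaryCodeLoss layer (the message names and field ID below are illustrative, not the ones in this repository):

```
// Sketch under assumptions -- names and the field ID are hypothetical.
// 1. Copy the layer header into include/caffe/layers/ and the .cpp/.cu
//    implementations into src/caffe/layers/ of the newer Caffe tree.
//    (The .cpp should already call REGISTER_LAYER_CLASS for the layer.)
// 2. Declare the layer's parameters in src/caffe/proto/caffe.proto by
//    extending the existing LayerParameter message, then rebuild Caffe:

message LayerParameter {
  // ... all existing fields stay as-is ...
  optional BinaryCodeLossParameter binary_code_loss_param = 147;  // next free ID
}

message BinaryCodeLossParameter {
  optional float weight = 1 [default = 1.0];  // hypothetical loss knob
}
```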
Sarthak Yadav replied:
I actually added the source code for the loss function implementations, but they would not compile with newer Caffe versions.
Kevin Lin replied:
Sorry for the late reply; I am too busy with current projects. If you want to generate binary codes quickly and avoid additional implementation work, you can just train the model with a latent layer + sigmoid activation + softmax loss (this is exactly our workshop paper). It already learns high-quality and fairly evenly distributed binary codes.
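A minimal prototxt sketch of that setup, in the spirit of the workshop paper (the layer names, the 48-bit code length, the "fc7" bottom, and the 1000-way classifier are assumptions for illustration, not files from this repository):

```
# Latent layer + sigmoid + softmax loss, appended after the last
# feature layer (assumed here to be "fc7"); 48 is an example code length.
layer {
  name: "latent"
  type: "InnerProduct"
  bottom: "fc7"
  top: "latent"
  inner_product_param { num_output: 48 }
}
layer {
  name: "encode"
  type: "Sigmoid"
  bottom: "latent"
  top: "encode"
}
layer {
  name: "fc_class"
  type: "InnerProduct"
  bottom: "encode"
  top: "fc_class"
  inner_product_param { num_output: 1000 }  # number of classes, assumed
}
layer {
  name: "loss"
  type: "SoftmaxWithLoss"
  bottom: "fc_class"
  bottom: "label"
  top: "loss"
}
```

At retrieval time, the sigmoid activations of the latent layer can be thresholded (e.g., at 0.5) to obtain the binary codes.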