
Results reproducibility #13

Open · jacopocavazza opened this issue Oct 4, 2016 · 3 comments

@jacopocavazza

Hi @kracwarlock! Thank you for sharing the code for your amazing paper! To reproduce your published results, I was wondering how you selected the validation split for the HMDB-51 and Hollywood2 datasets. For those two datasets, could you please share your valid_labels.txt, train_labels.txt, test_labels.txt, train_filenames.txt, test_filenames.txt, and valid_filenames.txt files? I would really appreciate it :)

@kracwarlock (Owner)

HMDB-51 provides three train-test splits; we used split 1. Hollywood2 also provides a train-test split. For validation we used 15% of the training data as the validation set and the remaining 85% for training. We noted the training cost at the point of best performance on the validation set, then trained on the entire training data until the cost reached that noted value.
Will share the txt files.
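For anyone trying to reproduce this before the txt files are shared, here is a minimal sketch of the protocol described above, assuming a plain random (unstratified) 85/15 split of the official training list. The callables `build_model`, `train_epoch`, and `evaluate` are hypothetical placeholders you would supply, not functions from this repository.

```python
# Minimal sketch of the 85/15 split and the two-phase training protocol
# described above. build_model / train_epoch / evaluate are hypothetical
# callables supplied by the user, NOT functions from this repository.
import random

def write_validation_split(filenames, labels, valid_frac=0.15, seed=0):
    """Shuffle the official training split and write the four txt files."""
    pairs = list(zip(filenames, labels))
    random.Random(seed).shuffle(pairs)
    n_valid = int(round(len(pairs) * valid_frac))
    splits = {"valid": pairs[:n_valid], "train": pairs[n_valid:]}
    for name, split in splits.items():
        with open(f"{name}_filenames.txt", "w") as f_files, \
             open(f"{name}_labels.txt", "w") as f_labels:
            for fname, label in split:
                f_files.write(fname + "\n")
                f_labels.write(str(label) + "\n")

def two_phase_training(build_model, train_epoch, evaluate,
                       train_85, valid_15, train_full, max_epochs=200):
    # Phase 1: train on 85%, and record the training cost at the epoch
    # with the best validation performance.
    model, best_acc, target_cost = build_model(), float("-inf"), float("inf")
    for _ in range(max_epochs):
        cost = train_epoch(model, train_85)
        acc = evaluate(model, valid_15)
        if acc > best_acc:
            best_acc, target_cost = acc, cost
    # Phase 2: retrain from scratch on the entire training data until
    # the cost reaches the value noted in phase 1.
    model = build_model()
    for _ in range(max_epochs):
        if train_epoch(model, train_full) <= target_cost:
            break
    return model
```

The comment above does not say whether the 15% was class-balanced, so treat the unstratified shuffle here as an assumption.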

@WeihongM

Hi @kracwarlock, I also hope you can share your txt files. Thanks!

@mpkuse commented Oct 22, 2018

I am trying to figure out how the first attention map (l1 in the attached figure) is initialized.

[attached figure, showing l1]
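One plausible reading of the paper's initialization, in case it helps: the initial states c0 and h0 are predicted by MLPs from the feature cube averaged over all frames and locations, and l1 is then a softmax over a linear map of h0. The sketch below encodes that reading only; all shapes and weight matrices are illustrative assumptions, not code or parameters from this repo.

```python
# Sketch of one reading of the paper's initialization: c0/h0 come from MLPs
# over the time- and location-averaged feature cube, and the first attention
# map l1 is a softmax over a linear function of h0. All shapes and weights
# here are illustrative assumptions, not taken from this repository.
import numpy as np

rng = np.random.default_rng(0)
T, K, D, H = 30, 7, 1024, 512              # frames, KxK grid, feature dim, LSTM dim
X = rng.standard_normal((T, K * K, D))     # CNN feature cube

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Initialization MLPs (single layers here for brevity)
W_h, b_h = 0.01 * rng.standard_normal((H, D)), np.zeros(H)
W_c, b_c = 0.01 * rng.standard_normal((H, D)), np.zeros(H)
W_l, b_l = 0.01 * rng.standard_normal((K * K, H)), np.zeros(K * K)

x_mean = X.mean(axis=(0, 1))               # average over frames and locations
h0 = np.tanh(W_h @ x_mean + b_h)           # initial hidden state
c0 = np.tanh(W_c @ x_mean + b_c)           # initial cell state
l1 = softmax(W_l @ h0 + b_l)               # first attention over the K*K locations
x1 = l1 @ X[0]                             # attention-weighted input for step 1
```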
