How can we compare the true labels to the predicted labels? #233

Open
ghost opened this issue Jul 13, 2018 · 0 comments
ghost commented Jul 13, 2018

Hi,
This is a really cool competition. Unfortunately, we could not join it; I wish we could have. I have checked most of the repos and realised that the true labels file, 'blind_stuart_crawford_core_facies.csv', has 890 rows, while the teams' predicted submissions have only 830 rows. Is this a mistake, or was the repo updated? How can we compute our test accuracy ourselves? Thank you for your time,
Vural
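
For anyone landing here with the same question, below is a minimal sketch of how one might score a submission against the true labels despite the 890-vs-830 row mismatch: merge the two CSVs on well name and depth so that only rows present in both files are compared. The file name `my_team_predictions.csv` and the column names `Well Name`, `Depth`, and `Facies` are assumptions based on the contest's data layout, not confirmed from this repo, so adjust them to your local copies. The contest's official metric may also differ from plain accuracy.

```python
import pandas as pd
from sklearn.metrics import accuracy_score

# Hypothetical file names and column names -- adjust to your local copies.
true_df = pd.read_csv("blind_stuart_crawford_core_facies.csv")
pred_df = pd.read_csv("my_team_predictions.csv")

# Align the 890 true rows with the 830 predicted rows by merging on
# well name and depth; only depths present in both files are scored.
merged = true_df.merge(
    pred_df,
    on=["Well Name", "Depth"],
    suffixes=("_true", "_pred"),
)

# Compare the aligned label columns. Note: this is plain accuracy;
# the contest may have used a different metric (e.g. an F1 variant).
acc = accuracy_score(merged["Facies_true"], merged["Facies_pred"])
print(f"Scored {len(merged)} overlapping rows, accuracy = {acc:.3f}")
```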
