
Testing trained model #243

Open
benliu961 opened this issue Apr 18, 2023 · 7 comments

Comments

@benliu961

Hello,

I have a trained model. How would I get predictions and testing accuracy using a test dataset?

Thank you!

@vondele
Member

vondele commented Apr 18, 2023

That would be the validation loss, but in practice we test newly trained nets by playing games with run_games.py; easy_train.py shows how this can be done. Nets that test well locally can also be tested on fishtest.

@benliu961
Author

If I fine-tuned the model so that it no longer predicts board evaluations, how would I test it?

@Blimpyway

What does it predict?

@benliu961
Author

It predicts a value that I have defined as Value of Computation. It's just some value between 0 and 1.

@Sopel97
Member

Sopel97 commented Apr 18, 2023

Unless this value represents some ground truth, there is no way to test it other than by playing games.

@benliu961
Author

Is there no way to test it via TensorFlow?

@Sopel97
Member

Sopel97 commented Apr 18, 2023

The best you can do is look at the loss.
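In concrete terms, "looking at the loss" just means computing the training objective over a held-out test set and checking whether it is low (and stays low on data the model never saw). A minimal sketch, assuming the model outputs a value in (0, 1) as described above and that binary cross-entropy is the objective; the data and the loss choice here are illustrative placeholders, not the actual nnue-pytorch setup:

```python
import math

def evaluation_loss(predictions, targets, eps=1e-7):
    """Mean binary cross-entropy between predictions in (0, 1) and 0/1 targets."""
    total = 0.0
    for p, t in zip(predictions, targets):
        p = min(max(p, eps), 1.0 - eps)  # clamp to avoid log(0)
        total += -(t * math.log(p) + (1.0 - t) * math.log(1.0 - p))
    return total / len(predictions)

# Hypothetical held-out test set: model outputs vs. labels you trust.
preds   = [0.9, 0.2, 0.7, 0.4]
targets = [1.0, 0.0, 1.0, 0.0]
print(round(evaluation_loss(preds, targets), 4))
```

The caveat from the comments still applies: a low loss only tells you the model matches your own labels. If "Value of Computation" has no external ground truth, only game play can tell you whether the net is actually useful.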
