Instead of training the move played to a 100% prior, there should be an option to assign it X% (say 50%) and use a Leela network at Y nodes (say 800) to determine the rest. The data would be saved in a format that includes Q and node counts, so it could be extended later for ensembling, similar to cyan's training patch: LeelaChessZero/lczero-training@adc3ddd, described in this blog post: https://blog.lczero.org/2018/10/understanding-training-against-q-as.html#more
But I believe he does it in real time instead of ahead of time, and doesn't offer this kind of flexibility...
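As a rough illustration of the blending being requested (function name, array shapes, and the NumPy-based sketch are my assumptions, not anything from the lczero codebase): the policy target would be a weighted mix of a one-hot vector for the move actually played and a distribution derived from a Leela search at Y nodes, e.g. normalized visit counts.

```python
import numpy as np

def blend_policy_target(played_move_idx, search_policy, played_weight=0.5):
    """Blend a one-hot target for the move actually played with a
    search-derived policy (e.g. visit counts from an 800-node search).

    played_weight is the X% given to the played move; the remaining
    (1 - X%) mass follows the search distribution.
    """
    search_policy = np.asarray(search_policy, dtype=np.float64)
    search_policy = search_policy / search_policy.sum()  # normalize visit counts
    one_hot = np.zeros_like(search_policy)
    one_hot[played_move_idx] = 1.0
    return played_weight * one_hot + (1.0 - played_weight) * search_policy

# Played move 0 at 50%, search visits [10, 5, 5] for the rest:
target = blend_policy_target(0, [10, 5, 5], played_weight=0.5)
print(target)  # [0.75, 0.125, 0.125]
```

With X stored alongside Q and node counts in the training data, the same game could be re-blended later at a different X without regenerating the searches.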