Thermostability models on SaprotHub Huggingface – reproducibility #15
Comments
Hi, thank you very much for pointing that out! The 35M version was just a test model from when we were developing SaprotHub. It was trained on different dataset splits (we now split the data based on structure similarity, but we did not at that time). We have deleted it to avoid misunderstanding.
Could you give more information about what you input to the model? If you want to reproduce the training of the models in the paper, you can just load the dataset from SaprotHub and train with the default config, which is a general config suitable for all tasks.
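If it helps, here is a minimal sketch of that workflow, assuming the dataset repo lives under the SaProtHub organization on Hugging Face; the repo id below is a placeholder, not the actual dataset name:

```python
# Hypothetical sketch: fetch a SaprotHub dataset repo and point the
# training notebook at the downloaded LMDB folder.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="SaProtHub/Dataset-Thermostability",  # placeholder repo id
    repo_type="dataset",
)
print(local_dir)  # pass this path to the default training config in the Colab notebook
```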
Thanks for the fast response. Yes, I can post the notebook I used to evaluate the 650M (and 35M) models. It is the Colab notebook, stripped down. The most interesting cell is the last one, where the predictions are made; for some reason only zeros are returned, even though the batch seems fine (and worked with the 35M): https://gist.github.com/adam-kral/d8c82f02f77ae0ec1c0f9255b29c3ab6 I ran it inside a cloned SaprotHub repo, in the colab folder. In addition, I downloaded the LMDB dataset from Hugging Face manually.
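For reference, a quick way to sanity-check the manually downloaded LMDB split before feeding it to the model. This assumes a plain LMDB environment; the actual key/value layout of the SaprotHub datasets may differ, and the path is a placeholder:

```python
# Inspect the LMDB split: count entries and peek at a few raw records.
import lmdb

env = lmdb.open("path/to/valid", readonly=True, lock=False)  # placeholder path
with env.begin() as txn:
    print("entries:", txn.stat()["entries"])
    with txn.cursor() as cur:
        for i, (key, value) in enumerate(cur):
            print(key, value[:80])  # first bytes of each record
            if i >= 2:
                break
env.close()
```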
An interesting bug! However, it's weird that on CPU the model outputs NaNs. I transferred the model to CPU and it still produced normal outputs. I'm not sure whether it is caused by an incompatibility between the packages and your hardware...
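A minimal check along those lines, assuming a PyTorch model; `model` and `batch` stand in for whatever the notebook builds, and the call signature is only illustrative:

```python
import torch

# Run one batch on CPU and inspect the raw outputs for NaNs / all-zeros.
model = model.to("cpu").eval()
with torch.no_grad():
    out = model(**batch)  # illustrative call signature
preds = out if torch.is_tensor(out) else out.logits  # adjust to the actual output type

print("any NaN:", torch.isnan(preds).any().item())
print("all zero:", (preds == 0).all().item())
print("first values:", preds.flatten()[:5])
```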
Hi, I have a couple of questions regarding the two thermostability models published on Hugging Face:
I have downloaded both to evaluate them. The readme includes some training info and the test Spearman for the 650M model, but there is none for the 35M.
When I evaluated the 35M model, I got a Spearman of 0.87 (0.91) on the valid (test) split, which is much better than the 0.697 reported for the 650M in the paper (or 0.706 in the model's readme). Was the 35M model trained on the same dataset splits?
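For context, the metric above is plain rank correlation between predictions and labels, computed along these lines (`preds` and `labels` are placeholders for the collected arrays):

```python
from scipy.stats import spearmanr

# Rank correlation between predicted and measured thermostability values.
rho, pval = spearmanr(preds, labels)
print(f"Spearman rho: {rho:.3f} (p = {pval:.2e})")
```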
Also, when I tried to evaluate the downloaded 650M model in exactly the same way as I had successfully done with the 35M model, the model outputs/predictions were all zeros.
So, why does the 35M model perform so well, and how do I make the 650M return non-zero predictions?
Or, is it possible to rerun the training of the model from the paper, or of the published models, with their original configs, and where would I find them?
Thanks!