model.evaluate() reproducibility problem #432
This is a better starting point for a batch-mode model; can you try adapting your code/model to this example instead? Cheers
Yes, I can, but I'm not sure how to do this. What is the goal and what should be adapted?
So can I change the model definition (as in 1 and 2) in the QM9 example and use my loader?
For 1 and 2, you don't have to use the same model, but I think it would be easier for you to start from that code, since you were struggling with batch mode. I also suggest using model subclassing as in the batch-mode example, instead of the old functional API of Keras that you are using in your code. Cheers
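The subclassing pattern being recommended here can be sketched roughly as below. This is a minimal, hypothetical sketch: `BatchGNN` and its plain `Dense`-based message passing are stand-ins (in the real example you would use Spektral's convolution layers), but the structure — a `tf.keras.Model` subclass whose `call` takes the batched `(x, a)` inputs — is the point.

```python
import tensorflow as tf

class BatchGNN(tf.keras.Model):
    # Hypothetical batch-mode model; Spektral conv layers would
    # replace the Dense layer in a real implementation.
    def __init__(self, channels=32, n_out=1):
        super().__init__()
        self.dense1 = tf.keras.layers.Dense(channels, activation="relu")
        self.pool = tf.keras.layers.GlobalAveragePooling1D()
        self.out = tf.keras.layers.Dense(n_out)

    def call(self, inputs):
        # x: node features (batch, N, F); a: adjacency (batch, N, N)
        x, a = inputs
        h = tf.matmul(a, self.dense1(x))  # crude message passing: A @ (X W)
        return self.out(self.pool(h))

model = BatchGNN()
x = tf.random.normal((2, 5, 3))          # 2 graphs, 5 nodes, 3 features
a = tf.eye(5, batch_shape=(2,))          # identity adjacency as a placeholder
y = model([x, a])
print(y.shape)  # (2, 1)
```

The advantage over the functional API here is that `call` can accept the loader's `(x, a)` tuple directly, without declaring symbolic `Input` tensors for each component.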
Thanks for the reply. I did the following
and this ends up with
This is because loader.load() returns:
and I got what I think is a somewhat similar problem, caused by dtype=object of Y. Both versions of the errors are attached: one where y in the Graph constructor has shape (N,), and one where it has shape (N, 1).
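The `dtype=object` situation described above is easy to reproduce with plain NumPy, independent of Spektral. The sketch below (hypothetical data, not the actual script) shows how labels of inconsistent shape end up as an object array, which TensorFlow cannot convert to a tensor, and how giving every graph a fixed-shape float label avoids it:

```python
import numpy as np

# Labels with inconsistent shapes can only be stored as dtype=object;
# TensorFlow will refuse to turn such an array into a tensor.
bad = np.array([np.zeros((3,)), np.zeros((4,))], dtype=object)
print(bad.dtype)  # object

# Giving every graph a label of the same fixed shape, cast to float32,
# yields a regular numeric array that Keras accepts.
good = np.stack([np.zeros((1,), dtype=np.float32) for _ in range(2)])
print(good.dtype, good.shape)  # float32 (2, 1)
```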
If the labels have dtype `object`, … Anyway, the loaders are there to simplify users' lives, but if they become a problem you can always write your data-loading pipeline from scratch so that you have full control over it. Writing a training loop in TF is pretty easy nowadays; there's an example here. The issue you mentioned is no longer relevant, and as you can see in the documentation:
Cheers
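For reference, the kind of hand-written training loop being suggested is short in modern TF. This is a minimal sketch with placeholder data and a placeholder `Sequential` model standing in for a Spektral model and loader:

```python
import tensorflow as tf

# Placeholder model and data; in practice these would come from a
# Spektral model and a loader's batches.
model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
opt = tf.keras.optimizers.Adam(1e-2)
loss_fn = tf.keras.losses.MeanSquaredError()

x = tf.random.normal((8, 4))
y = tf.zeros((8, 1))

for step in range(5):
    with tf.GradientTape() as tape:
        pred = model(x, training=True)
        loss = loss_fn(y, pred)
    grads = tape.gradient(loss, model.trainable_variables)
    opt.apply_gradients(zip(grads, model.trainable_variables))
print(float(loss) >= 0.0)  # True
```

Writing the loop yourself gives full control over batching, padding, and shuffling, which sidesteps the loader issues discussed above.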
I tried to mimic this example: https://github.com/danielegrattarola/spektral/blob/master/examples/node_prediction/citation_gcn.py
with custom data (multiple graphs and BatchLoader) and a regression task. The full Python script is in the attached file:
batchGCN.py.txt
The crucial part of the code:
I have a question: what should N be in my case? I guess it should be at least the size of the biggest graph, and any higher value just means extra padding and nothing more than extra training time. Is my guess correct? What happens when N is smaller than the number of nodes?
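The padding intuition from the question above can be made concrete. This sketch (hypothetical graph sizes; `pad_graph` is an illustrative helper, not a Spektral function) pads every graph to N = max nodes with zeros, which is essentially what batch mode requires; choosing a larger N only adds more zero rows:

```python
import numpy as np

def pad_graph(x, a, n):
    """Zero-pad node features x (n_i, F) and adjacency a (n_i, n_i) to n nodes."""
    n_i, f = x.shape
    xp = np.zeros((n, f), dtype=x.dtype)
    ap = np.zeros((n, n), dtype=a.dtype)
    xp[:n_i] = x
    ap[:n_i, :n_i] = a
    return xp, ap

sizes = [3, 5, 4]          # hypothetical node counts of three graphs
N = max(sizes)             # N must be at least the size of the biggest graph
graphs = [(np.ones((s, 2)), np.eye(s)) for s in sizes]
padded = [pad_graph(x, a, N) for x, a in graphs]
print([p[0].shape for p in padded])  # [(5, 2), (5, 2), (5, 2)]
```

With N smaller than the biggest graph, the slice assignments above would fail, which is the analogue of the shape errors you would see in batch mode.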
And a problem: multiple runs of the same model.evaluate() give different results, and this does not depend on N (i.e. for any tested N, bigger or smaller than the biggest graph, the fluctuations are observed). Is this a bug in my code or an issue with Spektral?
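One thing worth ruling out when chasing this kind of non-determinism is unseeded randomness. The sketch below (a generic recipe, not specific to this issue) pins the Python, NumPy, and TensorFlow RNGs; if the loader also shuffles between evaluations, that shuffling would additionally need to be disabled or seeded for repeated `evaluate()` calls to agree:

```python
import os
import random
import numpy as np
import tensorflow as tf

def set_all_seeds(seed=0):
    # Fix every RNG that Keras/TF code commonly draws from.
    os.environ["PYTHONHASHSEED"] = str(seed)
    random.seed(seed)
    np.random.seed(seed)
    tf.random.set_seed(seed)

# Resetting the seeds makes subsequent random ops repeatable:
set_all_seeds(0)
a = tf.random.normal((3,))
set_all_seeds(0)
b = tf.random.normal((3,))
print(bool(tf.reduce_all(a == b)))  # True
```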
epochs=20, N=10
epochs=20, N=100