Hi, and sorry for the slow trickle of reports; I am testing this in between projects and as time allows.
I am pretty new to Python, but I am fairly sure I have found a bug in "peptide_coefficient_predictor.py": iterating the script over the datasets provided in the package, using the parameters from train_models.sh (same directory), it failed for some but not all datasets. After investigating, I realized that when splitting the data between train_runs and test_runs, for every dataset with an odd number of samples it tried to access a non-existent column index. The problem did not occur when the number of samples was even.
Proposed partial fix:
After the line
test_runs = np.array([[2*i, 2*i+1] for i in test_runs]).astype(int).ravel()
add:
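Something along these lines (a sketch, not necessarily the exact line; n_runs here is a placeholder for however the script stores the total number of runs, so the real names may differ):

# With an odd number of runs the pairing above can produce 2*i+1 == n_runs,
# i.e. an index one past the last column (valid columns are 0 to n_runs - 1).
# Keeping only indices that actually exist drops that column:
test_runs = test_runs[test_runs < n_runs]
# (train_runs would need the same guard if it is built by the same pairing.)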
I have tested it. I say it is a partial fix in that it fixes the immediate error; however, those datasets still fail. First, I get the following warning:
>>> model = define_model()
... (some messages)
WARNING:tensorflow:Output {0} missing from loss dictionary. We assume this was done on purpose. The fit and evaluate APIs will not be expecting any data to be passed to custom_loss_layer_7.
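For what it is worth, the warning by itself looks benign: as far as I can tell it is what Keras prints when a model adds its loss inside a layer via add_loss and is then compiled with loss=None, roughly the pattern below (CustomLossLayer is just a placeholder sketch, not the package's actual layer):

import tensorflow as tf
from tensorflow import keras

# Placeholder sketch of the add_loss pattern that triggers the warning above when
# the model is compiled with loss=None; the real custom_loss_layer surely differs.
class CustomLossLayer(keras.layers.Layer):
    def call(self, inputs):
        predictions, targets = inputs
        self.add_loss(tf.reduce_mean(tf.square(predictions - targets)))
        return predictions

# model.compile(optimizer="adam", loss=None)  # -> "Output ... missing from loss dictionary"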
then the following error:
>>> patience_count = 0
>>>
>>> for epoch in range(1000):
...
... history = model.fit(x = [X_train, C_train, joined_intensities],
... y = None,
... epochs=1, batch_size = batch_size)
... (etc)
AttributeError: 'Adam' object has no attribute 'get_updates'
TypeError: object of type 'NoneType' has no len()
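In case it is relevant: the get_updates error looks like what happens when code written for the old Keras optimizer API is run under TensorFlow/Keras 2.11 or newer, where get_updates was removed from the new optimizer classes. A possible workaround (untested here, and the script's actual compile call may look different) would be to switch to the legacy optimizer:

import tensorflow as tf

# Hypothetical sketch: the legacy Adam keeps the pre-2.11 optimizer API,
# including get_updates, on TensorFlow 2.11 through 2.15.
optimizer = tf.keras.optimizers.legacy.Adam(learning_rate=0.001)
# model.compile(optimizer=optimizer, loss=None)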
I do not know yet whether they are related. Anyway, I will investigate further in the next few days.
Actually, the fix only seemed partial because I was running the script through RStudio via reticulate. From the Windows command line it works for all datasets tested, so the fix seems to be complete.