Problem running the training script #10
Comments
Did you change any hyperparameters in the hyperparam.json file? What Python version do you use? Try replacing cv = Lambda(getCostVolume, arguments = {'max_d':d/2}, output_shape = (d/2, None, None, num_filters * 2))(unifeature) with cv = Lambda(getCostVolume, arguments = {'max_d':int(d/2)}, output_shape = (int(d/2), None, None, num_filters * 2))(unifeature)
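The cast matters because Python 3's / always returns a float, while the disparity-related sizes must be integers. A minimal sketch of the difference (the value of d below is only an illustrative assumption, not taken from hyperparam.json):

```python
d = 192            # illustrative value; the real d comes from hyperparam.json
print(d / 2)       # 96.0 -> float under Python 3
print(d // 2)      # 96   -> integer floor division
print(int(d / 2))  # 96   -> the explicit cast suggested above
```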
Yes, I am using Python 3.5 and I have not changed the hyperparameters. I have replaced the code and the previous error is gone. However, now I get:
Hi, I have updated the code. Please download the new version and run "python train.py" to see if it works.
Hi. I have had to unify the use of tabs and spaces in the files (Python 3.5 complains a lot...) and to make some minor modifications to the load_pfm file. It is now running, although it is very slow. I am running it on a TITAN X, but it seems the program is not using it properly: the Volatile GPU-Util is at 0% most of the time while running. Any idea what I am missing? Thanks
Are you sure that you're running the job on the GPU?
I think so. With the log_device_placement flag I can see that the tasks are assigned to the GPU. Indeed, the GPU memory is allocated.
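If it helps anyone reproduce this check: assuming the TensorFlow 1.x backend and standalone Keras used here, one way to enable that flag for the whole training run is a sketch like the following (the allow_growth option is an extra assumption, not something the repository sets):

```python
import tensorflow as tf
from keras import backend as K

# Log the device (CPU/GPU) each op is placed on, and let GPU memory
# grow on demand instead of being reserved up front.
config = tf.ConfigProto(log_device_placement=True)
config.gpu_options.allow_growth = True
K.set_session(tf.Session(config=config))
```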
I don't know how that happened, but you can look at a similar issue here: tensorflow/tensorflow#543
Thanks. Since I have been changing tabs and spaces, I may have unintentionally changed the code. Could you check that the GCnetwork is right in this file?
I found the problem. It was in the generator.py file: I had messed up the tabs and spaces. It is now running properly on the GPU.
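For anyone who hits the same indentation trap, the standard-library tabnanny module can flag ambiguous tab/space mixes before they cause trouble; a minimal sketch, using the file name mentioned above:

```python
import tabnanny

# Prints the file and line number of any indentation that mixes tabs
# and spaces ambiguously; stays silent when the file is clean.
tabnanny.check("generator.py")
```

The same check is available from the command line as python -m tabnanny generator.py.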
It seems they changed the API in Keras 2.0. That is just a warning, though.
Hi,
Congrats on your nice work. I am trying to run the training script, but I get the following error:
"TypeError: Value passed to parameter 'paddings' has DataType float32 not in list of allowed values: int32, int64"
in gcnetwork.py, when calling
cv = Lambda(getCostVolume, arguments = {'max_d':d/2}, output_shape = (d/2, None, None, num_filters * 2))(unifeature)
Any suggestion?
Thanks,
Julian
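For context, the error is almost certainly the Python 3 division issue addressed above: d/2 is a float, and TensorFlow's pad op only accepts integer padding sizes. A minimal sketch of the failure, assuming getCostVolume pads with tf.pad under a TensorFlow 1.x backend (the shapes and the value of d are illustrative, not taken from the repository):

```python
import tensorflow as tf

d = 192                                             # illustrative max disparity
x = tf.placeholder(tf.float32, (None, 32, 32, 64))  # illustrative feature map

try:
    # d / 2 is 96.0 under Python 3; the paddings become a float32 tensor,
    # which tf.pad rejects with the TypeError quoted in this issue.
    tf.pad(x, [[0, 0], [0, d / 2], [0, 0], [0, 0]])
except TypeError as err:
    print(err)

# Casting to int (or using floor division) keeps the padding sizes integral:
y = tf.pad(x, [[0, 0], [0, d // 2], [0, 0], [0, 0]])
print(y.shape)  # (?, 32, 128, 64)
```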