[Speedster] With Hugging Face notebook code on nebulydocker/nebullvm container: RuntimeError: Expected all tensors to be on the same device #349
Comments
Thank you very much for taking a look at this. That is a good point; the "cannot dlopen some GPU libraries" message sounds serious. I have a question about the workaround you suggested. I tried to perform:

Is there another way to move the model to CUDA? Thanks!
It's an InferenceLearner object and I am not exactly sure how to move it, but a higher-level approach would be to get the model out of the inference learner and move it to the GPU.
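As a rough sketch of that idea (the `model` attribute name here is a guess, since the InferenceLearner class may store the wrapped torch module under a different name):

```python
import torch

# Sketch only: "model" is a hypothetical attribute name for the torch module
# wrapped by the InferenceLearner; check the actual class for the real one.
inner = getattr(optimized_model, "model", None)
if isinstance(inner, torch.nn.Module):
    inner.to("cuda")  # move the underlying torch module to the GPU
else:
    print("Wrapped torch module not found under this attribute name")
```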
Thanks! That sounds like a good suggestion. I will try that!
This seems related to pytorch/pytorch#72175; the solution is to first export to ONNX on the CPU, then optimize it on the GPU.
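A minimal sketch of that workaround with plain `torch.onnx.export`, everything kept on the CPU; the output file name and dynamic axes below are illustrative:

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Export to ONNX with the model and example inputs on the CPU;
# the GPU is only used afterwards, for the optimization step.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased").cpu().eval()

inputs = tokenizer("example input", return_tensors="pt")  # CPU tensors

torch.onnx.export(
    model,
    (inputs["input_ids"], inputs["attention_mask"]),
    "bert.onnx",
    input_names=["input_ids", "attention_mask"],
    output_names=["output"],
    dynamic_axes={
        "input_ids": {0: "batch", 1: "seq"},
        "attention_mask": {0: "batch", 1: "seq"},
    },
)
# The resulting bert.onnx can then be optimized (e.g. with TensorRT) on the GPU.
```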
Hi! Thank you for your continued work on this project! I would like to report a possible TensorFlow GPU configuration issue in the documented nebulydocker/nebullvm container that appears to prevent the notebook code from running.
I am trying to use the code in the Hugging Face notebook found at
https://github.com/nebuly-ai/nebuly/blob/main/optimization/speedster/notebooks/huggingface/Accelerate_Hugging_Face_PyTorch_BERT_with_Speedster.ipynb
and am running it in the current nebulydocker/nebullvm Docker container documented at
https://docs.nebuly.com/Speedster/installation/#optional-download-docker-images-with-frameworks-and-optimizers
Here is the exact Python code I am trying to run (essentially the code from the notebook, with a couple of diagnostic lines added):
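In outline it looks roughly like the following; the sample texts are illustrative, and `optimize_model` is used as documented in Speedster:

```python
import torch
from transformers import BertModel, BertTokenizer
from speedster import optimize_model

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")

# A few sample inputs for the optimizer (illustrative texts)
input_data = [
    tokenizer("This is a sample sentence.", return_tensors="pt")
    for _ in range(10)
]

optimized_model = optimize_model(
    model,
    input_data=input_data,
    optimization_time="constrained",
)

# Diagnostic lines added for this report
print("cuda available:", torch.cuda.is_available())
print("optimized_model.device:", optimized_model.device)

# Calling the optimized model is what triggers the RuntimeError below
output = optimized_model(**tokenizer("test sentence", return_tensors="pt"))
```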
Just in case it is useful, starting up the container looks like this:
And this is the output I get when running the above code:
Attempting to call the model appears to cause the final `RuntimeError`. This seems like it may be related to `optimized_model.device` being `None`.
Just FYI, the GPU does seem to be accessible in this container:
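For example, a check along these lines illustrates the mismatch I am describing (the comments reflect the observations above, not a verbatim log):

```python
import torch

print(torch.cuda.is_available())      # True: the container does see the GPU
print(torch.cuda.get_device_name(0))  # reports the attached GPU
print(optimized_model.device)         # None, per the observation above
```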
Thank you for looking at this.