Transferring data on the GPU #244
Comments
Hi @Zhaoli2042, can you post the code you are trying to run here?
Hi @serizba, thanks for your reply. Here is a simple example that I tested. I am using the NVIDIA HPC compiler (nvc++), version 22.5.
Hi @Zhaoli2042, were you able to make this work? I'm also interested in having a TF model sit inside a custom GPU pipeline.
I have a similar issue right now (feeding input from the GPU) and I'm also very interested.
Hi!
I find cppflow very useful; however, I have some small questions for now (I may have more in the future :D).
I can use cppflow in a CUDA/C++ program, and cppflow can find my GPUs.
Since the model makes its predictions on the GPU, and all my data is already stored on the GPU, is there a way to let the model read data directly from the device, without transferring and preparing the data on the host?
I am also having issues when I put cppflow::model objects in a std::vector. The program runs and makes correct predictions, but it generates a "Segmentation fault" when it finishes. Is there a way to avoid this?
Thanks! I appreciate any advice you can give.