It would be helpful to have an option to specify which GPU to use when running inference on a machine with multiple GPUs. In my case, I am running multiple MONAILabel servers, each with its own dedicated GPU.
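In the meantime, one possible workaround is to set `CUDA_VISIBLE_DEVICES` per server at launch time. A rough sketch, assuming the standard `monailabel start_server` CLI; the app path, studies path, and ports below are placeholders:

```python
import os
import subprocess

# Launch one MONAILabel server per GPU, each pinned to its own device
# via CUDA_VISIBLE_DEVICES. Paths and ports are illustrative only.
for gpu_id, port in [(0, 8000), (1, 8001)]:
    env = dict(os.environ, CUDA_VISIBLE_DEVICES=str(gpu_id))
    subprocess.Popen(
        ["monailabel", "start_server",
         "--app", "apps/radiology",
         "--studies", "datasets/imagesTr",
         "--port", str(port)],
        env=env,
    )
```

Inside each server, the pinned GPU then appears as `cuda:0`, so the inference code does not need to know the physical device index.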
The script currently uses the first available GPU. We could use the `CUDA_VISIBLE_DEVICES` environment variable, but I am not sure how it behaves when the server launches a subprocess via `sys.executable`.
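For what it's worth, environment variables should carry over: `subprocess` inherits the parent's environment by default, so a `CUDA_VISIBLE_DEVICES` set on the server process also applies to a child interpreter started with `sys.executable`. A minimal sketch to verify this:

```python
import os
import subprocess
import sys

# Set the variable in the parent (server) process.
os.environ["CUDA_VISIBLE_DEVICES"] = "1"

# The child interpreter inherits the parent environment, so it sees
# the same value; CUDA libraries in the child will then enumerate
# only GPU 1, exposed as cuda:0.
out = subprocess.check_output(
    [sys.executable, "-c",
     "import os; print(os.environ.get('CUDA_VISIBLE_DEVICES'))"],
    text=True,
)
print(out.strip())  # -> "1"
```

Passing `env=dict(os.environ, CUDA_VISIBLE_DEVICES="1")` explicitly to `subprocess.Popen` would make the intent unambiguous if we want to be defensive about it.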