dlpack interface with pytorch fails on CPU (both ways) #1240
Comments
I cross-posted this issue to the intel-extension-for-pytorch repo at intel/intel-extension-for-pytorch#368. Environment information:
@fcharras This is to be expected. Torch's CPU device means host (in SYCL's terminology). Accessing USM allocations from the host is only defined (per spec) for USM allocations of kind 'shared' and 'host', but torch works only with USM allocations of kind 'device' for performance. DLPack support in intel-extension-for-pytorch only recognizes DLPack representations of tensors allocated on a Level Zero GPU device, using the platform's default context, and of USM-device kind.
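A minimal sketch of the exchange described above as supported (a USM-device allocation on the default-context GPU), assuming dpctl, torch, and intel-extension-for-pytorch are installed and a Level Zero GPU is available; the specific calls are illustrative and not taken from the report:

```python
import dpctl.tensor as dpt
import torch
import intel_extension_for_pytorch  # noqa: F401  registers torch's "xpu" device

# USM-device allocation on the GPU, using the platform's default context
x = dpt.arange(10, device="gpu")  # usm_type defaults to "device"

# dpctl -> torch via DLPack; the result lives on torch's "xpu" device
t = torch.from_dlpack(x)

# torch -> dpctl via DLPack
y = dpt.from_dlpack(t)
```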
@fcharras I think this issue is ready to be resolved.
From dpctl to pytorch: fails with:

From pytorch to dpctl: fails with:

Should those conversions be possible?
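For reference, a hedged sketch of what the two failing directions look like on CPU; the original snippets and tracebacks are not reproduced here, and the calls below are illustrative only:

```python
import dpctl.tensor as dpt
import torch

# dpctl -> pytorch: tensor allocated on the SYCL CPU device
x = dpt.arange(10, device="cpu")
t = torch.from_dlpack(x)   # reported to fail

# pytorch -> dpctl: plain CPU torch tensor
u = torch.arange(10)
y = dpt.from_dlpack(u)     # reported to fail
```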