Issues: intel/torch-xpu-ops
#1169 torch.nextafter has an incorrect result for bf16 on XPU (bug) - opened Dec 16, 2024 by guangyey
#1163 torch._standard_gamma() has accuracy gap compared to scipy and torch.cpu - opened Dec 12, 2024 by daisyden
#1160 What is the expected result of float64 div when divisor and dividend are the same? - opened Dec 11, 2024 by daisyden
#1157 xpu: implement aten::_thnn_fused_lstm_cell for XPU backend #141539 - opened Dec 11, 2024 by yinghu5
#1152 softshrink is expected to return nan when the input is nan on ARC - opened Dec 9, 2024 by daisyden
#1129 Investigate whether pad mm is useful on XPU (enhancement) - opened Nov 29, 2024 by jianyizh
#1128 add xpu support to align with cuda for inductor sdpa fusion (enhancement) - opened Nov 29, 2024 by jianyizh
#1122 [Driver] [Ubuntu 24.10] Command Hang after driver installation (bug, client, os: Ubuntu) - opened Nov 27, 2024 by Stonepia
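For reference, the expected semantics behind two of the numeric issues above, #1169 (torch.nextafter) and #1160 (float64 division with identical operands), can be sketched with Python's stdlib math module (math.nextafter requires Python 3.9+). This illustrates the IEEE 754 reference behavior at float64 on the CPU, not the XPU bf16 path that the reports concern:

```python
import math

# nextafter (cf. issue #1169): returns the next representable float
# moving from the first argument toward the second. For IEEE 754
# binary64, the step just above 1.0 is 2**-52.
step_up = math.nextafter(1.0, 2.0)
assert step_up == 1.0 + 2**-52
assert step_up > 1.0

# Same-operand division (cf. issue #1160): IEEE 754 requires
# x / x == 1.0 for any finite nonzero x; only the special values
# deviate, producing NaN.
x = 3.7
assert x / x == 1.0
assert math.isnan(math.inf / math.inf)
assert math.isnan(float("nan") / float("nan"))
```

A backend-specific result (e.g. bf16 on XPU) that disagrees with these identities at the corresponding precision is what the issues flag as incorrect.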