
_scaled_dot_product_efficient_attention falls back to CPU but CPU does not support it #811

Open
daisyden opened this issue Aug 23, 2024 · 0 comments

@daisyden (Contributor) commented:

🐛 Describe the bug

The op is expected to fall back to CPU (see https://github.com/intel/torch-xpu-ops/blob/main/src/ATen/native/xpu/XPUFallback.template#L239), but it is not implemented in the CPU backend.

[screenshot of the resulting runtime error]
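A minimal repro sketch (not taken from the issue; the tensor shapes and the direct aten call are assumptions): calling the op on XPU tensors should route through the fallback listed in XPUFallback.template and then fail, because no CPU kernel is registered for it.

```python
import torch

# Hypothetical repro: shapes and arguments are illustrative, not from the issue.
q = torch.randn(2, 8, 128, 64, device="xpu")
k = torch.randn(2, 8, 128, 64, device="xpu")
v = torch.randn(2, 8, 128, 64, device="xpu")

# The XPU build lists this op for CPU fallback, so the call is redirected to
# the CPU backend; since no CPU kernel exists for
# _scaled_dot_product_efficient_attention, a "not implemented" error is raised.
out = torch.ops.aten._scaled_dot_product_efficient_attention(
    q, k, v, attn_bias=None, compute_log_sumexp=False
)
```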

Versions

latest version

@daisyden daisyden added this to the PT2.6 milestone Aug 23, 2024
@riverliuintel riverliuintel modified the milestones: PT2.6, PT2.7 Nov 21, 2024