Describe the bug
There are many places in our Python code where we force tensors to be contiguous. This is because, before PyTorch introduced packed_accessor, it was fairly awkward to use strided tensors when writing extensions.
Expected behavior
We should be able to run most operations on strided tensors without copies.
Solution
We need to convert most of our kernels, and the code that calls them, to use packed_accessor32.
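As a rough illustration of the conversion, here is a minimal sketch of a CUDA extension kernel written against PackedTensorAccessor32 instead of raw data pointers. The kernel and function names (scale_kernel, scale) are hypothetical, not taken from this codebase; the point is that accessor indexing applies the tensor's strides, so the caller no longer needs a .contiguous() copy:

```cuda
#include <torch/extension.h>

// Hypothetical example kernel: scales a 2-D tensor elementwise.
// PackedTensorAccessor32 carries sizes and strides into the kernel,
// so indexing works for non-contiguous (e.g. transposed, sliced) inputs.
template <typename scalar_t>
__global__ void scale_kernel(
    const torch::PackedTensorAccessor32<scalar_t, 2, torch::RestrictPtrTraits> input,
    torch::PackedTensorAccessor32<scalar_t, 2, torch::RestrictPtrTraits> output,
    scalar_t alpha) {
  const int i = blockIdx.y * blockDim.y + threadIdx.y;
  const int j = blockIdx.x * blockDim.x + threadIdx.x;
  if (i < input.size(0) && j < input.size(1)) {
    // Accessor indexing applies strides, so no copy is needed.
    output[i][j] = alpha * input[i][j];
  }
}

// Calling code: note there is no input.contiguous() before the launch.
torch::Tensor scale(torch::Tensor input, double alpha) {
  auto output = torch::empty_like(input);
  const dim3 threads(16, 16);
  const dim3 blocks((input.size(1) + threads.x - 1) / threads.x,
                    (input.size(0) + threads.y - 1) / threads.y);
  AT_DISPATCH_FLOATING_TYPES(input.scalar_type(), "scale", [&] {
    scale_kernel<scalar_t><<<blocks, threads>>>(
        input.packed_accessor32<scalar_t, 2, torch::RestrictPtrTraits>(),
        output.packed_accessor32<scalar_t, 2, torch::RestrictPtrTraits>(),
        static_cast<scalar_t>(alpha));
  });
  return output;
}
```

The 32 in packed_accessor32 means 32-bit indexing, which is cheaper on the GPU but assumes each tensor has fewer than 2^31 elements; packed_accessor64 is the variant for larger tensors.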