Allow plans to have fewer dimensions than the data #122
This is already supported: just pass the `dims` argument to `plan_fft`.
Oh, I see, you want to plan a lower-dimensional FFT without the trailing batch dimensions. This is doable, in principle. The only wrinkle is that the data alignment may change for different batches, which could cause problems with FFTW, where the plan is specialized for the array alignment (mod 16 bytes).
For example:

```julia
using AbstractFFTs, FFTW

A = rand(100); B = rand(100, 10); C = rand(100, 20);

planA = plan_fft(A, 1);
planB = plan_fft(B, 1);

planA * A # works
planA * B # doesn't work
planB * B # works
planB * C # doesn't work
```

The second example, `planB * C`, fails because `C` has a different batch size than the matrix the plan was created for. To be honest, I am only really interested in this functionality for CUFFT. The CUFFT PR did already implement it, but it wasn't included in the final accepted one, due to the AbstractFFTs design. It would be great if it could be added.
I think it's a good idea to allow FFT plans to be applied to data that has more dimensions than the plan, as long as the extra dimensions are trailing ones. The expected behaviour in this case would be a batched FFT.
CUDA.jl PR JuliaGPU/CUDA.jl#1903 initially implemented this behaviour for CUFFT, but it was removed because it is not in accordance with AbstractFFTs.
As an example, take a plan created for a vector `A` and a matrix `B` that adds a trailing batch dimension to `A`'s shape. In this case the expected behaviour would be

```julia
(plan * B)[:,i] == (plan * B[:,i])
```

for every batch index `i`.
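To spell out that batched semantics, here is a runnable sketch; the shapes are assumed for illustration, and the column loop computes the reference result that a batched `plan * B` would be expected to match (`plan * B` itself is what currently errors):

```julia
using AbstractFFTs, FFTW

A = rand(100)          # assumed shapes, for illustration only
B = rand(100, 10)
plan = plan_fft(A, 1)  # one-dimensional plan along dim 1

# Reference result, computed column by column with the 1-D plan:
batched = reduce(hcat, [plan * B[:, i] for i in axes(B, 2)])
# Desired: plan * B == batched, once plans accept trailing batch dims.
```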
You'd only need a single plan to apply to data with different batch sizes as well. E.g. in the following situation: I'd like to have a plan that performs an FFT along the first dimension but can be applied to both `A` and `B`, where `B` has a larger trailing batch dimension. It's a very common setup, e.g. in ML applications, for the batch size to be variable and not fixed when the model is allocated. A workaround that approximates this today is sketched below.