`CUDA.CUFFT.plan_*fft` functions do not accept keyword arguments (#1559)
FWIW, I was able to work around this issue (for my usage of `plan_brfft`) with this bit of type piracy:

```julia
AbstractFFTs.plan_brfft(A::CuArray, d::Integer, region; kwargs...) = plan_brfft(A, d, region)
```
CUFFT doesn't support planner flags or time limits, so it's probably better to assert that the user doesn't set these flags.
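A minimal sketch of that suggestion, using a hypothetical helper name (`check_plan_kwargs` is not part of CUDA.jl): it errors if a caller passes FFTW-style `flags` or `timelimit` keywords, instead of silently ignoring them.

```julia
# Hypothetical helper (not in CUDA.jl): reject planner options that
# CUFFT cannot honor instead of silently ignoring them.
function check_plan_kwargs(; flags=nothing, timelimit=nothing, kwargs...)
    flags === nothing || throw(ArgumentError("CUFFT does not support planner flags"))
    timelimit === nothing || throw(ArgumentError("CUFFT does not support planner time limits"))
    return kwargs  # any remaining kwargs, for the caller to handle
end
```

A planning method could call this helper first, so unsupported options fail loudly rather than behaving differently from FFTW.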
These planning methods in `CUDA.CUFFT` are implementations of the planning functions defined in `AbstractFFTs`. The use case for this (i.e. how I ran into this problem) is to maintain the ability to use the same code with FFTW when the input is an `Array`, e.g.:

```julia
mul!(@view(output_matrix[:,i]), plan, input_vector)
```

If I add CUDA as a dependency to my code, I could create the plan this way:

```julia
if A isa CuArray
    plan = plan_brfft(A, d)
else
    plan = plan_brfft(A, d, flags=FFTW.ESTIMATE|FFTW.UNALIGNED)
end
```

but that goes against the grain of multiple dispatch and requires that I add CUDA as a dependency to my otherwise CUDA-unaware code (and looks ugly). Being able to use the same code with either `Array`s or `CuArray`s is the goal.
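A sketch of the dispatch-based shape this comment argues for, with a stand-in GPU array type so it runs anywhere (all names here are illustrative, not CUDA.jl's actual methods):

```julia
# Illustrative only: a stand-in GPU array type so the example runs without a GPU.
struct DemoGPUArray end

# Dispatch on the array type instead of an `isa` branch at the call site.
make_plan(A::Array, d) = (:fftw, d)          # CPU path could pass FFTW flags here
make_plan(A::DemoGPUArray, d) = (:cufft, d)  # GPU path, no planner flags

make_plan(zeros(4), 4)        # (:fftw, 4)
make_plan(DemoGPUArray(), 4)  # (:cufft, 4)
```

With kwargs accepted (and ignored) on the GPU method, the caller's code would never need the `isa CuArray` branch at all.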
Some of these flags seem semantically meaningful though, e.g., `FFTW.PRESERVE_INPUT`.
The cuFFT docs say that a complex-to-real out-of-place transform always overwrites its input buffer, but they don't mention any other case where that happens. This corresponds to FFTW's `DESTROY_INPUT` behavior for c2r transforms.
I think those are in order of preference. Option 3 is bad practice and should be ignored; I only included it for completeness. Option 2 is not so clean either, but wouldn't be horrible. Option 1 seems better than option 0 (i.e. the status quo) IMHO.

BTW, I just noticed the relevant cuFFT docs here: https://docs.nvidia.com/cuda/cufft/index.html#data-layout

TBH, it seems a bit not-so-abstract for `AbstractFFTs`.
The only package that seems to use `PRESERVE_INPUT` is LazyAlgebra.jl, to implement its FFT operator.
Great, it sounds like we've converged! I opened issue JuliaMath/AbstractFFTs.jl#71 regarding this.
Add CUDA package as a dependency so that we can engage in a little type piracy to work around CUDA.jl issue #1559. Once CUDA.jl has resolved the issue, we can drop CUDA as a dependency and remove the type piracy. For more details see: JuliaGPU/CUDA.jl#1559
The `plan_*fft` functions in `AbstractFFTs` take keyword arguments, but the methods of these functions provided by `CUDA.CUFFT` do not. Code that passes keyword arguments to these functions, e.g. to influence FFTW planning with `Array`s, does not work with `CuArray`s because the presence of keyword arguments prevents dispatch to the `CUDA.CUFFT` methods. This results in a `MethodError` if FFTW is not being used. If FFTW is being used, this dispatches to an FFTW method, which then throws an `ArgumentError` because it tries to take the CPU address of the `CuArray`.

Since `CUDA.CUFFT` ignores these keyword arguments, I think it would be sufficient to just add `; ignored_kwargs...` to the function signatures, or to create alternate methods that include `; ignored_kwargs...` and call the non-kwarg methods.

Here is a MWE showing the problem both with and without FFTW: