Performance dip when using ForwardDiff compared to Turing #81
Comments
This is really strange. Any hints as to why the degraded performance?
The only difference that I could think of was that TuringGLM uses the
The results from this are:

which shows the same slowdown as the TuringGLM model benchmarks. So it looks like it's related to this, but I don't know how.

Detailed output:

Turing model 3 (custom prior struct)
Yeah, that might add a little bit of overhead.
I've noticed that there's a performance dip when using ForwardDiff with a model defined in TuringGLM, compared to defining the model directly in Turing. I've set up a MWE to show this.
First I set up 4 models, two in TuringGLM (with and without custom priors), and two in Turing, with the default and custom priors given to the TuringGLM models.
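A minimal sketch of that setup might look like the following. The data, formula, and hand-written priors here are illustrative stand-ins rather than the exact MWE, and the "default" priors in the hand-written Turing model are an assumption about TuringGLM's defaults, not a quote of them:

```julia
using Turing, TuringGLM, DataFrames, LinearAlgebra

# Illustrative data; the actual MWE's dataset is not shown in this issue.
df = DataFrame(x = randn(100), y = randn(100))
fm = @formula(y ~ x)

# TuringGLM model 1: default priors.
glm_default = turing_model(fm, df)

# TuringGLM model 2: custom priors via TuringGLM's CustomPrior
# (fields: predictors, intercept, auxiliary).
priors = CustomPrior(Normal(0, 1), Normal(0, 5), nothing)
glm_custom = turing_model(fm, df; priors = priors)

# Turing model 1: hand-written equivalent (prior choices here are
# assumed to mirror TuringGLM's defaults; the exact defaults may differ).
@model function turing_default(x, y)
    α ~ TDist(3)
    β ~ TDist(3)
    σ ~ Exponential(1)
    y ~ MvNormal(α .+ β .* x, σ^2 * I)
end

# Turing model 2: hand-written equivalent of the custom priors above.
@model function turing_custom(x, y)
    α ~ Normal(0, 5)
    β ~ Normal(0, 1)
    σ ~ Exponential(1)
    y ~ MvNormal(α .+ β .* x, σ^2 * I)
end
```

Comparing `glm_default` against `turing_default` (and likewise for the custom-prior pair) isolates any overhead introduced by TuringGLM's generated model from the model specification itself.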
Then using TuringBenchmarking.jl, I benchmark each of the four models with both Forward and Reverse diff backends:
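A benchmark along those lines can be sketched with TuringBenchmarking.jl's `make_turing_suite`; the model below is a placeholder standing in for each of the four models, and the backend objects come from ADTypes.jl:

```julia
using Turing, TuringBenchmarking, ADTypes, LinearAlgebra

# Placeholder model standing in for any one of the four models above.
@model function demo(x, y)
    β ~ Normal(0, 1)
    σ ~ Exponential(1)
    y ~ MvNormal(β .* x, σ^2 * I)
end

model = demo(randn(100), randn(100))

# Build one suite per model; each suite times log-density and gradient
# evaluation under every requested AD backend.
suite = make_turing_suite(model;
    adbackends = [AutoForwardDiff(), AutoReverseDiff(; compile = true)])
run(suite)  # returns a BenchmarkGroup with per-backend timings
```

Running the same suite for the TuringGLM-generated model and its hand-written Turing counterpart gives directly comparable per-backend numbers.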
The results of the benchmark are shown in the table below. You can see that with ReverseDiff the benchmarks are the same, but with ForwardDiff TuringGLM is ~20-30% slower than Turing (I've included the full results below).
Detailed output:
TuringGLM model 1 (default priors)
Output:
Turing model 1 (default priors)
Output:
TuringGLM model 2 (custom priors)
Output:
Turing model 2 (custom priors)
Output: