Speedup sample #7578
Draft
ricardoV94 wants to merge 11 commits into pymc-devs:main from ricardoV94:speedup_nuts
+586 −326
Conversation
ricardoV94 force-pushed the speedup_nuts branch from 009959b to 161b859 on November 22, 2024 at 01:29
ricardoV94 force-pushed the speedup_nuts branch 4 times, most recently from d6f9e14 to 87fd299 on November 23, 2024 at 23:06
ricardoV94 added the major, enhancements, and samplers labels on Nov 23, 2024
ricardoV94 force-pushed the speedup_nuts branch 2 times, most recently from 0f4972c to 874ae65 on November 24, 2024 at 00:44
Also initialize empty trace and set `trust_input=True`
Also removes costly `model.check_start_vals()`
Also makes all Step arguments except `vars` keyword-only
Also set `trust_input=True` for the automatically generated logp_dlogp_function
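A hedged sketch of what the keyword-only change could look like from the user side (the model and arguments are illustrative, not taken from the diff):

```python
import pymc as pm

with pm.Model() as model:
    x = pm.Normal("x")

    # With this change, step-method arguments other than `vars` are meant to
    # be passed by keyword; `vars` itself may stay positional.
    step = pm.NUTS([x], target_accept=0.9)
    idata = pm.sample(draws=200, tune=200, chains=1, step=step)
```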
ricardoV94 force-pushed the speedup_nuts branch from 874ae65 to cb8d51e on November 24, 2024 at 01:20
ricardoV94 commented on Nov 24, 2024:
About 10 minutes of pytest CI time appear to be saved compared to previous runs.
last_idx += arr_len
for name, shape, size, dtype in array.point_map_info:
    result[name] = array.data[last_idx : last_idx + size].reshape(shape).astype(dtype)
    last_idx += size
don't compute end twice
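One way to apply that suggestion (a sketch of the reviewer's point reusing the names from the snippet above, not the committed fix) is to compute the end index once per iteration:

```python
# Compute the slice end once instead of evaluating `last_idx + size` twice.
for name, shape, size, dtype in array.point_map_info:
    end = last_idx + size
    result[name] = array.data[last_idx:end].reshape(shape).astype(dtype)
    last_idx = end
```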
TODO

Major changes

- `ravel_inputs` is specified explicitly. Eventually it will only be possible to use `ravel_inputs=True`.
- `assign_step_method` does not call `instantiate_steppers`, but returns the arguments needed for the latter.
- `compile_kwargs` can be passed to `pm.sample` and is forwarded to the step samplers' functions (see the sketch below).
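As a usage sketch (the model here is made up for illustration; the keyword follows this PR's description), `compile_kwargs` is simply forwarded when calling `pm.sample`:

```python
import pymc as pm

with pm.Model() as model:
    mu = pm.Normal("mu")
    pm.Normal("obs", mu=mu, sigma=1.0, observed=[0.1, -0.3, 0.5])

    # compile_kwargs is forwarded to the functions compiled for the step
    # samplers, e.g. to pick the Numba backend used in the benchmark below.
    idata = pm.sample(draws=500, tune=500, compile_kwargs=dict(mode="NUMBA"))
```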
Enhancement

This PR speeds up `NUTS` (and other step samplers) by setting `trust_input=True` on the compiled functions, skipping input checks that can have a large overhead.

This PR speeds up `sample` by:

- Streamlining `init_nuts`. This will also reduce the path towards external samplers with nutpie/numpyro, as it avoids the costly and useless compilation of the logp_dlogp_function.
- Setting `trust_input` and avoiding deepcopies in the trace function by using `pytensor.In(borrow=True)` and `pytensor.Out(borrow=True)`.

Further speedups should come for free from #7539, especially for the Numba backend.
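For context, a minimal standalone PyTensor sketch (not the PR's actual code) of the kind of per-call overhead these settings remove:

```python
import numpy as np
import pytensor
import pytensor.tensor as pt

x = pt.dvector("x")
y = 2.0 * x

# borrow=True avoids defensive copies of inputs/outputs on every call;
# trust_input=True skips per-call validation of the input types.
f = pytensor.function([pytensor.In(x, borrow=True)], pytensor.Out(y, borrow=True))
f.trust_input = True

print(f(np.arange(3.0)))  # [0. 2. 4.]
```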
Benchmark

In the example below, sampling time is now only 7x slower than nutpie (5s vs 0.7s), compared to 13.5x slower before (9.45s vs 0.7s). This assumes the same number of logp evals; in fact nutpie's tuning gets away with half the evals, which we can hopefully bring over as well.

The full time from calling `pm.sample` to getting a trace is roughly halved as well (7.5s vs 14.4s), although this gain is not proportional to the number of draws. With `compile_kwargs=dict(mode="NUMBA")`, sampling time is only 3x slower (2.3s).

📚 Documentation preview 📚: https://pymc--7578.org.readthedocs.build/en/7578/