Add comparison to pre-computed loglike to tests #32
Comments
I think it would actually be a good idea for there to be a standard way to store this information for each likelihood, similar to how we have a standard convention for …
This sounds good. Would you be happy for evaluating the likelihood at some number of points in parameter space to replace the current MCMC test runs? Say, the number of points necessary for computing the Fisher matrix? I suggest replacing the MCMC test runs because they currently take ~20 minutes, which feels a bit long to me.
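To make the suggestion concrete, here is a rough sketch of the set of points a central finite-difference Fisher-matrix estimate would need: the fiducial point, ± steps along each parameter, and ± pairs for the off-diagonal terms. The function name and step choices are hypothetical, just to illustrate the count.

```python
import numpy as np

def fisher_eval_points(fiducial, steps):
    """Parameter-space points for a central finite-difference Fisher matrix.

    Returns the fiducial point, +/- steps along each axis (diagonal terms),
    and +/- step pairs for each axis pair (off-diagonal terms):
    1 + 2n + 2n(n-1) points for n parameters.
    """
    fiducial = np.asarray(fiducial, dtype=float)
    steps = np.asarray(steps, dtype=float)
    n = fiducial.size
    points = [fiducial.copy()]
    # Points for second derivatives along each parameter.
    for i in range(n):
        for s in (+1, -1):
            p = fiducial.copy()
            p[i] += s * steps[i]
            points.append(p)
    # Points for mixed second derivatives (off-diagonal Fisher elements).
    for i in range(n):
        for j in range(i + 1, n):
            for si in (+1, -1):
                for sj in (+1, -1):
                    p = fiducial.copy()
                    p[i] += si * steps[i]
                    p[j] += sj * steps[j]
                    points.append(p)
    return np.array(points)
```

For two parameters this is only 9 likelihood evaluations, which should be far cheaper than a full MCMC run.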
I like the idea of testing that MCMC runs start, just to make sure the sampler can initialize a fit properly, but I wouldn't say it's 100% necessary. Is it really the MCMC tests that are taking so long? The lensing likelihood as currently implemented takes a good bit of time to initialize, so having to do that multiple times may account for most of the runtime. Feel free to experiment; if dropping the MCMC tests really cuts down the time, it might be worth it to improve development cycle times. There's also the option of marking tests and setting up separate GitHub Actions workflows for smaller unit tests and longer-running tests; e.g., the former could run on every commit and the latter (which could include the MCMC runs and other long-running things) only on PRs.
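The marking idea could look something like this, assuming pytest (the marker name `slow` and the test name are hypothetical):

```python
import pytest

# Hypothetical split: tag long-running MCMC tests as "slow" so the fast CI
# job can deselect them with `pytest -m "not slow"`, while a separate
# PR-only job runs `pytest -m slow`. The marker should be registered in
# pytest.ini / setup.cfg under `markers = slow: long-running tests`.

@pytest.mark.slow
def test_mcmc_initialises():
    # Placeholder for an MCMC smoke test: start a short run and check
    # that the sampler initializes without raising.
    assert True
```

The per-commit workflow would then invoke `pytest -m "not slow"` and the PR workflow plain `pytest`.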
Comparison to a pre-computed chi2 is done in test_mflike, test_lensing, and test_clusters. I think everything is also in place to add such a comparison to test_xcorr. Is there anything more specific we want to add here? For example, using the same set of theory params (cosmo, fg, accuracy) to be consistent among likelihoods?
To check the stability of likelihood calculations after any refactoring, we should add a test that compares the loglike at specified point(s) in parameter space to a pre-computed value stored with the test.
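As a minimal sketch of the proposed regression test (with a toy Gaussian standing in for a real likelihood evaluation; all names and numbers here are illustrative, not from the codebase):

```python
import numpy as np

def loglike(theta):
    # Stand-in for a real likelihood call; here an uncorrelated
    # 2-parameter Gaussian.
    mu = np.array([0.3, 70.0])
    sigma = np.array([0.05, 2.0])
    return float(-0.5 * np.sum(((theta - mu) / sigma) ** 2))

# Fixed evaluation point and its pre-computed loglike. In practice the
# reference value would be computed once and hard-coded (or stored in a
# data file alongside the test), then only regenerated deliberately.
REF_THETA = np.array([0.32, 68.0])
REF_LOGLIKE = -0.58

def test_loglike_regression():
    # Any refactoring that changes the numerics beyond tolerance fails here.
    assert np.isclose(loglike(REF_THETA), REF_LOGLIKE, rtol=1e-8)
```

The tolerance would need tuning per likelihood, since some calculations are only stable to the accuracy settings of the theory code.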