
Ia der term #1137

Open — wants to merge 16 commits into master
Conversation

CarterDW

I implemented a new IA k^2 derivative term in pyccl/ept.py, adding it to the existing auto- and cross-correlations with a ck constant defined in pyccl/tracers.py, based on the pre-existing c1 constant. The user can either use a built-in CCL k^2 term, or call FAST-PT and use its newly implemented derivative term. The CCL term is used by default, so that older versions of FAST-PT can still be used with ept.py.

I also added functionality in pyccl/ept.py that checks whether the installed FAST-PT can be imported and, if the user wishes to use the FAST-PT derivative term, whether that installation actually provides the new derivative term.
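The availability check described above can be sketched with a generic feature-detection helper. This is only an illustration of the pattern, not the PR's actual code; the `"fastpt"` module name is real, but the `"FASTPT.IA_der"` attribute name used in the comment is a placeholder.

```python
import importlib


def has_feature(module_name, attr_path):
    """Return True if `module_name` imports cleanly and exposes the
    dotted attribute path `attr_path` (e.g. "FASTPT.IA_der")."""
    try:
        mod = importlib.import_module(module_name)
    except ImportError:
        return False
    obj = mod
    for part in attr_path.split("."):
        if not hasattr(obj, part):
            return False
        obj = getattr(obj, part)
    return True


# ept.py could then gate the FAST-PT k^2 path on something like:
#     use_fastpt_k2 = usefptk and has_feature("fastpt", "FASTPT.IA_der")
# ("IA_der" is a hypothetical attribute name, used here only for illustration)
```

Feature-detecting the attribute, rather than parsing version strings, keeps the check robust against FAST-PT's rapid development.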

@CarterDW CarterDW marked this pull request as ready for review November 17, 2023 17:17
@coveralls

coveralls commented Nov 27, 2023

Pull Request Test Coverage Report for Build 7133713817

  • 0 of 0 changed or added relevant lines in 0 files are covered.
  • No unchanged relevant lines lost coverage.
  • Overall coverage decreased (-0.2%) to 97.272%

Totals:
Change from base Build 7020570041: -0.2%
Covered Lines: 6524
Relevant Lines: 6707

💛 - Coveralls

@damonge
Collaborator

damonge commented Nov 27, 2023

@CarterDW it seems like flake8 is failing, and the coverage (i.e. lines of code tested by new unit tests) has fallen a bit. I may be able to look at the actual implementation towards the end of the week.

Attempted reformatting to fit flake8 format
Attempted to better fit flake8 formatting requirements
@CarterDW
Author

I'm not sure about the coverage, but I went through and reformatted the code, so hopefully it will pass flake8 now.

@jablazek
Collaborator

jablazek commented Nov 28, 2023

@damonge, I am talking with @CarterDW about this PR now and will also do a code review ASAP. Would also be great to get your expert eyes on it (even quickly).

@damonge damonge left a comment
Collaborator

Thanks a lot @CarterDW and @jablazek (and apologies for the delay!). Just a couple of comments.

Also, we probably want to add a benchmark test, the same way we did for the previous FastPT terms.

BTW, I remember before FastPT couldn't compute all the galaxy x IA perturbative terms. Is that still the case? I remember we have a TODO for this :-)

Comment on lines +315 to +332
# ak power spectrum
pksa = {}
if 'nonlinear' in [self.ak2_pk_kind]:
    pksa['nonlinear'] = np.array([cosmo.nonlin_matter_power(self.k_s, a)
                                  for a in self.a_s])
if 'linear' in [self.ak2_pk_kind]:
    pksa['linear'] = np.array([cosmo.linear_matter_power(self.k_s, a)
                               for a in self.a_s])
if 'pt' in [self.ak2_pk_kind]:
    if 'linear' in pksa:
        pka = pksa['linear']
    else:
        pka = np.array([cosmo.linear_matter_power(self.k_s, a)
                        for a in self.a_s])
    pka += self._g4T * self.one_loop_dd[0]
    pksa['pt'] = pka
self.pk_ak = pksa[self.ak2_pk_kind]

Collaborator

Maybe I missed something, but wouldn't you achieve the same by just adding self.ak2_pk_kind to the previous if-else statement, and then doing

self.pk_ak = pks[self.ak2_pk_kind]

?
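The suggested consolidation can be illustrated with toy stand-ins for the CCL power-spectrum calls. All names, shapes, and values below are illustrative, not CCL's actual internals:

```python
import numpy as np


# Toy stand-ins for cosmo.linear_matter_power / cosmo.nonlin_matter_power.
def linear_pk(k, a):
    return a**2 / (1.0 + k**2)


def nonlin_pk(k, a):
    return 1.1 * linear_pk(k, a)


k_s = np.logspace(-3, 1, 8)
a_s = np.array([0.5, 1.0])
# Stand-in for the g4T * one_loop_dd[0] correction.
g4T_one_loop = 0.01 * np.ones((len(a_s), len(k_s)))

# Build every needed flavour once, in a single dict...
pks = {
    'linear': np.array([linear_pk(k_s, a) for a in a_s]),
    'nonlinear': np.array([nonlin_pk(k_s, a) for a in a_s]),
}
pks['pt'] = pks['linear'] + g4T_one_loop

# ...so that each pk kind becomes a plain lookup, with no duplicated if-chain:
ak2_pk_kind = 'pt'
pk_ak = pks[ak2_pk_kind]
```

The point of the refactor is that the dict is populated once, and both the existing `pk_bk`-style lookup and the new `pk_ak` lookup index into it.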

Comment on lines +138 to +139
usefptk (:obj:`bool`): if ``True``, will use the FAST-PT IA k2
    term instead of the CCL IA k2 term.
Collaborator

Why not require that people have the latest version of FastPT? My first reaction is that this adds unnecessary complexity that will need to be deprecated afterwards anyway (with the headache it entails, since it changes the API).

Collaborator

I think there are two issues. One, for the basic k^2 term, we don't need FAST-PT, and we want the flexibility to choose the P_delta from the options CCL has access to. There may be future versions of this term that need some calculation to be done in FAST-PT.

Two, more generally, do we want to demand the latest (or fairly recent) FAST-PT for all CCL ept users? Probably good in the long term. Right now, FAST-PT is also changing rapidly, and is actually a bit behind in some functionality, so we don't expect all CCL users to actually have access to the bleeding edge.

Collaborator

OK, I see, thanks @jablazek .

Then, how about:

  1. Remove the usefptk option from this function.
  2. Allow for ak2_pk_kind to take an additional option (e.g. 'pt_IA_k2'), in which case the calculator uses the fast-pt prediction.

Then, if users select this option, you check if the fast-pt version allows for it and throw an error if it doesn't. My main aim here is not polluting the API with arguments that we may need to deprecate soon-ish, since they become a headache.
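The validate-and-raise pattern proposed here could look roughly like this. The `'pt_IA_k2'` option name follows the suggestion above, and the feature flag is hypothetical (in practice it would come from feature-detecting the installed FAST-PT):

```python
def resolve_ak2_kind(ak2_pk_kind, fastpt_has_k2_term):
    """Validate the (hypothetical) ak2_pk_kind option, erroring out if the
    installed FAST-PT cannot support the requested calculation."""
    allowed = ('linear', 'nonlinear', 'pt', 'pt_IA_k2')
    if ak2_pk_kind not in allowed:
        raise ValueError(f"Unknown ak2_pk_kind {ak2_pk_kind!r}")
    if ak2_pk_kind == 'pt_IA_k2' and not fastpt_has_k2_term:
        raise ValueError(
            "ak2_pk_kind='pt_IA_k2' requires a FAST-PT version that "
            "provides the IA k^2 derivative term.")
    return ak2_pk_kind
```

This keeps the extra behaviour inside an existing argument's option space, so no new keyword needs to be deprecated later.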

@jablazek
Collaborator

jablazek commented Jan 2, 2024

Thanks @damonge! (and Happy New Year!) Indeed, the nl bias x IA terms have been a goal for a long time, and we now have them! Thanks to some work from @CarterDW , as well as from the UTD group and a former student working with me, we have a couple of independent FAST-PT implementations of (most of) these terms. There is an active dev branch of FAST-PT where we are doing the cross checks.

Regarding benchmarks, we can generate the P(k)s from FAST-PT at a given cosmology and redshift (or two) and then store them as a static .dat file for comparison. It's not independent, since both use FAST-PT, but it does test the plumbing and API.
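The comparison step of such a benchmark test might be sketched as follows. The tolerance is purely illustrative, and in the real test `pk_ref` would be read from the stored .dat file (e.g. with `np.loadtxt`) rather than passed in directly:

```python
import numpy as np


def benchmark_agrees(pk_new, pk_ref, rtol=1e-5):
    """Check the maximum fractional deviation of a freshly computed P(k)
    from the stored FAST-PT benchmark values (rtol is illustrative)."""
    frac_err = np.abs(pk_new / pk_ref - 1.0)
    return bool(np.max(frac_err) < rtol)
```

Generating the reference once and committing it as static data is the same approach used for the other FAST-PT benchmark tests in CCL.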

@damonge
Collaborator

damonge commented Jan 2, 2024

Great news! Happy to have those terms in CCL when you think they're ready!

Regarding benchmarks: yep, that sounds good. I agree it's not independent, but it's a good test nevertheless, in case we modify anything further down the line (e.g. interpolation/extrapolation schemes or whatever) that changes the accuracy of the calculation. This is what we did for all the other fast-pt terms (in fact, you can probably build on this benchmark test script and on the script that was used to generate the associated data from fastpt).

@damonge
Collaborator

damonge commented Jan 2, 2024

(and Happy New Year!)

@damonge
Collaborator

damonge commented Feb 21, 2024

@jablazek @CarterDW can I check what the status of this PR is?

@damonge
Collaborator

damonge commented Jun 18, 2024

@CarterDW @jablazek can I check what the status is? Thanks!

@CarterDW
Author

CarterDW commented Jun 18, 2024 via email

@damonge
Collaborator

damonge commented Jun 18, 2024

OK, thanks!

@jablazek
Collaborator

Hi @damonge . Sorry for the slow updates. A bit more info: we did our code comparison between the two implementations of the NL bias x IA terms. Most of them agreed, with two exceptions. We have been tracking down the differences and should have that resolved soon.

However, @CarterDW , is there a reason to not finish the PR for the derivative term and then do a separate one for the NL bias terms and the t_ij terms?

@CarterDW
Author

CarterDW commented Jun 18, 2024 via email

@damonge
Collaborator

damonge commented Nov 13, 2024

@CarterDW @jablazek : any updates on this one?

@jablazek
Collaborator

@CarterDW, we sorted out the unexpected high-k behavior, right?

@CarterDW
Author

@jablazek @damonge yes, we figured out the divergent behavior I was seeing, so we should be able to finish this pull request. I'll spend this week resolving some of the things we discussed in here such as changing the implementation of checking for fast-pt, and adding the benchmark checks!

@damonge
Collaborator

damonge commented Nov 13, 2024

OK, great!

@dderienzo

Hello @damonge! I am new to writing unit tests/benchmarks and wanted to ask you a question. I am trying to write tests to help close out this PR, associated with the derivative terms @CarterDW is working on adding. Suppose I wrote something along these lines (using the current update; I know we don't want to keep the usefptk argument, but just hypothetically):

def test_ept_derivative_eq():
    # Compare derivative defined in CCL vs derivative
    # defined internally in FAST-PT.
    ptc1 = ccl.nl_pt.EulerianPTCalculator(
        with_NC=True, with_IA=True,
        usefptk=False)
    pk1 = ptc1.get_biased_pk2d(TRS['TI'])
    ptc2 = ccl.nl_pt.EulerianPTCalculator(
        with_NC=True, with_IA=True,
        usefptk=True)
    pk2 = ptc2.get_biased_pk2d(TRS['TI'])
    err = np.abs(pk1 / pk2 - 1)
    assert np.allclose(err, 0, rtol=0, atol=BBKS_TOLERANCE)

Would this be considered a unit test? My idea was just to compare two P(k)s that both use the Pak2 derivative term in their corrections, comparing the built-in CCL term against the built-in FAST-PT term. Would this be rigorous enough, or would something along the lines of the script you linked above, using data generated from an edited version with the derivative coded in, be more rigorous and necessary?
