Swap apfel with eko #1490
It'd be nice if, while we're at it, we could make the evolution happen automatically (or with an optional flag) after the fit is done. Currently, when I submit a fit to the cluster, I always need to remember to submit a job to evolve the fit too. This often takes longer than if it happened for each replica automatically, because I forget that I have to do this (or that I actually have to be in front of my laptop to physically submit the job). I always liked the "fire and forget" strategy.
It might make more sense to do it at the level of postfit, since it is part of the operation that creates the LHAPDF file. That way it can still be done for all PDFs at once.
But postfit is an operation that takes <1 min and can be done on my laptop. The evolution is another hour of compute time and I usually submit it to the cluster.
It seems that the benefit of doing it all at once would be small if the operator is stored. So I'd say it should be done in the fit job again.
The application of the eko as well. Anyway, we'll cross that bridge when we get to it, depending on how long it actually takes.
Only if you still don't have it: suddenly you have 100 jobs running the same thing and only one of them is useful.
What would be the same thing? Afaict it would be applying one stored operator to different replicas. So we would only be missing out on cache performance.
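To make the "one stored operator, many replicas" point concrete, here is a minimal sketch of why recomputation isn't needed per replica: once the operator exists, evolving a replica is just a linear contraction. The shapes and layout below are purely illustrative, not eko's actual API.

```python
import numpy as np

# Illustrative shapes only; this is NOT eko's actual operator layout.
# operator: maps (flavor_in, x_in) at the fitting scale to
# (flavor_out, x_out) at the target scale.
rng = np.random.default_rng(0)
n_rep, n_flav, n_x = 100, 14, 50
operator = rng.normal(size=(n_flav, n_x, n_flav, n_x))
pdfs = rng.normal(size=(n_rep, n_flav, n_x))  # all replicas at initial scale

# Applying the stored operator to every replica is a single tensor
# contraction; the expensive step (building `operator`) happens once.
evolved = np.einsum("aibj,rbj->rai", operator, pdfs)
assert evolved.shape == (n_rep, n_flav, n_x)
```

Since the operation is linear in the PDF, it is also cheap enough to run inside each fit job without duplicating the expensive work.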
@scarlehoff is this actually relevant? Can't In case the answer is "it has to be |
I'll deal with this in due time, I guess. It is a problem as well (for a given value of problem) for pineappl in the other PR, since pineappl depends on eko (and there's no conda package for the latest pineappl? Or maybe that was fixed). On second thought, if there's no need for apfel and C++, there's no need for conda.
That sounds great 🎉. There will still be compiled
Just for my personal record: the answer to this question seems to be no, |
For now I'd strongly prefer having an eko conda package (which is really easy if it doesn't use any compiled code), as we have a fair amount of infrastructure depending on it (CI, the server package repo, documentation, data file paths, LHAPDF, docker, frozen environments) that I don't think we need to be touching for the moment, concurrently with everything else. After we have had the code working without C++ for a while, we could start thinking about alternatives for all of the above, and only then see whether it makes sense for us to use wheels.
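If eko really carries no compiled code, the conda recipe could be a small `noarch: python` one. A rough sketch is below; the version, URL, checksum, and dependency list are placeholders, not a tested recipe:

```yaml
package:
  name: eko
  version: "0.5.0"          # placeholder; see the note below about v0.5.1

source:
  url: https://pypi.io/packages/source/e/eko/eko-0.5.0.tar.gz
  sha256: # fill in from PyPI

build:
  noarch: python            # no compiled code, so one package for all platforms
  number: 0
  script: "{{ PYTHON }} -m pip install . -vv"

requirements:
  host:
    - python
    - pip
  run:
    - python
    - numpy                 # assumed runtime dependencies; check setup.cfg
    - scipy
```

A `noarch: python` recipe avoids per-platform builds entirely, which is what makes the "really easy" claim plausible.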
Ok, fine, but I would scope this to the fit only, and keep theory generation wheel-based instead, since it's actually already working that way (and without C++ that's the trend in any case, I would say). Concerning
Instead, concerning wheels:
To make it explicit: I'm not advocating for stepping immediately into wheels (which I'm not against either); for me, having a transition period is also fine.
Indeed, this needs to be in working order in the immediate future if we are going to depend on pineappl.
if I remember correctly @scarrazza volunteered for this at some point 🙃 (since he also did this in the past https://github.com/conda-forge/pineappl-feedstock ) |
@felixhekhorn yes, I did a quick attempt here https://dev.azure.com/conda-forge/feedstock-builds/_build/results?buildId=459202&view=logs&j=9a864fd9-6c8f-52ca-79ce-2aa6dca1a1de&t=10fc5aa2-324e-5982-4c88-6b31fcab16b3, but there is still something to understand (I did not try too hard).
Here is a similar package that might be useful to look at: https://github.com/conda-forge/datafusion-feedstock/blob/master/recipe/meta.yaml
I would remove
P.S.: take into account that v0.5.1 is a broken release for the python package, even if you're not yet hitting the bug (so I'd rather try v0.5.0 or wait for v0.5.2).
As an aside (to the aside), it would probably make sense if these things followed the same philosophy as nnpdf, whereby each commit to master makes a new version.
Effectively, this is what is happening in But this is a bit heavy, and there is actually one branch ( But since tags are tags, I don't think that there is much need for syncing |
Currently the fit workflow requires running `apfel` in order to evolve the PDF from the initial scale to create the LHAPDF grid. In principle `eko` should already be able to do that, at least for the standard settings. Doing this within the framework requires a few steps though:
- Packaging `eko` so that `nnpdf` can depend on it.
- An `eko_operator` object, which is the output of eko and will be stored in the server, so that once we have computed one it doesn't need to be computed again.
- A script that takes the `n3fit` PDF at the initial scale, applies the `eko_operator` to it, and writes down a PDF at any scale, replacing `evolven3fit`.

I will start on this after the LHAPDFSet thing is done in pure python.
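The "store the operator in the server so it is computed only once" step amounts to content-addressed caching keyed on the theory settings. A minimal sketch of that idea, assuming hypothetical helper names (`compute_operator` stands in for the hour-long eko run; none of this is the actual nnpdf/eko interface):

```python
import hashlib
import json
import tempfile
from pathlib import Path

def operator_path(cache_dir: Path, theory_card: dict) -> Path:
    # Key the stored operator on the full theory settings.
    key = hashlib.sha256(
        json.dumps(theory_card, sort_keys=True).encode()
    ).hexdigest()[:16]
    return cache_dir / f"eko_operator_{key}.json"

def get_operator(cache_dir: Path, theory_card: dict, compute_operator):
    path = operator_path(cache_dir, theory_card)
    if path.exists():  # cache hit: later fits and replicas reuse it
        return json.loads(path.read_text())
    operator = compute_operator(theory_card)  # expensive, done once
    path.write_text(json.dumps(operator))
    return operator

# Usage: the second call is served from disk, so the expensive
# computation runs exactly once for a given theory card.
calls = []
def compute_operator(card):
    calls.append(card)
    return {"q2": card["Q0"] ** 2}

with tempfile.TemporaryDirectory() as tmp:
    card = {"Q0": 1.65, "PTO": 2}
    get_operator(Path(tmp), card, compute_operator)
    get_operator(Path(tmp), card, compute_operator)
    assert len(calls) == 1
```

In the real setup the cache directory would be the shared server storage rather than a local path, but the lookup-or-compute logic is the same.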
Mentioning you two so you can keep track of the progress on this
@alecandido @felixhekhorn @giacomomagni @andreab1997