This should be fixed as part of implementing ModelLibrary (see JP-3690).
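To illustrate the access pattern ModelLibrary is intended to provide, here is a toy sketch (the class and method names here are illustrative, not the real jwst API): models stay on disk, are loaded one at a time when borrowed, and are released when shelved, so peak memory stays near the size of a single model rather than the whole association.

```python
# Toy sketch of an on-disk borrow/shelve library. This is NOT the jwst
# ModelLibrary implementation, just an illustration of the pattern.

class ToyLibrary:
    def __init__(self, filenames):
        self._filenames = filenames
        self._loaded = {}  # index -> model currently borrowed

    def borrow(self, index):
        # Load from disk only when requested (loading is faked here).
        model = {"filename": self._filenames[index], "data": [0.0] * 4}
        self._loaded[index] = model
        return model

    def shelve(self, model, index):
        # Write results back to disk (omitted) and drop the in-memory copy.
        del self._loaded[index]


lib = ToyLibrary(["a_cal.fits", "b_cal.fits"])
for i in range(2):
    m = lib.borrow(i)
    m["data"][0] = 1.0  # process the borrowed model
    lib.shelve(m, i)

print(len(lib._loaded))  # no models remain resident after processing
```

The key point is that each model's lifetime is bounded by the borrow/shelve pair, so memory does not scale with the number of association members.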
Here is a brief summary of memory-profiling results for this step run on its own in my PR branch, using as input an association containing a 46-cal-file subset of the data in
███████████████████████████████████████████
The total size of the input _cal files on disk is roughly 5 GiB.
With in_memory=True, the peak memory usage is 15.4 GiB; I'm attaching a graph of usage over time to the ticket (resample_in_memory.png, sorry for the similar name to what Brett attached originally).
With in_memory=False, the peak memory usage is 10.6 GiB; a graph is again attached (resample_on_disk.png).
This appears to be fixed. Analysis of the memory allocation breakdown from the flamegraph indicates that the models are never all loaded into memory at once. However, the most memory-intensive part of the step is now resample_variance_arrays(), which must allocate several large output arrays, one per type of variance being resampled. In this example, each of those arrays has shape (9348, 13432). It is unclear to me how to improve this, and I'd say it's beyond the scope of this ticket.
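A quick back-of-the-envelope check shows why those output arrays dominate. The shape comes from the profiling run above; the dtype (float32) and the number of simultaneously allocated arrays (five: three variance types plus the resampled science array and weight map) are assumptions for illustration only.

```python
# Rough footprint of the large resampled output arrays.
shape = (9348, 13432)       # from the profiling run described above
bytes_per_pixel = 4         # assuming float32
n_elements = shape[0] * shape[1]
per_array_gib = n_elements * bytes_per_pixel / 2**30
print(f"one array: {per_array_gib:.2f} GiB")

# Assuming five such arrays are live at once (three variance types,
# science, and weight), the combined footprint is:
total_gib = 5 * per_array_gib
print(f"five such arrays: {total_gib:.2f} GiB")
```

Roughly half a GiB per array, so a handful of them together accounts for a few GiB of the observed peak regardless of how the input models are managed.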
Issue JP-3621 was created on JIRA by Brett Graham:
ResampleStep, when run with in_memory=False, reads all models into memory. For a 33-member association the memory usage is shown in the attached graph. One line that reads all models is:
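The effect described above can be demonstrated in isolation. The sketch below is not the jwst code; it contrasts eagerly materializing every "model" in a list (which keeps them all resident at once, defeating in_memory=False) with lazy iteration (which holds only one at a time), using tracemalloc to measure the peak.

```python
import tracemalloc


def open_model(name, n=200_000):
    # Stand-in for loading one cal file: allocate a large buffer.
    return bytearray(n)


names = [f"model_{i}.fits" for i in range(10)]


def peak_of(fn):
    # Measure peak allocation while running fn().
    tracemalloc.start()
    fn()
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return peak


# Eager: the list comprehension keeps all 10 buffers alive at once.
eager_peak = peak_of(lambda: max(len(m) for m in [open_model(n) for n in names]))

# Lazy: the generator yields one buffer at a time; each is freed
# before the next one is opened.
lazy_peak = peak_of(lambda: max(len(m) for m in (open_model(n) for n in names)))

print(eager_peak, lazy_peak)
```

The eager peak scales with the number of association members, while the lazy peak stays near the size of one model, which is the behavior in_memory=False is supposed to deliver.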
jwst/jwst/resample/resample_step.py, line 67 at commit 8b254ae