
Activate report grid memory consumption #1020

Status: Open. Wants to merge 8 commits into base branch: dev.

Conversation

ykempf (Contributor) commented on Aug 22, 2024:

Just for laughs: how long has this gone unused? It's not like I just spent several weeks looking into memory consumption...

We have this function, why not use it?
ykempf changed the base branch from master to dev on Aug 22, 2024 at 13:29.
ykempf (Contributor, Author) commented on Aug 22, 2024:

Suggestion from Markus: implement a similar memory report functionality for fsgrid. Not today.

ykempf (Contributor, Author) commented on Aug 22, 2024:

Example output:

(MEM) tstep 0 t 0 Resident per node (avg, min, max): 2.1022 2.1022 2.1022
(MEM) tstep 0 t 0 High water mark per node                 (GiB) avg: 2.1022 min: 2.1022 max: 2.1022 sum (TiB): 0.00205293 on 1 nodes
(MEM) Total size: 1.46364e+08
(MEM) Total capacity 1.58685e+08
(MEM)   Average capacity: 9.91781e+06 local cells 1.8856e+06 remote cells 8.0322e+06
(MEM)   Max capacity:     1.25012e+07 on  process 6
(MEM)   Min capacity:     7.54602e+06 on  process 2

So let's discuss whether we want more info (e.g. the value and rank of the min and max capacity for local vs. remote cells), and in what format/ordering.

Also, the report should state explicitly that these figures refer to the spatial cells, especially if/when we extend this to also report fsgrid memory usage.
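For reference, here is a minimal sketch (not this PR's actual code; the function and variable names are illustrative) of how the per-process min/max capacity and the owning rank in the output above could be gathered, using MPI_MAXLOC/MPI_MINLOC reductions:

```cpp
#include <mpi.h>
#include <cstdio>

// Sketch only: reduce a per-process capacity figure to average, min, and
// max on rank 0, tracking which rank holds each extremum.
void reportCapacityStats(double localCapacity, MPI_Comm comm) {
   int rank = 0, nProcs = 1;
   MPI_Comm_rank(comm, &rank);
   MPI_Comm_size(comm, &nProcs);

   double sum = 0.0;
   MPI_Reduce(&localCapacity, &sum, 1, MPI_DOUBLE, MPI_SUM, 0, comm);

   // MPI_DOUBLE_INT pairs a value with its owner's rank for MAXLOC/MINLOC.
   struct ValRank { double value; int rank; };
   ValRank in{localCapacity, rank}, maxOut{}, minOut{};
   MPI_Reduce(&in, &maxOut, 1, MPI_DOUBLE_INT, MPI_MAXLOC, 0, comm);
   MPI_Reduce(&in, &minOut, 1, MPI_DOUBLE_INT, MPI_MINLOC, 0, comm);

   if (rank == 0) {
      std::printf("(MEM)   Average capacity: %g\n", sum / nProcs);
      std::printf("(MEM)   Max capacity:     %g on process %d\n", maxOut.value, maxOut.rank);
      std::printf("(MEM)   Min capacity:     %g on process %d\n", minOut.value, minOut.rank);
   }
}
```

Extending this per-cell-class (local vs. remote) would just mean calling the same reduction once per figure.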

@@ -753,13 +753,26 @@ void shrink_to_fit_grid_data(dccrg::Dccrg<SpatialCell,dccrg::Cartesian_Geometry>
}
}

std::vector<CellID> get_all_remote_cells_on_process_boundary(dccrg::Dccrg<SpatialCell,dccrg::Cartesian_Geometry>& mpiGrid) {
Review comment (Contributor):

as per DCCRG neighborhoods, this can go away

@@ -816,7 +831,7 @@ void report_grid_memory_consumption(dccrg::Dccrg<SpatialCell,dccrg::Cartesian_Ge
*/
void deallocateRemoteCellBlocks(dccrg::Dccrg<SpatialCell,dccrg::Cartesian_Geometry>& mpiGrid) {
const std::vector<uint64_t> incoming_cells
= mpiGrid.get_remote_cells_on_process_boundary(VLASOV_SOLVER_NEIGHBORHOOD_ID);
Review comment (Contributor):

And by passing no neighborhood ID to mpiGrid.get_remote_cells_on_process_boundary(), it should return cells for the base neighborhood, i.e. all of them.
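If I read the suggestion right, the call would simply drop the neighborhood argument (a sketch, assuming dccrg falls back to its base neighborhood when none is given, as the comment above describes):

```cpp
// Suggested simplification: rely on dccrg's default neighborhood so the
// returned list covers all remote cells on the process boundary.
const std::vector<uint64_t> incoming_cells
   = mpiGrid.get_remote_cells_on_process_boundary();
```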
