Commit

Updated the FOM section in the README.
vladotomov committed Nov 19, 2018
1 parent 784be35 commit c6ee2a6
Showing 1 changed file with 8 additions and 7 deletions.
README.md:
@@ -294,18 +294,19 @@ A sample run on the [Vulcan](https://computation.llnl.gov/computers/vulcan) BG/Q
 machine at LLNL is:
 
 ```
-srun -n 393216 laghos -pa -p 1 -tf 0.6 -pt 322 -m data/cube_12_hex.mesh \
---cg-tol 0 --cg-max-iter 50 --max-steps 2 -ok 3 -ot 2 -rs 5 -rp 3
+srun -n 294912 laghos -pa -p 1 -tf 0.6 -pt 911 -m data/cube_922_hex.mesh \
+--ode-solver 7 --max-steps 4 \
+--cg-tol 0 --cg-max-iter 50 -ok 3 -ot 2 -rs 5 -rp 2
 ```
-This is Q3-Q2 3D computation on 393,216 MPI ranks (24,576 nodes) that produces
-rates of approximately 168497, 74221, and 16696 megadofs, and a total FOM of
-about 2073 megadofs.
+This is a Q3-Q2 3D computation on 294,912 MPI ranks (18,432 nodes) that produces
+rates of approximately 125419, 55588, and 12674 megadofs, and a total FOM of
+about 2064 megadofs.
 
 To make the above run 8 times bigger, one can either weak scale by using 8 times
 as many MPI tasks and increasing the number of serial refinements: `srun -n
-3145728 ... -rs 6 -rp 3`, or use the same number of MPI tasks but increase the
+2359296 ... -rs 6 -rp 2`, or use the same number of MPI tasks but increase the
 local problem on each of them by doing more parallel refinements: `srun -n
-393216 ... -rs 5 -rp 4`.
+294912 ... -rs 5 -rp 3`.
 
 ## Versions
 
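For reference, here is what those two scaled-up runs look like when written out against the full command above. This is only a sketch: it assumes every flag other than `-n`, `-rs`, and `-rp` stays identical to the base run, which the elided `...` does not promise; in particular, the `-pt` (partition type) argument may need to change with the rank count.

```
# Weak scaling: 8x the MPI tasks plus one more serial refinement (-rs 5 -> 6).
# One uniform refinement of a 3D hex mesh multiplies the element count by 8,
# so the local problem per rank stays roughly the same size.
srun -n 2359296 laghos -pa -p 1 -tf 0.6 -pt 911 -m data/cube_922_hex.mesh \
--ode-solver 7 --max-steps 4 \
--cg-tol 0 --cg-max-iter 50 -ok 3 -ot 2 -rs 6 -rp 2

# Same 294,912 ranks with one more parallel refinement (-rp 2 -> 3):
# the global mesh is again 8x larger, so each rank now owns 8x the elements.
srun -n 294912 laghos -pa -p 1 -tf 0.6 -pt 911 -m data/cube_922_hex.mesh \
--ode-solver 7 --max-steps 4 \
--cg-tol 0 --cg-max-iter 50 -ok 3 -ot 2 -rs 5 -rp 3
```

Either way the total refinement depth goes from 5 + 2 = 7 levels to 8, and since each level octuples the hex count in 3D, the global problem grows by exactly the factor of 8 mentioned above.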
