On the head of ocean/develop (2a6b1b6) I'm getting an MPI_Bcast error on a month boundary on Grizzly, after running for several months. The code will then restart and run again. This behavior appears to be new. I have most analysis members on, and the code was compiled with threading but run with one thread, so the error could be related to that.
I'm not fixing this right now, but wanted to make a note in case it happens again.
gr0134:c70x$ pwd
/lustre/scratch2/turquoise/mpeterse/runs/c70x
gr0134:c70x$ mpirun -np 576 ocean_model_170925_2a6b1b6_gr_gfortran
[gr0134:117414] *** An error occurred in MPI_Bcast
[gr0134:117414] *** reported by process [1052246017,27]
[gr0134:117414] *** on communicator MPI COMMUNICATOR 3 DUP FROM 0
[gr0134:117414] *** MPI_ERR_TRUNCATE: message truncated
[gr0134:117414] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
[gr0134:117414] *** and potentially your MPI job)
gr0134:c70x$ tail -f log.ocean.0000.out -n 2
Doing timestep 0001-12-31_23:50:00
Doing timestep 0002-01-01_00:00:00
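
For reference (this is not taken from the MPAS source), MPI_ERR_TRUNCATE from MPI_Bcast usually means the message broadcast by the root rank was larger than the buffer posted by at least one receiving rank, i.e. the ranks disagreed on the count or datatype for a matching broadcast. A threading race that changes a computed size on some ranks, or analysis-member output triggered only at certain time boundaries, could produce the same symptom. A minimal C sketch of the kind of mismatch that triggers this error in Open MPI (the count difference here is purely hypothetical):

/* sketch.c: hypothetical count mismatch that raises MPI_ERR_TRUNCATE */
#include <mpi.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Hypothetical bug: the root broadcasts 20 values, but the other ranks
     * post a receive buffer for only 10, so the incoming message is larger
     * than the buffer and Open MPI aborts with MPI_ERR_TRUNCATE. */
    int count = (rank == 0) ? 20 : 10;
    double *buf = malloc(20 * sizeof(double));
    for (int i = 0; i < 20; i++)
        buf[i] = (double)rank;

    MPI_Bcast(buf, count, MPI_DOUBLE, 0, MPI_COMM_WORLD);

    free(buf);
    MPI_Finalize();
    return 0;
}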