
LAMMPS doesn't run in parallel #4

Open
m-barbosa opened this issue Aug 20, 2024 · 5 comments

Comments


m-barbosa commented Aug 20, 2024

Hello @xiaochendu,

I was trying to run the code on multiple processors but couldn't find a way to make it work. I tried exporting the environment variables in my .bashrc:

export LAMMPS_COMMAND="/mypath/lmp_mpi"
export ASE_LAMMPSRUN_COMMAND="mpirun --np 32 $LAMMPS_COMMAND"

and

os.environ["LAMMPS_COMMAND"]="/mypath/lmp_mpi"
os.environ["ASE_LAMMPSRUN_COMMAND"]="mpirun -np 4 /mypath/lmp_mpi"

just like on the ASE page, but it doesn't seem to work. Is there a way to run LAMMPS in parallel using surface-sampling?
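One pitfall with the in-script variant is that the environment variables only take effect if they are set before ASE constructs the calculator. A minimal sketch (the binary path and core count are placeholders; substitute your own):

```python
import os

# Hypothetical path and core count -- substitute your own LAMMPS MPI binary.
lmp_binary = "/mypath/lmp_mpi"
os.environ["LAMMPS_COMMAND"] = lmp_binary
os.environ["ASE_LAMMPSRUN_COMMAND"] = f"mpirun -np 4 {lmp_binary}"

# These must be set *before* the ASE calculator is created, e.g.
# (requires ASE and a working MPI-enabled LAMMPS build):
#   from ase.calculators.lammpsrun import LAMMPS
#   calc = LAMMPS(...)
```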

xiaochendu (Collaborator) commented Aug 20, 2024

I haven't tried running the code using mpirun. I believe running on multiple processors requires 1) a lammps binary that supports mpirun and 2) a potential that is compatible as well. I think it is only faster if the simulation cell is very big (thousands or millions of atoms). For my use case, I haven't done more than 300 or so atoms.

EDIT: To clarify, using mpirun only parallelizes the lammps energy calculation, but not the sampling.

m-barbosa (Author) commented

I have a LAMMPS binary compiled with MPI support. Some potentials are very slow, like ReaxFF and other reactive potentials. For that reason, when running a CG algorithm with a couple of thousand steps (for thousands of sweeps), parallelization comes in handy.

The problem is, I don't know if I'm doing something wrong, but the environment variables don't seem to be working. As I mentioned before, the ASE documentation specifies that I can use something like this:

export ASE_LAMMPSRUN_COMMAND="/path/to/mpirun --np 4 lmp_binary"

However, for some reason, this doesn't work for parallelizing both energy and optimization calculations. I believe the only LAMMPS binary that the software recognizes is the one that comes with conda install -c conda-forge lammps, but I'm not quite sure about it.

xiaochendu (Collaborator) commented

I see what you mean now. ASE_LAMMPSRUN_COMMAND is only used for the copper example in example.ipynb, which uses the ASE built-in lammpsrun module. For the GaN and Si examples, I created a separate LAMMPSSurfCalc that runs directly through the python lammps module. (Apologies if the imports make it confusing.) You might ask why I chose to do that: I couldn't find a way, using the ASE module, to compute relaxations in a single LAMMPS call. By interfacing with the lammps module directly, I can do that in one step, which I believe is what you are doing.
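A one-call relaxation through the python lammps module can be sketched as follows (an illustrative sketch only, not the actual LAMMPSSurfCalc implementation; it requires the lammps shared library and a system/potential setup in place of the elided lines):

```python
from lammps import lammps

# Quiet instance, as in the repository's LAMMPS setup.
lmp = lammps(cmdargs=["-log", "none", "-screen", "none", "-nocite"])

# ... read the structure and define the potential here ...

# A single minimize command relaxes the structure inside LAMMPS,
# instead of many per-step energy/force calls driven from ASE.
lmp.command("minimize 1.0e-4 1.0e-6 100 1000")

lmp.close()
```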

To run using MPI, I think you can modify LAMMPSCalc, perhaps around this line:

lmp = lammps(cmdargs=["-log", "none", "-screen", "none", "-nocite"])

using the information over here:
https://docs.lammps.org/Python_launch.html#running-lammps-and-python-in-parallel-with-mpi
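Following that documentation page, a parallel launch might be sketched like this (assuming a LAMMPS shared library built with MPI support and mpi4py installed; untested against this repository):

```python
# Launch with: mpirun -np 4 python this_script.py
from mpi4py import MPI
from lammps import lammps

# With mpi4py initialized, the lammps module uses MPI_COMM_WORLD by
# default; every rank constructs the same collective instance.
comm = MPI.COMM_WORLD
lmp = lammps(cmdargs=["-log", "none", "-screen", "none", "-nocite"])

# ... issue commands as usual; all ranks execute them collectively ...
# lmp.file("in.lammps")

# Restrict I/O (printing, writing files) to rank 0.
if comm.Get_rank() == 0:
    print("LAMMPS running on", comm.Get_size(), "MPI ranks")

lmp.close()
```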

m-barbosa (Author) commented

I have followed the instructions at the link you sent, adding something like comm = MPI.COMM_WORLD next to the lmp line, but I've started running into other problems: sometimes with MPI, and other times, after a few sweeps (usually 3-6), LAMMPS fails with Exception: ERROR: Energy was not tallied on needed timestep (src/compute_pe.cpp:88). One thing I've noticed is that the code was much faster with MPI, but I couldn't solve the compute error, which does not occur in a serial run.
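That error typically appears when the potential energy is read on a timestep where LAMMPS has not tallied it. Two hedged workarounds, sketched with the python lammps module (untested against this code; the setup lines are elided):

```python
from lammps import lammps

lmp = lammps(cmdargs=["-log", "none", "-screen", "none", "-nocite"])
# ... system setup elided ...

# Option 1: tally thermo quantities (including pe) on every timestep.
lmp.command("thermo 1")

# Option 2: force a re-tally with a zero-length run just before reading.
lmp.command("run 0 post no")
pe = lmp.get_thermo("pe")
```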

I also tested an unofficial LAMMPS implementation called pylammpsmpi, which allows running LAMMPS in parallel from serial Python code. It worked, but it was slower than a serial run.

xiaochendu (Collaborator) commented Sep 9, 2024

Hi @m-barbosa, apologies, I was busy last week. I don't have much experience parallelizing LAMMPS. But if you're willing to send over the modified code, scripts, potential, and a test system (doesn't have to be exactly what you're doing), I can try out some runs myself.
