Summary
Hi,
I encountered an issue when using DP for an MD simulation (a .pth model from the PyTorch backend). The model can be loaded successfully, but then the following error occurs:
(the error is shown in the attached screen photo; sorry for posting a screenshot)
I wonder where this issue could be coming from. The model itself seems to be fine, since it can be used for inference through the Python API as usual, and no errors occur when running the dp commands.
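For reference, the Python-API check I mean is roughly like the following (a minimal sketch; the model path model.pth and the toy two-atom configuration are placeholders for my actual frozen model and data):

```python
# Minimal sketch of the Python-API inference check.
# "model.pth" and the two-atom toy system are placeholders.
import numpy as np
from deepmd.infer import DeepPot

dp = DeepPot("model.pth")  # frozen PyTorch-backend model (placeholder name)

# One frame, two atoms: coordinates flattened to shape (nframes, natoms * 3)
coord = np.array([[0.0, 0.0, 0.0,
                   0.0, 0.0, 1.5]])
cell = np.diag(10.0 * np.ones(3)).reshape(1, 9)  # 10 Å cubic box
atype = [0, 1]                                   # type index of each atom

e, f, v = dp.eval(coord, cell, atype)            # energy, forces, virial
print(e, f.shape, v.shape)
```

A call like this goes through without errors, which is why I suspect the problem is on the LAMMPS/C++ side rather than in the model itself.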
Best
DeePMD-kit Version
DeePMD-kit v3.0.0a0
Backend and its version
PyTorch
Python Version, CUDA Version, GCC Version, LAMMPS Version, etc
No response
Details
See above
MD simulation for the DPA-2 model is supported on the 2024Q1 branch, and multiprocess MD is supported on the devel branch. Note that your model should be frozen with the same code version as the one your C++ inference code is built from. (Sometimes a model trained with an old version cannot be frozen by a newer one.)
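As a quick first check (just a sketch: it only reports the Python-side package version, and it assumes deepmd.__version__ is exposed as in recent releases; the code version your LAMMPS/C++ plugin was built from has to be confirmed from your own build), you can print the installed version and compare the two by hand:

```python
# Sanity check: print the Python-side DeePMD-kit version, then compare it
# by hand with the code version used to build the LAMMPS/C++ plugin.
# (Assumes deepmd.__version__ is exposed, as in recent releases.)
import deepmd

print("Python-side deepmd-kit version:", deepmd.__version__)
```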
All of the work (training, freezing, testing, etc.) was done with the same code version, so I guess there's no version issue at all. I'm now using the devel branch (I'll double-check that later), so according to your information it should be fine to use it for MD simulation, right? What is the multiprocess MD simulation you mean?