[Paddle Backend] Add spin energy example (revert code format) #3082

Merged
59 commits merged on Dec 25, 2023
Commits
df60692
Add the Paddle version of Ener model (Ener fit/Descrpt se_a), lr, loss
zhwesky2010 Apr 10, 2021
a4adb02
fix Ener model double grad
zhwesky2010 Apr 10, 2021
f060845
add paddle version of prod_env_mat_a
JiabinYang Apr 10, 2021
1648829
remove additional log
JiabinYang Apr 10, 2021
b9eadb4
Merge branch 'deemd2paddle' of https://github.com/JiabinYang/deepmd-k…
zhwesky2010 Apr 11, 2021
8e52aa7
add prod_force and prod_virial op in paddle
JiabinYang Apr 11, 2021
98a0539
remove old file
JiabinYang Apr 11, 2021
4f2496b
remove additional change
JiabinYang Apr 11, 2021
a9bd2c7
remove additional change
JiabinYang Apr 11, 2021
ffd165c
add ut and ut used kernel for prod and virial
JiabinYang Apr 11, 2021
c7723e5
rename test
JiabinYang Apr 11, 2021
7cb1c94
Merge branch 'deemd2paddle' of https://github.com/JiabinYang/deepmd-k…
zhwesky2010 Apr 12, 2021
5bd354c
rename test
JiabinYang Apr 12, 2021
d596c18
Merge branch 'api' of https://github.com/deepmodeling/deepmd-kit into…
JiabinYang Apr 12, 2021
604ee62
Merge branch 'deemd2paddle' of https://github.com/JiabinYang/deepmd-k…
zhwesky2010 Apr 12, 2021
c24e33e
support GPU backward of force and virial
JiabinYang Apr 13, 2021
9426f1c
Merge pull request #499 from JiabinYang/deemd2paddle
amcadmus Apr 13, 2021
5e4795b
Add Ener Model for Paddle
zhwesky2010 Apr 13, 2021
0031e55
Merge branch 'deemd2paddle' of https://github.com/JiabinYang/deepmd-k…
zhwesky2010 Apr 13, 2021
dcb1631
temp support gpu with cpu kernel on virial
JiabinYang Apr 14, 2021
dde23ee
renew api usage to fit latest paddle
JiabinYang Apr 14, 2021
e988870
Merge pull request #512 from JiabinYang/deemd2paddle
amcadmus Apr 14, 2021
13b8e6f
Add Ener Model for Paddle
zhwesky2010 Apr 15, 2021
f813c77
fix Ener Model Infer
zhwesky2010 Apr 18, 2021
ddcb9d7
Merge pull request #529 from zhouwei25/deepmd2paddle
amcadmus Apr 19, 2021
7f3802f
fix error in cpu mode
JiabinYang Apr 22, 2021
f1bccb1
Merge pull request #556 from JiabinYang/fix_prod_mat_env_a_cpu
amcadmus Apr 22, 2021
87effc5
support jit save/load
JiabinYang May 7, 2021
4b24e1f
Merge pull request #597 from JiabinYang/support_jit_save_load
amcadmus May 7, 2021
9a92b7f
fix change device error code
JiabinYang Jun 21, 2021
55670b2
Merge pull request #779 from JiabinYang/change_device
amcadmus Jun 22, 2021
75f96f4
[Paddle] Fixed model save issues with Ener model
jim19930609 Jul 20, 2021
a87e7b3
Merge pull request #870 from jim19930609/paddle
amcadmus Jul 20, 2021
156c0d3
Force env_mat force_se_a virial_se_a to fallback on CPU
jim19930609 Jul 23, 2021
d55286d
Merge pull request #880 from jim19930609/paddle
amcadmus Jul 23, 2021
45a2962
Revert "Force env_mat force_se_a virial_se_a to fallback on CPU"
jim19930609 Oct 22, 2021
e5aeb25
Merge pull request #1230 from jim19930609/paddle
amcadmus Oct 25, 2021
fc78e6d
update reprod water_se2_a code
HydrogenSulfate Nov 7, 2023
0af71a0
update ugly but runnable code
HydrogenSulfate Nov 26, 2023
4689924
refine code
HydrogenSulfate Nov 26, 2023
0fd9f23
fix for missing code
HydrogenSulfate Nov 27, 2023
0000512
add unit test code and fix for custom op installation in python
HydrogenSulfate Nov 27, 2023
03c1318
update README for unit test of python custom op
HydrogenSulfate Nov 27, 2023
46dbc9c
refine docs
HydrogenSulfate Nov 27, 2023
a38d4f0
polish code
HydrogenSulfate Nov 28, 2023
dc5f2a1
update CPU training and content in README
HydrogenSulfate Nov 28, 2023
9fc9a67
merge old paddle branch
HydrogenSulfate Nov 28, 2023
6c290a2
remove old code due to merge
HydrogenSulfate Nov 28, 2023
1b60bfd
[pre-commit.ci] auto fixes from pre-commit.com hooks
pre-commit-ci[bot] Nov 28, 2023
4eeb08a
remove C++ inference dependency of tensorflow and add more buffer
HydrogenSulfate Nov 30, 2023
095d493
Merge branch 'add_ddle_backend_polish_ver' of https://github.com/Hydr…
HydrogenSulfate Nov 30, 2023
b3c4a6f
update dev code
HydrogenSulfate Dec 1, 2023
691d85e
add spin_energy python train/test code
HydrogenSulfate Dec 4, 2023
7bbc875
refine code
HydrogenSulfate Dec 4, 2023
17223e7
update README.md
HydrogenSulfate Dec 4, 2023
3eb254c
revert code formatting
HydrogenSulfate Dec 22, 2023
52638f2
update reformatting code
HydrogenSulfate Dec 22, 2023
5fa9d59
remove more annotations in code
HydrogenSulfate Dec 22, 2023
7e2a467
fix typo
HydrogenSulfate Dec 25, 2023
9 changes: 6 additions & 3 deletions README.md
@@ -1,7 +1,11 @@
# DeePMD-kit(PaddlePaddle backend)

> [!IMPORTANT]
> This project is the PaddlePaddle version of DeePMD-kit, with part of the code modified so that it runs on PaddlePaddle. Supported functionality covers four parts for the water_se_e2_a example: single-GPU training, single-GPU evaluation, static-graph model export, and LAMMPS (GPU) inference.
> This project is the PaddlePaddle version of DeePMD-kit, with part of the code modified so that training, evaluation, model export, and LAMMPS inference can run on the PaddlePaddle (GPU) backend. Example support is listed below.
> | example | Train | Test | Export | LAMMPS |
> | :-----: | :--: | :--: | :----: | :---: |
> | water/se_e2_a | ✅ | ✅ | ✅ | ✅ |
> | spin/se_e2_a | ✅ | ✅ | ✅ | TODO |

## 1. Environment setup

@@ -20,9 +24,8 @@

3. Install deepmd-kit


``` sh
git clone https://github.com/HydrogenSulfate/deepmd-kit.git -b add_ddle_backend_polish_ver
git clone https://github.com/deepmodeling/deepmd-kit.git -b paddle2
cd deepmd-kit
# Install in editable mode for easier debugging
pip install -e . -i https://pypi.tuna.tsinghua.edu.cn/simple
53 changes: 33 additions & 20 deletions deepmd/descriptor/se_a.py
@@ -203,9 +217,17 @@ def __init__(
self.useBN = False
self.dstd = None
self.davg = None
self.avg_zero = paddle.zeros([self.ntypes, self.ndescrpt], dtype="float32")
self.std_ones = paddle.ones([self.ntypes, self.ndescrpt], dtype="float32")

# self.compress = False
# self.embedding_net_variables = None
# self.mixed_prec = None
# self.place_holders = {}
# self.nei_type = np.repeat(np.arange(self.ntypes), self.sel_a)
self.avg_zero = paddle.zeros(
[self.ntypes, self.ndescrpt], dtype=GLOBAL_PD_FLOAT_PRECISION
)
self.std_ones = paddle.ones(
[self.ntypes, self.ndescrpt], dtype=GLOBAL_PD_FLOAT_PRECISION
)
nets = []
for type_input in range(self.ntypes):
layer = []
@@ -242,11 +250,19 @@ def __init__(
}

self.t_rcut = paddle.to_tensor(
np.max([self.rcut_r, self.rcut_a]), dtype="float32"
np.max([self.rcut_r, self.rcut_a]), dtype=GLOBAL_PD_FLOAT_PRECISION
)
self.register_buffer("buffer_sel", paddle.to_tensor(self.sel_a, dtype="int32"))
self.register_buffer(
"buffer_ndescrpt", paddle.to_tensor(self.ndescrpt, dtype="int32")
)
self.register_buffer(
"buffer_original_sel",
paddle.to_tensor(
self.original_sel if self.original_sel is not None else self.sel_a,
dtype="int32",
),
)
self.t_ntypes = paddle.to_tensor(self.ntypes, dtype="int32")
self.t_ndescrpt = paddle.to_tensor(self.ndescrpt, dtype="int32")
self.t_sel = paddle.to_tensor(self.sel_a, dtype="int32")

t_avg = paddle.to_tensor(
np.zeros([self.ntypes, self.ndescrpt]), dtype="float64"
@@ -539,6 +555,7 @@ def forward(
coord = paddle.reshape(coord_, [-1, natoms[1] * 3])
box = paddle.reshape(box_, [-1, 9])
atype = paddle.reshape(atype_, [-1, natoms[1]])

(
self.descrpt,
self.descrpt_deriv,
@@ -669,7 +686,7 @@ def _pass_filter(
[0, start_index, 0],
[
inputs.shape[0],
start_index + natoms[2 + type_i],
start_index + natoms[2 + type_i].item(),
inputs.shape[2],
],
)
Expand Down Expand Up @@ -697,7 +714,7 @@ def _pass_filter(
)
output.append(layer)
output_qmat.append(qmat)
start_index += natoms[2 + type_i]
start_index += natoms[2 + type_i].item()
else:
raise NotImplementedError()
# This branch will not be executed currently
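
Note on the `.item()` calls in this hunk: indexing the int32 `natoms` tensor returns a Paddle tensor rather than a Python int, and `.item()` materializes the per-type atom count so the slice bounds stay plain integers. A minimal sketch of the difference, with made-up `natoms` values (the layout `[nloc, nall, n_type0, n_type1]` follows the convention visible elsewhere in this diff):

```python
import paddle

# Hypothetical natoms layout: [nloc, nall, natoms_of_type_0, natoms_of_type_1]
natoms = paddle.to_tensor([192, 192, 64, 128], dtype="int32")

type_i = 0
start_index = 0

# Indexing a Paddle tensor yields another tensor, so this bound is tensor-valued.
end_as_tensor = start_index + natoms[2 + type_i]

# .item() pulls the count out as a plain Python int, keeping the slice bound
# ordinary integer arithmetic, which is what the change in this hunk does.
end_as_int = start_index + natoms[2 + type_i].item()

print(type(end_as_tensor), type(end_as_int))
```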
@@ -747,13 +764,11 @@ def _compute_dstats_sys_smth(
self, data_coord, data_box, data_atype, natoms_vec, mesh
):
input_dict = {}
input_dict["coord"] = paddle.to_tensor(data_coord, dtype="float32")
input_dict["box"] = paddle.to_tensor(data_box, dtype="float32")
input_dict["type"] = paddle.to_tensor(data_atype, dtype="int32")
input_dict["natoms_vec"] = paddle.to_tensor(
natoms_vec, dtype="int32", place="cpu"
)
input_dict["default_mesh"] = paddle.to_tensor(mesh, dtype="int32")
input_dict["coord"] = paddle.to_tensor(data_coord, GLOBAL_PD_FLOAT_PRECISION)
input_dict["box"] = paddle.to_tensor(data_box, GLOBAL_PD_FLOAT_PRECISION)
input_dict["type"] = paddle.to_tensor(data_atype, "int32")
input_dict["natoms_vec"] = paddle.to_tensor(natoms_vec, "int32", place="cpu")
input_dict["default_mesh"] = paddle.to_tensor(mesh, "int32")

self.stat_descrpt, descrpt_deriv, rij, nlist = op_module.prod_env_mat_a(
input_dict["coord"],
@@ -949,10 +964,8 @@ def _filter_lower(
# natom x 4 x outputs_size

return paddle.matmul(
paddle.reshape(
inputs_i, [natom, shape_i[1] // 4, 4]
), # [natom, nei_type_i, 4]
xyz_scatter_out, # [natom, nei_type_i, 100]
paddle.reshape(inputs_i, [natom, shape_i[1] // 4, 4]),
xyz_scatter_out,
transpose_x=True,
)

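The descriptor changes in this file replace hard-coded `"float32"` dtypes with the `GLOBAL_PD_FLOAT_PRECISION` constant and register several int32 tensors as named buffers (`buffer_sel`, `buffer_ndescrpt`, ...) so they are kept with the exported model. A minimal, self-contained sketch of that pattern; the stand-in precision value, toy layer, and sizes below are illustrative only:

```python
import paddle

# Stand-in for deepmd's GLOBAL_PD_FLOAT_PRECISION (the real constant lives in the package).
GLOBAL_PD_FLOAT_PRECISION = "float64"


class DescriptorSketch(paddle.nn.Layer):
    def __init__(self, ntypes: int, ndescrpt: int, sel_a: list):
        super().__init__()
        # Statistics tensors follow the global float precision instead of a hard-coded dtype.
        self.avg_zero = paddle.zeros([ntypes, ndescrpt], dtype=GLOBAL_PD_FLOAT_PRECISION)
        self.std_ones = paddle.ones([ntypes, ndescrpt], dtype=GLOBAL_PD_FLOAT_PRECISION)
        # Registered buffers show up in named_buffers() and are intended to survive export
        # (see the skip_prune_program note in the freeze.py diff below).
        self.register_buffer("buffer_sel", paddle.to_tensor(sel_a, dtype="int32"))
        self.register_buffer("buffer_ndescrpt", paddle.to_tensor(ndescrpt, dtype="int32"))


layer = DescriptorSketch(ntypes=2, ndescrpt=552, sel_a=[46, 92])
for name, buf in layer.named_buffers():
    print(name, buf.dtype, buf.shape)
```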
6 changes: 3 additions & 3 deletions deepmd/entrypoints/freeze.py
@@ -350,7 +350,7 @@ def freeze_graph(
input_spec=[
InputSpec(shape=[None], dtype="float64"), # coord_
InputSpec(shape=[None], dtype="int32"), # atype_
InputSpec(shape=[4], dtype="int32"), # natoms
InputSpec(shape=[2 + dp.model.descrpt.ntypes], dtype="int32"), # natoms
InputSpec(shape=[None], dtype="float64"), # box
InputSpec(shape=[6], dtype="int32"), # mesh
{
@@ -362,9 +362,9 @@
)
for name, param in st_model.named_buffers():
print(
f"[{name}, {param.shape}] generated name in static_model is: {param.name}"
f"[{name}, {param.dtype}, {param.shape}] generated name in static_model is: {param.name}"
)
# skip pruning for program so as to keep buffers into files
# skip pruning for program so as to keep buffers into files
skip_prune_program = True
print(f"==>> Set skip_prune_program = {skip_prune_program}")
paddle.jit.save(st_model, output, skip_prune_program=skip_prune_program)
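The `freeze.py` change sizes the `natoms` InputSpec as `2 + ntypes` (nloc, nall, plus one count per atom type) instead of the hard-coded `[4]`, which only fits a two-type system. A hedged sketch of declaring such an input spec when converting a dynamic-graph layer to a static graph; the toy model below is illustrative, not the real Ener model:

```python
import paddle
from paddle.static import InputSpec


class ToyModel(paddle.nn.Layer):
    def forward(self, coord, natoms):
        # natoms layout: [nloc, nall, natoms_type_0, ..., natoms_type_{ntypes-1}]
        return paddle.sum(coord) + paddle.cast(natoms[0], "float64")


ntypes = 2  # illustrative; the real code reads this from dp.model.descrpt.ntypes
static_model = paddle.jit.to_static(
    ToyModel(),
    input_spec=[
        InputSpec(shape=[None], dtype="float64", name="coord"),
        # Two leading entries (nloc, nall) plus one per-type count.
        InputSpec(shape=[2 + ntypes], dtype="int32", name="natoms"),
    ],
)
```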
49 changes: 27 additions & 22 deletions deepmd/fit/ener.py
@@ -182,6 +182,8 @@ def __init__(
self.atom_ener.append(None)
self.useBN = False
self.bias_atom_e = np.zeros(self.ntypes, dtype=np.float64)
ntypes_atom = self.ntypes - self.ntypes_spin
self.bias_atom_e = self.bias_atom_e[:ntypes_atom]
self.register_buffer(
"t_bias_atom_e",
paddle.to_tensor(self.bias_atom_e),
@@ -259,7 +261,6 @@ def __init__(
1,
activation_fn=None,
precision=self.fitting_precision,
bavg=self.bias_atom_e,
name=layer_suffix,
seed=self.seed,
trainable=self.trainable[-1],
@@ -321,6 +322,26 @@ def compute_output_stats(self, all_stat: dict, mixed_type: bool = False) -> None
self.bias_atom_e = self._compute_output_stats(
all_stat, rcond=self.rcond, mixed_type=mixed_type
)
ntypes_atom = self.ntypes - self.ntypes_spin
if self.spin is not None:
for type_i in range(ntypes_atom):
if self.bias_atom_e.shape[0] != self.ntypes:
self.bias_atom_e = np.pad(
self.bias_atom_e,
(0, self.ntypes_spin),
"constant",
constant_values=(0, 0),
)
bias_atom_e = self.bias_atom_e
if self.spin.use_spin[type_i]:
self.bias_atom_e[type_i] = (
self.bias_atom_e[type_i]
+ self.bias_atom_e[type_i + ntypes_atom]
)
else:
self.bias_atom_e[type_i] = self.bias_atom_e[type_i]
self.bias_atom_e = self.bias_atom_e[:ntypes_atom]

paddle.assign(self.bias_atom_e, self.t_bias_atom_e)

def _compute_output_stats(self, all_stat, rcond=1e-3, mixed_type=False):
@@ -525,26 +546,10 @@ def forward(
self.aparam_inv_std = 1.0

ntypes_atom = self.ntypes - self.ntypes_spin
if self.spin is not None:
for type_i in range(ntypes_atom):
if self.bias_atom_e.shape[0] != self.ntypes:
self.bias_atom_e = np.pad(
self.bias_atom_e,
(0, self.ntypes_spin),
"constant",
constant_values=(0, 0),
)
bias_atom_e = self.bias_atom_e
if self.spin.use_spin[type_i]:
self.bias_atom_e[type_i] = (
self.bias_atom_e[type_i]
+ self.bias_atom_e[type_i + ntypes_atom]
)
else:
self.bias_atom_e[type_i] = self.bias_atom_e[type_i]
self.bias_atom_e = self.bias_atom_e[:ntypes_atom]

inputs = paddle.reshape(inputs, [-1, natoms[0], self.dim_descrpt])
inputs = paddle.reshape(
inputs, [-1, natoms[0], self.dim_descrpt]
) # [1, all_atoms, M1*M2]
if len(self.atom_ener):
# only for atom_ener
nframes = input_dict.get("nframes")
@@ -558,7 +563,7 @@
inputs_zero = paddle.zeros_like(inputs, dtype=GLOBAL_PD_FLOAT_PRECISION)

if bias_atom_e is not None:
assert len(bias_atom_e) == self.ntypes
assert len(bias_atom_e) == self.ntypes - self.ntypes_spin

fparam = None
if self.numb_fparam > 0:
@@ -590,7 +595,7 @@
atype_nall,
[0, 1],
[0, 0],
[-1, paddle.sum(natoms[2 : 2 + ntypes_atom]).item()],
[atype_nall.shape[0], paddle.sum(natoms[2 : 2 + ntypes_atom]).item()],
)
atype_filter = paddle.cast(self.atype_nloc >= 0, GLOBAL_PD_FLOAT_PRECISION)
self.atype_nloc = paddle.reshape(self.atype_nloc, [-1])
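The fitting-net change moves the spin bias handling from `forward` into `compute_output_stats`: when spin is enabled, the bias of each spin pseudo-type is folded into its real atom type and the bias array is then truncated to the real types, so `forward` no longer mutates it on every call. A small NumPy sketch of that folding with made-up numbers, assuming two real atom types of which only the first carries spin:

```python
import numpy as np

ntypes = 3                # 2 real atom types + 1 spin pseudo-type (illustrative)
ntypes_spin = 1
ntypes_atom = ntypes - ntypes_spin
use_spin = [True, False]  # only the first real type carries spin

bias_atom_e = np.array([1.5, 0.7, 0.3])  # per-type energy bias, spin pseudo-type last

# Pad with zeros if the statistics produced fewer entries (mirrors the np.pad in the diff).
if bias_atom_e.shape[0] != ntypes:
    bias_atom_e = np.pad(bias_atom_e, (0, ntypes - bias_atom_e.shape[0]), "constant")

# Fold each spin pseudo-type bias into its real atom type, then keep only the real types.
for type_i in range(ntypes_atom):
    if use_spin[type_i]:
        bias_atom_e[type_i] += bias_atom_e[type_i + ntypes_atom]
bias_atom_e = bias_atom_e[:ntypes_atom]

print(bias_atom_e)  # -> [1.8 0.7]
```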