docs: document the floating-point precision of the model
Signed-off-by: Jinzhe Zeng <[email protected]>
njzjz committed Oct 22, 2024
1 parent b4701da commit 9f9c3f6
Showing 3 changed files with 17 additions and 0 deletions.
1 change: 1 addition & 0 deletions doc/model/index.rst
@@ -24,3 +24,4 @@ Model
linear
pairtab
change-bias
precision
15 changes: 15 additions & 0 deletions doc/model/precision.md
@@ -0,0 +1,15 @@
# Floating-point precision of the model

The following options control the precision of the model:

- The environment variable {envvar}`DP_INTERFACE_PREC` controls the interface precision of the model, the descriptor, and the fitting; the precision of the environmental matrix; and the precision of the normalization parameters for the environmental matrix and the fitting output (see the sketch after this list).
- The training parameter {ref}`model[standard]/fitting_net[ener]/precision <precision>` controls the precision of the neural networks in the descriptor and the fitting, as well as the operations applied after the network output.
- The reduced output (e.g. total energy) is always `float64`.
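
A minimal sketch of how the environment variable could be set from Python, assuming it is read when the package is initialized (the import ordering below reflects that assumption and is not prescribed by this page):

```python
import os

# DP_INTERFACE_PREC has to be set before DeePMD-kit reads it, i.e. before the
# package is imported here (or exported in the shell before running `dp train`).
# "high" is the default named in this document; any other accepted value is an
# assumption and should be checked against the installed version.
os.environ["DP_INTERFACE_PREC"] = "high"

import deepmd  # noqa: E402  (imported only after the variable is set)
```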

Usually, the following two combinations of options are recommended:

- Setting {envvar}`DP_INTERFACE_PREC` to `high` (default) and all {ref}`model[standard]/fitting_net[ener]/precision <precision>` options to `float64` (default).
- Setting {envvar}`DP_INTERFACE_PREC` to `high` (default) and all {ref}`model[standard]/fitting_net[ener]/precision <precision>` options to `float32` (illustrated in the sketch below).
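
As an illustration of the second combination, the model section of a training input might carry `float32` precision keys while the environment variable keeps its default; the descriptor type and the surrounding keys below are placeholder assumptions, not prescriptions from this page:

```python
# Illustrative fragment of the "model" section of a training input, written as
# a Python dict for brevity; only the "precision" keys are the point here.
# The descriptor type (se_e2_a) and the remaining keys are placeholder assumptions.
model_fragment = {
    "descriptor": {"type": "se_e2_a", "precision": "float32"},
    "fitting_net": {"type": "ener", "precision": "float32"},
}
# DP_INTERFACE_PREC is left at its default ("high"), so the environmental
# matrix, its normalization parameters, and the reduced outputs stay in float64.
```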

The Python and C++ inference interfaces accept both `float64` and `float32` as input and output arguments, regardless of the floating-point precision of the model interface.
MD programs (such as LAMMPS) usually use only `float64` in their interfaces.
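
A sketch of what this means on the Python side, assuming a frozen model file and the `DeepPot` inference class; the file name, system size, and input dtypes below are illustrative:

```python
import numpy as np
from deepmd.infer import DeepPot

# "graph.pb" is a placeholder for a frozen model file.
dp = DeepPot("graph.pb")

coords = np.random.rand(1, 6 * 3).astype(np.float32)  # float32 input is accepted
cells = np.eye(3).reshape(1, 9).astype(np.float64)     # float64 input is accepted too
atom_types = [0, 0, 0, 1, 1, 1]

# The reduced total energy is returned in float64 regardless of the input dtype.
energy, forces, virial = dp.eval(coords, cells, atom_types)
```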
1 change: 1 addition & 0 deletions doc/troubleshooting/precision.md
@@ -60,6 +60,7 @@ See [FAQ: How to tune Fitting/embedding-net size](./howtoset_netsize.md) for det

In some cases, one may want to use the FP32 precision to make the model faster.
For some applications, FP32 is enough and thus is recommended, but one should still be aware that the precision of FP32 is not as high as that of FP64.
See the [Floating-point precision of the model](../model/precision.md) section for how to set the precision.

## Training

