From 5c545f7520de41c341f8a4587cbb2fba0af4c669 Mon Sep 17 00:00:00 2001 From: Jinzhe Zeng Date: Thu, 25 Jan 2024 21:26:23 -0500 Subject: [PATCH] docs: rewrite README; deprecate manually written TOC (#3179) Deprecate per discussion. --------- Signed-off-by: Jinzhe Zeng --- CONTRIBUTING.md | 2 +- README.md | 168 +-- doc/conf.py | 94 -- doc/data/index.md | 9 - doc/freeze/index.md | 4 - doc/getting-started/quick_start.ipynb | 2 +- doc/inference/index.md | 7 - doc/install/index.md | 11 - doc/model/index.md | 20 - doc/test/index.md | 4 - doc/third-party/index.md | 10 - doc/train-input-auto.rst | 1502 ------------------------- doc/train/index.md | 10 - doc/troubleshooting/index.md | 15 - 14 files changed, 46 insertions(+), 1812 deletions(-) delete mode 100644 doc/data/index.md delete mode 100644 doc/freeze/index.md delete mode 100644 doc/inference/index.md delete mode 100644 doc/install/index.md delete mode 100644 doc/model/index.md delete mode 100644 doc/test/index.md delete mode 100644 doc/third-party/index.md delete mode 100644 doc/train-input-auto.rst delete mode 100644 doc/train/index.md delete mode 100644 doc/troubleshooting/index.md diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md index e43e23beb6..f2c28ae59b 100644 --- a/CONTRIBUTING.md +++ b/CONTRIBUTING.md @@ -38,7 +38,7 @@ Currently, we maintain two main branch: - devel : branch for developers ### Developer guide -See [here](doc/development/index.md) for coding conventions, API and other needs-to-know of the code. +See [documentation](https://deepmd.readthedocs.io/) for coding conventions, API and other needs-to-know of the code. ## How to contribute Please perform the following steps to create your Pull Request to this repository. If don't like to use commands, you can also use [GitHub Desktop](https://desktop.github.com/), which is easier to get started. Go to [git documentation](https://git-scm.com/doc) if you want to really master git. 
diff --git a/README.md b/README.md index 27c8dab4bc..e61c18dbcb 100644 --- a/README.md +++ b/README.md @@ -2,8 +2,8 @@ -------------------------------------------------------------------------------- -DeePMD-kit Manual -======== +# DeePMD-kit + [![GitHub release](https://img.shields.io/github/release/deepmodeling/deepmd-kit.svg?maxAge=86400)](https://github.com/deepmodeling/deepmd-kit/releases) [![offline packages](https://img.shields.io/github/downloads/deepmodeling/deepmd-kit/total?label=offline%20packages)](https://github.com/deepmodeling/deepmd-kit/releases) [![conda-forge](https://img.shields.io/conda/dn/conda-forge/deepmd-kit?color=red&label=conda-forge&logo=conda-forge)](https://anaconda.org/conda-forge/deepmd-kit) @@ -11,39 +11,19 @@ [![docker pull](https://img.shields.io/docker/pulls/deepmodeling/deepmd-kit)](https://hub.docker.com/r/deepmodeling/deepmd-kit) [![Documentation Status](https://readthedocs.org/projects/deepmd/badge/)](https://deepmd.readthedocs.io/) -# Table of contents -- [About DeePMD-kit](#about-deepmd-kit) - - [Highlights in v2.0](#highlights-in-deepmd-kit-v2.0) - - [Highlighted features](#highlighted-features) - - [License and credits](#license-and-credits) - - [Deep Potential in a nutshell](#deep-potential-in-a-nutshell) -- [Download and install](#download-and-install) -- [Use DeePMD-kit](#use-deepmd-kit) -- [Code structure](#code-structure) -- [Troubleshooting](#troubleshooting) - -# About DeePMD-kit +## About DeePMD-kit DeePMD-kit is a package written in Python/C++, designed to minimize the effort required to build deep learning-based models of interatomic potential energy and force fields and to perform molecular dynamics (MD). This brings new hope to addressing the accuracy-versus-efficiency dilemma in molecular simulations. Applications of DeePMD-kit span from finite molecules to extended systems and from metallic systems to chemically bonded systems. For more information, check the [documentation](https://deepmd.readthedocs.io/). 
-# Highlights in DeePMD-kit v2.0 -* [Model compression](doc/freeze/compress.md). Accelerate the efficiency of model inference 4-15 times. -* [New descriptors](doc/model/overall.md). Including [`se_e2_r`](doc/model/train-se-e2-r.md) and [`se_e3`](doc/model/train-se-e3.md). -* [Hybridization of descriptors](doc/model/train-hybrid.md). Hybrid descriptor constructed from the concatenation of several descriptors. -* [Atom type embedding](doc/model/train-se-e2-a-tebd.md). Enable atom-type embedding to decline training complexity and refine performance. -* Training and inference of the dipole (vector) and polarizability (matrix). -* Split of training and validation dataset. -* Optimized training on GPUs. - -## Highlighted features -* **interfaced with TensorFlow**, one of the most popular deep learning frameworks, making the training process highly automatic and efficient, in addition, Tensorboard can be used to visualize training procedures. -* **interfaced with high-performance classical MD and quantum (path-integral) MD packages**, i.e., LAMMPS and i-PI, respectively. -* **implements the Deep Potential series models**, which have been successfully applied to finite and extended systems including organic molecules, metals, semiconductors, insulators, etc. +### Highlighted features +* **interfaced with multiple backends**, including TensorFlow and PyTorch, the most popular deep learning frameworks, making the training process highly automatic and efficient. +* **interfaced with high-performance classical MD and quantum (path-integral) MD packages**, including LAMMPS, i-PI, AMBER, CP2K, GROMACS, OpenMM, and ABACUS. +* **implements the Deep Potential series models**, which have been successfully applied to finite and extended systems, including organic molecules, metals, semiconductors, insulators, etc. * **implements MPI and GPU support**, making it highly efficient for high-performance parallel and distributed computing. 
* **highly modularized**, easy to adapt to different descriptors for deep learning-based potential energy models. -## License and credits +### License and credits The project DeePMD-kit is licensed under [GNU LGPLv3.0](./LICENSE). If you use this code in any future publications, please cite the following publications for general purpose: - Han Wang, Linfeng Zhang, Jiequn Han, and Weinan E. "DeePMD-kit: A deep learning package for many-body potential energy representation and molecular dynamics." Computer Physics Communications 228 (2018): 178-184. @@ -55,7 +35,9 @@ If you use this code in any future publications, please cite the following publi In addition, please follow [the bib file](CITATIONS.bib) to cite the methods you used. -## Deep Potential in a nutshell +### Highlights in major versions + +#### Initial version The goal of Deep Potential is to employ deep learning techniques and realize an inter-atomic potential energy model that is general, accurate, computationally efficient and scalable. The key component is to respect the extensive and symmetry-invariant properties of a potential energy model by assigning a local reference frame and a local environment to each atom. Each environment contains a finite number of atoms, whose local coordinates are arranged in a symmetry-preserving way. These local coordinates are then transformed, through a sub-network, to so-called *atomic energy*. Summing up all the atomic energies gives the potential energy of the system. The initial proof of concept is in the [Deep Potential][1] paper, which employed an approach that was devised to train the neural network model with the potential energy only. With typical *ab initio* molecular dynamics (AIMD) datasets this is insufficient to reproduce the trajectories. The Deep Potential Molecular Dynamics ([DeePMD][2]) model overcomes this limitation. 
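The summing-up construction described above is what makes the model extensive: a system made of two non-interacting copies has exactly twice the energy. A toy sketch of the idea (an arbitrary placeholder function stands in for the symmetry-preserving descriptor and per-atom sub-network; this is an illustration, not the DeePMD-kit implementation):

```python
# Toy illustration of the Deep Potential energy construction:
# each atom's local environment is mapped to an "atomic energy",
# and the total energy is the sum over atoms. A simple placeholder
# function stands in for the descriptor + sub-network here.

def atomic_energy(neighbor_distances):
    # Placeholder: any function of the local environment suffices
    # to demonstrate the extensive property.
    return -1.0 + 0.1 * sum(1.0 / d for d in neighbor_distances)

def total_energy(environments):
    # environments[i] is the list of neighbor distances of atom i.
    return sum(atomic_energy(env) for env in environments)

# A small "system" of three atoms, and the same system duplicated
# as two non-interacting copies (each atom keeps its environment):
system = [[1.0, 1.5], [1.0, 2.0], [1.5, 2.0]]
doubled = system + system
assert abs(total_energy(doubled) - 2 * total_energy(system)) < 1e-12
```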
In addition, the learning process in DeePMD improves significantly over the Deep Potential method thanks to the introduction of a flexible family of loss functions. The NN potential constructed in this way reproduces accurately the AIMD trajectories, both classical and quantum (path integral), in extended and finite systems, at a cost that scales linearly with system size and is always several orders of magnitude lower than that of equivalent AIMD simulations. @@ -64,110 +46,48 @@ Although highly efficient, the original Deep Potential model satisfies the exten In addition to building up potential energy models, DeePMD-kit can also be used to build up coarse-grained models. In these models, the quantity that we want to parameterize is the free energy, or the coarse-grained potential, of the coarse-grained particles. See the [DeePCG paper][4] for more details. -See [our latest paper](https://doi.org/10.48550/arXiv.2304.09409) for details of all features. - -# Download and install - -Please follow our [GitHub](https://github.com/deepmodeling/deepmd-kit) webpage to download the [latest released version](https://github.com/deepmodeling/deepmd-kit/tree/master) and [development version](https://github.com/deepmodeling/deepmd-kit/tree/devel). - -DeePMD-kit offers multiple installation methods. It is recommended to use easy methods like [offline packages](doc/install/easy-install.md#offline-packages), [conda](doc/install/easy-install.md#with-conda) and [docker](doc/install/easy-install.md#with-docker). - -One may manually install DeePMD-kit by following the instructions on [installing the Python interface](doc/install/install-from-source.md#install-the-python-interface) and [installing the C++ interface](doc/install/install-from-source.md#install-the-c-interface). The C++ interface is necessary when using DeePMD-kit with LAMMPS, i-PI or GROMACS. - - -# Use DeePMD-kit - -A quick start on using DeePMD-kit can be found [here](doc/getting-started/quick_start.ipynb). 
- -A full [document](doc/train/train-input-auto.rst) on options in the training input script is available. - -# Advanced - -- [Installation](doc/install/index.md) - - [Easy install](doc/install/easy-install.md) - - [Install from source code](doc/install/install-from-source.md) - - [Install from pre-compiled C library](doc/install/install-from-c-library.md) - - [Install LAMMPS](doc/install/install-lammps.md) - - [Install i-PI](doc/install/install-ipi.md) - - [Install GROMACS](doc/install/install-gromacs.md) - - [Building conda packages](doc/install/build-conda.md) - - [Install Node.js interface](doc/install/install-nodejs.md) - - [Easy install the latest development version](doc/install/easy-install-dev.md) -- [Data](doc/data/index.md) - - [System](doc/data/system.md) - - [Formats of a system](doc/data/data-conv.md) - - [Prepare data with dpdata](doc/data/dpdata.md) -- [Model](doc/model/index.md) - - [Overall](doc/model/overall.md) - - [Descriptor `"se_e2_a"`](doc/model/train-se-e2-a.md) - - [Descriptor `"se_e2_r"`](doc/model/train-se-e2-r.md) - - [Descriptor `"se_e3"`](doc/model/train-se-e3.md) - - [Descriptor `"se_atten"`](doc/model/train-se-atten.md) - - [Descriptor `"se_atten_v2"`](doc/model/train-se-atten.md#descriptor-se_atten_v2) - - [Descriptor `"hybrid"`](doc/model/train-hybrid.md) - - [Descriptor `sel`](doc/model/sel.md) - - [Fit energy](doc/model/train-energy.md) - - [Fit spin energy](doc/model/train-energy-spin.md) - - [Fit `tensor` like `Dipole` and `Polarizability`](doc/model/train-fitting-tensor.md) - - [Fit electronic density of states (DOS)](doc/model/train-fitting-dos.md) - - [Train a Deep Potential model using `type embedding` approach](doc/model/train-se-e2-a-tebd.md) - - [Deep potential long-range](doc/model/dplr.md) - - [Deep Potential - Range Correction (DPRc)](doc/model/dprc.md) - - [Linear model](doc/model/linear.md) - - [Interpolation or combination with a pairwise potential](doc/model/pairtab.md) -- [Training](doc/train/index.md) - - 
[Training a model](doc/train/training.md) - - [Advanced options](doc/train/training-advanced.md) - - [Parallel training](doc/train/parallel-training.md) - - [Multi-task training](doc/train/multi-task-training.md) - - [TensorBoard Usage](doc/train/tensorboard.md) - - [Known limitations of using GPUs](doc/train/gpu-limitations.md) - - [Training Parameters](doc/train-input-auto.rst) -- [Freeze and Compress](doc/freeze/index.rst) - - [Freeze a model](doc/freeze/freeze.md) - - [Compress a model](doc/freeze/compress.md) -- [Test](doc/test/index.rst) - - [Test a model](doc/test/test.md) - - [Calculate Model Deviation](doc/test/model-deviation.md) -- [Inference](doc/inference/index.rst) - - [Python interface](doc/inference/python.md) - - [C++ interface](doc/inference/cxx.md) - - [Node.js interface](doc/inference/nodejs.md) - - [Integrate with third-party packages](doc/third-party/index.rst) - - [Use deep potential with dpdata](doc/third-party/dpdata.md) - - [Use deep potential with ASE](doc/third-party/ase.md) - - [Run MD with LAMMPS](doc/third-party/lammps-command.md) - - [Run path-integral MD with i-PI](doc/third-party/ipi.md) - - [Run MD with GROMACS](doc/third-party/gromacs.md) - - [Interfaces out of DeePMD-kit](doc/third-party/out-of-deepmd-kit.md) -- [Use NVNMD](doc/nvnmd/index.md) - -# Code structure +#### v1 + +* Code refactor to make it highly modularized. +* GPU support for descriptors. + +#### v2 + +* Model compression, accelerating model inference by 4-15 times. +* New descriptors, including `se_e2_r`, `se_e3`, and `se_atten` (DPA-1). +* Hybrid descriptors, constructed from the concatenation of several descriptors. +* Atom-type embedding, which reduces training complexity and improves performance. +* Training and inference of the dipole (vector) and polarizability (matrix). +* Split of training and validation datasets. +* Optimized training on GPUs, including CUDA and ROCm. +* Non-von-Neumann molecular dynamics (NVNMD). 
+* C API to interface with third-party packages. + +See [our latest paper](https://doi.org/10.1063/5.0155600) for details of all features up to v2.2.3. + +#### v3 + +* Multiple backends supported, including a new PyTorch backend. +* The DPA-2 model. + +## Install and use DeePMD-kit + +Please read the [online documentation](https://deepmd.readthedocs.io/) for how to install and use DeePMD-kit. + +## Code structure The code is organized as follows: -* `data/raw`: tools manipulating the raw data files. * `examples`: examples. * `deepmd`: DeePMD-kit Python modules. +* `source/lib`: source code of the core library. +* `source/op`: Operator (OP) implementation. * `source/api_cc`: source code of DeePMD-kit C++ API. +* `source/api_c`: source code of the C API. +* `source/nodejs`: source code of the Node.js API. * `source/ipi`: source code of i-PI client. -* `source/lib`: source code of DeePMD-kit library. * `source/lmp`: source code of LAMMPS module. * `source/gmx`: source code of GROMACS plugin. -* `source/op`: TensorFlow op implementation. working with the library. 
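The descriptors and fitting options listed in the version highlights are all selected through the training input script. A minimal sketch of its model section (keys follow the documented `se_e2_a` arguments; the values are purely illustrative, not recommendations):

```json
{
  "model": {
    "type_map": ["O", "H"],
    "descriptor": {
      "type": "se_e2_a",
      "sel": "auto",
      "rcut": 6.0,
      "rcut_smth": 0.5,
      "neuron": [25, 50, 100],
      "axis_neuron": 4
    },
    "fitting_net": {
      "neuron": [240, 240, 240],
      "resnet_dt": true
    }
  }
}
```

Swapping `"type": "se_e2_a"` for another descriptor type (e.g. `se_e2_r` or `se_atten`) switches the model family; see the training parameter reference for each descriptor's arguments.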
- - -# Troubleshooting - -- [Model compatibility](doc/troubleshooting/model_compatability.md) -- [Installation](doc/troubleshooting/installation.md) -- [The temperature undulates violently during the early stages of MD](doc/troubleshooting/md_energy_undulation.md) -- [MD: cannot run LAMMPS after installing a new version of DeePMD-kit](doc/troubleshooting/md_version_compatibility.md) -- [Do we need to set rcut < half boxsize?](doc/troubleshooting/howtoset_rcut.md) -- [How to set sel?](doc/troubleshooting/howtoset_sel.md) -- [How to control the parallelism of a job?](doc/troubleshooting/howtoset_num_nodes.md) -- [How to tune Fitting/embedding-net size?](doc/troubleshooting/howtoset_netsize.md) -- [Why does a model have low precision?](doc/troubleshooting/precision.md) # Contributing diff --git a/doc/conf.py b/doc/conf.py index 8138f82ba4..11803a9e2d 100644 --- a/doc/conf.py +++ b/doc/conf.py @@ -28,96 +28,6 @@ sys.path.append(os.path.dirname(__file__)) import sphinx_contrib_exhale_multiproject # noqa: F401 - -def mkindex(dirname): - dirname = dirname + "/" - oldfindex = open(dirname + "index.md") - oldlist = oldfindex.readlines() - oldfindex.close() - - oldnames = [] - for entry in oldlist: - _name = entry[entry.find("(") + 1 : entry.find(")")] - oldnames.append(_name) - - newfindex = open(dirname + "index.md", "a") - for root, dirs, files in os.walk(dirname, topdown=False): - newnames = [ - name for name in files if "index.md" not in name and name not in oldnames - ] - for name in newnames: - f = open(dirname + name) - _lines = f.readlines() - for _headline in _lines: - _headline = _headline.strip("#") - headline = _headline.strip() - if len(headline) == 0 or headline[0] == "." 
or headline[0] == "=": - continue - else: - break - longname = "- [" + headline + "]" + "(" + name + ")\n" - newfindex.write(longname) - - newfindex.close() - - -def classify_index_TS(): - dirname = "troubleshooting/" - oldfindex = open(dirname + "index.md") - oldlist = oldfindex.readlines() - oldfindex.close() - - oldnames = [] - sub_titles = [] - heads = [] - while len(oldlist) > 0: - entry = oldlist.pop(0) - if entry.find("(") >= 0: - _name = entry[entry.find("(") + 1 : entry.find(")")] - oldnames.append(_name) - continue - if entry.find("##") >= 0: - _name = entry[entry.find("##") + 3 : -1] - sub_titles.append(_name) - continue - entry.strip() - if entry != "\n": - heads.append(entry) - - newfindex = open(dirname + "index.md", "w") - for entry in heads: - newfindex.write(entry) - newfindex.write("\n") - sub_lists = [[], []] - for root, dirs, files in os.walk(dirname, topdown=False): - newnames = [name for name in files if "index.md" not in name] - for name in newnames: - f = open(dirname + name) - _lines = f.readlines() - f.close() - for _headline in _lines: - _headline = _headline.strip("#") - headline = _headline.strip() - if len(headline) == 0 or headline[0] == "." 
or headline[0] == "=": - continue - else: - break - longname = "- [" + headline + "]" + "(" + name + ")\n" - if "howtoset_" in name: - sub_lists[1].append(longname) - else: - sub_lists[0].append(longname) - - newfindex.write("## Trouble shooting\n") - for entry in sub_lists[0]: - newfindex.write(entry) - newfindex.write("\n") - newfindex.write("## Parameters setting\n") - for entry in sub_lists[1]: - newfindex.write(entry) - newfindex.close() - - # -- Project information ----------------------------------------------------- project = "DeePMD-kit" @@ -169,10 +79,6 @@ def setup(app): # 'sphinx.ext.autosummary' # ] -# mkindex("troubleshooting") -# mkindex("development") -# classify_index_TS() - extensions = [ "deepmodeling_sphinx", "dargs.sphinx", diff --git a/doc/data/index.md b/doc/data/index.md deleted file mode 100644 index 838265427b..0000000000 --- a/doc/data/index.md +++ /dev/null @@ -1,9 +0,0 @@ -# Data - -In this section, we will introduce how to convert the DFT-labeled data into the data format used by DeePMD-kit. - -The DeePMD-kit organizes data in `systems`. Each `system` is composed of a number of `frames`. One may roughly view a `frame` as a snapshot of an MD trajectory, but it does not necessarily come from an MD simulation. A `frame` records the coordinates and types of atoms, cell vectors if the periodic boundary condition is assumed, energy, atomic forces and virials. It is noted that the `frames` in one `system` share the same number of atoms with the same type. 
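The system/frame organization described above can be sketched in plain Python (the field names here are illustrative only and are not the on-disk data format, which is described in the data documentation):

```python
# A "system" groups frames that share one atom-type list; each frame
# carries its own coordinates, energy, and forces. Field names are
# illustrative, not DeePMD-kit's on-disk format.

def make_system(atom_types):
    return {"atom_types": atom_types, "frames": []}

def add_frame(system, coords, energy, forces):
    # Every frame must have one coordinate (and force) triple per
    # atom, matching the shared type list of the system.
    n = len(system["atom_types"])
    assert len(coords) == n and len(forces) == n
    system["frames"].append(
        {"coords": coords, "energy": energy, "forces": forces}
    )

water = make_system(["O", "H", "H"])
add_frame(
    water,
    coords=[(0.0, 0.0, 0.0), (0.76, 0.59, 0.0), (-0.76, 0.59, 0.0)],
    energy=-76.4,
    forces=[(0.0, 0.0, 0.0)] * 3,
)
assert len(water["frames"]) == 1
```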
- -- [System](system.md) -- [Formats of a system](data-conv.md) -- [Prepare data with dpdata](dpdata.md) diff --git a/doc/freeze/index.md b/doc/freeze/index.md deleted file mode 100644 index 0bc3664144..0000000000 --- a/doc/freeze/index.md +++ /dev/null @@ -1,4 +0,0 @@ -# Freeze and Compress - -- [Freeze a model](freeze.md) -- [Compress a model](compress.md) diff --git a/doc/getting-started/quick_start.ipynb b/doc/getting-started/quick_start.ipynb index d0f7d8db0b..1c53665b7d 100644 --- a/doc/getting-started/quick_start.ipynb +++ b/doc/getting-started/quick_start.ipynb @@ -239,7 +239,7 @@ "id": "a999f41b-e343-4dc2-8499-84fee6e52221", "metadata": {}, "source": [ - "The DeePMD-kit adopts a compressed data format. All training data should first be converted into this format and can then be used by DeePMD-kit. The data format is explained in detail in the DeePMD-kit manual that can be found in [the DeePMD-kit Data Introduction](../data/index.md)." + "The DeePMD-kit adopts a compressed data format. All training data should first be converted into this format and can then be used by DeePMD-kit. The data format is explained in detail in the DeePMD-kit manual that can be found in [the DeePMD-kit Data Introduction](../data/system.md)." ] }, { diff --git a/doc/inference/index.md b/doc/inference/index.md deleted file mode 100644 index fa0a747eb4..0000000000 --- a/doc/inference/index.md +++ /dev/null @@ -1,7 +0,0 @@ -# Inference - -Note that the model for inference is required to be compatible with the DeePMD-kit package. See [Model compatibility](../troubleshooting/model-compatability.html) for details. 
- -- [Python interface](python.md) -- [C++ interface](cxx.md) -- [Node.js interface](nodejs.md) diff --git a/doc/install/index.md b/doc/install/index.md deleted file mode 100644 index 8428255f5a..0000000000 --- a/doc/install/index.md +++ /dev/null @@ -1,11 +0,0 @@ -# Installation - -- [Easy install](easy-install.md) -- [Install from source code](install-from-source.md) -- [Install from pre-compiled C library](doc/install/install-from-c-library.md) -- [Install LAMMPS](install-lammps.md) -- [Install i-PI](install-ipi.md) -- [Install GROMACS](install-gromacs.md) -- [Building conda packages](build-conda.md) -- [Install Node.js interface](install-nodejs.md) -- [Easy install the latest development version](easy-install-dev.md) diff --git a/doc/model/index.md b/doc/model/index.md deleted file mode 100644 index 589b39b2b5..0000000000 --- a/doc/model/index.md +++ /dev/null @@ -1,20 +0,0 @@ -# Model - -- [Overall](overall.md) -- [Descriptor `"se_e2_a"`](train-se-e2-a.md) -- [Descriptor `"se_e2_r"`](train-se-e2-r.md) -- [Descriptor `"se_e3"`](train-se-e3.md) -- [Descriptor `"se_atten"`](train-se-atten.md) -- [Descriptor `"se_atten_v2"`](train-se-atten.md#descriptor-se_atten_v2) -- [Descriptor `"se_a_mask"`](train-se-a-mask.md) -- [Descriptor `"hybrid"`](train-hybrid.md) -- [Descriptor `sel`](sel.md) -- [Fit energy](train-energy.md) -- [Fit spin energy](train-energy-spin.md) -- [Fit `tensor` like `Dipole` and `Polarizability`](train-fitting-tensor.md) -- [Fit electronic density of states (DOS)](train-fitting-dos.md) -- [Train a Deep Potential model using `type embedding` approach](train-se-e2-a-tebd.md) -- [Deep potential long-range](dplr.md) -- [Deep Potential - Range Correction (DPRc)](dprc.md) -- [Linear model](linear.md) -- [Interpolation or combination with a pairwise potential](pairtab.md) diff --git a/doc/test/index.md b/doc/test/index.md deleted file mode 100644 index 4a502123d9..0000000000 --- a/doc/test/index.md +++ /dev/null @@ -1,4 +0,0 @@ -# Test - -- [Test a 
model](test.md) -- [Calculate Model Deviation](model-deviation.md) diff --git a/doc/third-party/index.md b/doc/third-party/index.md deleted file mode 100644 index 419f1fbb5c..0000000000 --- a/doc/third-party/index.md +++ /dev/null @@ -1,10 +0,0 @@ -# Integrate with third-party packages - -Note that the model for inference is required to be compatible with the DeePMD-kit package. See [Model compatibility](../troubleshooting/model-compatability.html) for details. - -- [Use deep potential with dpdata](dpdata.md) -- [Use deep potential with ASE](ase.md) -- [Run MD with LAMMPS](lammps-command.md) -- [Run path-integral MD with i-PI](ipi.md) -- [Run MD with GROMACS](gromacs.md) -- [Interfaces out of DeePMD-kit](out-of-deepmd-kit.md) diff --git a/doc/train-input-auto.rst b/doc/train-input-auto.rst deleted file mode 100644 index a3b69eade9..0000000000 --- a/doc/train-input-auto.rst +++ /dev/null @@ -1,1502 +0,0 @@ -.. _`model`: - -model: - | type: ``dict`` - | argument path: ``model`` - - .. _`model/type_map`: - - type_map: - | type: ``list``, optional - | argument path: ``model/type_map`` - - A list of strings. Give the name to each type of atoms. It is noted that the number of atom type of training system must be less than 128 in a GPU environment. - - .. _`model/data_stat_nbatch`: - - data_stat_nbatch: - | type: ``int``, optional, default: ``10`` - | argument path: ``model/data_stat_nbatch`` - - The model determines the normalization from the statistics of the data. This key specifies the number of `frames` in each `system` used for statistics. - - .. _`model/data_stat_protect`: - - data_stat_protect: - | type: ``float``, optional, default: ``0.01`` - | argument path: ``model/data_stat_protect`` - - Protect parameter for atomic energy regression. - - .. _`model/use_srtab`: - - use_srtab: - | type: ``str``, optional - | argument path: ``model/use_srtab`` - - The table for the short-range pairwise interaction added on top of DP. 
The table is a text data file with (N_t + 1) * N_t / 2 + 1 columes. The first colume is the distance between atoms. The second to the last columes are energies for pairs of certain types. For example we have two atom types, 0 and 1. The columes from 2nd to 4th are for 0-0, 0-1 and 1-1 correspondingly. - - .. _`model/smin_alpha`: - - smin_alpha: - | type: ``float``, optional - | argument path: ``model/smin_alpha`` - - The short-range tabulated interaction will be swithed according to the distance of the nearest neighbor. This distance is calculated by softmin. This parameter is the decaying parameter in the softmin. It is only required when `use_srtab` is provided. - - .. _`model/sw_rmin`: - - sw_rmin: - | type: ``float``, optional - | argument path: ``model/sw_rmin`` - - The lower boundary of the interpolation between short-range tabulated interaction and DP. It is only required when `use_srtab` is provided. - - .. _`model/sw_rmax`: - - sw_rmax: - | type: ``float``, optional - | argument path: ``model/sw_rmax`` - - The upper boundary of the interpolation between short-range tabulated interaction and DP. It is only required when `use_srtab` is provided. - - .. _`model/type_embedding`: - - type_embedding: - | type: ``dict``, optional - | argument path: ``model/type_embedding`` - - The type embedding. - - .. _`model/type_embedding/neuron`: - - neuron: - | type: ``list``, optional, default: ``[2, 4, 8]`` - | argument path: ``model/type_embedding/neuron`` - - Number of neurons in each hidden layers of the embedding net. When two layers are of the same size or one layer is twice as large as the previous layer, a skip connection is built. - - .. _`model/type_embedding/activation_function`: - - activation_function: - | type: ``str``, optional, default: ``tanh`` - | argument path: ``model/type_embedding/activation_function`` - - The activation function in the embedding net. Supported activation functions are "relu", "relu6", "softplus", "sigmoid", "tanh", "gelu". - - .. 
_`model/type_embedding/resnet_dt`: - - resnet_dt: - | type: ``bool``, optional, default: ``False`` - | argument path: ``model/type_embedding/resnet_dt`` - - Whether to use a "Timestep" in the skip connection - - .. _`model/type_embedding/precision`: - - precision: - | type: ``str``, optional, default: ``float64`` - | argument path: ``model/type_embedding/precision`` - - The precision of the embedding net parameters, supported options are "default", "float16", "float32", "float64". - - .. _`model/type_embedding/trainable`: - - trainable: - | type: ``bool``, optional, default: ``True`` - | argument path: ``model/type_embedding/trainable`` - - If the parameters in the embedding net are trainable - - .. _`model/type_embedding/seed`: - - seed: - | type: ``int`` | ``NoneType``, optional - | argument path: ``model/type_embedding/seed`` - - Random seed for parameter initialization - - .. _`model/descriptor`: - - descriptor: - | type: ``dict`` - | argument path: ``model/descriptor`` - - The descriptor of atomic environment. - - - Depending on the value of *type*, different sub args are accepted. - - .. _`model/descriptor/type`: - - type: - | type: ``str`` (flag key) - | argument path: ``model/descriptor/type`` - | possible choices: |code:model/descriptor[loc_frame]|_, |code:model/descriptor[se_e2_a]|_, |code:model/descriptor[se_e2_r]|_, |code:model/descriptor[se_e3]|_, |code:model/descriptor[se_a_tpe]|_, |code:model/descriptor[hybrid]|_ - - The type of the descritpor. See explanation below. - - - `loc_frame`: Defines a local frame at each atom, and the compute the descriptor as local coordinates under this frame. - - - `se_e2_a`: Used by the smooth edition of Deep Potential. The full relative coordinates are used to construct the descriptor. - - - `se_e2_r`: Used by the smooth edition of Deep Potential. Only the distance between atoms is used to construct the descriptor. - - - `se_e3`: Used by the smooth edition of Deep Potential. 
The full relative coordinates are used to construct the descriptor. Three-body embedding will be used by this descriptor. - - - `se_a_tpe`: Used by the smooth edition of Deep Potential. The full relative coordinates are used to construct the descriptor. Type embedding will be used by this descriptor. - - - `hybrid`: Concatenate of a list of descriptors as a new descriptor. - - .. |code:model/descriptor[loc_frame]| replace:: ``loc_frame`` - .. _`code:model/descriptor[loc_frame]`: `model/descriptor[loc_frame]`_ - .. |code:model/descriptor[se_e2_a]| replace:: ``se_e2_a`` - .. _`code:model/descriptor[se_e2_a]`: `model/descriptor[se_e2_a]`_ - .. |code:model/descriptor[se_e2_r]| replace:: ``se_e2_r`` - .. _`code:model/descriptor[se_e2_r]`: `model/descriptor[se_e2_r]`_ - .. |code:model/descriptor[se_e3]| replace:: ``se_e3`` - .. _`code:model/descriptor[se_e3]`: `model/descriptor[se_e3]`_ - .. |code:model/descriptor[se_a_tpe]| replace:: ``se_a_tpe`` - .. _`code:model/descriptor[se_a_tpe]`: `model/descriptor[se_a_tpe]`_ - .. |code:model/descriptor[hybrid]| replace:: ``hybrid`` - .. _`code:model/descriptor[hybrid]`: `model/descriptor[hybrid]`_ - - .. |flag:model/descriptor/type| replace:: *type* - .. _`flag:model/descriptor/type`: `model/descriptor/type`_ - - - .. _`model/descriptor[loc_frame]`: - - When |flag:model/descriptor/type|_ is set to ``loc_frame``: - - .. _`model/descriptor[loc_frame]/sel_a`: - - sel_a: - | type: ``list`` - | argument path: ``model/descriptor[loc_frame]/sel_a`` - - A list of integers. The length of the list should be the same as the number of atom types in the system. `sel_a[i]` gives the selected number of type-i neighbors. The full relative coordinates of the neighbors are used by the descriptor. - - .. _`model/descriptor[loc_frame]/sel_r`: - - sel_r: - | type: ``list`` - | argument path: ``model/descriptor[loc_frame]/sel_r`` - - A list of integers. The length of the list should be the same as the number of atom types in the system. 
`sel_r[i]` gives the selected number of type-i neighbors. Only relative distance of the neighbors are used by the descriptor. sel_a[i] + sel_r[i] is recommended to be larger than the maximally possible number of type-i neighbors in the cut-off radius. - - .. _`model/descriptor[loc_frame]/rcut`: - - rcut: - | type: ``float``, optional, default: ``6.0`` - | argument path: ``model/descriptor[loc_frame]/rcut`` - - The cut-off radius. The default value is 6.0 - - .. _`model/descriptor[loc_frame]/axis_rule`: - - axis_rule: - | type: ``list`` - | argument path: ``model/descriptor[loc_frame]/axis_rule`` - - A list of integers. The length should be 6 times of the number of types. - - - axis_rule[i*6+0]: class of the atom defining the first axis of type-i atom. 0 for neighbors with full coordinates and 1 for neighbors only with relative distance. - - - axis_rule[i*6+1]: type of the atom defining the first axis of type-i atom. - - - axis_rule[i*6+2]: index of the axis atom defining the first axis. Note that the neighbors with the same class and type are sorted according to their relative distance. - - - axis_rule[i*6+3]: class of the atom defining the first axis of type-i atom. 0 for neighbors with full coordinates and 1 for neighbors only with relative distance. - - - axis_rule[i*6+4]: type of the atom defining the second axis of type-i atom. - - - axis_rule[i*6+5]: class of the atom defining the second axis of type-i atom. 0 for neighbors with full coordinates and 1 for neighbors only with relative distance. - - - .. _`model/descriptor[se_e2_a]`: - - When |flag:model/descriptor/type|_ is set to ``se_e2_a`` (or its alias ``se_a``): - - .. _`model/descriptor[se_e2_a]/sel`: - - sel: - | type: ``list`` | ``str``, optional, default: ``auto`` - | argument path: ``model/descriptor[se_e2_a]/sel`` - - This parameter set the number of selected neighbors for each type of atom. It can be: - - - `List[int]`. 
The length of the list should be the same as the number of atom types in the system. `sel[i]` gives the selected number of type-i neighbors. `sel[i]` is recommended to be larger than the maximum possible number of type-i neighbors in the cut-off radius. It is noted that the total sel value must be less than 4096 in a GPU environment. - - - `str`. Can be "auto:factor" or "auto". "factor" is a float number larger than 1. This option will automatically determine the `sel`. In detail, it counts the maximal number of neighbors within the cutoff radius for each type of neighbor, then multiplies the maximum by the "factor". Finally, the number is rounded up to a multiple of 4. The option "auto" is equivalent to "auto:1.1". - - .. _`model/descriptor[se_e2_a]/rcut`: - - rcut: - | type: ``float``, optional, default: ``6.0`` - | argument path: ``model/descriptor[se_e2_a]/rcut`` - - The cut-off radius. - - .. _`model/descriptor[se_e2_a]/rcut_smth`: - - rcut_smth: - | type: ``float``, optional, default: ``0.5`` - | argument path: ``model/descriptor[se_e2_a]/rcut_smth`` - - Where to start smoothing. For example, the 1/r term is smoothed from `rcut` to `rcut_smth`. - - .. _`model/descriptor[se_e2_a]/neuron`: - - neuron: - | type: ``list``, optional, default: ``[10, 20, 40]`` - | argument path: ``model/descriptor[se_e2_a]/neuron`` - - Number of neurons in each hidden layer of the embedding net. When two layers are of the same size or one layer is twice as large as the previous layer, a skip connection is built. - - .. _`model/descriptor[se_e2_a]/axis_neuron`: - - axis_neuron: - | type: ``int``, optional, default: ``4``, alias: *n_axis_neuron* - | argument path: ``model/descriptor[se_e2_a]/axis_neuron`` - - Size of the submatrix of G (embedding matrix). - - .. 
_`model/descriptor[se_e2_a]/activation_function`: - - activation_function: - | type: ``str``, optional, default: ``tanh`` - | argument path: ``model/descriptor[se_e2_a]/activation_function`` - - The activation function in the embedding net. Supported activation functions are "relu", "relu6", "softplus", "sigmoid", "tanh", "gelu". - - .. _`model/descriptor[se_e2_a]/resnet_dt`: - - resnet_dt: - | type: ``bool``, optional, default: ``False`` - | argument path: ``model/descriptor[se_e2_a]/resnet_dt`` - - Whether to use a "Timestep" in the skip connection - - .. _`model/descriptor[se_e2_a]/type_one_side`: - - type_one_side: - | type: ``bool``, optional, default: ``False`` - | argument path: ``model/descriptor[se_e2_a]/type_one_side`` - - Try to build N_types embedding nets. Otherwise, building N_types^2 embedding nets - - .. _`model/descriptor[se_e2_a]/precision`: - - precision: - | type: ``str``, optional, default: ``float64`` - | argument path: ``model/descriptor[se_e2_a]/precision`` - - The precision of the embedding net parameters, supported options are "default", "float16", "float32", "float64". - - .. _`model/descriptor[se_e2_a]/trainable`: - - trainable: - | type: ``bool``, optional, default: ``True`` - | argument path: ``model/descriptor[se_e2_a]/trainable`` - - If the parameters in the embedding net is trainable - - .. _`model/descriptor[se_e2_a]/seed`: - - seed: - | type: ``int`` | ``NoneType``, optional - | argument path: ``model/descriptor[se_e2_a]/seed`` - - Random seed for parameter initialization - - .. _`model/descriptor[se_e2_a]/exclude_types`: - - exclude_types: - | type: ``list``, optional, default: ``[]`` - | argument path: ``model/descriptor[se_e2_a]/exclude_types`` - - The excluded pairs of types which have no interaction with each other. For example, `[[0, 1]]` means no interaction between type 0 and type 1. - - .. 
_`model/descriptor[se_e2_a]/set_davg_zero`: - - set_davg_zero: - | type: ``bool``, optional, default: ``False`` - | argument path: ``model/descriptor[se_e2_a]/set_davg_zero`` - - Set the normalization average to zero. This option should be set when `atom_ener` in the energy fitting is used. - - - .. _`model/descriptor[se_e2_r]`: - - When |flag:model/descriptor/type|_ is set to ``se_e2_r`` (or its alias ``se_r``): - - .. _`model/descriptor[se_e2_r]/sel`: - - sel: - | type: ``list`` | ``str``, optional, default: ``auto`` - | argument path: ``model/descriptor[se_e2_r]/sel`` - - This parameter sets the number of selected neighbors for each type of atom. It can be: - - - `List[int]`. The length of the list should be the same as the number of atom types in the system. `sel[i]` gives the selected number of type-i neighbors. `sel[i]` is recommended to be larger than the maximum possible number of type-i neighbors in the cut-off radius. It is noted that the total sel value must be less than 4096 in a GPU environment. - - - `str`. Can be "auto:factor" or "auto". "factor" is a float number larger than 1. This option will automatically determine the `sel`. In detail, it counts the maximal number of neighbors within the cutoff radius for each type of neighbor, then multiplies the maximum by the "factor". Finally, the number is rounded up to a multiple of 4. The option "auto" is equivalent to "auto:1.1". - - .. _`model/descriptor[se_e2_r]/rcut`: - - rcut: - | type: ``float``, optional, default: ``6.0`` - | argument path: ``model/descriptor[se_e2_r]/rcut`` - - The cut-off radius. - - .. _`model/descriptor[se_e2_r]/rcut_smth`: - - rcut_smth: - | type: ``float``, optional, default: ``0.5`` - | argument path: ``model/descriptor[se_e2_r]/rcut_smth`` - - Where to start smoothing. For example, the 1/r term is smoothed from `rcut` to `rcut_smth`. - - .. 
_`model/descriptor[se_e2_r]/neuron`: - - neuron: - | type: ``list``, optional, default: ``[10, 20, 40]`` - | argument path: ``model/descriptor[se_e2_r]/neuron`` - - Number of neurons in each hidden layers of the embedding net. When two layers are of the same size or one layer is twice as large as the previous layer, a skip connection is built. - - .. _`model/descriptor[se_e2_r]/activation_function`: - - activation_function: - | type: ``str``, optional, default: ``tanh`` - | argument path: ``model/descriptor[se_e2_r]/activation_function`` - - The activation function in the embedding net. Supported activation functions are "relu", "relu6", "softplus", "sigmoid", "tanh", "gelu". - - .. _`model/descriptor[se_e2_r]/resnet_dt`: - - resnet_dt: - | type: ``bool``, optional, default: ``False`` - | argument path: ``model/descriptor[se_e2_r]/resnet_dt`` - - Whether to use a "Timestep" in the skip connection - - .. _`model/descriptor[se_e2_r]/type_one_side`: - - type_one_side: - | type: ``bool``, optional, default: ``False`` - | argument path: ``model/descriptor[se_e2_r]/type_one_side`` - - Try to build N_types embedding nets. Otherwise, building N_types^2 embedding nets - - .. _`model/descriptor[se_e2_r]/precision`: - - precision: - | type: ``str``, optional, default: ``float64`` - | argument path: ``model/descriptor[se_e2_r]/precision`` - - The precision of the embedding net parameters, supported options are "default", "float16", "float32", "float64". - - .. _`model/descriptor[se_e2_r]/trainable`: - - trainable: - | type: ``bool``, optional, default: ``True`` - | argument path: ``model/descriptor[se_e2_r]/trainable`` - - If the parameters in the embedding net are trainable - - .. _`model/descriptor[se_e2_r]/seed`: - - seed: - | type: ``int`` | ``NoneType``, optional - | argument path: ``model/descriptor[se_e2_r]/seed`` - - Random seed for parameter initialization - - .. 
_`model/descriptor[se_e2_r]/exclude_types`: - - exclude_types: - | type: ``list``, optional, default: ``[]`` - | argument path: ``model/descriptor[se_e2_r]/exclude_types`` - - The excluded pairs of types which have no interaction with each other. For example, `[[0, 1]]` means no interaction between type 0 and type 1. - - .. _`model/descriptor[se_e2_r]/set_davg_zero`: - - set_davg_zero: - | type: ``bool``, optional, default: ``False`` - | argument path: ``model/descriptor[se_e2_r]/set_davg_zero`` - - Set the normalization average to zero. This option should be set when `atom_ener` in the energy fitting is used. - - - .. _`model/descriptor[se_e3]`: - - When |flag:model/descriptor/type|_ is set to ``se_e3`` (or its aliases ``se_at``, ``se_a_3be``, ``se_t``): - - .. _`model/descriptor[se_e3]/sel`: - - sel: - | type: ``list`` | ``str``, optional, default: ``auto`` - | argument path: ``model/descriptor[se_e3]/sel`` - - This parameter sets the number of selected neighbors for each type of atom. It can be: - - - `List[int]`. The length of the list should be the same as the number of atom types in the system. `sel[i]` gives the selected number of type-i neighbors. `sel[i]` is recommended to be larger than the maximum possible number of type-i neighbors in the cut-off radius. It is noted that the total sel value must be less than 4096 in a GPU environment. - - - `str`. Can be "auto:factor" or "auto". "factor" is a float number larger than 1. This option will automatically determine the `sel`. In detail, it counts the maximal number of neighbors within the cutoff radius for each type of neighbor, then multiplies the maximum by the "factor". Finally, the number is rounded up to a multiple of 4. The option "auto" is equivalent to "auto:1.1". - - .. _`model/descriptor[se_e3]/rcut`: - - rcut: - | type: ``float``, optional, default: ``6.0`` - | argument path: ``model/descriptor[se_e3]/rcut`` - - The cut-off radius. - - .. 
_`model/descriptor[se_e3]/rcut_smth`: - - rcut_smth: - | type: ``float``, optional, default: ``0.5`` - | argument path: ``model/descriptor[se_e3]/rcut_smth`` - - Where to start smoothing. For example the 1/r term is smoothed from `rcut` to `rcut_smth` - - .. _`model/descriptor[se_e3]/neuron`: - - neuron: - | type: ``list``, optional, default: ``[10, 20, 40]`` - | argument path: ``model/descriptor[se_e3]/neuron`` - - Number of neurons in each hidden layers of the embedding net. When two layers are of the same size or one layer is twice as large as the previous layer, a skip connection is built. - - .. _`model/descriptor[se_e3]/activation_function`: - - activation_function: - | type: ``str``, optional, default: ``tanh`` - | argument path: ``model/descriptor[se_e3]/activation_function`` - - The activation function in the embedding net. Supported activation functions are "relu", "relu6", "softplus", "sigmoid", "tanh", "gelu". - - .. _`model/descriptor[se_e3]/resnet_dt`: - - resnet_dt: - | type: ``bool``, optional, default: ``False`` - | argument path: ``model/descriptor[se_e3]/resnet_dt`` - - Whether to use a "Timestep" in the skip connection - - .. _`model/descriptor[se_e3]/precision`: - - precision: - | type: ``str``, optional, default: ``float64`` - | argument path: ``model/descriptor[se_e3]/precision`` - - The precision of the embedding net parameters, supported options are "default", "float16", "float32", "float64". - - .. _`model/descriptor[se_e3]/trainable`: - - trainable: - | type: ``bool``, optional, default: ``True`` - | argument path: ``model/descriptor[se_e3]/trainable`` - - If the parameters in the embedding net are trainable - - .. _`model/descriptor[se_e3]/seed`: - - seed: - | type: ``int`` | ``NoneType``, optional - | argument path: ``model/descriptor[se_e3]/seed`` - - Random seed for parameter initialization - - .. 
_`model/descriptor[se_e3]/set_davg_zero`: - - set_davg_zero: - | type: ``bool``, optional, default: ``False`` - | argument path: ``model/descriptor[se_e3]/set_davg_zero`` - - Set the normalization average to zero. This option should be set when `atom_ener` in the energy fitting is used. - - - .. _`model/descriptor[se_a_tpe]`: - - When |flag:model/descriptor/type|_ is set to ``se_a_tpe`` (or its alias ``se_a_ebd``): - - .. _`model/descriptor[se_a_tpe]/sel`: - - sel: - | type: ``list`` | ``str``, optional, default: ``auto`` - | argument path: ``model/descriptor[se_a_tpe]/sel`` - - This parameter sets the number of selected neighbors for each type of atom. It can be: - - - `List[int]`. The length of the list should be the same as the number of atom types in the system. `sel[i]` gives the selected number of type-i neighbors. `sel[i]` is recommended to be larger than the maximum possible number of type-i neighbors in the cut-off radius. It is noted that the total sel value must be less than 4096 in a GPU environment. - - - `str`. Can be "auto:factor" or "auto". "factor" is a float number larger than 1. This option will automatically determine the `sel`. In detail, it counts the maximal number of neighbors within the cutoff radius for each type of neighbor, then multiplies the maximum by the "factor". Finally, the number is rounded up to a multiple of 4. The option "auto" is equivalent to "auto:1.1". - - .. _`model/descriptor[se_a_tpe]/rcut`: - - rcut: - | type: ``float``, optional, default: ``6.0`` - | argument path: ``model/descriptor[se_a_tpe]/rcut`` - - The cut-off radius. - - .. _`model/descriptor[se_a_tpe]/rcut_smth`: - - rcut_smth: - | type: ``float``, optional, default: ``0.5`` - | argument path: ``model/descriptor[se_a_tpe]/rcut_smth`` - - Where to start smoothing. For example, the 1/r term is smoothed from `rcut` to `rcut_smth`. - - .. 
_`model/descriptor[se_a_tpe]/neuron`: - - neuron: - | type: ``list``, optional, default: ``[10, 20, 40]`` - | argument path: ``model/descriptor[se_a_tpe]/neuron`` - - Number of neurons in each hidden layers of the embedding net. When two layers are of the same size or one layer is twice as large as the previous layer, a skip connection is built. - - .. _`model/descriptor[se_a_tpe]/axis_neuron`: - - axis_neuron: - | type: ``int``, optional, default: ``4``, alias: *n_axis_neuron* - | argument path: ``model/descriptor[se_a_tpe]/axis_neuron`` - - Size of the submatrix of G (embedding matrix). - - .. _`model/descriptor[se_a_tpe]/activation_function`: - - activation_function: - | type: ``str``, optional, default: ``tanh`` - | argument path: ``model/descriptor[se_a_tpe]/activation_function`` - - The activation function in the embedding net. Supported activation functions are "relu", "relu6", "softplus", "sigmoid", "tanh", "gelu". - - .. _`model/descriptor[se_a_tpe]/resnet_dt`: - - resnet_dt: - | type: ``bool``, optional, default: ``False`` - | argument path: ``model/descriptor[se_a_tpe]/resnet_dt`` - - Whether to use a "Timestep" in the skip connection - - .. _`model/descriptor[se_a_tpe]/type_one_side`: - - type_one_side: - | type: ``bool``, optional, default: ``False`` - | argument path: ``model/descriptor[se_a_tpe]/type_one_side`` - - Try to build N_types embedding nets. Otherwise, building N_types^2 embedding nets - - .. _`model/descriptor[se_a_tpe]/precision`: - - precision: - | type: ``str``, optional, default: ``float64`` - | argument path: ``model/descriptor[se_a_tpe]/precision`` - - The precision of the embedding net parameters, supported options are "default", "float16", "float32", "float64". - - .. _`model/descriptor[se_a_tpe]/trainable`: - - trainable: - | type: ``bool``, optional, default: ``True`` - | argument path: ``model/descriptor[se_a_tpe]/trainable`` - - If the parameters in the embedding net is trainable - - .. 
_`model/descriptor[se_a_tpe]/seed`: - - seed: - | type: ``int`` | ``NoneType``, optional - | argument path: ``model/descriptor[se_a_tpe]/seed`` - - Random seed for parameter initialization - - .. _`model/descriptor[se_a_tpe]/exclude_types`: - - exclude_types: - | type: ``list``, optional, default: ``[]`` - | argument path: ``model/descriptor[se_a_tpe]/exclude_types`` - - The excluded pairs of types which have no interaction with each other. For example, `[[0, 1]]` means no interaction between type 0 and type 1. - - .. _`model/descriptor[se_a_tpe]/set_davg_zero`: - - set_davg_zero: - | type: ``bool``, optional, default: ``False`` - | argument path: ``model/descriptor[se_a_tpe]/set_davg_zero`` - - Set the normalization average to zero. This option should be set when `atom_ener` in the energy fitting is used - - .. _`model/descriptor[se_a_tpe]/type_nchanl`: - - type_nchanl: - | type: ``int``, optional, default: ``4`` - | argument path: ``model/descriptor[se_a_tpe]/type_nchanl`` - - number of channels for type embedding - - .. _`model/descriptor[se_a_tpe]/type_nlayer`: - - type_nlayer: - | type: ``int``, optional, default: ``2`` - | argument path: ``model/descriptor[se_a_tpe]/type_nlayer`` - - number of hidden layers of type embedding net - - .. _`model/descriptor[se_a_tpe]/numb_aparam`: - - numb_aparam: - | type: ``int``, optional, default: ``0`` - | argument path: ``model/descriptor[se_a_tpe]/numb_aparam`` - - dimension of atomic parameter. if set to a value > 0, the atomic parameters are embedded. - - - .. _`model/descriptor[hybrid]`: - - When |flag:model/descriptor/type|_ is set to ``hybrid``: - - .. _`model/descriptor[hybrid]/list`: - - list: - | type: ``list`` - | argument path: ``model/descriptor[hybrid]/list`` - - A list of descriptor definitions - - .. _`model/fitting_net`: - - fitting_net: - | type: ``dict`` - | argument path: ``model/fitting_net`` - - The fitting of physical properties. - - - Depending on the value of *type*, different sub args are accepted. 
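The "auto:factor" rule for `sel` documented above (count the per-type maximal neighbor number within the cutoff, scale it by the factor, then round up to a multiple of 4) can be sketched as follows. `auto_sel` is a hypothetical helper for illustration, not the DeePMD-kit implementation:

```python
import math


def auto_sel(max_nbor_counts, factor=1.1):
    """Sketch of the "auto:factor" rule for `sel` (illustrative only):
    scale each per-type maximal neighbor count by `factor`, then round
    the result up to a multiple of 4."""
    return [int(math.ceil(n * factor / 4.0)) * 4 for n in max_nbor_counts]


# "auto" is equivalent to "auto:1.1": with maximal per-type neighbor
# counts of, say, 46 and 92, the selected numbers become 52 and 104.
print(auto_sel([46, 92]))
```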
- - .. _`model/fitting_net/type`: - - type: - | type: ``str`` (flag key), default: ``ener`` - | argument path: ``model/fitting_net/type`` - | possible choices: |code:model/fitting_net[ener]|_, |code:model/fitting_net[dipole]|_, |code:model/fitting_net[polar]|_ - - The type of the fitting. See explanation below. - - - `ener`: Fit an energy model (potential energy surface). - - - `dipole`: Fit an atomic dipole model. Global dipole labels or atomic dipole labels for all the selected atoms (see `sel_type`) should be provided by `dipole.npy` in each data system. The file has either (number of frames) lines and (3 times the number of selected atoms) columns, or (number of frames) lines and 3 columns. See the `loss` parameter. - - - `polar`: Fit an atomic polarizability model. Global polarizability labels or atomic polarizability labels for all the selected atoms (see `sel_type`) should be provided by `polarizability.npy` in each data system. The file has either (number of frames) lines and (9 times the number of selected atoms) columns, or (number of frames) lines and 9 columns. See the `loss` parameter. - - - - .. |code:model/fitting_net[ener]| replace:: ``ener`` - .. _`code:model/fitting_net[ener]`: `model/fitting_net[ener]`_ - .. |code:model/fitting_net[dipole]| replace:: ``dipole`` - .. _`code:model/fitting_net[dipole]`: `model/fitting_net[dipole]`_ - .. |code:model/fitting_net[polar]| replace:: ``polar`` - .. _`code:model/fitting_net[polar]`: `model/fitting_net[polar]`_ - - .. |flag:model/fitting_net/type| replace:: *type* - .. _`flag:model/fitting_net/type`: `model/fitting_net/type`_ - - - .. _`model/fitting_net[ener]`: - - When |flag:model/fitting_net/type|_ is set to ``ener``: - - .. _`model/fitting_net[ener]/numb_fparam`: - - numb_fparam: - | type: ``int``, optional, default: ``0`` - | argument path: ``model/fitting_net[ener]/numb_fparam`` - - The dimension of the frame parameter. If set to >0, file `fparam.npy` should be included to provide the input fparams. - - .. 
_`model/fitting_net[ener]/numb_aparam`: - - numb_aparam: - | type: ``int``, optional, default: ``0`` - | argument path: ``model/fitting_net[ener]/numb_aparam`` - - The dimension of the atomic parameter. If set to >0, file `aparam.npy` should be included to provide the input aparams. - - .. _`model/fitting_net[ener]/neuron`: - - neuron: - | type: ``list``, optional, default: ``[120, 120, 120]``, alias: *n_neuron* - | argument path: ``model/fitting_net[ener]/neuron`` - - The number of neurons in each hidden layer of the fitting net. When two hidden layers are of the same size, a skip connection is built. - - .. _`model/fitting_net[ener]/activation_function`: - - activation_function: - | type: ``str``, optional, default: ``tanh`` - | argument path: ``model/fitting_net[ener]/activation_function`` - - The activation function in the fitting net. Supported activation functions are "relu", "relu6", "softplus", "sigmoid", "tanh", "gelu". - - .. _`model/fitting_net[ener]/precision`: - - precision: - | type: ``str``, optional, default: ``float64`` - | argument path: ``model/fitting_net[ener]/precision`` - - The precision of the fitting net parameters, supported options are "default", "float16", "float32", "float64". - - .. _`model/fitting_net[ener]/resnet_dt`: - - resnet_dt: - | type: ``bool``, optional, default: ``True`` - | argument path: ``model/fitting_net[ener]/resnet_dt`` - - Whether to use a "Timestep" in the skip connection - - .. _`model/fitting_net[ener]/trainable`: - - trainable: - | type: ``list`` | ``bool``, optional, default: ``True`` - | argument path: ``model/fitting_net[ener]/trainable`` - - Whether the parameters in the fitting net are trainable. This option can be: - - - bool: True if all parameters of the fitting net are trainable, False otherwise. - - - list of bool: Specifies if each layer is trainable. Since the fitting net is composed of hidden layers followed by an output layer, the length of this list should be equal to len(`neuron`)+1. - - .. 
_`model/fitting_net[ener]/rcond`: - - rcond: - | type: ``float``, optional, default: ``0.001`` - | argument path: ``model/fitting_net[ener]/rcond`` - - The condition number used to determine the inital energy shift for each type of atoms. - - .. _`model/fitting_net[ener]/seed`: - - seed: - | type: ``int`` | ``NoneType``, optional - | argument path: ``model/fitting_net[ener]/seed`` - - Random seed for parameter initialization of the fitting net - - .. _`model/fitting_net[ener]/atom_ener`: - - atom_ener: - | type: ``list``, optional, default: ``[]`` - | argument path: ``model/fitting_net[ener]/atom_ener`` - - Specify the atomic energy in vacuum for each type - - - .. _`model/fitting_net[dipole]`: - - When |flag:model/fitting_net/type|_ is set to ``dipole``: - - .. _`model/fitting_net[dipole]/neuron`: - - neuron: - | type: ``list``, optional, default: ``[120, 120, 120]``, alias: *n_neuron* - | argument path: ``model/fitting_net[dipole]/neuron`` - - The number of neurons in each hidden layers of the fitting net. When two hidden layers are of the same size, a skip connection is built. - - .. _`model/fitting_net[dipole]/activation_function`: - - activation_function: - | type: ``str``, optional, default: ``tanh`` - | argument path: ``model/fitting_net[dipole]/activation_function`` - - The activation function in the fitting net. Supported activation functions are "relu", "relu6", "softplus", "sigmoid", "tanh", "gelu". - - .. _`model/fitting_net[dipole]/resnet_dt`: - - resnet_dt: - | type: ``bool``, optional, default: ``True`` - | argument path: ``model/fitting_net[dipole]/resnet_dt`` - - Whether to use a "Timestep" in the skip connection - - .. _`model/fitting_net[dipole]/precision`: - - precision: - | type: ``str``, optional, default: ``float64`` - | argument path: ``model/fitting_net[dipole]/precision`` - - The precision of the fitting net parameters, supported options are "default", "float16", "float32", "float64". - - .. 
_`model/fitting_net[dipole]/sel_type`: - - sel_type: - | type: ``list`` | ``int`` | ``NoneType``, optional, alias: *dipole_type* - | argument path: ``model/fitting_net[dipole]/sel_type`` - - The atom types for which the atomic dipole will be provided. If not set, all types will be selected. - - .. _`model/fitting_net[dipole]/seed`: - - seed: - | type: ``int`` | ``NoneType``, optional - | argument path: ``model/fitting_net[dipole]/seed`` - - Random seed for parameter initialization of the fitting net - - - .. _`model/fitting_net[polar]`: - - When |flag:model/fitting_net/type|_ is set to ``polar``: - - .. _`model/fitting_net[polar]/neuron`: - - neuron: - | type: ``list``, optional, default: ``[120, 120, 120]``, alias: *n_neuron* - | argument path: ``model/fitting_net[polar]/neuron`` - - The number of neurons in each hidden layers of the fitting net. When two hidden layers are of the same size, a skip connection is built. - - .. _`model/fitting_net[polar]/activation_function`: - - activation_function: - | type: ``str``, optional, default: ``tanh`` - | argument path: ``model/fitting_net[polar]/activation_function`` - - The activation function in the fitting net. Supported activation functions are "relu", "relu6", "softplus", "sigmoid", "tanh", "gelu". - - .. _`model/fitting_net[polar]/resnet_dt`: - - resnet_dt: - | type: ``bool``, optional, default: ``True`` - | argument path: ``model/fitting_net[polar]/resnet_dt`` - - Whether to use a "Timestep" in the skip connection - - .. _`model/fitting_net[polar]/precision`: - - precision: - | type: ``str``, optional, default: ``float64`` - | argument path: ``model/fitting_net[polar]/precision`` - - The precision of the fitting net parameters, supported options are "default", "float16", "float32", "float64". - - .. 
_`model/fitting_net[polar]/fit_diag`: - - fit_diag: - | type: ``bool``, optional, default: ``True`` - | argument path: ``model/fitting_net[polar]/fit_diag`` - - Fit the diagonal part of the rotational invariant polarizability matrix, which will be converted to normal polarizability matrix by contracting with the rotation matrix. - - .. _`model/fitting_net[polar]/scale`: - - scale: - | type: ``float`` | ``list``, optional, default: ``1.0`` - | argument path: ``model/fitting_net[polar]/scale`` - - The output of the fitting net (polarizability matrix) will be scaled by ``scale`` - - .. _`model/fitting_net[polar]/shift_diag`: - - shift_diag: - | type: ``bool``, optional, default: ``True`` - | argument path: ``model/fitting_net[polar]/shift_diag`` - - Whether to shift the diagonal of polar, which is beneficial to training. Default is true. - - .. _`model/fitting_net[polar]/sel_type`: - - sel_type: - | type: ``list`` | ``int`` | ``NoneType``, optional, alias: *pol_type* - | argument path: ``model/fitting_net[polar]/sel_type`` - - The atom types for which the atomic polarizability will be provided. If not set, all types will be selected. - - .. _`model/fitting_net[polar]/seed`: - - seed: - | type: ``int`` | ``NoneType``, optional - | argument path: ``model/fitting_net[polar]/seed`` - - Random seed for parameter initialization of the fitting net - - .. _`model/modifier`: - - modifier: - | type: ``dict``, optional - | argument path: ``model/modifier`` - - The modifier of model output. - - - Depending on the value of *type*, different sub args are accepted. - - .. _`model/modifier/type`: - - type: - | type: ``str`` (flag key) - | argument path: ``model/modifier/type`` - | possible choices: |code:model/modifier[dipole_charge]|_ - - The type of modifier. See explanation below. - - -`dipole_charge`: Use WFCC to model the electronic structure of the system. Correct the long-range interaction - - .. |code:model/modifier[dipole_charge]| replace:: ``dipole_charge`` - .. 
_`code:model/modifier[dipole_charge]`: `model/modifier[dipole_charge]`_ - - .. |flag:model/modifier/type| replace:: *type* - .. _`flag:model/modifier/type`: `model/modifier/type`_ - - - .. _`model/modifier[dipole_charge]`: - - When |flag:model/modifier/type|_ is set to ``dipole_charge``: - - .. _`model/modifier[dipole_charge]/model_name`: - - model_name: - | type: ``str`` - | argument path: ``model/modifier[dipole_charge]/model_name`` - - The name of the frozen dipole model file. - - .. _`model/modifier[dipole_charge]/model_charge_map`: - - model_charge_map: - | type: ``list`` - | argument path: ``model/modifier[dipole_charge]/model_charge_map`` - - The charge of the WFCC. The list length should be the same as the `sel_type `_. - - .. _`model/modifier[dipole_charge]/sys_charge_map`: - - sys_charge_map: - | type: ``list`` - | argument path: ``model/modifier[dipole_charge]/sys_charge_map`` - - The charge of real atoms. The list length should be the same as the `type_map `_ - - .. _`model/modifier[dipole_charge]/ewald_beta`: - - ewald_beta: - | type: ``float``, optional, default: ``0.4`` - | argument path: ``model/modifier[dipole_charge]/ewald_beta`` - - The splitting parameter of Ewald sum. Unit is A^-1 - - .. _`model/modifier[dipole_charge]/ewald_h`: - - ewald_h: - | type: ``float``, optional, default: ``1.0`` - | argument path: ``model/modifier[dipole_charge]/ewald_h`` - - The grid spacing of the FFT grid. Unit is A - - .. _`model/compress`: - - compress: - | type: ``dict``, optional - | argument path: ``model/compress`` - - Model compression configurations - - - Depending on the value of *type*, different sub args are accepted. - - .. _`model/compress/type`: - - type: - | type: ``str`` (flag key), default: ``se_e2_a`` - | argument path: ``model/compress/type`` - | possible choices: |code:model/compress[se_e2_a]|_ - - The type of model compression, which should be consistent with the descriptor type. - - .. |code:model/compress[se_e2_a]| replace:: ``se_e2_a`` - .. 
_`code:model/compress[se_e2_a]`: `model/compress[se_e2_a]`_ - - .. |flag:model/compress/type| replace:: *type* - .. _`flag:model/compress/type`: `model/compress/type`_ - - - .. _`model/compress[se_e2_a]`: - - When |flag:model/compress/type|_ is set to ``se_e2_a`` (or its alias ``se_a``): - - .. _`model/compress[se_e2_a]/compress`: - - compress: - | type: ``bool`` - | argument path: ``model/compress[se_e2_a]/compress`` - - The name of the frozen model file. - - .. _`model/compress[se_e2_a]/model_file`: - - model_file: - | type: ``str`` - | argument path: ``model/compress[se_e2_a]/model_file`` - - The input model file, which will be compressed by the DeePMD-kit. - - .. _`model/compress[se_e2_a]/table_config`: - - table_config: - | type: ``list`` - | argument path: ``model/compress[se_e2_a]/table_config`` - - The arguments of model compression, including extrapolate(scale of model extrapolation), stride(uniform stride of tabulation's first and second table), and frequency(frequency of tabulation overflow check). - - .. _`model/compress[se_e2_a]/min_nbor_dist`: - - min_nbor_dist: - | type: ``float`` - | argument path: ``model/compress[se_e2_a]/min_nbor_dist`` - - The nearest distance between neighbor atoms saved in the frozen model. - - -.. _`loss`: - -loss: - | type: ``dict``, optional - | argument path: ``loss`` - - The definition of loss function. The loss type should be set to `tensor`, `ener` or left unset. - \. - - - Depending on the value of *type*, different sub args are accepted. - - .. _`loss/type`: - - type: - | type: ``str`` (flag key), default: ``ener`` - | argument path: ``loss/type`` - | possible choices: |code:loss[ener]|_, |code:loss[tensor]|_ - - The type of the loss. When the fitting type is `ener`, the loss type should be set to `ener` or left unset. When the fitting type is `dipole` or `polar`, the loss type should be set to `tensor`. - \. - - .. |code:loss[ener]| replace:: ``ener`` - .. _`code:loss[ener]`: `loss[ener]`_ - .. 
|code:loss[tensor]| replace:: ``tensor`` - .. _`code:loss[tensor]`: `loss[tensor]`_ - - .. |flag:loss/type| replace:: *type* - .. _`flag:loss/type`: `loss/type`_ - - - .. _`loss[ener]`: - - When |flag:loss/type|_ is set to ``ener``: - - .. _`loss[ener]/start_pref_e`: - - start_pref_e: - | type: ``float`` | ``int``, optional, default: ``0.02`` - | argument path: ``loss[ener]/start_pref_e`` - - The prefactor of energy loss at the start of the training. Should be larger than or equal to 0. If set to a non-zero value, the energy label should be provided by file energy.npy in each data system. If both start_pref_energy and limit_pref_energy are set to 0, then the energy will be ignored. - - .. _`loss[ener]/limit_pref_e`: - - limit_pref_e: - | type: ``float`` | ``int``, optional, default: ``1.0`` - | argument path: ``loss[ener]/limit_pref_e`` - - The prefactor of energy loss at the limit of the training (i.e. as the training step goes to infinity). Should be larger than or equal to 0. - - .. _`loss[ener]/start_pref_f`: - - start_pref_f: - | type: ``float`` | ``int``, optional, default: ``1000`` - | argument path: ``loss[ener]/start_pref_f`` - - The prefactor of force loss at the start of the training. Should be larger than or equal to 0. If set to a non-zero value, the force label should be provided by file force.npy in each data system. If both start_pref_force and limit_pref_force are set to 0, then the force will be ignored. - - .. _`loss[ener]/limit_pref_f`: - - limit_pref_f: - | type: ``float`` | ``int``, optional, default: ``1.0`` - | argument path: ``loss[ener]/limit_pref_f`` - - The prefactor of force loss at the limit of the training (i.e. as the training step goes to infinity). Should be larger than or equal to 0. - - .. _`loss[ener]/start_pref_v`: - - start_pref_v: - | type: ``float`` | ``int``, optional, default: ``0.0`` - | argument path: ``loss[ener]/start_pref_v`` - - The prefactor of virial loss at the start of the training. Should be larger than or equal to 0. 
If set to a non-zero value, the virial label should be provided by the file virial.npy in each data system. If both start_pref_virial and limit_pref_virial are set to 0, then the virial will be ignored. - - .. _`loss[ener]/limit_pref_v`: - - limit_pref_v: - | type: ``float`` | ``int``, optional, default: ``0.0`` - | argument path: ``loss[ener]/limit_pref_v`` - - The prefactor of virial loss at the limit of the training, i.e. as the training step goes to infinity. Should be larger than or equal to 0. - - .. _`loss[ener]/start_pref_ae`: - - start_pref_ae: - | type: ``float`` | ``int``, optional, default: ``0.0`` - | argument path: ``loss[ener]/start_pref_ae`` - - The prefactor of atom_ener loss at the start of the training. Should be larger than or equal to 0. If set to a non-zero value, the atom_ener label should be provided by the file atom_ener.npy in each data system. If both start_pref_atom_ener and limit_pref_atom_ener are set to 0, then the atom_ener will be ignored. - - .. _`loss[ener]/limit_pref_ae`: - - limit_pref_ae: - | type: ``float`` | ``int``, optional, default: ``0.0`` - | argument path: ``loss[ener]/limit_pref_ae`` - - The prefactor of atom_ener loss at the limit of the training, i.e. as the training step goes to infinity. Should be larger than or equal to 0. - - .. _`loss[ener]/relative_f`: - - relative_f: - | type: ``float`` | ``NoneType``, optional - | argument path: ``loss[ener]/relative_f`` - - If provided, the relative force error will be used in the loss. The difference of force will be normalized by the magnitude of the force in the label with a shift given by `relative_f`, i.e. DF_i / ( || F || + relative_f ), with DF denoting the difference between prediction and label and || F || denoting the L2 norm of the label. - - - .. _`loss[tensor]`: - - When |flag:loss/type|_ is set to ``tensor``: - - .. _`loss[tensor]/pref`: - - pref: - | type: ``float`` | ``int`` - | argument path: ``loss[tensor]/pref`` - - The prefactor of the weight of global loss. 
It should be larger than or equal to 0. It controls the weight of loss corresponding to the global label, i.e. `polarizability.npy` or `dipole.npy`, whose shape should be #frames x [9 or 3]. If it's larger than 0.0, this npy should be included. - - .. _`loss[tensor]/pref_atomic`: - - pref_atomic: - | type: ``float`` | ``int`` - | argument path: ``loss[tensor]/pref_atomic`` - - The prefactor of the weight of atomic loss. It should be larger than or equal to 0. It controls the weight of loss corresponding to the atomic label, i.e. `atomic_polarizability.npy` or `atomic_dipole.npy`, whose shape should be #frames x ([9 or 3] x #selected atoms). If it's larger than 0.0, this npy should be included. Both `pref` and `pref_atomic` should be provided, and either can be set to 0.0. - - -.. _`learning_rate`: - -learning_rate: - | type: ``dict`` - | argument path: ``learning_rate`` - - The definition of the learning rate. - - - Depending on the value of *type*, different sub args are accepted. - - .. _`learning_rate/type`: - - type: - | type: ``str`` (flag key), default: ``exp`` - | argument path: ``learning_rate/type`` - | possible choices: |code:learning_rate[exp]|_ - - The type of the learning rate. - - .. |code:learning_rate[exp]| replace:: ``exp`` - .. _`code:learning_rate[exp]`: `learning_rate[exp]`_ - - .. |flag:learning_rate/type| replace:: *type* - .. _`flag:learning_rate/type`: `learning_rate/type`_ - - - .. _`learning_rate[exp]`: - - When |flag:learning_rate/type|_ is set to ``exp``: - - .. _`learning_rate[exp]/start_lr`: - - start_lr: - | type: ``float``, optional, default: ``0.001`` - | argument path: ``learning_rate[exp]/start_lr`` - - The learning rate at the start of the training. - - .. _`learning_rate[exp]/stop_lr`: - - stop_lr: - | type: ``float``, optional, default: ``1e-08`` - | argument path: ``learning_rate[exp]/stop_lr`` - - The desired learning rate at the end of the training. - - .. 
_`learning_rate[exp]/decay_steps`: - - decay_steps: - | type: ``int``, optional, default: ``5000`` - | argument path: ``learning_rate[exp]/decay_steps`` - - The learning rate decays every this number of training steps. - - -.. _`training`: - -training: - | type: ``dict`` - | argument path: ``training`` - - The training options. - - .. _`training/training_data`: - - training_data: - | type: ``dict`` - | argument path: ``training/training_data`` - - Configurations of training data. - - .. _`training/training_data/systems`: - - systems: - | type: ``list`` | ``str`` - | argument path: ``training/training_data/systems`` - - The data systems for training. This key can be provided with a list that specifies the systems, or with a string giving the prefix of all systems, by which the list of the systems is automatically generated. - - .. _`training/training_data/set_prefix`: - - set_prefix: - | type: ``str``, optional, default: ``set`` - | argument path: ``training/training_data/set_prefix`` - - The prefix of the sets in the `systems`. - - .. _`training/training_data/batch_size`: - - batch_size: - | type: ``list`` | ``int`` | ``str``, optional, default: ``auto`` - | argument path: ``training/training_data/batch_size`` - - This key can be - - - list: the length of which is the same as the `systems`. The batch size of each system is given by the elements of the list. - - - int: all `systems` use the same batch size. - - - string "auto": automatically determines the batch size so that the batch_size times the number of atoms in the system is no less than 32. - - - string "auto:N": automatically determines the batch size so that the batch_size times the number of atoms in the system is no less than N. - - .. 
_`training/training_data/auto_prob`: - - auto_prob: - | type: ``str``, optional, default: ``prob_sys_size``, alias: *auto_prob_style* - | argument path: ``training/training_data/auto_prob`` - - Determine the probability of systems automatically. The method is assigned by this key and can be - - - "prob_uniform" : the probabilities of all the systems are equal, namely 1.0/self.get_nsystems() - - - "prob_sys_size" : the probability of a system is proportional to the number of batches in the system - - - "prob_sys_size;stt_idx:end_idx:weight;stt_idx:end_idx:weight;..." : the list of systems is divided into blocks. A block is specified by `stt_idx:end_idx:weight`, where `stt_idx` is the starting index of the system, `end_idx` is the ending (excluded) index of the system, the probabilities of the systems in this block sum up to `weight`, and the relative probabilities within this block are proportional to the number of batches in the system. - - .. _`training/training_data/sys_probs`: - - sys_probs: - | type: ``list`` | ``NoneType``, optional, default: ``None``, alias: *sys_weights* - | argument path: ``training/training_data/sys_probs`` - - A list of floats, if specified. Should be of the same length as `systems`, specifying the probability of each system. - - .. _`training/validation_data`: - - validation_data: - | type: ``dict`` | ``NoneType``, optional, default: ``None`` - | argument path: ``training/validation_data`` - - Configurations of validation data. Similar to that of training data, except that a `numb_btch` argument may be configured. - - .. _`training/validation_data/systems`: - - systems: - | type: ``list`` | ``str`` - | argument path: ``training/validation_data/systems`` - - The data systems for validation. This key can be provided with a list that specifies the systems, or with a string giving the prefix of all systems, by which the list of the systems is automatically generated. - - .. 
_`training/validation_data/set_prefix`: - - set_prefix: - | type: ``str``, optional, default: ``set`` - | argument path: ``training/validation_data/set_prefix`` - - The prefix of the sets in the `systems`. - - .. _`training/validation_data/batch_size`: - - batch_size: - | type: ``list`` | ``int`` | ``str``, optional, default: ``auto`` - | argument path: ``training/validation_data/batch_size`` - - This key can be - - - list: the length of which is the same as the `systems`. The batch size of each system is given by the elements of the list. - - - int: all `systems` use the same batch size. - - - string "auto": automatically determines the batch size so that the batch_size times the number of atoms in the system is no less than 32. - - - string "auto:N": automatically determines the batch size so that the batch_size times the number of atoms in the system is no less than N. - - .. _`training/validation_data/auto_prob`: - - auto_prob: - | type: ``str``, optional, default: ``prob_sys_size``, alias: *auto_prob_style* - | argument path: ``training/validation_data/auto_prob`` - - Determine the probability of systems automatically. The method is assigned by this key and can be - - - "prob_uniform" : the probabilities of all the systems are equal, namely 1.0/self.get_nsystems() - - - "prob_sys_size" : the probability of a system is proportional to the number of batches in the system - - - "prob_sys_size;stt_idx:end_idx:weight;stt_idx:end_idx:weight;..." : the list of systems is divided into blocks. A block is specified by `stt_idx:end_idx:weight`, where `stt_idx` is the starting index of the system, `end_idx` is the ending (excluded) index of the system, the probabilities of the systems in this block sum up to `weight`, and the relative probabilities within this block are proportional to the number of batches in the system. - - .. 
_`training/validation_data/sys_probs`: - - sys_probs: - | type: ``list`` | ``NoneType``, optional, default: ``None``, alias: *sys_weights* - | argument path: ``training/validation_data/sys_probs`` - - A list of floats, if specified. Should be of the same length as `systems`, specifying the probability of each system. - - .. _`training/validation_data/numb_btch`: - - numb_btch: - | type: ``int``, optional, default: ``1``, alias: *numb_batch* - | argument path: ``training/validation_data/numb_btch`` - - An integer that specifies the number of batches to be sampled for each validation period. - - .. _`training/numb_steps`: - - numb_steps: - | type: ``int``, alias: *stop_batch* - | argument path: ``training/numb_steps`` - - Number of training batches. Each training step uses one batch of data. - - .. _`training/seed`: - - seed: - | type: ``int`` | ``NoneType``, optional - | argument path: ``training/seed`` - - The random seed for getting frames from the training data set. - - .. _`training/disp_file`: - - disp_file: - | type: ``str``, optional, default: ``lcurve.out`` - | argument path: ``training/disp_file`` - - The file for printing the learning curve. - - .. _`training/disp_freq`: - - disp_freq: - | type: ``int``, optional, default: ``1000`` - | argument path: ``training/disp_freq`` - - The frequency of printing the learning curve. - - .. _`training/numb_test`: - - numb_test: - | type: ``list`` | ``int`` | ``str``, optional, default: ``1`` - | argument path: ``training/numb_test`` - - Number of frames used for the test during training. - - .. _`training/save_freq`: - - save_freq: - | type: ``int``, optional, default: ``1000`` - | argument path: ``training/save_freq`` - - The frequency of saving checkpoints. - - .. _`training/save_ckpt`: - - save_ckpt: - | type: ``str``, optional, default: ``model.ckpt`` - | argument path: ``training/save_ckpt`` - - The file name for saving checkpoints. - - .. 
_`training/disp_training`: - - disp_training: - | type: ``bool``, optional, default: ``True`` - | argument path: ``training/disp_training`` - - Displaying verbose information during training. - - .. _`training/time_training`: - - time_training: - | type: ``bool``, optional, default: ``True`` - | argument path: ``training/time_training`` - - Timing during training. - - .. _`training/profiling`: - - profiling: - | type: ``bool``, optional, default: ``False`` - | argument path: ``training/profiling`` - - Profiling during training. - - .. _`training/profiling_file`: - - profiling_file: - | type: ``str``, optional, default: ``timeline.json`` - | argument path: ``training/profiling_file`` - - Output file for profiling. - - .. _`training/tensorboard`: - - tensorboard: - | type: ``bool``, optional, default: ``False`` - | argument path: ``training/tensorboard`` - - Enable TensorBoard. - - .. _`training/tensorboard_log_dir`: - - tensorboard_log_dir: - | type: ``str``, optional, default: ``log`` - | argument path: ``training/tensorboard_log_dir`` - - The log directory of TensorBoard outputs. - - .. _`training/tensorboard_freq`: - - tensorboard_freq: - | type: ``int``, optional, default: ``1`` - | argument path: ``training/tensorboard_freq`` - - The frequency of writing TensorBoard events. 
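The `learning_rate[exp]` parameters deleted above (`start_lr`, `stop_lr`, `decay_steps`) describe a staircase exponential schedule, and the `start_pref_*`/`limit_pref_*` pairs describe loss prefactors that move between two values over training. A minimal sketch of that behavior follows; it is an illustration, not DeePMD-kit's implementation. The `stop_steps` argument is an assumed stand-in for `training/numb_steps`, and the prefactor interpolation formula is an assumption based on the start/limit semantics described in the reference.

```python
def exp_decay_lr(step, start_lr=0.001, stop_lr=1e-8, decay_steps=5000,
                 stop_steps=1_000_000):
    """Staircase exponential schedule implied by start_lr/stop_lr/decay_steps.

    The per-interval decay rate is chosen so the learning rate reaches
    stop_lr at stop_steps (assumed stand-in for training/numb_steps).
    """
    decay_rate = (stop_lr / start_lr) ** (decay_steps / stop_steps)
    # The rate drops once every decay_steps steps (staircase behavior).
    return start_lr * decay_rate ** (step // decay_steps)


def loss_pref(step, start_pref, limit_pref, **schedule):
    """Assumed interpolation of a loss prefactor (e.g. start_pref_e ->
    limit_pref_e): starts at start_pref and approaches limit_pref as the
    learning rate decays toward stop_lr."""
    start_lr = schedule.get("start_lr", 0.001)
    lr = exp_decay_lr(step, **schedule)
    return limit_pref + (start_pref - limit_pref) * lr / start_lr
```

With the defaults above, the energy prefactor would move from 0.02 at step 0 toward 1.0 at the end of training, shifting the loss from force-dominated to more balanced.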
diff --git a/doc/train/index.md b/doc/train/index.md deleted file mode 100644 index f37c1a55ce..0000000000 --- a/doc/train/index.md +++ /dev/null @@ -1,10 +0,0 @@ -# Training - -- [Training a model](training.md) -- [Advanced options](training-advanced.md) -- [Parallel training](parallel-training.md) -- [multi-task training](multi-task-training.md) -- [TensorBoard Usage](tensorboard.md) -- [Known limitations of using GPUs](gpu-limitations.md) -- [Training Parameters](../train-input-auto.rst) -- [Finetuning the Pretrained Model](finetuning.md) diff --git a/doc/troubleshooting/index.md b/doc/troubleshooting/index.md deleted file mode 100644 index a77d058811..0000000000 --- a/doc/troubleshooting/index.md +++ /dev/null @@ -1,15 +0,0 @@ -# FAQs - -As a consequence of differences in computers or systems, problems may occur. Some common circumstances are listed as follows. -In addition, some frequently asked questions are listed as follows. -If other unexpected problems occur, you’re welcome to contact us for help. - -- [Model compatibility](model-compatability.md) -- [Installation](installation.md) -- [The temperature undulates violently during the early stages of MD](md-energy-undulation.md) -- [MD: cannot run LAMMPS after installing a new version of DeePMD-kit](md-version-compatibility.md) -- [Do we need to set rcut < half boxsize?](howtoset-rcut.md) -- [How to set sel?](howtoset-sel.md) -- [How to control the parallelism of a job?](howtoset_num_nodes.md) -- [How to tune Fitting/embedding-net size?](howtoset_netsize.md) -- [Why does a model have low precision?](precision.md)
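For reference, the `batch_size: "auto"` / `"auto:N"` rule documented in the deleted `train-input-auto.rst` above (pick the smallest batch size such that batch_size times the number of atoms is no less than N, with N defaulting to 32) can be sketched as follows. This is an illustrative re-implementation, not DeePMD-kit's code; the function name is hypothetical.

```python
def auto_batch_size(natoms, rule="auto"):
    """Sketch of batch_size "auto"/"auto:N": the smallest batch size such
    that batch_size * natoms is no less than N (N defaults to 32)."""
    n = int(rule.split(":", 1)[1]) if ":" in rule else 32
    return -(-n // natoms)  # ceiling division; 1 when natoms >= n
```

For example, a 10-atom system would get batch size 4 under `"auto"`, since 4 x 10 >= 32 while 3 x 10 < 32.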