Releases: choderalab/modelforge
v0.1.4
This release changes the dataset TOML file structure, allowing users to modify which properties are loaded from the HDF5 data files and how those properties are used by the software. It also addresses several other issues (e.g., changing the matplotlib backend and the tagging of loss calculations in wandb). These changes are all covered by a single PR: #327
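As a rough illustration of what a property-aware dataset configuration could look like, here is a hypothetical sketch; the key names and layout below are assumptions for illustration only, not the actual modelforge schema (see PR #327 for the real structure):

```toml
# Hypothetical sketch only -- key names are illustrative, not the
# modelforge schema. The idea: the TOML file names the HDF5 source,
# lists which properties to load, and states how each is used.
[dataset]
hdf5_file = "qm9.hdf5"            # assumed key: path to the HDF5 datafile

[dataset.properties]
# assumed keys: HDF5 property name -> role in training
atomic_numbers = "input"
geometry       = "input"
energy         = "target"
forces         = "target"
```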
v0.1.3
v0.1.2
This release provides some significant changes, including a refactoring of neighbor lists, integration with OpenMM, addition of the tmQM dataset, and updates to the AimNet2 architecture.
Note: checkpoint files and state_dict files have changed since the merging of PR #299 (information about whether to use unique pairs is now part of these files). How to load/convert these into the newer format is covered in the documentation (https://modelforge.readthedocs.io/en/latest/inference.html#load-inference-potential-from-training-checkpoint).
A mostly complete list of the main PRs since the last release:
- #322 -- Allows profilers to be toggled via control file
- #319 -- Adds tmQM dataset
- #316 -- Optimize "calculate_radial_contributions" function to reduce GPU memory usage
- #311 -- Adds in functionality to load checkpoint files directly from wandb
- #308 -- Adds epoch time logging
- #307 -- Update to AIMNet2 architecture for radial and vector embedding
- #304 -- Fix a bug in center of mass shifting (required for dipole moment calculation)
- #302 -- Adds regression plots and error histograms
- #301 -- Add back unit checking to the HDF5 dataset loader, including unit conversion
- #300 -- Make dimensions consistent for predicted properties
- #299 -- OpenMM integration, including substantial refactoring of how neighbor lists are handled for inference.
- #296 -- Adds additional learning rate schedulers beyond step function reduction.
- #295 -- Change NNPInput structure to allow writing TorchScript models
- #294 -- Check ANI2x against the original implementation
- #289 -- Log the gradient norm of loss component and model parameters
- #288 -- Separate training and inference setup in NNP factory
- #287 -- Change input structure from named tuple to class with slots
- #285 -- Add function to generate inference model from training checkpoint file
- #283 -- Rename variables in NNP factory class
- #278 -- Refactor tests to work with pytest-xdist for parallel test execution
- #275 -- Refactor training neighbor lists
- #268 -- Optimize the training routine
- #263 -- Add function to visualize compute graph of models
- #259 -- Formulate PaiNN interactions in a clearer message-passing way
- #257 -- Refactor inference neighbor lists
v0.1.1
This release provides several improvements and bug fixes.
Major bug fixes:
- Related to calculating losses when training with forces: #239, #240, #243
- PhysNet interaction module: #236
Notable additions:
- AimNet2 added to the available NNPs: #253
- Additional PhAlkEthOH dataset versions, including a version that removes high-energy configurations: #245
- Support to enable multiple cutoffs for models: #238
- Routines for handling long-range electrostatics (following the PhysNet approach): #235
- Charge conservation scheme: #234
v0.1.0
This is the initial release of the modelforge package.
This provides support for training several different neural network potentials, including SchNet and ANI2x (invariant architectures) and PaiNN, PhysNet, TensorNet, and SAKE (equivariant architectures), using several curated datasets (QM9, ANI1x, ANI2x, SPICE 1, SPICE 1 openff, SPICE 2, and PhAlkEthOH openff).