Commit

rkingsbury committed Jun 16, 2024
Merge commit 519c17e (parents: db83d94 + 6d771fd)
Showing 215 changed files with 23,072 additions and 26,909 deletions.
4 changes: 2 additions & 2 deletions .github/release.yml
@@ -2,6 +2,8 @@ changelog:
exclude:
authors: [dependabot, github-actions, pre-commit-ci]
categories:
- title: 💥 Breaking Changes
labels: [breaking]
- title: 🎉 New Features
labels: [feature]
- title: 🐛 Bug Fixes
@@ -20,8 +22,6 @@ changelog:
labels: [refactor]
- title: 🧪 Tests
labels: [tests]
- title: 💥 Breaking Changes
labels: [breaking]
- title: 🔒 Security Fixes
labels: [security]
- title: 🏥 Package Health
43 changes: 23 additions & 20 deletions .github/workflows/test.yml
@@ -25,22 +25,29 @@ jobs:
strategy:
fail-fast: false
matrix:
# pytest-split automatically distributes work load so parallel jobs finish in similar time
os: [ubuntu-latest, windows-latest]
python-version: ["3.9", "3.12"]
split: [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
# include/exclude is meant to maximize CI coverage of different platforms and python
# versions while minimizing the total number of jobs. We run all pytest splits with the
# oldest supported python version (currently 3.9) on windows (seems most likely to surface
# errors) and with newest version (currently 3.12) on ubuntu (to get complete and speedy
# coverage on unix). We ignore mac-os, which is assumed to be similar to ubuntu.
exclude:
# maximize CI coverage of different platforms and python versions while minimizing the
# total number of jobs. We run all pytest splits with the oldest supported python
# version (currently 3.9) on windows (seems most likely to surface errors) and with
# newest version (currently 3.12) on ubuntu (to get complete coverage on unix).
config:
- os: windows-latest
python-version: "3.12"
python: "3.9"
resolution: highest
extras: ci,optional
- os: ubuntu-latest
python-version: "3.9"
python: "3.12"
resolution: lowest-direct
extras: ci,optional
- os: macos-latest
python: "3.10"
resolution: lowest-direct
extras: ci # test with only required dependencies installed

runs-on: ${{ matrix.os }}
# pytest-split automatically distributes work load so parallel jobs finish in similar time
# update durations file with `pytest --store-durations --durations-path tests/files/.pytest-split-durations`
split: [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]

runs-on: ${{ matrix.config.os }}

env:
PMG_MAPI_KEY: ${{ secrets.PMG_MAPI_KEY }}
@@ -55,13 +62,13 @@ jobs:

- name: Create mamba environment
run: |
micromamba create -n pmg python=${{ matrix.python-version }} --yes
micromamba create -n pmg python=${{ matrix.config.python }} --yes
- name: Install uv
run: micromamba run -n pmg pip install uv

- name: Install ubuntu-only conda dependencies
if: matrix.os == 'ubuntu-latest'
if: matrix.config.os == 'ubuntu-latest'
run: |
micromamba install -n pmg -c conda-forge enumlib packmol bader openbabel openff-toolkit --yes
@@ -73,11 +80,7 @@
pip install torch
uv pip install numpy cython
uv pip install --editable '.[dev,optional]'
# TODO remove next line installing ase from main branch when FrechetCellFilter is released
uv pip install --upgrade 'git+https://gitlab.com/ase/ase'
uv pip install --editable '.[${{ matrix.config.extras }}]' --resolution=${{ matrix.config.resolution }}
- name: pytest split ${{ matrix.split }}
run: |
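The workflow comment above notes that pytest-split distributes tests by recorded durations so parallel jobs finish in similar time. A minimal sketch of that general idea (greedy longest-job-first scheduling) follows; this is not pytest-split's actual implementation, and the function and test names are illustrative.

```python
# Sketch of duration-based test splitting (greedy longest-job-first scheduling).
# NOT pytest-split's actual code; names here are illustrative only.
def split_tests(durations, n_splits):
    """Assign each test to the currently lightest split (durations: name -> seconds)."""
    splits = [{"tests": [], "total": 0.0} for _ in range(n_splits)]
    # Place the longest tests first so a big test never lands on an already-full split.
    for name, secs in sorted(durations.items(), key=lambda kv: -kv[1]):
        lightest = min(splits, key=lambda s: s["total"])
        lightest["tests"].append(name)
        lightest["total"] += secs
    return [s["tests"] for s in splits]


groups = split_tests({"test_a": 30.0, "test_b": 20.0, "test_c": 10.0, "test_d": 10.0}, 2)
```

With ten splits, as in the matrix above, the same balancing keeps each CI job's wall-clock time roughly equal, provided the stored durations file stays up to date.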
8 changes: 4 additions & 4 deletions .pre-commit-config.yaml
@@ -8,7 +8,7 @@ ci:

repos:
- repo: https://github.com/astral-sh/ruff-pre-commit
rev: v0.4.3
rev: v0.4.8
hooks:
- id: ruff
args: [--fix, --unsafe-fixes]
@@ -27,7 +27,7 @@ repos:
- id: mypy

- repo: https://github.com/codespell-project/codespell
rev: v2.2.6
rev: v2.3.0
hooks:
- id: codespell
stages: [commit, commit-msg]
@@ -47,7 +47,7 @@ repos:
- id: blacken-docs

- repo: https://github.com/igorshubovych/markdownlint-cli
rev: v0.40.0
rev: v0.41.0
hooks:
- id: markdownlint
# MD013: line too long
@@ -64,6 +64,6 @@
args: [--drop-empty-cells, --keep-output]

- repo: https://github.com/RobertCraigie/pyright-python
rev: v1.1.361
rev: v1.1.366
hooks:
- id: pyright
11 changes: 4 additions & 7 deletions dev_scripts/regen_libxcfunc.py
@@ -2,7 +2,7 @@
"""
This script regenerates the enum values in pymatgen.core.libxc_func.py.
It requires in input the path of the `libxc_docs.txt` file contained in libxc/src
The script parses this file, creates a new json file inside pymatgen.core
The script parses this file, creates a new JSON file inside pymatgen.core
and update the enum values declared in LibxcFunc.
The script must be executed inside pymatgen/dev_scripts.
"""
@@ -16,10 +16,7 @@


def parse_libxc_docs(path):
"""
Parse libxc_docs.txt file, return dictionary with mapping:
libxc_id --> info_dict.
"""
"""Parse libxc_docs.txt file, return dictionary {libxc_id: info_dict}."""

def parse_section(section):
dct = {}
@@ -46,7 +43,7 @@ def parse_section(section):


def write_libxc_docs_json(xc_funcs, json_path):
"""Write json file with libxc metadata to path jpath."""
"""Write JSON file with libxc metadata to path jpath."""
xc_funcs = deepcopy(xc_funcs)

# Remove XC_FAMILY from Family and XC_ from Kind to make strings more human-readable.
@@ -85,7 +82,7 @@ def main():

xc_funcs = parse_libxc_docs(path)

# Generate new json file in pycore
# Generate new JSON file in pycore
pmg_core = os.path.abspath("../pymatgen/core/")
json_path = f"{pmg_core}/libxc_docs.json"
write_libxc_docs_json(xc_funcs, json_path)
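The diff above mentions that `write_libxc_docs_json` strips the redundant `XC_FAMILY` and `XC_` prefixes before dumping JSON, to make the strings more human-readable. A hedged sketch of that kind of cleanup follows; the field names, prefix spellings, and dict layout are assumptions for illustration, not pymatgen's actual schema.

```python
import json

# Illustrative cleanup in the spirit of write_libxc_docs_json: strip the
# redundant XC_FAMILY_/XC_ prefixes, then dump the metadata as JSON.
# Field names and prefixes are assumptions, not pymatgen's real schema.
def clean_libxc_entry(entry):
    """Return a copy of one libxc info dict with prefixes removed."""
    entry = dict(entry)
    entry["Family"] = entry["Family"].removeprefix("XC_FAMILY_")
    entry["Kind"] = entry["Kind"].removeprefix("XC_")
    return entry


def write_docs_json(xc_funcs, json_path):
    """Write the cleaned {libxc_id: info_dict} mapping to json_path."""
    cleaned = {xc_id: clean_libxc_entry(info) for xc_id, info in xc_funcs.items()}
    with open(json_path, "w") as fh:
        json.dump(cleaned, fh, indent=2)
```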
2 changes: 1 addition & 1 deletion dev_scripts/update_pt_data.py
@@ -82,7 +82,7 @@ def parse_ionic_radii():
ionic_radii[int(header[tok_idx])] = float(match.group(1))

if el in data:
data[el]["Ionic_radii" + suffix] = ionic_radii
data[el][f"Ionic_radii{suffix}"] = ionic_radii
if suffix == "_hs":
data[el]["Ionic_radii"] = ionic_radii
else:
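The one-line change above swaps string concatenation for an f-string when building the `Ionic_radii` keys; the result is identical, only the style changes. A minimal illustration (the `suffix` value is taken from the diff context):

```python
# Both spellings build the same dictionary key; the f-string form is the
# idiomatic modern style that the diff above adopts.
suffix = "_hs"
key_concat = "Ionic_radii" + suffix   # old style
key_fstring = f"Ionic_radii{suffix}"  # new style
```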
80 changes: 80 additions & 0 deletions docs/CHANGES.md

Some generated files are not rendered by default.

2 changes: 1 addition & 1 deletion docs/apidoc/conf.py


8 changes: 0 additions & 8 deletions docs/apidoc/pymatgen.analysis.rst


8 changes: 8 additions & 0 deletions docs/apidoc/pymatgen.io.rst


23 changes: 22 additions & 1 deletion docs/compatibility.md

