releasing minor 1.5.2 [rebase & merge] #2829

Merged: 17 commits, Nov 7, 2024
2 changes: 1 addition & 1 deletion .github/CONTRIBUTING.md
@@ -103,7 +103,7 @@ def my_func(param_a: int, param_b: Optional[float] = None) -> str:
>>> my_func(1, 2)
3

.. note:: If you want to add something.
.. hint:: If you want to add something.
"""
p = param_b if param_b else 0
return str(param_a + p)
14 changes: 7 additions & 7 deletions .github/workflows/ci-checks.yml
@@ -13,29 +13,29 @@ concurrency:

jobs:
check-code:
uses: Lightning-AI/utilities/.github/workflows/[email protected].7
uses: Lightning-AI/utilities/.github/workflows/[email protected].8
with:
actions-ref: v0.11.7
actions-ref: v0.11.8
extra-typing: "typing"

check-schema:
uses: Lightning-AI/utilities/.github/workflows/[email protected].7
uses: Lightning-AI/utilities/.github/workflows/[email protected].8

check-package:
if: github.event.pull_request.draft == false
uses: Lightning-AI/utilities/.github/workflows/[email protected].7
uses: Lightning-AI/utilities/.github/workflows/[email protected].8
with:
actions-ref: v0.11.7
actions-ref: v0.11.8
artifact-name: dist-packages-${{ github.sha }}
import-name: "torchmetrics"
testing-matrix: |
{
"os": ["ubuntu-22.04", "macos-12", "windows-2022"],
"os": ["ubuntu-22.04", "macos-13", "windows-2022"],
"python-version": ["3.8", "3.11"]
}

check-md-links:
uses: Lightning-AI/utilities/.github/workflows/[email protected].7
uses: Lightning-AI/utilities/.github/workflows/[email protected].8
with:
base-branch: master
config-file: ".github/markdown-links-config.json"
10 changes: 6 additions & 4 deletions .github/workflows/ci-integrate.yml
@@ -26,12 +26,12 @@ jobs:
strategy:
fail-fast: false
matrix:
os: ["ubuntu-22.04", "macOS-12", "windows-2022"]
python-version: ["3.8", "3.10"]
os: ["ubuntu-22.04", "macOS-13", "windows-2022"]
python-version: ["3.9", "3.11"]
requires: ["oldest", "latest"]
exclude:
- { python-version: "3.10", requires: "oldest" }
- { python-version: "3.10", os: "windows" } # todo: https://discuss.pytorch.org/t/numpy-is-not-available-error/146192
- { python-version: "3.11", requires: "oldest" }
- { python-version: "3.11", os: "windows" } # todo: https://discuss.pytorch.org/t/numpy-is-not-available-error/146192
include:
- { python-version: "3.10", requires: "latest", os: "ubuntu-22.04" }
# - { python-version: "3.10", requires: "latest", os: "macOS-14" } # M1 machine # todo: crashing for MPS out of memory
@@ -53,6 +53,8 @@ jobs:

- name: source cashing
uses: ./.github/actions/pull-caches
with:
requires: ${{ matrix.requires }}
- name: set oldest if/only for integrations
if: matrix.requires == 'oldest'
run: python .github/assistant.py set-oldest-versions --req_files='["requirements/_integrate.txt"]'
2 changes: 1 addition & 1 deletion .github/workflows/ci-tests.yml
@@ -46,7 +46,7 @@ jobs:
- "2.5.0"
include:
# cover additional python and PT combinations
- { os: "ubuntu-22.04", python-version: "3.8", pytorch-version: "1.13.1" }
- { os: "ubuntu-20.04", python-version: "3.8", pytorch-version: "1.13.1", requires: "oldest" }
- { os: "ubuntu-22.04", python-version: "3.10", pytorch-version: "2.0.1" }
- { os: "ubuntu-22.04", python-version: "3.10", pytorch-version: "2.2.2" }
- { os: "ubuntu-22.04", python-version: "3.11", pytorch-version: "2.3.1" }
6 changes: 3 additions & 3 deletions .github/workflows/clear-cache.yml
@@ -23,7 +23,7 @@ on:
jobs:
cron-clear:
if: github.event_name == 'schedule' || github.event_name == 'pull_request'
uses: Lightning-AI/utilities/.github/workflows/[email protected].7
uses: Lightning-AI/utilities/.github/workflows/[email protected].8
with:
scripts-ref: v0.11.7
dry-run: ${{ github.event_name == 'pull_request' }}
@@ -32,9 +32,9 @@ jobs:

direct-clear:
if: github.event_name == 'workflow_dispatch' || github.event_name == 'pull_request'
uses: Lightning-AI/utilities/.github/workflows/[email protected].7
uses: Lightning-AI/utilities/.github/workflows/[email protected].8
with:
scripts-ref: v0.11.7
scripts-ref: v0.11.8
dry-run: ${{ github.event_name == 'pull_request' }}
pattern: ${{ inputs.pattern || 'pypi_wheels' }} # setting str in case of PR / debugging
age-days: ${{ fromJSON(inputs.age-days) || 0 }} # setting 0 in case of PR / debugging
1 change: 0 additions & 1 deletion .github/workflows/docs-build.yml
@@ -44,7 +44,6 @@ jobs:
- name: source cashing
uses: ./.github/actions/pull-caches
with:
requires: ${{ matrix.requires }}
pytorch-version: ${{ matrix.pytorch-version }}
pypi-dir: ${{ env.PYPI_CACHE }}

4 changes: 2 additions & 2 deletions .github/workflows/publish-pkg.yml
@@ -67,7 +67,7 @@ jobs:
- run: ls -lh dist/
# We do this, since failures on test.pypi aren't that bad
- name: Publish to Test PyPI
uses: pypa/gh-action-pypi-publish@v1.10.2
uses: pypa/gh-action-pypi-publish@v1.11.0
with:
user: __token__
password: ${{ secrets.test_pypi_password }}
@@ -94,7 +94,7 @@ jobs:
path: dist
- run: ls -lh dist/
- name: Publish distribution 📦 to PyPI
uses: pypa/gh-action-pypi-publish@v1.10.2
uses: pypa/gh-action-pypi-publish@v1.11.0
with:
user: __token__
password: ${{ secrets.pypi_password }}
16 changes: 16 additions & 0 deletions CHANGELOG.md
@@ -6,6 +6,22 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0

**Note: we move fast, but still we preserve 0.1 version (one feature release) back compatibility.**

---

## [1.5.2] - 2024-11-07

### Changed

- Re-adding `numpy` 2+ support ([#2804](https://github.com/Lightning-AI/torchmetrics/pull/2804))

### Fixed

- Fixed iou scores in detection for either empty predictions/targets leading to wrong scores ([#2805](https://github.com/Lightning-AI/torchmetrics/pull/2805))
- Fixed `MetricCollection` compatibility with `torch.jit.script` ([#2813](https://github.com/Lightning-AI/torchmetrics/pull/2813))
- Fixed assert in PIT ([#2811](https://github.com/Lightning-AI/torchmetrics/pull/2811))
- Patched `np.Inf` for `numpy` 2.0+ ([#2826](https://github.com/Lightning-AI/torchmetrics/pull/2826))


---

## [1.5.1] - 2024-10-22
10 changes: 5 additions & 5 deletions docs/source/pages/implement.rst
@@ -257,7 +257,7 @@ and tests gets formatted in the following way:
3. ``new_metric(...)``: essentially wraps the ``_update`` and ``_compute`` private functions into one public function that
makes up the functional interface for the metric.

.. note::
.. hint::
The `functional mean squared error <https://github.com/Lightning-AI/torchmetrics/blob/master/src/torchmetrics/functional/regression/mse.py>`_
metric is a great example of how to divide the logic.

@@ -270,9 +270,9 @@ and tests gets formatted in the following way:
``_new_metric_compute(...)`` function in its ``compute``. No logic should really be implemented in the module interface.
We do this to not have duplicate code to maintain.

.. note::
The module `MeanSquaredError <https://github.com/Lightning-AI/torchmetrics/blob/master/src/torchmetrics/regression/mse.py>`_
metric that corresponds to the above functional example showcases these steps.
.. note::
The module `MeanSquaredError <https://github.com/Lightning-AI/torchmetrics/blob/master/src/torchmetrics/regression/mse.py>`_
metric that corresponds to the above functional example showcases these steps.

4. Remember to add binding to the different relevant ``__init__`` files.

@@ -291,7 +291,7 @@ and tests gets formatted in the following way:
so that different combinations of inputs and parameters get tested.
5. (optional) If your metric raises any exception, please add tests that showcase this.

.. note::
.. hint::
The `test file for MSE <https://github.com/Lightning-AI/torchmetrics/blob/master/tests/unittests/regression/test_mean_error.py>`_
metric shows how to implement such tests.
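
To make the functional/module split described in these implement.rst hunks concrete, here is a hedged sketch of a new metric following that layout (all names such as `my_metric`/`MyMetric` are hypothetical and not part of the PR):

```python
# Sketch only: the functional interface wraps two private helpers, and the module
# class reuses the same helpers so no logic is duplicated.
from typing import Tuple

import torch
from torch import Tensor
from torchmetrics import Metric


def _my_metric_update(preds: Tensor, target: Tensor) -> Tuple[Tensor, int]:
    """Return the sufficient statistics for the metric."""
    return torch.sum(torch.abs(preds - target)), target.numel()


def _my_metric_compute(total: Tensor, count: int) -> Tensor:
    """Turn accumulated statistics into the final value."""
    return total / count


def my_metric(preds: Tensor, target: Tensor) -> Tensor:
    """Functional interface: just wires the two private helpers together."""
    total, count = _my_metric_update(preds, target)
    return _my_metric_compute(total, count)


class MyMetric(Metric):
    """Module interface: calls the same helpers in ``update`` and ``compute``."""

    def __init__(self, **kwargs) -> None:
        super().__init__(**kwargs)
        self.add_state("total", default=torch.tensor(0.0), dist_reduce_fx="sum")
        self.add_state("count", default=torch.tensor(0), dist_reduce_fx="sum")

    def update(self, preds: Tensor, target: Tensor) -> None:
        total, count = _my_metric_update(preds, target)
        self.total += total
        self.count += count

    def compute(self) -> Tensor:
        return _my_metric_compute(self.total, self.count)
```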

8 changes: 4 additions & 4 deletions docs/source/pages/lightning.rst
@@ -13,7 +13,7 @@ TorchMetrics in PyTorch Lightning
TorchMetrics was originally created as part of `PyTorch Lightning <https://github.com/Lightning-AI/pytorch-lightning>`_, a powerful deep learning research
framework designed for scaling models without boilerplate.

.. note::
.. caution::

TorchMetrics always offers compatibility with the last 2 major PyTorch Lightning versions, but we recommend always
keeping both frameworks up-to-date for the best experience.
@@ -69,9 +69,9 @@ LightningModule `self.log <https://lightning.ai/docs/pytorch/stable/extensions/l
method, Lightning will log the metric based on ``on_step`` and ``on_epoch`` flags present in ``self.log(...)``. If
``on_epoch`` is True, the logger automatically logs the end of epoch metric value by calling ``.compute()``.
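
A minimal sketch of the logging pattern described above, assuming `pytorch_lightning` and `torchmetrics` are installed (model details are illustrative):

```python
import torch
import pytorch_lightning as pl
from torchmetrics.classification import BinaryAccuracy


class LitClassifier(pl.LightningModule):
    def __init__(self) -> None:
        super().__init__()
        self.layer = torch.nn.Linear(16, 1)
        self.train_acc = BinaryAccuracy()

    def training_step(self, batch, batch_idx):
        x, y = batch
        logits = self.layer(x).squeeze(-1)
        loss = torch.nn.functional.binary_cross_entropy_with_logits(logits, y.float())
        # forward() updates the metric state and returns the value for the current batch
        self.train_acc(logits.sigmoid(), y)
        # on_step logs the batch value; on_epoch makes Lightning call .compute() at epoch end
        self.log("train_acc", self.train_acc, on_step=True, on_epoch=True)
        return loss

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=1e-3)
```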

.. note::
.. caution::

``sync_dist``, ``sync_dist_group`` and ``reduce_fx`` flags from ``self.log(...)`` don't affect the metric logging
The ``sync_dist``, ``sync_dist_group`` and ``reduce_fx`` flags from ``self.log(...)`` don't affect the metric logging
in any manner. The metric class contains its own distributed synchronization logic.

This, however is only true for metrics that inherit the base class ``Metric``,
@@ -136,7 +136,7 @@ Note that logging metrics this way will require you to manually reset the metric
In general, we recommend logging the metric object to make sure that metrics are correctly computed and reset.
Additionally, we highly recommend that the two ways of logging are not mixed as it can lead to wrong results.

.. note::
.. hint::

When using any Modular metric, calling ``self.metric(...)`` or ``self.metric.forward(...)`` serves the dual purpose
of calling ``self.metric.update()`` on its input and simultaneously returning the metric value over the provided
8 changes: 4 additions & 4 deletions docs/source/pages/overview.rst
@@ -61,13 +61,13 @@ This metrics API is independent of PyTorch Lightning. Metrics can directly be us
It is highly recommended to re-initialize the metric per mode as
shown in the examples above.

.. note::
.. caution::

Metric states are **not** added to the models ``state_dict`` by default.
To change this, after initializing the metric, the method ``.persistent(mode)`` can
be used to enable (``mode=True``) or disable (``mode=False``) this behaviour.
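
A small illustration of the `.persistent(mode)` switch mentioned above (the state names shown in the comments depend on the metric and are indicative only):

```python
import torch
from torchmetrics.classification import BinaryAccuracy


class Wrapper(torch.nn.Module):
    def __init__(self) -> None:
        super().__init__()
        self.acc = BinaryAccuracy()


model = Wrapper()
print(list(model.state_dict()))  # metric states are left out by default
model.acc.persistent(True)       # opt in: states now travel with checkpoints
print(list(model.state_dict()))  # now includes state keys such as 'acc.tp', 'acc.fp', ...
```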

.. note::
.. important::

Due to specialized logic around metric states, we in general do **not** recommend
that metrics are initialized inside other metrics (nested metrics), as this can lead
@@ -306,7 +306,7 @@ This pattern is implemented for the following operators (with ``a`` being metric
* Positive Value (``pos(a)``)
* Indexing (``a[0]``)
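
For reference, the operator composition listed above can be exercised like this (a sketch; the concrete metrics chosen here are arbitrary):

```python
import torch
from torchmetrics.classification import BinaryAccuracy, BinaryPrecision

acc = BinaryAccuracy()
prec = BinaryPrecision()
combined = acc + prec  # builds a compositional metric from the two operands

preds = torch.tensor([0, 1, 1, 0])
target = torch.tensor([0, 1, 0, 0])
combined.update(preds, target)   # updates both underlying metrics
print(combined.compute())        # accuracy + precision on the accumulated inputs
```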

.. note::
.. caution::

Some of these operations are only fully supported from Pytorch v1.4 and onwards, explicitly we found:
``add``, ``mul``, ``rmatmul``, ``rsub``, ``rmod``
@@ -381,7 +381,7 @@ inside your LightningModule. In most cases we just have to replace ``self.log``
# remember to reset metrics at the end of the epoch
self.valid_metrics.reset()

.. note::
.. important::

`MetricCollection` as default assumes that all the metrics in the collection
have the same call signature. If this is not the case, input that should be
2 changes: 1 addition & 1 deletion requirements/_tests.txt
@@ -18,4 +18,4 @@ fire ==0.7.*
cloudpickle >1.3, <=3.1.0
scikit-learn ==1.2.*; python_version < "3.9"
scikit-learn ==1.5.*; python_version > "3.8" # we do not use `>=` because of oldest replacement
cachier ==3.0.1
cachier ==3.1.2
3 changes: 2 additions & 1 deletion requirements/audio.txt
@@ -3,9 +3,10 @@

# this needs to be the same as used inside speechmetrics
pesq >=0.0.4, <0.0.5
numpy <2.0 # strict, for compatibility reasons
pystoi >=0.4.0, <0.5.0
torchaudio >=0.10.0, <2.6.0
gammatone >=1.0.0, <1.1.0
librosa >=0.9.0, <0.11.0
onnxruntime >=1.12.0, <1.20 # installing onnxruntime_gpu-gpu failed on macos
onnxruntime >=1.12.0, <1.21 # installing onnxruntime_gpu-gpu failed on macos
requests >=2.19.0, <2.33.0
2 changes: 1 addition & 1 deletion requirements/base.txt
@@ -1,7 +1,7 @@
# NOTE: the upper bound for the package version is only set for CI stability, and it is dropped while installing this package
# in case you want to preserve/enforce restrictions on the latest compatible version, add "strict" as an in-line comment

numpy >1.20.0, <2.0 # strict, for compatibility reasons
numpy >1.20.0
packaging >17.1
torch >=1.10.0, <2.6.0
typing-extensions; python_version < '3.9'
2 changes: 1 addition & 1 deletion requirements/multimodal.txt
@@ -1,5 +1,5 @@
# NOTE: the upper bound for the package version is only set for CI stability, and it is dropped while installing this package
# in case you want to preserve/enforce restrictions on the latest compatible version, add "strict" as an in-line comment

transformers >=4.42.3, <4.46.0
transformers >=4.42.3, <4.47.0
piq <=0.8.0
2 changes: 1 addition & 1 deletion requirements/text.txt
@@ -4,7 +4,7 @@
nltk >3.8.1, <=3.9.1
tqdm <4.67.0
regex >=2021.9.24, <=2024.9.11
transformers >4.4.0, <4.46.0
transformers >4.4.0, <4.47.0
mecab-python3 >=1.0.6, <1.1.0
ipadic >=1.0.0, <1.1.0
sentencepiece >=0.2.0, <0.3.0
4 changes: 2 additions & 2 deletions requirements/typing.txt
@@ -1,5 +1,5 @@
mypy ==1.11.2
torch ==2.5.0
mypy ==1.13.0
torch ==2.5.1

types-PyYAML
types-emoji
2 changes: 1 addition & 1 deletion src/conftest.py
@@ -36,5 +36,5 @@ def collect(self) -> GeneratorExit:
def pytest_collect_file(parent: Path, path: Path) -> Optional[DoctestModule]:
"""Collect doctests and add the reset_random_seed fixture."""
if path.ext == ".py":
return DoctestModule.from_parent(parent, fspath=path)
return DoctestModule.from_parent(parent, path=Path(path))
return None
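
Context for this change: recent pytest releases deprecate the `fspath` (py.path.local) argument of `from_parent` in favour of a `pathlib.Path` passed as `path`; the exact deprecation timeline is not part of this PR. A compact sketch of the two spellings:

```python
from pathlib import Path

from _pytest.doctest import DoctestModule


def _collect_doctests(parent, path):
    # legacy spelling (deprecation warnings on newer pytest):
    #     DoctestModule.from_parent(parent, fspath=path)
    # spelling used by this PR:
    return DoctestModule.from_parent(parent, path=Path(path))
```
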
2 changes: 1 addition & 1 deletion src/torchmetrics/__about__.py
@@ -1,4 +1,4 @@
__version__ = "1.5.1"
__version__ = "1.5.2"
__author__ = "Lightning-AI et al."
__author_email__ = "[email protected]"
__license__ = "Apache-2.0"
7 changes: 7 additions & 0 deletions src/torchmetrics/__init__.py
@@ -14,6 +14,13 @@
_PACKAGE_ROOT = os.path.dirname(__file__)
_PROJECT_ROOT = os.path.dirname(_PACKAGE_ROOT)

if package_available("numpy"):
# compatibility for AttributeError: `np.Inf` was removed in the NumPy 2.0 release. Use `np.inf` instead
import numpy

numpy.Inf = numpy.inf


if package_available("PIL"):
import PIL
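
As a side note on the shim added above: NumPy 2.0 removed the `np.Inf` alias, so legacy call sites raise `AttributeError` unless the alias is restored. A minimal, standalone sketch of the same idea (not part of the PR):

```python
import numpy as np

if not hasattr(np, "Inf"):  # NumPy >= 2.0 dropped the alias
    np.Inf = np.inf         # restore it for legacy code paths that still use np.Inf

assert np.Inf == float("inf")
```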

6 changes: 4 additions & 2 deletions src/torchmetrics/audio/dnsmos.py
@@ -54,11 +54,13 @@ class DeepNoiseSuppressionMeanOpinionScore(Metric):
- ``dnsmos`` (:class:`~torch.Tensor`): float tensor of DNSMOS values reduced across the batch
with shape ``(...,4)`` indicating [p808_mos, mos_sig, mos_bak, mos_ovr] in the last dim.

.. note:: using this metric requires you to have ``librosa``, ``onnxruntime`` and ``requests`` installed.
.. hint::
Using this metric requires you to have ``librosa``, ``onnxruntime`` and ``requests`` installed.
Install as ``pip install torchmetrics['audio']`` or alternatively `pip install librosa onnxruntime-gpu requests`
(if you do not have a GPU-enabled machine, install `onnxruntime` instead of `onnxruntime-gpu`)

.. note:: the ``forward`` and ``compute`` methods in this class return a reduced DNSMOS value
.. caution::
The ``forward`` and ``compute`` methods in this class return a reduced DNSMOS value
for a batch. To obtain the DNSMOS value for each sample, you may use the functional counterpart in
:func:`~torchmetrics.functional.audio.dnsmos.deep_noise_suppression_mean_opinion_score`.
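
The reduced-versus-per-sample behaviour called out in this and the following audio docstrings can be illustrated with a dependency-free metric (a sketch, not tied to DNSMOS itself):

```python
import torch
from torchmetrics.audio import SignalNoiseRatio
from torchmetrics.functional.audio import signal_noise_ratio

preds = torch.randn(4, 8000)
target = torch.randn(4, 8000)

print(SignalNoiseRatio()(preds, target))        # class API: one value, reduced over the batch
print(signal_noise_ratio(preds, target).shape)  # functional API: one value per sample -> torch.Size([4])
```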

6 changes: 4 additions & 2 deletions src/torchmetrics/audio/pesq.py
@@ -45,12 +45,14 @@ class PerceptualEvaluationSpeechQuality(Metric):

- ``pesq`` (:class:`~torch.Tensor`): float tensor of PESQ value reduced across the batch

.. note:: using this metrics requires you to have ``pesq`` install. Either install as ``pip install
.. hint::
Using this metric requires you to have ``pesq`` installed. Either install as ``pip install
torchmetrics[audio]`` or ``pip install pesq``. ``pesq`` will compile with your currently
installed version of numpy, meaning that if you upgrade numpy at some point in the future you will
most likely have to reinstall ``pesq``.

.. note:: the ``forward`` and ``compute`` methods in this class return a single (reduced) PESQ value
.. caution::
The ``forward`` and ``compute`` methods in this class return a single (reduced) PESQ value
for a batch. To obtain a PESQ value for each sample, you may use the functional counterpart in
:func:`~torchmetrics.functional.audio.pesq.perceptual_evaluation_speech_quality`.

5 changes: 3 additions & 2 deletions src/torchmetrics/audio/srmr.py
@@ -49,11 +49,12 @@ class SpeechReverberationModulationEnergyRatio(Metric):

- ``srmr`` (:class:`~torch.Tensor`): float scaler tensor

.. note:: using this metrics requires you to have ``gammatone`` and ``torchaudio`` installed.
.. hint::
Using this metric requires you to have ``gammatone`` and ``torchaudio`` installed.
Either install as ``pip install torchmetrics[audio]`` or ``pip install torchaudio``
and ``pip install git+https://github.com/detly/gammatone``.

.. note::
.. attention::
This implementation is experimental, and might not be consistent with the matlab
implementation `SRMRToolbox`_, especially the fast implementation.
The slow versions, a) fast=False, norm=False, max_cf=128, b) fast=False, norm=True, max_cf=30, have
3 changes: 2 additions & 1 deletion src/torchmetrics/audio/stoi.py
@@ -50,7 +50,8 @@ class ShortTimeObjectiveIntelligibility(Metric):

- ``stoi`` (:class:`~torch.Tensor`): float scalar tensor

.. note:: using this metrics requires you to have ``pystoi`` install. Either install as ``pip install
.. hint::
Using this metric requires you to have ``pystoi`` installed. Either install as ``pip install
torchmetrics[audio]`` or ``pip install pystoi``.

Args:
2 changes: 1 addition & 1 deletion src/torchmetrics/classification/calibration_error.py
@@ -214,7 +214,7 @@ class MulticlassCalibrationError(Metric):
- ``target`` (:class:`~torch.Tensor`): An int tensor of shape ``(N, ...)`` containing ground truth labels, and
therefore only contain values in the [0, n_classes-1] range (except if `ignore_index` is specified).

.. note::
.. tip::
Additional dimension ``...`` will be flattened into the batch dimension.
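
A quick sketch of the shape handling described above (values are random and purely illustrative):

```python
import torch
from torchmetrics.classification import MulticlassCalibrationError

metric = MulticlassCalibrationError(num_classes=3, n_bins=10)
preds = torch.softmax(torch.randn(2, 3, 4), dim=1)  # (N, C, ...) with one extra dimension
target = torch.randint(0, 3, (2, 4))                # (N, ...) matching that extra dimension
print(metric(preds, target))  # the extra dimension is flattened into the batch before binning
```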

As output to ``forward`` and ``compute`` the metric returns the following output:
Expand Down