docs: specify directives

Borda committed Oct 31, 2024
1 parent d3894e1 commit 36733df
Showing 49 changed files with 132 additions and 108 deletions.
2 changes: 1 addition & 1 deletion .github/CONTRIBUTING.md
@@ -103,7 +103,7 @@ def my_func(param_a: int, param_b: Optional[float] = None) -> str:
>>> my_func(1, 2)
3
.. note:: If you want to add something.
.. hint:: If you want to add something.
"""
p = param_b if param_b else 0
return str(param_a + p)
10 changes: 5 additions & 5 deletions docs/source/pages/implement.rst
@@ -257,7 +257,7 @@ and tests gets formatted in the following way:
3. ``new_metric(...)``: essentially wraps the ``_update`` and ``_compute`` private functions into one public function that
makes up the functional interface for the metric.

.. note::
.. hint::
The `functional mean squared error <https://github.com/Lightning-AI/torchmetrics/blob/master/src/torchmetrics/functional/regression/mse.py>`_
    metric is a great example of how to divide the logic.
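
As a rough sketch of this split (the names ``_my_metric_update``, ``_my_metric_compute`` and ``my_metric`` are placeholders for illustration, not existing TorchMetrics functions)::

    import torch
    from torch import Tensor
    from typing import Tuple

    def _my_metric_update(preds: Tensor, target: Tensor) -> Tuple[Tensor, Tensor]:
        # accumulate the sufficient statistics for one batch
        sum_squared_error = torch.sum((preds - target) ** 2)
        num_obs = torch.tensor(target.numel())
        return sum_squared_error, num_obs

    def _my_metric_compute(sum_squared_error: Tensor, num_obs: Tensor) -> Tensor:
        # turn the accumulated statistics into the final metric value
        return sum_squared_error / num_obs

    def my_metric(preds: Tensor, target: Tensor) -> Tensor:
        # public functional interface: one call that updates and computes
        sum_squared_error, num_obs = _my_metric_update(preds, target)
        return _my_metric_compute(sum_squared_error, num_obs)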

@@ -270,9 +270,9 @@ and tests gets formatted in the following way:
``_new_metric_compute(...)`` function in its ``compute``. No logic should really be implemented in the module interface.
We do this to not have duplicate code to maintain.

.. note::
The module `MeanSquaredError <https://github.com/Lightning-AI/torchmetrics/blob/master/src/torchmetrics/regression/mse.py>`_
metric that corresponds to the above functional example showcases these steps.
.. note::
The module `MeanSquaredError <https://github.com/Lightning-AI/torchmetrics/blob/master/src/torchmetrics/regression/mse.py>`_
metric that corresponds to the above functional example showcases these steps.
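
A matching sketch of the module interface, reusing the hypothetical ``_my_metric_update`` / ``_my_metric_compute`` helpers from the functional sketch above (an illustration of the pattern only, not the actual ``MeanSquaredError`` code)::

    import torch
    from torch import Tensor
    from torchmetrics import Metric

    class MyMetric(Metric):
        def __init__(self, **kwargs) -> None:
            super().__init__(**kwargs)
            # registered states are synchronized and reset automatically
            self.add_state("sum_squared_error", default=torch.tensor(0.0), dist_reduce_fx="sum")
            self.add_state("num_obs", default=torch.tensor(0), dist_reduce_fx="sum")

        def update(self, preds: Tensor, target: Tensor) -> None:
            # delegate to the private functional update
            sum_squared_error, num_obs = _my_metric_update(preds, target)
            self.sum_squared_error += sum_squared_error
            self.num_obs += num_obs

        def compute(self) -> Tensor:
            # delegate to the private functional compute
            return _my_metric_compute(self.sum_squared_error, self.num_obs)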

4. Remember to add binding to the different relevant ``__init__`` files.

@@ -291,7 +291,7 @@ and tests gets formatted in the following way:
so that different combinations of inputs and parameters get tested.
5. (optional) If your metric raises any exception, please add tests that showcase this.

.. note::
.. hint::
The `test file for MSE <https://github.com/Lightning-AI/torchmetrics/blob/master/tests/unittests/regression/test_mean_error.py>`_
metric shows how to implement such tests.

8 changes: 4 additions & 4 deletions docs/source/pages/lightning.rst
@@ -13,7 +13,7 @@ TorchMetrics in PyTorch Lightning
TorchMetrics was originally created as part of `PyTorch Lightning <https://github.com/Lightning-AI/pytorch-lightning>`_, a powerful deep learning research
framework designed for scaling models without boilerplate.

.. note::
.. caution::

TorchMetrics always offers compatibility with the last 2 major PyTorch Lightning versions, but we recommend always
keeping both frameworks up-to-date for the best experience.
@@ -69,9 +69,9 @@ LightningModule `self.log <https://lightning.ai/docs/pytorch/stable/extensions/l
method, Lightning will log the metric based on ``on_step`` and ``on_epoch`` flags present in ``self.log(...)``. If
``on_epoch`` is True, the logger automatically logs the end of epoch metric value by calling ``.compute()``.
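
A minimal sketch of what this can look like (the task, ``MulticlassAccuracy`` and the ``lightning.pytorch`` import path are assumptions for illustration)::

    import torch
    from lightning.pytorch import LightningModule
    from torchmetrics.classification import MulticlassAccuracy

    class LitModel(LightningModule):
        def __init__(self, num_classes: int) -> None:
            super().__init__()
            self.model = torch.nn.LazyLinear(num_classes)
            self.train_acc = MulticlassAccuracy(num_classes=num_classes)

        def training_step(self, batch, batch_idx):
            x, y = batch
            logits = self.model(x)
            # updates the metric state; with on_epoch=True Lightning calls .compute() at epoch end
            self.train_acc(logits, y)
            self.log("train_acc", self.train_acc, on_step=True, on_epoch=True)
            return torch.nn.functional.cross_entropy(logits, y)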

.. note::
.. caution::

``sync_dist``, ``sync_dist_group`` and ``reduce_fx`` flags from ``self.log(...)`` don't affect the metric logging
The ``sync_dist``, ``sync_dist_group`` and ``reduce_fx`` flags from ``self.log(...)`` don't affect the metric logging
in any manner. The metric class contains its own distributed synchronization logic.

This, however is only true for metrics that inherit the base class ``Metric``,
@@ -136,7 +136,7 @@ Note that logging metrics this way will require you to manually reset the metric
In general, we recommend logging the metric object to make sure that metrics are correctly computed and reset.
Additionally, we highly recommend that the two ways of logging are not mixed as it can lead to wrong results.

.. note::
.. hint::

When using any Modular metric, calling ``self.metric(...)`` or ``self.metric.forward(...)`` serves the dual purpose
of calling ``self.metric.update()`` on its input and simultaneously returning the metric value over the provided
8 changes: 4 additions & 4 deletions docs/source/pages/overview.rst
@@ -61,13 +61,13 @@ This metrics API is independent of PyTorch Lightning. Metrics can directly be us
It is highly recommended to re-initialize the metric per mode as
shown in the examples above.

.. note::
.. caution::

Metric states are **not** added to the models ``state_dict`` by default.
To change this, after initializing the metric, the method ``.persistent(mode)`` can
be used to enable (``mode=True``) or disable (``mode=False``) this behaviour.
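
A short sketch of toggling this behaviour (``MeanSquaredError`` is used purely as an example metric)::

    import torch
    from torchmetrics.regression import MeanSquaredError

    metric = MeanSquaredError()
    metric.persistent(True)  # include metric states in state_dict()
    metric.update(torch.randn(10), torch.randn(10))
    # the metric state keys should now appear in the state dict
    print(list(metric.state_dict().keys()))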

.. note::
.. important::

Due to specialized logic around metric states, we in general do **not** recommend
that metrics are initialized inside other metrics (nested metrics), as this can lead
@@ -306,7 +306,7 @@ This pattern is implemented for the following operators (with ``a`` being metric
* Positive Value (``pos(a)``)
* Indexing (``a[0]``)

.. note::
.. caution::

Some of these operations are only fully supported from Pytorch v1.4 and onwards, explicitly we found:
``add``, ``mul``, ``rmatmul``, ``rsub``, ``rmod``
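
As a hedged illustration of this compositional pattern (the choice of ``BinaryPrecision`` and ``BinaryRecall`` is arbitrary)::

    import torch
    from torchmetrics.classification import BinaryPrecision, BinaryRecall

    precision = BinaryPrecision()
    recall = BinaryRecall()
    # composing two metrics yields a new metric object that updates both operands
    f1_like = (precision * recall) * 2 / (precision + recall)

    preds = torch.rand(10)
    target = torch.randint(0, 2, (10,))
    f1_like.update(preds, target)
    print(f1_like.compute())
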
@@ -381,7 +381,7 @@ inside your LightningModule. In most cases we just have to replace ``self.log``
# remember to reset metrics at the end of the epoch
self.valid_metrics.reset()
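
A minimal sketch of this pattern in a LightningModule (the metrics chosen, the ``val_`` prefix and the ``lightning.pytorch`` import path are assumptions for illustration)::

    import torch
    from lightning.pytorch import LightningModule
    from torchmetrics import MetricCollection
    from torchmetrics.classification import MulticlassAccuracy, MulticlassF1Score

    class LitClassifier(LightningModule):
        def __init__(self, num_classes: int) -> None:
            super().__init__()
            self.model = torch.nn.LazyLinear(num_classes)
            metrics = MetricCollection({
                "acc": MulticlassAccuracy(num_classes=num_classes),
                "f1": MulticlassF1Score(num_classes=num_classes),
            })
            self.valid_metrics = metrics.clone(prefix="val_")

        def validation_step(self, batch, batch_idx):
            x, y = batch
            logits = self.model(x)
            # one update call dispatches to every metric in the collection
            self.valid_metrics.update(logits, y)

        def on_validation_epoch_end(self):
            self.log_dict(self.valid_metrics.compute())
            # remember to reset metrics at the end of the epoch
            self.valid_metrics.reset()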

.. note::
.. important::

`MetricCollection` as default assumes that all the metrics in the collection
have the same call signature. If this is not the case, input that should be
6 changes: 4 additions & 2 deletions src/torchmetrics/audio/dnsmos.py
@@ -54,11 +54,13 @@ class DeepNoiseSuppressionMeanOpinionScore(Metric):
- ``dnsmos`` (:class:`~torch.Tensor`): float tensor of DNSMOS values reduced across the batch
with shape ``(...,4)`` indicating [p808_mos, mos_sig, mos_bak, mos_ovr] in the last dim.
.. note:: using this metric requires you to have ``librosa``, ``onnxruntime`` and ``requests`` installed.
.. hint::
Using this metric requires you to have ``librosa``, ``onnxruntime`` and ``requests`` installed.
Install as ``pip install torchmetrics['audio']`` or alternatively `pip install librosa onnxruntime-gpu requests`
    (if you do not have a GPU-enabled machine, install `onnxruntime` instead of `onnxruntime-gpu`)
.. note:: the ``forward`` and ``compute`` methods in this class return a reduced DNSMOS value
.. caution::
The ``forward`` and ``compute`` methods in this class return a reduced DNSMOS value
for a batch. To obtain the DNSMOS value for each sample, you may use the functional counterpart in
:func:`~torchmetrics.functional.audio.dnsmos.deep_noise_suppression_mean_opinion_score`.
6 changes: 4 additions & 2 deletions src/torchmetrics/audio/nisqa.py
@@ -43,10 +43,12 @@ class NonIntrusiveSpeechQualityAssessment(Metric):
- ``nisqa`` (:class:`~torch.Tensor`): float tensor reduced across the batch with shape ``(5,)`` corresponding to
overall MOS, noisiness, discontinuity, coloration and loudness in that order
.. note:: Using this metric requires you to have ``librosa`` and ``requests`` installed. Install as
.. hint::
Using this metric requires you to have ``librosa`` and ``requests`` installed. Install as
``pip install librosa requests``.
.. note:: The ``forward`` and ``compute`` methods in this class return values reduced across the batch. To obtain
.. caution::
The ``forward`` and ``compute`` methods in this class return values reduced across the batch. To obtain
values for each sample, you may use the functional counterpart
:func:`~torchmetrics.functional.audio.nisqa.non_intrusive_speech_quality_assessment`.
6 changes: 4 additions & 2 deletions src/torchmetrics/audio/pesq.py
@@ -45,12 +45,14 @@ class PerceptualEvaluationSpeechQuality(Metric):
- ``pesq`` (:class:`~torch.Tensor`): float tensor of PESQ value reduced across the batch
.. note:: using this metrics requires you to have ``pesq`` install. Either install as ``pip install
.. hint::
    Using this metric requires you to have ``pesq`` installed. Either install as ``pip install
torchmetrics[audio]`` or ``pip install pesq``. ``pesq`` will compile with your currently
installed version of numpy, meaning that if you upgrade numpy at some point in the future you will
most likely have to reinstall ``pesq``.
.. note:: the ``forward`` and ``compute`` methods in this class return a single (reduced) PESQ value
.. caution::
The ``forward`` and ``compute`` methods in this class return a single (reduced) PESQ value
for a batch. To obtain a PESQ value for each sample, you may use the functional counterpart in
:func:`~torchmetrics.functional.audio.pesq.perceptual_evaluation_speech_quality`.
5 changes: 3 additions & 2 deletions src/torchmetrics/audio/srmr.py
@@ -48,11 +48,12 @@ class SpeechReverberationModulationEnergyRatio(Metric):
- ``srmr`` (:class:`~torch.Tensor`): float scalar tensor
.. note:: using this metrics requires you to have ``gammatone`` and ``torchaudio`` installed.
.. hint::
    Using this metric requires you to have ``gammatone`` and ``torchaudio`` installed.
Either install as ``pip install torchmetrics[audio]`` or ``pip install torchaudio``
and ``pip install git+https://github.com/detly/gammatone``.
.. note::
.. attention::
This implementation is experimental, and might not be consistent with the matlab
implementation `SRMRToolbox`_, especially the fast implementation.
The slow versions, a) fast=False, norm=False, max_cf=128, b) fast=False, norm=True, max_cf=30, have
3 changes: 2 additions & 1 deletion src/torchmetrics/audio/stoi.py
@@ -50,7 +50,8 @@ class ShortTimeObjectiveIntelligibility(Metric):
- ``stoi`` (:class:`~torch.Tensor`): float scalar tensor
.. note:: using this metrics requires you to have ``pystoi`` install. Either install as ``pip install
.. hint::
    Using this metric requires you to have ``pystoi`` installed. Either install as ``pip install
torchmetrics[audio]`` or ``pip install pystoi``.
Args:
2 changes: 1 addition & 1 deletion src/torchmetrics/classification/calibration_error.py
@@ -214,7 +214,7 @@ class MulticlassCalibrationError(Metric):
- ``target`` (:class:`~torch.Tensor`): An int tensor of shape ``(N, ...)`` containing ground truth labels, and
therefore only contain values in the [0, n_classes-1] range (except if `ignore_index` is specified).
.. note::
.. tip::
Additional dimension ``...`` will be flattened into the batch dimension.
As output to ``forward`` and ``compute`` the metric returns the following output:
4 changes: 2 additions & 2 deletions src/torchmetrics/classification/cohen_kappa.py
@@ -50,7 +50,7 @@ class labels.
Additionally, we convert to int tensor with thresholding using the value in ``threshold``.
- ``target`` (:class:`~torch.Tensor`): An int tensor of shape ``(N, ...)``.
.. note::
.. tip::
Additional dimension ``...`` will be flattened into the batch dimension.
As output to ``forward`` and ``compute`` the metric returns the following output:
@@ -175,7 +175,7 @@ class labels.
convert probabilities/logits into an int tensor.
- ``target`` (:class:`~torch.Tensor`): An int tensor of shape ``(N, ...)``.
.. note::
.. tip::
Additional dimension ``...`` will be flattened into the batch dimension.
As output to ``forward`` and ``compute`` the metric returns the following output:
2 changes: 1 addition & 1 deletion src/torchmetrics/classification/dice.py
@@ -76,7 +76,7 @@ class Dice(Metric):
- ``'samples'``: Calculate the metric for each sample, and average the metrics
across samples (with equal weights for each sample).
.. note::
.. hint::
What is considered a sample in the multi-dimensional multi-class case
depends on the value of ``mdmc_average``.
4 changes: 2 additions & 2 deletions src/torchmetrics/classification/hinge.py
@@ -55,7 +55,7 @@ class BinaryHingeLoss(Metric):
ground truth labels, and therefore only contain {0,1} values (except if `ignore_index` is specified). The value
1 always encodes the positive class.
.. note::
.. tip::
Additional dimension ``...`` will be flattened into the batch dimension.
As output to ``forward`` and ``compute`` the metric returns the following output:
@@ -189,7 +189,7 @@ class MulticlassHingeLoss(Metric):
ground truth labels, and therefore only contain values in the [0, n_classes-1] range (except if `ignore_index`
is specified).
.. note::
.. tip::
Additional dimension ``...`` will be flattened into the batch dimension.
As output to ``forward`` and ``compute`` the metric returns the following output:
6 changes: 3 additions & 3 deletions src/torchmetrics/classification/jaccard.py
@@ -52,7 +52,7 @@ class BinaryJaccardIndex(BinaryConfusionMatrix):
Additionally, we convert to int tensor with thresholding using the value in ``threshold``.
- ``target`` (:class:`~torch.Tensor`): An int tensor of shape ``(N, ...)``.
.. note::
.. tip::
Additional dimension ``...`` will be flattened into the batch dimension.
As output to ``forward`` and ``compute`` the metric returns the following output:
@@ -170,7 +170,7 @@ class MulticlassJaccardIndex(MulticlassConfusionMatrix):
probabilities/logits into an int tensor.
- ``target`` (:class:`~torch.Tensor`): An int tensor of shape ``(N, ...)``.
.. note::
.. tip::
Additional dimension ``...`` will be flattened into the batch dimension.
As output to ``forward`` and ``compute`` the metric returns the following output:
@@ -307,7 +307,7 @@ class MultilabelJaccardIndex(MultilabelConfusionMatrix):
sigmoid per element. Additionally, we convert to int tensor with thresholding using the value in ``threshold``.
- ``target`` (:class:`~torch.Tensor`): An int tensor of shape ``(N, C, ...)``
.. note::
.. tip::
Additional dimension ``...`` will be flattened into the batch dimension.
As output to ``forward`` and ``compute`` the metric returns the following output:
6 changes: 3 additions & 3 deletions src/torchmetrics/classification/matthews_corrcoef.py
@@ -48,7 +48,7 @@ class BinaryMatthewsCorrCoef(BinaryConfusionMatrix):
per element. Additionally, we convert to int tensor with thresholding using the value in ``threshold``.
- ``target`` (:class:`~torch.Tensor`): An int tensor of shape ``(N, ...)``
.. note::
.. tip::
Additional dimension ``...`` will be flattened into the batch dimension.
As output to ``forward`` and ``compute`` the metric returns the following output:
@@ -156,7 +156,7 @@ class MulticlassMatthewsCorrCoef(MulticlassConfusionMatrix):
probabilities/logits into an int tensor.
- ``target`` (:class:`~torch.Tensor`): An int tensor of shape ``(N, ...)``
.. note::
.. tip::
Additional dimension ``...`` will be flattened into the batch dimension.
As output to ``forward`` and ``compute`` the metric returns the following output:
@@ -268,7 +268,7 @@ class MultilabelMatthewsCorrCoef(MultilabelConfusionMatrix):
per element. Additionally, we convert to int tensor with thresholding using the value in ``threshold``.
- ``target`` (:class:`~torch.Tensor`): An int tensor of shape ``(N, C, ...)``
.. note::
.. tip::
Additional dimension ``...`` will be flattened into the batch dimension.
As output to ``forward`` and ``compute`` the metric returns the following output:
6 changes: 3 additions & 3 deletions src/torchmetrics/classification/precision_fixed_recall.py
@@ -60,7 +60,7 @@ class BinaryPrecisionAtFixedRecall(BinaryPrecisionRecallCurve):
ground truth labels, and therefore only contain {0,1} values (except if `ignore_index` is specified). The value
1 always encodes the positive class.
.. note::
.. tip::
Additional dimension ``...`` will be flattened into the batch dimension.
As output to ``forward`` and ``compute`` the metric returns the following output:
@@ -193,7 +193,7 @@ class MulticlassPrecisionAtFixedRecall(MulticlassPrecisionRecallCurve):
ground truth labels, and therefore only contain values in the [0, n_classes-1] range (except if `ignore_index`
is specified).
.. note::
.. tip::
Additional dimension ``...`` will be flattened into the batch dimension.
As output to ``forward`` and ``compute`` the metric returns a tuple of either 2 tensors or 2 lists containing:
@@ -338,7 +338,7 @@ class MultilabelPrecisionAtFixedRecall(MultilabelPrecisionRecallCurve):
ground truth labels, and therefore only contain {0,1} values (except if `ignore_index` is specified). The value
1 always encodes the positive class.
.. note::
.. tip::
Additional dimension ``...`` will be flattened into the batch dimension.
As output to ``forward`` and ``compute`` the metric returns a tuple of either 2 tensors or 2 lists containing:
6 changes: 3 additions & 3 deletions src/torchmetrics/classification/precision_recall_curve.py
@@ -67,7 +67,7 @@ class BinaryPrecisionRecallCurve(Metric):
ground truth labels, and therefore only contain {0,1} values (except if `ignore_index` is specified). The value
1 always encodes the positive class.
.. note::
.. tip::
Additional dimension ``...`` will be flattened into the batch dimension.
As output to ``forward`` and ``compute`` the metric returns the following output:
@@ -244,7 +244,7 @@ class MulticlassPrecisionRecallCurve(Metric):
ground truth labels, and therefore only contain values in the [0, n_classes-1] range (except if `ignore_index`
is specified).
.. note::
.. tip::
Additional dimension ``...`` will be flattened into the batch dimension.
As output to ``forward`` and ``compute`` the metric returns the following output:
@@ -441,7 +441,7 @@ class MultilabelPrecisionRecallCurve(Metric):
- ``target`` (:class:`~torch.Tensor`): An int tensor of shape ``(N, C, ...)``. Target should be a tensor containing
ground truth labels, and therefore only contain {0,1} values (except if `ignore_index` is specified).
.. note::
.. tip::
Additional dimension ``...`` will be flattened into the batch dimension.
As output to ``forward`` and ``compute`` the metric returns the following a tuple of either 3 tensors or
6 changes: 3 additions & 3 deletions src/torchmetrics/classification/ranking.py
@@ -51,7 +51,7 @@ class MultilabelCoverageError(Metric):
- ``target`` (:class:`~torch.Tensor`): An int tensor of shape ``(N, C, ...)``. Target should be a tensor
containing ground truth labels, and therefore only contain {0,1} values (except if `ignore_index` is specified).
.. note::
.. tip::
Additional dimension ``...`` will be flattened into the batch dimension.
As output to ``forward`` and ``compute`` the metric returns the following output:
@@ -171,7 +171,7 @@ class MultilabelRankingAveragePrecision(Metric):
- ``target`` (:class:`~torch.Tensor`): An int tensor of shape ``(N, C, ...)``. Target should be a tensor
containing ground truth labels, and therefore only contain {0,1} values (except if `ignore_index` is specified).
.. note::
.. tip::
Additional dimension ``...`` will be flattened into the batch dimension.
As output to ``forward`` and ``compute`` the metric returns the following output:
@@ -291,7 +291,7 @@ class MultilabelRankingLoss(Metric):
- ``target`` (:class:`~torch.Tensor`): An int tensor of shape ``(N, C, ...)``. Target should be a tensor
containing ground truth labels, and therefore only contain {0,1} values (except if `ignore_index` is specified).
.. note::
.. tip::
Additional dimension ``...`` will be flattened into the batch dimension.
As output to ``forward`` and ``compute`` the metric returns the following output:
6 changes: 3 additions & 3 deletions src/torchmetrics/classification/recall_fixed_precision.py
@@ -59,7 +59,7 @@ class BinaryRecallAtFixedPrecision(BinaryPrecisionRecallCurve):
ground truth labels, and therefore only contain {0,1} values (except if `ignore_index` is specified). The value
1 always encodes the positive class.
.. note::
.. tip::
Additional dimension ``...`` will be flattened into the batch dimension.
As output to ``forward`` and ``compute`` the metric returns the following output:
@@ -194,7 +194,7 @@ class MulticlassRecallAtFixedPrecision(MulticlassPrecisionRecallCurve):
ground truth labels, and therefore only contain values in the [0, n_classes-1] range (except if `ignore_index`
is specified).
.. note::
.. tip::
Additional dimension ``...`` will be flattened into the batch dimension.
As output to ``forward`` and ``compute`` the metric returns a tuple of either 2 tensors or 2 lists containing:
@@ -337,7 +337,7 @@ class MultilabelRecallAtFixedPrecision(MultilabelPrecisionRecallCurve):
ground truth labels, and therefore only contain {0,1} values (except if `ignore_index` is specified). The value
1 always encodes the positive class.
.. note::
.. tip::
Additional dimension ``...`` will be flattened into the batch dimension.
As output to ``forward`` and ``compute`` the metric returns a tuple of either 2 tensors or 2 lists containing: