Commit bc92c21

Merge branch 'wasserstein' of https://github.com/GaganCodes/torcheval into wasserstein
GaganCodes committed Oct 29, 2023
2 parents 0cb9c26 + c512b69 commit bc92c21
Showing 57 changed files with 2,667 additions and 2,498 deletions.
6 changes: 3 additions & 3 deletions .github/workflows/nightly_build_cpu.yaml
@@ -13,7 +13,7 @@ jobs:
runs-on: ubuntu-latest
strategy:
matrix:
python-version: [3.7, 3.8, 3.9]
python-version: [3.8, 3.9, "3.10"]
steps:
- name: Check out repo
uses: actions/checkout@v2
@@ -28,7 +28,7 @@ jobs:
run: |
set -eux
conda activate test
conda install pytorch cpuonly -c pytorch-nightly
conda install pytorch torchaudio torchvision cpuonly -c pytorch-nightly
pip install -r requirements.txt
pip install -r dev-requirements.txt
python setup.py sdist bdist_wheel
@@ -51,7 +51,7 @@ jobs:
with:
miniconda-version: "latest"
activate-environment: test
python-version: 3.7
python-version: "3.10"
- name: Install dependencies
shell: bash -l {0}
run: |
6 changes: 3 additions & 3 deletions .github/workflows/release_build.yaml
@@ -8,7 +8,7 @@ jobs:
runs-on: ubuntu-latest
strategy:
matrix:
python-version: [3.7, 3.8, 3.9]
python-version: [3.8, 3.9, "3.10"]
steps:
- name: Check out repo
uses: actions/checkout@v2
@@ -23,7 +23,7 @@ jobs:
run: |
set -eux
conda activate test
conda install pytorch cpuonly -c pytorch-nightly
conda install pytorch torchaudio torchvision cpuonly -c pytorch-nightly
pip install -r requirements.txt
pip install -r dev-requirements.txt
python setup.py sdist bdist_wheel
@@ -46,7 +46,7 @@ jobs:
with:
miniconda-version: "latest"
activate-environment: test
python-version: 3.7
python-version: "3.10"
- name: Install dependencies
shell: bash -l {0}
run: |
6 changes: 3 additions & 3 deletions .github/workflows/unit_test.yaml
@@ -10,7 +10,7 @@ jobs:
runs-on: ubuntu-latest
strategy:
matrix:
python-version: [3.7, 3.8, 3.9]
python-version: [3.8, 3.9]
steps:
- name: Check out repo
uses: actions/checkout@v2
@@ -25,7 +25,7 @@ jobs:
run: |
set -eux
conda activate test
conda install pytorch cpuonly -c pytorch-nightly
conda install pytorch torchaudio torchvision cpuonly -c pytorch-nightly
pip install -r requirements.txt
pip install -r dev-requirements.txt
pip install --no-build-isolation -e ".[dev]"
@@ -74,7 +74,7 @@ jobs:
run: |
set -eux
conda activate test
pip install torch --extra-index-url https://download.pytorch.org/whl/cu117
pip install torch torchvision --extra-index-url https://download.pytorch.org/whl/cu117
# Use stable fbgemm-gpu
pip uninstall -y fbgemm-gpu-nightly
pip install fbgemm-gpu==0.2.0
7 changes: 2 additions & 5 deletions README.md
@@ -14,7 +14,7 @@
A library that contains a rich collection of performant PyTorch model metrics, a simple interface to create new metrics, a toolkit to facilitate metric computation in distributed training and tools for PyTorch model evaluations.

## Installing TorchEval
Requires Python >= 3.7 and PyTorch >= 1.11
Requires Python >= 3.8 and PyTorch >= 1.11

From pip:

@@ -163,13 +163,10 @@ for epoch in range(num_epochs):
# all seen data on the local process since last reset()
local_compute_result = metric.compute()

# sync_and_compute(metric) sends metric data across all processes to the process with rank 0,
# the output on rank 0 is the computed metric for the entire process group, on other ranks None is returned.
# sync_and_compute(metric) syncs metric data across all ranks and computes the metric value
global_compute_result = sync_and_compute(metric)
if global_rank == 0:
print(global_compute_result)
# if sync_and_compute(metric, recipient_rank="all") is called, the computation is done on rank 0, and the output is synced
# across processes so that each rank returns the computed metric.

# metric.reset() clears the data on each process so that subsequent
# calls to compute() only act on new data
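For reference, a minimal sketch of the distributed pattern the updated README comment describes, assuming a process group already initialized (e.g. via torchrun) and using `MulticlassAccuracy` as a stand-in metric; the metric choice and sample tensors are illustrative, not part of this diff:

```python
# Illustrative sketch only: assumes torch.distributed has been initialized
# (e.g. launched with torchrun) and uses MulticlassAccuracy as a placeholder.
import torch
import torch.distributed as dist

from torcheval.metrics import MulticlassAccuracy
from torcheval.metrics.toolkit import sync_and_compute

metric = MulticlassAccuracy()
metric.update(torch.tensor([0, 1, 1]), torch.tensor([0, 1, 0]))

# Local value: only the data seen on this rank since the last reset().
local_result = metric.compute()

# Global value: metric state from every rank is synced and the metric is
# computed over the whole process group; the README prints it from rank 0.
global_result = sync_and_compute(metric)
if dist.get_rank() == 0:
    print(global_result)

metric.reset()  # clear local state before the next accumulation window
```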
6 changes: 3 additions & 3 deletions dev-requirements.txt
@@ -1,9 +1,9 @@
numpy
torchvision
pre-commit
pytest
pytest-timeout
pytest-cov
Cython>=0.28.5
scikit-learn==0.22
scikit-image>=0.18.3
scikit-learn>=0.22
scikit-image==0.18.3
torchtnt-nightly
1 change: 1 addition & 0 deletions docs/source/conf.py
@@ -55,6 +55,7 @@
"sphinx.ext.napoleon",
"sphinx.ext.autodoc",
"sphinx.ext.autosummary",
"sphinx.ext.viewcode",
"fbcode",
]

1 change: 0 additions & 1 deletion docs/source/index.rst
@@ -113,4 +113,3 @@ TorchEval API
torcheval.metrics.rst
torcheval.metrics.functional.rst
torcheval.metrics.toolkit.rst
torcheval.tools.rst
10 changes: 9 additions & 1 deletion docs/source/torcheval.metrics.functional.rst
@@ -50,6 +50,15 @@ Classification Metrics
multilabel_recall_at_fixed_precision
topk_multilabel_accuracy

Image Metrics
-------------------------------------------------------------------

.. autosummary::
:toctree: generated
:nosignatures:

peak_signal_noise_ratio

Ranking Metrics
-------------------------------------------------------------------

@@ -86,4 +95,3 @@ Text Metrics
word_error_rate
word_information_preserved
word_information_lost

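For reference, a minimal usage sketch of the newly documented `peak_signal_noise_ratio` functional; the `(input, target)` call signature and the sample tensors are assumptions based on the library's functional-metric conventions rather than anything specified by this diff:

```python
# Illustrative sketch only: the (input, target) signature is assumed from
# torcheval's functional-metric conventions.
import torch
from torcheval.metrics.functional import peak_signal_noise_ratio

target = torch.rand(1, 3, 32, 32)  # reference image with values in [0, 1]
noisy = (target + 0.05 * torch.randn_like(target)).clamp(0.0, 1.0)

psnr = peak_signal_noise_ratio(noisy, target)
print(f"PSNR: {psnr.item():.2f} dB")
```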
21 changes: 20 additions & 1 deletion docs/source/torcheval.metrics.rst
@@ -19,6 +19,15 @@ Aggregation Metrics
Sum
Throughput

Audio Metrics
-------------------------------------------------------------------

.. autosummary::
:toctree: generated
:nosignatures:

FrechetAudioDistance

Classification Metrics
-------------------------------------------------------------------

@@ -54,6 +63,17 @@ Classification Metrics
MultilabelRecallAtFixedPrecision
TopKMultilabelAccuracy

Image Metrics
-------------------------------------------------------------------

.. autosummary::
:toctree: generated
:nosignatures:

FrechetInceptionDistance
PeakSignalNoiseRatio
StructuralSimilarity

Ranking Metrics
-------------------------------------------------------------------

@@ -101,4 +121,3 @@ Windowed Metrics
WindowedClickThroughRate
WindowedMeanSquaredError
WindowedWeightedCalibration

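For reference, a brief sketch of the stateful `update()` / `compute()` pattern shared by the newly documented class-based image metrics, shown here with `PeakSignalNoiseRatio`; the batch shapes and two-batch loop are illustrative assumptions:

```python
# Illustrative sketch only: shapes and the accumulation loop are assumptions;
# the update()/compute()/reset() interface is the library's standard pattern.
import torch
from torcheval.metrics import PeakSignalNoiseRatio

metric = PeakSignalNoiseRatio()

for _ in range(2):  # accumulate state over several batches
    target = torch.rand(4, 3, 16, 16)
    prediction = (target + 0.1 * torch.randn_like(target)).clamp(0.0, 1.0)
    metric.update(prediction, target)

print(metric.compute())  # PSNR over all accumulated batches
metric.reset()           # clear state for the next evaluation pass
```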
8 changes: 0 additions & 8 deletions docs/source/torcheval.tools.rst

This file was deleted.

160 changes: 4 additions & 156 deletions examples/Introducing_TorchEval.ipynb
@@ -604,159 +604,6 @@
"Notice that our final result is computed with 20,000 samples, 5,000 from each process!"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "6XIhytwSeNPd"
},
"source": [
"# Module Summary Tools\n",
"\n",
"TorchEval also includes tools for model summarization, providing per layer details of trainable parameters, size in bytes, FLOPS, and more."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "uC9eKXXkg5v0"
},
"source": [
"To get a basic summary of the model, we can use `get_module_summary` and pass in the model\n",
"\n",
"\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": 14,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "qaZmg2YWe7PS",
"outputId": "50915417-cee2-4e3f-a7d8-2c9b628bc75e"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Name | Type | # Parameters | # Trainable Parameters | Size (bytes) | Contains Uninitialized Parameters?\n",
"----------------------------------------------------------------------------------------------------------------------------\n",
" | AlexNet | 61.1 M | 61.1 M | 244 M | No \n",
"features | Sequential | 2.5 M | 2.5 M | 9.9 M | No \n",
"features.0 | Conv2d | 23.3 K | 23.3 K | 93.2 K | No \n",
"features.1 | ReLU | 0 | 0 | 0 | No \n",
"features.2 | MaxPool2d | 0 | 0 | 0 | No \n",
"features.3 | Conv2d | 307 K | 307 K | 1.2 M | No \n",
"features.4 | ReLU | 0 | 0 | 0 | No \n",
"features.5 | MaxPool2d | 0 | 0 | 0 | No \n",
"features.6 | Conv2d | 663 K | 663 K | 2.7 M | No \n",
"features.7 | ReLU | 0 | 0 | 0 | No \n",
"features.8 | Conv2d | 884 K | 884 K | 3.5 M | No \n",
"features.9 | ReLU | 0 | 0 | 0 | No \n",
"features.10 | Conv2d | 590 K | 590 K | 2.4 M | No \n",
"features.11 | ReLU | 0 | 0 | 0 | No \n",
"features.12 | MaxPool2d | 0 | 0 | 0 | No \n",
"avgpool | AdaptiveAvgPool2d | 0 | 0 | 0 | No \n",
"classifier | Sequential | 58.6 M | 58.6 M | 234 M | No \n",
"classifier.0 | Dropout | 0 | 0 | 0 | No \n",
"classifier.1 | Linear | 37.8 M | 37.8 M | 151 M | No \n",
"classifier.2 | ReLU | 0 | 0 | 0 | No \n",
"classifier.3 | Dropout | 0 | 0 | 0 | No \n",
"classifier.4 | Linear | 16.8 M | 16.8 M | 67.1 M | No \n",
"classifier.5 | ReLU | 0 | 0 | 0 | No \n",
"classifier.6 | Linear | 4.1 M | 4.1 M | 16.4 M | No \n",
"\n"
]
}
],
"source": [
"from torchvision.models.alexnet import AlexNet\n",
"from torcheval.tools import get_module_summary\n",
"\n",
"model = AlexNet()\n",
"ms = get_module_summary(model)\n",
"print(ms)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"In the table above we see the layers of AlexNet printed, alongside the parameter count and size in bytes at each layer."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "8gx5fSgVf1C1"
},
"source": [
"Passing in an example input tensor will retrieve additional metrics such as FLOPS (number of multiply-add operations), activation sizes, and more"
]
},
{
"cell_type": "code",
"execution_count": 15,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "4UdzYuflf0Iz",
"outputId": "da16fd14-f8c8-40dd-eb27-105bef69c21c"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Name | Type | # Parameters | # Trainable Parameters | Size (bytes) | Contains Uninitialized Parameters? | Forward FLOPs | Backward FLOPs | In size | Out size | Forward Elapsed Times (ms)\n",
"--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n",
" | AlexNet | 61.1 M | 61.1 M | 244 M | No | 714 M | 1.4 G | [1, 3, 224, 224] | [1, 1000] | 0.0004032743 \n",
"features | Sequential | 2.5 M | 2.5 M | 9.9 M | No | 655 M | 1.2 G | [1, 3, 224, 224] | [1, 256, 6, 6] | 0.0003372050 \n",
"features.0 | Conv2d | 23.3 K | 23.3 K | 93.2 K | No | 70.3 M | 70.3 M | [1, 3, 224, 224] | [1, 64, 55, 55] | 0.0002422328 \n",
"features.1 | ReLU | 0 | 0 | 0 | No | 0 | 0 | [1, 64, 55, 55] | [1, 64, 55, 55] | 0.0000035029 \n",
"features.2 | MaxPool2d | 0 | 0 | 0 | No | 0 | 0 | [1, 64, 55, 55] | [1, 64, 27, 27] | 0.0000089833 \n",
"features.3 | Conv2d | 307 K | 307 K | 1.2 M | No | 223 M | 447 M | [1, 64, 27, 27] | [1, 192, 27, 27] | 0.0000217149 \n",
"features.4 | ReLU | 0 | 0 | 0 | No | 0 | 0 | [1, 192, 27, 27] | [1, 192, 27, 27] | 0.0000007634 \n",
"features.5 | MaxPool2d | 0 | 0 | 0 | No | 0 | 0 | [1, 192, 27, 27] | [1, 192, 13, 13] | 0.0000033676 \n",
"features.6 | Conv2d | 663 K | 663 K | 2.7 M | No | 112 M | 224 M | [1, 192, 13, 13] | [1, 384, 13, 13] | 0.0000095535 \n",
"features.7 | ReLU | 0 | 0 | 0 | No | 0 | 0 | [1, 384, 13, 13] | [1, 384, 13, 13] | 0.0000006440 \n",
"features.8 | Conv2d | 884 K | 884 K | 3.5 M | No | 149 M | 299 M | [1, 384, 13, 13] | [1, 256, 13, 13] | 0.0000138996 \n",
"features.9 | ReLU | 0 | 0 | 0 | No | 0 | 0 | [1, 256, 13, 13] | [1, 256, 13, 13] | 0.0000005126 \n",
"features.10 | Conv2d | 590 K | 590 K | 2.4 M | No | 99.7 M | 199 M | [1, 256, 13, 13] | [1, 256, 13, 13] | 0.0000107368 \n",
"features.11 | ReLU | 0 | 0 | 0 | No | 0 | 0 | [1, 256, 13, 13] | [1, 256, 13, 13] | 0.0000007307 \n",
"features.12 | MaxPool2d | 0 | 0 | 0 | No | 0 | 0 | [1, 256, 13, 13] | [1, 256, 6, 6] | 0.0000012592 \n",
"avgpool | AdaptiveAvgPool2d | 0 | 0 | 0 | No | 0 | 0 | [1, 256, 6, 6] | [1, 256, 6, 6] | 0.0000035178 \n",
"classifier | Sequential | 58.6 M | 58.6 M | 234 M | No | 58.6 M | 117 M | [1, 9216] | [1, 1000] | 0.0000574685 \n",
"classifier.0 | Dropout | 0 | 0 | 0 | No | 0 | 0 | [1, 9216] | [1, 9216] | 0.0000154559 \n",
"classifier.1 | Linear | 37.8 M | 37.8 M | 151 M | No | 37.7 M | 75.5 M | [1, 9216] | [1, 4096] | 0.0000240998 \n",
"classifier.2 | ReLU | 0 | 0 | 0 | No | 0 | 0 | [1, 4096] | [1, 4096] | 0.0000004282 \n",
"classifier.3 | Dropout | 0 | 0 | 0 | No | 0 | 0 | [1, 4096] | [1, 4096] | 0.0000006650 \n",
"classifier.4 | Linear | 16.8 M | 16.8 M | 67.1 M | No | 16.8 M | 33.6 M | [1, 4096] | [1, 4096] | 0.0000103166 \n",
"classifier.5 | ReLU | 0 | 0 | 0 | No | 0 | 0 | [1, 4096] | [1, 4096] | 0.0000010071 \n",
"classifier.6 | Linear | 4.1 M | 4.1 M | 16.4 M | No | 4.1 M | 8.2 M | [1, 4096] | [1, 1000] | 0.0000022619 \n",
"Remark for FLOPs calculation: (1) Only operators `mm`|`matmul`|`addmm`|`bmm`|`convolution`|`_convolution`|`convolution_backward` are included. To add more operators supported in FLOPs calculation, please contribute to torcheval/tools/flops.py. (2) The calculation related to additional loss function is not included. For forward, we calculated FLOPs based on `loss = model(input_data).mean()`. For backward, we calculated FLOPs based on `loss.backward()`. \n",
"\n"
]
}
],
"source": [
"model = AlexNet()\n",
"inp = torch.randn(1, 3, 224, 224)\n",
"ms = get_module_summary(model, module_args=(inp,))\n",
"print(ms)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"By passing this tensor through the model, the module summary is additionally able to provide FLOPS, activations sizes, and time elapsed at each layer. FLOPS are computed by utilizing [TorchDispatchMode](https://dev-discuss.pytorch.org/t/torchdispatchmode-for-debugging-testing-and-more/717) to interpose at the [__torch_dispatch__](https://dev-discuss.pytorch.org/t/what-and-why-is-torch-dispatch/557) level, where all operators on the input tensor are caught and FLOPS at each operator are computed and added together."
]
},
{
"cell_type": "markdown",
"metadata": {
@@ -775,10 +622,11 @@
"colab": {
"provenance": []
},
"fileHeader": "",
"kernelspec": {
"display_name": "Python 3.10.6 ('evaldocs')",
"display_name": "Python 3",
"language": "python",
"name": "python3"
"name": "bento_kernel_default"
},
"language_info": {
"codemirror_mode": {
@@ -799,5 +647,5 @@
}
},
"nbformat": 4,
"nbformat_minor": 0
"nbformat_minor": 2
}
1 change: 0 additions & 1 deletion requirements.txt
@@ -1,2 +1 @@
torchtnt>=0.0.5
typing_extensions