Commit 1536e32

changed enums "*" to "-" within documentation

Kleebaue committed Apr 16, 2024
1 parent d939c95 commit 1536e32
Showing 5 changed files with 52 additions and 52 deletions.
32 changes: 16 additions & 16 deletions docs/source/creators/creators_description_classes.rst
==============================
Model classes and requirements
==============================

Supported model classes
=======================
The plugin supports the following model classes:
- Segmentation Model (a.k.a. :code:`Segmentor`)
- Detection Model (a.k.a. :code:`Detector`)
- Regression Model (a.k.a. :code:`Regressor`)

Once the processing of the orthophoto is finished, a report with model-specific information is presented.

Common rules for models and processing:
- The model needs to be in ONNX format, which contains both the network architecture and the weights.
- All model classes process the data in chunks called 'tiles', that is, small parts of the entire orthophoto; tile size and overlap are configurable.
- Every model should have one input of size :code:`[BATCH_SIZE, CHANNELS, SIZE_PX, SIZE_PX]`. :code:`BATCH_SIZE` can be 1 or dynamic (see the sketch after this list).
- The size of processed tiles (in pixels) is defined by the model, but needs to be equal in the x and y axes, so that tiles are square.
- If a processed tile needs to be padded (e.g. on orthophoto borders), it will be padded with 0 values.
- Input image data: only uint8_t values are supported for each pixel channel.
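
One way to verify that a model satisfies these input rules is to inspect it with :code:`onnxruntime` before loading it into the plugin. This is a minimal sketch, not part of the plugin; the file name :code:`model.onnx` is a placeholder:

.. code-block::
import onnxruntime as ort

# open the model on CPU just to read its input signature
session = ort.InferenceSession('model.onnx', providers=['CPUExecutionProvider'])
model_input = session.get_inputs()[0]
batch_size, channels, size_y, size_x = model_input.shape  # e.g. [1, 3, 512, 512]
# batch_size may be a string (e.g. 'batch_size') for models exported with a dynamic batch
assert size_x == size_y, 'tiles must be square'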


==================
Segmentation Model
==================

The segmentation model output is also an image, with the same dimensions as the input.
Therefore, the shape of the model output is :code:`[BATCH_SIZE, NUM_CLASSES, SIZE_PX, SIZE_PX]`.

We support the following types of models:
- single output (one head) with the following output shapes:

  - :code:`[BATCH_SIZE, 1, SIZE_PX, SIZE_PX]` - one class with sigmoid activation function (binary classification)
  - :code:`[BATCH_SIZE, NUM_CLASSES, SIZE_PX, SIZE_PX]` - multiple classes with softmax activation function (multi-class classification) - outputs sum to 1.0

- multiple outputs (multiple heads), where each output head has the same shape options as a single output (see the sketch after this list).
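
As a sketch of what these shapes mean in practice, the snippet below turns the raw output of one head into a class mask. It assumes the output is already a numpy array; it is not the plugin's internal code:

.. code-block::
import numpy as np

def output_to_mask(out: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Convert one output head of shape [BATCH, N, SIZE_PX, SIZE_PX] to a class mask."""
    if out.shape[1] == 1:
        return (out[:, 0] >= threshold).astype(np.uint8)  # sigmoid: threshold the single channel
    return np.argmax(out, axis=1).astype(np.uint8)        # softmax: pick the strongest class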

The metaparameter :code:`class_names` saved in the model file should look as in the following examples:
- for single output with binary classification (sigmoid): :code:`[{0: "background", 1: "class_name"}]`
- for single output with multi-class classification (softmax): :code:`[{0: "class0", 1: "class1", 2: "class2"}]` or :code:`{0: "class0", 1: "class1", 2: "class2"}`
- for multiple outputs (multiple heads): :code:`[{0: "class0", 1: "class1", 2: "class2"}, {0: "background", 1: "class_name"}]`

The output report contains information about the percentage coverage of each class.
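
A minimal sketch of writing the :code:`class_names` metaparameter into an ONNX file with the :code:`onnx` package follows; file names are placeholders, and the exact metadata conventions are described on the page about adding metadata to the model:

.. code-block::
import json
import onnx

model = onnx.load('model.onnx')
meta = model.metadata_props.add()  # metadata_props is a list of key/value string pairs
meta.key = 'class_names'
meta.value = json.dumps({0: 'background', 1: 'class_name'})  # JSON turns the keys into strings
onnx.save(model, 'model_with_metadata.onnx')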

14 changes: 7 additions & 7 deletions docs/source/creators/creators_example_onnx_model.rst
=======
Pytorch
=======

Steps based on `EXPORTING A MODEL FROM PYTORCH TO ONNX AND RUNNING IT USING ONNX RUNTIME <https://pytorch.org/tutorials/advanced/super_resolution_with_onnxruntime.html>`_.

Step 0. Requirements:

- Pytorch

- ONNX

Step 1. Load the PyTorch model:
.. code-block::
from torch import nn
import torch

model = ...  # construct your model architecture here; the definition is model-specific
model.load_state_dict(torch.load(YOUR_MODEL_CHECKPOINT_PATH, map_location='cpu')['state_dict'])
model.eval()
Step 2. Create a data sample with :code:`batch_size=1` and call the forward step of your model:
.. code-block::
x = torch.rand(1, INP_CHANNEL, INP_HEIGHT, INP_WIDTH)  # e.g. torch.rand([1, 3, 256, 256])
_ = model(x)
Step 3a. Call export function with static batch_size=1:

.. code-block::
torch.onnx.export(model,
                  x,
                  CONVERTED_MODEL_PATH,  # destination .onnx file path
                  export_params=True,
                  input_names=['input'],
output_names=['output'],
do_constant_folding=False)
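
To sanity-check the exported file, a short sketch (assuming the export above wrote :code:`CONVERTED_MODEL_PATH`):

.. code-block::
import onnx

onnx.checker.check_model(onnx.load(CONVERTED_MODEL_PATH))  # raises if the model is malformed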
Step 3b. Call export function with dynamic batch_size:

.. code-block::
torch.onnx.export(model,
                  x,
                  CONVERTED_MODEL_PATH,
                  export_params=True,
                  input_names=['input'],
                  output_names=['output'],
                  do_constant_folding=False,
                  dynamic_axes={'input': {0: 'batch_size'},    # mark the batch dimension as dynamic
                                'output': {0: 'batch_size'}})

================
Tensorflow/Keras
================

Steps based on the `tensorflow-onnx <https://github.com/onnx/tensorflow-onnx>`_ repository. The instructions are valid for the :code:`saved model` format; for other formats, follow the :code:`tensorflow-onnx` instructions.

Requirements:

- tensorflow

- ONNX

- tf2onnx

Then simply call the converter script:

.. code-block::
python -m tf2onnx.convert --saved-model SAVED_MODEL_PATH --output model.onnx
4 changes: 2 additions & 2 deletions docs/source/creators/creators_export_training_data_tool.rst
=========================
Training Data Export Tool
=========================

Apart from the model inference functionality, the plugin contains a :code:`Training Data Export Tool`.
This tool allows preparing the following data for the model training process:
- specified part of the orthophoto - divided into tiles (each tile saved as a separate file)
- specified part of the annotation layer, which can be used as ground truth for model training. Each mask tile corresponds to one orthophoto tile.

Exported data follows the same rules as inference, that is, the user needs to specify the layer and which part of it should be processed.
Tile size and overlap between consecutive tiles are also configurable, as in the sketch below.
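
As an illustration of how tile size and overlap interact, this is a rough sketch (not the plugin's actual implementation) of computing tile origins along one axis, assuming overlap is given as a percentage of the tile size:

.. code-block::
def tile_origins(extent_px: int, tile_size_px: int, overlap_percent: int) -> list:
    """Return pixel origins of consecutive tiles along one axis."""
    stride = tile_size_px - int(tile_size_px * overlap_percent / 100)
    origins = list(range(0, max(extent_px - tile_size_px, 0) + 1, stride))
    if extent_px > tile_size_px and origins[-1] != extent_px - tile_size_px:
        origins.append(extent_px - tile_size_px)  # keep the last tile flush with the border
    return origins

print(tile_origins(1000, 256, 10))  # [0, 231, 462, 693, 744]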
42 changes: 21 additions & 21 deletions docs/source/main/main_installation.rst
============
Installation
============
Plugin installation
===================

(option 1) Install using QGIS plugin browser

.. note::

The plugin has not yet been published in the QGIS plugins collection.

- Run QGIS

- Open: Plugins->Manage and Install Plugins

- Select: Not installed

- Type the plugin name in the "Search..." field

- Select the plugin from the list and click the Install button


(option 2) Install using downloaded ZIP

- Go to the plugin repository: `https://github.com/PUTvision/qgis-plugin-deepness <https://github.com/PUTvision/qgis-plugin-deepness>`_

- From the right panel, select Releases and download the latest version

- Run QGIS

- Open: Plugins->Manage and Install Plugins

- Select: Install from ZIP

- Select the ZIP file using the system prompt

- Click the Install Plugin button

============
Requirements
============

The plugin should install all required dependencies automatically during the first run.

The plugin requirements and versions are listed in the `requirements.txt <https://github.com/PUTvision/qgis-plugin-deepness/blob/master/src/deepness/python_requirements/requirements.txt>`_ file.

Ubuntu

- (option 1) Install requirements using the system Python interpreter:

.. code-block::
python3 -m pip install opencv-python-headless onnxruntime-gpu
- (option 2) Run QGIS and the Python Console, then call the command:

.. code-block::
import pip; pip.main(['install', 'opencv-python-headless', 'onnxruntime-gpu'])
Windows

- Go to the QGIS installation path (for example :code:`C:\Program Files\QGIS 3.26.3\`)
- Run :code:`OSGeo4W.bat` and type the installation command:

.. code-block::
python3 -m pip install opencv-python-headless onnxruntime-gpu
macOS - SOON
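
After installing the dependencies with any of the above methods, a quick sanity check can be run from the QGIS Python Console. This is a sketch; the reported provider list depends on whether :code:`onnxruntime-gpu` and CUDA are available:

.. code-block::
import cv2
import onnxruntime

print(cv2.__version__, onnxruntime.__version__)
print(onnxruntime.get_available_providers())  # includes 'CUDAExecutionProvider' when GPU inference is possible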
12 changes: 6 additions & 6 deletions docs/source/main/model_zoo/MODEL_ZOO.md
The [Model ZOO](https://chmura.put.poznan.pl/s/2pJk4izRurzQwu3) is a collection of pre-trained models.
| [Agriculture segmentation RGB+NIR](https://chmura.put.poznan.pl/s/wf5Ml1ZDyiVdNiy) | 256 | 30 | Trained on the [Agriculture Vision 2021 dataset](https://www.agriculture-vision.com/agriculture-vision-2021/dataset-2021). 4 channels input (RGB + NIR). 9 output classes within agricultural field (weed_cluster, waterway, ...). Uses X-UNet. | [Image](https://chmura.put.poznan.pl/s/35A5ISUxLxcK7kL) |
| [Fire risk assessment](https://chmura.put.poznan.pl/s/NxKLdfdr9s9jsVA) | 384 | 100 | Trained on the FireRisk dataset (RGB data). Classifies risk of fires (very_high, high, low, ...). Uses ConvNeXt XXL. Val F1-score 65.5. | [Image](https://chmura.put.poznan.pl/s/Ijn3VgG76NvYtDY) |
| [Roads Segmentation](https://chmura.put.poznan.pl/s/y6S3CmodPy1fYYz) | 512 | 21 | The model segments the Google Earth satellite images into 'road' and 'not-road' classes. Model works best on wide car roads, crossroads and roundabouts. | [Image](https://chmura.put.poznan.pl/s/rln6mpbjpsXWpKg) |
| [Solar PV Segmentation](https://owncloud.fraunhofer.de/index.php/s/Ph9TC6BTxPi5oZZ) | 512 | 20 | Model trained by M. Kleebauer et al. in "[Multi-resolution segmentation of solar photovoltaic systems using deep learning](https://www.mdpi.com/2596164)" on a diverse range of image data, spanning UAV, aerial, and satellite imagery at both native and aggregated resolutions of 0.1 m, 0.2 m, 0.3 m, 0.8 m, 1.6 m, and 3.2 m. | [Image](https://github.com/Kleebaue/multi-resolution-pv-system-segmentation/blob/main/figures/prediction_multi_res.png) |

## Regression models


PRs with models are welcome!

- Please follow the [general model information](https://qgis-plugin-deepness.readthedocs.io/en/latest/creators/creators_description_classes.html).

- Use the `MODEL_ZOO` tag in your PRs to make them easier to find.

- If you need, you can check [how to export the model to ONNX](https://qgis-plugin-deepness.readthedocs.io/en/latest/creators/creators_example_onnx_model.html).

- And do not forget to [add metadata to the ONNX model](https://qgis-plugin-deepness.readthedocs.io/en/latest/creators/creators_add_metadata_to_model.html).

- You can host your model yourself or ask us to do it.
