
Rev 0.6.4 #185

Merged · 20 commits · Oct 4, 2024

Commits
1536e32
changed enums "*" to "-" within documentation
Kleebaue Apr 16, 2024
5141e36
Merge pull request #169 from Kleebaue/editing_documentation
przemyslaw-aszkowski Apr 22, 2024
ee2978f
added tested QGIS version for docu and made little changes for displa…
Kleebaue Apr 24, 2024
6eb0c13
Merge branch 'PUTvision:editing_documentation' into editing_documenta…
Kleebaue Apr 24, 2024
e077627
Merge pull request #168 from Kleebaue/editing_documentation
przemyslaw-aszkowski Apr 24, 2024
f3615b5
Update issue templates
przemyslaw-aszkowski May 30, 2024
25fe1e8
Update bug_report.md
przemyslaw-aszkowski May 30, 2024
71e7c8c
Merge pull request #172 from PUTvision/issue_template_v1
przemyslaw-aszkowski Jun 2, 2024
b4af1b0
#171 Invalid data type processing bug fix
przemyslaw-aszkowski Jun 2, 2024
3aa8b0b
Merge pull request #173 from PUTvision/units_bug_fix
przemyslaw-aszkowski Jun 2, 2024
6372e06
Add noise insulating wall segmentation model to zoo
ziyadsheeba Jun 6, 2024
72fc0f5
Merge pull request #174 from ziyadsheeba/devel
przemyslaw-aszkowski Jun 6, 2024
6204634
Update car_detection__prepare_and_train.ipynb
keepcool13 Aug 22, 2024
be6d8d6
Merge pull request #182 from keepcool13/devel
przemyslaw-aszkowski Aug 23, 2024
46ffb4e
Limit numpy library to <2.0.0
bartoszptak Oct 3, 2024
992fcf1
Add ULTRALYTICS OBB models
bartoszptak Oct 3, 2024
689c1ab
Add OBB to docs, tests, fix typos
bartoszptak Oct 3, 2024
2dcca4e
Bump deepness version to 0.6.4
bartoszptak Oct 3, 2024
5102f28
Add more representative rotation test
bartoszptak Oct 3, 2024
3d68a83
Merge pull request #184 from PUTvision/yolo11_obb
przemyslaw-aszkowski Oct 3, 2024
39 changes: 39 additions & 0 deletions .github/ISSUE_TEMPLATE/bug_report.md
Original file line number Diff line number Diff line change
@@ -0,0 +1,39 @@
---
name: Bug report
about: Create a report to help us improve
title: ''
labels: ''
assignees: ''

---

**Describe the bug**
A clear and concise description of what the bug is.

**To Reproduce**
Steps to reproduce the behavior:
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error

**Expected behavior**
A clear and concise description of what you expected to happen.

**Screenshots (including Deepness options selected)**
If applicable, add screenshots to help explain your problem.
Please also include screenshot of Deepness UI settings.

**Desktop (please complete the following information):**
- OS: [e.g. Windows 10, Ubuntu 22.04]
- QGIS version
- Deepness version

**Orthophoto/input file-related issues**
If the issue may be related to your specific file, please consider sharing it with us (e.g. via a file-hosting platform). If you are sure the problem is not related to your file, please describe in the "To Reproduce" section which example orthophoto to use.

**Model-related issues**
If the issue may be related to your specific ONNX model file, please consider sharing it with us (e.g. via a file-hosting platform). If you are sure the problem is not related to your model, please describe in the "To Reproduce" section which example model to use.

**Additional context**
Add any other context about the problem here.
1 change: 1 addition & 0 deletions docs/source/creators/creators_add_metadata_to_model.rst
Expand Up @@ -53,6 +53,7 @@ Available detector types:
- :code:`YOLO_v9`
- :code:`YOLO_Ultralytics`
- :code:`YOLO_Ultralytics_segmentation`
- :code:`YOLO_Ultralytics_obb`

=======
Example
32 changes: 16 additions & 16 deletions docs/source/creators/creators_description_classes.rst
Expand Up @@ -5,19 +5,19 @@ Model classes and requirements
Supported model classes
=======================
The plugin supports the following model classes:
* Segmentation Model (aka. :code:`Segmentor`)
* Detection Model (aka. :code:`Detector`)
* Regression Model (aka. :code:`Regressor`)
- Segmentation Model (aka. :code:`Segmentor`)
- Detection Model (aka. :code:`Detector`)
- Regression Model (aka. :code:`Regressor`)

Once processing of the orthophoto is finished, a report with model-specific information is presented.

Common rules for models and processing:
* Model needs to be in ONNX format, which contains both the network architecture and weights.
* All model classes process the data in chunks called 'tiles', that is a small part of the entire ortophoto - tiles size and overlap is configurable.
* Every model should have one input of size :code:`[BATCH_SIZE, CHANNELS, SIZE_PX, SIZE_PX]`. :code:`BATCH_SIZE` can be 1 or dynamic.
* Size of processed tiles (in pixels) is model defined, but needs to be equal in x and y axes, so that the tiles can be square.
* If the processed tile needs to be padded (e.g. on otophoto borders) it will be padded with 0 values.
* Input image data - only uint8_t value for each pixel channel is supported
- Model needs to be in ONNX format, which contains both the network architecture and weights.
- All model classes process the data in chunks called 'tiles', each being a small part of the entire orthophoto - tile size and overlap are configurable.
- Every model should have one input of size :code:`[BATCH_SIZE, CHANNELS, SIZE_PX, SIZE_PX]`. :code:`BATCH_SIZE` can be 1 or dynamic.
- Size of processed tiles (in pixels) is model-defined, but needs to be equal in the x and y axes, so that the tiles are square.
- If the processed tile needs to be padded (e.g. on orthophoto borders), it will be padded with 0 values.
- Input image data - only uint8_t values for each pixel channel are supported
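The tiling rule above can be illustrated with a small sketch. This helper is hypothetical (not part of the plugin's API): it computes the top-left offsets of square tiles covering a strip of pixels for a given tile size and overlap, shifting the last tile back so it ends at the border; in the plugin, border tiles extending past the image are zero-padded instead.

```python
def tile_offsets(total_px: int, tile_px: int, overlap_px: int) -> list[int]:
    """Offsets (along one axis) of tiles of `tile_px` pixels with
    `overlap_px` pixels of overlap, covering `total_px` pixels."""
    stride = tile_px - overlap_px
    assert stride > 0, "overlap must be smaller than the tile size"
    offsets = list(range(0, max(total_px - tile_px, 0) + 1, stride))
    if offsets[-1] + tile_px < total_px:
        # Cover the remainder at the border with one extra, shifted tile.
        offsets.append(total_px - tile_px)
    return offsets

# 1000 px strip, 256 px tiles, 32 px overlap
print(tile_offsets(1000, 256, 32))
```

A strip smaller than one tile yields a single offset of 0 - that tile would then be zero-padded up to the model's input size.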


==================
Expand All @@ -31,15 +31,15 @@ The segmentation model output is also an image, with the same dimensions as the input
Therefore, the shape of model output is :code:`[BATCH_SIZE, NUM_CLASSES, SIZE_PX, SIZE_PX]`.

We support the following types of models:
* single output (one head) with the following output shapes:
* :code:`[BATCH_SIZE, 1, SIZE_PX, SIZE_PX]` - one class with sigmoid activation function (binary classification)
* :code:`[BATCH_SIZE, NUM_CLASSES, SIZE_PX, SIZE_PX]` - multiple classes with softmax activation function (multi-class classification) - outputs sum to 1.0
* multiple outputs (multiple heads) with each output head composed of the same shapes as single output.
- single output (one head) with the following output shapes:
- :code:`[BATCH_SIZE, 1, SIZE_PX, SIZE_PX]` - one class with sigmoid activation function (binary classification)
- :code:`[BATCH_SIZE, NUM_CLASSES, SIZE_PX, SIZE_PX]` - multiple classes with softmax activation function (multi-class classification) - outputs sum to 1.0
- multiple outputs (multiple heads) with each output head composed of the same shapes as single output.

The :code:`class_names` metaparameter saved in the model file should follow these examples:
* for single output with binary classification (sigmoid): :code:`[{0: "background", 1: "class_name"}]`
* for single output with multi-class classification (softmax): :code:`[{0: "class0", 1: "class1", 2: "class2"}]` or :code:`{0: "class0", 1: "class1", 2: "class2"}`
* for multiple outputs (multiple heads): :code:`[{0: "class0", 1: "class1", 2: "class2"}, {0: "background", 1: "class_name"}]`
- for single output with binary classification (sigmoid): :code:`[{0: "background", 1: "class_name"}]`
- for single output with multi-class classification (softmax): :code:`[{0: "class0", 1: "class1", 2: "class2"}]` or :code:`{0: "class0", 1: "class1", 2: "class2"}`
- for multiple outputs (multiple heads): :code:`[{0: "class0", 1: "class1", 2: "class2"}, {0: "background", 1: "class_name"}]`
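The :code:`class_names` value is stored in the ONNX file's metadata as a JSON string. A minimal sketch of building and serializing the multi-head variant follows; the onnx-level write is shown only as a comment, since it needs the :code:`onnx` package and the exact key handling is an assumption here.

```python
import json

# Build the `class_names` metaparameter for a model with two output heads.
class_names = [
    {0: "class0", 1: "class1", 2: "class2"},  # head 1: softmax classes
    {0: "background", 1: "class_name"},       # head 2: sigmoid binary
]

# Note: JSON object keys are always strings, so the integer class ids
# become "0", "1", ... after serialization.
class_names_json = json.dumps(class_names)
print(class_names_json)

# Writing it into the model would look roughly like (assumption):
#   model = onnx.load('model.onnx')
#   meta = model.metadata_props.add()
#   meta.key, meta.value = 'class_names', class_names_json
#   onnx.save(model, 'model.onnx')
```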

The output report contains information about the percentage coverage of each class.

14 changes: 7 additions & 7 deletions docs/source/creators/creators_example_onnx_model.rst
Expand Up @@ -8,13 +8,13 @@ Pytorch

Steps based on `EXPORTING A MODEL FROM PYTORCH TO ONNX AND RUNNING IT USING ONNX RUNTIME <https://pytorch.org/tutorials/advanced/super_resolution_with_onnxruntime.html>`_.

* Step 0. Requirements:
Step 0. Requirements:

- Pytorch

- ONNX

* Step 1. Load PyTorch model
Step 1. Load PyTorch model
.. code-block::

from torch import nn
Expand All @@ -25,13 +25,13 @@ Steps based on `EXPORTING A MODEL FROM PYTORCH TO ONNX AND RUNNING IT USING ONNX
model.load_state_dict(torch.load(YOUR_MODEL_CHECKPOINT_PATH, map_location='cpu')['state_dict'])
model.eval()

* Step 2. Create data sample with :code:`batch_size=1` and call forward step of your model:
Step 2. Create data sample with :code:`batch_size=1` and call forward step of your model:
.. code-block::

x = torch.rand(1, INP_CHANNEL, INP_HEIGHT, INP_WIDTH) # eg. torch.rand([1, 3, 256, 256])
_ = model(x)

* Step 3a. Call export function with static batch_size=1:
Step 3a. Call export function with static batch_size=1:

.. code-block::

Expand All @@ -44,7 +44,7 @@ Steps based on `EXPORTING A MODEL FROM PYTORCH TO ONNX AND RUNNING IT USING ONNX
output_names=['output'],
do_constant_folding=False)

* Step 3b. Call export function with dynamic batch_size:
Step 3b. Call export function with dynamic batch_size:

.. code-block::

Expand All @@ -64,15 +64,15 @@ Tensorflow/Keras

Steps based on the `tensorflow-onnx <https://github.com/onnx/tensorflow-onnx>`_ repository. The instructions are valid for the :code:`saved model` format. For other types, follow the :code:`tensorflow-onnx` instructions.

* Requirements:
Requirements:

- tensorflow

- ONNX

- tf2onnx

* And simply call converter script:
And simply call converter script:

.. code-block::

4 changes: 2 additions & 2 deletions docs/source/creators/creators_export_training_data_tool.rst
Expand Up @@ -3,8 +3,8 @@ Training Data Export Tool

Apart from the model inference functionality, the plugin provides the :code:`Training Data Export Tool`.
This tool allows preparing the following data for the model training process:
* specified part of ortophoto - divided into tiles (each tile saved as a separate file)
* specified part of the annotation layer, which can be used as ground-truth for model training. Each mask tile corresponds to one ortophoto tile.
- specified part of the orthophoto - divided into tiles (each tile saved as a separate file)
- specified part of the annotation layer, which can be used as ground truth for model training. Each mask tile corresponds to one orthophoto tile.

Exported data follows the same rules as inference, that is, the user needs to specify a layer and which part of it should be processed.
Tile size and overlap between consecutive tiles are also configurable.
42 changes: 21 additions & 21 deletions docs/source/main/main_installation.rst
Expand Up @@ -6,38 +6,38 @@ Installation
Plugin installation
===================

* (option 1) Install using QGIS plugin browser
(option 1) Install using QGIS plugin browser

.. note::

The repository has not yet been pushed out to the QGIS plugins collection.

* Run QGIS
- Run QGIS

* Open: Plugins->Manage and Install Plugins
- Open: Plugins->Manage and Install Plugins

* Select: Not installed
- Select: Not installed

* Type in the "Search..." field plugin name
- Type in the "Search..." field plugin name

* Select from list and click the Install button
- Select from list and click the Install button


* (option 2) Install using downloaded ZIP
(option 2) Install using downloaded ZIP

* Go to the plugin repository: `https://github.com/PUTvision/qgis-plugin-deepness <https://github.com/PUTvision/qgis-plugin-deepness>`_
- Go to the plugin repository: `https://github.com/PUTvision/qgis-plugin-deepness <https://github.com/PUTvision/qgis-plugin-deepness>`_

* From the right panel, select Release and download the latest version
- From the right panel, select Release and download the latest version

* Run QGIS
- Run QGIS

* Open: Plugins->Manage and Install Plugins
- Open: Plugins->Manage and Install Plugins

* Select: Install from ZIP
- Select: Install from ZIP

* Select the ZIP file using system prompt
- Select the ZIP file using system prompt

* Click the Install Plugin button
- Click the Install Plugin button

============
Requirements
Expand All @@ -49,29 +49,29 @@ The plugin should install all required dependencies automatically during the fir

The plugin requirements and versions are listed in the `requirements.txt <https://github.com/PUTvision/qgis-plugin-deepness/blob/master/src/deepness/python_requirements/requirements.txt>`_ file.

* Ubuntu
Ubuntu

* (option 1) Install requirements using system Python interpreter:
- (option 1) Install requirements using system Python interpreter:

.. code-block::

python3 -m pip install opencv-python-headless onnxruntime-gpu

* (option 2) Run QGIS and Python Console. Then call command:
- (option 2) Run QGIS and Python Console. Then call command:

.. code-block::

import pip; pip.main(['install', 'opencv-python-headless', 'onnxruntime-gpu'])


* Windows
Windows

* Go to QGIS installation path (for example :code:`C:\Program Files\QGIS 3.26.3\`)
- Go to QGIS installation path (for example :code:`C:\\Program Files\\QGIS 3.26.3\\`)

* Run :code:`OSGeo4W.bat` and type installation command:
- Run :code:`OSGeo4W.bat` and type installation command:

.. code-block::

python3 -m pip install opencv-python-headless onnxruntime-gpu

* MacOS - SOON
MacOS - SOON
3 changes: 2 additions & 1 deletion docs/source/main/main_supported_versions.rst
Expand Up @@ -29,4 +29,5 @@ The plug-in was tested in the environment:
- Ubuntu 22.04.1 LTS and QGIS 3.22.11
- Ubuntu 22.04.1 LTS and QGIS 3.22.4
- Ubuntu 20.04 LTS and QGIS 3.28
- Windows 10.0.19043 and QGIS 3.26.3
- Windows 10.0.19043 and QGIS 3.26.3
- Windows 10.0.19045 and QGIS 3.34.3
13 changes: 7 additions & 6 deletions docs/source/main/model_zoo/MODEL_ZOO.md
Expand Up @@ -17,7 +17,8 @@ The [Model ZOO](https://chmura.put.poznan.pl/s/2pJk4izRurzQwu3) is a collection
| [Agriculture segmentation RGB+NIR](https://chmura.put.poznan.pl/s/wf5Ml1ZDyiVdNiy) | 256 | 30 | Trained on the [Agriculture Vision 2021 dataset](https://www.agriculture-vision.com/agriculture-vision-2021/dataset-2021). 4 channels input (RGB + NIR). 9 output classes within agricultural field (weed_cluster, waterway, ...). Uses X-UNet. | [Image](https://chmura.put.poznan.pl/s/35A5ISUxLxcK7kL) |
| [Fire risk assessment](https://chmura.put.poznan.pl/s/NxKLdfdr9s9jsVA) | 384 | 100 | Trained on the FireRisk dataset (RGB data). Classifies risk of fires (very_high, high, low, ...). Uses ConvNeXt XXL. Val F1-score 65.5. | [Image](https://chmura.put.poznan.pl/s/Ijn3VgG76NvYtDY) |
| [Roads Segmentation](https://chmura.put.poznan.pl/s/y6S3CmodPy1fYYz) | 512 | 21 | The model segments the Google Earth satellite images into 'road' and 'not-road' classes. Model works best on wide car roads, crossroads and roundabouts. | [Image](https://chmura.put.poznan.pl/s/rln6mpbjpsXWpKg) |
| [Solar PV Segmentation](https://owncloud.fraunhofer.de/index.php/s/Ph9TC6BTxPi5oZZ) | 512 | 3 | Model trained by M Kleebauer et al. in "[Multi-resolution segmentation of solar photovoltaic systems using deep learning](https://www.mdpi.com/2596164) on a diverse range of image data, spanning UAV, aerial, and satellite imagery at both native and aggregated resolutions of 0.1 m, 0.2 m, 0.3 m, 0.8 m, 1.6 m, and 3.2 m. | [Image](https://github.com/Kleebaue/multi-resolution-pv-system-segmentation/blob/main/figures/prediction_multi_res.png) |
| [Solar PV Segmentation](https://owncloud.fraunhofer.de/index.php/s/Ph9TC6BTxPi5oZZ) | 512 | 20 | Model trained by M. Kleebauer et al. in "[Multi-resolution segmentation of solar photovoltaic systems using deep learning](https://www.mdpi.com/2596164)" on a diverse range of image data, spanning UAV, aerial, and satellite imagery at both native and aggregated resolutions of 0.1 m, 0.2 m, 0.3 m, 0.8 m, 1.6 m, and 3.2 m. | [Image](https://github.com/Kleebaue/multi-resolution-pv-system-segmentation/blob/main/figures/prediction_multi_res.png) |
| [Noise Insulating Walls Segmentation](https://github.com/merantix-momentum/dzsf-open-source/releases/download/v0.0.1/model.zip) | 1000 | 20 | Model trained by [Merantix Momentum](https://merantix-momentum.github.io/dzsf-open-source/) on digital orthophotos covering the whole of Germany to detect noise-insulating walls near train railways. | [Image](https://github.com/merantix-momentum/dzsf-open-source/blob/main/assets/images/prediction_1.png) |

## Regression models

Expand Down Expand Up @@ -52,12 +53,12 @@ The [Model ZOO](https://chmura.put.poznan.pl/s/2pJk4izRurzQwu3) is a collection

PRs with models are welcome!

* Please follow the [general model information](https://qgis-plugin-deepness.readthedocs.io/en/latest/creators/creators_description_classes.html).
- Please follow the [general model information](https://qgis-plugin-deepness.readthedocs.io/en/latest/creators/creators_description_classes.html).

* Use `MODEL_ZOO` tag in your PRs to make it easier to find them.
- Use `MODEL_ZOO` tag in your PRs to make it easier to find them.

* If you need, you can check [how to export the model to ONNX](https://qgis-plugin-deepness.readthedocs.io/en/latest/creators/creators_example_onnx_model.html).
- If you need, you can check [how to export the model to ONNX](https://qgis-plugin-deepness.readthedocs.io/en/latest/creators/creators_example_onnx_model.html).

* And do not forget to [add metadata to the ONNX model](https://qgis-plugin-deepness.readthedocs.io/en/latest/creators/creators_add_metadata_to_model.html).
- And do not forget to [add metadata to the ONNX model](https://qgis-plugin-deepness.readthedocs.io/en/latest/creators/creators_add_metadata_to_model.html).

* You can host your model yourself or ask us to do it.
- You can host your model yourself or ask us to do it.
Expand Up @@ -23,6 +23,7 @@ class DetectorType(enum.Enum):
YOLO_v9 = 'YOLO_v9'
YOLO_ULTRALYTICS = 'YOLO_Ultralytics'
YOLO_ULTRALYTICS_SEGMENTATION = 'YOLO_Ultralytics_segmentation'
YOLO_ULTRALYTICS_OBB = 'YOLO_Ultralytics_obb'

def get_parameters(self):
if self == DetectorType.YOLO_v5_v7_DEFAULT:
Expand All @@ -36,7 +37,7 @@ def get_parameters(self):
has_inverted_output_shape=True,
skipped_objectness_probability=True,
)
elif self == DetectorType.YOLO_ULTRALYTICS or self == DetectorType.YOLO_ULTRALYTICS_SEGMENTATION:
elif self in (DetectorType.YOLO_ULTRALYTICS, DetectorType.YOLO_ULTRALYTICS_SEGMENTATION, DetectorType.YOLO_ULTRALYTICS_OBB):
return DetectorTypeParameters(
has_inverted_output_shape=True,
skipped_objectness_probability=True,
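The branch above dispatches the same parameters to all three Ultralytics variants. A self-contained sketch of that dispatch, with the enum values copied from the diff and the parameter record simplified to a plain dict (an assumption - the real code uses a :code:`DetectorTypeParameters` type):

```python
import enum

class DetectorType(enum.Enum):
    YOLO_ULTRALYTICS = 'YOLO_Ultralytics'
    YOLO_ULTRALYTICS_SEGMENTATION = 'YOLO_Ultralytics_segmentation'
    YOLO_ULTRALYTICS_OBB = 'YOLO_Ultralytics_obb'

    def get_parameters(self) -> dict:
        # All three Ultralytics variants share the same output conventions.
        if self in (DetectorType.YOLO_ULTRALYTICS,
                    DetectorType.YOLO_ULTRALYTICS_SEGMENTATION,
                    DetectorType.YOLO_ULTRALYTICS_OBB):
            return {'has_inverted_output_shape': True,
                    'skipped_objectness_probability': True}
        return {}

# The enum can be constructed from the model metadata string, e.g.:
print(DetectorType('YOLO_Ultralytics_obb').get_parameters())
```

Membership testing with :code:`in` keeps the condition readable as further variants are added.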
2 changes: 1 addition & 1 deletion src/deepness/metadata.txt
Expand Up @@ -6,7 +6,7 @@
name=Deepness: Deep Neural Remote Sensing
qgisMinimumVersion=3.22
description=Inference of deep neural network models (ONNX) for segmentation, detection and regression
version=0.6.3
version=0.6.4
author=PUT Vision
[email protected]
