Merge pull request #136 from PUTvision/feat/0.6.0
Feat/0.6.0
bartoszptak authored Feb 9, 2024
2 parents 4579af0 + 57ffdb6 commit 498f941
Showing 25 changed files with 465 additions and 259 deletions.
68 changes: 40 additions & 28 deletions docs/source/creators/creators_add_metadata_to_model.rst
@@ -8,34 +8,37 @@ The plugin allows you to load the meta parameters of the onnx model automatically
List of parameters parsed by plugin
===================================


+----------------------+-------+---------------------------------------+-------------------------------------------------------------+
| Parameter            | Type  | Example                               | Description                                                 |
+======================+=======+=======================================+=============================================================+
| model_type           | str   | :code:`'Segmentor'`                   | Types of models available: Segmentor, Regressor, Detector.  |
+----------------------+-------+---------------------------------------+-------------------------------------------------------------+
| class_names          | dict  | :code:`{0: 'background', 1: 'field'}` | A dictionary that maps a class id to its name.              |
+----------------------+-------+---------------------------------------+-------------------------------------------------------------+
| resolution           | float | :code:`100`                           | Real-world resolution of images (centimeters per pixel).    |
+----------------------+-------+---------------------------------------+-------------------------------------------------------------+
| tiles_size           | int   | :code:`512`                           | Size (in pixels) of the tiles to crop.                      |
+----------------------+-------+---------------------------------------+-------------------------------------------------------------+
| tiles_overlap        | int   | :code:`40`                            | Overlap between consecutive tiles, in percent of tile size. |
+----------------------+-------+---------------------------------------+-------------------------------------------------------------+
| standardization_mean | list  | :code:`[0.0, 0.0, 0.0]`               | Mean, if you want to standardize input after normalization. |
+----------------------+-------+---------------------------------------+-------------------------------------------------------------+
| standardization_std  | list  | :code:`[1.0, 1.0, 1.0]`               | Std, if you want to standardize input after normalization.  |
+----------------------+-------+---------------------------------------+-------------------------------------------------------------+
| seg_thresh           | float | :code:`0.5`                           | Segmentor: class confidence threshold.                      |
+----------------------+-------+---------------------------------------+-------------------------------------------------------------+
| seg_small_segment    | int   | :code:`7`                             | Segmentor: remove small occurrences of the class.           |
+----------------------+-------+---------------------------------------+-------------------------------------------------------------+
| reg_output_scaling   | float | :code:`1.0`                           | Regressor: scaling factor for the model output.             |
+----------------------+-------+---------------------------------------+-------------------------------------------------------------+
| det_conf             | float | :code:`0.6`                           | Detector: object confidence threshold.                      |
+----------------------+-------+---------------------------------------+-------------------------------------------------------------+
| det_iou_thresh       | float | :code:`0.4`                           | Detector: IoU threshold for NMS.                            |
+----------------------+-------+---------------------------------------+-------------------------------------------------------------+
| det_type             | str   | :code:`YOLO_v5_or_v7_default`         | Detector: type of the detector model format.                |
+----------------------+-------+---------------------------------------+-------------------------------------------------------------+
| det_remove_overlap   | bool  | :code:`True`                          | Detector: whether overlapping detections should be removed. |
+----------------------+-------+---------------------------------------+-------------------------------------------------------------+
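
For example, with :code:`tiles_size = 512` and :code:`tiles_overlap = 40`, adjacent tiles would share 512 * 0.40 ≈ 205 px (assuming the overlap is taken as a percentage of the tile size), giving a processing stride of about 307 px.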

=======
Example
@@ -70,6 +73,15 @@ The example below shows how to add string, float, and dictionary metadata into a
m3.key = 'resolution'
m3.value = json.dumps(50)
# optional, if you want to standardize the input after normalization
m4 = model.metadata_props.add()
m4.key = 'standardization_mean'
m4.value = json.dumps([0.0, 0.0, 0.0])
m5 = model.metadata_props.add()
m5.key = 'standardization_std'
m5.value = json.dumps([1.0, 1.0, 1.0])
onnx.save(model, 'deeplabv3_landcover_4c.onnx')
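
After saving, the metadata can be read back to verify the round trip. Below is a minimal sketch, assuming (as in the snippet above) that every value was serialized with :code:`json.dumps`:

import json

import onnx

model = onnx.load('deeplabv3_landcover_4c.onnx')

# metadata_props is a list of key/value string pairs; each value above
# was written with json.dumps, so decode it with json.loads
metadata = {m.key: json.loads(m.value) for m in model.metadata_props}

print(metadata['resolution'])            # 50
print(metadata['standardization_mean'])  # [0.0, 0.0, 0.0]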
2 changes: 1 addition & 1 deletion docs/source/creators/creators_example_onnx_model.rst
@@ -82,4 +82,4 @@ Steps based on the `tensorflow-onnx <https://github.com/onnx/tensorflow-onnx>`_
Update ONNX model to support dynamic batch size
===============================================

To convert a model to support dynamic batch size, you need to update the :code:`model.onnx` file. You can do it manually using `this <https://github.com/onnx/onnx/issues/2182#issuecomment-881752539>`_ script. Please note that the script is not perfect and may not work for all models.
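
For illustration, the core idea of that script is to replace the fixed batch dimension of every graph input and output with a symbolic one. A minimal sketch of that idea follows (file names are placeholders); like the full script, it may fail for models with batch sizes hard-coded inside the graph:

import onnx

model = onnx.load('model.onnx')

# Replace the first (batch) dimension of each graph input and output
# with a symbolic name, so the model accepts any batch size
for tensor in list(model.graph.input) + list(model.graph.output):
    tensor.type.tensor_type.shape.dim[0].dim_param = 'batch_size'

onnx.checker.check_model(model)
onnx.save(model, 'model_dynamic_batch.onnx')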
13 changes: 7 additions & 6 deletions docs/source/main/model_zoo/MODEL_ZOO.md
@@ -2,30 +2,33 @@

The [Model ZOO](https://chmura.put.poznan.pl/s/2pJk4izRurzQwu3) is a collection of pre-trained deep learning models in the ONNX format. It provides an easy starting point for using the plugin.

> NOTE: the provided models are not universal tools and will perform well only on data similar to that in the training datasets. If you notice the model is not performing well on your data, consider re-training (or fine-tuning) it on your data.
> If you do not have machine learning expertise, feel free to contact the plugin authors for help or advice.
## Segmentation models

| Model | Input size | CM/PX | Description | Example image |
|----------------------------------------------------------------------------------|------------|-------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|---------------------------------------------------------|
| [Corn Field Damage Segmentation](https://chmura.put.poznan.pl/s/abWFTVYSDIcncWs) | 512 | 3 | [PUT Vision](https://putvision.github.io/) model for Corn Field Damage Segmentation, created on our own dataset labeled by experts. We used the classical UNet++ model. It generates 3 outputs: healthy crop, damaged crop, and out-of-field area. | [Image](https://chmura.put.poznan.pl/s/i5WVmcfqPNdBTAQ) |
| [Land Cover Segmentation](https://chmura.put.poznan.pl/s/PnAFJw27uneROkV) | 512 | 40 | The model is trained on the [LandCover.ai dataset](https://landcover.ai.linuxpolska.com/). It provides satellite images with 25 cm/px and 50 cm/px resolution. Annotation masks for the following classes are provided for the images: building (1), woodland (2), water (3), road (4). We use the `DeepLabV3+` model with a `tu-semnasnet_100` backbone and `FocalDice` as a loss function. NOTE: the dataset covers only the area of Poland, therefore the performance may be inferior in other parts of the world. | [Image](https://chmura.put.poznan.pl/s/Xa29vnieNQTvSt5) |
| [Buildings Segmentation](https://chmura.put.poznan.pl/s/MwhgQNhyQF3fuBs) | 256 | 40 | Trained on the [RampDataset](https://cmr.earthdata.nasa.gov/search/concepts/C2781412367-MLHUB.html). Annotation masks for buildings and background. X-UNet network. Val F1-score: 81.0. | [Image](https://chmura.put.poznan.pl/s/XCjuDKDS3FFovDl) |
| [Land Cover Segmentation Sentinel-2](https://chmura.put.poznan.pl/s/UbljXBr1XSc9hCL) | 64 | 1000 | Trained on the [Eurosat dataset](https://www.tensorflow.org/datasets/catalog/eurosat). Uses 13 spectral bands from Sentinel-2, with 10 classes. ConvNeXt model. | [Image](https://chmura.put.poznan.pl/s/pGR5VX6AV3hYKVl) |
| [Agriculture segmentation RGB+NIR](https://chmura.put.poznan.pl/s/wf5Ml1ZDyiVdNiy) | 256 | 30 | Trained on the [Agriculture Vision 2021 dataset](https://www.agriculture-vision.com/agriculture-vision-2021/dataset-2021). 4-channel input (RGB + NIR). 9 output classes within agricultural fields (weed_cluster, waterway, ...). Uses X-UNet. | [Image](https://chmura.put.poznan.pl/s/35A5ISUxLxcK7kL) |
| [Fire risk assessment](https://chmura.put.poznan.pl/s/NxKLdfdr9s9jsVA) | 384 | 100 | Trained on the FireRisk dataset (RGB data). Classifies the risk of fires (very_high, high, low, ...). Uses ConvNeXt XXL. Val F1-score: 65.5. | [Image](https://chmura.put.poznan.pl/s/Ijn3VgG76NvYtDY) |
| [Roads Segmentation](https://chmura.put.poznan.pl/s/y6S3CmodPy1fYYz) | 512 | 21 | The model segments the Google Earth satellite images into 'road' and 'not-road' classes. Model works best on wide car roads, crossroads and roundabouts. | [Image](https://chmura.put.poznan.pl/s/rln6mpbjpsXWpKg) |

## Regression models

| Model | Input size | CM/PX | Description | Example image |
|---------|---|---|---|---|
| | | | | |

## Recognition models

| Model | Input size | CM/PX | Description | Example image |
|---------|---|---|---|---|
| [NAIP Place recognition](https://chmura.put.poznan.pl/s/k7EvbNGc2udHvck) | 224 | 100 | ConvNeXt nano trained using SimSiam on [NAIP imagery](https://earth.esa.int/eogateway/catalog/pleiades-esa-archive). Rank-1 accuracy: 75.0. | [Image](https://chmura.put.poznan.pl/s/UzAvz8w5ceCui9y) |
| | | | | |

## Object detection models
@@ -44,8 +47,6 @@ The [Model ZOO](https://chmura.put.poznan.pl/s/2pJk4izRurzQwu3) is a collection
|[Residual Dense Network (RDN X4)](https://chmura.put.poznan.pl/s/AaKySmOoOhxW6qZ) |64 |Trained on 10 cm/px images; set the same as the input data | X4 | Model originally trained by H. Zhang et al. in "[A Comparative Study on CNN-Based Single-Image Super-Resolution Techniques for Satellite Images](https://github.com/farahmand-m/satellite-image-super-resolution)", converted to the ONNX format | [Image](https://chmura.put.poznan.pl/s/Ruz24ZpMNg97joV) from the Massachusetts Roads Dataset ([dataset on Kaggle](https://www.kaggle.com/datasets/balraj98/massachusetts-roads-dataset)) |

## Contributing

PRs with models are welcome!
@@ -0,0 +1,11 @@
import numpy as np


class StandardizationParameters:
    """Per-channel mean and std used to standardize model input after normalization."""

    def __init__(self, channels_number: int):
        # Default to identity standardization: mean 0 and std 1 for every channel
        self.mean = np.array([0.0 for _ in range(channels_number)], dtype=np.float32)
        self.std = np.array([1.0 for _ in range(channels_number)], dtype=np.float32)

    def set_mean_std(self, mean: np.ndarray, std: np.ndarray):
        self.mean = np.array(mean, dtype=np.float32)
        self.std = np.array(std, dtype=np.float32)
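
A hypothetical usage of the class above; the mean and std values are the common ImageNet statistics, used here purely as an illustration:

import numpy as np

# Three-channel (RGB) input
params = StandardizationParameters(channels_number=3)
params.set_mean_std(mean=np.array([0.485, 0.456, 0.406]),
                    std=np.array([0.229, 0.224, 0.225]))

# Standardize an already-normalized image of shape (H, W, C);
# broadcasting applies the per-channel statistics along the last axis
image = np.random.rand(256, 256, 3).astype(np.float32)
standardized = (image - params.mean) / params.std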
26 changes: 15 additions & 11 deletions src/deepness/deepness.py
@@ -8,24 +8,22 @@
import logging
import traceback

from qgis.core import Qgis
from qgis.core import QgsApplication
from qgis.core import QgsProject
from qgis.core import QgsVectorLayer
from qgis.gui import QgisInterface
from qgis.PyQt.QtWidgets import QAction, QMessageBox

from deepness.common.defines import IS_DEBUG, PLUGIN_NAME
from deepness.common.lazy_package_loader import LazyPackageLoader
from deepness.common.processing_parameters.map_processing_parameters import MapProcessingParameters, ProcessedAreaType
from deepness.common.processing_parameters.training_data_export_parameters import TrainingDataExportParameters
from deepness.deepness_dockwidget import DeepnessDockWidget
from deepness.dialogs.resizable_message_box import ResizableMessageBox
from deepness.images.get_image_path import get_icon_path
from deepness.processing.map_processor.map_processing_result import (MapProcessingResult, MapProcessingResultCanceled,
MapProcessingResultFailed,
MapProcessingResultSuccess)
from deepness.processing.map_processor.map_processor_training_data_export import MapProcessorTrainingDataExport
from deepness.processing.models.model_types import ModelDefinition

@@ -308,5 +306,11 @@ def _map_processor_finished(self, result: MapProcessingResult):
msg = 'Processing finished!'
self.iface.messageBar().pushMessage(PLUGIN_NAME, msg, level=Qgis.Success, duration=3)
message_to_show = result.message

msgBox = ResizableMessageBox(self.dockwidget)
msgBox.setWindowTitle("Processing Result")
msgBox.setText(message_to_show)
msgBox.setStyleSheet("QLabel{min-width: 800px; font-size: 24px;} QPushButton{ width: 250px; font-size: 18px; }")
msgBox.exec()

self._map_processor = None
20 changes: 20 additions & 0 deletions src/deepness/dialogs/resizable_message_box.py
@@ -0,0 +1,20 @@
from qgis.PyQt.QtWidgets import QMessageBox, QTextEdit


class ResizableMessageBox(QMessageBox):
    """A QMessageBox that the user can resize (QMessageBox is fixed-size by default)."""

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.setSizeGripEnabled(True)

    def event(self, event):
        # QMessageBox re-applies fixed size constraints on layout and resize
        # events, so lift the size limits again whenever one of them occurs
        if event.type() in (event.LayoutRequest, event.Resize):
            if event.type() == event.Resize:
                res = super().event(event)
            else:
                res = False
            details = self.findChild(QTextEdit)
            if details:
                details.setMaximumSize(16777215, 16777215)  # 16777215 == QWIDGETSIZE_MAX
            self.setMaximumSize(16777215, 16777215)
            return res
        return super().event(event)
2 changes: 1 addition & 1 deletion src/deepness/metadata.txt
@@ -6,7 +6,7 @@
name=Deepness: Deep Neural Remote Sensing
qgisMinimumVersion=3.22
description=Inference of deep neural network models (ONNX) for segmentation, detection and regression
version=0.6.0
author=PUT Vision
[email protected]
