Merge pull request #116 from PUTvision/devel
devel -> master
bartoszptak authored Sep 28, 2023
2 parents f1227ff + fbc99f0 commit 037894d
Showing 35 changed files with 615 additions and 205 deletions.
1 change: 0 additions & 1 deletion .readthedocs.yaml
@@ -19,4 +19,3 @@ sphinx:
python:
  install:
    - requirements: requirements_development.txt
  system_packages: true
10 changes: 9 additions & 1 deletion docs/source/creators/creators_description_classes.rst
@@ -43,7 +43,7 @@ Detection models allow to solve problem of objects detection, that is finding an
An example application is detection of oil and water tanks on satellite images.

The detection model output is a list of bounding boxes, each with an assigned class and confidence value. This information is not really standardized between different model architectures.
Currently plugin supports :code:`YOLOv5` and :code:`YOLOv7` output types.
Currently plugin supports :code:`YOLOv5`, :code:`YOLOv7` and :code:`ULTRALYTICS` output types.

For each object class, a separate vector layer can be created, with the information saved as rectangular polygons (so the output can easily be exported to a text file).
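To make the bounding-box output concrete, here is a hypothetical sketch of the information one detection carries (the field names are illustrative, not the plugin's actual API):

```python
from dataclasses import dataclass


@dataclass
class Detection:
    """One detected object; field names are illustrative, not the plugin's API."""
    x_min: float      # bounding box corners, in pixel coordinates
    y_min: float
    x_max: float
    y_max: float
    class_id: int     # index into the model's class list
    confidence: float # detection confidence in [0, 1]


det = Detection(x_min=10.0, y_min=20.0, x_max=110.0, y_max=95.0,
                class_id=1, confidence=0.87)
print(det.class_id, det.confidence)
```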

@@ -65,6 +65,14 @@ Usually, only one output map (class) is used, as the model usually tries to solv

The output report contains statistics for each class: average, min, max, and standard deviation of the values.

=====================
SuperResolution Model
=====================
SuperResolution models solve the problem of increasing the resolution of an image. The model takes a low-resolution image as input and outputs a high-resolution image.

An example application is increasing the resolution of satellite images.

The superresolution model output is also an image with the same dimensions as the input tile, but with a higher resolution (smaller GSD).
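As an illustration of the relationship above, a minimal sketch assuming a x2 scale factor (not the plugin's actual code):

```python
# Hypothetical illustration: a x2 super-resolution model maps an (H, W, C)
# tile to a (2H, 2W, C) tile covering the same ground area, so the ground
# sampling distance (GSD) per pixel halves while the tile footprint stays fixed.
scale_factor = 2
low_res_shape = (256, 256, 3)  # input tile: height, width, channels
high_res_shape = (low_res_shape[0] * scale_factor,
                  low_res_shape[1] * scale_factor,
                  low_res_shape[2])
print(high_res_shape)  # (512, 512, 3)
```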

================
Extending Models
1 change: 1 addition & 0 deletions docs/source/main/model_zoo/MODEL_ZOO.md
@@ -28,6 +28,7 @@ The [Model ZOO](https://chmura.put.poznan.pl/s/2pJk4izRurzQwu3) is a collection
| [Airbus Planes Detection](https://chmura.put.poznan.pl/s/bBIJ5FDPgyQvJ49) | 256 | 70 | YOLOv7 tiny model for object detection on satellite images. Based on the [Airbus Aircraft Detection dataset](https://www.kaggle.com/datasets/airbusgeo/airbus-aircrafts-sample-dataset). | [Image](https://chmura.put.poznan.pl/s/VfLmcWhvWf0UJfI) |
| [Airbus Oil Storage Detection](https://chmura.put.poznan.pl/s/gMundpKsYUC7sNb) | 512 | 150 | YOLOv5-m model for object detection on satellite images. Based on the [Airbus Oil Storage Detection dataset](https://www.kaggle.com/datasets/airbusgeo/airbus-oil-storage-detection-dataset). | [Image](https://chmura.put.poznan.pl/s/T3pwaKlbFDBB2C3) |
| [Aerial Cars Detection](https://chmura.put.poznan.pl/s/vgOeUN4H4tGsrGm) | 640 | 10 | YOLOv7-m model for cars detection on aerial images. Based on the [ITCVD](https://arxiv.org/pdf/1801.07339.pdf). | [Image](https://chmura.put.poznan.pl/s/cPzw1mkXlprSUIJ) |
| [UAVVaste Instance Segmentation](https://chmura.put.poznan.pl/s/v99rDlSPbyNpOCH) | 640 | 0.5 | YOLOv8-L Instance Segmentation model for litter detection on high-quality UAV images. Based on the [UAVVaste dataset](https://github.com/PUTvision/UAVVaste). | [Image](https://chmura.put.poznan.pl/s/KFQTlS2qtVnaG0q) |

## Super Resolution Models
| Model | Input size | CM/PX | Scale Factor | Description | Example image |
8 changes: 4 additions & 4 deletions src/deepness/common/channels_mapping.py
@@ -18,10 +18,10 @@ def __init__(self, name):
        self.name = name

    def get_band_number(self):
        raise NotImplementedError
        raise NotImplementedError('Base class not implemented!')

    def get_byte_number(self):
        raise NotImplementedError
        raise NotImplementedError('Base class not implemented!')


class ImageChannelStandaloneBand(ImageChannel):
@@ -43,7 +43,7 @@ def get_band_number(self):
        return self.band_number

    def get_byte_number(self):
        raise NotImplementedError
        raise NotImplementedError('Something went wrong if we are here!')


class ImageChannelCompositeByte(ImageChannel):
@@ -62,7 +62,7 @@ def __str__(self):
        return txt

    def get_band_number(self):
        raise NotImplementedError
        raise NotImplementedError('Something went wrong if we are here!')

    def get_byte_number(self):
        return self.byte_number
37 changes: 37 additions & 0 deletions src/deepness/common/processing_overlap.py
@@ -0,0 +1,37 @@
import enum


class ProcessingOverlapOptions(enum.Enum):
    OVERLAP_IN_PIXELS = 'Overlap in pixels'
    OVERLAP_IN_PERCENT = 'Overlap in percent'


class ProcessingOverlap:
    """ Represents overlap between tiles during processing
    """
    def __init__(self, selected_option: ProcessingOverlapOptions, percentage: float = None, overlap_px: int = None):
        self.selected_option = selected_option

        if selected_option == ProcessingOverlapOptions.OVERLAP_IN_PERCENT and percentage is None:
            raise ValueError(f"Percentage must be specified when using {ProcessingOverlapOptions.OVERLAP_IN_PERCENT}")
        if selected_option == ProcessingOverlapOptions.OVERLAP_IN_PIXELS and overlap_px is None:
            raise ValueError(f"Overlap in pixels must be specified when using {ProcessingOverlapOptions.OVERLAP_IN_PIXELS}")

        if selected_option == ProcessingOverlapOptions.OVERLAP_IN_PERCENT:
            self._percentage = percentage
        elif selected_option == ProcessingOverlapOptions.OVERLAP_IN_PIXELS:
            self._overlap_px = overlap_px
        else:
            raise ValueError(f"Unknown option: {selected_option}")

    def get_overlap_px(self, tile_size_px: int) -> int:
        """ Returns the overlap in pixels
        :param tile_size_px: Tile size in pixels
        :return: Overlap in pixels (always divisible by 2)
        """
        if self.selected_option == ProcessingOverlapOptions.OVERLAP_IN_PIXELS:
            return self._overlap_px
        # Round the percent-based overlap down to an even pixel count,
        # so it can be split equally between both sides of a tile
        return int(tile_size_px * self._percentage / 100) // 2 * 2
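A minimal standalone sketch of how the two overlap options resolve to a pixel value, assuming the percent-based overlap is rounded down to an even pixel count (as the "divisible by 2" docstring elsewhere in this PR suggests):

```python
import enum


class ProcessingOverlapOptions(enum.Enum):
    OVERLAP_IN_PIXELS = 'Overlap in pixels'
    OVERLAP_IN_PERCENT = 'Overlap in percent'


def resolve_overlap_px(option: ProcessingOverlapOptions, tile_size_px: int,
                       percentage: float = None, overlap_px: int = None) -> int:
    # Pixel mode passes the configured value through unchanged
    if option is ProcessingOverlapOptions.OVERLAP_IN_PIXELS:
        return overlap_px
    # Percent mode converts relative to the tile size, rounded down to an
    # even number so the overlap splits equally between both tile sides
    return int(tile_size_px * percentage / 100) // 2 * 2


print(resolve_overlap_px(ProcessingOverlapOptions.OVERLAP_IN_PERCENT, 512, percentage=10))  # 50
print(resolve_overlap_px(ProcessingOverlapOptions.OVERLAP_IN_PIXELS, 512, overlap_px=64))   # 64
```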
src/deepness/common/processing_parameters/detection_parameters.py
@@ -21,6 +21,7 @@ class DetectorType(enum.Enum):
    YOLO_v5_v7_DEFAULT = 'YOLO_v5_or_v7_default'
    YOLO_v6 = 'YOLO_v6'
    YOLO_ULTRALYTICS = 'YOLO_Ultralytics'
    YOLO_ULTRALYTICS_SEGMENTATION = 'YOLO_Ultralytics_segmentation'

    def get_parameters(self):
        if self == DetectorType.YOLO_v5_v7_DEFAULT:
@@ -29,7 +30,7 @@ def get_parameters(self):
            return DetectorTypeParameters(
                ignore_objectness_probability=True,
            )
        elif self == DetectorType.YOLO_ULTRALYTICS:
        elif self == DetectorType.YOLO_ULTRALYTICS or self == DetectorType.YOLO_ULTRALYTICS_SEGMENTATION:
            return DetectorTypeParameters(
                has_inverted_output_shape=True,
                skipped_objectness_probability=True,
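For context on the `has_inverted_output_shape` and `skipped_objectness_probability` flags, a sketch of how the raw YOLO output layouts typically differ (assumed shapes for illustration, not the plugin's actual parsing code):

```python
import numpy as np

# Hypothetical raw outputs for a model with 3 classes and 8400 candidate boxes.
# YOLOv5/v7 style: (batch, num_boxes, 4 box coords + 1 objectness + num_classes)
yolov5_out = np.zeros((1, 8400, 4 + 1 + 3), dtype=np.float32)

# Ultralytics (YOLOv8) style: channels first and no objectness column,
# i.e. (batch, 4 + num_classes, num_boxes) - hence "inverted output shape"
# and "skipped objectness probability" in the parameters above.
ultralytics_out = np.zeros((1, 4 + 3, 8400), dtype=np.float32)

# Transposing the Ultralytics output recovers the boxes-first layout:
print(ultralytics_out.transpose(0, 2, 1).shape)  # (1, 8400, 7)
```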
src/deepness/common/processing_parameters/map_processing_parameters.py
@@ -3,6 +3,7 @@
from typing import Optional

from deepness.common.channels_mapping import ChannelsMapping
from deepness.common.processing_overlap import ProcessingOverlap


class ProcessedAreaType(enum.Enum):
@@ -40,7 +41,7 @@ class MapProcessingParameters:
    input_layer_id: str  # raster layer to process
    mask_layer_id: Optional[str]  # Processing of masked layer - if processed_area_type is FROM_POLYGONS

    processing_overlap_percentage: float  # aka stride - overlap of neighbouring tiles while processing (0-100)
    processing_overlap: ProcessingOverlap  # aka "stride" - how much to overlap tiles during processing

    input_channels_mapping: ChannelsMapping  # describes mapping of image channels to model inputs

@@ -54,9 +55,9 @@ def tile_size_m(self):
    @property
    def processing_overlap_px(self) -> int:
        """
        Always multiple of 2
        Always divisible by 2, because overlap is on both sides of the tile
        """
        return int(self.tile_size_px * self.processing_overlap_percentage / 100 * 2) // 2
        return self.processing_overlap.get_overlap_px(self.tile_size_px)

    @property
    def resolution_m_per_px(self):
2 changes: 2 additions & 0 deletions src/deepness/deepness.py
@@ -5,6 +5,7 @@
Skeleton of this file was generated with the QGIS plugin to create a plugin skeleton - QGIS PluginBuilder
"""

import logging
import traceback

from qgis.PyQt.QtCore import QCoreApplication, Qt
@@ -292,6 +293,7 @@ def _map_processor_finished(self, result: MapProcessingResult):
        msg = f'Error! Processing error: "{result.message}"!'
        self.iface.messageBar().pushMessage(PLUGIN_NAME, msg, level=Qgis.Critical, duration=14)
        if result.exception is not None:
            logging.error(msg)
            trace = '\n'.join(traceback.format_tb(result.exception.__traceback__)[-1:])
            msg = f'{msg}\n\n\n' \
                  f'Details: ' \
29 changes: 28 additions & 1 deletion src/deepness/deepness_dockwidget.py
@@ -14,6 +14,7 @@
from deepness.common.config_entry_key import ConfigEntryKey
from deepness.common.defines import IS_DEBUG, PLUGIN_NAME
from deepness.common.errors import OperationFailedException
from deepness.common.processing_overlap import ProcessingOverlap, ProcessingOverlapOptions
from deepness.common.processing_parameters.detection_parameters import DetectionParameters, DetectorType
from deepness.common.processing_parameters.map_processing_parameters import (MapProcessingParameters, ModelOutputFormat,
                                                                             ProcessedAreaType)
@@ -164,6 +165,7 @@ def _setup_misc_ui(self):
        self.mMapLayerComboBox_inputLayer.setFilters(QgsMapLayerProxyModel.RasterLayer)
        self.mMapLayerComboBox_areaMaskLayer.setFilters(QgsMapLayerProxyModel.VectorLayer)
        self._set_processed_area_mask_options()
        self._set_processing_overlap_enabled()

        for model_definition in ModelDefinition.get_model_definitions():
            self.comboBox_modelType.addItem(model_definition.model_type.value)
@@ -201,6 +203,8 @@ def _create_connections(self):
        self.checkBox_pixelClassEnableThreshold.stateChanged.connect(self._set_probability_threshold_enabled)
        self.checkBox_removeSmallAreas.stateChanged.connect(self._set_remove_small_segment_enabled)
        self.comboBox_modelOutputFormat.currentIndexChanged.connect(self._model_output_format_changed)
        self.radioButton_processingTileOverlapPercentage.toggled.connect(self._set_processing_overlap_enabled)
        self.radioButton_processingTileOverlapPixels.toggled.connect(self._set_processing_overlap_enabled)

    def _model_type_changed(self):
        model_type = ModelType(self.comboBox_modelType.currentText())
@@ -235,6 +239,13 @@ def _model_output_format_changed(self):
        model_output_format = ModelOutputFormat(txt)
        class_number_selection_enabled = bool(model_output_format == ModelOutputFormat.ONLY_SINGLE_CLASS_AS_LAYER)
        self.comboBox_outputFormatClassNumber.setEnabled(class_number_selection_enabled)

    def _set_processing_overlap_enabled(self):
        overlap_percentage_enabled = self.radioButton_processingTileOverlapPercentage.isChecked()
        self.spinBox_processingTileOverlapPercentage.setEnabled(overlap_percentage_enabled)

        overlap_pixels_enabled = self.radioButton_processingTileOverlapPixels.isChecked()
        self.spinBox_processingTileOverlapPixels.setEnabled(overlap_pixels_enabled)

    def _set_probability_threshold_enabled(self):
        self.doubleSpinBox_probabilityThreshold.setEnabled(self.checkBox_pixelClassEnableThreshold.isChecked())
@@ -400,6 +411,20 @@ def _get_input_layer_id(self):
        else:
            return ''

    def _get_overlap_parameter(self):
        if self.radioButton_processingTileOverlapPercentage.isChecked():
            return ProcessingOverlap(
                selected_option=ProcessingOverlapOptions.OVERLAP_IN_PERCENT,
                percentage=self.spinBox_processingTileOverlapPercentage.value(),
            )
        elif self.radioButton_processingTileOverlapPixels.isChecked():
            return ProcessingOverlap(
                selected_option=ProcessingOverlapOptions.OVERLAP_IN_PIXELS,
                overlap_px=self.spinBox_processingTileOverlapPixels.value(),
            )
        else:
            raise Exception('Something went wrong: no overlap option is selected!')

    def _get_pixel_classification_threshold(self):
        if not self.checkBox_pixelClassEnableThreshold.isChecked():
            return 0
@@ -491,7 +516,7 @@ def _get_map_processing_parameters(self) -> MapProcessingParameters:
            processed_area_type=processed_area_type,
            mask_layer_id=self.get_mask_layer_id(),
            input_layer_id=self._get_input_layer_id(),
            processing_overlap_percentage=self.spinBox_processingTileOverlapPercentage.value(),
            processing_overlap=self._get_overlap_parameter(),
            input_channels_mapping=self._input_channels_mapping_widget.get_channels_mapping(),
            model_output_format=ModelOutputFormat(self.comboBox_modelOutputFormat.currentText()),
            model_output_format__single_class_number=self.comboBox_outputFormatClassNumber.currentIndex(),
@@ -508,6 +533,7 @@ def _run_inference(self):
        except OperationFailedException as e:
            msg = str(e)
            self.iface.messageBar().pushMessage(PLUGIN_NAME, msg, level=Qgis.Warning, duration=7)
            logging.exception(msg)
            QMessageBox.critical(self, "Error!", msg)
            return

@@ -531,6 +557,7 @@ def _run_training_data_export(self):
        except OperationFailedException as e:
            msg = str(e)
            self.iface.messageBar().pushMessage(PLUGIN_NAME, msg, level=Qgis.Warning)
            logging.exception(msg)
            QMessageBox.critical(self, "Error!", msg)
            return
