Merge pull request #61 from PUTvision/devel
devel -> master
przemyslaw-aszkowski authored Oct 27, 2022
2 parents 28885d6 + c53b9e3 commit a6d2dd5
Showing 19 changed files with 364 additions and 179 deletions.
12 changes: 11 additions & 1 deletion .github/workflows/python-app.yml
Original file line number Diff line number Diff line change
@@ -15,6 +15,10 @@ jobs:

steps:
- uses: actions/checkout@v3
- name: Prepare directory for packing with zip
run: |
mkdir -p out/deepness
cp -r plugin/deepness/ out/deepness/deepness
- name: Update
run: |
sudo apt update
@@ -48,9 +52,15 @@ jobs:
# run the actual tests
xvfb-run python3 -m pytest --cov=plugin/deepness/ --cov-report html test/
- name: 'Upload Artifact'
- name: 'Upload Artifact - test coverage'
uses: actions/upload-artifact@v3
with:
name: htmlcov
path: htmlcov/
retention-days: 30
- name: 'Upload Artifact'
uses: actions/upload-artifact@v3
with:
name: deepness
path: out/deepness
retention-days: 89
79 changes: 45 additions & 34 deletions README.md
@@ -1,15 +1,16 @@
<p align="center">
<img width="250" height="250" src="plugin/deepness/images/icon.png" alt="dsf_logo">

<h2 align="center">Deepness: <b>Deep</b> <b>N</b>eural r<b>E</b>mote <b>S</b>en<b>S</b>ing QGIS Plugin</h2>
<h2 align="center">Deepness: Deep Neural Remote Sensing QGIS Plugin</h2>
</p>

![main](https://github.com/PUTvision/qgis-plugin-deepness/actions/workflows/python-app.yml/badge.svg)
[![GitHub contributors](https://img.shields.io/github/contributors/PUTvision/qgis-plugin-deepness)](https://github.com/PUTvision/qgis-plugin-deepness/graphs/contributors)
[![PRs Welcome](https://img.shields.io/badge/PRs-welcome-brightgreen.svg?style=flat-square)](https://makeapullrequest.com)
[![GitHub stars](https://img.shields.io/github/stars/PUTvision/qgis-plugin-deepness)](https://github.com/PUTvision/qgis-plugin-deepness/stargazers)
[![GitHub forks](https://img.shields.io/github/forks/PUTvision/qgis-plugin-deepness)](https://github.com/PUTvision/qgis-plugin-deepness/network/members)

Plugin for QGIS to perform map/image segmentation, regression and object detection with (ONNX) neural network models.
Plugin for QGIS to perform map/image segmentation, regression and object detection with (ONNX) neural network models.

## Introduction video

@@ -19,50 +20,60 @@ Plugin for QGIS to perform map/image segmentation, regression and object detecti

You can find the documentation [here](https://qgis-plugin-deepness.readthedocs.io/).

# Development
- Install QGIS (the plugin was tested with QGIS 3.12)
- Debian/Ubuntu based systems: `sudo apt install qgis`
- Fedora: `sudo dnf install qgis`
- Arch Linux: `sudo pacman -S qgis`
- Windows, macOS and others: https://qgis.org/en/site/forusers/download.html
- Create virtual environment (with global packages inherited!):
```
## Deepness Model ZOO

Check our example models in the [Model ZOO](./docs/source/main/model_zoo/MODEL_ZOO.md).

## Development

- Install QGIS (the plugin was tested with QGIS 3.12)
- Debian/Ubuntu based systems: `sudo apt install qgis`
- Fedora: `sudo dnf install qgis-devel`
- Arch Linux: `sudo pacman -S qgis`
- [Windows, macOS and others](https://qgis.org/en/site/forusers/download.html)
- Create virtual environment (with global packages inherited!):

```bash
python3 -m venv venv --system-site-packages
```
- Create a symlink to our plugin in a QGIS plugin directory:
```

- Create a symlink to our plugin in a QGIS plugin directory:

```bash
ln -s $PROJECT_DIR/plugin/deepness ~/.local/share/QGIS/QGIS3/profiles/default/python/plugins/deepness
```
- Activate the environment and install requirements:
```

- Activate the environment and install requirements:

```bash
. venv/bin/activate
pip install -r requirements.txt
```
- Run QGis in the virtual environment:
```

- Run QGIS in the virtual environment:

```bash
export IS_DEBUG=true # to enable some debugging options
qgis
```
- Enable `Deepness` plugin in the `Plugins -> Manage and install plugins`
- Install and enable:
- `Plugin reloader` plugin - allows plugins reloading
- `first aid` plugin - prints stack traces for exceptions

- Enable `Deepness` plugin in the `Plugins -> Manage and install plugins`
- Install and enable:
- `Plugin reloader` plugin - allows plugins reloading
- `first aid` plugin - prints stack traces for exceptions

After the plugin code is modified, use the `Plugin reloader` to reload our plugin.

# Unit tests
## Unit tests

See [test/README.md](test/README.md)

# Documentation
See [docs/README.md](docs/README.md)

# Development notes
- plugin skeleton was initially generated with `Plugin Builder`, but then refactored and cleaned up a little bit
- Before release: change version number in `metadata.txt` and in docs (?)
- to recreate resource file (`resource.qrsc`) run:
```
cd plugin/deepness
pyrcc5 -o resources.py resources.qrc
```
Though I'm not sure if this file is even needed anymore
-
## Bugs, feature requests and questions

If you encountered some problems or have some feature requests you think will make this project better, consider opening an [issue](https://github.com/PUTvision/qgis-plugin-deepness/issues/new).

If you don't understand something and/or have some questions, ask them in [Discussions](https://github.com/PUTvision/qgis-plugin-deepness/discussions).

## Contributing

PRs are welcome! Read our [General Information for Developers](https://qgis-plugin-deepness.readthedocs.io/en/latest/dev/dev_general_info.html). Consider discussing your plans with maintainers.
9 changes: 6 additions & 3 deletions docs/source/conf.py
@@ -68,6 +68,8 @@
'sphinx.ext.intersphinx',
'sphinx.ext.napoleon',
'sphinxcontrib.youtube',
'm2r2',
'sphinx_rtd_size',
]

MOCK_MODULES = ['future', 'qgis', 'osgeo', 'qgis.core', 'qgis.gui', 'qgis.utils', 'qgis.PyQt', 'qgis.PyQt.QtWidgets', 'qgis.PyQt.QtGui', 'qgis.PyQt.QtCore', 'PyQt5']
@@ -109,6 +111,7 @@
#
# html_theme = 'alabaster'
html_theme = 'sphinx_rtd_theme'
sphinx_rtd_size_width = "60%"

html_theme_options = {
'collapse_navigation': False,
@@ -179,7 +182,7 @@
# (source start file, target name, title,
# author, documentclass [howto, manual, or own class]).
latex_documents = [
(master_doc, 'Deepness.tex', 'Deepness: Deep Neural rEmote SenSing',
(master_doc, 'Deepness.tex', 'Deepness: Deep Neural Remote Sensing',
'Przemysław Aszkowski \\& Bartosz Ptak', 'manual'),
]

@@ -189,7 +192,7 @@
# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [
(master_doc, 'Deepness', 'Deepness: Deep Neural rEmote SenSing',
(master_doc, 'Deepness', 'Deepness: Deep Neural Remote Sensing',
[author], 1)
]

@@ -200,7 +203,7 @@
# (source start file, target name, title, author,
# dir menu entry, description, category)
texinfo_documents = [
(master_doc, 'Deepness', 'Deepness: Deep Neural rEmote SenSing',
(master_doc, 'Deepness', 'Deepness: Deep Neural Remote Sensing',
author, 'Deepness', 'One line description of project.',
'Miscellaneous'),
]
5 changes: 4 additions & 1 deletion docs/source/creators/creators_add_metadata_to_model.rst
@@ -58,7 +58,7 @@ The example below shows how to add string, float, and dictionary metadata into a
m1 = model.metadata_props.add()
m1.key = 'model_type'
m1.value = json.dumps('segmenter')
m1.value = json.dumps('Segmentor')
m2 = model.metadata_props.add()
m2.key = 'class_names'
@@ -69,3 +69,6 @@ The example below shows how to add string, float, and dictionary metadata into a
m3.value = json.dumps(50)
onnx.save(model, 'deeplabv3_landcover_4c.onnx')
You can also use the script :code:`tools/add_model_metadata.py` from the plugin repository, which sets the most important parameters.
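Each metadata value in the snippet above is stored as a JSON string. Below is a minimal, stdlib-only sketch of that encoding; the `onnx` calls are shown only as comments, and the key names and class labels here are illustrative assumptions, not the plugin's full schema:

```python
import json

# Illustrative metadata entries; each value is a JSON-encoded string,
# matching the m1/m2/m3 pattern in the snippet above.
metadata = {
    'model_type': json.dumps('Segmentor'),
    'class_names': json.dumps({0: 'background', 1: 'landcover'}),
    'resolution_cm_per_px': json.dumps(50),  # hypothetical key name
}

# With onnx installed, each entry would become one metadata_props item:
#   m = model.metadata_props.add()
#   m.key, m.value = key, value
# followed by onnx.save(model, 'model_with_metadata.onnx')

# A reader decodes each value back with json.loads:
decoded = {key: json.loads(value) for key, value in metadata.items()}
```

Note that JSON object keys are always strings, so integer class ids come back as `'0'`, `'1'` after decoding.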
4 changes: 2 additions & 2 deletions docs/source/example/example_segmentation_landcover.rst
@@ -48,7 +48,7 @@ When model training is completed, export the model using the script below:
Example inference
=================

Run QGIS, next add Google Eart map using :code:`QuickMapServices` plugin.
Run QGIS, next add Google Earth map using :code:`QuickMapServices` plugin.

.. image:: ../images/example_landcover_input_image.webp

@@ -72,4 +72,4 @@ After a few seconds, the results are available:

* predicted mask with Google Earth background

.. image:: ../images/example_landcover_output_map.webp
.. image:: ../images/example_landcover_output_map.webp
4 changes: 2 additions & 2 deletions docs/source/main/main_features.rst
@@ -1,13 +1,13 @@
Plugin features
===============

- Processing any raster layer (custom ortophoto from file or layers from online providers, e.g Google Satellite.
- Processing any raster layer (custom orthophoto from a file or layers from online providers, e.g. Google Satellite).

- Limiting processing range to predefined area.

- Main types of models are supported: segmentation, regression, detection.

- Training data export tool.
- Training data export tool - exporting raster and mask as small tiles.

- Parametrization of the processing for advanced users.

38 changes: 1 addition & 37 deletions docs/source/main/main_model_zoo.rst
@@ -1,37 +1 @@
Sample Model ZOO
================

The `Model ZOO <https://chmura.put.poznan.pl/s/O69QZggRYprk3Ks>`_ is a collection of pre-trained, deep learning models in the ONNX format. It allows for an easy-to-use start with the plugin.


===================
Segmentation models
===================

+-----------------------------------------+-------------------------+----------------------------------------------------------------+
| Task | Model | ONNX Model |
+=========================================+=========================+================================================================+
| Land Cover segmentation | DeepLabV3+ | `Link <#>`_ |
+-----------------------------------------+-------------------------+----------------------------------------------------------------+
| Corn Field Damage Segmentation | UNet++ | `Link <https://chmura.put.poznan.pl/s/98zo9C5AdTK5ra4>`_ |
+-----------------------------------------+-------------------------+----------------------------------------------------------------+

=======================
Object detection models
=======================

+-----------------------------------------+-------------------------+----------------------------------------+
| Task | Model | ONNX Model |
+=========================================+=========================+========================================+
| Airbus Planes Detection | YOLOv7-tiny | `Link <#>`_ |
+--------------------+--------------------+-------------------------+----------------------------------------+
| Airbus Oil Storage Detection | YOLOv5-m | `Link <#>`_ |
+--------------------+--------------------+-------------------------+----------------------------------------+

=================
Regression models
=================

.. note::

Documentation in progress: add example regression model
.. mdinclude:: model_zoo/MODEL_ZOO.md
35 changes: 35 additions & 0 deletions docs/source/main/model_zoo/MODEL_ZOO.md
@@ -0,0 +1,35 @@
# Deepness Model ZOO

The [Model ZOO](https://chmura.put.poznan.pl/s/2pJk4izRurzQwu3) is a collection of pre-trained, deep learning models in the ONNX format. It allows for an easy-to-use start with the plugin.

## Segmentation models

| Model name | Input size | CM/PX | Description | Example image |
|------------------------------------------------------------------------------------|---|---|---|---------------------------------------------------------|
| [Corn Field Damage Segmentation](https://chmura.put.poznan.pl/s/abWFTVYSDIcncWs) | 512 | 3 | [PUT Vision](https://putvision.github.io/) model for Corn Field Damage Segmentation created on own dataset labeled by experts. We used the classical UNet++ model. It generates 3 outputs: healthy crop, damaged crop, and out-of-field area. | [Image](https://chmura.put.poznan.pl/s/i5WVmcfqPNdBTAQ) |
| [Land Cover Segmentation](https://chmura.put.poznan.pl/s/PnAFJw27uneROkV) | 512 | 40 | The model is trained on the [LandCover.ai dataset](https://landcover.ai.linuxpolska.com/). It provides satellite images with 25 cm/px and 50 cm/px resolution. Annotation masks for the following classes are provided for the images: building (1), woodland (2), water(3), road(4). We use `DeepLabV3+` model with `tu-semnasnet_100` backend and `FocalDice` as a loss function. | [Image](https://chmura.put.poznan.pl/s/Xa29vnieNQTvSt5) |
| [Roads Segmentation](https://chmura.put.poznan.pl/s/y6S3CmodPy1fYYz) | 512 | 21 | The model segments the Google Earth satellite images into 'road' and 'not-road' classes. Model works best on wide car roads, crossroads and roundabouts. | [Image](https://chmura.put.poznan.pl/s/rln6mpbjpsXWpKg) |

## Regression models

| Model name | Input size | CM/PX | Description | Example image |
|---|---|---|---|---|
| | | | | |
| | | | | |

## Object detection models

| Model name | Input size | CM/PX | Description | Example image |
|---|---|---|---|---|
| [Airbus Planes Detection](https://chmura.put.poznan.pl/s/bBIJ5FDPgyQvJ49) | 256 | 70 | YOLOv7 tiny model for object detection on satellite images. Based on the [Airbus Aircraft Detection dataset](https://www.kaggle.com/datasets/airbusgeo/airbus-aircrafts-sample-dataset). | [Image](https://chmura.put.poznan.pl/s/VfLmcWhvWf0UJfI) |
| [Airbus Oil Storage Detection](https://chmura.put.poznan.pl/s/gMundpKsYUC7sNb) | 512 | 150 | YOLOv5-m model for object detection on satellite images. Based on the [Airbus Oil Storage Detection dataset](https://www.kaggle.com/datasets/airbusgeo/airbus-oil-storage-detection-dataset). | [Image](https://chmura.put.poznan.pl/s/T3pwaKlbFDBB2C3) |

## Contributing

* PRs with models are welcome! Please follow the [general model information](https://qgis-plugin-deepness.readthedocs.io/en/latest/creators/creators_description_classes.html).

* Use `MODEL_ZOO` tag in your PRs to make it easier to find them.

* If you need, you can check [how to export the model to ONNX](https://qgis-plugin-deepness.readthedocs.io/en/latest/creators/creators_example_onnx_model.html).

* And do not forget to [add metadata to the ONNX model](https://qgis-plugin-deepness.readthedocs.io/en/latest/creators/creators_add_metadata_to_model.html). You can host your model yourself or ask us to do it.
2 changes: 1 addition & 1 deletion plugin/deepness/README.md
@@ -1,4 +1,4 @@
# Deepness: Deep Neural rEmote SenSing
# Deepness: Deep Neural Remote Sensing

Plugin for QGIS to perform map/image segmentation, regression and object detection with (ONNX) neural network models.

@@ -13,7 +13,7 @@ class SegmentationParameters(MapProcessingParameters):
Parameters for Inference of Segmentation model (including pre/post-processing) obtained from UI.
"""

postprocessing_dilate_erode_size: int # dilate/erode operation size, once we have a single class map. 0 if inactive
postprocessing_dilate_erode_size: int  # dilate/erode operation size, once we have a single class map. 0 if inactive. Implementation may use a median filter instead of erode/dilate
model: ModelBase # wrapper of the loaded model

pixel_classification__probability_threshold: float # Minimum required class probability for pixel. 0 if disabled
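The probability-threshold parameter above can be illustrated with a small NumPy sketch (not the plugin's actual implementation; the array shapes and the `-1` sentinel for below-threshold pixels are assumptions):

```python
import numpy as np

# probs: per-pixel class probabilities from the model, shape (classes, H, W)
probs = np.array([
    [[0.9, 0.2], [0.4, 0.1]],   # class 0
    [[0.1, 0.8], [0.3, 0.2]],   # class 1
])
threshold = 0.5  # minimum required class probability for a pixel

class_map = probs.argmax(axis=0)            # best class per pixel
confident = probs.max(axis=0) >= threshold  # pixels meeting the threshold
class_map = np.where(confident, class_map, -1)  # -1 = no confident class
```

With the toy input above, the two top-row pixels keep their argmax classes (0 and 1), while both bottom-row pixels fall below the threshold and are marked `-1`.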
2 changes: 1 addition & 1 deletion plugin/deepness/deepness.py
@@ -230,7 +230,7 @@ def _are_map_processing_parameters_are_correct(self, params: MapProcessingParame
return True

def _display_processing_started_info(self):
msg = "Error! Please select the layer to process first!"
msg = "Processing in progress... Cool! It's tea time!"
self.iface.messageBar().pushMessage(PLUGIN_NAME, msg, level=Qgis.Info, duration=2)

def _run_training_data_export(self, training_data_export_parameters: TrainingDataExportParameters):