U-Net for image segmentation

This project is inspired by the architecture from the original paper: U-Net: Convolutional Networks for Biomedical Image Segmentation

Binary semantic segmentation

[Example results: original image, ground-truth mask, and predicted mask]

Review training on Colab:

  • Unet --> Open In Colab
  • Mobilenetv2 - Unet --> Open In Colab
  • Resnet50 - Unet --> Open In Colab


Multiclass semantic segmentation with Resnet50-Unet (Car Segmentation)

Review training on Kaggle:

Kaggle

Overview

In this model, I keep INPUT_SHAPE and OUTPUT_SHAPE equal. When the input and output dimensions match, the model does not lose positional or spatial information; many state-of-the-art segmentation models likewise produce a mask resized to the same size as the input.
To achieve this, I use padding = same in the convolutional layers and LeakyReLU as an alternative to ReLU.
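For illustration, here is a minimal sketch of a convolution block built this way in TensorFlow/Keras. It is not the repository's exact code; the function name, filter count, and input shape are assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers

def conv_block(x, filters):
    # padding='same' keeps height and width unchanged, so the final
    # mask can match the input resolution exactly.
    x = layers.Conv2D(filters, 3, padding="same")(x)
    x = layers.LeakyReLU()(x)  # LeakyReLU in place of ReLU
    x = layers.Conv2D(filters, 3, padding="same")(x)
    x = layers.LeakyReLU()(x)
    return x

# INPUT_SHAPE == OUTPUT_SHAPE: a 64x64 grayscale image in, a 64x64 mask out.
inputs = tf.keras.Input(shape=(64, 64, 1))
features = conv_block(inputs, 32)
```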

In addition, I have implemented U-Net variants that use MobileNetV2 and ResNet50 as encoder backbones.
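As a rough sketch of how a pretrained backbone can serve as the encoder, the snippet below collects intermediate MobileNetV2 activations to use as U-Net skip connections. The layer names and input size are common choices for MobileNetV2, not necessarily the ones used in this repository.

```python
import tensorflow as tf

# Pretrained MobileNetV2 without its classification head.
backbone = tf.keras.applications.MobileNetV2(
    input_shape=(64, 64, 3), include_top=False, weights="imagenet")

# Intermediate activations reused as skip connections by the decoder.
skip_names = ["block_1_expand_relu", "block_3_expand_relu",
              "block_6_expand_relu", "block_13_expand_relu"]
skips = [backbone.get_layer(name).output for name in skip_names]

encoder = tf.keras.Model(inputs=backbone.input,
                         outputs=skips + [backbone.output])
```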

Original U-Net architecture

[Figure: U-Net architecture diagram from the original paper]

Author

Usage

Dependencies

  • python >= 3.9
  • numpy >= 1.20.3
  • tensorflow >= 2.7.0
  • opencv >= 4.5.4
  • matplotlib >= 3.4.3
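Assuming a standard pip setup (note that OpenCV is published on PyPI as opencv-python), the dependencies can be installed with:

pip install "numpy>=1.20.3" "tensorflow>=2.7.0" "opencv-python>=4.5.4" "matplotlib>=3.4.3"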

Train your model by running the training script:

python train.py --all-train ${path_to_train_image_folder} --all-train ${path_to_train_mask_folder} --epochs ${epochs}

Example:

python train.py  --all-train frames/*.jpg --all-train masks/*.jpg --batch-size 4 --classes 2 --epochs 2 --color-mode gray --use-kmean True --image-size 64 

Some important arguments to consider when running the script:

  • all-train: The training data; passed twice, first the image files and then the mask files (see the examples above)
  • all-valid: The validation data
  • color-mode: hsv, rgb, or gray
  • lr: The learning rate
  • image-size: The size (in pixels) that input images are resized to
  • model-save: Where the trained model is saved
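Putting these together, a fuller invocation might look like the following; the paths and the model-save filename are placeholders, and the exact flag syntax should be checked against train.py:

python train.py --all-train frames/*.jpg --all-train masks/*.jpg --lr 0.001 --image-size 64 --epochs 10 --model-save unet_model.h5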

Feedback

If you run into any issues while using this project, please let us know via the Issues tab.