
Few-shot satellite image classification for bringing deep learning on board OPS-SAT

Abstract

Bringing artificial intelligence on board Earth observation satellites unlocks unprecedented possibilities to extract actionable items from various image modalities at the global scale in real time. This is of paramount importance nowadays, as downlinking large amounts of imagery is not only prohibitively expensive but also time-consuming. However, building deep learning solutions that could be deployed on board an edge device is challenging due to the limited manually annotated satellite datasets and the hardware constraints of such devices. This paper addresses these challenges by harnessing a blend of data-centric and model-centric approaches to build a well-generalizing yet efficient and resource-frugal deep learning model for multi-class satellite image classification in a few-shot learning setting. This integrated strategy is formulated to enhance classification performance while accommodating the unique demands of an image analysis chain on board OPS-SAT, a nanosatellite operated by the European Space Agency. The experiments performed over a real-world dataset of OPS-SAT images delve into the interactions between data- and model-centric techniques, underscore the significance of synthesizing artificial training data, and emphasize the value of ensemble learning. However, they also caution against negative transfer in domain adaptation. This study sheds light on effective model training strategies and highlights the multifaceted challenges inherent in deep learning for practical Earth observation, contributing insights to the field of satellite image classification within the constraints of nanosatellite operations.

https://doi.org/10.1016/j.eswa.2024.123984

Few-shot Satellite Image Classification (OPS-SAT)

Welcome to the Few-shot Satellite Image Classification (OPS-SAT) repository. Follow the steps below to get started:

Usage Guide

  1. Clone the Repository:

    git clone https://github.com/ShendoxParadox/Few-shot-satellite-image-classification-OPS-SAT.git
  2. Navigate to Repo Root Folder:

    cd Few-shot-satellite-image-classification-OPS-SAT

Using Conda

Create a Virtual Environment

Make sure you have Conda installed on your machine.

# Create a virtual environment with Python 3.9
conda create --name myenv python=3.9

# Activate the virtual environment
conda activate myenv

# Install project dependencies
pip install -r requirements.txt

Using Docker

  1. Build Docker Image:

    docker build --no-cache -t ops_sat:latest .
  2. Run Docker Container:

    docker run -it ops_sat
  3. Modify Configuration: Edit the config.json file as needed:

    nano config.json
  4. Navigate to Source Folder:

    cd src/
  5. Run OPS-SAT Development Script:

    python OPS_SAT_Dev.py
  6. Choose W&B Option: Follow the prompts to choose the WandB option during script execution.

  7. View Run Results: Navigate to the WandB dashboard to observe the run results.

  8. (Alternatively) Pull and run the prebuilt Docker image:

    docker pull ramezshendy/ops_sat:latest
    docker run -it ramezshendy/ops_sat:latest

For any additional information or troubleshooting, refer to the documentation or contact the repository owner.
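Step 3 of the Docker workflow edits config.json. The following Python snippet assembles a hypothetical fragment from the Config file guide below and round-trips it through JSON; the key names here are illustrative assumptions, not necessarily the repository's exact schema:

```python
import json

# Hypothetical config.json fragment based on the Config file guide in this
# README. Key names are assumptions; check the repository's config.json for
# the actual schema.
example_config = {
    "dataset_name": "The OPS-SAT case dataset",
    "train_path": "../Data/Variation_Synthetic_Generation_color_corrected_Augmentation/train/",
    "test_path": "../Data/Variation_Synthetic_Generation_color_corrected_Augmentation/test/",
    "transfer_learning": False,
    "transfer_learning_dataset": "landuse",
    "input_shape": [200, 200, 3],
    "num_classes": 8,
    "dropout": 0.5,
    "optimizer": "Adam",
    "loss_function": "FocalLoss",
    "focal_loss": {"alpha": 0.2, "gamma": 2},
    "k_fold": 5,
    "epochs": 200,
    "batch_size": 4,
}

# Serialize and parse again to confirm the fragment is valid JSON content.
serialized = json.dumps(example_config, indent=2)
loaded = json.loads(serialized)
```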

Config file guide:

  • Dataset Name: The OPS-SAT case dataset
  • Dataset Variation Description: Augmented Color Corrected Synthetic Variation

Dataset Paths

  • Training/Validation Dataset Path: ../Data/Variation_Synthetic_Generation_color_corrected_Augmentation/train/
  • Test Dataset Path: ../Data/Variation_Synthetic_Generation_color_corrected_Augmentation/test/
    Change the path of the training and test datasets from the available dataset variations in the Data folder.
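Each variation's train/ directory contains one subfolder per class (see Project Structure below). A small sketch for enumerating them, assuming that per-class folder layout:

```python
from pathlib import Path

# Minimal sketch: list the class subfolders of a dataset variation's train/
# directory. Assumes the one-folder-per-class layout described in this README.
def list_classes(train_dir: str) -> list[str]:
    return sorted(p.name for p in Path(train_dir).iterdir() if p.is_dir())
```

Pointing this at the training path above should return the eight class names (Agricultural through Water).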

Model Configuration

  • Transfer Learning: false
    When false, the model uses only ImageNet pretraining. When true, it applies transfer learning from the dataset specified below.
  • Transfer Learning Dataset: landuse
    The available transfer learning datasets are: landuse, imagenet, opensurfaces

Model Parameters

  • Project: OPS-SAT-Thesis-Project
  • Input Shape: [200, 200, 3]
  • Number of Classes: 8
  • Dropout: 0.5
  • Output Layer Activation: Softmax
  • Model Optimizer: Adam
  • Loss Function: FocalLoss
    The implemented loss functions are: FocalLoss, SparseCategoricalCrossentropy
  • Model Metrics: [SparseCategoricalAccuracy]
  • Early Stopping:
    • Monitor: val_sparse_categorical_accuracy
    • Patience: 6
  • Model Checkpoint:
    • Monitor: val_sparse_categorical_accuracy
  • Cross Validation K-Fold: 5
  • Number of Epochs: 200
  • Batch Size: 4
  • Focal Loss Parameters:
    • Alpha: 0.2
    • Gamma: 2
      Used only when the loss function is FocalLoss.
  • Number of Freeze Layers: 5
    Used only when transfer learning is true.
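With Alpha 0.2 and Gamma 2, focal loss down-weights well-classified examples so that training focuses on hard ones. A minimal NumPy sketch of sparse categorical focal loss follows; it illustrates the formula only and is not the repository's exact (Keras-based) implementation:

```python
import numpy as np

# Hedged NumPy sketch of sparse categorical focal loss with the README's
# defaults alpha=0.2, gamma=2. The actual implementation in the repository
# may differ in detail.
def focal_loss(y_true, y_pred, alpha=0.2, gamma=2.0, eps=1e-7):
    """y_true: int class indices (n,); y_pred: softmax probabilities (n, c)."""
    y_true = np.asarray(y_true)
    # Probability assigned to the true class of each sample.
    p_t = np.clip(y_pred[np.arange(y_true.size), y_true], eps, 1.0 - eps)
    # The (1 - p_t)**gamma factor shrinks the loss of confident predictions.
    return float(np.mean(-alpha * (1.0 - p_t) ** gamma * np.log(p_t)))
```

Compared with plain cross-entropy, confident correct predictions contribute almost nothing to the loss, which is helpful under the class imbalance typical of few-shot settings.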

Supplementary Links

Project Structure

- /OPS-SAT-Thesis-Project
  - /Data
    - /Variation_Synthetic_Generation_color_corrected_Augmentation
      - /train
        - /Agricultural
        - /Cloud
        - /Mountain
        - /Natural
        - /River
        - /Sea_ice
        - /Snow
        - /Water
      - /test
    - /ops_sat
    - /Variation_Augmentation
    - /Variation_Original
    - /Variation_Synthetic_Generation
    - /Variation_Synthetic_Generation_color_corrected
  - /src
    - OPS_SAT_Dev.py
    - color_correction.py
    - image_augmentation.py
    - Your source code files
  - /notebooks
  - /models
    - best_weights.h5
    - fold_1_best_model_weights.h5
    - fold_2_best_model_weights.h5
    - fold_3_best_model_weights.h5
    - fold_4_best_model_weights.h5
    - fold_5_best_model_weights.h5
  - README.md
  - Dockerfile
  - config.json
  - .gitignore
  - requirements.txt
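The models folder stores the best weights of each of the five cross-validation folds, and the abstract emphasizes the value of ensemble learning. A minimal sketch of soft-voting over the fold models follows; restoring each fold_k_best_model_weights.h5 into the actual Keras model to produce the softmax outputs is assumed to happen elsewhere:

```python
import numpy as np

# Sketch: soft-voting ensemble over the five fold models. Each entry of
# fold_probs is the (n_samples, n_classes) softmax output of one fold model.
def ensemble_predict(fold_probs):
    # Average the per-fold class probabilities, then take the argmax.
    mean_probs = np.mean(np.stack(fold_probs, axis=0), axis=0)
    return np.argmax(mean_probs, axis=1)
```

Averaging probabilities (rather than hard votes) lets a fold that is very confident outweigh several lukewarm ones.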

WandB dashboard

The following are examples of what can be found in the WandB dashboard after each run:

  • Class Accuracies
  • Correct Predictions
  • Wrong Predictions
  • Train - Val Accuracy
  • Train - Val Loss
  • System Charts
  • Run Info
  • Configuration Parameters
  • Model Files
  • Runs Parallel Plot