Bringing artificial intelligence on board Earth observation satellites unlocks unprecedented possibilities for extracting actionable information from various image modalities at a global scale in real time. This is of paramount importance nowadays, as downlinking large amounts of imagery is not only prohibitively expensive but also time-consuming. However, building deep learning solutions that can be deployed on board an edge device is challenging due to the scarcity of manually annotated satellite datasets and the hardware constraints of such devices. This paper addresses these challenges by harnessing a blend of data-centric and model-centric approaches to build a well-generalizing yet efficient and resource-frugal deep learning model for multi-class satellite image classification in a few-shot learning setting. This integrated strategy is formulated to enhance classification performance while accommodating the unique demands of an image analysis chain on board OPS-SAT, a nanosatellite operated by the European Space Agency. The experiments performed over a real-world dataset of OPS-SAT images delve into the interactions between data- and model-centric techniques, underscore the significance of synthesizing artificial training data, and emphasize the value of ensemble learning. However, they also caution against negative transfer in domain adaptation. This study sheds light on effective model training strategies and highlights the multifaceted challenges inherent in deep learning for practical Earth observation, contributing insights to the field of satellite image classification within the constraints of nanosatellite operations.
Paper: https://doi.org/10.1016/j.eswa.2024.123984
Welcome to the Few-shot Satellite Image Classification (OPS-SAT) repository. Follow the steps below to get started:
1. Clone the repository:
   ```shell
   git clone https://github.com/ShendoxParadox/Few-shot-satellite-image-classification-OPS-SAT.git
   ```
2. Navigate to the repository root folder:
   ```shell
   cd Few-shot-satellite-image-classification-OPS-SAT
   ```
3. Set up the environment (make sure you have Conda installed on your machine):
   ```shell
   # Create a virtual environment with Python 3.9
   conda create --name myenv python=3.9
   # Activate the virtual environment
   conda activate myenv
   # Install project dependencies
   pip install -r requirements.txt
   ```
4. Build the Docker image:
   ```shell
   docker build --no-cache -t ops_sat:latest .
   ```
5. Run the Docker container:
   ```shell
   docker run -it ops_sat
   ```
6. Modify the configuration: edit the `config.json` file as needed:
   ```shell
   nano config.json
   ```
7. Navigate to the source folder:
   ```shell
   cd src/
   ```
8. Run the OPS-SAT development script:
   ```shell
   python OPS_SAT_Dev.py
   ```
9. Choose a W&B option: follow the prompts to choose the Weights & Biases (W&B) option during script execution.
10. View run results: navigate to the W&B dashboard to observe the run results.

Alternatively, pull and run the prebuilt Docker image:
```shell
docker pull ramezshendy/ops_sat:latest
docker run -it ramezshendy/ops_sat:latest
```
For any additional information or troubleshooting, refer to the documentation or contact the repository owner.
The main options in `config.json`:

- Dataset Name: The OPS-SAT case dataset
- Dataset Variation Description: Augmented Color Corrected Synthetic Variation
- Training/Validation Dataset Path: ../Data/Variation_Synthetic_Generation_color_corrected_Augmentation/train/
- Test Dataset Path: ../Data/Variation_Synthetic_Generation_color_corrected_Augmentation/test/
Change the training and test dataset paths to any of the available dataset variations in the Data folder.
- Transfer Learning: false
  - When false, the model only uses standard ImageNet pretraining; when true, transfer learning techniques are applied.
- Transfer Learning Dataset: landuse
  - The available transfer learning datasets are: landuse, imagenet, opensurfaces.
- Project: OPS-SAT-Thesis-Project
- Input Shape: [200, 200, 3]
- Number of Classes: 8
- Dropout: 0.5
- Output Layer Activation: Softmax
- Model Optimizer: Adam
- Loss Function: FocalLoss
  - The implemented loss functions to choose from are: FocalLoss, SparseCategoricalCrossentropy.
- Model Metrics: [SparseCategoricalAccuracy]
- Early Stopping:
- Monitor: val_sparse_categorical_accuracy
- Patience: 6
- Model Checkpoint:
- Monitor: val_sparse_categorical_accuracy
- Cross Validation K-Fold: 5
- Number of Epochs: 200
- Batch Size: 4
- Focal Loss Parameters:
- Alpha: 0.2
- Gamma: 2
  - Only used when the loss function is FocalLoss.
- Number of Freeze Layers: 5
  - Only used when Transfer Learning is true.
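Taken together, these options map onto a `config.json` roughly like the following. This is an illustrative sketch only: the key names here are inferred from the option names above and may differ from those in the shipped `config.json`.

```json
{
  "dataset_name": "The OPS-SAT case dataset",
  "train_path": "../Data/Variation_Synthetic_Generation_color_corrected_Augmentation/train/",
  "test_path": "../Data/Variation_Synthetic_Generation_color_corrected_Augmentation/test/",
  "transfer_learning": false,
  "transfer_learning_dataset": "landuse",
  "project": "OPS-SAT-Thesis-Project",
  "input_shape": [200, 200, 3],
  "num_classes": 8,
  "dropout": 0.5,
  "output_activation": "Softmax",
  "optimizer": "Adam",
  "loss_function": "FocalLoss",
  "metrics": ["SparseCategoricalAccuracy"],
  "early_stopping": {"monitor": "val_sparse_categorical_accuracy", "patience": 6},
  "model_checkpoint": {"monitor": "val_sparse_categorical_accuracy"},
  "k_fold": 5,
  "epochs": 200,
  "batch_size": 4,
  "focal_loss": {"alpha": 0.2, "gamma": 2},
  "freeze_layers": 5
}
```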
Repository structure:

- /OPS-SAT-Thesis-Project
- /Data
- /Variation_Synthetic_Generation_color_corrected_Augmentation
- /train
- /Agricultural
- /Cloud
- /Mountain
- /Natural
- /River
- /Sea_ice
- /Snow
- /Water
- /test
- /ops_sat
- /Variation_Augmentation
- /Variation_Original
- /Variation_Synthetic_Generation
- /Variation_Synthetic_Generation_color_corrected
- /src
- OPS_SAT_Dev.py
- color_correction.py
- image_augmentation.py
    - Additional source code files
- /notebooks
- /models
- best_weights.h5
- fold_1_best_model_weights.h5
- fold_2_best_model_weights.h5
- fold_3_best_model_weights.h5
- fold_4_best_model_weights.h5
- fold_5_best_model_weights.h5
- README.md
- Dockerfile
- config.json
- .gitignore
- requirements.txt
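The FocalLoss option above refers to focal loss, which down-weights well-classified ("easy") examples via the Gamma exponent and rescales the loss by Alpha. The following is a minimal NumPy sketch for sparse integer labels; the function name and exact form are illustrative and are not the repository's implementation:

```python
import numpy as np

def sparse_focal_loss(y_true, y_prob, alpha=0.2, gamma=2.0):
    """Mean focal loss FL(p_t) = -alpha * (1 - p_t)**gamma * log(p_t)
    for integer class labels y_true and per-class probabilities y_prob."""
    # Probability the model assigned to the true class of each sample
    p_t = np.clip(y_prob[np.arange(len(y_true)), y_true], 1e-7, 1.0)
    return float(np.mean(-alpha * (1.0 - p_t) ** gamma * np.log(p_t)))

labels = np.array([0, 1])
probs = np.array([[0.9, 0.1], [0.2, 0.8]])
# With gamma > 0, confident predictions contribute less than under
# plain (alpha-scaled) cross-entropy, i.e. gamma = 0:
print(sparse_focal_loss(labels, probs, alpha=0.2, gamma=2.0))
print(sparse_focal_loss(labels, probs, alpha=0.2, gamma=0.0))
```

With `gamma=0` and `alpha=1` this reduces to ordinary sparse categorical cross-entropy, which is a quick sanity check for any focal loss implementation.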