This repository provides the source code of the StomaDetector presented in "Additive effects of stomata density and leaf ABA levels on maize water use efficiency". Here, we trained and evaluated a Faster R-CNN-based object detector to detect stomata in microscopy images of maize leaves. The predicted stomatal counts (and thus the calculated stomatal density) are highly correlated with manual counts (r = 0.996).
Here we show the results of the best-performing model. The mAP50 on a hold-out test set was almost 99%, indicating good generalization.

Here we show some example predictions from the hold-out test set. The main difficulty for the model is images with severe out-of-focus blur, which could be rectified during image acquisition.
- Clone this project

```bash
git clone git@github.com:grimmlab/StomaDet.git
```
- Adjust the Dockerfile so that the user matches your username and your user and group IDs (`id -u` and `id -g`); a minimal sketch of the relevant lines follows after this list. That way, you will have ownership of the files generated inside Docker.
- Install StomaDet
```bash
cd StomaDet
docker build -t stoma_det .
```
- Download the dataset (`data.zip`) from Mendeley Data (TODO: add link) and paste the unzipped files into `StomaDet/data`.
- Start the docker container

```bash
docker run -it -v /path/to/github/repo/StomaDet:/workspace --workdir /workspace --runtime nvidia --gpus device=0 --ipc=host --ulimit memlock=-1 --ulimit stack=67108864 --name stoma_det1 stoma_det
```
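For reference, the user-related part of such a Dockerfile typically looks like the following sketch. The argument names and defaults here are illustrative and not necessarily identical to the shipped Dockerfile; replace them with your own values from `id -u`, `id -g`, and `whoami`.

```dockerfile
# Illustrative sketch only -- the actual Dockerfile in this repository may differ.
# Set UID/GID/USERNAME to the output of `id -u`, `id -g`, and `whoami`.
ARG USERNAME=yourname
ARG UID=1000
ARG GID=1000

# Create a matching user inside the image and switch to it, so that files
# written to the mounted workspace are owned by you on the host.
RUN groupadd --gid ${GID} ${USERNAME} \
    && useradd --uid ${UID} --gid ${GID} --create-home ${USERNAME}
USER ${USERNAME}
```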
1. Image normalisation using the mean and standard deviation of the training dataset

```bash
python3 calc_mean_std.py
```

This will return the following results, which are the values used in the scripts `train_faster_rcnn.py` and `retrain_faster_rcnn.py`:

- mean=tensor([0.8020, 0.8020, 0.8020])
- std=tensor([0.1936, 0.1936, 0.1936])
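For illustration, the per-channel statistics can be computed with a running sum over the training images, roughly as in the following sketch. The image directory, file pattern, and the assumption of 3-channel images are ours; `calc_mean_std.py` contains the actual implementation.

```python
# Sketch: per-channel mean/std over a folder of training images.
# Assumes 3-channel images under data/train (both are assumptions).
from pathlib import Path

import torch
from torchvision.io import read_image

n_pixels = 0
channel_sum = torch.zeros(3)
channel_sq_sum = torch.zeros(3)
for path in sorted(Path("data/train").glob("*.png")):
    img = read_image(str(path)).float() / 255.0  # (3, H, W), values in [0, 1]
    n_pixels += img.shape[1] * img.shape[2]
    channel_sum += img.sum(dim=(1, 2))
    channel_sq_sum += (img ** 2).sum(dim=(1, 2))

# E[x] and sqrt(E[x^2] - E[x]^2) over all pixels of all images
mean = channel_sum / n_pixels
std = torch.sqrt(channel_sq_sum / n_pixels - mean ** 2)
print(f"mean={mean}, std={std}")
```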
2. Dataset splits

```bash
python3 split_dataset.py
```

This script will use the information in `StomaDet/data/dataset.csv` to create a stratified train-val-test split. The split might vary due to differences in the random number generators on different machines, so we also made our split dataset files publicly available. CAUTION: the downloaded split files will be overwritten by this script.
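For illustration, a stratified split can be produced with scikit-learn roughly as follows. The column names, split ratios, and seed are assumptions; `split_dataset.py` contains the actual logic.

```python
# Sketch: stratified train/val/test split of dataset.csv.
# The "stratum" column name and the 60/20/20 ratios are assumptions.
import pandas as pd
from sklearn.model_selection import train_test_split

df = pd.read_csv("data/dataset.csv")

# Split off the test set first, then split the remainder into train and val.
trainval, test = train_test_split(
    df, test_size=0.2, stratify=df["stratum"], random_state=42)
train, val = train_test_split(
    trainval, test_size=0.25, stratify=trainval["stratum"], random_state=42)

for name, part in (("train", train), ("val", val), ("test", test)):
    part.to_csv(f"data/{name}.csv", index=False)
```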
3. Hyperparameter Optimisation

```bash
python3 hyperparameter_optimization.py
```

This script will run the complete hyperparameter optimisation (210 different hyperparameter sets) by calling the script `train_faster_rcnn.py` multiple times. Caution: this script will run for a long time, so it is advised to use smaller batch sizes, as they performed slightly better in our experiments.
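In essence, the sweep is an outer loop over a hyperparameter grid that launches one training run per set, roughly as in this sketch. The grid values below are assumptions (they do not multiply out to the 210 sets of the actual sweep); the flag names are taken from the retraining command in step 5.

```python
# Sketch: launch one train_faster_rcnn.py run per hyperparameter combination.
# Grid values are placeholders, not the grid used in the paper.
import itertools
import subprocess

batch_sizes = [1, 2, 4]
roi_sizes = [64, 128, 256]
learning_rates = [1e-4, 5e-4, 1e-3, 5e-3, 1e-2]

for bs, roi, lr in itertools.product(batch_sizes, roi_sizes, learning_rates):
    subprocess.run(
        ["python3", "train_faster_rcnn.py",
         "--bs", str(bs), "--roi", str(roi), "--lr", str(lr)],
        check=True)
```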
4. Selection of the best hyperparameter set

```bash
python3 generate_hp_table.py
```

This script will use the `metrics.json` generated by `train_faster_rcnn.py` for each trained model and create a summary CSV file, from which the best hyperparameter set can be selected. For reference, we made the output of this script available (see `/StomaDet/output/hp_table.csv`).
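At its core, the summary is built by collecting the final entry of each run's `metrics.json` into one table, roughly as in this sketch. The directory layout and the metric key are assumptions based on typical Detectron2-style output; see `generate_hp_table.py` for the actual code.

```python
# Sketch: collect the last JSON line of each run's metrics.json into a CSV.
# The output/*/metrics.json layout and the "bbox/AP50" key are assumptions.
import json
from pathlib import Path

import pandas as pd

rows = []
for metrics_file in Path("output").glob("*/metrics.json"):
    last = json.loads(metrics_file.read_text().strip().splitlines()[-1])
    rows.append({"run": metrics_file.parent.name, "AP50": last.get("bbox/AP50")})

table = pd.DataFrame(rows).sort_values("AP50", ascending=False)
table.to_csv("output/hp_table.csv", index=False)
print(table.head())
```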
5. Re-training of the model using the best hyperparameter set

```bash
python3 retrain_faster_rcnn.py --bs 1 --roi 256 --max_iter 9800 --lr 0.00199565293928084
```

This script will re-train a model and save it in the directory `output` with the prefix `trainval_`. We renamed the model and made it publicly available via Mendeley Data under `models.zip` (TODO: add link).
We provide an easy way to run inference through a web app called StomaApp. Please follow the instructions here to install it. The next sections show examples of how we used the StomaApp to generate some of the results shown in our paper.
Download the `details.csv` file from the StomaApp and adjust the script `process_test_set.py` accordingly. Alternatively, you can download the predictions for the test set (inside `predictions.zip`) from Mendeley Data (extract into `/StomaDet/data`) and run the following command:

```bash
python3 process_test_set.py
```

The generated images will be saved to `/StomaDet/output/predictions`.
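Conceptually, the script overlays the predicted boxes from `details.csv` on the corresponding test images, roughly as in this sketch. The file locations and column names are assumptions; see `process_test_set.py` for the actual code.

```python
# Sketch: draw predicted bounding boxes from details.csv onto the test images.
# Paths and the image/x_min/y_min/x_max/y_max column names are assumptions.
from pathlib import Path

import pandas as pd
from PIL import Image, ImageDraw

out_dir = Path("output/predictions")
out_dir.mkdir(parents=True, exist_ok=True)

df = pd.read_csv("data/details.csv")
for image_name, boxes in df.groupby("image"):
    img = Image.open(Path("data/test_images") / image_name).convert("RGB")
    draw = ImageDraw.Draw(img)
    for _, b in boxes.iterrows():
        draw.rectangle((b["x_min"], b["y_min"], b["x_max"], b["y_max"]),
                       outline="red", width=3)
    img.save(out_dir / image_name)
```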
Download the `summary.csv` file from the StomaApp and adjust the script `process_inference_set.py` accordingly. Additionally, you need the corresponding ground truth stomatal counts. For reference, you can download `predictions.zip` and `inference.zip` from Mendeley Data (extract into `/StomaDet/data`) and run the following command:

```bash
python3 process_inference_set.py
```
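In essence, this comes down to merging the predicted and manual counts per image and computing a Pearson correlation, as in the following sketch. The file and column names are assumptions; see `process_inference_set.py` for the actual code.

```python
# Sketch: Pearson correlation between predicted and manual stomatal counts.
# summary.csv comes from the StomaApp; the ground-truth file and all column
# names are assumptions.
import pandas as pd
from scipy.stats import pearsonr

pred = pd.read_csv("data/summary.csv")        # predicted counts per image
truth = pd.read_csv("data/ground_truth.csv")  # manual counts per image

merged = pred.merge(truth, on="image")
r, p = pearsonr(merged["predicted_count"], merged["manual_count"])
print(f"r = {r:.3f} (p = {p:.2e})")
```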