Yahia Dalbah, Jean Lahoud, Hisham Cholakkal
Mohamed Bin Zayed University of Artificial Intelligence (MBZUAI)
TransRadar has been accepted at WACV 2024.
If you find this work helpful in your research, please cite it using:
@InProceedings{Dalbah_2024_WACV,
author = {Dalbah, Yahia and Lahoud, Jean and Cholakkal, Hisham},
title = {TransRadar: Adaptive-Directional Transformer for Real-Time Multi-View Radar Semantic Segmentation},
booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
month = {January},
year = {2024},
pages = {353-362}
}
- Clone the repo:
$ git clone https://github.com/YahiDar/TransRadar.git
- Create a conda environment using:
cd $ROOT/TransRadar
conda env create -f env.yml
conda activate TransRadar
pip install -e .
Due to discrepancies with the scikit libraries, you might need to run:
pip install scikit-image
pip install scikit-learn
NOTE: We also provide a requirements.txt file for venv enthusiasts (see the sketch below).
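If you prefer venv over conda, a minimal setup could look like the following sketch (assuming a compatible Python is already installed; package versions may differ slightly from the conda environment):
# venv-based alternative to the conda environment (sketch)
cd $ROOT/TransRadar
python -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
pip install -e .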
- Dataset:
The CARRADA dataset is available on Valeo.ai's GitHub: https://github.com/valeoai/carrada_dataset.
You must specify the path where the data is stored and the directory where logs will be written. This is done through:
cd $ROOT/TransRadar/mvrss/utils/
python set_paths.py --carrada -dir containing the Carrada file- --logs -dir_to_output-
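For example, if the Carrada folder was extracted under /datasets (so that /datasets/Carrada exists) and logs should be written to /logs/transradar (both paths are purely illustrative), the call would be:
python set_paths.py --carrada /datasets --logs /logs/transradar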
- Train the model:
cd $ROOT/TransRadar/mvrss/
python -u train.py --cfg ./config_files/TransRadar.json --cp_store -dir_to_checkpoint_store-
Both this step and the previous one are included in the train_transrad.sh bash file (a sketch is shown below). Edit the directories to fit your setup.
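For reference, the script essentially chains the path-setting and training steps; a minimal sketch (the actual file in the repo may differ, and the paths are placeholders):
#!/bin/bash
# Sketch of train_transrad.sh: set the dataset/log paths, then launch training.
cd $ROOT/TransRadar/mvrss/utils/
python set_paths.py --carrada /datasets --logs /logs/transradar
cd $ROOT/TransRadar/mvrss/
python -u train.py --cfg ./config_files/TransRadar.json --cp_store /checkpoints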
You will find the trained model, along with the associated pre-trained weights, in the $ROOT/TransRadar/mvrss/carrada_logs/carrada/(model_name)/(model_name_version)/ directory. You can test using:
$ cd $ROOT/TransRadar/mvrss/
$ python -u test.py --cfg $ROOT/TransRadar/mvrss/carrada_logs/carrada/TransRadar/TransRadar_1/config.json
You can also use the test_transrad.sh file after editing the directories (a sketch is shown below).
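Similarly, test_transrad.sh essentially wraps the test command above; a minimal sketch (the actual file may differ, and the config path depends on your log directory):
#!/bin/bash
# Sketch of test_transrad.sh: evaluate a trained checkpoint from its config file.
cd $ROOT/TransRadar/mvrss/
python -u test.py --cfg $ROOT/TransRadar/mvrss/carrada_logs/carrada/TransRadar/TransRadar_1/config.json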
Important note:
The weights we provide yield slightly different results from those reported in the paper, since we trained multiple times and reported the outcome of those runs for fairness.
This repository heavily borrows from Multi-View Radar Semantic Segmentation.
The loss function implementation borrows from the Unified Focal Loss code.
The ADA code is adapted from Axial Attention.
Also, feel free to check out our work on the RODNet dataset in our other repository: https://github.com/yahidar/radarformer