Code of MICCAI 2023 paper: "Unsupervised Domain Adaptation for Anatomical Landmark Detection".
- Clone this repository.
```bash
git clone https://github.com/jhb86253817/UDA_Med_Landmark.git
```
- Create a new conda environment.
```bash
conda create -n uda_med_landmark python=3.9
conda activate uda_med_landmark
```
- Install the dependencies in `requirements.txt`.
```bash
pip install -r requirements.txt
```
- Head: Source domain of cephalometric landmark detection (Download Link). Put the downloaded `RawImage` and `AnnotationsByMD` under `data/Head/`.
- HeadNew: Target domain of cephalometric landmark detection (Download Link). If the link is not available, please read Section 4 "Usage Notes" of the paper (https://arxiv.org/pdf/2302.07797.pdf) for data downloading. Put the downloaded `Cephalograms` and `Cephalometric_Landmarks` of the training set under `data/HeadNew/`.
- JSRT: Source domain of lung landmark detection (Download Link). Put the downloaded `All247images` under `data/JSRT/`. Collect the landmark annotations from HybridGNet and put them under `data/JSRT/annos/`. Run the `preprocess.py` under `data/JSRT` to generate `Images`.
- MSP: Target domain of lung landmark detection, which consists of three datasets: Montgomery (Download Link), Shenzhen (Download Link), and Padchest (Download Link).
  1) For Montgomery, put the downloaded `CXR_png` under `data/Montgomery/`. Collect the landmark annotations from HybridGNet and put them under `data/Montgomery/annos_RL/` and `data/Montgomery/annos_LL/`. Run the `preprocess.py` under `data/Montgomery` to generate `Images`.
  2) For Shenzhen, put the downloaded `CXR_png` under `data/Shenzhen/`. Collect the landmark annotations from HybridGNet and put them under `data/Shenzhen/annos_RL/` and `data/Shenzhen/annos_LL/`. Run the `preprocess.py` under `data/Shenzhen` to generate `Images`.
  3) For Padchest, download the landmark annotations from here, unzip them, and put them under `data/Padchest/annos`. Then select the images with landmark annotations and put them under `data/Padchest/Images/`.
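The Padchest selection step above (keeping only images that have landmark annotations) can be sketched as follows. This is a minimal helper, not part of the repository; the annotation and image file extensions (`.npy`, `.png`) and the matching-by-filename-stem rule are assumptions and may differ from the actual release:

```python
import shutil
from pathlib import Path

def select_annotated_images(anno_dir, image_src_dir, image_dst_dir,
                            anno_ext=".npy", img_ext=".png"):
    """Copy into image_dst_dir only those images whose filename stem has a
    matching annotation file in anno_dir (extensions are assumptions)."""
    anno_dir, image_src_dir = Path(anno_dir), Path(image_src_dir)
    image_dst_dir = Path(image_dst_dir)
    image_dst_dir.mkdir(parents=True, exist_ok=True)
    # Stems of all annotation files, e.g. "img001" for "img001.npy".
    stems = {p.stem for p in anno_dir.glob(f"*{anno_ext}")}
    copied = []
    for img in image_src_dir.glob(f"*{img_ext}"):
        if img.stem in stems:
            shutil.copy(img, image_dst_dir / img.name)
            copied.append(img.name)
    return sorted(copied)
```

Usage would be along the lines of `select_annotated_images("downloads/annos", "downloads/images", "data/Padchest/Images")`, adjusting paths and extensions to the files you actually downloaded.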
You will have the following structure:
```
UDA_Med_Landmark
|-- data
    |-- Head
    |   |-- RawImage
    |   |-- AnnotationsByMD
    |-- HeadNew
    |   |-- Cephalograms
    |   |-- Cephalometric_Landmarks
    |   |-- img2size.json
    |   |-- img2dist.json
    |   |-- train_list.txt
    |   |-- test_list.txt
    |-- JSRT
    |   |-- All247images
    |   |-- preprocess.py
    |   |-- Images
    |   |-- annos
    |-- Montgomery
    |   |-- CXR_png
    |   |-- preprocess.py
    |   |-- Images
    |   |-- annos_RL
    |   |-- annos_LL
    |   |-- train_list.txt
    |   |-- test_list.txt
    |-- Shenzhen
    |   |-- CXR_png
    |   |-- preprocess.py
    |   |-- Images
    |   |-- annos_RL
    |   |-- annos_LL
    |   |-- train_list.txt
    |   |-- test_list.txt
    |-- Padchest
        |-- Images
        |-- annos
        |-- train_list.txt
        |-- test_list.txt
```
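Before preprocessing, it can help to sanity-check that the layout above is in place. A minimal sketch (the dataset subfolder names are taken from the tree above; the `data` root path and the checker itself are not part of the repository):

```python
from pathlib import Path

# Expected per-dataset subfolders, per the directory tree in the README.
EXPECTED = {
    "Head": ["RawImage", "AnnotationsByMD"],
    "HeadNew": ["Cephalograms", "Cephalometric_Landmarks"],
    "JSRT": ["All247images", "annos"],
    "Montgomery": ["CXR_png", "annos_RL", "annos_LL"],
    "Shenzhen": ["CXR_png", "annos_RL", "annos_LL"],
    "Padchest": ["Images", "annos"],
}

def missing_paths(data_root="data"):
    """Return the expected dataset subfolders that do not exist yet."""
    root = Path(data_root)
    return [str(root / ds / sub)
            for ds, subs in EXPECTED.items()
            for sub in subs
            if not (root / ds / sub).is_dir()]
```

Running `missing_paths()` from the `UDA_Med_Landmark` folder should return an empty list once all datasets are placed.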
Take cephalometric landmark detection as an example.
- Go to folder `lib`, run `preprocess.py Head` and `preprocess.py HeadNew` to preprocess the two datasets, respectively.
- Back in folder `UDA_Med_Landmark`, configure the command in `run_train.sh` as needed, then run `bash run_train.sh` to start training.
Take cephalometric landmark detection as an example.
- Preprocess `Head` and `HeadNew` the same way as in training.
- Back in folder `UDA_Med_Landmark`, configure the command in `run_test.sh` as needed, then run `bash run_test.sh` to start testing.
Trained model weights: