RetinaFace is a practical single-stage face detector, initially described in an arXiv technical report.
## Data

- Download our annotations (face bounding boxes & five facial landmarks) from baidu cloud or dropbox.
- Download the WIDER FACE dataset.
- Organise the dataset directory under `insightface/RetinaFace/` as follows:

  ```
  data/retinaface/
    train/
      images/
      label.txt
    val/
      images/
      label.txt
    test/
      images/
      label.txt
  ```
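The `label.txt` files above hold the face annotations. As a rough sketch only (the exact field order here is an assumption based on the commonly distributed RetinaFace annotation dump, not confirmed by this README: a line starting with `#` names an image, and each following line lists one face as `x y w h` optionally followed by five `x y visibility` landmark triples), a minimal parser might look like:

```python
def parse_labels(text):
    """Parse a RetinaFace-style label.txt into {image_path: [face dicts]}.

    Assumed format (hypothetical, for illustration): '# relative/path.jpg'
    introduces an image; each subsequent line holds one face as
    'x y w h' optionally followed by five 'lx ly vis' landmark triples.
    """
    annotations = {}
    current = None
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue
        if line.startswith('#'):
            # New image entry
            current = line[1:].strip()
            annotations[current] = []
            continue
        values = [float(v) for v in line.split()]
        face = {'bbox': values[:4]}  # x, y, width, height
        if len(values) >= 19:  # 4 bbox values + 5 landmarks * (x, y, visibility)
            face['landmarks'] = [(values[i], values[i + 1]) for i in range(4, 19, 3)]
        annotations[current].append(face)
    return annotations
```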
## Install

- Install MXNet with GPU support.
- Install the Deformable Convolution V2 operator from Deformable-ConvNets if you use a DCN-based backbone.
- Run `make` to build the cxx tools.
## Training

Please check `train.py` for training.

- Copy `rcnn/sample_config.py` to `rcnn/config.py`.
- Download the pretrained models and put them into `model/`:
  - ImageNet ResNet50 (baidu cloud and dropbox)
  - ImageNet ResNet152 (baidu cloud and dropbox)
- Start training with:

  ```
  CUDA_VISIBLE_DEVICES='0,1,2,3' python -u train.py --prefix ./model/retina --network resnet
  ```

  Before training, you can check the `resnet` network configuration (e.g. pretrained model path, anchor settings, learning-rate policy) in `rcnn/config.py`.
- We have two predefined network settings named `resnet` (for medium and large models) and `mnet` (for lightweight models).
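The `--prefix` flag above follows MXNet's usual checkpoint naming: training writes one network-definition file plus per-epoch parameter snapshots under that path stem. A small sketch of the convention (filenames only; this assumes MXNet's standard `%04d` epoch numbering):

```python
def checkpoint_files(prefix, epoch):
    # MXNet checkpoint convention: a symbol file '<prefix>-symbol.json'
    # plus one parameter snapshot per saved epoch '<prefix>-NNNN.params'.
    return f"{prefix}-symbol.json", f"{prefix}-{epoch:04d}.params"

sym, params = checkpoint_files("./model/retina", 1)
# './model/retina-symbol.json', './model/retina-0001.params'
```

A model saved this way can later be loaded for testing by the same prefix and epoch number.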
## Testing

Please check `test.py` for testing.
## Models

Pretrained model: RetinaFace-R50 (baidu cloud or dropbox) is a medium-size model with a ResNet50 backbone. It outputs face bounding boxes and five facial landmarks in a single forward pass.

WIDER FACE validation mAP: Easy 96.5, Medium 95.6, Hard 90.4.

To avoid conflict with the WIDER FACE Challenge (ICCV 2019), we are postponing the release of our best model.
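A single forward pass yields many overlapping candidate boxes per face; as in most single-stage detectors, these are merged with non-maximum suppression before output. The following pure-NumPy sketch illustrates the standard greedy NMS algorithm (an illustration only, not this repo's compiled implementation; the 0.4 IoU threshold is an arbitrary example value):

```python
import numpy as np

def nms(dets, iou_threshold=0.4):
    """Greedy non-maximum suppression.

    dets: (N, 5) array of [x1, y1, x2, y2, score]; returns kept row indices.
    """
    x1, y1, x2, y2, scores = dets[:, 0], dets[:, 1], dets[:, 2], dets[:, 3], dets[:, 4]
    areas = (x2 - x1 + 1) * (y2 - y1 + 1)
    order = scores.argsort()[::-1]  # highest score first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        # Intersection of the winning box with all remaining boxes
        xx1 = np.maximum(x1[i], x1[order[1:]])
        yy1 = np.maximum(y1[i], y1[order[1:]])
        xx2 = np.minimum(x2[i], x2[order[1:]])
        yy2 = np.minimum(y2[i], y2[order[1:]])
        w = np.maximum(0.0, xx2 - xx1 + 1)
        h = np.maximum(0.0, yy2 - yy1 + 1)
        inter = w * h
        iou = inter / (areas[i] + areas[order[1:]] - inter)
        # Keep only boxes that overlap the winner less than the threshold
        order = order[1:][iou <= iou_threshold]
    return keep
```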
## References

```
@inproceedings{yang2016wider,
  title={WIDER FACE: A Face Detection Benchmark},
  author={Yang, Shuo and Luo, Ping and Loy, Chen Change and Tang, Xiaoou},
  booktitle={CVPR},
  year={2016}
}

@inproceedings{deng2019retinaface,
  title={RetinaFace: Single-stage Dense Face Localisation in the Wild},
  author={Deng, Jiankang and Guo, Jia and Zhou, Yuxiang and Yu, Jinke and Kotsia, Irene and Zafeiriou, Stefanos},
  booktitle={arXiv},
  year={2019}
}
```