This repository contains the official implementation of real-time motion segmentation with geometric priors from our IROS'18 paper:
@inproceedings{siam2018real,
title={Real-Time Segmentation with Appearance, Motion and Geometry},
author={Siam, Mennatullah and Elkerdawy, Sara and Gamal, Mostafa and Abdel-Razek, Moemen and Jagersand, Martin and Zhang, Hong},
booktitle={2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
pages={5793--5800},
year={2018},
organization={IEEE}
}
Real-time segmentation is crucial for robotics-related applications such as autonomous driving, driver-assistance systems, and traffic monitoring from unmanned aerial vehicle (UAV) imagery. We propose a novel two-stream convolutional network for motion segmentation that exploits optical flow and geometric cues to balance the trade-off between accuracy and computational efficiency. The geometric cues take advantage of domain knowledge of the application: for mostly planar scenes viewed from high-altitude UAVs, homography-compensated flow is used, while for urban autonomous-driving scenes with GPS/IMU sensory data available, sparse projected depth estimates and odometry information are used. The network provides a 4x speedup over state-of-the-art motion segmentation networks, at the cost of reduced segmentation accuracy along pixel boundaries.
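To illustrate the homography-compensated flow cue used in the UAV case, below is a minimal sketch (not the repo's code; the function name and OpenCV parameter choices are illustrative assumptions). The dominant planar camera motion between two frames is estimated as a homography from tracked background features, the previous frame is warped to compensate for it, and the residual dense flow then highlights independently moving objects.

import cv2

def homography_compensated_flow(prev_gray, curr_gray):
    # Track sparse features between frames to estimate the dominant
    # (background) motion as a planar homography.
    pts_prev = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500,
                                       qualityLevel=0.01, minDistance=7)
    pts_curr, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray,
                                                   pts_prev, None)
    good_prev = pts_prev[status.flatten() == 1]
    good_curr = pts_curr[status.flatten() == 1]
    H, _ = cv2.findHomography(good_prev, good_curr, cv2.RANSAC, 3.0)

    # Warp the previous frame so camera-induced (planar) motion is removed.
    h, w = prev_gray.shape
    prev_warped = cv2.warpPerspective(prev_gray, H, (w, h))

    # Dense residual flow: what remains is dominated by independently
    # moving objects rather than camera motion over the planar scene.
    flow = cv2.calcOpticalFlowFarneback(prev_warped, curr_gray, None,
                                        pyr_scale=0.5, levels=3, winsize=15,
                                        iterations=3, poly_n=5, poly_sigma=1.2,
                                        flags=0)
    return flow  # HxWx2 array of per-pixel (dx, dy) displacements

For the autonomous-driving (KITTI) case, the analogous geometric cue is instead formed from sparse projected depth estimates combined with odometry from GPS/IMU, as described above.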
Python 3.5.2
Tensorflow 1.4
Use the sample commands from run.sh:
python3 main.py --load_config=fcn8s_2stream_shufflenet_test.yaml test Train2Stream FCN8s2StreamShuffleNetLate
python3 main.py --load_config=fcn8s_2stream_shufflenet_train.yaml train Train2Stream FCN8s2StreamShuffleNetLate
UAV Imagery VIVID Original Flow
UAV Imagery VIVID Homography Compensated Flow
The labels for KITTI Motion are taken from the work on Deep Motion
Weights Trained on VIVID Original Flow