This repository provides an implementation of the SMART-TRACK framework, which is currently under review in IEEE Sensors. The SMART-TRACK framework is designed to enhance UAV object tracking by using a novel measurement augmentation system that leverages Kalman Filter (KF) predictions to guide the detection and localization process. This approach is particularly effective in scenarios where primary object detectors might fail intermittently, ensuring more robust state estimation and continuity in tracking. The framework is implemented as a ROS 2 package with nodes that perform drone detection using neural networks and depth maps.
**NOTE**: Use the `ros2_humble` branch if you use ROS 2 Humble (the currently supported version).
- We detect drones using YOLOv8. Make sure to install YOLOv8 before you continue.
- Clone `yolov8_ros` and use release `2.0.1`, which we tested against YOLOv8 at commit `b638c4ed`.
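  For example, assuming the upstream `yolov8_ros` repository on GitHub (the URL is an assumption; adjust it if you use a fork):

  ```bash
  cd ~/ros2_ws/src
  # Repository URL is an assumption; replace it with the fork you actually use
  git clone https://github.com/mgonzs13/yolov8_ros.git
  cd yolov8_ros
  git checkout 2.0.1   # the release tested with this package
  ```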
- We provide a custom YOLOv8 model for drone detection, available in the `config` directory of this package. The model file is `drone_detection_v3.pt`, and you can use it with the `yolov8_ros` package.
- Check the remaining dependencies in the `package.xml` file.
- The `main` branch of this repository contains the ROS 2 `humble` version of this package.
- Clone our Kalman filter implementation `multi_target_kf` into your `ros2_ws/src`, and check out the `ros2_humble` branch.
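  For example (the repository URL below is an assumption; replace it with the actual `multi_target_kf` location):

  ```bash
  cd ~/ros2_ws/src
  # URL is an assumption; use the actual multi_target_kf repository
  git clone https://github.com/mzahana/multi_target_kf.git
  cd multi_target_kf
  git checkout ros2_humble
  ```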
- Make sure this package is inside your ROS 2 workspace.
- Build the workspace using `colcon build`, and source `install/setup.bash`.
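  A typical build-and-source sequence, assuming your workspace is at `~/ros2_ws`:

  ```bash
  cd ~/ros2_ws
  colcon build
  source install/setup.bash
  ```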
- To run the pose estimator, run the following launch file:

  ```bash
  ros2 launch d2dtracker_drone_detector yolo2pose.launch
  ```
- The `yolo2pose_node.py` node also accepts Kalman filter estimates of the target's 3D position in order to implement the KF-guided measurement algorithm for more robust state estimation. We use our Kalman filter implementation in `multi_target_kf`.
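  One way to verify this wiring is to check that the KF estimate topic is being published while both nodes are running. The topic name below is a placeholder; check your `multi_target_kf` configuration for the actual name:

  ```bash
  # List topics that look like KF outputs (the name pattern is an assumption)
  ros2 topic list | grep -i kf
  # Inspect the published estimates; replace the topic name with your actual one
  ros2 topic echo /kf/good_tracks
  ```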
TBD
TBD
- Make sure that you provide the correct depth image topic here, and the camera info topic here.
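  To discover the topic names published by your camera driver, you can, for example, filter the topic list:

  ```bash
  # Names vary by camera driver; look for the depth image and camera_info topics
  ros2 topic list | grep -iE 'depth|camera_info'
  ```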
- There should be a valid static transformation between the robot base link frame and the camera frame; for an example, see here. This is required to compute the position of the detected drone in the observer's localization frame, which can be sent to a Kalman filter at a later stage.
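  One way to provide such a transform is with the `static_transform_publisher` from `tf2_ros`. The frame names and offsets below are illustrative placeholders, not the values used by this package:

  ```bash
  # Example only: translation (m) and roll/pitch/yaw (rad) from base_link to the camera frame
  ros2 run tf2_ros static_transform_publisher \
    --x 0.1 --y 0.0 --z 0.05 --roll 0.0 --pitch 0.0 --yaw 0.0 \
    --frame-id base_link --child-frame-id camera_link
  ```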
- You can configure the depth-based detection parameters here.
- Make sure that you build your workspace after any modifications, using `colcon build`.
- Support 3D LiDAR measurements