
Getting Started with RealSense D457 #20

Open · jon-terry-sisu opened this issue Nov 18, 2024 · 6 comments

jon-terry-sisu commented Nov 18, 2024

I'm trying to get started with a RealSense D457 using ROS 2 Humble on Ubuntu 22.04. I'm currently running a simulated robot on my computer, which is publishing joint positions, with the RealSense just sitting on my desk. Here's what I'm running:

  • `ros2 launch realsense2_camera rs_pointcloud_launch.py align_depth.enable:=true` (I can see depth and RGB data in RViz.)

  • `ros2 launch industrial_reconstruction reconstruction.launch.xml depth_image_topic:=/camera/camera/aligned_depth_to_color/image_raw color_image_topic:=/camera/camera/color/image_raw camera_info_topic:=/camera/camera/color/camera_info` (I verified that the published data matches between the two camera_info topics.)

  • The start call below. After I run it, I see `Start Reconstruction` in the terminal where I launched industrial_reconstruction:

```
ros2 service call /start_reconstruction industrial_reconstruction_msgs/srv/StartReconstruction "tracking_frame: 'base_link'
relative_frame: 'world'
translation_distance: 0.0
rotational_distance: 0.0
live: true
tsdf_params:
  voxel_length: 0.02
  sdf_trunc: 0.04
  min_box_values: {x: 0.0, y: 0.0, z: 0.0}
  max_box_values: {x: 0.0, y: 0.0, z: 0.0}
rgbd_params: {depth_scale: 1000.0, depth_trunc: 0.75, convert_rgb_to_intensity: false}"
```

  • The stop call below. After I run it, I get errors in the launch terminal, where I think the root issue is `[Open3D WARNING] Write PLY failed: mesh has 0 vertices.`:

```
ros2 service call /stop_reconstruction industrial_reconstruction_msgs/srv/StopReconstruction "archive_directory: '/home/jonterry/Desktop/archive'
mesh_filepath: '/home/jonterry/Desktop/results_mesh.ply'
normal_filters: [{normal_direction: {x: 0.0, y: 0.0, z: 1.0}, angle: 90}]
min_num_faces: 1000"
```

I think part of my issue is that I don't understand how `tracking_frame` and `relative_frame` are intended to work. It seems that if I put in any values other than the ones above, the launch terminal constantly prints `Failed to get transform` errors.
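
As a sanity check (a sketch, using the frame names from my calls above), the transform the node needs can be queried from tf2 directly:

```
# Should print a pose repeatedly if the world -> base_link transform exists
ros2 run tf2_ros tf2_echo world base_link

# Dump the full TF tree to frames.pdf for inspection
ros2 run tf2_tools view_frames
```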

Are there specific topics that my simulated robot should be publishing to in order for the Industrial Reconstruction package to get whatever it needs?

Also, I saw in a different thread here that if I set `live` to true, I should be able to see data from the /markers topic in RViz. What visualization type should I add in RViz to see that data?

jon-terry-sisu (Author)

Update: I think I've worked it out. If I set `tracking_frame` to `camera_color_optical_frame` in the service call, publish a static transform between the flange and camera frame with

```
ros2 run tf2_ros static_transform_publisher --x 0.01 --y 0.06 --z 0 --yaw 0 --pitch 0 --roll 0 --frame-id flange --child-frame-id camera_color_optical_frame
```

and make sure my simulator is actually publishing joint values, then I get something that looks like it could be real data if the camera were actually moving with the robot.

Next step is to put it on a robot and see if I get reasonable data.

marrts (Collaborator) commented Nov 18, 2024

> Update: I think I've worked it out. If I set `tracking_frame` to `camera_color_optical_frame` in the service call

Yes, you need to set `tracking_frame` to match the camera frame; this represents the frame that moves on the end of the robot. The `relative_frame` is the frame that all points in the generated mesh will be saved relative to (typically a static frame outside the robot's kinematic chain), so `world` is perfectly reasonable to use here.
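
As a concrete sketch, your start call with only the tracking frame changed (all other values kept from your post above):

```
ros2 service call /start_reconstruction industrial_reconstruction_msgs/srv/StartReconstruction "tracking_frame: 'camera_color_optical_frame'
relative_frame: 'world'
translation_distance: 0.0
rotational_distance: 0.0
live: true
tsdf_params:
  voxel_length: 0.02
  sdf_trunc: 0.04
  min_box_values: {x: 0.0, y: 0.0, z: 0.0}
  max_box_values: {x: 0.0, y: 0.0, z: 0.0}
rgbd_params: {depth_scale: 1000.0, depth_trunc: 0.75, convert_rgb_to_intensity: false}"
```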

I would expect the data to look very bad with a simulated robot and a real camera (I have never tried that), but the fact that you got something out of it is a positive sign that you are calling the services correctly.

One important note: to get the best possible results, extrinsically calibrate your camera relative to the robot tool it is mounted on (hand-eye calibration). Extrinsic calibration tells you precisely where the camera is mounted on the robot. You can use the industrial_calibration library to do this. With a good enough estimate you should be able to see results that resemble the actual geometry of the object you are scanning, but you will see better results after a calibration has been performed.
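
For example, once calibration yields a camera-to-flange transform, it can be broadcast in place of the rough estimate from earlier in this thread (a sketch; these numbers are placeholders, not real calibration results):

```
# Publish the calibrated extrinsic as a static transform (placeholder values)
ros2 run tf2_ros static_transform_publisher \
  --x 0.012 --y 0.063 --z 0.004 \
  --qx 0.0 --qy 0.0 --qz 0.0 --qw 1.0 \
  --frame-id flange --child-frame-id camera_color_optical_frame
```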

jon-terry-sisu (Author)

Good to know, thanks.

Is there a recommended way to link up `camera_color_optical_frame` with another frame? tf2 only seems to work intermittently.

And can you give me more detail on how to view the mesh that's being generated in RViz? Which visualization type works for it, and what topic should it subscribe to?

marip8 (Member) commented Nov 19, 2024

> Is there a recommended way to link up `camera_color_optical_frame` with another frame? tf2 only seems to work intermittently.

The RealSense driver optionally publishes its own TF tree to define frames relevant to the camera, and those frames are generally not connected to any other TF tree. You can work around this in a few ways (sketches of both options follow the list):

  • Use the driver to publish the camera TF frames and connect the root of those frames to an existing frame in your URDF, by setting a parameter called `base_frame_id` equal to the name of the URDF frame to which you want to attach the camera tree.
  • Tell the driver not to publish the camera TF tree (set the parameter `publish_tf` to false) and set the parameter `camera_name` equal to the name of the TF frame in your URDF that represents the camera optical frame. This sets the `frame_id` of the image messages to that frame's name.
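
Sketches of both options, assuming these parameters can be passed as launch arguments as described above, and reusing frame names from earlier in this thread:

```
# Option 1 (sketch): keep the driver's TF tree, but attach its root to a URDF frame
ros2 launch realsense2_camera rs_pointcloud_launch.py base_frame_id:=flange

# Option 2 (sketch): suppress the driver's TF tree and stamp images with a URDF frame name
ros2 launch realsense2_camera rs_pointcloud_launch.py publish_tf:=false camera_name:=camera_color_optical_frame
```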

> And can you give me more detail on how to view the mesh that's being generated in RViz? Which visualization type works for it, and what topic should it subscribe to?

I believe the mesh is of type `visualization_msgs/msg/Marker`, which can be viewed with a Marker display in RViz. You can also add an RViz display by topic in the Add Display dialog; that will add the appropriate display type for the selected topic.
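
A quick way to confirm the markers are actually being published before adding the display (a sketch; the topic name is the /markers topic mentioned above):

```
# Show the message type and publisher count for the marker topic
ros2 topic info /markers

# Print a single message to confirm data is flowing while live: true
ros2 topic echo /markers --once
```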

jon-terry-sisu (Author) commented Nov 20, 2024

OK, I'm making progress, but still not there. I have the RealSense mounted to the bottom of this tool, facing downward (where the frame is shown on the screenshot).
[Screenshot: the robot tool with the camera frame overlaid, and the generated mesh floating above the table]
The robot passes over the table a few times while collecting RealSense data, but as you can see, the mesh is being generated up above and is definitely not an accurate representation of the table and floor.
When I run `ros2 run tf2_tools view_frames`, I can see that the base link of the camera is connected in the frame tree where I think it should be, but it seems like TF thinks the camera is somewhere else and rotated differently.
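
A direct query of where TF believes the camera sits relative to the robot base (a sketch with my frame names):

```
# Prints the current transform from base_link to the camera optical frame;
# the translation and rotation should match the physical mounting
ros2 run tf2_ros tf2_echo base_link camera_color_optical_frame
```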

jon-terry-sisu (Author)

Oops, never mind. I forgot to update the `tracking_frame` parameter in the service call.
