
Visualize object sensor's data (rviz) #518

Open · soo4826 opened this issue Apr 6, 2021 · 3 comments

soo4826 commented Apr 6, 2021

Hi!

I want to extract 3D BBOX data using the ros-bridge. Before that, I need to visualize the objects' tf data in rviz, but I don't know how to add the objects' tf data from the object sensor (/carla/ego_vehicle/objects). When I launched pcl_recorder.launch, I found that the topic /carla/ego_vehicle/objects is published, but no node subscribes to it.

So, if I want to visualize objects in rviz, do I have to write a tf listener?

[Screenshot from 2021-04-06 attached]

soo4826 changed the title from "Add Object's tf coordinate to rviz" to "Visualize object sensor's data (rviz)" on Apr 6, 2021
joel-mb (Contributor) commented Apr 12, 2021

Hi @soo4826,
From which objects do you want to extract 3D bounding box data? Right now, there are two ways to retrieve this information for traffic participants (i.e., pedestrians and vehicles):

  • Using the marker sensor (in your case published on /carla/markers). You can visualize this topic directly in rviz.
  • Using the object sensor (in your case published on /carla/ego_vehicle/objects), which contains information about each object's shape and dimensions (I'm not sure if there is a plugin to visualize this directly in rviz; a sketch of a small republisher node follows below).

You don't need to create any tf listener.
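For reference, here is a minimal sketch of such a republisher node. It assumes the bridge publishes derived_object_msgs/ObjectArray on /carla/ego_vehicle/objects (as the object sensor does); the node name and output topic are made up for illustration:

```python
#!/usr/bin/env python
# Minimal sketch: republish the object sensor's output as rviz CUBE markers.
import rospy
from derived_object_msgs.msg import ObjectArray
from shape_msgs.msg import SolidPrimitive
from visualization_msgs.msg import Marker, MarkerArray


def on_objects(object_array):
    markers = MarkerArray()
    for obj in object_array.objects:
        if obj.shape.type != SolidPrimitive.BOX:
            continue  # only box shapes are handled in this sketch
        m = Marker()
        m.header = object_array.header
        m.ns = "carla_objects"
        m.id = obj.id                      # CARLA actor id
        m.type = Marker.CUBE
        m.action = Marker.ADD
        m.pose = obj.pose                  # pose in the map frame
        # SolidPrimitive BOX dimensions are the full extents (x, y, z).
        m.scale.x, m.scale.y, m.scale.z = obj.shape.dimensions
        m.color.g = 1.0
        m.color.a = 0.5
        m.lifetime = rospy.Duration(0.2)   # drop stale boxes
        markers.markers.append(m)
    pub.publish(markers)


rospy.init_node("object_bbox_visualizer")
pub = rospy.Publisher("/carla/object_bboxes", MarkerArray, queue_size=1)
rospy.Subscriber("/carla/ego_vehicle/objects", ObjectArray, on_objects)
rospy.spin()
```

Then add a MarkerArray display on /carla/object_bboxes in rviz.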

soo4826 (Author) commented Apr 12, 2021


Thanks for your reply, Joel!

I want to generate KITTI-style ground-truth data for 3D object tracking, so I need each object's 3D BBOX data and tracking id.

I already tried the object sensor, but there are some problems (see the tf2 sketch below):

  • The data (/carla/ego_vehicle/objects) is expressed in map coordinates, not 3D BBOX data relative to the ego vehicle.
  • The data from the sensor is pose (tf) data, not BBOX data.

So I'll try the marker sensor to visualize objects in CARLA.

Also, is there a way to generate ground-truth (GT) data from CARLA?
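For the first problem above, the map-frame poses can be transformed into the ego frame with tf2, since the bridge broadcasts an ego_vehicle frame. A minimal sketch, assuming the topic and frame names used in this thread:

```python
#!/usr/bin/env python
# Sketch: express the object sensor's map-frame poses in the ego_vehicle frame.
import rospy
import tf2_ros
import tf2_geometry_msgs  # registers do_transform_pose for PoseStamped
from derived_object_msgs.msg import ObjectArray
from geometry_msgs.msg import PoseStamped


def on_objects(object_array):
    try:
        # Transform from the objects' frame (map) into the ego_vehicle frame.
        transform = tf_buffer.lookup_transform(
            "ego_vehicle", object_array.header.frame_id, rospy.Time(0))
    except (tf2_ros.LookupException, tf2_ros.ConnectivityException,
            tf2_ros.ExtrapolationException):
        return  # tf not available yet
    for obj in object_array.objects:
        stamped = PoseStamped(header=object_array.header, pose=obj.pose)
        ego_pose = tf2_geometry_msgs.do_transform_pose(stamped, transform)
        rospy.loginfo("actor %d in ego frame: %s", obj.id, ego_pose.pose.position)


rospy.init_node("objects_in_ego_frame")
tf_buffer = tf2_ros.Buffer()
tf2_ros.TransformListener(tf_buffer)
rospy.Subscriber("/carla/ego_vehicle/objects", ObjectArray, on_objects)
rospy.spin()
```

The box dimensions themselves are unchanged by this transform, so combining the ego-frame pose with obj.shape.dimensions yields an ego-relative 3D BBOX.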

joel-mb (Contributor) commented Apr 14, 2021

@soo4826 The id field of the Marker message also relates to the CARLA actor id, so this may help for your tracking purposes.

> Also, is there a way to generate ground-truth (GT) data from CARLA?

Regarding bboxes, a new feature to retrieve bounding boxes was added in 0.9.10 (https://carla.readthedocs.io/en/0.9.11/python_api/#carla.World.get_level_bbs and https://carla.org/2020/09/25/release-0.9.10/#global-bounding-box-accessibility). Unfortunately, this functionality has not been ported to the ROS bridge yet. It will probably be integrated into the ROS bridge as a new pseudo sensor or service.
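Until that is ported, here is a minimal sketch of querying the new API directly with the CARLA Python client (assuming a server on localhost:2000; filtering by carla.CityObjectLabel.Vehicles is just one example):

```python
# Sketch: query level bounding boxes via the CARLA Python API (0.9.10+).
import carla

client = carla.Client("localhost", 2000)
client.set_timeout(5.0)
world = client.get_world()

# get_level_bbs returns carla.BoundingBox objects in world (map) coordinates.
for bb in world.get_level_bbs(carla.CityObjectLabel.Vehicles):
    print(bb.location, bb.extent)  # center and half-extents in meters
```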

On the other hand, there are other sensors that provide ground-truth data in CARLA (e.g., the semantic segmentation camera and the semantic lidar). You can take a look at the CARLA sensor reference here: https://carla.readthedocs.io/en/latest/ref_sensors/
