
[ROS] Implemented Camera Renderer #23

Closed

Conversation

renezurbruegg
Contributor

Implemented a camera renderer to project semantic classes, uncertainty values, and depth for a given camera pose.


if (!request.use_provided_tf) {
// TF not provided. Look it up.
// TODO @zrene. This is hardcoded. Can i use global_frame_name_?
Contributor Author


Just to be sure, global_frame_name_ should be fine here, right?

Collaborator


Yes, the global frame name would be better here. In principle, you'd still have to transform the submap into the global frame (or, more easily, the requested pose into the submap frame). I'd specify in the comments which frames the lookups happen in, but global is the most intuitive.
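The transformation the comment describes can be sketched as follows. This is a minimal illustration, not the project's actual code: the `Transform` stand-in is translation-only (the real types would be full rigid-body transformations), and the names `T_M_S` / `T_M_C` are hypothetical, following the common `T_<to>_<from>` convention.

```cpp
#include <array>

// Minimal rigid-transform stand-in (translation only, for illustration).
struct Transform {
  std::array<double, 3> t{0, 0, 0};  // translation
  Transform inverse() const { return {{-t[0], -t[1], -t[2]}}; }
  Transform operator*(const Transform& o) const {
    return {{t[0] + o.t[0], t[1] + o.t[1], t[2] + o.t[2]}};
  }
};

// Express the requested camera pose (given in the global/mission frame M)
// in the submap frame S, so that voxel lookups can happen in submap
// coordinates: T_S_C = T_M_S^-1 * T_M_C.
Transform cameraPoseInSubmapFrame(const Transform& T_M_S,
                                  const Transform& T_M_C) {
  return T_M_S.inverse() * T_M_C;
}
```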

@renezurbruegg renezurbruegg requested a review from Schmluk October 2, 2021 20:43
Collaborator

@Schmluk Schmluk left a comment


LGTM. It would be nice to have this in the main library with an additional visualizer, but it can also work like this (as a pure visualization tool). Btw, I assume this mostly replaces the map_renderer? Maybe there's an easy way to combine the best of both?

bool use_depth
# Flag if custom tf is provided for the map lookup.
# If false, transform will be calculated using tf lookups from the sensor frame
bool use_provided_tf
Collaborator


The documentation for these fields is not entirely clear, e.g. when is the TF/transform used, and which parameters should be supplied in which case?

@@ -17,6 +17,7 @@ cs_add_library(${PROJECT_NAME}
src/visualization/single_tsdf_visualizer.cpp
src/visualization/planning_visualizer.cpp
src/visualization/tracking_visualizer.cpp
src/visualization/camera_renderer.cpp
Collaborator


There's already a map renderer (quite ugly one though) in panoptic_mapping package. This is essentially a single_tsdf_renderer, right? Would be nicer to have this as part of the library and keep the ROS interface separate.

const Config config_;

// Members.
std::shared_ptr<Globals> globals_;
Collaborator


On the one hand it's nice to have these stored here only once; however, if e.g. a new map is loaded, the renderer will no longer work. Maybe passing the camera and submap collection as const refs when needed could be an option? Also, these should likely be const pointers, since you don't want the renderer to change the map, right?
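The suggestion above can be sketched like this. The `Camera` and `SubmapCollection` structs here are hypothetical stand-ins for the real panoptic_mapping types, and the method is deliberately trivial; the point is only the signature: taking the camera and submaps as const references per call, rather than caching shared pointers that dangle when a new map is loaded.

```cpp
#include <vector>

// Hypothetical stand-ins for the real Camera / SubmapCollection types.
struct Camera {
  int width = 640;
  int height = 480;
};
struct SubmapCollection {
  std::vector<int> submap_ids;
};

// Sketch: the renderer takes the current camera and submap collection as
// const references per call. It always sees the currently loaded map and
// cannot mutate it.
class CameraRendererSketch {
 public:
  int renderedPixelCount(const Camera& camera,
                         const SubmapCollection& submaps) const {
    // A real implementation would render here; this just uses the inputs.
    if (submaps.submap_ids.empty()) return 0;
    return camera.width * camera.height;
  }
};
```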


namespace panoptic_mapping {

void CameraRenderer::Config::checkParams() const { /* No params */
Collaborator


If there are no checks, you can also leave that method unimplemented. However, I usually do it like you did, since it makes it easier to add checks later on.

&depth_image, method);
} else {
// Use 1000 as default depth value
depth_image = cv::Mat(globals_->camera()->getConfig().height,
Collaborator


Why 1000? Zero or so might be more intuitive (e.g. what you get from a failing depth camera I think), in particular if the output images are to be further processed.
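The zero-default suggestion can be sketched as below. A plain float buffer stands in for the `cv::Mat` used in the PR, and the function name is hypothetical; the point is that 0 (what a failing depth camera typically reports) lets downstream consumers treat it as "no measurement", unlike a magic value of 1000.

```cpp
#include <cstddef>
#include <vector>

// Sketch: fill the depth image with 0 by default instead of a magic 1000,
// so pixels without geometry read as "no measurement" downstream.
std::vector<float> makeDefaultDepthImage(int width, int height) {
  return std::vector<float>(static_cast<std::size_t>(width) * height, 0.0f);
}
```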

case CameraRenderer::UNCERTAINTY_METHOD::PROBABILITY:
return panoptic_mapping::classVoxelEntropy(voxel);
case CameraRenderer::UNCERTAINTY_METHOD::VOXEL_ENTROPY:
return panoptic_mapping::classVoxelBelongingProbability(voxel);
Collaborator


The entropy and probability cases are swapped here: `PROBABILITY` returns the entropy and `VOXEL_ENTROPY` returns the belonging probability.
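A corrected dispatch would look like the sketch below. The voxel struct and the two uncertainty functions are simplified stand-ins for the real panoptic_mapping implementations (shown here as a two-class entropy); only the case mapping is the point.

```cpp
#include <cmath>

// Simplified stand-in for the class voxel; the real type lives in
// panoptic_mapping.
struct ClassVoxel {
  float belonging_probability = 0.5f;
};

// Two-class Shannon entropy of the belonging probability (illustrative).
float classVoxelEntropy(const ClassVoxel& v) {
  const float p = v.belonging_probability;
  return -(p * std::log(p) + (1.0f - p) * std::log(1.0f - p));
}

float classVoxelBelongingProbability(const ClassVoxel& v) {
  return v.belonging_probability;
}

enum class UncertaintyMethod { kProbability, kVoxelEntropy };

// Corrected mapping: each enum value dispatches to its matching function.
float getVoxelUncertainty(const ClassVoxel& voxel, UncertaintyMethod method) {
  switch (method) {
    case UncertaintyMethod::kProbability:
      return classVoxelBelongingProbability(voxel);
    case UncertaintyMethod::kVoxelEntropy:
      return classVoxelEntropy(voxel);
  }
  return 0.0f;
}
```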


for (const Submap& submap : *submaps_) {
// Filter out submaps.
if (!submap.isActive()) {
Collaborator


Do these filters always hold? In the single_tsdf case I think the only submap is of type free_space. Alternatively these could also be params/part of the config?
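Making the filters configurable, as suggested, could look like the sketch below. The config field names and the helper are hypothetical; the idea is that the single-TSDF case (where the only submap is free_space) can simply flip the flags instead of relying on hardcoded filters.

```cpp
// Sketch: expose the submap filters as config options instead of hardcoding
// them. Field names are hypothetical.
struct CameraRendererConfigSketch {
  bool only_active_submaps = true;
  bool include_free_space = false;
};

bool shouldRenderSubmap(bool is_active, bool is_free_space,
                        const CameraRendererConfigSketch& cfg) {
  if (cfg.only_active_submaps && !is_active) return false;
  if (!cfg.include_free_space && is_free_space) return false;
  return true;
}
```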

}

if (!camera_->projectPointToImagePlane(p_C, &u, &v) || u < 0 ||
u >= cam_config_.width || v < 0 || v >= cam_config_.height) {
Collaborator


projectPointToImagePlane already checks whether u and v are in bounds, I think.
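The simplification the comment suggests can be sketched like this. The projector below is a hypothetical minimal version (the real one lives in the panoptic_mapping Camera class); if it already rejects out-of-image points, the caller's explicit u/v range checks are redundant.

```cpp
// Hypothetical minimal camera config and projector for illustration.
struct CamConfigSketch {
  int width = 640;
  int height = 480;
};

// Returns false for points that fall outside the image, so the caller does
// not need to re-check u/v bounds.
bool projectPointToImagePlane(float x, float y, const CamConfigSketch& cfg,
                              int* u, int* v) {
  *u = static_cast<int>(x);
  *v = static_cast<int>(y);
  return *u >= 0 && *u < cfg.width && *v >= 0 && *v < cfg.height;
}

bool pixelForPoint(float x, float y, const CamConfigSketch& cfg,
                   int* u, int* v) {
  // The single call suffices; no separate bounds re-check is needed.
  return projectPointToImagePlane(x, y, cfg, u, v);
}
```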

*uncertainty_image, cv::Point(u_min, v_min),
cv::Point(u_max, v_max),
get_voxel_uncertainty_value(*c_voxel, uncertainty_method), -1);
}
Collaborator


Nice solution!

@renezurbruegg
Contributor Author

Moved to issue #32
