
Demo/Tutorial to show how it works #302

Open
abitrolly opened this issue Mar 13, 2021 · 5 comments

Comments

@abitrolly

It is not clear how to get floor plans into traffic_editor. Do they need to be drawn by hand, or is there a way to connect it to a camera or similar hardware to survey the floor plans? A demo or tutorial would be nice for learning the limitations and usage scenarios.

@codebot
Contributor

codebot commented Mar 15, 2021

Greetings. There is a walkthrough here which shows a step-by-step process of making a traffic map:
https://osrf.github.io/ros2multirobotbook/traffic-editor.html

Generally speaking, the concept so far is that floor-plan creation happens elsewhere, in tools built for that purpose, and the map(s) are simply imported into traffic-editor. Once a floor plan is imported into traffic-editor, it can be annotated with robot traffic lanes and such. For commercial buildings, the floor plans will usually already exist somewhere and can either be exported from their original CAD format or scanned/rasterized. For non-commercial buildings, or other types of environments that don't already have a nice map, it is possible to build the map using robot tools like SLAM and import that robot-built map into traffic-editor the same way.
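One practical step when importing a scanned or rasterized floor plan is establishing the drawing's scale, since the image itself carries no units. A minimal sketch of that calculation (the function name and values here are illustrative, not part of the traffic-editor API): pick two pixel locations a known real-world distance apart and derive meters per pixel.

```python
import math

def meters_per_pixel(p1, p2, known_distance_m):
    """Given two pixel coordinates on a scanned floor plan that are a
    known real-world distance apart, return the drawing scale in m/px."""
    pixel_dist = math.hypot(p2[0] - p1[0], p2[1] - p1[1])
    return known_distance_m / pixel_dist

# A 10 m corridor that spans 500 px on the scan:
scale = meters_per_pixel((100, 200), (600, 200), 10.0)
print(scale)  # → 0.02 m/px
```

With the scale known, pixel coordinates of annotations can be converted to real-world distances for navigation.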

@abitrolly
Author

I am interested in learning more about the SLAM approach, especially whether traffic-editor could be used for real-time map construction and automatic annotation, without manual import/export of files.

@codebot
Contributor

codebot commented Mar 16, 2021

Interesting. That would be a different architecture, one in which traffic-editor (or some descendant of it) would be annotating a "live" map database. That's neat. However, in a SLAM context this would be challenging, because during SLAM loop closures, the estimated map often changes (sometimes dramatically). If the traffic annotations are being added to a fixed coordinate system (i.e. in a GUI), how would you know how to update the traffic-lane annotations when SLAM updates the map?
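One common workaround for the loop-closure problem described above is to anchor each annotation to its nearest SLAM pose-graph node rather than to fixed map coordinates, so that when a loop closure moves the node, the annotation moves with it. A minimal sketch of that idea (all names and data structures here are hypothetical, not traffic-editor or any SLAM library's API):

```python
import math

def nearest_node(point, graph_nodes):
    """Return the id of the pose-graph node closest to a map point."""
    return min(graph_nodes, key=lambda nid: math.dist(point, graph_nodes[nid]))

def anchor_annotation(point, graph_nodes):
    """Store an annotation as an offset from its nearest graph node, so
    loop-closure corrections that move the node also move the annotation."""
    nid = nearest_node(point, graph_nodes)
    nx, ny = graph_nodes[nid]
    return {"node": nid, "offset": (point[0] - nx, point[1] - ny)}

def resolve_annotation(anchor, graph_nodes):
    """Recover the annotation's current map position after a graph update."""
    nx, ny = graph_nodes[anchor["node"]]
    ox, oy = anchor["offset"]
    return (nx + ox, ny + oy)

# Annotate a lane waypoint near node "a"; then a loop closure shifts "a":
nodes = {"a": (0.0, 0.0), "b": (5.0, 0.0)}
anchor = anchor_annotation((0.5, 1.0), nodes)
nodes["a"] = (0.2, 0.1)                   # loop-closure correction
print(resolve_annotation(anchor, nodes))  # → (0.7, 1.1)
```

This only handles rigid local shifts; annotations spanning regions that deform differently during a correction would still need smarter handling.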

@abitrolly
Author

abitrolly commented Mar 16, 2021

For me, the objects on the map need to be "probabilistic", with annotations being part of the objects themselves. For example, there is an 85% chance of a wall: we don't currently see it, but we believe it is still there, so we draw it. traffic-editor would choose the most appropriate object by grouping and matching points reported by SLAM, while the certainty for these objects increases over time. The groups of points could be selected manually at first to train the algorithm, but there could also be an algorithm that matches spatial dots to 3D objects without prior learning. The coordinate system would be dynamic at first, then anchored to some point that is common to most objects observed over the whole history of observations. The longer the history, the more expensive it is to change the world's rotation angle or origin. The rotation could be chosen by compass, or by the smallest turn that makes the most lines parallel. For that, traffic-editor would need to aggregate the information from SLAM over a longer period of time, maybe the whole period of observations.

@codebot
Contributor

codebot commented Mar 16, 2021

So I agree this would be awesome, but it is beyond the current state of the art in robotics, at least according to my understanding. I don't mean that as discouragement; I'm just trying to place it in context. Something like that would make an excellent paper to present at a robotics conference. Semantic understanding and automatic registration of annotations on 3D point clouds quickly gets very difficult, even within the same building, and it gets really tricky when you want the same algorithm and parameter-tuning scheme to apply to more than one building. I'd suggest having a look at the Point Cloud Library (PCL): https://github.com/PointCloudLibrary

This package currently works in a much less sophisticated domain: essentially a stack of 2D flatlands with known, fixed geometry (via building floorplans). This is much less exciting, but it is useful enough in a number of domains that we think it's worth pursuing.
