Add support for Hokuyo lidar #1186
base: main
Conversation
…nd donkeycar/templates/complete.py
Thank you for this PR. It would be great to be able to use the Lidar on an F1Tenth car. The code looks very good; I have left a few comments. There are other concerns:
- is there a parallel PR for the docs? Without documentation on how to set up the Lidar, the software is not that useful and it becomes a maintenance burden. Links out to reliable, stable content can do most of the work. It would also be great to have a video of the webui output.
- we should also mention in the camera section of the docs how to set up the 'camera' to be the lidar scan. That is a really good idea.
- we should rebase/squash these two commits into a single commit.
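The rebase/squash step mentioned above can be done non-interactively. Here is a sketch in a throwaway repo; the commit messages are made up, and on the real PR branch you would run only the reset/commit steps and then force-push:

```shell
# Demo of squashing a branch's last two commits into one, in a throwaway
# repo (commit messages are invented for illustration). On the real PR
# branch, run only the reset/commit steps, then `git push --force-with-lease`.
cd "$(mktemp -d)" && git init -q demo && cd demo
git -c user.name=demo -c user.email=demo@example.com commit -q --allow-empty -m "base"
git -c user.name=demo -c user.email=demo@example.com commit -q --allow-empty -m "add Hokuyo lidar part"
git -c user.name=demo -c user.email=demo@example.com commit -q --allow-empty -m "address review comments"
git reset --soft HEAD~2        # keep the combined changes staged, drop both commits
git -c user.name=demo -c user.email=demo@example.com commit -q --allow-empty -m "Add support for Hokuyo lidar"
git rev-list --count HEAD      # 2: the base plus the single squashed commit
```

`git reset --soft` keeps the work-tree and index untouched, so the two commits collapse into one without re-resolving anything.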
threaded=True)

elif cfg.CAMERA_TYPE == "LIDAR_PLOT":
This is very interesting. So the idea here is that the lidar plot would be used as the input to the CNN? Separate from the code concerns, have you tried this? Does it produce a reasonable autopilot?
I was thinking more along the lines of being able to see what the lidar sees. I haven't tried to train an autopilot with it although that does sound like an interesting idea.
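The "see what the lidar sees" idea amounts to rasterizing the scan into an image frame. A minimal sketch of the concept follows; the `(angle_deg, dist_mm)` tuple layout and the default range are assumptions for illustration, not the PR's actual data structure:

```python
import math
import numpy as np

def scan_to_image(scan, size=120, max_dist=10_000):
    """Rasterize a lidar scan into a square grayscale frame centered on
    the car. `scan` is assumed to be an iterable of (angle_deg, dist_mm)
    tuples -- an illustrative layout, not the PR's actual part output."""
    img = np.zeros((size, size), dtype=np.uint8)
    half = size // 2
    for angle_deg, dist_mm in scan:
        # scale distance to pixels, clamping to the sensor's max range
        r = min(dist_mm, max_dist) / max_dist * (half - 1)
        x = int(half + r * math.cos(math.radians(angle_deg)))
        y = int(half + r * math.sin(math.radians(angle_deg)))
        img[y, x] = 255  # mark the return as a bright pixel
    return img
```

A frame like this could be saved to the tub or shown in the web UI just like a camera image.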
LIDAR_LOWER_LIMIT = 90   # angles that will be recorded. Use this to block out obstructed areas on your car, or looking backwards. Note that for the RP A1M8 Lidar, "0" is in the direction of the motor
LIDAR_UPPER_LIMIT = 270
LIDAR_MAX_DIST = 10_000  # Maximum distance for LiDAR. Measured in mm
These values need to have defaults based on the chosen lidar type. RP/YD would be 0 for angle and I'm not sure for distance.
Do you mean these should be set to zero for the user to put in themselves? I wanted to give this example to show that the back half of scans should likely be discarded in most set-ups. As for the max distance, I think this is reasonable for the entry level RP Lidar/ YD Lidar. If someone buys a fancier model then I would assume they would also know how to change this value to reflect the capabilities of their lidar.
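For clarity on how these config values act on a scan, here is a hedged sketch; the `(angle_deg, dist_mm)` layout is an assumption for illustration, not the PR's actual data structure:

```python
def filter_scan(scan, lower=90, upper=270, max_dist=10_000):
    """Apply LIDAR_LOWER_LIMIT/LIDAR_UPPER_LIMIT and LIDAR_MAX_DIST to a
    scan of (angle_deg, dist_mm) tuples -- an assumed layout for
    illustration. Angles outside [lower, upper] (e.g. the obstructed back
    half of the car) are dropped; distances are clamped to max_dist."""
    kept = []
    for angle_deg, dist_mm in scan:
        if lower <= angle_deg <= upper:
            kept.append((angle_deg, min(dist_mm, max_dist)))
    return kept
```

With the defaults above, a reading behind the car (angle 0) is discarded and an out-of-range reading is clamped to 10,000 mm.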
Thanks @Ezward for the review, I'll probably get to the fixes/writing docs in a week or so.
Hi @Ezward I addressed your review and made a PR for documentation: autorope/donkeydocs#56
- `lidar.py` to interface with Hokuyo
- `LidarPlot2` to be used with Hokuyo so it can act as a `Camera` in the web interface
- `LidarPlot2` with existing templates/configs

Notes:
To set it up you need to configure the Ethernet port on the Pi by creating a file in `/etc/interfaces.d`. This is well-described by many online tutorials, and googling "f1tenth hokuyo" also brings up a lot of helpful info.

Also, unlike the RPLidar(2) parts, instead of returning a generator of individual measurements this part returns the entire scan at once. Thus in `complete.py` it is called `lidar/dist_scan` instead of `lidar/dist_array`.

Pictures:
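For reference, the Ethernet setup mentioned in the notes could look like the stanza below. This is a sketch only: the `/etc/network/interfaces.d/eth0` path and both addresses are assumptions (Hokuyo Ethernet units commonly ship on the 192.168.0.x subnet), so check your unit's manual and adjust:

```
# /etc/network/interfaces.d/eth0  -- assumed path; addresses are examples
auto eth0
iface eth0 inet static
    address 192.168.0.15      # the Pi's address on the lidar link
    netmask 255.255.255.0     # lidar itself often defaults to 192.168.0.10
```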