
Replace 3D velodyne with 2D lidar #156

Open
dangerousogr opened this issue Jul 21, 2024 · 13 comments

@dangerousogr

Hello,
I have been trying out this project for a while and have recently worked through many issues to reach a desirable output. After many days of training with different seed values, I tested today and my robot navigates quite well in the testing scenario, aside from a few local-optima-related issues now and then.
I understand that we are effectively using 2D laser data for obstacle detection here. I would like to know whether it is possible to use a 360-degree 2D lidar instead of the 180-degree Velodyne 3D lidar to obtain laser point data and then convert it into point cloud information, so as to achieve the same behavior as the source code.

Sorry, I am quite inexperienced with ROS, and I would like to make slight alterations to the project without introducing too many errors into my current implementation. If possible, please guide me on how I can do this; I have seen you mention in several other issues that using a 2D lidar instead of a 3D lidar should not cause many problems.

Also, I would like to know how to perform training in another environment and what considerations I should keep in mind when testing in a new environment.

  • OS: Ubuntu 20.04
  • ROS version: Noetic
  • Repository branch: Noetic-Turtlebot

Thank you

@reiniscimurs
Owner

reiniscimurs commented Jul 22, 2024

Hi,

I am not sure why you would need a 2D point cloud.
In any case, you can replace the Velodyne sensor with another 2D lidar sensor here: https://github.com/reiniscimurs/DRL-robot-navigation/blob/main/catkin_ws/src/multi_robot_scenario/xacro/p3dx/pioneer3dx.xacro#L26-L29

In fact, a 2D sensor is already attached to the model here: https://github.com/reiniscimurs/DRL-robot-navigation/blob/main/catkin_ws/src/multi_robot_scenario/xacro/p3dx/pioneer3dx.xacro#L11-L13

You would simply update the code to use this sensor's information instead of the Velodyne point cloud.

To change the FOV of the sensor, you have to update its xacro file: https://github.com/reiniscimurs/DRL-robot-navigation/blob/main/catkin_ws/src/multi_robot_scenario/xacro/laser/hokuyo.xacro#L42-L43

To change the environment, you can specify a different world file here: https://github.com/reiniscimurs/DRL-robot-navigation/blob/main/catkin_ws/src/multi_robot_scenario/launch/empty_world.launch#L16
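A minimal sketch of what that subscriber change could look like, assuming the 2D laser publishes a standard sensor_msgs/LaserScan (the /front_laser/scan topic name, the 20-bin state size, and the 10 m clip range are illustrative assumptions, not the exact repo values):

```python
# Hedged sketch: subscribe to a 2D LaserScan instead of the Velodyne PointCloud2
# and bin the ranges into a fixed-size distance state for the network.
import numpy as np
import rospy
from sensor_msgs.msg import LaserScan

ENV_DIM = 20        # number of laser bins fed to the network (assumed)
MAX_RANGE = 10.0    # clip distance, matching whatever the training code expects

class LaserState:
    def __init__(self):
        self.laser_state = np.ones(ENV_DIM) * MAX_RANGE
        rospy.Subscriber("/front_laser/scan", LaserScan, self.scan_callback, queue_size=1)

    def scan_callback(self, msg):
        ranges = np.array(msg.ranges, dtype=np.float32)
        ranges[~np.isfinite(ranges)] = MAX_RANGE      # treat missing returns as max range
        ranges = np.clip(ranges, 0.0, MAX_RANGE)
        sectors = np.array_split(ranges, ENV_DIM)     # group beams into ENV_DIM sectors
        self.laser_state = np.array([s.min() for s in sectors])  # closest hit per sector
```

The rest of the environment code could then read this laser_state in place of the distances currently derived from the Velodyne point cloud through self.gaps.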

@dangerousogr
Author

dangerousogr commented Aug 10, 2024

Hi @reiniscimurs,

Thank you for your advice. I am trying to train a new model using the point cloud from the depth camera on the robot instead of the 3D lidar. Using a camera for navigation instead of an expensive sensor like the Velodyne 3D lidar could be a promising proposal for a project.
That said, I have made some modifications to the velodyne_env code to use this point cloud. I changed the subscriber from the Velodyne point cloud to the camera point cloud, and I have also changed self.gaps to fit the FOV of the camera. However, I notice that the robot runs for a short while and then stops moving. An error message shows up in RViz as well. Could you take a look and see what I might have overlooked in the modification? A few screenshots are attached for reference.

Error message:
Transform [sender=unknown_publisher]
For frame [camera_rgb_optical_frame]: No transform to fixed frame [odom]. TF error: [Lookup would require extrapolation 0.024000000s into the future. Requested time 300.433000000 but the latest data is at time 300.409000000, when looking up transform from frame [camera_rgb_optical_frame] to frame [odom]]

[Screenshots attached: RViz error, modified self.gaps, velodyne_env subscriber change]
Thank you

@reiniscimurs
Owner

I am assuming you are using the same camera that is in the repo. There, you might try increasing the update rate of the sensor and see if that helps: https://github.com/reiniscimurs/DRL-robot-navigation/blob/main/catkin_ws/src/multi_robot_scenario/xacro/camera/camera.xacro#L41

@dangerousogr
Author

Hi @reiniscimurs,
The camera is the same. However, since I am running the ROS Noetic branch and using the TurtleBot, do I have to edit the turtlebot3_waffle.gazebo.xacro file to update the camera rate?
Also, what update rate value do you think would be ideal for this situation?

@reiniscimurs
Owner

Yes, probably this line: https://github.com/reiniscimurs/DRL-robot-navigation/blob/Noetic-Turtlebot/catkin_ws/src/turtlebot3/turtlebot3_description/urdf/turtlebot3_waffle.gazebo.xacro#L168

Not sure; use your best judgment. My guess is that the point cloud does not update quickly enough, so when a message gets dropped or desynced, ROS cannot handle it. I hope increasing the update rate will solve it, and it is something I would try. Play around with it and see what values work well.
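One way to check whether the change actually takes effect is to measure how often the point cloud arrives. A small sketch is below; the /camera/depth/points topic name is an assumption, so check rostopic list first (rostopic hz <topic> from the command line gives the same information):

```python
# Hedged sketch: measure the arrival rate of the depth-camera point cloud.
# The topic name "/camera/depth/points" is an assumption; verify with `rostopic list`.
import rospy
from sensor_msgs.msg import PointCloud2

count = 0

def callback(_msg):
    global count
    count += 1

rospy.init_node("pointcloud_rate_check")
rospy.Subscriber("/camera/depth/points", PointCloud2, callback, queue_size=1)
start = rospy.get_time()
rospy.sleep(10.0)                         # sample for ~10 seconds of (sim) time
elapsed = max(rospy.get_time() - start, 1e-6)
print("point cloud rate: %.1f Hz" % (count / elapsed))
```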

@dangerousogr
Author

Hi @reiniscimurs,
I managed to solve that issue; it was a problem with my build. The error still appears, but the robot is navigating. However, I noticed that the robot keeps hitting the wall and does not stay within the map while using the point cloud from the depth camera.
Do you have any idea why this could be? Are there any other modifications I have to make so the behavior is similar to the one with the Velodyne point cloud?

@reiniscimurs
Owner

You mean hitting the wall without detecting a collision?
I do not have much experience with using the point cloud from a depth camera, so I do not know the details, but if collision detection is being missed, you should check the minimum range of the depth camera and whether collisions can be detected within that range.

@dangerousogr
Author

Hi @reiniscimurs,
Sorry, I am still quite new to ROS and working with packages. I am assuming the camera range specifications would be found within the xacro file of the turtlebot_waffle?

Could you explain, in general terms, how I could check whether the range is enough to detect collisions? Right now the robot keeps colliding with walls and objects, and the goal point is not reset. Also, the simulation takes a long time to start the next episode after reaching the maximum number of steps. How could I change these behaviors so that training is as smooth as with the Velodyne sensor?

@reiniscimurs
Owner

That would be my guess, though I have not worked with R200 sensors and I don't immediately see where to set the range. Perhaps see the ROS tutorial, as the sensor here uses the same plugin: https://classic.gazebosim.org/tutorials?tut=ros_gzplugins

You could simply log the minimum laser value. If it is never lower than the collision detection limit, the collision will never trigger. Then you will know whether it is a range issue or something else.
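A minimal sketch of that check, assuming the camera point cloud has already been converted to a per-sector distance array and that the collision threshold is around 0.35 m (both are assumptions, not the exact repo values):

```python
# Hedged sketch: log the minimum observed distance every step and compare it
# against the collision threshold. If the logged minimum never drops below the
# threshold, the sensor's minimum range is likely too large to detect collisions.
import numpy as np

COLLISION_DIST = 0.35   # assumed threshold; use whatever the environment defines

def observe_collision(laser_data):
    min_laser = float(np.min(laser_data))
    print("min distance this step: %.3f m (threshold %.2f m)" % (min_laser, COLLISION_DIST))
    return min_laser < COLLISION_DIST, min_laser
```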

@dangerousogr
Author

Hi @reiniscimurs, I have managed to make some changes, and now the robot is able to detect collisions well with the point cloud from the depth camera.
However, I noticed that during training, after each episode is concluded, specifically by a collision, it takes a considerable time to load the next episode.
Do you have any thoughts as to why this is? I did some debugging, and it turns out that the episode count is updated correctly. The environment is also being reset and the Gazebo environment is being called correctly. However, when the episode ends and the network.train function is called, there is some delay. Below I have attached some screenshots as well as the debug statements I added inside the train function. Do you have any thoughts as to why this delay occurs when triggering the next episode?
[screenshot]
Up to the episode reset and collision detection everything is one flow; after that, while the delay occurs, I can see the "sampling batch from replay buffer" statement keep printing.

[screenshots]

@reiniscimurs
Owner

Hi,

...network.train function is called, there is some delay

This is because we call the train function. That is, we only train the model after concluding an episode. The simulation is stopped, and batches of samples are taken from the replay buffer and used to calculate the values and gradients for backpropagation. This process is repeated for a number of iterations equal to how many steps were executed during the episode, so the longer the episode, the more training iterations. Each training iteration (backprop) takes some time, and a new episode is started only after training has concluded. Depending on your computer resources and the length of the episode, it could take anywhere from 1 to 20 seconds.

I suggest stepping through the python code in detail, then it will all make sense.
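A rough sketch of that flow, with illustrative names rather than the exact repo code:

```python
# Hedged sketch of the post-episode training flow described above:
# one gradient update per step executed in the finished episode, so a longer
# episode means proportionally longer training before the next episode starts.
def train_after_episode(network, replay_buffer, episode_timesteps, batch_size=40):
    for _ in range(episode_timesteps):
        batch = replay_buffer.sample_batch(batch_size)  # sample stored transitions
        network.update(batch)                           # compute losses and backpropagate
    # only after this loop finishes does the environment reset and the next episode begin
```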

@dangerousogr
Author

[Attached videos: WhatsApp.Video.2024-08-20.at.10.42.17.AM.mp4, WhatsApp.Video.2024-08-20.at.10.42.18.AM.mp4]

Hi @reiniscimurs, I understand that training is being called. If you refer to the videos I have attached, the first video uses the point cloud from the depth camera. As you can see, after a collision the whole training process seems very slow.
The second video uses the Velodyne 3D lidar point cloud. Here, after a collision, the training process is almost instant. Both videos show initial episodes of training. I have not added anything extra to the code that would slow down training, only replaced the Velodyne point cloud with the depth camera point cloud. Do you have any thoughts as to why this change in training behavior is occurring?
I do not suspect it is a resource issue, as I am running training on a fairly capable system and the Velodyne training is not as slow as the depth camera training.

@reiniscimurs
Owner

You can see in the first video that there is also a slowdown between executing each individual step. This could be because there are many more points in the point cloud and the program needs to deal with much more data. So the state generation and saving to the replay buffer are where I would look. I would double-check that the way you save the state for the depth camera and for the Velodyne is the same.

What are the full code changes that you have made? Is there any increase in state dimensions or batch size?
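If the depth camera produces a much denser cloud than the Velodyne, one way to keep the state identical in size (and the per-step cost bounded) is to reduce the cloud to one distance per angular sector before it ever reaches the replay buffer. A minimal sketch, assuming the same self.gaps angular layout already exists and that the cloud is a standard sensor_msgs/PointCloud2 with x, y, z fields:

```python
# Hedged sketch: collapse an arbitrarily dense PointCloud2 into the same fixed-size
# per-sector distance state the Velodyne pipeline produces, so state_dim, the replay
# buffer layout, and the batch size stay unchanged.
import numpy as np
import sensor_msgs.point_cloud2 as pc2

def cloud_to_state(cloud_msg, gaps, max_range=10.0):
    state = np.ones(len(gaps)) * max_range
    # read_points yields (x, y, z); skip_nans drops invalid depth-camera returns
    for x, y, z in pc2.read_points(cloud_msg, field_names=("x", "y", "z"), skip_nans=True):
        dist = np.sqrt(x * x + y * y)
        angle = np.arctan2(y, x)
        for i, (low, high) in enumerate(gaps):
            if low <= angle < high:
                state[i] = min(state[i], dist)  # keep only the closest point per sector
                break
    return state
```

Note that a pure-Python loop over a dense cloud is itself slow, so lowering the camera resolution or downsampling the cloud before this conversion may also be needed if the per-step slowdown persists.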
