Replace 3D velodyne with 2D lidar #156
Hi, I am not sure why you would need a 2D point cloud. A 2D sensor is actually already attached to the model here: https://github.com/reiniscimurs/DRL-robot-navigation/blob/main/catkin_ws/src/multi_robot_scenario/xacro/p3dx/pioneer3dx.xacro#L11-L13 You should simply update the code to use this sensor's information instead of the velodyne point cloud.
To change the FOV range of the sensor, you have to update its xacro file: https://github.com/reiniscimurs/DRL-robot-navigation/blob/main/catkin_ws/src/multi_robot_scenario/xacro/laser/hokuyo.xacro#L42-L43
To change the environment, you can specify a different world file here: https://github.com/reiniscimurs/DRL-robot-navigation/blob/main/catkin_ws/src/multi_robot_scenario/launch/empty_world.launch#L16
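For reference, a minimal sketch of what reading the 2D laser into the state could look like. The topic name /front_laser/scan and the binning into ENVIRONMENT_DIM gaps are assumptions for illustration, not the repo's exact API:

```python
# Minimal sketch: build the laser state from the Hokuyo 2D scan instead of the
# velodyne point cloud. Topic name and binning scheme are assumptions.
import numpy as np
import rospy
from sensor_msgs.msg import LaserScan

ENVIRONMENT_DIM = 20  # number of laser bins fed to the network as state


class LaserState:
    def __init__(self):
        self.laser_state = np.ones(ENVIRONMENT_DIM) * 10.0
        # Hypothetical topic name; check `rostopic list` for the actual one.
        rospy.Subscriber("/front_laser/scan", LaserScan, self.scan_callback, queue_size=1)

    def scan_callback(self, msg):
        ranges = np.array(msg.ranges)
        ranges[np.isinf(ranges)] = 10.0  # clip "no return" readings
        # Split the scan into ENVIRONMENT_DIM equal angular bins and keep the
        # minimum distance in each bin, mirroring the velodyne gap processing.
        bins = np.array_split(ranges, ENVIRONMENT_DIM)
        self.laser_state = np.array([b.min() for b in bins])


if __name__ == "__main__":
    rospy.init_node("laser_state_sketch")
    LaserState()
    rospy.spin()
```

The resulting laser_state could then be used in place of the velodyne-derived gap distances when assembling the network input.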
Hi @reiniscimurs, thank you for your advice. I am trying to train a new model using the point cloud from the depth camera on the robot instead of the 3D lidar. Using a camera for navigation instead of an expensive sensor like the Velodyne 3D lidar could be a promising direction for the project.
Error message:
Thank you
I am assuming you are using the same camera that is in the repo. If so, you might try increasing the update rate of the sensor and see if that helps: https://github.com/reiniscimurs/DRL-robot-navigation/blob/main/catkin_ws/src/multi_robot_scenario/xacro/camera/camera.xacro#L41
Hi @reiniscimurs |
Yes, probably this line: https://github.com/reiniscimurs/DRL-robot-navigation/blob/Noetic-Turtlebot/catkin_ws/src/turtlebot3/turtlebot3_description/urdf/turtlebot3_waffle.gazebo.xacro#L168 I am not sure, so use your best judgment. My guess is that the point cloud does not update quickly enough, so if a message gets dropped or desynced, ROS cannot handle it. I would hope that increasing the update rate solves it, and that is what I would try first. Play around with it and see what good values could be.
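If you want to verify whether the rate really is the bottleneck, a quick check like the following prints the interval between incoming point cloud messages. The topic name /camera/depth/points is an assumption; substitute whatever your camera actually publishes:

```python
# Quick check (topic name is an assumption): measure how often the depth-camera
# point cloud actually arrives, to see whether the update rate is the problem.
import rospy
from sensor_msgs.msg import PointCloud2

last = [None]  # timestamp of the previous message


def cb(msg):
    now = rospy.get_time()
    if last[0] is not None:
        rospy.loginfo("point cloud interval: %.3f s", now - last[0])
    last[0] = now


rospy.init_node("pc_rate_check")
rospy.Subscriber("/camera/depth/points", PointCloud2, cb, queue_size=1)
rospy.spin()
```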
Hi @reiniscimurs, |
You mean hitting the wall without detecting a collision? |
Hi @reiniscimurs Could you specify how I could check whether the range is enough to detect collisions? Right now the robot keeps colliding with walls and objects, and the goal point is not reset. Also, the simulation takes a long time to start the next episode after reaching the maximum number of steps. How could I change these behaviors so that training is as smooth as it is with the velodyne sensor?
That would be my guess, though I have not worked with the R200 sensor and I don't immediately see where to set the range. Perhaps see the Gazebo/ROS plugin tutorial, as the sensor here uses the same plugin: https://classic.gazebosim.org/tutorials?tut=ros_gzplugins You could simply log the minimum laser value. If it is never lower than the collision detection limit, the collision will not trigger. Then you will know whether it is a range issue or something else.
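A minimal sketch of that check. The names laser_state and COLLISION_DIST are assumptions about how the state and threshold are stored in your code:

```python
# Minimal sketch: log the minimum range seen each step and compare it to the
# collision threshold. Variable names are assumptions, not the repo's exact ones.
import rospy

COLLISION_DIST = 0.35  # assumed collision detection limit


def check_collision(laser_state):
    min_laser = min(laser_state)
    rospy.loginfo("min range this step: %.3f", min_laser)
    if min_laser < COLLISION_DIST:
        rospy.loginfo("collision would trigger (%.3f < %.3f)", min_laser, COLLISION_DIST)
        return True
    return False
```

If the logged minimum never drops below the threshold even while the robot is visibly touching a wall, the sensor's minimum range, FOV, or mounting is the likely culprit.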
Hi @reiniscimurs, I have managed to make some changes and now the robot is able to detect collisions well with the pointcloud from the depth camera. |
Hi,
This is because we call the train function. That is, we only train the model after concluding an episode. The simulation is stopped, a batch of samples is taken from the replay buffer, and those samples are used to calculate the values and gradients that are then used for backpropagation. This happens after every episode. I suggest stepping through the Python code in detail; then it will all make sense.
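Roughly, the flow looks like the sketch below. The names replay_buffer, network, and their methods are placeholders for illustration, not the repo's exact API:

```python
# Rough sketch of the episode-boundary training flow described above.
# Object and method names are placeholders, not the repo's exact API.
def run_episode(env, network, replay_buffer, max_steps, batch_size, iterations):
    state = env.reset()
    for step in range(max_steps):
        action = network.get_action(state)
        next_state, reward, done = env.step(action)
        replay_buffer.add(state, action, reward, done, next_state)
        state = next_state
        if done:
            break
    # Only after the episode ends: the simulation is paused and the network is
    # updated for several iterations on batches drawn from the replay buffer.
    for _ in range(iterations):
        batch = replay_buffer.sample_batch(batch_size)
        network.train_on_batch(batch)  # compute targets, loss, backprop
```

That pause between episodes is where the training time goes; it grows with the number of update iterations and the batch size.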
WhatsApp.Video.2024-08-20.at.10.42.17.AM.mp4
WhatsApp.Video.2024-08-20.at.10.42.18.AM.mp4
Hi @reiniscimurs, I understand that the training is being called. If you refer to the videos I have attached, the first video uses the point cloud from the depth camera. As you can see, after a collision the whole training process seems very slow.
You can see in the first video that there is also a slowdown between executing each individual step. This could be because there are a lot more points in the point cloud and the program needs to deal with a lot more data. So the state generation and saving to the replay buffer are where I would look. I would double-check that the way you save the state for the depth camera and for the velodyne is the same. What are the full code changes that you have made? Is there any increase in state dimensions or batch size?
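One quick sanity check, with illustrative names, is to log the state shape and batch size right before the state is written to the replay buffer, for both sensor setups:

```python
# Illustrative sanity check (names are assumptions): confirm the depth-camera
# state has the same shape as the velodyne state and that batch_size is unchanged.
import numpy as np


def log_state_info(state, batch_size):
    state = np.asarray(state)
    print("state shape:", state.shape, "| batch size:", batch_size)

# Call this right before replay_buffer.add(...) in the training loop.
```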
Hello,
I have been trying out this project for a while and have recently managed to work through many issues to achieve a desirable output. After many days of training with different seed values, I tested it today and my robot navigates quite well in the testing scenario, aside from a few local-optima-related issues now and then.
I understand that we are using a 2D lidar for object detection here. I would like to know if it is possible to use a 360-degree 2D lidar instead of the 180-degree Velodyne 3D lidar to obtain laser point data and then convert it into point cloud information, to achieve the same behavior as the source code.
Sorry, I am quite inexperienced with ROS, and I would like to make slight alterations to the project without causing too many errors in my current implementation. Please, if possible, guide me on how I can do this, as I have seen you mention in many other cases that using a 2D lidar instead of a 3D lidar should not cause many issues.
Also, I would like to know how to perform training in another environment and what considerations I should keep in mind when testing in a new environment.
Thank you