Problems in training #143
Please describe the question in more detail.
The target is not displayed in the Gazebo environment. You can see it in Rviz, though. The criteria for setting and reaching a target are explained in the tutorial: https://medium.com/@reinis_86651/deep-reinforcement-learning-in-mobile-robot-navigation-tutorial-part4-environment-7e4bc672f590
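For anyone looking for this in code: below is a minimal sketch of how a goal point can be published as an Rviz marker and how a "goal reached" check typically works via a distance threshold. This is illustrative only, not the repository's exact code; the topic name `goal_point`, the `odom` frame, and the 0.3 m threshold are assumptions.

```python
# Sketch: publish the episode goal as an Rviz marker and check whether
# the robot has reached it. Topic name, frame, and threshold are assumed.
import math
import rospy
from visualization_msgs.msg import Marker

GOAL_REACHED_DIST = 0.3  # assumed threshold in meters

def publish_goal_marker(pub, goal_x, goal_y):
    marker = Marker()
    marker.header.frame_id = "odom"  # assumed fixed frame
    marker.type = Marker.CYLINDER
    marker.action = Marker.ADD
    marker.scale.x = marker.scale.y = 0.3
    marker.scale.z = 0.01
    marker.color.a = 1.0
    marker.color.g = 1.0
    marker.pose.orientation.w = 1.0
    marker.pose.position.x = goal_x
    marker.pose.position.y = goal_y
    pub.publish(marker)

def goal_reached(robot_x, robot_y, goal_x, goal_y):
    # The target counts as reached once the robot is within
    # GOAL_REACHED_DIST of the goal coordinates.
    return math.hypot(goal_x - robot_x, goal_y - robot_y) < GOAL_REACHED_DIST

if __name__ == "__main__":
    rospy.init_node("goal_marker_demo")
    pub = rospy.Publisher("goal_point", Marker, queue_size=1)
    rate = rospy.Rate(1)
    while not rospy.is_shutdown():
        publish_goal_marker(pub, 2.0, 1.5)
        rate.sleep()
```

Because the marker is only a visualization message, it shows up in Rviz but never in the Gazebo GUI, which is why the target appears "missing" in Gazebo.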
I suggest following some tutorials regarding ROS and familiarizing yourself with it, as that will help you understand the implementation. It will be very difficult to explain any issues you encounter without some basic knowledge there, and without knowing the differences between Gazebo and Rviz. The kind of data you are looking for is displayed in Rviz, not in the Gazebo GUI.
You are specifying the wrong topic. We are using the Velodyne lidar for the point cloud, not a camera. Please familiarize yourself with the tutorial; it can help clear up any confusion you might have about the implementation.
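For reference, here is a minimal sketch of subscribing to the Velodyne point cloud rather than a camera topic. The topic name `/velodyne_points` is the Velodyne driver default and is an assumption here; verify it against `rostopic list` in your own setup.

```python
# Sketch: read the lidar PointCloud2 stream. The topic name is assumed
# to be the Velodyne driver default, /velodyne_points.
import rospy
import sensor_msgs.point_cloud2 as pc2
from sensor_msgs.msg import PointCloud2

def cloud_callback(msg):
    # Iterate over the (x, y, z) fields of the cloud, dropping NaNs.
    points = list(pc2.read_points(msg, field_names=("x", "y", "z"), skip_nans=True))
    rospy.loginfo("received %d lidar points", len(points))

if __name__ == "__main__":
    rospy.init_node("velodyne_listener")
    rospy.Subscriber("/velodyne_points", PointCloud2, cloud_callback)
    rospy.spin()
```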
There are no pre-trained model weights available for this implementation.
Hi author, I didn't find the code for selecting the local target point in this repository, which is the core innovation in the paper. Can you please answer this question? Thanks again.
This repository does not implement the full functionality of the paper, just the DRL navigation training part. Parts of the code from the paper are in: https://github.com/reiniscimurs/GDAE
Hi, I'm training in the Gazebo environment, but I don't know where the target point is. Can you tell me, please?