Before starting any work on this project, please complete all steps for Exercises 1, 2, and 3. At the end of Exercise 3 you will have a pipeline that can identify points that belong to a specific object.
In this project, you must combine your work from the previous exercises to successfully complete a tabletop pick and place operation with the PR2.
The PR2 has been outfitted with an RGB-D sensor much like the one you used in the previous exercises. This sensor, however, is a bit noisy, much like real sensors.
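Because the data is noisy, the first stage of the pipeline is typically a statistical outlier filter. Here is a minimal sketch using the python-pcl bindings from the exercises; the input filename and the mean_k/threshold values are illustrative assumptions you should tune for your scene:

import pcl

# Load a raw, noisy cloud (filename is just an example).
cloud = pcl.load_XYZRGB('noisy_tabletop.pcd')

# Statistical outlier removal: examine each point's 50 nearest neighbors
# and drop points whose mean neighbor distance is more than 1.0 standard
# deviations above the global mean distance.
outlier_filter = cloud.make_statistical_outlier_filter()
outlier_filter.set_mean_k(50)               # neighbors analyzed per point (tune)
outlier_filter.set_std_dev_mul_thresh(1.0)  # std-dev multiplier threshold (tune)
cloud_filtered = outlier_filter.filter()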
Given the cluttered tabletop scenario, you must implement a perception pipeline using your work from Exercises 1, 2, and 3 to identify target objects from a so-called “Pick-List” in that particular order, pick up those objects, and place them in the corresponding dropboxes.
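As a reminder of how the exercise pieces fit together, here is a hedged sketch of the remaining segmentation stages (voxel downsampling, passthrough cropping, RANSAC plane removal, Euclidean clustering) in python-pcl. All numeric parameters are assumptions to tune, cloud_filtered is the denoised cloud from the sketch above, and XYZRGB_to_XYZ is the helper from the exercises' pcl_helper module:

import pcl
from sensor_stick.pcl_helper import XYZRGB_to_XYZ  # helper from the exercises

# Downsample with a voxel grid to cut the point count.
vox = cloud_filtered.make_voxel_grid_filter()
vox.set_leaf_size(0.01, 0.01, 0.01)      # 1 cm voxels (illustrative)
cloud_down = vox.filter()

# Passthrough filter: keep only the tabletop region along z.
passthrough = cloud_down.make_passthrough_filter()
passthrough.set_filter_field_name('z')
passthrough.set_filter_limits(0.6, 1.1)  # assumed table-height range
cloud_cropped = passthrough.filter()

# RANSAC plane segmentation separates the table from the objects.
seg = cloud_cropped.make_segmenter()
seg.set_model_type(pcl.SACMODEL_PLANE)
seg.set_method_type(pcl.SAC_RANSAC)
seg.set_distance_threshold(0.01)
inliers, _ = seg.segment()
cloud_objects = cloud_cropped.extract(inliers, negative=True)  # objects only

# Euclidean clustering splits the objects cloud into per-object clusters.
white_cloud = XYZRGB_to_XYZ(cloud_objects)
tree = white_cloud.make_kdtree()
ec = white_cloud.make_EuclideanClusterExtraction()
ec.set_ClusterTolerance(0.05)
ec.set_MinClusterSize(100)
ec.set_MaxClusterSize(3000)
ec.set_SearchMethod(tree)
cluster_indices = ec.Extract()  # list of point-index lists, one per object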
This repo can be used as a reference for Object Recognition in Udacity's perception exercises.
For this setup, catkin_ws is the name of the active ROS workspace. If your workspace name is different, change the commands accordingly. If you do not have an active ROS workspace, you can create one as follows:
$ mkdir -p ~/catkin_ws/src
$ cd ~/catkin_ws/
$ catkin_make
Now that you have a workspace, clone or download this repo into the src directory of your workspace:
$ cd ~/catkin_ws/src
Install git (Skip if already installed):
$ sudo apt install git
Clone the repo:
$ git clone https://github.com/YashKSahu/3D-Perception.git
Note: If you have the Kinematics Pick and Place project in the same ROS workspace as this project, please remove the 'gazebo_grasp_plugin' directory from the RoboND-Perception-Project/ directory; otherwise, ignore this note.
Now install missing dependencies using rosdep install:
$ cd ~/catkin_ws
$ rosdep install --from-paths src --ignore-src --rosdistro=kinetic -y
Build the project:
$ cd ~/catkin_ws
$ catkin_make
Add the following line to your .bashrc file:
export GAZEBO_MODEL_PATH=~/catkin_ws/src/RoboND-Perception-Project/pr2_robot/models:$GAZEBO_MODEL_PATH
If you haven’t already, add the following line to your .bashrc so the workspace is auto-sourced in all new terminals:
source ~/catkin_ws/devel/setup.bash
To run the demo:
$ cd ~/catkin_ws/src/RoboND-Perception-Project/pr2_robot/scripts
$ chmod u+x pr2_safe_spawner.sh
$ ./pr2_safe_spawner.sh
Once Gazebo is up and running, make sure you see the following in the Gazebo world:
- Robot
- Table arrangement
- Three target objects on the table
- Dropboxes on either side of the robot
In your RViz window, you should see the robot and a partial collision map displayed:
Proceed through the demo by pressing the ‘Next’ button in the RViz window whenever a prompt appears in your active terminal.
The demo ends when the robot has successfully picked and placed all objects into their respective dropboxes (though sometimes the robot gets excited and throws objects across the room!)
Close all active terminal windows using ctrl+c before restarting the demo.
You can launch the project scenario like this:
$ roslaunch pr2_robot pick_place_project.launch
Launch the Python script (in a new terminal) to execute object recognition:
$ source ~/catkin_ws/devel/setup.bash
$ cd ~/catkin_ws/src/RoboND-Perception-Project/pr2_robot/scripts
$ rosrun pr2_robot project_template.py
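Inside project_template.py, each cluster is classified with the SVM trained in Exercise 3 and the detected labels are matched against the pick list on the parameter server. The sketch below is illustrative rather than the exact project code: it assumes the feature helpers and GetNormals service from the exercises, a model.sav file produced by training, and a ros_cluster PointCloud2 message for one segmented object:

import pickle
import numpy as np
import rospy
from sklearn.preprocessing import LabelEncoder
from sensor_stick.features import compute_color_histograms, compute_normal_histograms
from sensor_stick.srv import GetNormals

def get_normals(cloud):
    # Service helper from the exercises: estimates surface normals for a cloud.
    get_normals_prox = rospy.ServiceProxy('/feature_extractor/get_normals', GetNormals)
    return get_normals_prox(cloud).cluster

# Load the SVM model trained in Exercise 3 (filename is an example).
model = pickle.load(open('model.sav', 'rb'))
clf = model['classifier']
encoder = LabelEncoder()
encoder.classes_ = model['classes']
scaler = model['scaler']

def classify_cluster(ros_cluster):
    # Build the same color + normal histogram feature vector used in training.
    chists = compute_color_histograms(ros_cluster, using_hsv=True)
    nhists = compute_normal_histograms(get_normals(ros_cluster))
    feature = np.concatenate((chists, nhists))
    prediction = clf.predict(scaler.transform(feature.reshape(1, -1)))
    return encoder.inverse_transform(prediction)[0]

# The pick list lives on the parameter server; each entry names an object
# and the dropbox group it should be placed in.
object_list_param = rospy.get_param('/object_list')
for entry in object_list_param:
    print('Pick %s -> place in %s dropbox' % (entry['name'], entry['group']))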
Note: The robot is a bit moody at times and might leave objects on the table or fling them across the room :D