Using CoppeliaSim Image with ROS and Python+YOLOv3 Detection

Guide

Description

This project uses the CoppeliaSim simulator as its environment, with a scene that simulates a table holding a notebook and a banana. A Python script accesses the image that CoppeliaSim's camera captures and publishes via ROS (Robot Operating System), and then runs YOLOv3 (You Only Look Once, version 3) to recognize the objects in that image. The aim is to show how object recognition can be applied in simulated environments for robotics applications.


Features

  • Capture of the simulated environment's image using ROS
  • Use of YOLOv3 for object recognition in the image
  • Display of the object recognition results on the image

Technologies used:

  • Python 3.x
  • CoppeliaSim
  • ROS Noetic (Robot Operating System)
  • OpenCV
  • NumPy
  • YOLOv3
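
As a quick sanity check that the Python-side dependencies listed above are available, you can run a short snippet like the following (illustrative only, not part of the repository):

import cv2          # OpenCV; its cv2.dnn module is used for YOLOv3
import numpy as np  # NumPy
import rospy        # ROS Noetic Python client library

print("OpenCV:", cv2.__version__)
print("NumPy:", np.__version__)
print("rospy imported OK")  # rospy does not expose a version attribute; a clean import is enough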

Installation:

  • CoppeliaSim
Follow the instructions on the website; I used the EDU version.
  • ROS
Follow the instructions on the website; I used the ROS Noetic version.
  • YOLOv3

You will need the following files on your machine (available in the "yoloDados" folder); a minimal loading sketch is shown after the list:

  - YoloNames.names
  - yolov3.cfg
  - yolov3.weights
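
For reference, here is a minimal sketch of how these three files are typically loaded with OpenCV's DNN module; the paths are assumptions relative to the repository root, and the repository's own script may load them differently:

import cv2

# Assumed locations inside the repository (adjust to your layout).
NAMES_PATH = "yoloDados/YoloNames.names"
CFG_PATH = "yoloDados/yolov3.cfg"
WEIGHTS_PATH = "yoloDados/yolov3.weights"

# Class labels: one name per line in the .names file.
with open(NAMES_PATH) as f:
    class_names = [line.strip() for line in f if line.strip()]

# Build the Darknet-format YOLOv3 network from its config and weights.
net = cv2.dnn.readNetFromDarknet(CFG_PATH, WEIGHTS_PATH)

# The unconnected output layers are the YOLO detection heads used at inference time.
print(len(class_names), "classes;", net.getUnconnectedOutLayersNames())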

Application

  • Open a terminal, set up the ROS environment, and start roscore so that nodes can communicate:
$ source /opt/ros/noetic/setup.bash
$ roscore
  • Open the CoppeliaSim software:
$ source /opt/ros/noetic/setup.bash
$ cd ~/yolov3_ros_coppeliasim/CoppeliaSim
$ ./coppeliaSim.sh
  • Load the scene:
File > Open scene... > realsense-yolo.ttt


  • Run the scene (press the start/play button in CoppeliaSim).


  • To check that the image is being published, run the following command in the terminal:
$ source /opt/ros/noetic/setup.bash
$ rostopic list
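
The stream can also be verified from Python by waiting for a single message; the topic name below is a placeholder, so substitute the camera topic reported by rostopic list:

import rospy
from sensor_msgs.msg import Image

IMAGE_TOPIC = "/image"  # placeholder: use the topic shown by `rostopic list`

rospy.init_node("image_topic_check", anonymous=True)
msg = rospy.wait_for_message(IMAGE_TOPIC, Image, timeout=10.0)
print("Received a %dx%d image, encoding=%s" % (msg.width, msg.height, msg.encoding))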


  • Another way to check is to run the "rqt" command, which lets you visualize the published image.


  • Now run the Python script in another terminal:
$ source /opt/ros/noetic/setup.bash
$ cd ~/yolov3_ros_coppeliasim/scripts
$ python3 yolov3_ros_coppeliasim.py
  • The output should be a window showing the objects recognized in the scene, together with their descriptions printed in the terminal; a condensed sketch of the node's logic follows.

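For orientation, the following is a condensed sketch of what a node like yolov3_ros_coppeliasim.py typically does: subscribe to the camera topic, convert the ROS image with cv_bridge, run YOLOv3 through OpenCV's DNN module, and display the labelled detections. The topic name, file paths, and threshold are assumptions; the script in the scripts folder is the authoritative version.

import cv2
import rospy
from cv_bridge import CvBridge
from sensor_msgs.msg import Image

# Assumed values; the repository's script may differ.
IMAGE_TOPIC = "/image"
CFG, WEIGHTS, NAMES = "yoloDados/yolov3.cfg", "yoloDados/yolov3.weights", "yoloDados/YoloNames.names"
CONF_THRESHOLD = 0.5

bridge = CvBridge()
net = cv2.dnn.readNetFromDarknet(CFG, WEIGHTS)
classes = [c.strip() for c in open(NAMES) if c.strip()]

def callback(msg):
    # Convert the ROS Image message to an OpenCV BGR array.
    frame = bridge.imgmsg_to_cv2(msg, desired_encoding="bgr8")
    h, w = frame.shape[:2]

    # YOLOv3 expects a 416x416 blob, scaled to [0, 1], with RGB channel order.
    blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416), swapRB=True, crop=False)
    net.setInput(blob)
    outputs = net.forward(net.getUnconnectedOutLayersNames())

    # Each detection row is [cx, cy, bw, bh, objectness, class scores...],
    # with box coordinates normalized to the input image size.
    for out in outputs:
        for det in out:
            scores = det[5:]
            class_id = int(scores.argmax())
            if scores[class_id] > CONF_THRESHOLD:
                cx, cy, bw, bh = det[0] * w, det[1] * h, det[2] * w, det[3] * h
                x, y = int(cx - bw / 2), int(cy - bh / 2)
                cv2.rectangle(frame, (x, y), (x + int(bw), y + int(bh)), (0, 255, 0), 2)
                cv2.putText(frame, classes[class_id], (x, y - 5),
                            cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)
                print("Detected:", classes[class_id])

    cv2.imshow("YOLOv3 detections", frame)
    cv2.waitKey(1)

rospy.init_node("yolov3_ros_coppeliasim_sketch")
rospy.Subscriber(IMAGE_TOPIC, Image, callback, queue_size=1)
rospy.spin()

A real implementation would also apply non-maximum suppression (e.g. cv2.dnn.NMSBoxes) so that overlapping boxes for the same object are collapsed into one.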

References

Reference article:

  • Redmon, Joseph, and Ali Farhadi. "YOLOv3: An Incremental Improvement." arXiv preprint arXiv:1804.02767 (2018).
