
In-hand object learning and recognition using 2D descriptors

@irenesanznieto irenesanznieto released this 27 Mar 15:51
· 46 commits to master since this release

In-hand object learning and recognition using 2D and 3D information [ v1.0 ] Release Notes

This first release covers the basic functionality of the software. The code so far implements in-hand object learning and recognition using (only) 2D descriptors.

Functionalities

  • Message conversion between pi_tracker's skeleton message and the custom one used within this package
  • Hand segmentation in 2D and 3D
  • Descriptor extraction in 2D using ORB
  • Descriptor extraction in 3D using PFH
  • Learner 2D (acquisition of templates): storage of the descriptors obtained from different views of an object
  • Template storage in a folder, so that previously learned objects persist between sessions.
  • Recognizer 2D (matching of templates): comparison between the acquired templates and the new object presented to the software, using the FlannBasedMatcher
  • Human-machine interface through gestures:
    • Learner / Recognizer modes: The acquisition of templates starts when the arm is stretched out towards the RGB-D sensor; otherwise, recognizer mode is launched.
    • Hand selection: The arm holding the object to be learned or recognized must be raised higher than the other arm. Only one hand can be used at a time.
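The recognizer's template-matching step can be sketched in plain Python. The package itself uses OpenCV's ORB descriptors and FlannBasedMatcher; in this hypothetical sketch, small integers stand in for 256-bit binary ORB descriptors and brute-force Hamming distance stands in for FLANN, with Lowe's ratio test deciding which matches count as good:

```python
# Sketch of 2D template matching (hypothetical helper names, not the package API).
# Integers stand in for binary ORB descriptors; brute-force Hamming
# distance stands in for OpenCV's FlannBasedMatcher.

def hamming(a: int, b: int) -> int:
    """Hamming distance between two binary descriptors."""
    return bin(a ^ b).count("1")

def match_count(query, template, ratio=0.75):
    """Count query descriptors passing Lowe's ratio test against a template."""
    good = 0
    for q in query:
        # Distances to the two nearest template descriptors.
        dists = sorted(hamming(q, t) for t in template)
        if len(dists) >= 2 and dists[0] < ratio * dists[1]:
            good += 1
    return good

def recognize(query, templates):
    """Return the object ID whose stored templates best match the query."""
    return max(templates, key=lambda oid: match_count(query, templates[oid]))
```

For example, `recognize(descriptors, {"mug": mug_templates, "ball": ball_templates})` would return the ID with the most good matches; the real pipeline does the same comparison over the templates acquired in learner mode.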

ToDo

  • Template acquisition from files (currently not working).
  • Feedback implementation: communication between human and machine during recognition. If matching is ambiguous between several objects, the software asks the human for more information, for example another view.
  • Learner and recognizer 3D: Implement those features using 3D descriptors.
  • Decision-making algorithm that takes the 2D and 3D matcher results as input and outputs the ID of the object.
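One possible shape for the planned decision-making step, purely as a hedged sketch (the function and parameter names are hypothetical, and a simple weighted sum is only one of many fusion strategies): normalize the 2D and 3D matcher scores per object and pick the ID with the highest combined score.

```python
# Hypothetical sketch of 2D/3D score fusion; not part of the released code.

def fuse_scores(scores_2d, scores_3d, w2d=0.5):
    """Combine per-object 2D and 3D matcher scores and return the best object ID.

    scores_2d / scores_3d: dict mapping object ID -> normalized match score.
    w2d: weight given to the 2D matcher (1 - w2d goes to the 3D matcher).
    """
    ids = set(scores_2d) | set(scores_3d)
    combined = {
        oid: w2d * scores_2d.get(oid, 0.0) + (1.0 - w2d) * scores_3d.get(oid, 0.0)
        for oid in ids
    }
    return max(combined, key=combined.get)
```

A weighted sum keeps the decision transparent and lets the weight be tuned (or learned) once the 3D recognizer exists; disagreement between the two matchers could also trigger the feedback request described above.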