This is a CLI tool for recognizing hand gestures in real time. Hand gestures are an aspect of body language conveyed through the position of the palm, the placement of the fingers, and the overall shape formed by the hand.
Hand-gesture recognition is an active field of research because it can facilitate communication and provide a natural means of interacting with computers and a variety of applications.
This project uses a pre-trained machine learning model from MediaPipe, an open-source framework provided by Google, to detect 10 distinct gestures, and uses OpenCV to capture video frames in real time.
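MediaPipe represents a detected hand as 21 landmarks with a fixed index layout (fingertips at indices 4, 8, 12, 16, and 20; the corresponding PIP joints at 6, 10, 14, and 18). The project itself relies on MediaPipe's pre-trained gesture model, but a minimal sketch of how that landmark layout can be turned into a coarse gesture label might look like this; the function and label names here are illustrative assumptions, not part of the project:

```python
# Illustrative sketch only: a naive finger-extension heuristic over
# MediaPipe's 21-point hand-landmark layout. The real project uses the
# pre-trained MediaPipe gesture model instead of this heuristic.

FINGER_TIPS = [8, 12, 16, 20]   # index, middle, ring, pinky fingertips
FINGER_PIPS = [6, 10, 14, 18]   # the PIP joint below each of those tips

def count_extended_fingers(landmarks):
    """landmarks: list of 21 (x, y) tuples in image coordinates,
    where y grows downward. A finger counts as extended when its
    tip sits above (smaller y than) its PIP joint."""
    return sum(
        1 for tip, pip in zip(FINGER_TIPS, FINGER_PIPS)
        if landmarks[tip][1] < landmarks[pip][1]
    )

def label_gesture(landmarks):
    """Map the extended-finger count to a coarse label (hypothetical
    labels, far simpler than the model's 10-gesture vocabulary)."""
    n = count_extended_fingers(landmarks)
    return {0: "fist", 2: "victory", 4: "open_palm"}.get(n, "unknown")
```

For example, a synthetic hand with all four fingertips raised above their PIP joints would be labeled `open_palm`, while one with every tip at or below its PIP joint would be labeled `fist`. The pre-trained model is preferred in practice because hand rotation and the thumb make simple y-coordinate rules unreliable.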
The main reason for using MediaPipe in this project is that it runs optimized machine learning models on resource-constrained devices. It also offers a modular and extensible architecture, allowing developers to customize and extend the pipeline's components to fit their specific needs. This flexibility enables integration with custom machine learning models and adaptation to specific input/output requirements. In addition, these pre-trained models can easily be deployed on a range of devices and modified for individual use cases.