# Sign Language Gesture Recognition on the Intelligent Edge using Azure Cognitive Services

For this walkthrough we will use an Android phone or tablet as the Intelligent Edge device. The goal is to show how quickly we can create an image recognition model with the Custom Vision Service and export it for offline use at the edge.

If you just want to test the app, you can download the APK file from here.

*(Image: Sign Language)*

## Setup

### Create Sign Language Recognition ML Model

- Sign in to the Custom Vision Service using your Azure account
- Create a new project with the domain set to General (compact)
- Upload all images from the `dataset\A` folder with the tag `A`
- Repeat the above step for every letter in the dataset (see the scripted-upload sketch after this list)
- Click the Train button at the top to start training the model
- Once training is complete, use the Quick Test button to upload a new image and check the prediction

*(Screenshot: Custom Vision Service)*
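If you would rather script the uploads than drag folders into the portal, something like the following works against the Custom Vision Training REST API. This is only a minimal sketch: the endpoint, training key, project ID, and tag ID are placeholders you copy from the project's settings page, and the `v3.0/training` path is an assumption — check the current Custom Vision Training API reference for your region and API version.

```java
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.stream.Collectors;
import java.util.stream.Stream;

/** Uploads every image in dataset\A to a Custom Vision project under one tag (sketch). */
public class UploadImages {
    // Placeholders -- copy these values from the Custom Vision portal's project settings.
    static final String ENDPOINT = "https://<your-region>.api.cognitive.microsoft.com";
    static final String TRAINING_KEY = "<your-training-key>";
    static final String PROJECT_ID = "<your-project-id>";
    static final String TAG_ID = "<tag-id-for-A>";

    public static void main(String[] args) throws Exception {
        Path folder = Paths.get("dataset", "A"); // repeat per letter folder
        try (Stream<Path> images = Files.list(folder)) {
            for (Path image : images.collect(Collectors.toList())) {
                upload(image);
            }
        }
    }

    static void upload(Path image) throws Exception {
        // Assumed REST path for the v3.0 Training API; verify against the API reference.
        URL url = new URL(ENDPOINT + "/customvision/v3.0/training/projects/"
                + PROJECT_ID + "/images?tagIds=" + TAG_ID);
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Training-Key", TRAINING_KEY);
        conn.setRequestProperty("Content-Type", "application/octet-stream");
        conn.setDoOutput(true);
        try (OutputStream out = conn.getOutputStream()) {
            out.write(Files.readAllBytes(image)); // raw image bytes as the request body
        }
        System.out.println(image.getFileName() + " -> HTTP " + conn.getResponseCode());
        conn.disconnect();
    }
}
```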

### Export the ML Model

- Under the Performance tab, click Export
- Select Android (TensorFlow) and download the zip file
- Extract the zip file and verify that it contains the `model.pb` and `labels.txt` files (a quick verification sketch follows this list)
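A quick way to confirm the export contains what the app expects is to list the zip entries before unpacking. Minimal sketch; `export.zip` is a placeholder for whatever filename the portal downloads.

```java
import java.util.Enumeration;
import java.util.zip.ZipEntry;
import java.util.zip.ZipFile;

/** Lists the entries of the exported Custom Vision zip and flags the two files the app needs. */
public class VerifyExport {
    public static void main(String[] args) throws Exception {
        // "export.zip" is a placeholder name; use the file the portal actually downloaded.
        try (ZipFile zip = new ZipFile("export.zip")) {
            boolean hasModel = false, hasLabels = false;
            Enumeration<? extends ZipEntry> entries = zip.entries();
            while (entries.hasMoreElements()) {
                String name = entries.nextElement().getName();
                System.out.println(name);
                hasModel |= name.endsWith("model.pb");
                hasLabels |= name.endsWith("labels.txt");
            }
            System.out.println("model.pb found:   " + hasModel);
            System.out.println("labels.txt found: " + hasLabels);
        }
    }
}
```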

### Create the Android App
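The usual pattern for consuming a Custom Vision TensorFlow export on Android is to copy `model.pb` and `labels.txt` into the module's `assets` folder, add the `org.tensorflow:tensorflow-android` dependency, and run each camera frame through `TensorFlowInferenceInterface`. The sketch below is a rough outline under those assumptions, not this project's actual code; in particular the input size (227), the BGR channel order, and the node names `Placeholder` and `loss` are values Custom Vision exports have commonly used — confirm them against the graph in your own zip.

```java
import android.content.res.AssetManager;
import android.graphics.Bitmap;
import org.tensorflow.contrib.android.TensorFlowInferenceInterface;

/** Minimal classifier around a Custom Vision TensorFlow export placed in assets/ (sketch). */
public class SignClassifier {
    // Assumed values -- verify against your exported graph and labels.txt.
    private static final int INPUT_SIZE = 227;
    private static final String INPUT_NODE = "Placeholder";
    private static final String OUTPUT_NODE = "loss";

    private final TensorFlowInferenceInterface inference;
    private final String[] labels;

    public SignClassifier(AssetManager assets, String[] labels) {
        this.inference = new TensorFlowInferenceInterface(assets, "file:///android_asset/model.pb");
        this.labels = labels; // one label per line of labels.txt, e.g. "A".."Z"
    }

    /** Returns the label with the highest score for a square camera frame. */
    public String classify(Bitmap frame) {
        Bitmap scaled = Bitmap.createScaledBitmap(frame, INPUT_SIZE, INPUT_SIZE, true);
        int[] pixels = new int[INPUT_SIZE * INPUT_SIZE];
        scaled.getPixels(pixels, 0, INPUT_SIZE, 0, 0, INPUT_SIZE, INPUT_SIZE);

        // Unpack ARGB ints into a float tensor in BGR order (assumed; check your export).
        float[] input = new float[INPUT_SIZE * INPUT_SIZE * 3];
        for (int i = 0; i < pixels.length; i++) {
            int p = pixels[i];
            input[i * 3]     = p & 0xFF;          // B
            input[i * 3 + 1] = (p >> 8) & 0xFF;   // G
            input[i * 3 + 2] = (p >> 16) & 0xFF;  // R
        }

        float[] output = new float[labels.length];
        inference.feed(INPUT_NODE, input, 1, INPUT_SIZE, INPUT_SIZE, 3);
        inference.run(new String[] { OUTPUT_NODE });
        inference.fetch(OUTPUT_NODE, output);

        // Pick the index with the highest score and map it back to a letter.
        int best = 0;
        for (int i = 1; i < output.length; i++) {
            if (output[i] > output[best]) best = i;
        }
        return labels[best];
    }
}
```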

### Deploy the Android App on the device

- First, enable Developer Mode and USB Debugging on the Android device
  - See instructions for the Samsung Galaxy S7 here
- Connect your device to the laptop via USB
- Click Run and select the app
- Select the connected device
  - On the first run, allow the camera and other permissions and then run the app again (see the permission-request sketch after this list)
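On Android 6.0 and later the camera permission also has to be requested at runtime, not just declared in the manifest; a denied request is the usual cause of a blank preview on first launch. A minimal sketch, assuming AndroidX (the project may use the older support-library equivalents); the request code `42` is arbitrary.

```java
import android.Manifest;
import android.content.pm.PackageManager;
import androidx.appcompat.app.AppCompatActivity;
import androidx.core.app.ActivityCompat;
import androidx.core.content.ContextCompat;

/** Requests the camera permission at runtime before the preview is started. */
public class PermissionHelper {
    private static final int REQUEST_CAMERA = 42; // arbitrary request code

    /** Returns true if the camera permission is already granted, otherwise asks for it. */
    public static boolean ensureCameraPermission(AppCompatActivity activity) {
        if (ContextCompat.checkSelfPermission(activity, Manifest.permission.CAMERA)
                == PackageManager.PERMISSION_GRANTED) {
            return true;
        }
        ActivityCompat.requestPermissions(
                activity, new String[] { Manifest.permission.CAMERA }, REQUEST_CAMERA);
        return false; // the user's answer arrives in onRequestPermissionsResult()
    }
}
```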

## Testing

Test 1 - Y

Test 2 - P

Test 3 - W

Test 4 - V