In this tutorial you’ll learn how to deploy an example use case to familiarize yourself with Azure DeepStream Accelerator. Using pre-trained models and pre-recorded videos of a simulated manufacturing environment, you’ll perform the steps required to create an Edge AI solution. You’ll learn how to identify product defects, prepare and deploy your computer vision (CV) model to your Edge device, and verify the results using the included web app.
This tutorial guides you through five major steps to create your Edge AI solution using Azure DeepStream Accelerator:
- Step 1. Confirm your Edge modules are running
- Step 2. Upload the model
- Step 3. Update the deployment manifest file
- Step 4. Deploy your updates to the Edge device
- Step 5. Verify your results
Before beginning this tutorial, make sure you have completed all the prerequisites in Prerequisite checklist for Azure DeepStream Accelerator.
Use Visual Studio (VS) Code to confirm the modules you deployed in the Prerequisite checklist for Azure DeepStream Accelerator section are running. You can do this by checking the module status in the Azure IoT Hub section of the left navigation in VS Code.
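If you prefer a command-line check, you can also list the running modules directly on the device. This is a minimal sketch that assumes you have SSH access to your Edge device and that the IoT Edge runtime is installed:

```bash
# On the Edge device: list all IoT Edge modules and their status.
# Each module deployed from your manifest should report "running".
iotedge list
```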
This tutorial includes a pre-trained MaskRCNN model and parser.
- For information about pre-trained models, visit pre-trained model. Then download the manufacturing.zip file.
- To familiarize yourself with the MaskRCNN model, visit Mask R-CNN model.
- For information on how the model was trained using transfer learning, follow the steps in this Jupyter Notebook.
We recommend you upload the pre-trained model to a publicly accessible web server. You can do this using the Azure Storage Account you created in Prerequisite checklist for Azure DeepStream Accelerator.
- To find your containers, open the Azure portal and go to your Azure Storage account. Then, on the left navigation, select Containers.
- Create a new blob container with the Container (anonymous read) access level so the models you upload are publicly accessible. Save the path for this container, as you’ll need it in the next step.
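As a sketch, the same setup can be done with the Azure CLI. The storage account name mystorageacct and the container name models below are placeholders, not values from this tutorial:

```bash
# Create a blob container that allows anonymous public read access.
# (Requires "Allow Blob public access" to be enabled on the account.)
az storage container create \
    --account-name mystorageacct \
    --name models \
    --public-access container \
    --auth-mode login

# Upload the pre-trained model package to the container.
az storage blob upload \
    --account-name mystorageacct \
    --container-name models \
    --name manufacturing.zip \
    --file manufacturing.zip \
    --auth-mode login
```

The resulting blob URL is the value you’ll point to in the next step.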
In this step you’ll learn how to update the deployment manifest file. First you’ll download the template file and then you’ll modify the values in the file for an x86 device.
You can also modify the template for an ARM device by downloading the appropriate file and using the same steps below.
- Visit the template file location in your local clone of the repository.
- Update the Controller section of the manifest template file. To do this, update the `unsecureZipUrl` value to point to the URL of the model you uploaded in the previous step. For this tutorial, we’ve created a zip file you can use, but if you want to understand how the zip file is structured, visit Zip File Structure.
- Update the `primaryModelConfigPath` to the following value: `pgie_config.txt`.

```JSON
"pipelineConfigs": [
    {
        "id": "PeopleDetection", <--- Feel free to change this value to something more like 'ManufacturingDefectDetection' if you'd like, but if you do, please also change it in the 'streams' section of your JSON.
        "unsecureZipUrl": "", <------- HERE
        "primaryModelConfigPath": "pgie_config.txt", <----- CHANGE TO THIS VALUE
        "secondaryModelConfigPaths": "",
        "trackerConfigPath": "",
        "deepstreamPassthrough": "",
        "pipelineOptions": {
            "dewarp": {
                "enable": false
            },
            "crop": {
                "enable": false
            },
            "osd": {
                "enable": true
            }
        }
    }
]
```
- Also in the Azure Controller section of the manifest template file, change the `endpoint` sensor URI to `rtsp://rtspsim:554/media/sample-input-1.mkv`, as shown in the following code sample.

```JSON
"azdaConfiguration": {
    "platform": "DeepStream",
    "components": {
        "sensors": [
            {
                "name": "stream-01",
                "kind": {
                    "type": "vision",
                    "subtype": "RTSP"
                },
                "endpoint": "rtsp://rtspsim:554/media/sample-input-1.mkv", <----- CHANGE TO THIS
```
- To see inference bounding boxes on the video stream output, enable on-screen display (OSD): in `pipelineConfigs`, under `osd`, set the `enable` value to `true`.
- Save your manifest template file changes.
- To generate your deployment manifest, right-click the template in VS Code and select Generate IoT Edge deployment manifest.
In this step, you’ll use the module twin feature of Azure IoT Edge to update the Edge device with the pre-trained module and sample video.
- For more information about module twins, visit Understand and use module twins in IoT Hub.
- For more information about deploying edge modules, visit Deploy Azure IoT Edge modules from Visual Studio Code.
- To deploy, right-click the deployment manifest JSON (not the template) and select Create deployment for single device.
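If you’d rather deploy from the command line, the Azure CLI offers an equivalent. This sketch assumes your generated manifest is at config/deployment.amd64.json and uses placeholder hub and device names:

```bash
# Deploy the generated manifest to a single Edge device.
# Requires the Azure CLI IoT extension (az extension add --name azure-iot).
# <my-hub> and <my-edge-device> are placeholders for your own names.
az iot edge set-modules \
    --hub-name <my-hub> \
    --device-id <my-edge-device> \
    --content ./config/deployment.amd64.json
```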
When you’ve updated the Edge device with the model package, the simulated video runs automatically, and the model will begin making inferences.
More specifically, the system will go through these steps:
- The AI-Pipeline module will download the zip file from the URL that is specified.
- The AI-Pipeline module will extract the zip file's contents and verify that it can find the model configuration file.
- The AI-Pipeline module will construct a DeepStream pipeline on the fly by choosing and configuring a few DeepStream/GStreamer plugins based on the model configuration and the Controller module's twin.
- This DeepStream pipeline will attempt to:
- Connect to all specified sources.
- Deserialize the AI model, then reserialize it into the format the pipeline needs (in this case, TensorRT).
- Load the serialized AI model into the GPU.
- Run the pipeline.
Please note that DeepStream (and GStreamer) is quite verbose with its logs and warnings. It is perfectly normal for it to attempt several model (de)serialization approaches before it settles on one that works.
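To watch this happen, you can tail the pipeline module's logs on the device. A minimal sketch, assuming the module is named ai-pipeline as in this article's manifest snippets:

```bash
# On the Edge device: show recent log output from the AI pipeline module,
# including its model (de)serialization attempts.
iotedge logs ai-pipeline --tail 100
```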
Using one of the approaches below, you should be able to view the results.
The Player web app allows you to play recorded videos and define regions of interest (ROIs) for the different video streams deployed as part of this solution. It communicates the ROI locations to the Business Logic module so that the module can determine whether detected objects appear within an ROI. This web app does not enable you to view "live-stream" videos from your Edge device.
- To install the video player, follow the instructions in How to use the Player web app.
- Use the Player web app to navigate to the Blob Storage Account that you created earlier in this tutorial to host your videos.
- Confirm that your videos are being uploaded there.
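You can also spot-check the uploads from the CLI. A sketch using placeholder names (mystorageacct and the videos container are assumptions, not values from this tutorial):

```bash
# List the recorded video blobs to confirm uploads are arriving.
az storage blob list \
    --account-name mystorageacct \
    --container-name videos \
    --output table \
    --auth-mode login
```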
By default, Azure DeepStream Accelerator streams a view of your results to `rtsp://localhost:8554/`. You can see the exact URI in the AI pipeline module's logs. For security reasons, this stream takes a little more work to view, but it can be done.
Specifically, you need to:
- Expose the stream from the Docker container to your device's host OS.
- Expose the stream from the device's host OS to whatever network or device you want to view the stream on.
Note: These port bindings should already be present by default in our deployment manifest templates, so this section mostly explains what they do. If you'd like, you can remove this port binding, as it is a security best practice to remove bindings that you do not use.
- Ensure this snippet is present in your deployment manifest template inside the `createOptions` for ai-pipeline (it should already be there):

```JSON
"ExposedPorts": {
    "8554/tcp": {}
},
```
- Add this snippet inside `"HostConfig"` of the same section:

```JSON
"PortBindings": {
    "8554/tcp": [
        {
            "HostPort": "8554"
        }
    ]
},
```
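For orientation, here is a minimal sketch of how the two snippets relate to each other inside the ai-pipeline module's `createOptions`. The image value is a placeholder, all other fields are elided, and the exact nesting varies (in a generated manifest, `createOptions` is serialized as a string):

```JSON
"ai-pipeline": {
    "settings": {
        "image": "<ai-pipeline-image>",
        "createOptions": {
            "ExposedPorts": {
                "8554/tcp": {}
            },
            "HostConfig": {
                "PortBindings": {
                    "8554/tcp": [
                        {
                            "HostPort": "8554"
                        }
                    ]
                }
            }
        }
    }
}
```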
Now regenerate your deployment manifest and redeploy (if these sections weren't already there).
Exposing the stream from your device's host OS to another machine is a little more cumbersome. There are many ways to do this, depending on your end goal. If your device has a monitor and keyboard plugged in, you can simply open VLC on the device and view the results by going to your stream's endpoint, provided the device has the CPU/GPU bandwidth to handle that in addition to the running pipeline.
But if your device is headless, you can try port forwarding. The following instructions create an SSH tunnel between your device and your computer and forward the traffic from port 8554 to your computer through that tunnel, gaining encryption and authentication from the SSH protocol.
- First, make sure that your device allows port forwarding. Edit `/etc/ssh/sshd_config` (if present) to make sure `AllowTcpForwarding` is set to `yes`. Then restart your device.
- Open a PowerShell session (Windows) or a terminal session (Linux/Mac). This will be the SSH tunnel.
- Type `ssh -L 8554:localhost:8554 <ssh-user>@<ssh-address>`. This should open an SSH tunnel with port forwarding to and from your device.
- Make sure that SSH session stays open while you open VLC and enter `rtsp://localhost:8554/<your-pipeline-id>` as the stream endpoint. Careful! The stream endpoint URI in VLC is case-sensitive, but there is a bug where the endpoint is mapped in a case-insensitive way to a previously used endpoint if one matches. So if you enter the ID with the wrong case, you will not be able to access your actual endpoint.

Congratulations! You have successfully created and deployed an Edge AI solution using the Azure DeepStream Accelerator Getting Started Path.
If you're not going to continue to use your Azure resources, you may choose to delete them. For more information, visit Azure Resource Manager resource group and resource deletion.
Note
When you delete a resource group:
- All the resources in that group are deleted.
- It’s irreversible.
If you want to save some of the resources, delete the unwanted resources individually.
Now that you have completed the Azure DeepStream Accelerator Getting Started Path tutorial, we recommend the following tutorials:
- Tutorial: Azure DeepStream Accelerator - Pre-built model path to build and deploy a CV solution using one of several popular pre-supported models and your own video stream.
- Tutorial: Azure DeepStream Accelerator - Bring your own model (BYOM) path to build and deploy a CV solution using your own custom model and parser.
If you encounter issues when you are creating an Edge AI solution using Azure DeepStream Accelerator, visit: