Tutorial source: https://aws.amazon.com/blogs/machine-learning/blur-faces-in-videos-automatically-with-amazon-rekognition-video/
Face blurring is one of the best-known practices for anonymizing both images and videos. In this tutorial we will implement an event-driven face-blurring pipeline composed of several Lambda functions orchestrated by a state machine.
Here is the high-level architecture:
Let's get started.
- Deploy your resources in a region where Amazon Rekognition is supported.
- Open the Functions page of the Lambda console.
- Choose Create function.
- Choose Author from scratch.
- Create a `Python 3.9` runtime function from scratch.
- Copy and deploy `face-blur-lambdas/face-detection/*.py` as the function source code (use the console code editor); a rough sketch of this handler's logic appears below.
- The IAM role of this function should have the following permissions: `AmazonS3FullAccess`, `AmazonRekognitionFullAccess`, and `AWSStepFunctionsFullAccess`. It's recommended to use the same IAM role for all functions!
- Configure a trigger for All object create events on a given S3 bucket, filtered to objects with the `.mp4` suffix (create a bucket and enable event notifications if needed).
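The repository file above is what you actually deploy; purely as a hedged illustration of what this step wires together, a minimal face-detection handler might look roughly like the sketch below. The use of boto3 and the exact input passed to the state machine (`JobId`, `Bucket`, `Key`) are assumptions; `STATE_MACHINE_ARN` is the environment variable configured later in this guide.

```python
import json
import os

import boto3

rekognition = boto3.client("rekognition")
sfn = boto3.client("stepfunctions")


def lambda_handler(event, context):
    # Triggered by the S3 "object created" event for .mp4 uploads.
    record = event["Records"][0]
    bucket = record["s3"]["bucket"]["name"]
    key = record["s3"]["object"]["key"]

    # Start an asynchronous Rekognition face-detection job on the uploaded video.
    job = rekognition.start_face_detection(
        Video={"S3Object": {"Bucket": bucket, "Name": key}}
    )

    # Hand the job id (and the object location) over to the state machine,
    # which polls the job and drives the rest of the pipeline.
    sfn.start_execution(
        stateMachineArn=os.environ["STATE_MACHINE_ARN"],
        input=json.dumps({"JobId": job["JobId"], "Bucket": bucket, "Key": key}),
    )
    return {"statusCode": 200}
```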
- Create a `Python 3.9` runtime function from scratch. Choose the same IAM role as the above function.
- Copy and deploy `face-blur-lambdas/check-rekognition-job-status/lambda_function.py` as the function source code.
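As before, the repository file is the source of truth; as a rough sketch only, a status-check step of this kind typically wraps Rekognition's `GetFaceDetection` call and reports the job status back to the state machine. That the incoming event carries a `JobId` field is an assumption here.

```python
import boto3

rekognition = boto3.client("rekognition")


def lambda_handler(event, context):
    # The state machine passes the Rekognition job id along with the event.
    response = rekognition.get_face_detection(JobId=event["JobId"], MaxResults=1)

    # JobStatus is IN_PROGRESS, SUCCEEDED, or FAILED; the state machine's
    # Choice state loops (with a Wait) until the job is no longer in progress.
    event["JobStatus"] = response["JobStatus"]
    return event
```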
- Create a `Python 3.9` runtime function from scratch. Choose the same IAM role as the above function.
- Copy and deploy `face-blur-lambdas/get-rekognized-faces/lambda_function.py` as the function source code.
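Again for illustration only (deploy the repository file itself), a face-retrieval step of this kind usually paginates through `GetFaceDetection` and collects each face's timestamp and bounding box. The `JobId` input field and the `Faces` output shape are assumptions of this sketch.

```python
import boto3

rekognition = boto3.client("rekognition")


def lambda_handler(event, context):
    # Collect every detected face (timestamp + bounding box) by paginating
    # through the GetFaceDetection results.
    faces = []
    kwargs = {"JobId": event["JobId"], "MaxResults": 1000}
    while True:
        response = rekognition.get_face_detection(**kwargs)
        for detection in response["Faces"]:
            faces.append(
                {
                    "Timestamp": detection["Timestamp"],
                    "BoundingBox": detection["Face"]["BoundingBox"],
                }
            )
        next_token = response.get("NextToken")
        if not next_token:
            break
        kwargs["NextToken"] = next_token

    # Note: for long videos this list can get large; a real pipeline may
    # store it elsewhere instead of passing it through the state machine.
    event["Faces"] = faces
    return event
```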
- Create a Container image Lambda function based on the Docker image built from `face-blur-lambdas/blur-faces/Dockerfile`. Use an existing Docker image, or create an ECR repository and build the image as follows:
  - Open the Amazon ECR console at https://console.aws.amazon.com/ecr/repositories.
  - In the navigation pane, choose Repositories.
  - On the Repositories page, choose Create repository.
  - For Repository name, enter a unique name for your repository.
  - Choose Create repository.
  - Select the repository that you created and choose View push commands to view the steps to build and push an image to your new repository.
- Add the following environment variable to this function: `OUTPUT_BUCKET=<bucket-name>`, where `<bucket-name>` is another bucket to which the processed videos will be uploaded (create one if needed).
- This function is CPU- and RAM-intensive since it processes the video frame by frame. Make sure it has enough time and memory to finish (in the General configuration tab, increase the timeout to 5 minutes and the memory to 2048 MB). A rough sketch of the frame-by-frame blurring idea appears below.
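The real implementation is packaged in the Docker image above; the following is only a sketch of the frame-by-frame idea, and it also shows why the function needs generous memory and timeout settings. The use of OpenCV, the helper name `blur_faces`, and the shape of the `faces` list (timestamps in milliseconds plus relative bounding boxes, as gathered in the previous step) are assumptions.

```python
import cv2


def blur_faces(video_path, output_path, faces):
    """Blur each detected face box, frame by frame (illustrative sketch)."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    writer = cv2.VideoWriter(
        output_path, cv2.VideoWriter_fourcc(*"mp4v"), fps, (width, height)
    )

    frame_index = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        timestamp_ms = frame_index * 1000.0 / fps

        # Blur every face whose detection timestamp falls within this frame.
        for face in faces:
            if abs(face["Timestamp"] - timestamp_ms) > 1000.0 / fps:
                continue
            box = face["BoundingBox"]  # relative coordinates from Rekognition
            x, y = int(box["Left"] * width), int(box["Top"] * height)
            w, h = int(box["Width"] * width), int(box["Height"] * height)
            if w <= 0 or h <= 0:
                continue
            roi = frame[max(y, 0) : y + h, max(x, 0) : x + w]
            frame[max(y, 0) : y + h, max(x, 0) : x + w] = cv2.GaussianBlur(
                roi, (51, 51), 0
            )

        writer.write(frame)
        frame_index += 1

    cap.release()
    writer.release()
```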
- Open the State machines page of the Step Functions console.
- Choose Create state machine.
- Choose Write your workflow in code and edit the JSON in the Definition pane as follows:
  - Copy and paste the contents of `face-blur-lambdas/state_machine.json`.
  - Change `<check-rekognition-job-status ARN>`, `<get-rekognized-faces ARN>`, and `<blur-faces ARN>` to the ARNs of the corresponding Lambda functions.
- Click Next.
- Enter a unique name for your state machine.
- Under Logging, enable ALL logging.
- Choose Create state machine.
- Add the following environment variable to the face-detection function (the first function you created): `STATE_MACHINE_ARN=<state-machine-ARN>`
- Upload a short sample `.mp4` video to the "input" S3 bucket (you can download this video).
- Observe the Lambda invocation, as well as the state machine execution flow.
- Download the processed video from the "output" S3 bucket and watch the results.
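If you prefer the SDK over the console for the test upload, a boto3 one-liner triggers the same pipeline. The bucket name and file name below are placeholders.

```python
import boto3

# Uploading a short .mp4 to the input bucket fires the S3 trigger and
# kicks off the whole pipeline ("<input-bucket-name>" is a placeholder).
boto3.client("s3").upload_file("sample.mp4", "<input-bucket-name>", "sample.mp4")
```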
- In the navigation pane on the left side of the console, choose Tables.
- Choose your table from the table list.
- Choose the Exports and streams tab for your table.
- Under DynamoDB stream details choose Enable.
- Choose New and old images and click Enable stream.
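If you would rather script this step, the same stream configuration can be applied with boto3. The table name below is a placeholder.

```python
import boto3

dynamodb = boto3.client("dynamodb")

# NEW_AND_OLD_IMAGES matches the "New and old images" console option.
dynamodb.update_table(
    TableName="<dynamo-table-name>",
    StreamSpecification={
        "StreamEnabled": True,
        "StreamViewType": "NEW_AND_OLD_IMAGES",
    },
)
```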
- Open the IAM console at https://console.aws.amazon.com/iam/.
- In the navigation pane, choose Roles, Create role.
- On the Trusted entity type page, choose AWS service and the Lambda use case.
- On the Review page, enter a name for the role and choose Create role.
- Edit your IAM role with the following inline policy:
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "lambda:InvokeFunction",
      "Resource": "arn:aws:lambda:<region>:<accountID>:function:<lambda-func-name>*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": "arn:aws:logs:<region>:<accountID>:*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "dynamodb:DescribeStream",
        "dynamodb:GetRecords",
        "dynamodb:GetShardIterator",
        "dynamodb:ListStreams"
      ],
      "Resource": "arn:aws:dynamodb:<region>:<accountID>:table/<dynamo-table-name>/stream/*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "sns:Publish"
      ],
      "Resource": [
        "*"
      ]
    }
  ]
}
```
Change the following placeholders to the appropriate values: `<region>`, `<accountID>`, `<dynamo-table-name>`, `<lambda-func-name>`.
The policy has four statements that allow your role to do the following:
- Run a Lambda function. You create the function later in this tutorial.
- Access Amazon CloudWatch Logs. The Lambda function writes diagnostics to CloudWatch Logs at runtime.
- Read data from the DynamoDB stream.
- Publish messages to Amazon SNS.
- Open the Functions page of the Lambda console.
- Choose Create function.
- Under Basic information, do the following:
  - Enter a Function name.
  - For Runtime, confirm that Node.js 16.x is selected.
  - For Permissions, use the role you created.
- Choose Create function.
- Open your function, copy the content of `dynamodb_lambda_func/publishNewSong.js`, and paste it in the Code source. Change `<TOPIC-ARN>` to the ARN of the SNS topic you created in the previous exercise (a sketch of the equivalent logic appears after this list).
- Click the Deploy button.
- On the same page, click Add trigger and choose your DynamoDB table as the trigger source.
- Test your Lambda function by creating new items in the DynamoDB table and watch for new emails in your inbox.
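For reference, the stream-to-SNS pattern implemented by the Node.js function you just deployed can be sketched in Python as follows. This is only an illustration, not the repository code: the topic ARN is a placeholder, and publishing only `INSERT` events plus the message format are assumptions of the sketch.

```python
import json

import boto3

sns = boto3.client("sns")
TOPIC_ARN = "<TOPIC-ARN>"  # placeholder, same value you substitute in the .js file


def lambda_handler(event, context):
    # The DynamoDB stream delivers a batch of records; publish each new item.
    for record in event["Records"]:
        if record["eventName"] != "INSERT":
            continue
        new_image = record["dynamodb"]["NewImage"]
        sns.publish(
            TopicArn=TOPIC_ARN,
            Subject="New item added",
            Message=json.dumps(new_image),
        )
    return {"statusCode": 200}
```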