This demo, based on Jason Mayes' sample application, shows how to use a pre-made machine learning solution to recognize which pixels in an image belong to a human body, and even which part of the body (e.g. an arm, leg, or head) each pixel belongs to. Having per-pixel data like this is very useful: you could blur the background to get a depth-of-field effect, or blur just the face to preserve user privacy. The possibilities are endless.
For this demo we load a BodyPix model built on the MobileNet architecture, which has already been trained to recognize the various body parts.
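To illustrate what you can do with the per-pixel output: BodyPix's part segmentation returns one entry per pixel, with -1 for background and a part index (0–23) for pixels that belong to a body part. The sketch below is a hypothetical helper (`maskBackground` is not part of this demo's code) that uses such a part map to blank out everything except the person:

```typescript
// Hypothetical helper: given RGBA pixel data and a per-pixel part map
// (as produced by BodyPix part segmentation: -1 = background,
// 0–23 = body-part index), blank out all background pixels.
function maskBackground(
  pixels: Uint8ClampedArray, // RGBA image data, 4 bytes per pixel
  partIds: Int32Array        // one entry per pixel
): Uint8ClampedArray {
  const out = new Uint8ClampedArray(pixels);
  for (let i = 0; i < partIds.length; i++) {
    if (partIds[i] === -1) {
      // Background pixel: zero the RGB channels, keep alpha,
      // so the pixel renders as solid black.
      out[i * 4] = 0;
      out[i * 4 + 1] = 0;
      out[i * 4 + 2] = 0;
    }
  }
  return out;
}
```

The same loop could just as easily apply a blur kernel to background pixels, or test for specific part indices (e.g. the face) instead of -1, to implement the privacy-blur idea described above.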
This project was generated with [Angular CLI](https://github.com/angular/angular-cli) version 10.0.5.
Run `ng serve` for a dev server. Navigate to `http://localhost:4200/`. The app will automatically reload if you change any of the source files.
Run `ng build` to build the project. The build artifacts will be stored in the `dist/` directory. Use the `--prod` flag for a production build.
Run `ng test` to execute the unit tests via Karma.
Run `ng e2e` to execute the end-to-end tests via Protractor.
To get more help on the Angular CLI use `ng help` or check out the Angular CLI README.
For more information on TensorFlow.js, please visit https://www.tensorflow.org/js.