- Understanding how to use pre-trained models for tasks other than image classification.
- Ability to work with PoseNet and ml5.js (see the minimal sketch after this list).
- Ability to work with Image Segmentation models (UNet and BodyPix) and ml5.js.
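As a point of reference for the objectives above, a minimal PoseNet sketch might look like the following. This is only a sketch and assumes the ml5 0.x API (`ml5.poseNet` with a `'pose'` event); it draws the detected keypoints over the webcam image, the same pattern the linked examples below build on.

```javascript
// Minimal ml5.js PoseNet sketch: draw detected keypoints over the webcam image.
let video;
let poseNet;
let poses = [];

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.size(width, height);
  video.hide();

  // Load PoseNet and listen for new pose estimates
  poseNet = ml5.poseNet(video, () => console.log('PoseNet ready'));
  poseNet.on('pose', (results) => {
    poses = results;
  });
}

function draw() {
  image(video, 0, 0, width, height);

  // Each pose has 17 keypoints, each with a position and a confidence score
  for (const { pose } of poses) {
    for (const keypoint of pose.keypoints) {
      if (keypoint.score > 0.2) {
        fill(255, 0, 0);
        noStroke();
        ellipse(keypoint.position.x, keypoint.position.y, 10, 10);
      }
    }
  }
}
```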
- Week 3 Slides
- PoseNet Webcam Part Selection
- PoseNet Webcam Full Skeleton
- PoseNet Single Image -- This is broken at the moment
- UNet Image Segmentation
- BodyPix Image Segmentation
- Sidewalk Orchestra by Cristóbal Valenzuela
- Pose Music by Tero Parviainen
- Golan Levin’s Electronic Media Studio (Carnegie Mellon School of Art) students using ml5.js and p5.js:
- Nixel
- Chromsan
- Casher
- Shuann
- (all class projects in the Augmented Body Gallery using a variety of tools)
- TensorFlow Pose Estimation Use Cases:
- The Treachery of Sanctuary by Chris Milk
- Gait Analysis from runnersneed
- Explore the possibilities of physical interaction as the output of a machine learning system.
In each example, a p5.js sketch captures some input data and sends it to an Arduino. The Arduino sketch tells the microcontroller how to read that data and what to do with it. This type of communication is called asynchronous serial communication. (Fun fact: the Arduino can also capture data and send it to a p5.js sketch!) A rough sketch of the p5.js side of this pattern appears after the list of examples below.
- Webcam Image Classification using MobileNet to Turn LED On/Off
- Single Pose Detection using PoseNet to Fade LED
- Multiple Pose Detection using PoseNet to Turn LEDs On/Off
- Single Pose Detection using Multiple PoseNet Keypoints to Fade LEDs
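None of these examples is reproduced here, but the p5.js side of the pattern they share might be sketched roughly as follows. It assumes the ml5 0.x PoseNet API, that the p5.serialcontrol app is running, that `portName` (a placeholder) matches your Arduino's port, and that the Arduino sketch reads each incoming byte with `Serial.read()` and passes it to `analogWrite()` to fade an LED.

```javascript
// Hedged sketch of the p5.js side: map a PoseNet keypoint to a byte (0-255)
// and send it to the Arduino through p5.serialport / p5.serialcontrol.
let video;
let poseNet;
let serial;                               // p5.SerialPort instance
let noseX = 0;
const portName = '/dev/cu.usbmodem1411';  // placeholder -- use serial.list() to find yours

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.size(width, height);
  video.hide();

  // PoseNet fires a 'pose' event with an array of detected poses
  poseNet = ml5.poseNet(video, () => console.log('PoseNet ready'));
  poseNet.on('pose', (poses) => {
    if (poses.length > 0) {
      // keypoint 0 is the nose
      noseX = poses[0].pose.keypoints[0].position.x;
    }
  });

  // p5.serialport talks to the Arduino via the p5.serialcontrol app
  serial = new p5.SerialPort();
  serial.open(portName);
}

function draw() {
  image(video, 0, 0, width, height);

  // One byte per frame; the Arduino can pass it straight to analogWrite()
  const brightness = floor(constrain(map(noseX, 0, width, 0, 255), 0, 255));
  serial.write(brightness);
}
```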
- p5.js web editor
- Arduino IDE 1.8.9 app
- p5.serialcontrol app to enable serial communication between your p5.js sketch in the browser and your Arduino microcontroller. Download the latest version and save it in your Applications folder. If you use a Mac, download and install p5.serialcontrol-darwin-x64.zip.
- 1 USB Cable
- 1 Arduino Uno
- 1 Half-size Breadboard
- 3 LEDs
- 3 220 Ohm Resistors
- Jumper Wires
For the above examples, if nothing happens on the Arduino when you start the p5.js sketch, use this checklist to troubleshoot: Serial Communication Checklist
- Asynchronous Serial Communication: The Basics
- Lab: Serial Output from P5.js
- Servo Motor Control with an Arduino
- Tone Output Using An Arduino
- p5.Serialport Library
- Recent Updates to p5.Serial Library by Jiwon Shin
- PomPom Mirror by Danny Rozin
- Now You Are In the Conversation by Chelsea Chen Chen
- The Hand (Rock Paper Scissors) by Tong Wu and Nick Wallace
- Humans of AI by Philipp Schmitt
- Read Real-Time Human Pose Estimation in the Browser with TensorFlow.js by Dan Oved, with editing and illustrations by Irene Alvarado and Alexis Gallo.
- Read Mixing movement and machine by Maya Man
- Read Review of Deep Learning Algorithms for Image Semantic Segmentation by Arthur Ouaknine
- Read Humans of AI by Philipp Schmitt
- Explore the COCO Dataset. What surprises you about this dataset? How is it similar to or different from ImageNet? What questions do you have? Can you think of any ethical considerations around how this data was collected? Are there privacy considerations with the data?
- Work in groups of 2 (see assignment 3 wiki) to prototype a physical interaction as the output of a machine learning model, using any of the tools or techniques demonstrated in weeks 2 and 3. This can be a new idea or build on your week 2 assignment. Here are some questions to explore:
- How might you use confidence score data as a type of creative input? (One possible direction is sketched at the end of this page.)
- For pose detection, how might you work with multiple keypoints?
- If you're working with multiple poses, how might people work together or against each other to trigger events?
- What other creative outputs can you use? speakers? motors? what else?
- Document your exercise in a blog post and add a link to the post and your sketch on the Assignment 3 Wiki. In your blog post, include visual documentation such as a recorded screen capture / video of your training session and your sketch running in the browser. How did the readings above inform your idea and the development of the project? Include a response to the COCO dataset question prompts.
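One possible direction for the confidence-score question above, sketched under assumptions rather than as a required approach: ml5's PoseNet reports a score between 0 and 1 for every keypoint, and that value can drive an output directly, for example the pitch of a p5.sound oscillator.

```javascript
// Hypothetical example: treat the nose keypoint's confidence score as a creative
// input by mapping it to the pitch of a p5.sound oscillator.
let video;
let poseNet;
let osc;
let noseScore = 0;

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.size(width, height);
  video.hide();

  poseNet = ml5.poseNet(video, () => console.log('PoseNet ready'));
  poseNet.on('pose', (poses) => {
    if (poses.length > 0) {
      noseScore = poses[0].pose.keypoints[0].score; // keypoint 0 is the nose
    }
  });

  osc = new p5.Oscillator();
  osc.setType('sine');
  osc.amp(0.3);
  osc.start();
}

function mousePressed() {
  userStartAudio(); // browsers require a user gesture before audio can play
}

function draw() {
  image(video, 0, 0, width, height);

  // Low confidence = low pitch, high confidence = high pitch
  const freq = map(noseScore, 0, 1, 110, 880);
  osc.freq(freq, 0.1); // 0.1 s ramp to smooth the change
}
```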