Merge pull request #63 from ml5js/BodyPix-Optimization
bodyPix optimization
shiffman authored Nov 27, 2023
2 parents a28727b + 433b241 commit 72bd2a6
Showing 6 changed files with 319 additions and 206 deletions.
76 changes: 47 additions & 29 deletions documentation.md
@@ -4,71 +4,89 @@ This is a temporary API reference for the next generation ml5 library. The proje

---

## ml5.bodySegmentation

### Description

BodySegmentation divides an image input into the people and the background.

### Methods

#### ml5.bodySegmentation()

This method is used to initialize the bodySegmentation object.

```javascript
const bodySegmentation = ml5.bodySegmentation(?modelName, ?options, ?callback);
```
**Parameters:**
- **modelName**: OPTIONAL. A string specifying which model to use: "SelfieSegmentation" or "BodyPix".
- **options**: OPTIONAL. An object to change the default configuration of the model. See the example options:
```javascript
{
runtime: "mediapipe", // "mediapipe" or "tfjs"
modelType: "general", // "general" or "landscape"
maskType: "background", // "background", "body", or "parts" (used to change the type of segmentation mask output)
}
```
[More info on options for SelfieSegmentation with mediaPipe runtime](https://github.com/tensorflow/tfjs-models/tree/master/body-segmentation/src/selfie_segmentation_mediapipe#create-a-detector).
[More info on options for SelfieSegmentation with tfjs runtime](https://github.com/tensorflow/tfjs-models/tree/master/body-segmentation/src/selfie_segmentation_tfjs#create-a-detector).
- **callback(bodySegmentation, error)**: OPTIONAL. A function to run once the model has been loaded. Alternatively, call `ml5.bodySegmentation()` within the p5 `preload` function.
**Returns:**
The bodySegmentation object.
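Since all three arguments are optional, the library has to work out at runtime which argument is which. A minimal sketch of that kind of argument normalization (illustrative only, not ml5's actual internals; the `"SelfieSegmentation"` default here is an assumption for the example):

```javascript
// Illustrative normalization of optional leading arguments such as
// (?modelName, ?options, ?callback). Not ml5's real implementation.
function normalizeArgs(...args) {
  let modelName = "SelfieSegmentation"; // assumed default
  let options = {};
  let callback;
  for (const arg of args) {
    if (typeof arg === "string") modelName = arg;
    else if (typeof arg === "function") callback = arg;
    else if (arg && typeof arg === "object") options = arg;
  }
  return { modelName, options, callback };
}

console.log(normalizeArgs({ maskType: "parts" }).modelName);
// "SelfieSegmentation"
console.log(normalizeArgs("BodyPix", () => {}).modelName);
// "BodyPix"
```

This pattern is why `ml5.bodySegmentation(options)` and `ml5.bodySegmentation("BodyPix", callback)` can both work without explicit placeholders.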
#### bodySegmentation.detectStart()
This method repeatedly outputs segmentation masks for the given media through a callback function.
```javascript
bodySegmentation.detectStart(media, callback);
```
**Parameters:**
- **media**: An HTML or p5.js image, video, or canvas element to run the segmentation on.
- **callback(output, error)**: A function to handle the output of `bodySegmentation.detectStart()`. Likely a function to do something with the segmented image. See below for the output passed into the callback function:
```javascript
{
mask: {}, // a p5.Image object, can be directly passed into the p5 image() function
maskImageData: {}, // an ImageData object
}
```
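Because `maskImageData` is a plain ImageData-style object (`{ width, height, data }` with RGBA bytes), it can be inspected with ordinary JavaScript. A small illustrative helper — the assumption that background pixels are fully transparent (alpha 0) and person pixels opaque (alpha 255) should be verified against the actual mask output:

```javascript
// Estimate what fraction of the frame is covered by the person mask,
// assuming background pixels have alpha 0 and person pixels alpha > 0.
function personFraction(maskImageData) {
  const { width, height, data } = maskImageData;
  let opaque = 0;
  for (let i = 3; i < data.length; i += 4) {
    if (data[i] > 0) opaque++; // data[i] is the alpha channel of each pixel
  }
  return opaque / (width * height);
}

// Synthetic 2x2 mask: one opaque (person) pixel, three transparent ones
const fakeMask = {
  width: 2,
  height: 2,
  data: new Uint8ClampedArray([
    0, 0, 0, 255, // person
    0, 0, 0, 0,   // background
    0, 0, 0, 0,
    0, 0, 0, 0,
  ]),
};
console.log(personFraction(fakeMask)); // 0.25
```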
#### bodySegmentation.detectStop()
This method can be called after a call to `bodySegmentation.detectStart` to stop the repeating segmentation.
```javascript
bodySegmentation.detectStop();
```
#### bodySegmentation.detect()
This method asynchronously outputs a single segmentation mask for the given media when called.
```javascript
bodySegmentation.detect(media, ?callback);
```
**Parameters:**
- **media**: An HTML or p5.js image, video, or canvas element to run the segmentation on.
- **callback(output, error)**: OPTIONAL. A callback function to handle the output of the segmentation; see the output example above.
**Returns:**
A promise that resolves to the segmentation output.
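Because `detect()` both accepts an optional callback and returns a promise, either style works. A minimal stand-in (not ml5 itself) sketching the dual callback/promise pattern the signature above describes; the `(output, error)` argument order follows the parameter list documented here, and the `mask-of-` output is a fake result for illustration:

```javascript
// Stand-in for a detect()-style API: invokes the optional callback with
// (output, error) and always returns a promise resolving to the output.
function detectLike(media, callback) {
  const work = Promise.resolve({ mask: `mask-of-${media}` }); // fake result
  if (callback) {
    work.then(
      (output) => callback(output, undefined),
      (error) => callback(undefined, error)
    );
  }
  return work;
}

// Promise style
detectLike("frame").then((result) => console.log(result.mask)); // "mask-of-frame"

// Callback style
detectLike("frame", (result) => console.log(result.mask)); // "mask-of-frame"
```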
### Examples
TODO (link p5 web editor examples once uploaded)
@@ -124,9 +142,9 @@ const bodypose = ml5.bodypose(?modelName, ?options, ?callback);
}
```
[More info on options for MediaPipe BlazePose](https://github.com/tensorflow/tfjs-models/tree/master/pose-detection/src/blazepose_mediapipe) and for TFJS BlazePose [here](https://github.com/tensorflow/tfjs-models/tree/master/pose-detection/src/blazepose_tfjs#create-a-detector).
- **callback(bodypose, error)**: OPTIONAL. A function to run once the model has been loaded. Alternatively, call `ml5.bodypose()` within the p5 `preload` function.
**Returns:**
The bodypose object.
44 changes: 17 additions & 27 deletions examples/BodySegmentation-maskbackground/sketch.js
@@ -3,46 +3,36 @@
// This software is released under the MIT License.
// https://opensource.org/licenses/MIT

let bodyPix;
let video;
let segmentation;

let options = {
  flipHorizontal: true,
  maskType: "background",
};

function preload() {
bodyPix = ml5.bodySegmentation("SelfieSegmentation", options);
}

function setup() {
  createCanvas(640, 480);
  // Create the video
  video = createCapture(VIDEO);
  video.size(width, height);
  video.hide();

  bodyPix.detectStart(video, gotResults);
}

function draw() {
background(0, 0, 255);
if (segmentation) {
video.mask(segmentation);
image(video, 0, 0);
}
}
// callback function for body segmentation
function gotResults(result) {
segmentation = result.mask;
}
44 changes: 17 additions & 27 deletions examples/BodySegmentation-maskbodyparts/sketch.js
@@ -3,46 +3,36 @@
// This software is released under the MIT License.
// https://opensource.org/licenses/MIT

let bodyPix;
let video;
let segmentation;

let options = {
  flipHorizontal: true,
  maskType: "parts",
};

function preload() {
bodyPix = ml5.bodySegmentation("BodyPix", options);
}

function setup() {
  createCanvas(640, 480);
  // Create the video
  video = createCapture(VIDEO);
  video.size(width, height);
  video.hide();

  bodyPix.detectStart(video, gotResults);
}

function draw() {
background(255);
image(video, 0, 0);
if (segmentation) {
image(segmentation, 0, 0, width, height);
}
}
// callback function for body segmentation
function gotResults(result) {
segmentation = result.mask;
}
40 changes: 18 additions & 22 deletions examples/BodySegmentation-maskperson/sketch.js
@@ -8,41 +8,37 @@ ml5 Example
BodyPix
=== */

let bodyPix;
let video;
let segmentation;

let options = {
  flipHorizontal: true,
  maskType: "person",
};

function preload() {
bodyPix = ml5.bodySegmentation("SelfieSegmentation", options);
}

function setup() {
  createCanvas(640, 480);
  // Create the video
  video = createCapture(VIDEO);
  video.size(width, height);
  video.hide();

  bodyPix.detectStart(video, gotResults);
}

function draw() {
background(0, 255, 0);

if (segmentation) {
video.mask(segmentation);
image(video, 0, 0);
}
}
// callback function for body segmentation
function gotResults(result) {
segmentation = result.mask;
}
