Predict on batch #195
Comments
It wouldn't be difficult to add a predict-on-batch function. Currently, with a single image, it looks like this:

```python
result = bodypix_model.predict_single(image_array)
# simple mask
mask = result.get_mask(threshold=0.75)
# colored mask (separate colour for each body part)
colored_mask = result.get_colored_part_mask(mask)
```

For a batch it would then be something like:

```python
batch_result = bodypix_model.predict_batch(image_array_batch)
# simple mask
mask_batch = batch_result.get_mask_batch(threshold=0.75)
# colored mask (separate colour for each body part)
colored_mask_batch = batch_result.get_colored_part_mask_batch(mask_batch)
```

I also wonder whether there would be any noticeable speed improvement. What have you observed when using the JS version?
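As an aside on shapes: independent of whatever API shape a `predict_batch` ends up with, batched inference in TensorFlow conventionally takes a 4-D array of shape `(N, H, W, C)`, so a batch can be formed by stacking the per-image arrays. A minimal NumPy sketch (the image dimensions here are illustrative, not anything the library requires):

```python
import numpy as np

# Three illustrative RGB "images" of the same resolution (H, W, C).
images = [np.zeros((1080, 1920, 3), dtype=np.float32) for _ in range(3)]

# Stack along a new leading axis to get the (N, H, W, C) batch shape
# that TensorFlow models conventionally expect.
image_array_batch = np.stack(images, axis=0)
print(image_array_batch.shape)  # (3, 1080, 1920, 3)
```

Note that `np.stack` requires all images to share the same resolution; a mixed-resolution dataset would need resizing or padding first.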
Thank you for the reply. My current use case is measuring BodyPix's performance over a dataset of 10,000+ images, but given the high image resolution (1080p), it's taking ~1.3-1.5 sec/image on Colab's default CPU. Any improvement in inference time would be useful (without having to downsample). I have not tried the JS version yet, but it seems there is "a large performance difference" according to tensorflow/tfjs#2197.
Would you be interested in submitting a PR to add batch support?
Is there a way to perform batch prediction to leverage the GPU? I believe this functionality exists in the JS version, but I am not sure how to do it in this Python version.