Model server inference using Python Mediapipe #2768
Could you share more details? You mention Python in the title. Do you mean the Python MediaPipe package, or do you want to execute Python code in a MediaPipe graph in OVMS? I assume the flow you want to achieve would look like: [flow diagram from the original post not preserved]

Is that right?
@mzegla Thanks for your prompt response. I want to execute Python code along with the MediaPipe graph. I want to iterate through an OVMS_PY_TENSOR containing detected regions inside the MediaPipe graph, so I can run classification over those detected regions.
Well, Python execution is enabled via a separate node in the MediaPipe graph (there is a dedicated calculator for that). I suppose the easiest solution would be to have your whole processing done in Python, so you would have a single Python node that loads both your detection and classification models and runs the full pipeline itself. Note that you will likely need to extend the Docker image with layers containing the Python packages you need.

The other solution is to have multiple nodes: detection, extraction, and classification, where only extraction is done in Python and detection/classification are executed via CPP-based calculators. This approach also requires converter nodes between the nodes mentioned earlier. See the CLIP demo for reference.
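To make the single-node approach concrete, here is a minimal sketch of the detect-then-classify loop such a Python node would run. The `run_detection` and `run_classification` functions below are hypothetical stubs standing in for your real model inference (e.g. OpenVINO compiled-model calls); in an actual OVMS Python node this logic would live inside the handler class described in the OVMS Python calculator documentation, with inputs and outputs wrapped as OVMS Python tensors.

```python
import numpy as np

# Hypothetical stand-ins for real inference calls; replace these with your
# detection and classification models (e.g. openvino CompiledModel requests).
def run_detection(image: np.ndarray) -> np.ndarray:
    # Returns detected boxes as (x1, y1, x2, y2) pixel coordinates.
    return np.array([[0, 0, 4, 4], [2, 2, 8, 8]])

def run_classification(region: np.ndarray) -> str:
    # Dummy classifier: labels a crop by its mean intensity.
    return "bright" if region.mean() > 127 else "dark"

def detect_then_classify(image: np.ndarray) -> list[str]:
    """Single-node flow: detect regions, then classify each cropped region."""
    labels = []
    for x1, y1, x2, y2 in run_detection(image):
        crop = image[y1:y2, x1:x2]          # extract the detected region
        labels.append(run_classification(crop))
    return labels

# Toy input: a 10x10 image with a bright square in the middle.
image = np.zeros((10, 10), dtype=np.uint8)
image[2:8, 2:8] = 255
print(detect_then_classify(image))  # → ['dark', 'bright']
```

The key point is that the loop over a variable number of detections happens in plain Python inside one node, so the graph itself never has to iterate.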
How can I loop over a set of detection results to run a classifier over the detected regions using a MediaPipe graph? It would be helpful to see an example graph using the OpenVINO inference calculator.
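For reference, a graph hosting such a Python node might look roughly like the sketch below. This is an assumption-laden fragment: the calculator name, side packet, and options follow the OVMS Python calculator documentation as I understand it, and the handler path is a placeholder, so verify all of it against the docs for your OVMS version.

```
input_stream: "OVMS_PY_TENSOR:image"
output_stream: "OVMS_PY_TENSOR:labels"
node {
  name: "detect_classify_node"
  calculator: "PythonExecutorCalculator"
  input_side_packet: "PYTHON_NODE_RESOURCES:py"
  input_stream: "INPUT:image"
  output_stream: "OUTPUT:labels"
  node_options: {
    [type.googleapis.com/mediapipe.PythonExecutorCalculatorOptions]: {
      handler_path: "/ovms/workspace/model.py"  # placeholder path to your handler
    }
  }
}
```

With everything done in one Python node, no converter nodes or per-detection graph wiring are needed; the loop over detections lives in the handler script.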