Question: Is this repo right for my use case? #111
Replies: 1 comment
-
I can't say I know enough (or anything...) about DeepStream to be certain - however, it's not immediately clear to me that GBulb will fix the problem you're seeing. The problem that GBulb solves is that both asyncio and GLib each want to own the event loop; GBulb lets a single GLib main loop drive both.

If the issue is a clean shutdown, I'd be highly surprised if adding asyncio resolved the issue you're seeing - or, if it does, that the same result couldn't be achieved in a much easier way. Adding asyncio to the mix won't make shutdown any easier - if anything, it just opens up more ways that things could go wrong. What is needed in this case is to use GStreamer's own mechanisms - a bus watch and an EOS event - to wind the pipeline down.

If the issue is maintaining a connection to dbus over multiprocessing - that's a whole other problem. In this case, you're dealing with a process barrier, which is a whole other set of complications. Again, adding asyncio won't help here. asyncio would only help if you're able to recompose your background work into atomic, non-blocking asynchronous methods - which is usually only possible if your tasks are network or disk bound. If they're CPU or GPU bound, asyncio won't help.
-
Describe the bug
I'm just curious if this is the right tool for my project, here is my use case:
Overview
I have a Python application that is using PyGObject and the NVIDIA DeepStream 6.2 Python bindings. I currently have a working pipeline that makes N connections to RTSP sources and outputs the decoded frames as numpy arrays using an appsink 'new-sample' callback. I then take these numpy arrays and perform some post-processing, AI model inference, etc.
Problem
The issue that I am running into is when I call
MyPipeline.gst_loop.run()
the method blocks and I have no good way to properly shut down the pipeline. The only way that has worked for me is setting up a Python signal listener for SIGINT or SIGTERM and then running a graceful stop method to send the pipeline an eos_event; however, if I try to use my pipeline in a Process() as described below, that no longer works.

In order to do my other tasks, I put my pipeline into a Python mp.Process() and use an mp.Queue() to access the numpy arrays output by appsink; this allows me to run an inference model and perform some other work. However, when I stop my application it seems that I am unable to properly shut down the pipeline with an eos_event, as it is running in its own process.
Using the asyncio integration for GLib that this repo provides, would I be able to implement some event/message handling system to 'gracefully' shut down the pipeline? I am pretty new to PyGObject, so perhaps there is a more standard way to run and shut down pipelines?
Steps to reproduce
Relevant code snippets
Expected behavior
I want to be able to run other processing tasks while decoding frames from the RTSP sources. Then, when my processing is complete, I want to be able to shut down all parts of the code (inference engines and pipeline) in a proper manner, e.g. by sending an EOS event to the pipeline. With my current implementation, putting gst_loop.run() in a separate Process(), I lose access to the bus for sending messages.
Again, I'm new to GLib and async in general, so I was curious whether this repo could help with my use case.
Screenshots
No response
Environment
Logs
No response
Additional context
No response