I have a GStreamer pipeline whose topology is changed on occasion. What we do is:
gst_element_set_state(pipeline, GST_STATE_READY);
gst_element_unlink(node1, tee);
gst_element_link(node1, oldfilm);
gst_element_link(oldfilm, tee);
gst_element_set_state(pipeline, GST_STATE_PLAYING);
We assume the pipeline must be stopped while elements are re-connected. Problem: our app hangs. Typically, video stops streaming after the first few times we change the topology, and then the next call to gst_element_set_state(pipeline, GST_STATE_PLAYING) never returns. The app still responds to ^C, which of course kills it.
We conclude we are not doing this right. What is the right way to alter pipeline topology while the application is running?
Back in 2016 at the GStreamer Conference I heard a talk on this topic which seems quite relevant here.
Slides:
https://gstreamer.freedesktop.org/data/events/gstreamer-conference/2016/Jose%20A.%20Santos%20-%20How%20to%20work%20with%20dynamic%20pipelines%20using%20GStreamer.pdf
Talk:
https://gstconf.ubicast.tv/videos/how-to-work-dynamic-pipelines/
I hope this explains how to work with this type of problem.
Related
I am developing an Nvidia DeepStream inference application with multiple RTSP sources, where each individual source is constructed using the uridecodebin plugin. So far I have developed a pipeline with multiple source bins connected to a typical inference pipeline for our use case, something like this:
[source-bin-0]---[Pipeline as per Nvidia Deepstream Inference plugins]
where [source-bin-%d] is either [uridecodebin] or [rtspsrc--decodebin].
This is working totally fine!
I am looking to incorporate RTSP reconnection in case any of the RTSP sources (camera) is down for a while and comes up after some time.
In case of a source error, I set that particular uridecodebin's state to NULL and then to PLAYING again.
My observations after performing some test cases are:
When I use [rtspsrc--decodebin] as the source-bin, my reconnection logic of setting the state to NULL and then PLAYING works fine and I am able to reconnect to my RTSP source successfully. When I set the source-bin state to PLAYING, it returns GST_STATE_CHANGE_ASYNC and the source-bin provides frames to the downstream elements.
But in the case of [uridecodebin] as the source-bin, the same reconnection logic does not work. Here, after I set the source-bin state to PLAYING, it returns GST_STATE_CHANGE_NO_PREROLL and my overall pipeline gets stuck. It does not report a further source-disconnected error, but it is also unable to provide frames to the downstream elements.
The main difference I can see is that with uridecodebin, changing the state to PLAYING returns GST_STATE_CHANGE_NO_PREROLL and I am not able to reconnect, while with rtspsrc it returns GST_STATE_CHANGE_ASYNC and I am able to reconnect.
I am seeking help to successfully reconnect to my RTSP source when I am using uridecodebin as the source-bin.
Thank you in advance!!
To get all bus messages from my GStreamer pipelines, I am currently calling gst_bus_set_sync_handler (returning GST_BUS_DROP from my handler). This seems to work perfectly as far as I can tell, but the documentation states:
This function is usually only called by the creator of the bus.
Applications should handle messages asynchronously using the gst_bus
watch and poll functions.
Should I be worried? I assume that the "creator of the bus" is not the same as the creator of the pipeline (me), or is it?
There are a couple of considerations I'm aware of regarding the use of gst_bus_set_sync_handler.
Firstly, since it runs your code synchronously in the same thread that posted the message, you'll be blocking that thread from doing other work for as long as your callback runs. If you do a fair bit of work when handling a message, this could cause performance issues.
Secondly, you may not reliably be able to use gst_element_set_state in a synchronous message handler, because elements are not allowed to set their own state from their streaming thread. Whether or not this is a problem will depend on which streaming thread in the pipeline posted the message, something you normally don't have to worry about in asynchronous message handlers.
I would recommend using asynchronous messaging whenever possible, as it has fewer caveats. On the other hand, if gst_bus_set_sync_handler works for you, using it shouldn't be a problem.
Currently I am working on an application using GStreamer 1.0. The application opens a stream over RTSP, and everything works fine if no problem is detected on the stream. But when an ERROR or EOS message arrives and I try to call:
gst_element_set_state(pipeline, GST_STATE_NULL)
on the pipeline, the call blocks the thread and nothing happens.
Could anyone help me with this GStreamer issue?
As I understand it, you should not call gst_element_set_state() from any of GStreamer's own threads; it may cause a deadlock.
I am quite new to GStreamer and trying to get some metrics on an existing pipeline. The pipeline is set as 'appsrc queue mpegvideoparse avdec_mpeg2video deinterlace videobalance xvimagesink'.
xvimagesink only has a sink pad, and I am not sure where or how its output is connected, but I am interested in knowing when the actual video device/buffer displays the first I-frame and the video starts rolling.
The application sets the pipeline state to 'playing' quite early on, so listening for this event does not help.
Regards,
Check out GST_MESSAGE_STREAM_START and probes. However, I am not sure exactly what you want: at the GStreamer level you can only detect the moment a buffer is handled by some element, not when it is actually displayed.
xvimagesink has no src pad (output), only a sink pad (input).
You can read about preroll here: http://cgit.freedesktop.org/gstreamer/gstreamer/tree/docs/design/part-preroll.txt
Be sure to read GStreamer manual first:
http://gstreamer.freedesktop.org/data/doc/gstreamer/head/manual/html/index.html
On the application I'm working on, which uses GStreamer 0.10, we receive streaming audio data from a TCP socket (from another process running locally).
We can issue a "seek" command to the process, which works: we start receiving data corresponding to the new position we specify.
So far so good.
However, there is a delay between the time we issue the seek and the time we start playing the data at the correct position.
I'm pretty sure this is because we buffer data.
So I would like to flush the data buffered in our pipeline when we issue the seek command.
However, I didn't manage to do this: I used gst_pad_push_event (gst_event_new_flush_start()) on the pad, then gst_event_new_flush_stop() shortly after; both return TRUE.
But the music stops and never starts again.
Using export GST_DEBUG=2 I can see the following warning:
gdpdepay gstgdpdepay.c:429:gst_gdp_depay_chain:<gdpdepay-1> pushing depayloaded buffer returned -2
Since the other process continues to push data while the flush may be "on" for a short amount of time, that might explain this warning. But I would expect the other process to be able to keep pushing data, and our pipeline to continue reading data from the socket and processing it, after we send a flush_stop event.
Googling this issue, I found some suggestions like changing the pipeline state, but that didn't help either.
Any help very welcome!