RTSP stream reconnection when using the uridecodebin source plugin - GStreamer

I am developing an Nvidia DeepStream inference application with multiple RTSP sources, where each individual source is constructed using the uridecodebin plugin. Up to this point, I have developed a pipeline with multiple source bins connected to a typical inference pipeline for our use case, something like this:
[source-bin-0]---[Pipeline as per Nvidia DeepStream inference plugins]
[source-bin-%d]
where each [source-bin-%d] is either [uridecodebin] or [rtspsrc--decodebin].
This is working totally fine!
I am looking to incorporate RTSP reconnection for the case where any of the RTSP sources (cameras) goes down for a while and comes back up after some time.
On a source error, I set that particular uridecodebin's state to NULL and then to PLAYING again.
My observations after performing some test cases are:
When I use [rtspsrc--decodebin] as the source bin, my reconnection logic of setting the state to NULL and then PLAYING works fine, and I am able to reconnect to my RTSP source successfully. Here, when I set the source bin's state to PLAYING, it returns GST_STATE_CHANGE_ASYNC and the source bin is able to provide frames to the downstream elements.
But in the case of [uridecodebin] as the source bin, the same reconnection logic does not work. The observation here is that after I set the source bin's state to PLAYING, it returns GST_STATE_CHANGE_NO_PREROLL and my overall pipeline gets stuck: it gives me no further source-disconnected error, but it is also unable to provide frames to the downstream elements.
The main difference I can see is that with uridecodebin the state change to PLAYING returns GST_STATE_CHANGE_NO_PREROLL and I am not able to reconnect, while with rtspsrc it returns GST_STATE_CHANGE_ASYNC and I am able to reconnect.
I am seeking help to successfully reconnect to my RTSP source when using uridecodebin as the source bin.
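For reference, here is a minimal sketch of the reconnection logic described above (GStreamer 1.x C API; find_source_bin_for() is a hypothetical helper that maps a bus message back to its source bin, and error handling is elided):

    #include <gst/gst.h>

    /* Bus watch: on an ERROR message, reset only the source bin it came from. */
    static gboolean
    bus_call (GstBus *bus, GstMessage *msg, gpointer user_data)
    {
      if (GST_MESSAGE_TYPE (msg) == GST_MESSAGE_ERROR) {
        /* find_source_bin_for() is hypothetical; it looks up which
         * source bin the erroring element belongs to. */
        GstElement *src_bin = find_source_bin_for (GST_MESSAGE_SRC (msg));

        if (src_bin != NULL) {
          GstStateChangeReturn ret;

          gst_element_set_state (src_bin, GST_STATE_NULL);
          ret = gst_element_set_state (src_bin, GST_STATE_PLAYING);

          /* rtspsrc--decodebin: ret == GST_STATE_CHANGE_ASYNC (reconnects).
           * uridecodebin:       ret == GST_STATE_CHANGE_NO_PREROLL (stalls). */
          g_print ("source-bin restart returned %d\n", ret);
        }
      }
      return TRUE;
    }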
Thank you in advance!!

Related

Custom Media Foundation sink never receives samples

I have my own MediaSink in Windows Media Foundation with one stream. In the OnClockStart method, I instruct the stream to queue (i) MEStreamStarted and (ii) MEStreamSinkRequestSample on itself. For implementing the queue, I use IMFMediaEventQueue, and using the mtrace tool I can also see that someone dequeues the events.
The problem is that ProcessSample of my stream is never actually called. This also has the effect that no further samples are requested, because that is done after processing a sample, as in https://github.com/Microsoft/Windows-classic-samples/tree/master/Samples/DX11VideoRenderer.
Is the described approach the right way to implement the sink? If not, what would be the right way? If so, where could I search for the problem?
Some background info: the sink is an RTSP sink based on live555. Since the latter is also sink-driven, I thought it would be a good idea to queue a MEStreamSinkRequestSample whenever live555 requests more data from me. This works as intended.
However, this solution has the problem that new samples are only requested as long as a client is connected to live555. If I now add a tee before the sink, e.g. to show a local preview, the system gets out of control, because the tee accumulates samples on its output going to my sink which are never fetched. I then started playing around with discardable samples (cf. https://social.msdn.microsoft.com/Forums/sharepoint/en-US/5065a7cd-3c63-43e8-8f70-be777c89b38e/mixing-rate-sink-and-rateless-sink-on-a-tee-node?forum=mediafoundationdevelopment), but the problem is either that the stream does not start, that the queues grow, or that the frame rate of the faster sink is artificially limited, depending on which side is discardable.
Therefore, the next idea was to rewrite my sink such that it always requests a new sample once it has processed the current one and puts all samples into a ring buffer for live555, so that whenever clients are connected they can retrieve their data from there, and otherwise the samples are just discarded. This does not work at all: now my sink does not get anything, even without the tee.
The observation is: if I just request a lot of samples (as in the original approach), at some point I get data. However, if I request only one (I also tried moderately larger numbers, up to 5), ProcessSample is just not called, so no subsequent requests can be generated. I send MEStreamStarted once the clock is started or restarted, exactly as described at https://msdn.microsoft.com/en-us/library/windows/desktop/ms701626, and after that I request the first sample. In my understanding, MEStreamSinkRequestSample should not get lost, so I should get something even on a single request. Is that a misunderstanding? Should I keep requesting until I get something?

How to start a pipeline with pending inputs

I've been working for a few days now on a pipeline with the following configuration:
- 2 live input streams (RTMP)
- going into one compositor
- outputting to another RTMP stream
With some converter, queue, etc. in between, it works pretty well.
But my problem is that one of the RTMP inputs may not be available at start time, so the pipeline can't start, crashing with the following errors:
- error: Failed to read any data from stream
- error: Internal data flow error
What would be the proper way to make this work, that is, to start the stream with the first input even if the second one is not ready yet?
I tried several ways: dynamically changing the pipeline, playing with pad probes, listening to error messages, .. but so far I can't make it work.
Thanks,
PL
As you didn't post any code, I guess you are OK with a conceptual answer..
There are a few options on rtspsrc with which you can control when it will fail - regarding a timeout being exceeded or the maximum number of retries being exceeded. Those are (not sure if this is all of them; a tuning sketch follows the list):
- retry - this may not be very useful if it only deals with ports ..
- timeout - if you want to keep trying over UDP for a longer time, you can enlarge this one
- tcp-timeout - this is important, try to play with it - make it much larger
- connection-speed - maybe it will help to make this one smaller
- protocols - in my experience, TCP was much better for bad streams
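As an illustration, setting a few of these on an rtspsrc instance could look like this (the values are made up; note that timeout and tcp-timeout are expressed in microseconds):

    /* Hypothetical tuning values for an existing rtspsrc element. */
    g_object_set (G_OBJECT (rtspsrc),
        "protocols", 0x4,                             /* GST_RTSP_LOWER_TRANS_TCP */
        "timeout", (guint64) 10 * G_USEC_PER_SEC,     /* UDP timeout: 10 s */
        "tcp-timeout", (guint64) 60 * G_USEC_PER_SEC, /* raise well above the 20 s default */
        "retry", 50,                                  /* port-allocation retries */
        NULL);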
The actual concept (I am not an expert, take it as another view on the problem):
You can create two bins - one for each stream. I would use rtspsrc and decodebin and block the output pads of decodebin until I have all the pads; then I would connect them to the compositor (see the sketch at the end of this answer).
When you receive any error (it should be during the phase of waiting for all the pads), you would put the bin to the NULL state (I mean the GStreamer state called NULL) and then to PLAYING/PAUSED again..
Well, you have to use the pad probes properly (no idea what that is :D) .. can you post your code regarding this?
Maybe try to discard the error message so it does not disintegrate the pipe..
Also, do you have only video inputs?
I guess not; you can use audiomixer for the audio.. Also, the compositor has a nice OpenGL version which is much faster, called glvideomixer.. but it may introduce other OpenGL-related problems.. if you have Intel GPUs then you are probably safe.
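Here is a rough sketch of the blocking-probe idea described above (GStreamer 1.x API; remember_blocked_pad() is a made-up helper and the actual compositor linking is omitted):

    #include <gst/gst.h>

    /* Called once the pad is actually blocked; returning GST_PAD_PROBE_OK
     * keeps the block in place until gst_pad_remove_probe() is called. */
    static GstPadProbeReturn
    block_probe_cb (GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
    {
      return GST_PAD_PROBE_OK;
    }

    /* Connected to decodebin's "pad-added" signal. */
    static void
    pad_added_cb (GstElement *decodebin, GstPad *new_pad, gpointer user_data)
    {
      gulong probe_id = gst_pad_add_probe (new_pad,
          GST_PAD_PROBE_TYPE_BLOCK_DOWNSTREAM, block_probe_cb, NULL, NULL);

      /* remember_blocked_pad() is hypothetical: once all expected pads
       * have shown up, link each one to a compositor request pad and
       * release it with gst_pad_remove_probe (pad, probe_id). */
      remember_blocked_pad (new_pad, probe_id);
    }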

GStreamer: get video playing event

I am quite new to GStreamer and trying to get some metrics on an existing pipeline. The pipeline is set up as 'appsrc ! queue ! mpegvideoparse ! avdec_mpeg2video ! deinterlace ! videobalance ! xvimagesink'.
xvimagesink only has a sink pad, and I am not sure where and how its output is connected, but I am interested in knowing when the actual video device/buffer displays the first I-frame and the video starts rolling.
The application sets the pipeline state to PLAYING quite early on, so listening for this event does not help.
Regards,
Check out GST_MESSAGE_STREAM_START and probes (see the sketch below). However, I am not sure what exactly you want: at the GStreamer level you can only detect the moment a buffer is handled by some element, not when it is actually displayed.
xvimagesink has no src pad (output), only a sink pad (input).
You can read about preroll here: http://cgit.freedesktop.org/gstreamer/gstreamer/tree/docs/design/part-preroll.txt
Be sure to read the GStreamer manual first:
http://gstreamer.freedesktop.org/data/doc/gstreamer/head/manual/html/index.html
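If "first buffer reaching the sink" is close enough for your metrics, one option is a buffer probe on xvimagesink's sink pad. A minimal sketch (1.x API; the element name "xvimagesink0" is assumed to be the auto-generated one in your pipeline):

    /* One-shot probe: fires when the first buffer reaches the video sink. */
    static GstPadProbeReturn
    first_buffer_cb (GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
    {
      g_print ("first buffer arrived at the video sink\n");
      return GST_PAD_PROBE_REMOVE;  /* drop the probe after the first hit */
    }

    /* After building the pipeline: */
    GstElement *sink = gst_bin_get_by_name (GST_BIN (pipeline), "xvimagesink0");
    GstPad *sinkpad = gst_element_get_static_pad (sink, "sink");

    gst_pad_add_probe (sinkpad, GST_PAD_PROBE_TYPE_BUFFER,
        first_buffer_cb, NULL, NULL);

    gst_object_unref (sinkpad);
    gst_object_unref (sink);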

GStreamer 0.10: flush pipeline buffers

In the application I'm working on, which uses GStreamer 0.10, we receive streaming audio data from a TCP socket (from another process running locally).
We can issue a "seek" command to that process, which works: we start receiving data corresponding to the new position we specify.
So far so good.
However, there is a delay between the time we issue the seek and the time we start playing the data at the correct position.
I'm pretty sure this is because we buffer data.
So I would like to flush the data buffered in our pipeline when we issue the seek command.
However, I didn't manage to do this: I pushed gst_event_new_flush_start() on the pad with gst_pad_push_event(), then gst_event_new_flush_stop() shortly after; both calls return TRUE.
However, the music stops and never starts again.
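For reference, the flush sequence boils down to this (0.10 API; pad stands for the pad we push the events on):

    /* GStreamer 0.10: try to flush everything buffered downstream of pad. */
    gboolean ok;

    ok = gst_pad_push_event (pad, gst_event_new_flush_start ());
    /* ... */
    ok = gst_pad_push_event (pad, gst_event_new_flush_stop ());
    /* both pushes return TRUE, yet playback never resumes */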
Using export GST_DEBUG=2 I can see the following warning:
gdpdepay gstgdpdepay.c:429:gst_gdp_depay_chain:<gdpdepay-1> pushing depayloaded buffer returned -2
As the other process continues to push data while the flush might be "on" for a short amount of time, that might explain this warning. But I would expect the other process to be able to keep pushing data, and our pipeline to be able to keep reading data from the socket and processing it, after the flush_stop event has been sent.
Googling this issue, I found some suggestions like changing the pipeline state, but that didn't help either.
Any help very welcome!

Multiple applications using GStreamer

I want to write (but first I want to understand how to do it) applications - more than one - based on the GStreamer framework that would share the same hardware resource at the same time.
For example: there is hardware with HW acceleration for video decoding. I want to start two applications simultaneously that are able to decode different video streams using HW acceleration. Of course, I assume that the HW is able to handle such requests and that there is an appropriate driver (but no GStreamer element) for doing this, but how do I write a GStreamer element that would support such resource sharing between separate processes?
I would appreciate any links, suggestions where to start...
You have h/w that can be accessed concurrently, hence two GStreamer elements accessing it concurrently should work! There is nothing GStreamer-specific here.
Say you wanted to write a decoding element: it is like any other decoding element, provided you access your hardware correctly. Your drivers should take care of the concurrent access.
The starting place is the GStreamer Plugin Writer's Guide.
So you need a single process that controls the HW decoder and decodes streams from multiple sources.
I would recommend building a daemon, possibly itself based on GStreamer. The gdppay and gdpdepay elements provide quite a simple way to pass data through sockets to the daemon and back. The daemon would wait for connections on a specified port (or Unix socket) and open a virtual decoder for each connection. The video decoder elements in the separate applications would internally connect to the daemon and get the decoded video back (see the sketch below).
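A minimal sketch of what the daemon side might look like, built with gst_parse_launch (the ports are arbitrary and hwvideodec is a placeholder for the real hardware decoder element):

    #include <gst/gst.h>

    int
    main (int argc, char *argv[])
    {
      GError *err = NULL;
      GstElement *pipeline;

      gst_init (&argc, &argv);

      /* Accept GDP-framed encoded video from a client, decode it on the
       * (hypothetical) hwvideodec element, and serve the decoded frames
       * back, GDP-framed again so the client can gdpdepay them. */
      pipeline = gst_parse_launch (
          "tcpserversrc host=127.0.0.1 port=5000 ! gdpdepay ! "
          "hwvideodec ! gdppay ! "
          "tcpserversink host=127.0.0.1 port=5001",
          &err);
      if (pipeline == NULL) {
        g_printerr ("parse error: %s\n", err->message);
        return 1;
      }

      gst_element_set_state (pipeline, GST_STATE_PLAYING);
      g_main_loop_run (g_main_loop_new (NULL, FALSE));
      return 0;
    }

The client side would be the mirror image: the application pushes encoded data through gdppay ! tcpclientsink towards the daemon, and reads the decoded stream back via tcpclientsrc ! gdpdepay.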