Divide a GStreamer source into several branches - gstreamer

I need to read an RTSP stream and then divide it into several branches.
Something like what the tee plugin does, but tee copies the frame to every branch.
I need frame 1 to go to one branch of the pipeline, frame 2 to go to another branch, frame 3 back to the first branch, and so on... Is this possible in GStreamer?
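One possible approach (an untested sketch, not a ready-made element): tee the stream into both branches and attach a buffer probe to each branch that drops every other frame in opposite phase, so branch A effectively gets frames 0, 2, 4, ... and branch B gets frames 1, 3, 5, ... The source, decoder and fakesink placeholders below are only illustrative:

#include <gst/gst.h>

// Per-branch state: keep buffers where (count % 2) == keep_phase.
struct Phase { guint64 count; guint keep_phase; };

static GstPadProbeReturn
alternate_probe (GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
{
  Phase *p = static_cast<Phase *> (user_data);
  gboolean keep = (p->count++ % 2) == p->keep_phase;
  return keep ? GST_PAD_PROBE_OK : GST_PAD_PROBE_DROP;  // DROP means this branch never sees the frame
}

int
main (int argc, char **argv)
{
  gst_init (&argc, &argv);

  GstElement *pipe = gst_parse_launch (
      "rtspsrc location=rtsp://example/stream ! rtph264depay ! avdec_h264 ! tee name=t "
      "t. ! queue name=qa ! fakesink "    // replace fakesink with the real branch A
      "t. ! queue name=qb ! fakesink",    // replace fakesink with the real branch B
      NULL);

  static Phase phase_a = { 0, 0 }, phase_b = { 0, 1 };
  GstPad *pa = gst_element_get_static_pad (gst_bin_get_by_name (GST_BIN (pipe), "qa"), "sink");
  GstPad *pb = gst_element_get_static_pad (gst_bin_get_by_name (GST_BIN (pipe), "qb"), "sink");
  gst_pad_add_probe (pa, GST_PAD_PROBE_TYPE_BUFFER, alternate_probe, &phase_a, NULL);
  gst_pad_add_probe (pb, GST_PAD_PROBE_TYPE_BUFFER, alternate_probe, &phase_b, NULL);

  gst_element_set_state (pipe, GST_STATE_PLAYING);
  g_main_loop_run (g_main_loop_new (NULL, FALSE));
  return 0;
}

Note that tee does not deep-copy frame data anyway - it only refs the same buffer for each branch - so the probes just decide which branch actually processes a given frame. output-selector (switching its active-pad property per buffer) would be another way to route each frame to exactly one branch.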

Related

Why is the delay in my GStreamer pipeline dependent on the blocksize of audiotestsrc?

I have a GStreamer pipeline I'm instantiating on two time-synchronized computers (running GStreamer 1.4.5) as follows:
On the data-generating computer:
gst-launch-1.0 audiotestsrc samplesperbuffer=<samps_per_buf> ! alawenc ! inserttimecode consumer=0 ! udpsink port=5002 host=<ip address of data-consuming computer>
On the data-consuming computer:
gst-launch-1.0 udpsrc caps="audio/x-alaw,rate=(int)44100,channels=1" port=5002 ! inserttimecode consumer=1 ! alawdec ! alsasink
inserttimecode is an instance of GstInsertTimeCode, a custom plugin subclassing GstBaseTransform, that either
consumer=0: copies the incoming data, appends 20 bytes (a custom marker, the current time in nanoseconds, and a 1-up sequence number) to the packet, and sends the data downstream
OR
consumer=1: Strips out the 20 bytes and then sends the rest of the data downstream. Reports jumps in sequence number and also reports latency using the consumed 20 bytes and the current clock time.
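For reference, a minimal hypothetical sketch of what the consumer=0 path of such a GstBaseTransform could look like - this is not the asker's actual code. transform_size() tells the base class the output buffer is 20 bytes larger than the input, and transform() copies the payload and appends marker + time + sequence number:

#include <gst/gst.h>
#include <gst/base/gstbasetransform.h>
#include <string.h>

// Registered as klass->transform_size in class_init: output = input + 20 bytes.
static gboolean
insert_time_code_transform_size (GstBaseTransform *trans, GstPadDirection direction,
    GstCaps *caps, gsize size, GstCaps *othercaps, gsize *othersize)
{
  *othersize = (direction == GST_PAD_SINK) ? size + 20 : size - 20;
  return TRUE;
}

// Registered as klass->transform: copy the payload, then append the 20-byte trailer.
static GstFlowReturn
insert_time_code_transform (GstBaseTransform *trans, GstBuffer *inbuf, GstBuffer *outbuf)
{
  static guint64 seqnum = 0;            // illustration only; real code would keep this in the instance
  GstMapInfo in, out;

  gst_buffer_map (inbuf, &in, GST_MAP_READ);
  gst_buffer_map (outbuf, &out, GST_MAP_WRITE);

  memcpy (out.data, in.data, in.size);  // pass the audio through unchanged

  guint8 *p = out.data + in.size;       // trailer: 4-byte marker, 8-byte time (ns), 8-byte sequence number
  GST_WRITE_UINT32_BE (p, 0xC0DEC0DE);  // marker value made up for the example
  GST_WRITE_UINT64_BE (p + 4, gst_clock_get_time (GST_ELEMENT_CLOCK (trans)));
  GST_WRITE_UINT64_BE (p + 12, seqnum++);

  gst_buffer_unmap (outbuf, &out);
  gst_buffer_unmap (inbuf, &in);
  return GST_FLOW_OK;
}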
I would have expected inserttimecode to add the timestamp, pass it to udpsink, then udpsrc would receive it and pass it on directly to inserttimecode. In my mind, there should be very little delaying this transfer: just network delay plus a small processing delay. However, when I run this code, my latency numbers are larger than expected. I have found empirically that the unexpected offset grows over time but levels out (after just a handful of timestamps) at a value that can be calculated as
offset_seconds = samps_per_buf / sample_rate
In other words, the larger I set samps_per_buf in the audiotestsrc call, the larger the computed latency. Why would this be?
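To put numbers on it (using audiotestsrc's default 44100 Hz rate): samplesperbuffer=4410 gives offset_seconds = 4410 / 44100 = 0.1 s, i.e. exactly one buffer duration, and doubling samplesperbuffer doubles the offset.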
On the receiving side, is it back pressure from alsasink?
On the sending side, is inserttimecode clocked to send out packets after each block is sent from audiotestsrc?
SIDE NOTE: I suppose I'm not super surprised by this. In the parlance of signal processing and filtering, I would call this "filter delay". I am used to seeing a delay induced by a filter (often by half the length of the filter) but if I just use that paradigm, I think the delay is half as large as I would expect.
I want to understand the origin of offset_seconds.
It seems like it could be a function of the clocking of audiotestsrc. I'm still trying to understand how that works (see Different scheduling modes). I assume the pads of GstInsertTimeCode are operating in push mode (meaning it waits for the upstream elements to push data to it). I am unsure whether this default changes if my transform is not "in-place" (it isn't, since I'm adding extra data to the stream). I'm not sure this has anything to do with it, since I would expect the offset to be constant instead of ramping up to a steady-state value. But if it is, how do I change a GstBaseTransform object to go to pull mode? Or would setting pull mode on udpsink do the trick? Does the sync property of udpsink have anything to do with it?
I've also looked at the possibility that this is the latency reported to the elements during setup. However, I have found (using GST_DEBUG=6 on the command line) that this latency does not vary with samps_per_buf and is always 0.2 seconds. (For more information see Clocking.)
Since the value is ramping up, it feels like buffering is happening somewhere in the pipeline, but I cannot figure out where.
Why is the delay in my GStreamer pipeline dependent on the blocksize of audiotestsrc?
Are there certain debug outputs I could search for to shed light on the delay?
Where would I look to understand more about delays induced by elements in the pipeline?
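For what it's worth, a small sketch of how to print what the elements themselves answer to a LATENCY query (this only shows the latency the elements report, not extra queueing on top of it):

#include <gst/gst.h>

// Print the cumulative latency the elements report for a pipeline in PAUSED/PLAYING.
static void
print_reported_latency (GstElement *pipeline)
{
  GstQuery *q = gst_query_new_latency ();
  if (gst_element_query (pipeline, q)) {
    gboolean live;
    GstClockTime min_latency, max_latency;
    gst_query_parse_latency (q, &live, &min_latency, &max_latency);
    g_print ("live=%d min=%" GST_TIME_FORMAT " max=%" GST_TIME_FORMAT "\n",
        live, GST_TIME_ARGS (min_latency), GST_TIME_ARGS (max_latency));
  }
  gst_query_unref (q);
}

Grepping the GST_DEBUG output for "latency" (those messages typically come from the bin and basesink categories) is probably the quickest way to see what each element reports during setup.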

GStreamer: access video before an event

I have software that performs some video analysis as soon as an event (alarm) happens.
Since I don't have enough space on my embedded board, I should start recording the video only when an alarm happens.
The algorithm works on a video stored offline (it is not a real-time algorithm, so the video has to be stored; it doesn't suffice to attach to the video stream).
At present I'm able to attach to the video and store it as soon as I detect the alarm condition.
However, I would like to analyze the data from the 10 seconds before the event happens.
Is it possible to pre-record up to 10 seconds as a FIFO queue, without storing the whole stream on disk?
I found something similar to my requirements here:
https://developer.ridgerun.com/wiki/index.php/GStreamer_pre-record_element#Video_pre-recording_example
but I would like to know if there is some way to achieve the same result without using the RidgeRun tool.
Best regards
Giovanni
I think I mixed up my ideas, and both of them seem to be similar.
What I suggest is the following:
Have an element that behaves like a ring buffer, through which you can stream backwards in time. A good example to try out might be the queue element. Have a look at time-shift buffering.
Then store the contents to a file on alarm, and use another pipeline that reads from it. For example, use tee or output-selector:
                          | -> ring-buffer
src -> output-selector -> |
                          | -> (on alarm) -> ring-buffer + live-src -> file-sink
From your question, I understand that your src might be a live camera, and hence doing this can be tricky. You might have to implement your own plugin as the RidgeRun team did; otherwise this solution is more of a hack than a clean design. Sadly there aren't many references for such a setup, so you may have to try it out.
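To make that idea a bit more concrete, here is a rough, untested sketch of the ring-buffer branch without the RidgeRun element: a leaky queue holds the last ~10 seconds while its src pad is blocked, and the alarm handler simply unblocks it so the buffered data plus the live stream flow into the file branch. The source, encoder and file names are only illustrative:

#include <gst/gst.h>

static GstPad *ring_src;
static gulong block_id;

// Keeps the pad blocked until gst_pad_remove_probe() is called.
static GstPadProbeReturn
block_cb (GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
{
  return GST_PAD_PROBE_OK;
}

// Call this from the alarm handler: the ~10 s held in the queue plus the
// live stream start flowing into the muxer/filesink branch.
static void
on_alarm (void)
{
  gst_pad_remove_probe (ring_src, block_id);
}

int
main (int argc, char **argv)
{
  gst_init (&argc, &argv);

  // One tee branch feeds the live/analysis side; the other buffers the last
  // ~10 s in a leaky queue (leaky=downstream drops the oldest data when full).
  GstElement *pipe = gst_parse_launch (
      "v4l2src ! videoconvert ! x264enc tune=zerolatency ! h264parse ! tee name=t "
      "t. ! queue ! fakesink "
      "t. ! queue name=ring leaky=downstream max-size-time=10000000000 "
      "   max-size-buffers=0 max-size-bytes=0 ! mp4mux ! filesink location=alarm.mp4",
      NULL);

  GstElement *ring = gst_bin_get_by_name (GST_BIN (pipe), "ring");
  ring_src = gst_element_get_static_pad (ring, "src");

  // Block the queue output; while blocked, the queue keeps only the newest ~10 s.
  block_id = gst_pad_add_probe (ring_src, GST_PAD_PROBE_TYPE_BLOCK_DOWNSTREAM,
      block_cb, NULL, NULL);

  gst_element_set_state (pipe, GST_STATE_PLAYING);
  g_main_loop_run (g_main_loop_new (NULL, FALSE));  // trigger on_alarm() from your detection code
  return 0;
}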

Custom Media Foundation sink never receives samples

I have my own MediaSink in Windows Media Foundation with one stream. In the OnClockStart method, I instruct the stream to queue (i) MEStreamStarted and (ii) MEStreamSinkRequestSample on itself. For implementing the queue, I use an IMFMediaEventQueue, and using the mftrace tool, I can also see that someone dequeues the event.
The problem is that ProcessSample of my stream is actually never called. This also has the effect that no further samples are requested, because this is done after processing a sample like in https://github.com/Microsoft/Windows-classic-samples/tree/master/Samples/DX11VideoRenderer.
Is the described approach the right way to implement the sink? If not, what would be the right way? If so, where could I search for the problem?
Some background info: the sink is an RTSP sink based on live555. Since the latter is also sink-driven, I thought it would be a good idea to queue a MEStreamSinkRequestSample whenever live555 requests more data from me. This is working as intended.
However, the solution has the problem that new samples are only requested as long as a client is connected to live555. If I now add a tee before the sink, e.g. to show a local preview, the system gets out of control, because the tee accumulates samples on the output connected to my sink which are never fetched. I then started playing around with discardable samples (cf. https://social.msdn.microsoft.com/Forums/sharepoint/en-US/5065a7cd-3c63-43e8-8f70-be777c89b38e/mixing-rate-sink-and-rateless-sink-on-a-tee-node?forum=mediafoundationdevelopment), but the problem is either that the stream does not start, queues grow, or the frame rate of the faster sink is artificially limited, depending on which side is discardable.
Therefore, the next idea was to rewrite my sink such that it always requests a new sample when it has processed the current one and puts all samples in a ring buffer for live555, so that whenever clients are connected they can retrieve their data from there, and otherwise the samples are just discarded. This does not work at all. Now my sink does not get anything, even without the tee.
The observation is: if I just request a lot of samples (as in the original approach), at some point I get data. However, if I request only one (I also tried moderately larger numbers up to 5), ProcessSample is just not called, so no subsequent requests can be generated. I send MEStreamStarted once the clock is started or restarted, exactly as described on https://msdn.microsoft.com/en-us/library/windows/desktop/ms701626, and after that I request the first sample. In my understanding, MEStreamSinkRequestSample should not get lost, so I should get something even on a single request. Is that a misunderstanding? Should I keep requesting until I get something?
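For comparison, the request loop used by the DX11VideoRenderer sample roughly looks like the sketch below (class, member and method names here are placeholders, not your actual code): the stream sink asks for one sample when the clock starts and asks again after each ProcessSample call.

#include <mfidl.h>
#include <mfapi.h>
#include <cguid.h>

// Minimal illustration only; a real stream sink implements IMFStreamSink,
// IMFMediaEventGenerator, IMFMediaTypeHandler, proper locking and ref counting.
struct MyStreamSink
{
    IMFMediaEventQueue *m_pEventQueue = nullptr;   // created earlier with MFCreateEventQueue()

    // Called when the presentation clock starts (or restarts).
    HRESULT OnClockStart()
    {
        HRESULT hr = m_pEventQueue->QueueEventParamVar(
            MEStreamSinkStarted, GUID_NULL, S_OK, nullptr);
        if (SUCCEEDED(hr))                          // ask for the first sample
            hr = m_pEventQueue->QueueEventParamVar(
                MEStreamSinkRequestSample, GUID_NULL, S_OK, nullptr);
        return hr;
    }

    // IMFStreamSink::ProcessSample: consume the sample, then request the next one.
    HRESULT ProcessSample(IMFSample *pSample)
    {
        // ... push pSample into the ring buffer for live555, or drop it if no client ...
        return m_pEventQueue->QueueEventParamVar(
            MEStreamSinkRequestSample, GUID_NULL, S_OK, nullptr);
    }
};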

One bus for multiple pipelines - is it possible?

Is it possible to link N pipelines to one bus?
I have N "source pipelines" and one "sink pipeline", every time one source-pipeline finishes to transmit file to the sink-pipeline, the next pipeline needs to transmit other file also to that sink-pipeline, by setting its state to "playing".
So my question is how to manage the bus to handle N pipelines, is it possible?
if not, is there one entity that could do that?
The code is written in c++, gstreamer 1.0
Thanks.
You can have several distinct bins in one pipeline. Then those share the bus.
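A sketch of that suggestion (untested; the file names and fakesink placeholders are illustrative): each "source pipeline" becomes a bin inside one top-level pipeline, and a single bus watch then sees messages from all of them.

#include <gst/gst.h>

static gboolean
on_message (GstBus *bus, GstMessage *msg, gpointer user_data)
{
  // Messages from every bin in the pipeline arrive here.
  g_print ("%s from %s\n", GST_MESSAGE_TYPE_NAME (msg),
      GST_OBJECT_NAME (GST_MESSAGE_SRC (msg)));
  return TRUE;   // keep the watch installed
}

int
main (int argc, char **argv)
{
  gst_init (&argc, &argv);

  GstElement *pipeline = gst_pipeline_new ("top");

  // Each former "source pipeline" becomes a bin; start them one after the
  // other by setting the individual bin (or the whole pipeline) state.
  GstElement *src1 = gst_parse_bin_from_description (
      "filesrc location=file1.dat ! fakesink", FALSE, NULL);
  GstElement *src2 = gst_parse_bin_from_description (
      "filesrc location=file2.dat ! fakesink", FALSE, NULL);
  gst_bin_add_many (GST_BIN (pipeline), src1, src2, NULL);

  // One bus for everything inside the pipeline.
  GstBus *bus = gst_pipeline_get_bus (GST_PIPELINE (pipeline));
  gst_bus_add_watch (bus, on_message, NULL);
  gst_object_unref (bus);

  gst_element_set_state (pipeline, GST_STATE_PLAYING);
  g_main_loop_run (g_main_loop_new (NULL, FALSE));
  return 0;
}

Keep in mind that the pipeline aggregates EOS from all the sinks it contains, so per-file completion may be easier to track from the individual bins or elements than from a pipeline-level EOS.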

How to start a pipeline with pending inputs

I've been working for a few days now on a pipeline with the following configuration:
- 2 live input streams (RTMP)
- going into one compositor
- outputting to another RTMP stream
With some converters, queues, etc. in between, it works pretty well.
But my problem is that one of the RTMP inputs may not be available at start time, so the pipeline can't start, failing with the following errors:
- error: Failed to read any data from stream
- error: Internal data flow error
What would be the proper way to make this work, that is, to start the stream with the first input, even if the second one is not ready yet?
I tried several ways: dynamically changing the pipeline, playing with pad probes, listening to error messages... but so far I can't make it work.
Thanks,
PL
As you didn't post any code, I guess you are OK with a conceptual answer.
There are a few options on rtspsrc with which you can control when it fails, regarding timeouts exceeded or the number of retries exceeding the maximum. Those are (not sure if this is all of them; a sketch of setting them from code follows this list):
retry - this may not be very useful, since it only deals with ports..
timeout - if you want to keep trying over UDP for a longer time, you can enlarge this one
tcp-timeout - this is important, try to play with it - make it much larger
connection-speed - maybe it will help to make this one smaller
protocols - in my experience, TCP was much better for bad streams
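For example, setting a couple of these from code might look like this (values are only illustrative; tcp-timeout is in microseconds):

// assuming 'src' points to the rtspsrc element
g_object_set (src,
    "tcp-timeout", (guint64) 60 * G_USEC_PER_SEC,   // much larger than the default
    NULL);
gst_util_set_object_arg (G_OBJECT (src), "protocols", "tcp");  // force TCP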
The actual concept (I am not an expert, take it as another view of the problem):
You can create two bins - one for each stream. I would use rtspsrc and decodebin and block the output pads of decodebin until I have all the pads, then I would connect them to the compositor.
When you receive any error (it should be during the phase of waiting for all the pads), you would put that bin into the NULL state (I mean the GStreamer state called NULL) and then to PLAYING/PAUSED again..
Well, you have to use the pad probes properly (no idea what that is :D).. can you post your code regarding this?
Maybe try to discard the error message so that it does not tear down the whole pipeline..
Also, do you have only video inputs?
I guess not - you can use audiomixer for audio.. Also, the compositor has a nice OpenGL version which is much faster, called glvideomixer, but it may introduce other OpenGL-related problems.. if you have Intel GPUs then you are probably safe.
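For what it's worth, a rough, untested sketch of the "block the decodebin pads until they are ready, then link them to the compositor" idea (function and variable names are made up); on error you would set the offending bin to NULL and retry as described above:

#include <gst/gst.h>

static GstElement *compositor;   // the shared video mixer in the main pipeline

// Blocking probe: data is held on the new pad until the probe is removed.
static GstPadProbeReturn
block_cb (GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
{
  return GST_PAD_PROBE_OK;
}

// "pad-added" handler connected to each input bin's decodebin.
static void
on_pad_added (GstElement *decodebin, GstPad *src_pad, gpointer user_data)
{
  // A real implementation would check the caps here and send audio pads
  // to an audiomixer instead of the compositor.
  gulong id = gst_pad_add_probe (src_pad, GST_PAD_PROBE_TYPE_BLOCK_DOWNSTREAM,
      block_cb, NULL, NULL);

  // Request a sink pad on the compositor, link, then let the data flow.
  GstPad *sink_pad = gst_element_get_request_pad (compositor, "sink_%u");
  if (gst_pad_link (src_pad, sink_pad) == GST_PAD_LINK_OK)
    gst_pad_remove_probe (src_pad, id);
  gst_object_unref (sink_pad);
}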