Getting notified of an appsink's negotiated caps in GStreamer

Is there a way to get the negotiated caps on an appsink element? Basically I want gst_pad_get_current_caps(), but instead of returning NULL when the caps are not negotiated yet, it should block and wait for negotiation. Alternatively, an event would also work.

Ah, as usual, as soon as I post the question I find the answer. The function I'm looking for is gst_pad_set_event_function().
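For reference, a minimal sketch of that approach, assuming an appsink whose sink pad we watch. One caveat: gst_pad_set_event_function() replaces the pad's current event function, so the sketch saves the old one and chains up to it so appsink's own event handling keeps working.

#include <gst/gst.h>

static GstPadEventFunction saved_event_func;

static gboolean
on_sink_event (GstPad *pad, GstObject *parent, GstEvent *event)
{
  if (GST_EVENT_TYPE (event) == GST_EVENT_CAPS) {
    GstCaps *caps;
    gchar *str;

    /* The CAPS event carries the caps being negotiated on this pad. */
    gst_event_parse_caps (event, &caps);
    str = gst_caps_to_string (caps);
    g_print ("negotiated caps: %s\n", str);
    g_free (str);
  }
  /* Chain up so the element's original event handling still runs. */
  return saved_event_func (pad, parent, event);
}

static void
watch_negotiated_caps (GstElement *appsink)
{
  GstPad *pad = gst_element_get_static_pad (appsink, "sink");

  saved_event_func = GST_PAD_EVENTFUNC (pad);
  gst_pad_set_event_function (pad, on_sink_event);
  gst_object_unref (pad);
}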

Related

Make GStreamer pipeline drop erroneous buffers

My pipeline splits in the middle to be sent over an unreliable connection. This results in some buffers having bit errors that break the pipeline if I do not account for them. To solve this, I have an appsink that parses buffers for their critical information (timestamps, duration, data, and data size), serializes them, and then sends them over the unreliable channel with a CRC. If the receiving pipeline reads a buffer from the unreliable channel and detects a bit error with the CRC, the buffer is dropped. Most decoders are able to recover fine from a dropped buffer, aside from some temporary visual artifacts.
Is there a GStreamer plugin that does this automatically? I looked into the GDPPay and GDPDepay plugins, which appeared to meet my needs due to their serialization of buffers and inclusion of CRCs for the header and payload; however, the plugin assumes that the data is being sent over a reliable channel (why this assumption alongside the inclusion of CRCs, I do not know).
I am tempted to take the time to write a plugin, or make a pull request against the GDP plugins, that just drops bad buffers instead of halting the pipeline with a GST_FLOW_ERROR.
Any suggestions would be greatly appreciated. Ideally it would also be tolerant to either pipeline crashing/restarting. (The plugin also expects the caps information to be the first buffer sent, which in my case I do not need to send, as I have a fixed purpose and can hard-code both ends to know what to expect. This is only a problem if the receiver restarts while the sender is already sending data: the receiver will never get the data because it is waiting for the caps that the sender already sent.)
When faced with a similar issue (but for GstEvents), I used a GstProbe. You'll probably need to install it for GST_PAD_PROBE_TYPE_BUFFER and return GST_PAD_PROBE_DROP for the buffers that don't satisfy your conditions. It is easier than writing a plugin, and it is definitely easier to modify (the probe is created and handled from application code, so changing the dropping logic is simple). Caveat: I haven't done it for buffers, but it should be doable.
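Something like this, for the record; a minimal sketch where check_crc() is a placeholder for your real validation, not an existing API:

#include <gst/gst.h>

/* Placeholder: substitute your real CRC validation here. */
static gboolean
check_crc (const guint8 *data, gsize size)
{
  return TRUE;
}

static GstPadProbeReturn
drop_bad_buffers (GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
{
  GstBuffer *buf = GST_PAD_PROBE_INFO_BUFFER (info);
  GstMapInfo map;
  gboolean ok = TRUE;

  if (gst_buffer_map (buf, &map, GST_MAP_READ)) {
    ok = check_crc (map.data, map.size);
    gst_buffer_unmap (buf, &map);
  }
  /* Dropping the buffer here lets the pipeline keep running instead
   * of stopping with GST_FLOW_ERROR. */
  return ok ? GST_PAD_PROBE_OK : GST_PAD_PROBE_DROP;
}

/* Install on whatever pad carries the deserialized buffers. */
static void
install_drop_probe (GstPad *pad)
{
  gst_pad_add_probe (pad, GST_PAD_PROBE_TYPE_BUFFER,
      drop_bad_buffers, NULL, NULL);
}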
Let me know if it worked!

GStreamer H264 RTP

I am using GStreamer 1.0 to capture and display a video broadcast by an MGW ACE encoder (or by VLC itself), using RTP with H264.
I have read that the sender's SPS and PPS information is needed in order to decode.
Both pieces of information are carried in the sprop-parameter-sets parameter.
But if I can't get that information, is there any way I can decode and display without adding that parameter?
My pipeline is the following:
gst-launch-1.0 -vvv udpsrc port=9001 caps="application/x-rtp, media=(string)video" ! rtph264depay ! decodebin ! autovideosink
I have verified that between two different hosts, one sending and the other receiving through GStreamer, there is no problem; I can send and receive without issues.
But when I try to receive video from an MGW ACE encoder or from VLC itself, I cannot display it.
Some RTP streaming scenarios repeat SPS/PPS periodically in-band before each IDR frame. However, I believe they do so for convenience in those particular cases. If I remember correctly, RTP defines SPS/PPS transmission to occur out of band, via the SDP information.
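If you control the sender, one workaround is to have rtph264pay re-send SPS/PPS in-band at a regular interval (config-interval), so the receiver no longer depends on sprop-parameter-sets; with a third-party encoder like the MGW ACE you would have to look for an equivalent "repeat SPS/PPS" option instead. A minimal sketch using a test source:

#include <gst/gst.h>

int
main (int argc, char *argv[])
{
  GError *err = NULL;
  GstElement *sender;

  gst_init (&argc, &argv);

  /* config-interval=1 makes rtph264pay insert SPS/PPS into the RTP
   * stream every second, so a receiver joining late still gets them. */
  sender = gst_parse_launch (
      "videotestsrc ! x264enc ! rtph264pay config-interval=1 pt=96 "
      "! udpsink host=127.0.0.1 port=9001", &err);
  if (sender == NULL) {
    g_printerr ("parse error: %s\n", err->message);
    return 1;
  }

  gst_element_set_state (sender, GST_STATE_PLAYING);
  g_main_loop_run (g_main_loop_new (NULL, FALSE));
  return 0;
}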

How to make rtpjitterbuffer work on a stream without timestamps?

I am sending an H.264 bytestream over RTP using gstreamer.
# sender
gst-launch-1.0 filesrc location=my_stream.h264 ! h264parse disable-passthrough=true ! rtph264pay config-interval=10 pt=96 ! udpsink host=localhost port=5004
Then I receive the frames, decode them, and display them in another GStreamer instance.
# receiver
gst-launch-1.0 udpsrc port=5004 ! application/x-rtp,payload=96,media="video",encoding-name="H264",clock-rate="90000" ! rtph264depay ! h264parse ! decodebin ! xvimagesink
This works as is, but I want to try adding an rtpjitterbuffer in order to perfectly smooth out playback.
# receiver
gst-launch-1.0 udpsrc port=5004 ! application/x-rtp,payload=96,media="video",encoding-name="H264",clock-rate="90000" ! rtpjitterbuffer ! rtph264depay ! h264parse ! decodebin ! xvimagesink
However, as soon as I do, the receiver only displays a single frame and freezes.
If I replace the .h264 file with an MP4 file, the playback works great.
I assume that my h264 stream does not have the required timestamps to enable the jitter buffer to function.
I made slight progress by adding identity datarate=1000000. This allows the jitterbuffer to play, but it screws with my framerate, because P-frames carry less data than I-frames. Clearly the identity element adds timestamps in the right places, just with the wrong values.
Is it possible to automatically generate timestamps on the sender by specifying the "framerate" caps correctly somewhere? So far my attempts have not worked.
You've partially answered the problem already:
If I replace the .h264 file with an MP4 file, the playback works great.
I assume that my h264 stream does not have the required timestamps to enable the jitter buffer to function.
Your sender pipeline has no negotiated frame rate because you're using a raw H264 stream, while you should really be using a container format (e.g., MP4) which carries this information. Without timestamps, udpsink cannot synchronise against the clock to throttle the stream, so the sender spits out packets as fast as the pipeline can process them. It's not a live sink.
However, adding an rtpjitterbuffer makes your receiver act as a live source. It freezes because it's trying its best to cope with the barrage of packets with malformed timestamps. RTP doesn't transmit "missing" timestamps, to the best of my knowledge, so all packets will probably carry the same timestamp. Thus it probably reconstructs the first frame and drops the rest as duplicates.
I must agree with user1998586 in the sense that it ought to be better for the pipeline to crash with a good error message in this case rather than trying its best.
Is it possible to automatically generate timestamps on the sender by specifying the "framerate" caps correctly somewhere? So far my attempts have not worked.
No. You should really use a container.
In theory, however, an AU-aligned raw H264 stream could be timestamped by just knowing the frame rate, but there are no GStreamer elements (that I know of) that do this, and just specifying caps won't do it.
I had the same problem, and the best solution I found was to add timestamps to the stream on the sender side, by adding do-timestamp=1 to the source.
Without timestamps I couldn't get rtpjitterbuffer to pass more than one frame, no matter what options I gave it.
(The case I was dealing with was streaming from raspivid via fdsrc; I presume filesrc behaves similarly.)
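For reference, a sketch of that sender setup, assuming raw H264 is piped in on stdin (e.g. from raspivid); filesrc takes do-timestamp the same way, since the property comes from GstBaseSrc:

#include <gst/gst.h>

int
main (int argc, char *argv[])
{
  GError *err = NULL;
  GstElement *sender;

  gst_init (&argc, &argv);

  /* do-timestamp=true stamps each buffer with the clock's running time
   * as it is captured, giving rtpjitterbuffer something to work with. */
  sender = gst_parse_launch (
      "fdsrc fd=0 do-timestamp=true ! h264parse "
      "! rtph264pay config-interval=10 pt=96 "
      "! udpsink host=localhost port=5004", &err);
  if (sender == NULL) {
    g_printerr ("parse error: %s\n", err->message);
    return 1;
  }

  gst_element_set_state (sender, GST_STATE_PLAYING);
  g_main_loop_run (g_main_loop_new (NULL, FALSE));
  return 0;
}

Run it as, e.g., raspivid -t 0 -o - | ./sender.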
It does kinda suck that GStreamer so easily sends streams that GStreamer itself (and other tools) can't process correctly: if not having timestamps is valid, then rtpjitterbuffer should cope with it; if not having timestamps is invalid, then rtph264pay should refuse to send without timestamps. I guess it was never intended as a user interface...
You should try setting the rtpjitterbuffer mode to a value other than the default one:
mode : Control the buffering algorithm in use
flags: readable, writable
Enum "RTPJitterBufferMode" Default: 1, "slave"
(0): none - Only use RTP timestamps
(1): slave - Slave receiver to sender clock
(2): buffer - Do low/high watermark buffering
(4): synced - Synchronized sender and receiver clocks
Like that:
... ! rtpjitterbuffer mode=0 ! ...

How to start a pipeline with pending inputs

I've been working for a few days now on a pipeline with the following configuration:
- 2 live input streams (RTMP)
- going into one compositor
- outputting to another RTMP stream
With some converters, queues, etc. in between, it works pretty well.
But my problem is that one of the RTMP inputs may not be available at start time, so the pipeline can't start, crashing with the following errors:
- error: Failed to read any data from stream
- error: Internal data flow error
What would be the proper way to make this work, that is, to start the stream with the first input even if the second one is not ready yet?
I tried several approaches: dynamically changing the pipeline, playing with pad probes, listening to error messages... but so far I can't make it work.
Thanks,
PL
As you didn't post any code, I guess you are OK with a conceptual answer.
There are a few options on rtspsrc with which you can control when it fails, regarding timeouts exceeded or the maximum number of retries exceeded. Those are (not sure if this is all of them):
retry - this may not be very useful if it only deals with ports
timeout - if you want to keep trying over UDP for a longer time, you can enlarge this one
tcp-timeout - this one is important; try playing with it and make it much larger
connection-speed - maybe making this one smaller will help
protocols - in my experience, TCP was much better for bad streams
The actual concept (I am not an expert, take it as another view of the problem):
You can create two bins, one for each stream. I would use rtspsrc and decodebin, and block the output pads of decodebin until all the pads are available; then I would connect them to the compositor, as sketched below.
When you receive an error (it should happen during the phase of waiting for all the pads), you would put the bin into the NULL state (I mean the GStreamer state called NULL) and then back to PLAYING/PAUSED.
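A rough sketch of the pad-blocking part (names are illustrative; the "all pads collected" bookkeeping and error handling are left out, and for brevity this links and unblocks each pad immediately instead of waiting for the others):

#include <gst/gst.h>

static GstPadProbeReturn
block_probe (GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
{
  /* Returning OK keeps the probe installed, so the pad stays blocked
   * until gst_pad_remove_probe() is called. */
  return GST_PAD_PROBE_OK;
}

/* Connect with:
 * g_signal_connect (decodebin, "pad-added", G_CALLBACK (on_pad_added), compositor);
 */
static void
on_pad_added (GstElement *decodebin, GstPad *pad, gpointer user_data)
{
  GstElement *compositor = GST_ELEMENT (user_data);
  gulong probe_id;
  GstPad *sinkpad;

  /* Block the new pad so no data flows before everything is linked. */
  probe_id = gst_pad_add_probe (pad, GST_PAD_PROBE_TYPE_BLOCK_DOWNSTREAM,
      block_probe, NULL, NULL);

  /* Once all expected pads have appeared, link each one to the
   * compositor and unblock it (use gst_element_get_request_pad on
   * GStreamer < 1.20). */
  sinkpad = gst_element_request_pad_simple (compositor, "sink_%u");
  gst_pad_link (pad, sinkpad);
  gst_object_unref (sinkpad);
  gst_pad_remove_probe (pad, probe_id);
}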
Well, you have to use the pad probes properly (no idea what that is :D).. can you post your code regarding this?
Maybe try to discard the error message so that it does not disintegrate the pipe.
Also, do you have only video inputs?
I guess not; you can use audiomixer for the audio.. also, the compositor has a nice OpenGL version which is much faster, called glvideomixer.. but it may introduce other OpenGL-related problems.. if you have an Intel GPU you are probably safe.

GStreamer: get video playing event

I am quite new to GStreamer and am trying to get some metrics on an existing pipeline. The pipeline is set up as 'appsrc ! queue ! mpegvideoparse ! avdec_mpeg2video ! deinterlace ! videobalance ! xvimagesink'.
xvimagesink only has a sink pad, and I am not sure where and how its output is connected, but I am interested in knowing when the actual video device/buffer displays the first I-frame and the video starts rolling.
The application sets the pipeline state to 'playing' quite early on, so listening for this event does not help.
Regards,
Check out GST_MESSAGE_STREAM_START and probes. However, I am not sure exactly what you want: at the GStreamer level you can only detect the moment a buffer is handled by some element, not when it is actually displayed.
xvimagesink has no src pad (output), only a sink pad (input).
You can read about preroll here: http://cgit.freedesktop.org/gstreamer/gstreamer/tree/docs/design/part-preroll.txt
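A sketch of the probe idea: a one-shot buffer probe on xvimagesink's sink pad that fires when the first buffer reaches the sink. That is close to, but not exactly, the moment of display, per the caveat above:

#include <gst/gst.h>

static GstPadProbeReturn
first_buffer_probe (GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
{
  g_print ("first buffer reached xvimagesink\n");
  /* One-shot: removing the probe stops further callbacks. */
  return GST_PAD_PROBE_REMOVE;
}

static void
watch_first_buffer (GstElement *xvimagesink)
{
  GstPad *sinkpad = gst_element_get_static_pad (xvimagesink, "sink");

  gst_pad_add_probe (sinkpad, GST_PAD_PROBE_TYPE_BUFFER,
      first_buffer_probe, NULL, NULL);
  gst_object_unref (sinkpad);
}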
Be sure to read GStreamer manual first:
http://gstreamer.freedesktop.org/data/doc/gstreamer/head/manual/html/index.html