GStreamer: Pausing/resuming video in RTP streams

I'm constructing a GStreamer pipeline that receives two RTP streams from a networked source:
ILBC Audio stream + corresponding RTCP stream
H263 Video stream + corresponding RTCP stream
Everything is put into one GStreamer pipeline so that it will use the RTCP from both streams to synchronize audio/video. So far I've come up with this (using gst-launch for prototyping):
gst-launch -vvv gstrtpbin name=rtpbin
udpsrc caps="application/x-rtp,media=(string)video,clock-rate=(int)90000,encoding-name=(string)H263-2000" port=40000 ! rtpbin.recv_rtp_sink_0
rtpbin. ! rtph263pdepay ! ffdec_h263 ! xvimagesink
udpsrc port=40001 ! rtpbin.recv_rtcp_sink_0
rtpbin.send_rtcp_src_0 ! udpsink port=40002 sync=false async=false
udpsrc caps="application/x-rtp,media=(string)audio,clock-rate=(int)8000,encoding-name=(string)PCMU,encoding-params=(string)1,octet-align=(string)1" port=60000 rtpbin.recv_rtp_sink_1
rtpbin. ! rtppcmudepay ! autoaudiosink
udpsrc port=60001 ! rtpbin.recv_rtcp_sink_1
rtpbin.send_rtcp_src_1 ! udpsink port=60002 sync=false async=false
This pipeline works well if the networked source starts out sending both video and audio. If the video stream is paused later on, GStreamer will still play back the audio and will even start playing back the video again when the networked source resumes the video stream.
My problem, however, is that if the networked source starts out with only an audio stream (video might be added later on), the pipeline seems to pause/freeze until the video stream starts as well.
Since video is optional in my application (and can be added/removed at will by the user), is there any way I can hook up, for instance, a 'videotestsrc' that provides some kind of fallback video data to keep the pipeline running when there is no networked video data?
I've tried experimenting with 'videotestsrc' and an element called 'videomixer', but I think that mixer still requires both streams to be alive. Any feedback is greatly appreciated!

Here is a simple function for pause/resume by swapping bins. In the following example I provide the logic to change the destination bin dynamically, on the fly. This does not completely stop the pipeline, which I believe is what you are looking for. The same logic could be used for source bins: there you would remove your network source bin and the related decoder/demuxer bins and add videotestsrc bins instead.
private static void dynamic_bin_replacement(Pipeline pipe, Element src_bin, Element dst_bin_new, Element dst_bin_old) {
    pipe.pause();
    // detach and drop the old destination bin
    src_bin.unlink(dst_bin_old);
    pipe.remove(dst_bin_old);
    // add the new destination bin and bring it to the pipeline's state
    pipe.add(dst_bin_new);
    dst_bin_new.syncStateWithParent();
    src_bin.link(dst_bin_new);
    pipe.ready();
    pipe.play();
}
The other approach you may want to try is pad blocking ("pad-locking"). Please take a look at the following posts:
http://cgit.freedesktop.org/gstreamer/gstreamer/tree/docs/design/part-block.txt
and
http://web.archiveorange.com/archive/v/8yxpz7FmOlGqxVYtkPb4
and
Adding and removing audio sources to/from GStreamer pipeline on-the-go
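The gist of the pad-blocking approach is a blocking pad probe: block the source's src pad, unlink/relink elements while no data is flowing, then release the probe. A minimal sketch with the GStreamer 1.x API (the element and callback names are placeholders, not from your pipeline):

#include <gst/gst.h>

/* Called once the pad is blocked; it is now safe to unlink/relink downstream. */
static GstPadProbeReturn
on_pad_blocked (GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
{
  /* ... swap out the downstream bins here while no data flows ... */
  return GST_PAD_PROBE_REMOVE;   /* removing the probe unblocks the pad */
}

static void
block_and_relink (GstElement *some_src_element)
{
  GstPad *srcpad = gst_element_get_static_pad (some_src_element, "src");
  gst_pad_add_probe (srcpad, GST_PAD_PROBE_TYPE_BLOCK_DOWNSTREAM,
      on_pad_blocked, NULL, NULL);
  gst_object_unref (srcpad);
}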
UPDATE
Try the output-selector and input-selector elements, as they seem to be a better alternative. I have found them to be the most reliable and have had great luck with them. I use fakesink or fakesrc, respectively, as the other end of the selector.
The valve element is another alternative; I found that it doesn't even need fakesink or fakesrc. It is also extremely reliable.
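Rough sketches of both approaches in C (the selector/valve variables and the target pad are placeholders for whatever you add to your own pipeline):

/* input-selector: switch its "active-pad" property between the request
 * sink pad fed by the network source and the one fed by videotestsrc/fakesrc. */
static void
select_input (GstElement *selector, GstPad *target_sink_pad)
{
  g_object_set (selector, "active-pad", target_sink_pad, NULL);
}

/* valve: no fallback source needed at all; drop=TRUE discards buffers,
 * drop=FALSE lets them flow again. */
static void
set_valve_drop (GstElement *valve, gboolean drop)
{
  g_object_set (valve, "drop", drop, NULL);
}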
Also, the correct state transition order for a media source is:
NULL -> READY -> PAUSED -> PLAYING (Upwards)
PLAYING -> PAUSED -> READY -> NULL (Downwards)
The order in my example above should be corrected so that ready() comes before pause(). I would also tend to think that the unlinking should be performed after the null() state rather than after pause(). I haven't tried these changes, but theoretically they should work.
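A C sketch of roughly that corrected order, with the same caveat that it is untested (the element variables are placeholders): take the old destination bin down to NULL before unlinking and removing it, and sync the new bin's state with the running pipeline instead of bouncing the whole pipeline through its states.

static void
replace_dst_bin (GstElement *pipeline, GstElement *src_bin,
    GstElement *dst_bin_old, GstElement *dst_bin_new)
{
  gst_element_set_state (dst_bin_old, GST_STATE_NULL);  /* downwards to NULL first */
  gst_element_unlink (src_bin, dst_bin_old);
  gst_bin_remove (GST_BIN (pipeline), dst_bin_old);     /* drops the pipeline's ref */

  gst_bin_add (GST_BIN (pipeline), dst_bin_new);
  gst_element_link (src_bin, dst_bin_new);
  gst_element_sync_state_with_parent (dst_bin_new);     /* upwards to the pipeline's state */
}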
See the following link for detailed info
http://cgit.freedesktop.org/gstreamer/gstreamer/tree/docs/design/part-states.txt?h=BRANCH-RELEASE-0_10_19

Related

Add stream meta to a stream via GStreamer

The goal here is to be able to send some metadata (timestamps, objects found per frame) with a stream, either within a single pipeline or across multiple pipelines over a network (e.g. RTP over UDP).
Within the pipeline:
This is straightforward: define a new GstMeta API, register and implement it, then add it to GstBuffers via buffer probes or via another element designed solely for this purpose.
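A minimal sketch of what I mean for the in-pipeline case (MyFrameMeta and its fields are made up for illustration; a real implementation should use g_once-style registration and provide a transform function so the meta survives buffer copies):

#include <gst/gst.h>

/* Hypothetical per-frame metadata: a timestamp and an object count. */
typedef struct {
  GstMeta meta;
  guint64 capture_time;
  guint   object_count;
} MyFrameMeta;

static gboolean
my_frame_meta_init (GstMeta *meta, gpointer params, GstBuffer *buffer)
{
  MyFrameMeta *m = (MyFrameMeta *) meta;
  m->capture_time = 0;
  m->object_count = 0;
  return TRUE;
}

/* Register the meta API type and its implementation (not thread-safe here). */
static const GstMetaInfo *
my_frame_meta_get_info (void)
{
  static const GstMetaInfo *info = NULL;
  if (info == NULL) {
    static const gchar *tags[] = { NULL };
    GType api = gst_meta_api_type_register ("MyFrameMetaAPI", tags);
    info = gst_meta_register (api, "MyFrameMeta", sizeof (MyFrameMeta),
        my_frame_meta_init, NULL, NULL);
  }
  return info;
}

/* Buffer probe that attaches the meta to every buffer passing the pad. */
static GstPadProbeReturn
attach_meta_probe (GstPad *pad, GstPadProbeInfo *probe_info, gpointer user_data)
{
  GstBuffer *buf = gst_buffer_make_writable (GST_PAD_PROBE_INFO_BUFFER (probe_info));
  MyFrameMeta *m =
      (MyFrameMeta *) gst_buffer_add_meta (buf, my_frame_meta_get_info (), NULL);
  m->capture_time = GST_BUFFER_PTS (buf);
  m->object_count = 42;                        /* filled in by the analytics code */
  GST_PAD_PROBE_INFO_DATA (probe_info) = buf;  /* hand back the writable buffer */
  return GST_PAD_PROBE_OK;
}

The probe would then be attached with gst_pad_add_probe (pad, GST_PAD_PROBE_TYPE_BUFFER, attach_meta_probe, NULL, NULL).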
Multiple pipelines over the network:
The only solution I have heard of is to add one or more RTP header extensions to the RTP packets coming out of a payloader. Some people have in fact decided to transform their custom GstMeta into such header extensions. But is this really a good use case for them?
There's more...
Another idea is to create your own custom media type, video/x-mytype. For that you will need to write a typefinding function and an autoplugger on top of it, not to mention a couple of elements to deal with this new type and convert it to other existing media types. That new type should actually have a place where it can handle the metadata I was talking about.
That summarizes my research. Are there any other methods that I am not aware of?
I would definitely appreciate your input on this!
Thanks in advance!
I decided to implement a new media type inside GStreamer (e.g. video/x-h265-with-meta). This new media type has inherent metadata-handling capabilities, and you can design how it encodes/decodes the metadata as you see fit. You would also need to write two more GStreamer elements: one to convert regular video/x-h265 to video/x-h265-with-meta (say muxmymeta) and another to reverse that conversion (say demuxmymeta). Essentially, muxmymeta reads the metadata injected into the GstBuffer* via the GstMeta API implementation and encodes it into the outgoing stream, while demuxmymeta decodes that metadata from the stream and adds it back to the GstBuffer*. To ensure that every encoded H.265 frame has a corresponding meta, muxmymeta must have its sink (input) caps in this format:
video/x-h265
stream-format: byte-stream
alignment: au
The byte-stream format is mandatory for network transmission, and the au alignment forces a whole encoded frame as input for muxmymeta.
I also used the rtpgstpay and rtpgstdepay elements to use this new media type with RTP/RTSP. You will NOT need to write a typefinding function at this point, since rtpgstpay/rtpgstdepay can figure that out from the caps. This works out of the box for H.265 streams in GStreamer v1.6.2, but I had to make a fix inside GStreamer itself to make it work with H.264 as well. Finally, to make autopluggers such as decodebin/uridecodebin decode your stream, you must set the klass for demuxmymeta to Codec/Decoder/Demuxer and give it a rank other than none; this basically tells GStreamer to use this element for decoding.
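For the last point, the relevant registration bits could look roughly like this (demuxmymeta and GST_TYPE_DEMUXMYMETA being the hypothetical element described above):

/* In demuxmymeta's class_init(): advertise it as a demuxer/decoder so
 * decodebin/uridecodebin consider it while autoplugging. */
gst_element_class_set_static_metadata (element_class,
    "My meta demuxer",                /* long name */
    "Codec/Decoder/Demuxer",          /* klass     */
    "Strips the custom metadata and outputs plain video/x-h265",
    "you <you@example.com>");

/* In plugin_init(): register the element with a rank other than
 * GST_RANK_NONE so autopluggers actually pick it up. */
gst_element_register (plugin, "demuxmymeta", GST_RANK_PRIMARY,
    GST_TYPE_DEMUXMYMETA);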
Here are 2 pipelines for illustration:
Sending H265 RTSP with metadata pipeline (set as the launch line for GstRTSPMediaFactory)
( rtspsrc location=rtsp://localhost:8554/test latency=0 ! rtph265depay ! h265parse ! muxmymeta ! rtpgstpay name=pay0 pt=96 )
Here, the rtsp://localhost:8554/test stream is a regular H.265 stream (video/x-h265); assume that the RTSP server will produce rtsp://localhost:8553/test-meta, which has the type video/x-h265-with-meta.
Receiver
rtspsrc location=rtsp://localhost:8553/test-meta ! rtpgstdepay ! demuxmymeta ! h265parse ! avdec_h265 ! videoconvert ! autovideosink
With an autoplugger it looks as neat as:
uridecodebin uri=rtsp://localhost:8553/test-meta ! autovideosink
Though this might seem like a long-shot solution, it provides a way to transport self-synchronizing metadata between pipelines over the network.

Send EOS to pipeline containing webrtcbin and appsrc

I have a pipeline that receives a video stream from an application via appsrc and streams it to a WebRTC client. At the same time, the pipeline attempts to save the video to a file (as MP4, but it could also be Matroska etc.) in a separate branch using a tee. The pipeline is created programmatically using gst_parse_launch and looks as follows:
webrtcbin bundle-policy=max-bundle name=myserver stun-server=stun://global.stun.twilio.com:3478?transport=udp
appsrc name=TextureSource-1 ! videoconvert ! video/x-raw,format=I420 ! x264enc name=VideoEncoder-1 tune=zerolatency
speed-preset=superfast ! tee name=t ! queue ! video/x-h264,stream-format=byte-stream !
filesink location=capture.mp4 t. ! queue ! rtph264pay !
application/x-rtp,media=video,encoding-name=H264,payload=96 ! myserver.
I can receive the stream without issues on my WebRTC client, but the problem is that the saved MP4 file is mostly unplayable. Somehow I can only play the file using ffplay, which plays it at approximately twice the speed of the capture rate. After some web searching, I found out that I need to send an EOS event to the pipeline (so that the MP4 header gets written properly and the file becomes playable) through a call like gst_element_send_event(m_pPipeline, gst_event_new_eos()); where m_pPipeline is a pointer to my pipeline element.
However, on my bus I never get a message of type GST_MESSAGE_EOS, which, in my understanding, means that the EOS event somehow does not travel downstream to my sinks. I tried to add the message-forward parameter using g_object_set(G_OBJECT(m_pPipeline), "message-forward", true, nullptr); but I observed the same behaviour.
What am I doing wrong here? Should the EOS event not be sent directly to the pipeline but rather to the individual sinks (here filesink and webrtcbin)?
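For reference, the shutdown sequence I am attempting boils down to something like this (simplified sketch; the blocking bus wait is only for illustration):

/* Inject EOS and wait until it (or an error) shows up on the bus,
 * then tear the pipeline down. */
gst_element_send_event (m_pPipeline, gst_event_new_eos ());

GstBus *bus = gst_element_get_bus (m_pPipeline);
GstMessage *msg = gst_bus_timed_pop_filtered (bus, GST_CLOCK_TIME_NONE,
    (GstMessageType) (GST_MESSAGE_EOS | GST_MESSAGE_ERROR));
if (msg != NULL)
  gst_message_unref (msg);
gst_object_unref (bus);

gst_element_set_state (m_pPipeline, GST_STATE_NULL);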

x264 enc v4l2 /dev/video2 stream

I have this working, but I have been unable to get video from my Magewell device to integrate and could use help with the correct pipeline.
gst-launch-1.0 videotestsrc ! video/x-raw,width=848,height=480,framerate=25/1 ! x264enc bitrate=700 ! video/x-h264,width=848,height=480,framerate=25/1,stream-format=byte-stream,profile=baseline ! tee name=t \
t. ! queue ! tcpclientsink host=172.18.0.3 port=8000 \
t. ! queue ! tcpclientsink host=172.18.0.4 port=8000
I do not see the receiver-side pipeline in the question description. This is required to verify that there are no issues at the receiver side. Based on your current pipeline I have the following suggestions:
You don't need to set the caps again after x264enc, because the output is of type video/x-h264 anyway. What you do need is to add h264parse after x264enc. You also need to add h264parse before passing the data to the decoder you are using at the receiver side.
The bitrate set for x264enc is also very low. The units are kbit/s, and for video this might be far too little. It's best to leave this at the default setting if you do not have strict resource constraints; otherwise, try a higher value.
Also, is there any reason why you are using TCP? Using UDP might be a better idea for video, in case video data/packet loss is not an issue.
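For illustration, here is how the sender pipeline might look with those suggestions applied, written with gst_parse_launch (the same launch string works with gst-launch-1.0; hosts and ports are the ones from your pipeline, and this is only a sketch):

#include <gst/gst.h>

int
main (int argc, char **argv)
{
  gst_init (&argc, &argv);

  /* No caps filter after x264enc, h264parse inserted, bitrate left at default. */
  GError *error = NULL;
  GstElement *pipeline = gst_parse_launch (
      "videotestsrc ! video/x-raw,width=848,height=480,framerate=25/1 ! "
      "x264enc ! h264parse ! tee name=t "
      "t. ! queue ! tcpclientsink host=172.18.0.3 port=8000 "
      "t. ! queue ! tcpclientsink host=172.18.0.4 port=8000", &error);
  if (pipeline == NULL) {
    g_printerr ("parse error: %s\n", error->message);
    return 1;
  }

  gst_element_set_state (pipeline, GST_STATE_PLAYING);
  /* in real code: run a main loop or wait for EOS/errors on the bus */
  return 0;
}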

GStreamer: VBI data stream decoding

I have a question related to processing a VBI data stream through the 'teletextdec' plug-in.
Basically, I've already verified the functionality of the following configuration:
gst-launch-1.0 -v -m filesrc location=sample.mpeg ! tsdemux ! teletextdec ! videoconvert ! ximagesink
where sample.mpeg is a video file containing not only video data but also VBI data. The result is presented as video frames with the decoded VBI data overlaid.
At the moment, I'm interested in repeating a similar configuration, but with the video and VBI data separated from each other (I need to work with two data sources: one providing video data, the other providing VBI data).
In other words, we have the following two data sources:
/dev/vbi0 - is a character device, with binary data as an output
/dev/video0 - v4l2 device, with RAW data video as an output
I understand that, in general, I need something like this:
avfvideosrc name=src ! videomixer src.vbi_01 ! teletextdec
Does anyone have similar experience?
Could anyone explain how to correctly mix and decode RAW video data together with VBI data?
Thanks in advance!

Stream Icecast using GStreamer

I'm designing a program to stream from an Icecast server (radio.clarkson.edu). Ultimately it will be written in Python 3, but for now I'm using gst-launch to test the pipeline. I've been working on Debian Jessie and using gstreamer-1.0. Using a file on Wikimedia, I was able to play it pretty easily using:
url=https://upload.wikimedia.org/wikipedia/commons/0/0c/Muriel-Nguyen-Xuan-Korsakov-Flight-of-the-bumblebee.flac.oga
gst-launch-1.0 -v souphttpsrc location=$url ! decodebin ! audioconvert ! audioresample ! alsasink
Running the same commands with my stream, I get the output:
Setting pipeline to PAUSED ...
Pipeline is PREROLLING ...
/GstPipeline:pipeline0/GstDecodeBin:decodebin0/GstTypeFindElement:typefind.GstPad:src: caps = text/uri-list
Missing element: text/uri-list decoder
ERROR: from element /GstPipeline:pipeline0/GstDecodeBin:decodebin0: Your GStreamer installation is missing a plug-in.
Additional debug info:
gstdecodebin2.c(3977): gst_decode_bin_expose (): /GstPipeline:pipeline0/GstDecodeBin:decodebin0:
no suitable plugins found
ERROR: pipeline doesn't want to preroll.
Setting pipeline to NULL ...
/GstPipeline:pipeline0/GstDecodeBin:decodebin0/GstTypeFindElement:typefind.GstPad:src: caps = "NULL"
Freeing pipeline ...
I have tried too many other pipelines to put in one post, but I can answer any other questions.
Thank you
By now you have probably solved the problem, but here's an idea anyway: text/uri-list indicates that you didn't hand an actual stream to GStreamer, but rather a (textual) playlist that contains stream addresses. I guess GStreamer can't handle those, so you need to parse the playlist beforehand and then hand an actual audio stream address to it.
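For example, something along these lines, assuming you have saved the playlist to a local file first (e.g. with curl) and that its first http line is the actual stream address; the pipeline string mirrors the one from the question:

#include <gst/gst.h>
#include <stdio.h>
#include <string.h>

int
main (int argc, char **argv)
{
  gst_init (&argc, &argv);

  /* Take the first URL-looking line of the downloaded playlist as the
   * real stream address. */
  char line[1024], stream_url[1024] = "";
  FILE *f = fopen ("playlist.m3u", "r");   /* hypothetical local copy */
  if (f == NULL)
    return 1;
  while (fgets (line, sizeof line, f) != NULL) {
    if (strncmp (line, "http", 4) == 0) {
      line[strcspn (line, "\r\n")] = '\0';
      strncpy (stream_url, line, sizeof stream_url - 1);
      break;
    }
  }
  fclose (f);
  if (stream_url[0] == '\0')
    return 1;

  /* Same pipeline as in the question, fed with the extracted address. */
  gchar *desc = g_strdup_printf (
      "souphttpsrc location=%s ! decodebin ! audioconvert ! "
      "audioresample ! alsasink", stream_url);
  GstElement *pipeline = gst_parse_launch (desc, NULL);
  g_free (desc);
  if (pipeline == NULL)
    return 1;

  gst_element_set_state (pipeline, GST_STATE_PLAYING);
  GstBus *bus = gst_element_get_bus (pipeline);
  GstMessage *msg = gst_bus_timed_pop_filtered (bus, GST_CLOCK_TIME_NONE,
      (GstMessageType) (GST_MESSAGE_EOS | GST_MESSAGE_ERROR));
  if (msg != NULL)
    gst_message_unref (msg);
  gst_object_unref (bus);
  gst_element_set_state (pipeline, GST_STATE_NULL);
  gst_object_unref (pipeline);
  return 0;
}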