Add stream metadata to a stream via GStreamer - C++

The goal here is to be able to send some metadata (timestamps, objects found per frame) with a stream, either within a single pipeline or across multiple pipelines over the network (e.g. RTP over UDP).
Within the pipeline:
This is straightforward: define a new GstMeta API, register and implement it, then attach it to GstBuffers via buffer probes or via an element designed solely for this purpose.
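For reference, a minimal sketch of what registering such a meta and attaching it from a buffer probe can look like (the type, field and function names here are made up purely for illustration):

#include <gst/gst.h>

/* Hypothetical per-frame metadata: a timestamp and an object count. */
typedef struct {
  GstMeta meta;
  GstClockTime capture_time;
  guint objects_found;
} MyFrameMeta;

static const GstMetaInfo *my_frame_meta_info = NULL;

static gboolean
my_frame_meta_init (GstMeta *meta, gpointer params, GstBuffer *buffer)
{
  MyFrameMeta *m = (MyFrameMeta *) meta;
  m->capture_time = GST_CLOCK_TIME_NONE;
  m->objects_found = 0;
  return TRUE;
}

/* Call once after gst_init(): registers the meta API and its implementation. */
static void
my_frame_meta_register (void)
{
  static const gchar *tags[] = { NULL };
  GType api = gst_meta_api_type_register ("MyFrameMetaAPI", tags);
  my_frame_meta_info = gst_meta_register (api, "MyFrameMeta",
      sizeof (MyFrameMeta), my_frame_meta_init, NULL, NULL);
}

/* Buffer probe that attaches the meta to every buffer passing the pad. */
static GstPadProbeReturn
attach_meta_probe (GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
{
  GstBuffer *buf = GST_PAD_PROBE_INFO_BUFFER (info);
  MyFrameMeta *m;

  buf = gst_buffer_make_writable (buf);
  m = (MyFrameMeta *) gst_buffer_add_meta (buf, my_frame_meta_info, NULL);
  m->capture_time = GST_BUFFER_PTS (buf);
  m->objects_found = 0;   /* to be filled in by the analysis code */

  GST_PAD_PROBE_INFO_DATA (info) = buf;
  return GST_PAD_PROBE_OK;
}

/* Installed with:
 *   gst_pad_add_probe (pad, GST_PAD_PROBE_TYPE_BUFFER,
 *                      attach_meta_probe, NULL, NULL);
 */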
Multiple pipelines over the network:
The only solution I have heard of is to add one or more RTP header extensions to the RTP packets coming out of a payloader. Some people have in fact chosen to transform their custom GstMeta into such header extensions. But is this really a good use case for them?
There's more ....
Another idea is to create your own custom media type, video/x-mytype. For that you will need to write a typefinding function and an autoplugger on top of it, not to mention a couple of elements to deal with this new type and convert it to other existing media types. That new type would then be the place where the metadata I am talking about can be handled.
That summarizes my research. Are there any other methods that I am not aware of?
I would definitely appreciate your input on this!
Thanks in advance!

I decided to implement a new media type inside GStreamer (e.g. video/x-h265-with-meta). This new media type has inherent metadata-handling capabilities; you can design how it encodes/decodes the metadata as you see fit. You also need to write two more GStreamer elements: one to convert regular video/x-h265 to video/x-h265-with-meta (say muxmymeta) and another to reverse that conversion (say demuxmymeta). Essentially, muxmymeta reads the metadata attached to the GstBuffer* via the GstMeta API implementation and encodes it into the outgoing stream, while demuxmymeta decodes that metadata from the stream and adds it back to the GstBuffer*. To ensure that every encoded H.265 frame has a corresponding meta, muxmymeta must require caps in this format on its input:
video/x-h265
stream-format: byte-stream
alignment: au
The byte-stream format is mandatory for network transmission, and the au alignment forces a whole encoded frame per input buffer for muxmymeta.
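For illustration, that constraint could be expressed in muxmymeta's sink pad template roughly like this (the element itself being hypothetical):

static GstStaticPadTemplate mux_sink_template = GST_STATIC_PAD_TEMPLATE ("sink",
    GST_PAD_SINK, GST_PAD_ALWAYS,
    GST_STATIC_CAPS ("video/x-h265, stream-format=(string)byte-stream, "
        "alignment=(string)au"));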
I also used the rtpgstpay and rtpgstdepay elements to carry this new media type over RTP/RTSP. You will NOT need to write a typefinding function at this point, since rtpgstpay/rtpgstdepay can figure the type out from the caps. This works out of the box for H.265 streams in GStreamer v1.6.2, but I had to make a fix inside GStreamer itself to make it work with H.264 as well. Finally, to make autopluggers such as decodebin/uridecodebin decode your stream, you must set the klass of demuxmymeta to Codec/Decoder/Demuxer and give it a rank other than none; this basically tells GStreamer to use this element for decoding.
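As a rough sketch (the GstDemuxMyMeta type and all naming are hypothetical), the rank is set when the element is registered in the plugin's plugin_init(), and the klass in the element's class_init():

static gboolean
plugin_init (GstPlugin *plugin)
{
  /* A rank above GST_RANK_NONE makes autopluggers consider the element. */
  return gst_element_register (plugin, "demuxmymeta",
      GST_RANK_PRIMARY, GST_TYPE_DEMUXMYMETA);
}

static void
gst_demuxmymeta_class_init (GstDemuxMyMetaClass *klass)
{
  GstElementClass *element_class = GST_ELEMENT_CLASS (klass);

  /* The klass string tells decodebin/uridecodebin what role this element plays. */
  gst_element_class_set_static_metadata (element_class,
      "MyMeta demuxer", "Codec/Decoder/Demuxer",
      "Extracts custom metadata from video/x-h265-with-meta streams",
      "Example Author <author@example.com>");
}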
Here are 2 pipelines for illustration:
Sending H265 RTSP with metadata pipeline (set as the launch line for GstRTSPMediaFactory)
( rtspsrc location=rtsp://localhost:8554/test latency=0 ! rtph265depay ! h265parse ! muxmymeta ! rtpgstpay name=pay0 pt=96 )
Here, the rtsp://localhost:8554/test stream is a regular H.265 stream (video/x-h265); assume the RTSP server then exposes rtsp://localhost:8553/test-meta, which has the type video/x-h265-with-meta.
Receiver
rtspsrc location=rtsp://localhost:8553/test-meta ! rtpgstdepay ! demuxmymeta ! h265parse ! avdec_h265 ! videoconvert ! autovideosink
With an autoplugger it looks as neat as:
uridecodebin uri=rtsp://localhost:8553/test-meta ! autovideosink
Though this might seem like a long-shot solution, it provides a way to transport self-synchronizing metadata between pipelines over the network.

Related

x264 enc v4l2 /dev/video2 stream

I have this working, but I have been unable to get video from my magwell to integrate, and could use help with the correct pipeline.
gst-launch-1.0 videotestsrc ! video/x-raw,width=848,height=480,framerate=25/1 ! x264enc bitrate=700 ! video/x-h264,width=848,height=480,framerate=25/1,stream-format=byte-stream,profile=baseline ! tee name=t\
t. ! queue ! tcpclientsink host=172.18.0.3 port=8000 \
t. ! queue ! tcpclientsink host=172.18.0.4 port=8000
I do not see the receiver-side pipeline in the question description; it is needed to verify that there are no issues on the receiver side. Based on your current pipeline I have the following suggestions:
You don't need to set the caps again after the x264enc element, because its output is already of type video/x-h264. What you do need is to add h264parse after x264enc. You also need to add h264parse before the decoder you are using on the receiver side; a sketch of both pipelines follows these suggestions.
The bitrate set for x264enc is also very low. The unit is kbit/s, and for video this may be far too little. It is best to leave it at the default setting if you do not have strict resource constraints; otherwise try a higher value.
Also, is there any reason why you are using TCP? Using UDP might be a better idea for video, in case video data/packet loss is not an issue.
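For reference, the sending side with those changes could look roughly like this (keeping the hosts and ports from your pipeline; the stream-format caps are kept so the raw TCP receiver can parse the stream):

gst-launch-1.0 videotestsrc ! video/x-raw,width=848,height=480,framerate=25/1 ! \
    x264enc ! h264parse ! video/x-h264,stream-format=byte-stream ! tee name=t \
    t. ! queue ! tcpclientsink host=172.18.0.3 port=8000 \
    t. ! queue ! tcpclientsink host=172.18.0.4 port=8000

and a matching receiver, assuming each peer runs a TCP server source and decodes in software, might be:

gst-launch-1.0 tcpserversrc host=0.0.0.0 port=8000 ! h264parse ! avdec_h264 ! videoconvert ! autovideosink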

GStreamer: VBI data stream decoding

I have a question about processing a VBI data stream through the 'teletextdec' plug-in.
Basically, I have already verified the functionality of the following configuration:
gst-launch-1.0 -v -m filesrc location=sample.mpeg ! tsdemux ! teletextdec ! videoconvert ! ximagesink
where sample.mpeg is a video file containing not only video data but also VBI data. The result is video frames with the decoded VBI data overlaid.
At the moment I want to reproduce a similar configuration, but with the video and VBI data separated from each other (I need to work with two data sources: one providing the video data, the other providing the VBI data).
In other words, we have the following two data sources:
/dev/vbi0 - is a character device, with binary data as an output
/dev/video0 - v4l2 device, with RAW data video as an output
I understand that, in general, I need something like this:
avfvideosrc name=src ! videomixer src.vbi_01 ! teletextdec
Does anyone have similar experience?
Could anyone explain how to correctly mix and decode RAW video data together with VBI data?
Thanks in advance!

Gstreamer, rtspsrc and payload type

I'm having difficulties retrieving an RTSP stream from a specific camera, because the RTP payload type the camera provides is 35 (unassigned) and the payload types accepted by the rtph264depay plugin are in the range [96-127]. The result is that GStreamer displays an error like:
<udpsrc0> error: Internal data flow error.
<udpsrc0> error: streaming task paused, reason not-linked (-1)
Other cameras that I have tested work, because they define a proper payload type.
FFmpeg, MPlayer and other tools play the stream, although they may display a warning for the unknown type, for instance in MPlayer:
rtsp_session: unsupported RTSP server. Server type is 'unknown'
Is there any way in GStreamer to fake the payload type, ignore the mismatching property, force linking between the plugins, or otherwise work around my problem?
The pipeline I am using is:
gst-launch-0.10 rtspsrc location="..." ! rtph264depay ! capsfilter caps="video/x-h264,width=1920,height=1080,framerate=(fraction)25/1" ! h264parse ! matroskamux ! filesink location="test.mkv"
I figured it out and got it working. Posting an answer here in the hope that it might benefit someone. There are multiple similar questions out there, but they lack proper answers.
The following does the trick:
GstElement* depay = gst_element_factory_make("rtph264depay", "video_demux");
assert(depay);

/* Replace the depayloader's sink pad caps with a less restrictive set that
 * only requires RTP carrying H.264 (no payload type or clock-rate fields). */
GstPad* depay_sink = gst_element_get_static_pad(depay, "sink");
GstCaps* depay_sink_caps = gst_caps_new_simple("application/x-rtp",
    "media", G_TYPE_STRING, "video",
    "encoding-name", G_TYPE_STRING, "H264",
    NULL);
gst_pad_use_fixed_caps(depay_sink);
gst_pad_set_caps(depay_sink, depay_sink_caps);
gst_caps_unref(depay_sink_caps);
gst_object_unref(depay_sink);
This overrides the rtph264depay plugin's sink pad caps to be less restrictive: it now accepts any payload type (and any clock-rate) as long as the stream is RTP with H.264 encoding.
I don't think this is possible with gst-launch.
There is a select-stream signal in the rtspsrc element, documented here: http://gstreamer.freedesktop.org/data/doc/gstreamer/head/gst-plugins-good-plugins/html/gst-plugins-good-plugins-rtspsrc.html#GstRTSPSrc-select-stream
It is a callback in which you inspect the stream: if you return TRUE, GStreamer will SETUP and PLAY the stream; if you return FALSE, it will ignore it. This should let you skip the unsupported stream. In my case I'm having trouble with an ONVIF metadata stream: rtspsrc always tries to play it and there is no parser for it. I really wish GStreamer would just ignore the streams it can't play and work with what it has, or at least offer a flag to toggle that behaviour.
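For example, a callback along these lines (a sketch; the selection criteria are up to you) connected to that signal skips everything that is not audio or video:

/* Return TRUE to have rtspsrc SETUP/PLAY the stream, FALSE to skip it. */
static gboolean
on_select_stream (GstElement *rtspsrc, guint num, GstCaps *caps,
    gpointer user_data)
{
  GstStructure *s = gst_caps_get_structure (caps, 0);
  const gchar *media = gst_structure_get_string (s, "media");

  /* Only set up audio and video; ignore application/metadata streams. */
  return (g_strcmp0 (media, "video") == 0 || g_strcmp0 (media, "audio") == 0);
}

/* Somewhere after creating the rtspsrc element: */
g_signal_connect (rtspsrc, "select-stream",
    G_CALLBACK (on_select_stream), NULL);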

dynamically replacing elements in a playing gstreamer pipeline

I'm looking for the correct technique, if one exists, for dynamically replacing an element in a running GStreamer pipeline. I have a GStreamer-based C++ app, and the pipeline it creates looks like this (using gst-launch syntax):
souphttpsrc location="http://localhost/local.ts" ! mpegtsdemux name=d ! queue ! mpeg2dec ! xvimagesink d. ! queue ! a52dec ! pulsesink
In the middle of playback (i.e. the pipeline state is GST_STATE_PLAYING and the user is happily watching video), I need to remove the souphttpsrc from the pipeline, create a new souphttpsrc (or even a new neonhttpsrc), immediately add that back into the pipeline, and continue playback of the same URI source stream at the same time position where playback was before we performed this operation. The user might see a small delay and that is fine.
We've barely figured out how to remove and replace the source, and we need a better understanding. Here's our best attempt thus far:
/* Stop and remove the old source. */
gst_element_unlink(source, demuxer);
gst_element_set_state(source, GST_STATE_NULL);
gst_bin_remove(GST_BIN(pipeline), source);

/* Create a replacement, add it, link it and let it follow the pipeline's state. */
source = gst_element_factory_make("souphttpsrc", "src");
g_object_set(G_OBJECT(source), "location", url, NULL);
gst_bin_add(GST_BIN(pipeline), source);
gst_element_link(source, demuxer);
gst_element_sync_state_with_parent(source);
This doesn't work perfectly: the new source plays back from the beginning, and the rest of the pipeline (I assume) waits for buffers with the correct timestamps, because playback only picks up again after several seconds. I tried seeking the source in multiple ways, but nothing has worked.
I need to know the correct way to do this. It would also be nice to know a general technique, if one exists, in case we wanted to dynamically replace the decoder or some other element.
thanks
I think this may be what you are looking for:
http://cgit.freedesktop.org/gstreamer/gstreamer/tree/docs/design/part-block.txt
(starting at line 115)
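To give a flavour of the technique described there, here is a rough sketch using the GStreamer 1.0 probe API (the 0.10 equivalent is gst_pad_set_blocked_async()); it swaps the decoder downstream of a queue, and all element/variable names are illustrative. A production version would also drain the old element with an EOS event, as the design doc describes.

/* Probe callback: runs once data flow is blocked at the queue's src pad. */
static GstPadProbeReturn
on_blocked (GstPad *blockpad, GstPadProbeInfo *info, gpointer user_data)
{
  GstElement *pipeline = GST_ELEMENT (user_data);
  GstElement *queue   = gst_bin_get_by_name (GST_BIN (pipeline), "q");
  GstElement *old_dec = gst_bin_get_by_name (GST_BIN (pipeline), "dec");
  GstElement *vsink   = gst_bin_get_by_name (GST_BIN (pipeline), "vsink");
  GstElement *new_dec = gst_element_factory_make ("mpeg2dec", "dec");

  /* No data is flowing past the blocked pad, so the old decoder is idle
   * and can be unlinked, stopped and removed safely. */
  gst_element_unlink (queue, old_dec);
  gst_element_unlink (old_dec, vsink);
  gst_element_set_state (old_dec, GST_STATE_NULL);
  gst_bin_remove (GST_BIN (pipeline), old_dec);

  gst_bin_add (GST_BIN (pipeline), new_dec);
  gst_element_link_many (queue, new_dec, vsink, NULL);
  gst_element_sync_state_with_parent (new_dec);

  gst_object_unref (queue);
  gst_object_unref (old_dec);
  gst_object_unref (vsink);

  /* Removing the probe releases the block; data flows into the new decoder. */
  return GST_PAD_PROBE_REMOVE;
}

/* Install a blocking probe on the pad feeding the element to be replaced: */
GstPad *blockpad = gst_element_get_static_pad (queue, "src");
gst_pad_add_probe (blockpad, GST_PAD_PROBE_TYPE_BLOCK_DOWNSTREAM,
    on_blocked, pipeline, NULL);
gst_object_unref (blockpad);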

Gstreamer: Pausing/resuming video in RTP streams

I'm constructing a GStreamer pipeline that receives two RTP streams from a networked source:
ILBC Audio stream + corresponding RTCP stream
H263 Video stream + corresponding RTCP stream
Everything is put into one GStreamer pipeline, so it will use the RTCP from both streams to synchronize audio/video. So far I've come up with this (using gst-launch for prototyping):
gst-launch -vvv gstrtpbin name=rtpbin
udpsrc caps="application/x-rtp,media=(string)video,clock-rate=(int)90000,encoding-name=(string)H263-2000" port=40000 ! rtpbin.recv_rtp_sink_0
rtpbin. ! rtph263pdepay ! ffdec_h263 ! xvimagesink
udpsrc port=40001 ! rtpbin.recv_rtcp_sink_0
rtpbin.send_rtcp_src_0 ! udpsink port=40002 sync=false async=false
udpsrc caps="application/x-rtp,media=(string)audio,clock-rate=(int)8000,encoding-name=(string)PCMU,encoding-params=(string)1,octet-align=(string)1" port=60000 ! rtpbin.recv_rtp_sink_1
rtpbin. ! rtppcmudepay ! autoaudiosink
udpsrc port=60001 ! rtpbin.recv_rtcp_sink_1
rtpbin.send_rtcp_src_1 ! udpsink port=60002 sync=false async=false
This pipeline works well if the networked source starts out sending both video and audio. If the video stream is paused later on, GStreamer will still play back the audio and will even start playing the video again when the networked source resumes the video stream.
My problem, however, is that if the networked source starts out with only an audio stream (video might be added later on), the pipeline seems to pause/freeze until the video stream starts as well.
Since video is optional in my application (and can be added/removed at will by the user), is there any way I can hook up, for instance, a 'videotestsrc' that provides some kind of fallback video data to keep the pipeline running when there is no networked video data?
I've tried experimenting with 'videotestsrc' and an element called 'videomixer', but I think the mixer still requires both streams to be alive. Any feedback is greatly appreciated!
Here is a simple function for pause/resume by changing bins. In the following example I provide the logic to change the destination bin dynamically, on the fly. This does not completely stop the pipeline, which I believe is what you are after. Similar logic could be used for source bins: you could remove your network source bin and the related demuxer/decoder bins and add videotestsrc bins instead.
private static void dynamic_bin_replacement(Pipeline pipe, Element src_bin, Element dst_bin_new, Element dst_bin_old) {
    pipe.pause();
    src_bin.unlink(dst_bin_old);
    pipe.remove(dst_bin_old);
    pipe.add(dst_bin_new);
    dst_bin_new.syncStateWithParent();
    src_bin.link(dst_bin_new);
    pipe.ready();
    pipe.play();
}
The other technique you may want to try is pad blocking. Please take a look at the following posts:
http://cgit.freedesktop.org/gstreamer/gstreamer/tree/docs/design/part-block.txt
and
http://web.archiveorange.com/archive/v/8yxpz7FmOlGqxVYtkPb4
and
Adding and removing audio sources to/from GStreamer pipeline on-the-go
UPDATE
Try the output-selector and input-selector elements, as they seem to be a better alternative. I found them the most reliable and have had immense luck with them. I use fakesink or fakesrc, respectively, as the other end of the selector.
The valve element is another alternative; I found it doesn't even need fakesink or fakesrc. It is also extremely reliable.
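A rough sketch of the input-selector idea for your video branch (GStreamer 1.0 C API; element and variable names are illustrative, and network_branch stands for the depay/decode branch hanging off rtpbin):

GstElement *selector = gst_element_factory_make ("input-selector", "sel");
GstElement *fallback = gst_element_factory_make ("videotestsrc", "fallback");
GstElement *vsink    = gst_element_factory_make ("xvimagesink", "vsink");
GstPad *net_pad, *test_pad, *srcpad;

/* A live fallback source so the selector keeps producing data in real time. */
g_object_set (fallback, "is-live", TRUE, NULL);

gst_bin_add_many (GST_BIN (pipeline), selector, fallback, vsink, NULL);
gst_element_link (selector, vsink);

/* One request sink pad per input. */
net_pad  = gst_element_get_request_pad (selector, "sink_%u");
test_pad = gst_element_get_request_pad (selector, "sink_%u");

srcpad = gst_element_get_static_pad (network_branch, "src");
gst_pad_link (srcpad, net_pad);
gst_object_unref (srcpad);

srcpad = gst_element_get_static_pad (fallback, "src");
gst_pad_link (srcpad, test_pad);
gst_object_unref (srcpad);

/* Show the test pattern while no network video is flowing... */
g_object_set (selector, "active-pad", test_pad, NULL);

/* ...and switch over once rtpbin adds its video pad and the branch is linked. */
g_object_set (selector, "active-pad", net_pad, NULL);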
Also, the correct state transition order for a media file source is:
NULL -> READY -> PAUSED -> PLAYING (Upwards)
PLAYING -> PAUSED -> READY -> NULL (Downwards)
The order in my example above should be corrected so that ready() comes before pause(). I would also tend to think the unlinking should be performed after the null() state and not after pause(). I haven't tried these changes, but theoretically they should work.
See the following link for detailed info
http://cgit.freedesktop.org/gstreamer/gstreamer/tree/docs/design/part-states.txt?h=BRANCH-RELEASE-0_10_19