GStreamer video streaming pipeline with delay - gstreamer

Is it possible to add some delay before sending the demuxed, H.264-decoded output to autovideosink in a GStreamer pipeline? If so, can anybody post a sample pipeline to do that?
The pipeline I used is:
udpsrc port=5000 ! mpegtsdemux name=demux ! queue ! ffdec_h264 ! ffmpegcolorspace ! autovideosink demux. ! queue ! ffdec_mp3 ! audioconvert ! alsasink
In this case, once the stream is received at UDP port 5000 it will immediately start playing after demuxing, queuing and decoding. Is there any possibility of a delay, say 60 seconds, before sending it to autovideosink where it is actually played? Is there any GStreamer plugin/element to do that?

You might want to look at queue's parameters (run gst-inspect queue):
max-size-buffers : Max. number of buffers in the queue (0=disable)
flags: readable, writable
Unsigned Integer. Range: 0 - 4294967295 Default: 200
max-size-bytes : Max. amount of data in the queue (bytes, 0=disable)
flags: readable, writable
Unsigned Integer. Range: 0 - 4294967295 Default: 10485760
max-size-time : Max. amount of data in the queue (in ns, 0=disable)
flags: readable, writable
Unsigned Integer64. Range: 0 - 18446744073709551615 Default: 1000000000
min-threshold-buffers: Min. number of buffers in the queue to allow reading (0=disable)
flags: readable, writable
Unsigned Integer. Range: 0 - 4294967295 Default: 0
min-threshold-bytes : Min. amount of data in the queue to allow reading (bytes, 0=disable)
flags: readable, writable
Unsigned Integer. Range: 0 - 4294967295 Default: 0
min-threshold-time : Min. amount of data in the queue to allow reading (in ns, 0=disable)
flags: readable, writable
Unsigned Integer64. Range: 0 - 18446744073709551615 Default: 0
By setting min-threshold-time you can delay the output by n nanoseconds.
I've just tried that out with my webcam and it worked (60secs delay):
gst-launch v4l2src ! queue max-size-buffers=0 max-size-time=0 max-size-bytes=0 min-threshold-time=60000000000 ! autovideosink
Note that I've set the max-size-* parameters to 0, because if the queue fills up before the threshold is met, you won't get any data out of the queue.
And keep in mind that queueing a decoded video stream might result in huge memory usage.
With your udpsrc I'd recommend delaying the still-encoded H.264 stream. You might need to set the threshold in bytes instead of nanoseconds (I don't think the queue knows enough about the encoded data to make a guess at the bitrate).
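A minimal sketch of that idea applied to the pipeline from the question (untested; the 0.10 element names and UDP port are taken from the question, and min-threshold-bytes can be substituted for min-threshold-time if the time-based threshold does not work well on the encoded branches):
gst-launch udpsrc port=5000 ! mpegtsdemux name=demux \
    demux. ! queue max-size-buffers=0 max-size-bytes=0 max-size-time=0 min-threshold-time=60000000000 \
        ! ffdec_h264 ! ffmpegcolorspace ! autovideosink \
    demux. ! queue max-size-buffers=0 max-size-bytes=0 max-size-time=0 min-threshold-time=60000000000 \
        ! ffdec_mp3 ! audioconvert ! alsasink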

My solution was to add the delay to the autoaudiosink. A nifty feature, cryptically called ts-offset:
$ gst-launch-1.0 souphttpsrc location=http://server:10000/ ! queue \
max-size-bytes=1000000000 max-size-buffers=0 max-size-time=0 ! \
decodebin ! autoaudiosink ts-offset=500000000
min-threshold-* weren't working for me.
The delay works. Disabling synchronisation also worked:
$ gst-launch-1.0 souphttpsrc location=http://server:10000/ ! \
decodebin ! autoaudiosink sync=false
For music, which is what I am using it for, the synchronisation didn't really matter, except that it's nice having the next song come on sooner rather than later when changing tracks. So I still preferred the half-second delay.
With synchronisation disabled, the stream typically drifts slowly out of sync. For a live stream, whose data is being generated in real time, synchronisation can be maintained by asking the queue to drop the extra data:
gst-launch-1.0 souphttpsrc location=http://server:10000/ ! \
queue max-size-bytes=65536 max-size-buffers=0 max-size-time=0 \
leaky=downstream ! decodebin ! autoaudiosink sync=false
This keeps the stream synchronised to within 64 KiB of buffered data behind the time the data was first made available on the server. This ended up being my preferred solution, since I was streaming data that was being generated in real time by the sound card of a computer on the same Wi-Fi network. This is for live streams only. It will not work if the stream's data has been predetermined, in which case the entire stream will be downloaded as quickly as possible, resulting in the whole thing being played more or less in fast forward.

Related

adding audio delay in decoding pipeline - decklinkaudiosink

Dear GStreamer community,
I am running GStreamer (1.20.3) on Ubuntu 22.04 LTS with Decklink drivers (12.4).
After building the following pipeline (and playing around with GStreamer, watching tutorials etc.) I am able to decode a high-quality HD SRT stream (UDP streaming) and output it to SDI (in 1080i50); it works very well.
gst-launch-1.0 -v srtsrc uri=srt://x.x.x.x:xxxx latency=200 ! tsdemux name=demux demux. ! h264parse ! video/x-h264 ! avdec_h264 ! queue ! videoconvert ! video/x-raw,format=UYVY ! decklinkvideosink mode=1080i50 sync=false demux. ! avdec_aac ! queue ! audioconvert ! audio/x-raw, format=S32LE, channels=2 ! decklinkaudiosink
Audio-to-video sync is stable for hours (I didn't test for days), but after testing the encoder-to-decoder path end to end on my GStreamer pipeline, the audio comes a little too early (about 60 ms early).
I tried to change only the buffer size in the audio part of the pipeline to correct the timing on the audio side, e.g.
queue max-size-buffers=0 max-size-time=0 max-size-bytes=0 min-threshold-time=60000000
but the audio-to-video offset didn't change, even after trying several different min-threshold-time values.
For decklinkaudiosink there is no ts-offset property to change the timing, and changing the buffer-time property didn't change anything either.
Can anybody please help me correct the audio timing or audio latency relative to the video decoding in my pipeline?
Thanks!

What does 'num-buffers' do in gstreamer?

I could only find a few pages giving a one-line explanation of num-buffers, like this one:
Number of buffers to output before sending EOS (End of Stream). Default = -1 (unlimited)
I have a dummy pipeline using gst-launch-1.0 multifilesrc with the default loop=false. The pipeline loops because num-buffers=-1 is the default.
I don't want it to loop, and it stops looping when I set num-buffers=1 or literally any other finite number.
What does num-buffers=1 (or any other value, for that matter) actually mean?
Edit: Sample pipelines with a 10-second video
# 1. With loop=false and num-buffers=1
$> GST_DEBUG=3 gst-launch-1.0 multifilesrc location=preview.h264 loop=false num-buffers=1 ! h264parse ! avdec_h264 ! fakesink
...
Got EOS from element "pipeline0".
Execution ended after 0:00:00.425738029
...
# 2. With loop=false and num-buffers=10
$> GST_DEBUG=3 gst-launch-1.0 multifilesrc location=preview.h264 loop=false num-buffers=10 ! h264parse ! avdec_h264 ! fakesink
...
Got EOS from element "pipeline0".
Execution ended after 0:00:04.256451070
...
# 3. With neither loop flag (default=false) nor num-buffers (default=-1, unlimited)
$> GST_DEBUG=3 gst-launch-1.0 multifilesrc location=preview.h264 ! h264parse ! avdec_h264 ! fakesink
...This never ends because num-buffers=-1. Why?...
I didn't get any warnings in any case.
"num-buffers" defines how many frames will be published by a given element like videotestsrc. After sending "num-buffers", EOS event is published.
I find it useful in tests when you can define number of frames and framerate and then set expectations about how many frames shall be received during given time (e.g. using probe).
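For example, a quick sketch to see this: 100 buffers at 25 fps is 4 seconds of video, so with a synchronising sink the pipeline runs for about 4 seconds and then stops on EOS:
gst-launch-1.0 videotestsrc num-buffers=100 ! video/x-raw,framerate=25/1 ! fakesink sync=true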
multifilesrc doesn't seem to support "num-buffers": it will read all the files and then exit (or start again when loop=true). You should see a warning when setting "num-buffers" on multifilesrc.
multifilesrc inherits from the GstBaseSrc element and has a num-buffers property. It is meant to replay a sequence of frames as video:
gst-launch-1.0 multifilesrc location="%08d.png" loop=true num-buffers=1000 ! decodebin ! videoconvert ! ximagesink
To replay images named 00000000.png to 99999999.png one after another.
For your purpose, just use the filesrc element, not multifilesrc.
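For example, a sketch of the pipeline from the question with filesrc instead (same file name as in the question; filesrc reads the file once and then sends EOS):
gst-launch-1.0 filesrc location=preview.h264 ! h264parse ! avdec_h264 ! fakesink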

Synchronize two RTSP/RTP H264 video streams capture using GStreamer

I have two AXIS IP cameras streaming H.264 over RTSP/RTP. Both cameras are set to synchronize with the same NTP server, so I assume both cameras have the exact same clock (maybe a minor difference of a few ms).
In my application, both cameras are pointing at the same view and it is required to process images from both cameras taken at the same time. Thus, I want to synchronize the image capture using GStreamer.
I have tried invoking the two pipelines separately in different command prompts, but the videos are 2-3 seconds apart.
gst-launch rtspsrc location=rtsp://192.168.16.136:554/live ! rtph264depay ! h264parse ! splitmuxsink max-size-time=100000000 location=cam1_video_%d.mp4
gst-launch rtspsrc location=rtsp://192.168.16.186:554/live ! rtph264depay ! h264parse ! splitmuxsink max-size-time=100000000 location=cam2_video_%d.mp4
Can someone suggest a gstreamer pipeline to synchronize both H264 streams and record them into separate video files?
Thanks!
ARM
I am able to launch a pipeline using gst-launch as shown below. It shows a good improvement in captured-frame synchronization compared to launching two pipelines. Most of the time they differ by 0-500 ms. Still, I want to synchronize them to better than 150 ms accuracy.
rtspsrc location=rtsp://192.168.16.136:554/axis-media/media.amp?videocodec=h264 \
! rtph264depay ! h264parse \
! splitmuxsink max-size-time=10000000000 location=axis/video_136_%d.mp4 \
rtspsrc location=rtsp://192.168.16.186:554/axis-media/media.amp?videocodec=h264 \
! rtph264depay ! h264parse \
! splitmuxsink max-size-time=10000000000 location=axis/video_186_%d.mp4
I'd appreciate it if someone could point out other ideas!
~Arm
What do you mean by synchronize? If you record to separate video files you do not need any synchronization, as this is going to totally separate them. Each RT(S)P stream will contain different timestamps. If you want to align them somehow to the same time (I mean real human time, like "both should start from 15:00") then you have to configure that somehow on the cameras (this is just an idea).
Also, you did not tell us what is inside those RTP/RTSP streams (is it MPEG-TS or pure IP, etc.). So I will give an example with MPEG-TS encapsulated RTP streams.
We will go step by step:
Suppose this is one camera, just to demonstrate how it may look:
gst-launch-1.0 -v videotestsrc ! videoconvert ! x264enc ! mpegtsmux ! rtpmp2tpay ! udpsink host=127.0.0.1 port=8888
Then this would be the receiver (it must use rtpmp2tdepay, since we are encapsulating the metadata inside an MPEG container):
gst-launch-1.0 udpsrc port=8888 caps=application/x-rtp\,\ media\=\(string\)video\,\ encoding-name\=\(string\)MP2T ! rtpmp2tdepay ! decodebin ! videoconvert ! autovideosink
If you test this with your camera, the autovideosink means a new window will pop up displaying your camera.
Then you can try to store it in a file. We will use mp4mux.
So for the same camera input we do:
gst-launch-1.0 -e udpsrc port=8888 caps=application/x-rtp\,\ media\=\(string\)video\,\ encoding-name\=\(string\)MP2T ! rtpmp2tdepay ! tsdemux ! h264parse ! mp4mux ! filesink location=test.mp4
Explanation: we do not decode and re-encode (a waste of processing power), so I just demux the MPEG-TS stream and then, instead of decoding the H.264, I just parse it for the mp4mux, which accepts video/x-h264.
Now you could use the same pipeline for each camera, or you can just copy-paste all the elements into the same pipeline.
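For example, a sketch of both receivers copy-pasted into one pipeline (untested; the second port and the file names are placeholders):
gst-launch-1.0 -e \
    udpsrc port=8888 caps="application/x-rtp, media=(string)video, encoding-name=(string)MP2T" \
        ! rtpmp2tdepay ! tsdemux ! h264parse ! mp4mux ! filesink location=cam1.mp4 \
    udpsrc port=8889 caps="application/x-rtp, media=(string)video, encoding-name=(string)MP2T" \
        ! rtpmp2tdepay ! tsdemux ! h264parse ! mp4mux ! filesink location=cam2.mp4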
Now, as you did not provide any (at least partial) attempt of your own, adapting this further is going to be your homework :) Or make yourself clearer about the synchronization, as I do not understand it.
UPDATE
After your update to the question this answer is not very useful, but I will keep it here as a reference. I have no idea how to synchronize that.
Another piece of advice: try to look at the timestamps after udpsrc; maybe they are synchronized already. In that case you can use streamsynchronizer to synchronize the two streams, or maybe a video/audio mixer:
gst-launch-1.0 -v udpsrc port=8888 ! identity silent=false ! fakesink
This should print the timestamps (PTS, DTS, Duration ..):
/GstPipeline:pipeline0/GstIdentity:identity0: last-message = chain ******* (identity0:sink) (1328 bytes, dts: 0:00:02.707033598, pts:0:00:02.707033598, duration: none, offset: -1, offset_end: -1, flags: 00004000 tag-memory ) 0x7f57dc016400
Compare the PTS of each stream. Maybe you could combine the two udpsrc elements in one pipeline, and after each udpsrc put an identity (with a different name, e.g. name=something1) to make them start reception together.
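A sketch of that (untested; the ports are placeholders, and the distinct identity names make it easy to tell the two last-message lines apart):
gst-launch-1.0 -v \
    udpsrc port=8888 ! identity name=cam1_id silent=false ! fakesink \
    udpsrc port=8889 ! identity name=cam2_id silent=false ! fakesink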
HTH

Getting Warning messages from alsasrc

gst-launch-1.0 v4l2src ! videoconvert ! video/x-raw,format=I420 ! videoparse width=640 height=480 framerate=30/1 ! x264enc bitrate=2048 ref=4 key-int-max=20 byte-stream=true tune=zerolatency ! video/x-h264,stream-format=byte-stream,profile=main ! queue ! mux. alsasrc ! audioparse rate=44100 format=raw raw-format=s16le channels=2 ! faac perfect-timestamp=true ! aacparse ! queue ! mux. mpegtsmux name=mux ! rtpmp2tpay ! udpsink host=10.0.0.239 port=9090 sync=true async=false qos=true qos-dscp=46
While executing the above pipeline I receive the warning messages below continuously:
Additional debug info:
gstaudiobasesrc.c(863): gst_audio_base_src_create (): /GstPipeline:pipeline0/GstAlsaSrc:alsasrc0:
Dropped 12789 samples. This is most likely because downstream can't keep up and is consuming samples too slowly.
WARNING: from element /GstPipeline:pipeline0/GstAlsaSrc:alsasrc0: Can't record audio fast enough
Additional debug info:
gstaudiobasesrc.c(863): gst_audio_base_src_create (): /GstPipeline:pipeline0/GstAlsaSrc:alsasrc0:
Dropped 8820 samples. This is most likely because downstream can't keep up and is consuming samples too slowly.
So how do I overcome it?
The problem is exactly what the warning message says: This is most likely because downstream can't keep up and is consuming samples too slowly.
In other words, your processing is slow and thus cannot keep up with the speed of the input.
Try the solutions below, starting from the highest priority:
Add a queue to the video branch as well (see the sketch after this list)
Set the property sync=false on udpsink (works in some cases, but of course not very good as it may cause weird playback speed in some parts)
Set property provide-clock=false to alsasrc (may work in certain cases when audio clock is bad)
Tweak the pipeline to improve speed in video processing branch
If you cannot tweak the pipeline, the only option is... to stop printing the debug log and accept this as a limitation.
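Here is a sketch combining the first three suggestions on the pipeline from the question (untested; the videoparse/audioparse elements are dropped in favour of plain caps filters, while the host, port and encoder settings are copied from the question):
gst-launch-1.0 \
    v4l2src ! queue ! videoconvert ! video/x-raw,format=I420 \
        ! x264enc bitrate=2048 ref=4 key-int-max=20 byte-stream=true tune=zerolatency \
        ! video/x-h264,stream-format=byte-stream,profile=main ! queue ! mux. \
    alsasrc provide-clock=false ! queue ! audioconvert ! audioresample \
        ! audio/x-raw,rate=44100,format=S16LE,channels=2 ! faac perfect-timestamp=true ! aacparse ! queue ! mux. \
    mpegtsmux name=mux ! rtpmp2tpay ! udpsink host=10.0.0.239 port=9090 sync=false async=false qos=true qos-dscp=46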

Gstreamer: How to get audio and video to play at the same rate

My pipeline is simply trying to mux an audiotestsrc with a videotestsrc and output to a filesink.
videotestsrc num-buffers=150 ! video/x-raw-yuv,width=1920, height=1080 !
timeoverlay ! videorate ! queue ! xvidenc ! avimux name=mux mux.
! filesink sync=true location=new.avi
audiotestsrc num-buffers=150 !
queue ! audioconvert ! audiorate ! mux.
new.avi is produced.
Video is exactly 5 seconds long, as expected.
Audio is about 3.5 seconds long and the remaining 1.5 seconds is silent.
What am I missing here? I've tried every combination of sync="" properties, etc.
What pipeline would generate a test clip with an audio test pattern and a video test pattern muxed together, where audio and video have the same duration?
Thanks
audiotestsrc num-buffers=150
By default each buffer contains 1024 samples (the samplesperbuffer property), which means you are generating 150*1024 = 153600 samples.
Assuming 44.1 kHz, the duration would be 153600/44100 = 3.48 seconds.
So if you need 5 seconds of audio, you need 5*44100 = 220500 samples. With samplesperbuffer=1024, this means 220500/1024 = 215.33 buffers (i.e. 215 or 216 buffers).
It would be easier if you set samplesperbuffer to 441; then you need exactly 100 buffers for every second of audio:
audiotestsrc num-buffers=500 samplesperbuffer=441
You can also make use of the blocksize property of audiotestsrc to match the duration of a video frame. This is in bytes, so you might want to use a caps filter after audiotestsrc to select a sampling rate and sample format.
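Putting it together, a sketch of the original pipeline with matching durations (untested; this assumes videotestsrc's default 30 fps, so 150 buffers are 5 seconds of video, and audiotestsrc's default 44.1 kHz, so 500 buffers of 441 samples are 5 seconds of audio; 0.10 syntax as in the question):
gst-launch videotestsrc num-buffers=150 ! video/x-raw-yuv,width=1920,height=1080 ! \
    timeoverlay ! videorate ! queue ! xvidenc ! avimux name=mux mux. ! \
    filesink sync=true location=new.avi \
    audiotestsrc num-buffers=500 samplesperbuffer=441 ! \
    queue ! audioconvert ! audiorate ! mux.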