I'm converting an RTSP stream to HLS, but I get about 30 s of delay. Can someone help me reduce it?
Thanks in advance. This is the pipeline:
gst-launch-1.0 rtspsrc location=rtsp://admin:#192.168.1.27:554/ch0_0.265 ! decodebin ! videoconvert ! video/x-raw ! x264enc ! mpegtsmux ! hlssink location=/somewhere/segment_%05d.ts playlist-location=/somewhere/playlist.m3u8 target-duration=5 max-files=5
This latency is largely unavoidable here (HLS is not designed for low-latency streaming): you are creating .ts segments that are 5 seconds long each.
By default most HLS players will buffer 3 .ts segments before starting playback.
This means you have at the very least 15 seconds of latency.
If you bring your target duration down to 1 second, you might be able to get the latency down to about 3 to 6 seconds, at the cost of some extra computational overhead.
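As a rough, untested sketch of that change (keeping your elements, just shortening the segments; latency=0 on rtspsrc and key-int-max=30 on x264enc are extra assumptions, to cut the source's own buffering and to make sure there is a keyframe for every 1-second segment):
gst-launch-1.0 rtspsrc location=rtsp://admin:#192.168.1.27:554/ch0_0.265 latency=0 ! decodebin ! videoconvert ! video/x-raw ! x264enc tune=zerolatency key-int-max=30 ! mpegtsmux ! hlssink location=/somewhere/segment_%05d.ts playlist-location=/somewhere/playlist.m3u8 target-duration=1 max-files=5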
I'm writing a Qt 5.15 application that should play an RTP / MPEG-TS / H.264 video on Linux Ubuntu 20.04 (Focal Fossa).
I'm running GStreamer 1.16.3.
Since I'm new to GStreamer, I built everything step by step starting from the official tutorials... at this moment I'm able to play an RTP / H.264 stream almost in real time.
Now the last step (adding MPEG-TS support) seems to be the hardest.
My test source is an H.264 MP4 (QuickTime) file, which I stream over the network through gst-launch.
The working RTP / H.264 output pipeline is the following shell command:
gst-launch-1.0 filesrc location=file.mp4 ! qtdemux ! h264parse ! avdec_h264 ! x264enc tune=zerolatency ! rtph264pay ! udpsink host=127.0.0.1 port=5000;
To test the input pipeline without messing up the Qt/C++ code, I use another shell command like this:
gst-launch-1.0 -v udpsrc port=5000 ! "application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264, payload=(int)96" ! rtph264depay ! h264parse ! avdec_h264 ! videoconvert ! autovideosink;
AFAIK, if the shell input pipeline works, it will work in my C++ code (of course the elements after avdec_h264 depend on my programming/running environment, but if someone needs it I can share it without problem).
To add mpegts support, I tried with these lines (the last of a long sequence of trials):
OUTPUT:
gst-launch-1.0 filesrc location=file.mp4 ! qtdemux ! h264parse ! avdec_h264 ! x264enc tune=zerolatency ! mpegtsmux ! rtpmp2tpay ! udpsink host=127.0.0.1 port=5000;
INPUT:
gst-launch-1.0 -v udpsrc port=5000 caps="application/x-rtp" ! rtpmp2tdepay ! tsparse ! tsdemux ! h264parse ! avdec_h264 ! videoconvert ! autovideosink;
It works, but the video seems to stumble/bounce while playing.
What am I missing?
As a side question, I would like to avoid re-encoding the source video before sending it through RTP, i.e. remove these elements from the output pipeline:
avdec_h264 ! x264enc tune=zerolatency
I tried, but the result goes from nothing at all to this, depending on whether I add the config-interval=-1 parameter to h264parse.
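For reference, the output pipeline with the re-encode removed (the variant I tried) looks roughly like this:
gst-launch-1.0 filesrc location=file.mp4 ! qtdemux ! h264parse config-interval=-1 ! mpegtsmux ! rtpmp2tpay ! udpsink host=127.0.0.1 port=5000;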
Please note that I would like to keep the latency as low as possible.
--- UPDATE ---
I tried putting a queue element between rtpmp2tdepay and tsparse, and this makes the video play fluidly, but the latency grows to several seconds, whereas when playing plain RTP / H.264 it's nearly real-time.
Since MPEG-TS is only a transport format, why should it add more delay than the actual encoding?
Is there a way to shorten this delay? It doesn't matter if it changes the whole pipeline, as long as the protocols and the encoding are kept the same.
BTW, I tried tuning the queue's max-size-buffers property, but values under 150 cause playback to stumble.
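For reference, the input pipeline with the queue added looks roughly like this (150 being the lowest value that plays smoothly for me):
gst-launch-1.0 -v udpsrc port=5000 caps="application/x-rtp" ! rtpmp2tdepay ! queue max-size-buffers=150 ! tsparse ! tsdemux ! h264parse ! avdec_h264 ! videoconvert ! autovideosink;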
--- UPDATE ---
If I use VLC to create the output stream using the same file, things get even worse:
:sout=#rtp{dst=127.0.0.1,port=5000,mux=ts} :no-sout-all :sout-keep
It is the same stumbling and scrambled video, with no way to fix it.
I found a partial fix to the latency problem and compatibility with VLC:
! autovideosink sync=false
Disabling clock synchronisation on the sink shortens the delay, and the VLC output stream is now received correctly by GStreamer as well.
This also makes the queue element unnecessary (probably not in the general case, though), and AFAIK tsparse is redundant too.
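Putting it together, the receiving pipeline I ended up with looks roughly like this (queue and tsparse dropped):
gst-launch-1.0 -v udpsrc port=5000 caps="application/x-rtp" ! rtpmp2tdepay ! tsdemux ! h264parse ! avdec_h264 ! videoconvert ! autovideosink sync=false;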
Anyway, I still need to understand why I need to re-encode H.264 video (in the output pipeline).
I'm trying to forward an RTSP stream through WebRTC using GStreamer. I keep getting a massive amount of warnings about latency:
Can't determine running time for this packet without knowing configured latency
The pipeline is:
rtspsrc location=my_rtsp_url is-live=true !
queue !
decodebin !
videoconvert !
openh264enc !
video/x-h264,profile=constrained-baseline !
rtph264pay aggregate-mode=zero-latency !
webrtcbin turn-server=turn://test:test#localhost:3478 bundle-policy=max-bundle name=webrtcbin
I can't seem to figure out what I need to set to get rid of these messages. I tried looking through the source code for GStreamer. As best I can tell, the rtp session (gstrtpsession.c) is finding that send_latency is GST_CLOCK_TIME_NONE.
Is there something I can add to my pipeline to fix this?
I'm developing an app receiving an H.264 video stream from an RTSP camera and displaying and storing it to MP4 without transcoding. For the purpose of my current test, I record for 5 sec only.
My problem is that the MP4 is not playable. The resulting file varies in size from one run of the app to another, which shows that something is very wrong (unexpected, since the recording time is fixed).
Here are my pipelines:
rtspsrc location = rtsp://192.168.0.61:8554/quality_h264 latency=0 ! rtph264depay ! h264parse ! video/x-h264,stream-format=avc ! queue ! interpipesink name=cam1
interpipesrc allow-renegotiation=true name=src listen-to=cam1 is-live=true ! h264parse ! queue ! decodebin ! autovideoconvert ! d3dvideosink sync=false
interpipesrc allow-renegotiation=true name=src listen-to=cam1 is-live=true ! h264parse ! queue ! mp4mux ! filesink location=test.mp4
In a next step I will add more cameras and will need to be able to change which camera gets recorded to MP4 on the fly, as well as pause/resume the recording. For this reason, I've opted to use interpipesink/interpipesrc, a set of GStreamer elements that allow communication between two independent pipelines: https://github.com/RidgeRun/gst-interpipe
A thread waits for 10 sec, then sends EOS on the 3rd pipeline (recording). Then, when the bus receives GST_MESSAGE_EOS, it sets the pipeline state to NULL. I have checked with a pad probe that the EOS event is indeed received on the sink pad of the filesink.
I send EOS using this code: gst_element_send_event(m_pipeline, gst_event_new_eos()); m_pipeline is the 3rd pipeline.
Those exact pipelines produce a playable MP4 when run with gst-launch adding -e at the end.
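For example, the recording part works from the shell with something like this (assuming the capture pipeline feeding cam1 is already running):
gst-launch-1.0 -e interpipesrc allow-renegotiation=true name=src listen-to=cam1 is-live=true ! h264parse ! queue ! mp4mux ! filesink location=test.mp4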
If I replace mp4mux with matroskamux in my app, the MKV is playable and has the expected size. However, there's something wrong with the timestamps, as the player shows it starting at time 10 sec instead of 0. Do I need to edit the timestamps before passing the buffers to the mux (mp4mux or matroskamux)?
It looks to me as if the MP4 is not fully written, but I can't see what else I can do apart from sending EOS.
I'm open to suggestions to restructure the app, in case the use of the interpipe elements is causing a problem (although I can't see why at the moment).
I'm using Gstreamer 1.18.2 on Windows 10 (x64).
I have an existing program that uses gst-launch-1.0 and passes it this:
-e udpsrc port=3003 buffer-size=200000 ! h264parse ! queue ! mux.video_0 alsasrc device=plughw:1,0 ! "audio/x-raw,channels=1,depth=16,width=16,rate=44100" ! voaacenc bitrate=128000 ! aacparse ! queue ! mux.audio_0 qtmux name=mux ! filesink location="$RECPATH/record-`date +%Y%m%d%-H%M%S`.mp4" sync=true
This takes the video from a UDP source (already H.264-encoded) and the audio directly from the microphone. It works, but since the video and the audio are not encoded at the same time, I get a bit of delay on the audio when the video stream has latency (due to higher quality settings).
So as a quick-fix I was thinking about adding a delay on the audio recording to compensate. I would calculate that delay by hand depending on the video quality.
Constraint: gst-launch-1.0 version 1.10.4 (on a Raspberry Pi, Debian Stretch). use-driver-timestamps doesn't seem to be available; I get the error 'WARNING: erroneous pipeline: no property "use-driver-timestamps" in element "alsasrc0"'.
So my question is: is there an easy way to add delay to the audio?
The queue element has the min-threshold-time property, which lets you hold on to data for a given amount of time.
https://gstreamer.freedesktop.org/documentation/coreelements/queue.html?gi-language=c#queue:min-threshold-time
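For example, holding the audio back by half a second could look roughly like this in your audio branch (500000000 is in nanoseconds; replace it with the delay you calculated):
alsasrc device=plughw:1,0 ! "audio/x-raw,channels=1,depth=16,width=16,rate=44100" ! voaacenc bitrate=128000 ! aacparse ! queue min-threshold-time=500000000 ! mux.audio_0
Note that a queue only holds as much data as its max-size-* limits allow, so for larger delays you may also need to raise max-size-time (and possibly max-size-buffers / max-size-bytes).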
Alternatively, I found this too, which might be useful for your case: Gstreamer video streaming with delay
Try ! autoaudiosink ts-offset=100000000
ts-offset is documented in the GstBaseSink reference.
You can also experiment with pipelines that use latency compensation:
https://gstreamer.freedesktop.org/documentation/additional/design/latency.html#latency-compensation
How do I combine unrelated audio with a generated video stream in a way that keeps them in sync in GStreamer?
Context:
I want to stream audio from Icecast into a Kinesis Video stream, and then view it with Amazon's player. The player only works if there is video as well as audio, so I generate the video with videotestsrc.
The video and audio need to be in sync in terms of timestamps, or the Kinesis sink 'kvssink' throws an error. But because they are two separate sources, they are not in sync.
I am using gst-launch-1.0 to run my pipeline.
My basic attempt was like this:
gst-launch-1.0 -v \
videotestsrc pattern=red ! video/x-raw,framerate=25/1 ! videoconvert ! x264enc ! h264parse ! video/x-h264,stream-format=avc,alignment=au ! \
queue ! kvssink name=sink stream-name="NAME" access-key="KEY" secret-key="S_KEY" \
uridecodebin uri=http://ice-the.musicradio.com/LBCLondon ! audioconvert ! voaacenc ! aacparse ! queue ! sink.
The error message I get translates to:
STATUS_MAX_FRAME_TIMESTAMP_DELTA_BETWEEN_TRACKS_EXCEEDED
This indicates that the audio and video timestamps are too different, so I want to force them to match, maybe by throwing away the video timestamps?
There are different meanings of "sync". Let's ignore lip sync for a moment (where audio and video match each other).
There is sync in terms of timestamps, i.e. whether the two streams carry similar timestamps in their representation, and sync in terms of when, in real time, the samples with those timestamps actually arrive at the sink (latency).
It's hard to tell from the error which one the sink is complaining about.
Maybe try x264enc tune=zerolatency for a start; without that option the encoder introduces about two seconds of latency, which may cause issues for certain requirements.
Then again, the audio stream will have some latency too, and it may not be easy to tune the two to match. The sink should actually do the buffering and synchronisation.
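As a rough sketch, this is the pipeline from the question with only that encoder tuning changed (untested):
gst-launch-1.0 -v \
videotestsrc pattern=red ! video/x-raw,framerate=25/1 ! videoconvert ! x264enc tune=zerolatency ! h264parse ! video/x-h264,stream-format=avc,alignment=au ! \
queue ! kvssink name=sink stream-name="NAME" access-key="KEY" secret-key="S_KEY" \
uridecodebin uri=http://ice-the.musicradio.com/LBCLondon ! audioconvert ! voaacenc ! aacparse ! queue ! sink.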