I am currently playing around with an Audio-over-IP project and wondered if you could help me out.
I have a LAN with an audio source (a Dante/AES67 RTP stream) that I would like to distribute to multiple receivers (SBCs, e.g. a Raspberry Pi, each with an audio output such as the headphone jack):
PC-->Audio-USB-Dongle-->AES67/RTP-Multicast-Stream-->LAN-Network-Switch-->RPI (Gstreamer --> AudioJack)
I currently use GStreamer for the pipeline:
gst-launch-1.0 -v udpsrc uri=udp://239.69.xxx.xx:5004 caps="application/x-rtp,channels=(int)2,format=(string)S16LE,media=(string)audio,payload=(int)96,clock-rate=(int)48000,encoding-name=(string)L24" ! rtpL24depay ! audioconvert ! alsasink device=hw:0,0
It all works fine, but if I watch a video on the PC and listen to the audio from the RPi, I get some latency (~200-300 ms), hence my questions:
Am I missing something in my GStreamer pipeline that would let me reduce the latency?
What is the minimum latency to be expected with RTP streams? Is <50 ms achievable?
Is the latency caused by the network or by the speed of the RPi?
Since my audio input is not a GStreamer source, I assume rtpjitterbuffer or similar would not help to decrease the latency?
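For reference, a variant of the pipeline above with the latency-related knobs made explicit: an rtpjitterbuffer with a small latency plus reduced ALSA buffer sizes. The 20 ms jitterbuffer latency and the 20000 µs / 5000 µs buffer-time/latency-time values are placeholders to experiment with, not tested settings:
gst-launch-1.0 -v udpsrc uri=udp://239.69.xxx.xx:5004 caps="application/x-rtp,channels=(int)2,format=(string)S16LE,media=(string)audio,payload=(int)96,clock-rate=(int)48000,encoding-name=(string)L24" ! rtpjitterbuffer latency=20 ! rtpL24depay ! audioconvert ! alsasink device=hw:0,0 buffer-time=20000 latency-time=5000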
I'm trying to forward an RTSP stream through WebRTC using GStreamer. I keep getting a massive amount of warnings about latency:
Can't determine running time for this packet without knowing configured latency
The pipeline is:
rtspsrc location=my_rtsp_url is-live=true !
queue !
decodebin !
videoconvert !
openh264enc !
video/x-h264,profile=constrained-baseline !
rtph264pay aggregate-mode=zero-latency !
webrtcbin turn-server=turn://test:test@localhost:3478 bundle-policy=max-bundle name=webrtcbin
I can't seem to figure out what I need to set to get rid of these messages. I tried looking through the GStreamer source code; as best I can tell, the RTP session (gstrtpsession.c) is finding that send_latency is GST_CLOCK_TIME_NONE.
Is there something I can add to my pipeline to fix this?
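For reference, a variant of the same pipeline with the latency-related properties spelled out; the 200 ms values on rtspsrc and webrtcbin are placeholders to experiment with, not a confirmed fix for the warning (and webrtcbin's latency property only exists in newer GStreamer releases):
rtspsrc location=my_rtsp_url is-live=true latency=200 ! queue ! decodebin ! videoconvert ! openh264enc ! video/x-h264,profile=constrained-baseline ! rtph264pay aggregate-mode=zero-latency ! webrtcbin latency=200 turn-server=turn://test:test@localhost:3478 bundle-policy=max-bundle name=webrtcbin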
I have two GStreamer pipelines: one is like a "source" pipeline streaming a live camera feed into an external channel, and the second is like a "sink" pipeline that reads from the other end of that channel and outputs the live video to some form of sink.
[videotestsrc] -> [appsink] ----- Serial Channel ------> [appsrc] -> [autovideosink]
        (First Pipeline)                                      (Second Pipeline)
The first pipeline starts from a videotestsrc, encodes the video, wraps it in a GDP payload (gdppay), and then sends it into a serial channel (but for the sake of the question, any sink that can be read from to start another pipeline, like a filesink writing to a serial port or a udpsink), where it is read by the source of the next pipeline and shown via an autovideosink:
"Source" Pipeline
gst-launch-1.0 -v videotestsrc ! videoconvert ! video/x-raw,format=I420 ! x265enc ! gdppay ! udpsink host=127.0.0.1 port=5004
"Sink" pipeline
gst-launch-1.0 -v udpsrc uri=udp://127.0.0.1:5004 ! gdpdepay ! h265parse ! avdec_h265 ! autovideosink
Note: Given the latency induced by udpsink/udpsrc, this pipeline complains about timestamp issues. If you replace the udpsrc/udpsink with a filesrc/filesink on a serial port, you can see the problem I am about to describe.
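For concreteness, the serial-port variant described in that note would look something like this, with /dev/ttyUSB0 as a placeholder device path:
"Source": gst-launch-1.0 -v videotestsrc ! videoconvert ! video/x-raw,format=I420 ! x265enc ! gdppay ! filesink location=/dev/ttyUSB0
"Sink": gst-launch-1.0 -v filesrc location=/dev/ttyUSB0 ! gdpdepay ! h265parse ! avdec_h265 ! autovideosink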
Problem:
Now that I have described the pipelines, here is the problem:
If I start both pipelines, everything works as expected. However, if after 30 s I stop the "source" pipeline and restart it, its running time gets reset back to zero, so the timestamps of all newly sent buffers are considered old by the sink pipeline, because it has already received buffers for timestamps 0 through 30 s; playback on the other end therefore won't resume until after 30 s:
Source Pipeline: [28][29][30][0 ][1 ][2 ][3 ]...[29][30][31]
Sink Pipeline:   [28][29][30][30][30][30][30]...[30][30][31]
                             ^-- source pipeline restarted here
From that point on, the sink pipeline continues to show only the "frame" received at 30 s until a "newer" frame is sent, when in reality each sent frame is newer and should be shown immediately.
Solution
I have found that adding sync=false to the autovideosink does solve the problem; however, I was hoping to find a solution where the source would send its timestamps (DTS and PTS) based on the clock time, as seen in the image on that page.
I have seen this post and experimented with is-live and do-timestamp on my video source, but they do not seem to do what I want. I also tried manually setting the timestamps (DTS and PTS) in the buffers based on the system time, but to no avail.
Any suggestions?
I think you should just restart the receiver pipeline as well. You could add the -e switch to the sender pipeline, and when you stop the pipeline it should correctly propagate EOS via the GDP element to the receiver pipeline. Otherwise, I guess you can send a new segment or a discontinuity to the receiver. Some event has to be signalled, though, to make the pipeline aware of that change; otherwise it is somewhat bogus data. I'd say restarting the receiver is the simplest way.
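For example, the sender pipeline from the question with only the -e switch added (whether the GDP elements forward the EOS cleanly in your setup is something to verify):
gst-launch-1.0 -e -v videotestsrc ! videoconvert ! video/x-raw,format=I420 ! x265enc ! gdppay ! udpsink host=127.0.0.1 port=5004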
I have an existing program that uses gst-launch-1.0 and passes it this:
-e udpsrc port=3003 buffer-size=200000 ! h264parse ! queue ! mux.video_0 alsasrc device=plughw:1,0 ! "audio/x-raw,channels=1,depth=16,width=16,rate=44100" ! voaacenc bitrate=128000 ! aacparse ! queue ! mux.audio_0 qtmux name=mux ! filesink location="$RECPATH/record-`date +%Y%m%d%-H%M%S`.mp4" sync=true
This takes the video from a UDP source (already H.264-encoded) and the audio directly from the microphone. It works, but since the video and the audio are not encoded at the same time, I get a bit of delay on the audio when the video stream has latency (due to higher quality settings).
So, as a quick fix, I was thinking about adding a delay on the audio recording to compensate. I would calculate that delay by hand depending on the video quality.
Constraint: gst-launch-1.0 version 1.10.4 (on a Raspberry Pi, Debian Stretch). use-driver-timestamps doesn't seem to be accessible; I get the error 'WARNING: erroneous pipeline: no property "use-driver-timestamps" in element "alsasrc0"'.
So my question is: is there an easy way to add delay to the audio?
The queue element has the min-threshold-time property, which lets you hold on to data for a given amount of time, as sketched below.
https://gstreamer.freedesktop.org/documentation/coreelements/queue.html?gi-language=c#queue:min-threshold-time
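As a sketch, here is the audio branch of your pipeline with such a queue holding the data back; the 200000000 ns (200 ms) value is just an example delay to tune by hand:
alsasrc device=plughw:1,0 ! "audio/x-raw,channels=1,depth=16,width=16,rate=44100" ! queue min-threshold-time=200000000 ! voaacenc bitrate=128000 ! aacparse ! queue ! mux.audio_0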
Alternatively, I found this too, which might be useful for your case: Gstreamer video streaming with delay
Try ! autoaudiosink ts-offset=100000000
ts-offset is documented here.
You can also experiment with latency compensation in your pipelines:
https://gstreamer.freedesktop.org/documentation/additional/design/latency.html#latency-compensation
I want to read the feed from a webcam and host an RTSP stream without encoding the feed. I have access to a high-bandwidth network, but the CPUs are very low end and have other tasks to fulfill, which is why I want to skip the encoding/decoding steps to save CPU usage. Before jumping to RTSP, I tried a simple MJPEG stream and tried to skip the jpegenc (JPEG encoding) step, since it can be done directly with a simple gst pipeline:
gst-launch-1.0 -v autovideosrc ! videoconvert ! videoscale ! video/x-raw,format=I420,width=800,height=600,framerate=25/1 ! rtpjpegpay ! udpsink host=10.0.1.10 port=5000
However, I got a warning:
WARNING: erroneous pipeline: could not link videoscale0 to
rtpjpegpay0, rtpjpegpay0 can't handle caps video/x-raw,
format=(string)I420, width=(int)800, height=(int)600,
framerate=(fraction)25/1
I'm new to GStreamer and not sure whether this is possible or how to move forward. The same command above works if I include the JPEG encoding. Any suggestions would be appreciated.
rtpjpegpay is an element that takes in a Motion JPEG stream and translates it to RTP. The input you're giving it, however, is video/x-raw, which means it is unencoded rather than encoded with Motion JPEG. If you want to use this element, you'll first have to encode the video to Motion JPEG, using something like jpegenc.
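For example, the pipeline from the question with jpegenc inserted before the payloader (this should be essentially the working command the question already mentions):
gst-launch-1.0 -v autovideosrc ! videoconvert ! videoscale ! video/x-raw,format=I420,width=800,height=600,framerate=25/1 ! jpegenc ! rtpjpegpay ! udpsink host=10.0.1.10 port=5000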
Like @vermaete already mentions: if you really, really don't want to encode your video, you can use something like rtpvrawpay, which will translate your raw video into RTP packets. However, sending raw, unencoded video over the network is not really advisable (and not even workable if you have a bad connection, or even impossible if you plan on sending it over the Internet). You might also end up using a lot of resources on your CPU just to get everything payloaded properly and sent to your network card.
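If you do want to try the raw route, a sketch with rtpvrawpay and the same caps as in your pipeline (note that raw I420 at 800x600 and 25 fps is roughly 144 Mbit/s before RTP overhead):
gst-launch-1.0 -v autovideosrc ! videoconvert ! videoscale ! video/x-raw,format=I420,width=800,height=600,framerate=25/1 ! rtpvrawpay ! udpsink host=10.0.1.10 port=5000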
I am running a 9-second video on my NVIDIA Tegra Jetson TK1 board using GStreamer as follows:
gst-launch-0.10 playbin uri=file:///home/ubuntu/widescreen.avi
I notice this drops a lot of frames, and GStreamer prints these messages:
WARNING: from element /GstPlayBin:playbin0/GstBin:vbin/GstAutoVideoSink:videosink/GstXvImageSink:videosink-actual-sink-xvimage: A lot of buffers are being dropped.
Additional debug info:
gstbasesink.c(2875): gst_base_sink_is_too_late (): /GstPlayBin:playbin0/GstBin:vbin/GstAutoVideoSink:videosink/GstXvImageSink:videosink-actual-sink-xvimage:
There may be a timestamping problem, or this computer is too slow.
I ran top while executing this, and indeed GStreamer is taking 95% of the CPU.
Now, when I play this video through the default media player, it plays completely fine and without any lag. I was wondering if anyone knows why GStreamer is unable to play it properly. I am new to GStreamer and wondering if I can do something to alleviate this.
Bins are many elements combined into one element-like structure (i.e. with an appropriate number of sinks and sources). playbin is a standard bin with an "autovideosink" element, which automatically detects video sinks.
Firstly, I would suggest you upgrade to gstreamer-1.0, in which many bugs were fixed.
Secondly, the problem seems to be with your xvimagesink, so try using ximagesink by specifying it explicitly; your autovideosink is selecting xvimagesink by default.
Try this pipeline (for hardware decoding):
gst-launch-1.0 filesrc location="location of h264 video/file.avi" ! avidemux ! h264parse ! omxh264dec ! videoconvert ! ximagesink
Or, for CPU decoding, use avdec_h264 instead of omxh264dec.
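Spelled out, the CPU-decoding variant would be:
gst-launch-1.0 filesrc location="location of h264 video/file.avi" ! avidemux ! h264parse ! avdec_h264 ! videoconvert ! ximagesink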
The default media player is likely using hardware video decoding if the media file is H.264, which allows it to decode with fewer CPU resources. Try creating a GStreamer pipeline that explicitly uses the OMX H.264 decoding element, as discussed here (http://elinux.org/Jetson/H264_Codec).
e.g. gst-launch-0.10 filesrc location=/home/ubuntu/widescreen.avi ! avidemux ! h264parse ! nv_omx_h264dec ! autovideosink