I have the following pipeline. One branch is used to display video; the second uploads frames to an HTTP server once per second.
gst-launch-1.0 -e \
filesrc location=test.mp4 ! queue ! qtdemux name=d d.video_0 ! h264parse ! avdec_h264 ! tee name=t \
t. ! queue ! videoscale ! 'video/x-raw,width=(int)640,height=(int)480' ! autovideosink \
t. ! queue ! videorate ! 'video/x-raw,framerate=1/1' ! jpegenc ! curlhttpsink \
location=http://192.168.100.150:8080/upload_picture \
user=admin passwd=test \
content-type=image/jpeg \
use-content-length=false
The problem occurs when the server is unreachable or does not process uploads fast enough. In that case, video playback stops for as long as it takes the upload branch to catch up. I would expect tee in combination with queue to let the video branch keep playing from its queued buffers while the queue in the upload branch fills up.
Is such out-of-sync behavior possible? I tried both the sync and async properties, but without the desired result.
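(One possible workaround, sketched here untested: assuming it is acceptable to drop stale frames on the upload side when the server falls behind, give the upload queue a finite size and make it leaky, so it discards old buffers instead of blocking the tee. The max-size-buffers value is an arbitrary assumption.)
# untested sketch: leaky upload queue so a slow or unreachable server cannot stall playback
gst-launch-1.0 -e \
filesrc location=test.mp4 ! queue ! qtdemux name=d d.video_0 ! h264parse ! avdec_h264 ! tee name=t \
t. ! queue ! videoscale ! 'video/x-raw,width=(int)640,height=(int)480' ! autovideosink \
t. ! queue leaky=downstream max-size-buffers=10 ! videorate ! 'video/x-raw,framerate=1/1' ! jpegenc ! curlhttpsink \
location=http://192.168.100.150:8080/upload_picture \
user=admin passwd=test \
content-type=image/jpeg \
use-content-length=false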
I'm trying to receive an RTP audio stream using GStreamer and forward it to multiple target hosts with different delays.
To insert a delay, I use the queue element with min-threshold-time as suggested here: https://stackoverflow.com/a/17218113/4881938
This works fine so far; however, if I want multiple output streams with different delays (or one with no delay at all), no data is sent (i.e. the pipeline stays paused) until the queue with the longest min-threshold-time is full.
This is not what I want: I want all forwarded streams to start as soon as possible. If I have target1 with no delay and target2 with a 10s delay, target1 should receive data immediately rather than having to wait 10s.
I tried different sink options (sync=false, async=true) and the tee option allow-not-linked=true, to no avail; the pipeline remains paused until the queue with the longest delay has filled.
What am I missing? How do I get GStreamer to activate the branch with no delay immediately? (And, in case I have multiple different delays: how do I activate each delayed branch as soon as its buffer is full, not only after the longest buffer is filled?)
This is the complete test command I used:
% gst-launch-1.0 \
udpsrc port=10212 caps='application/x-rtp,media=(string)audio,clock-rate=(int)48000,encoding-name=(string)OPUS' ! \
tee name=t1 allow-not-linked=true \
t1. ! queue name=q1 leaky=downstream max-size-buffers=0 max-size-time=0 max-size-bytes=0 min-threshold-time=0 q1. ! \
udpsink host=target1 port=10214 sync=false async=true \
t1. ! queue name=q2 leaky=downstream max-size-buffers=0 max-size-time=0 max-size-bytes=0 min-threshold-time=5000000000 q2. ! \
udpsink host=target2 port=10215 sync=false async=true
version: GStreamer 1.18.4
Thanks everyone for even reading this far! :)
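(An alternative worth sketching, untested: instead of delaying inside the queue with min-threshold-time, which holds back preroll, let every branch preroll immediately and delay rendering at the sink with sync=true plus the GstBaseSink ts-offset property. This assumes the buffers carry usable timestamps, hence do-timestamp=true on udpsrc; the queue on the delayed branch is left unbounded so it can hold the 10s backlog.)
# untested sketch: per-branch delay via ts-offset on the sink instead of min-threshold-time
gst-launch-1.0 \
udpsrc port=10212 do-timestamp=true caps='application/x-rtp,media=(string)audio,clock-rate=(int)48000,encoding-name=(string)OPUS' ! \
tee name=t1 \
t1. ! queue ! udpsink host=target1 port=10214 sync=true \
t1. ! queue max-size-buffers=0 max-size-time=0 max-size-bytes=0 ! \
udpsink host=target2 port=10215 sync=true ts-offset=10000000000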
Following @SeB's comment, I tried out interpipes:
Thank you very much for your input. I tried it out, and it seems the problem is similar. If I omit the queue elements or don't set min-threshold-time to more than 0, it works; but as soon as I add any delay to one or more of the queue elements, the whole pipeline does nothing and the time counter never moves past 0:00:00.0.
I tried different combinations of the interpipe sink/source options forward-/accept-events and forward-/accept-eos, but it didn't change anything.
What am I doing wrong? As I understand interpipe, it should decouple the sink/source elements from each other, so that one stalling pipeline doesn't affect the rest(?)
Command and output:
% gst-launch-1.0 \
udpsrc port=10212 caps='application/x-rtp,media=(string)audio,clock-rate=(int)48000,encoding-name=(string)OPUS' ! \
interpipesink name=t1 \
interpipesrc listen-to="t1" ! queue leaky=downstream max-size-buffers=0 max-size-time=0 max-size-bytes=0 min-threshold-time=5000000000 ! \
udpsink host=targethost1 port=10214 async=true sync=false \
interpipesrc listen-to="t1" ! queue leaky=downstream max-size-buffers=0 max-size-time=0 max-size-bytes=0 min-threshold-time=10000000000 ! \
udpsink host=targethost2 port=10215 async=true sync=false
Setting pipeline to PAUSED ...
Pipeline is live and does not need PREROLL ...
Pipeline is PREROLLED ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock
0:00:00.0 / 99:99:99..
I also tried shmsink/shmsrc, but this also fails in a similar way -- as soon as I add a delay to one of the pipelines with the shmsrc, it remains stuck in the PREROLLING state:
shmsink:
% gst-launch-1.0 \
udpsrc port=10212 caps='application/x-rtp,media=(string)audio,clock-rate=(int)48000,encoding-name=(string)OPUS' ! \
queue ! shmsink socket-path=/tmp/blah shm-size=20000000 wait-for-connection=false
shmsrc (without is-live):
% gst-launch-1.0 \
shmsrc socket-path=/tmp/blah ! 'application/x-rtp,media=(string)audio,clock-rate=(int)48000,encoding-name=(string)OPUS' ! \
queue leaky=downstream max-size-buffers=0 max-size-time=0 max-size-bytes=0 min-threshold-time=5000000000 ! \
udpsink host=targethost port=10215 async=true sync=false
Setting pipeline to PAUSED ...
Pipeline is PREROLLING ...
shmsrc (with is-live):
% gst-launch-1.0 \
shmsrc is-live=true socket-path=/tmp/blah ! 'application/x-rtp,media=(string)audio,clock-rate=(int)48000,encoding-name=(string)OPUS' ! \
queue leaky=downstream max-size-buffers=0 max-size-time=0 max-size-bytes=0 min-threshold-time=50 ! \
udpsink host=targethost port=10215 async=true sync=false
Setting pipeline to PAUSED ...
Pipeline is live and does not need PREROLL ...
Pipeline is PREROLLED ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock
Depending on whether is-live is set on the source, the behavior differs, but in both cases no data is actually sent. Without min-threshold-time on the queue element, both shmsrc commands work.
I have a GStreamer pipeline that grabs an MJPEG webcam stream from 3 separate cameras and saves 2 frames from each. I'm executing these commands on the USB 3.0 bus of an Odroid XU4 with Ubuntu 18.04. When doing this, I discovered I would occasionally get a garbled image like this in the collection of output images:
It didn't always happen; maybe one in five executions produced an image like that.
I then discovered that if I decode the JPEG and then re-encode it, this never happens. See the pipeline below:
gst-launch-1.0 \
v4l2src device=/dev/video0 num-buffers=2 ! image/jpeg,width=3840,height=2160 ! \
queue ! jpegdec ! queue ! jpegenc ! multifilesink location=/root/cam0_%02d.jpg \
v4l2src device=/dev/video1 num-buffers=2 ! image/jpeg,width=3840,height=2160 ! \
queue ! jpegdec ! queue ! jpegenc ! multifilesink location=/root/cam1_%02d.jpg \
v4l2src device=/dev/video2 num-buffers=2 ! image/jpeg,width=3840,height=2160 ! \
queue ! jpegdec ! queue ! jpegenc ! multifilesink location=/root/cam2_%02d.jpg
Now, when I run this pipeline, I have about a 1-in-5 chance of getting this error:
/GstPipeline:pipeline0/GstJpegDec:jpegdec0: No valid frames decoded before end of stream
Is there a way to make GStreamer wait for the frames instead of simply failing? I attempted this by adding the ! queue ! elements in the pipeline above, without success.
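(A sketch of one more thing to try, untested and assuming the garbling comes from truncated MJPEG frames arriving from the camera: put jpegparse in front of the decoder so jpegdec only ever sees complete JPEG images. Shown for one camera only.)
# untested sketch: jpegparse so only complete JPEG frames reach jpegdec
gst-launch-1.0 \
v4l2src device=/dev/video0 num-buffers=2 ! image/jpeg,width=3840,height=2160 ! \
jpegparse ! queue ! jpegdec ! queue ! jpegenc ! multifilesink location=/root/cam0_%02d.jpg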
gst-launch-1.0 -v -e \
videotestsrc ! tee name=t0 \
t0. ! queue ! x264enc ! matroskamux ! filesink location="test.mkv" \
t0. ! queue ! queue ! autovideosink
This works, with both the file and the on-screen display running.
gst-launch-1.0 -v -e \
videotestsrc ! tee name=t0 \
t0. ! queue ! x264enc ! matroskamux ! filesink location="test.mkv" \
t0. ! queue ! autovideosink
Does not work.
Here's another set of examples.
gst-launch-1.0 -v -e \
videotestsrc ! tee name=t0 \
t0. ! queue ! autovideosink \
t0. ! queue ! autovideosink
Works.
gst-launch-1.0 -v -e \
videotestsrc ! tee name=t0 \
t0. ! queue ! autovideosink \
t0. ! autovideosink
It doesn't. Why not? Why do both outputs from the tee need queues? At worst, I'd expect one autovideosink to work and the other to be blank, but instead one displays a single frame and the other stays black.
But the following DOES work. What's going on?
gst-launch-1.0 -v -e \
videotestsrc ! tee name=t0 \
t0. ! queue ! autovideosink \
t0. ! queue ! autovideosink \
t0. ! autovideosink
Why does adding a third output negate the need for a queue on all of them?
gst-launch-1.0 --version
gst-launch-1.0 version 1.12.4
GStreamer 1.12.4
https://packages.gentoo.org/package/media-libs/gstreamer
Does anyone know why queue behaves like this?
This is the pipeline I'm actually trying to build; the examples above are just minimal reproductions.
(Note: the weird caps on each v4l2src branch are there to make sure my Logitech C920 outputs H.264 instead of raw video, and my Logitech BRIO outputs JPEG video at 1080p instead of raw at 720p. This has been tested, and works much better than simply using decodebin.)
gst-launch-1.0 -e \
v4l2src device=/dev/video0 ! 'video/x-h264;image/jpeg;video/x-raw' ! decodebin ! 'video/x-raw' ! tee name=t0 \
v4l2src device=/dev/video1 ! 'video/x-h264;image/jpeg;video/x-raw' ! decodebin ! 'video/x-raw' ! tee name=t1 \
v4l2src device=/dev/video2 ! 'video/x-h264;image/jpeg;video/x-raw' ! decodebin ! 'video/x-raw' ! tee name=t2 \
v4l2src device=/dev/video3 ! 'video/x-h264;image/jpeg;video/x-raw' ! decodebin ! 'video/x-raw' ! tee name=t3 \
matroskamux name=mux \
t0. ! queue ! autovideoconvert ! x264enc ! mux. \
t1. ! queue ! autovideoconvert ! x264enc ! mux. \
t2. ! queue ! autovideoconvert ! x264enc ! mux. \
t3. ! queue ! autovideoconvert ! x264enc ! mux. \
mux. ! queue ! filesink location="test.mkv" \
videomixer name=mix \
sink_0::zorder=1 sink_0::alpha=1.0 sink_0::ypos=0 sink_0::xpos=0 \
sink_1::zorder=1 sink_1::alpha=1.0 sink_1::ypos=0 sink_1::xpos=960 \
sink_2::zorder=1 sink_2::alpha=1.0 sink_2::ypos=540 sink_2::xpos=0 \
sink_3::zorder=1 sink_3::alpha=1.0 sink_3::ypos=540 sink_3::xpos=960 \
t0. ! queue ! autovideoconvert ! video/x-raw, width=960, height=540 ! mix.sink_0 \
t1. ! queue ! autovideoconvert ! video/x-raw, width=960, height=540 ! mix.sink_1 \
t2. ! queue ! autovideoconvert ! video/x-raw, width=960, height=540 ! mix.sink_2 \
t3. ! queue ! autovideoconvert ! video/x-raw, width=960, height=540 ! mix.sink_3 \
mix. ! queue ! autovideosink sync=false
This question was solved by adding max-size-bytes=0 max-size-buffers=0 max-size-time=10000000000 to the queue.
For anyone not initiated into GStreamer's low-level internals, this is incredibly counter-intuitive. But if it works, it works, I guess.
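(For concreteness, the stated fix applied to the minified failing example from above; per the answer below, the enlarged queue belongs on the non-encoder branch:)
# the failing two-branch example, with a roomier queue on the display branch
gst-launch-1.0 -v -e \
videotestsrc ! tee name=t0 \
t0. ! queue ! x264enc ! matroskamux ! filesink location="test.mkv" \
t0. ! queue max-size-bytes=0 max-size-buffers=0 max-size-time=10000000000 ! autovideosink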
Read about the concept of PREROLLING in GStreamer:
https://gstreamer.freedesktop.org/documentation/design/preroll.html
A sink element can only complete the state change to PAUSED after a
buffer has been queued on the input pad or pads.
What is not emphasized in the documentation is that the pipeline will only transition from PAUSED to PLAYING after all sinks have PREROLLED.
Also note that a tee is not threaded; it pushes samples downstream sequentially, one branch after the other.
Here is what happens: sink 1 receives a sample but will not start playing, because it waits until all other sinks in the pipeline have received a sample, so that audio/video sync can be respected.
While sink 1 is waiting, it is effectively blocking the tee, preventing it from sending more data - in this case to sink 2. Since no data will ever reach sink 2, you are in a deadlock.
A queue automatically adds a thread boundary to the pipeline path as a side effect - preventing the deadlock.
If you have only one queue, it may actually work, depending on the order in which you connect your sinks to the tee. If the path with the queue is served first, it won't deadlock: the tee can deliver data to the other path and the state change succeeds. (The same goes for the example with three sinks: if all paths except the last have a queue, you may get away with it.)
It is good practice to use queues for all tee outputs.
The x264enc example is especially tricky. The problem you face there is that the encoder consumes a lot of data without (yet) producing anything, effectively stalling the pipeline.
Two ways to fix it:
use tune=zerolatency for the x264enc element
increase the buffer sizes in the queue of the non-encoder path to compensate for the encoder latency.
With queue ! queue you are actually doing option 2, by doubling the buffer capacity.
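(A sketch of option 1 applied to the same failing example; option 2 is the enlarged-queue variant shown after the question above.)
# option 1: zero-latency x264enc, so the encoder produces output immediately
gst-launch-1.0 -v -e \
videotestsrc ! tee name=t0 \
t0. ! queue ! x264enc tune=zerolatency ! matroskamux ! filesink location="test.mkv" \
t0. ! queue ! autovideosink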
I'm trying to write a GStreamer pipeline that captures the screen, overlays the webcam in a corner, and records audio, all at the same time.
If I hit Ctrl+C to stop after ten seconds, for example, I find that I have only recorded about 2 seconds of video (and audio). I don't actually care whether the recording happens in real time; I just want GStreamer to record the full length it should.
This is the pipeline I have so far:
gst-launch-1.0 --gst-debug=3 ximagesrc use-damage=0 \
! video/x-raw,width=1366,height=768,framerate=30/1 ! videoconvert \
! videomixer name=mix sink_0::alpha=1 sink_1::alpha=1 sink_1::xpos=1046 sink_1::ypos=528 \
! videoconvert ! queue max-size-buffers=0 max-size-time=0 max-size-bytes=0 \
! vp8enc ! webmmux name=mux ! filesink location="out.webm" \
pulsesrc ! audioconvert ! queue max-size-buffers=0 max-size-time=0 max-size-bytes=0 ! vorbisenc ! mux. \
v4l2src do-timestamp=true ! video/x-raw,width=320,height=240,framerate=30/1 ! mix.
I hope there is a solution. Thank you.
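(A hedged sketch, assuming the missing seconds come from vp8enc encoding slower than real time at 1366x768/30fps: force the encoder's real-time deadline and give it threads. The deadline=1, cpu-used and threads values are assumptions, untested on this setup.)
# untested sketch: real-time vp8enc settings so encoding keeps up with capture
gst-launch-1.0 --gst-debug=3 ximagesrc use-damage=0 \
! video/x-raw,width=1366,height=768,framerate=30/1 ! videoconvert \
! videomixer name=mix sink_0::alpha=1 sink_1::alpha=1 sink_1::xpos=1046 sink_1::ypos=528 \
! videoconvert ! queue max-size-buffers=0 max-size-time=0 max-size-bytes=0 \
! vp8enc deadline=1 cpu-used=4 threads=4 ! webmmux name=mux ! filesink location="out.webm" \
pulsesrc ! audioconvert ! queue max-size-buffers=0 max-size-time=0 max-size-bytes=0 ! vorbisenc ! mux. \
v4l2src do-timestamp=true ! video/x-raw,width=320,height=240,framerate=30/1 ! mix.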
I am attempting to demux a live recording from a MiniDV camera using the dv1394src element and then transcode it into a Vorbis/Theora Ogg file. My pipeline below stalls after a few seconds. I think I have the queue elements in the right places.
gst-launch -e dv1394src ! dvdemux name=demux \
oggmux name=mux ! queue ! filesink location=/tmp/test.ogg \
demux. ! queue ! audioconvert ! vorbisenc ! queue ! mux. \
demux. ! queue ! dvdec ! ffmpegcolorspace ! theoraenc ! queue ! mux.
If I remove the muxer and give the video and audio paths their own filesink endpoints, it does not stall, but that creates two files I have to mux afterwards. I would rather do it in one pipeline.
You could try using a multiqueue after the demuxer. The multiqueue may be able to balance the amount of queued data between the branches better.
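(A sketch of what that could look like, untested: one multiqueue buffering both demuxed streams. This keeps the question's 0.10 syntax and assumes dvdemux's pad names audio/video and multiqueue's 0.10 pad templates sink0/src0, where sinkN and srcN are paired.)
# untested sketch: multiqueue shared by both branches after the demuxer
gst-launch -e dv1394src ! dvdemux name=demux multiqueue name=mq \
oggmux name=mux ! queue ! filesink location=/tmp/test.ogg \
demux.audio ! mq.sink0 mq.src0 ! audioconvert ! vorbisenc ! queue ! mux. \
demux.video ! mq.sink1 mq.src1 ! dvdec ! ffmpegcolorspace ! theoraenc ! queue ! mux.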