I have a GStreamer pipeline that records three live cameras and basically does the following: capture the 3 cameras in a first thread; do some processing on the 3 streams in 3 separate threads; in parallel, rescale the frames for a compositor (a videomixer adapted for live sources) in 3 other threads; and finally do the composition. The plan for each camera is the following (so x3):
[capture] -> TEE |-> QUEUE -> [someProcessing] -> _
                 |-> QUEUE -> [rescale] -> COMPOSITOR
gst-launch-1.0 \
${capture0} ! tee name='t0' ! queue ! ${someProcessing0} \
${capture1} ! tee name='t1' ! queue ! ${someProcessing1} \
${capture2} ! tee name='t2' ! queue ! ${someProcessing2} \
${someStuff} \
compositor name=compo ${compositorSinkProperties} \
t0. ! queue ! ${rescale0} ! compo.sink_0 \
t1. ! queue ! ${rescale1} ! compo.sink_1 \
t2. ! queue ! ${rescale2} ! compo.sink_2 \
-e
My pipeline works well; I just need to clarify its internal behavior:
I know how to force the use of separate threads with the queue element. However, I do not know what happens when my 3 [rescale] branches are merged into a single element such as compo in my case.
Does GStreamer create the 3 threads as asked?
If yes, in which thread(s) does the compositor run?
If not, do I have only 1 thread for the whole rescaling + compositing process?
Thanks for any info you might share!
Regards
To my knowledge you are correct. You will have separate threads for all the paths downstream of the queues. And I think the aggregator has its own thread too; I don't have proof of that, but perhaps you can find it in the GstAggregator class.
But its aggregate function fires once all sink pads on the aggregator have data.
Taken from the base classes documentation:
aggregate ()
Mandatory. Called when buffers are queued on all sinkpads. Classes should
iterate the GstElement->sinkpads and peek or steal buffers from the
GstAggregatorPads. If the subclass returns GST_FLOW_EOS, sending of the
eos event will be taken care of. Once / if a buffer has been constructed
from the aggregated buffers, the subclass should call _finish_buffer.
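Not from the documentation above, but a hedged way to check this empirically: every GST_DEBUG log line carries the process id and a thread pointer right after the timestamp, so filtering the log on element names shows which streaming thread each buffer push runs in (the queues' threads versus the aggregator's own task). The stand-in pipeline below is only an illustration; videotestsrc and videoscale replace the real capture and rescale branches:
GST_DEBUG=GST_SCHEDULING:5 gst-launch-1.0 \
videotestsrc ! tee name=t \
compositor name=compo ! fakesink \
t. ! queue ! videoscale ! compo.sink_0 \
t. ! queue ! videoscale ! compo.sink_1 \
2>&1 | grep -E 'videoscale|compo|fakesink'
The <compo:sink_*> lines should show the branch threads, and the push into fakesink's sink pad should reveal the thread the compositor itself pushes from (if it indeed has its own).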
I have the following pipeline. One branch is used to display the video, the second one uploads frames every second to an HTTP server.
gst-launch-1.0 -e \
filesrc location=test.mp4 ! queue ! qtdemux name=d d.video_0 ! h264parse ! avdec_h264 ! tee name=t \
t. ! queue ! videoscale ! 'video/x-raw,width=(int)640,height=(int)480' ! autovideosink \
t. ! queue ! videorate ! 'video/x-raw,framerate=1/1' ! jpegenc ! curlhttpsink \
location=http://192.168.100.150:8080/upload_picture \
user=admin passwd=test \
content-type=image/jpeg \
use-content-length=false
The problem occurs when the server is unreachable or does not process uploads fast enough. In that case video playback stops for the time it takes the upload branch to catch up. I would expect tee in combination with queue to let the video keep running from the queued buffers while the queue in the upload branch fills up.
Is such out-of-sync behavior possible? I tried both the sync and async properties, but without the desired result.
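One thing that I believe is worth trying (hedged, I have not verified it against this exact server setup): make the queue in the upload branch leaky and bounded, so that when curlhttpsink blocks, the oldest queued frames are dropped instead of the queue filling up and, in turn, blocking the tee and the display branch. The max-size-buffers value below is an arbitrary illustration:
gst-launch-1.0 -e \
filesrc location=test.mp4 ! queue ! qtdemux name=d d.video_0 ! h264parse ! avdec_h264 ! tee name=t \
t. ! queue ! videoscale ! 'video/x-raw,width=(int)640,height=(int)480' ! autovideosink \
t. ! queue leaky=downstream max-size-buffers=5 ! videorate ! 'video/x-raw,framerate=1/1' ! jpegenc ! curlhttpsink \
location=http://192.168.100.150:8080/upload_picture \
user=admin passwd=test \
content-type=image/jpeg \
use-content-length=false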
I'm trying to receive an RTP audio stream using GStreamer and forward it to multiple target hosts with different delays.
To insert a delay, I use the queue element with min-threshold-time as suggested here: https://stackoverflow.com/a/17218113/4881938
This works fine so far; however, if I want to have multiple output streams with different delays (or one with no delay at all), no data is sent (i.e. the pipeline is paused) until the queue with the longest min-threshold-time is full.
This is not what I want: I want all forwarded streams to start as soon as possible, so if I have target1 with no delay and target2 with a 10 s delay, target1 should receive data immediately and not have to wait 10 s.
I tried different sink options (sync=false, async=true) and the tee option allow-not-linked=true, to no avail; the pipeline remains paused until the longest of the queue delays has passed.
What am I missing? How do I get GStreamer to activate the branch with no delay immediately? (And, in case I have multiple different delays, activate each delayed branch as soon as its buffer is full, not only after the longest buffer is filled?)
This is the complete test command I used:
% gst-launch-1.0 \
udpsrc port=10212 caps='application/x-rtp,media=(string)audio,clock-rate=(int)48000,encoding-name=(string)OPUS' ! \
tee name=t1 allow-not-linked=true \
t1. ! queue name=q1 leaky=downstream max-size-buffers=0 max-size-time=0 max-size-bytes=0 min-threshold-time=0 q1. ! \
udpsink host=target1 port=10214 sync=false async=true \
t1. ! queue name=q2 leaky=downstream max-size-buffers=0 max-size-time=0 max-size-bytes=0 min-threshold-time=5000000000 q2. ! \
udpsink host=target2 port=10215 sync=false async=true
version: GStreamer 1.18.4
Thanks everyone for even reading this far! :)
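Not an answer, but a hedged debugging step that might narrow it down: the queue element has its own queue_dataflow debug category, so running the same command with it enabled shows whether q2 is still accepting buffers from the tee while it waits for its min-threshold, or whether the upstream side is genuinely blocked (the grep pattern just filters the log to the two named queues):
GST_DEBUG=queue_dataflow:5 gst-launch-1.0 \
udpsrc port=10212 caps='application/x-rtp,media=(string)audio,clock-rate=(int)48000,encoding-name=(string)OPUS' ! \
tee name=t1 allow-not-linked=true \
t1. ! queue name=q1 leaky=downstream max-size-buffers=0 max-size-time=0 max-size-bytes=0 min-threshold-time=0 ! \
udpsink host=target1 port=10214 sync=false async=true \
t1. ! queue name=q2 leaky=downstream max-size-buffers=0 max-size-time=0 max-size-bytes=0 min-threshold-time=5000000000 ! \
udpsink host=target2 port=10215 sync=false async=true \
2>&1 | grep -E '<q1>|<q2>'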
Following #SeB's comment, I tried out interpipes:
Thank you very much for your input. I tried it out, and it seems the problem is similar. If I omit the queue elements or don't set min-threshold-time to more than 0, it works, but as soon as I add any delay to one or more of the queue elements, the whole pipeline does nothing; the time counter never goes up from 0:00:00.0.
I tried different combinations of the interpipe sink/source options forward-/accept-events and forward-/accept-eos, but it didn't change anything.
What am I doing wrong? As I understand interpipe, it should decouple the sink/source elements from each other, so that one stalling pipeline doesn't affect the rest(?).
Command and output:
% gst-launch-1.0 \
udpsrc port=10212 caps='application/x-rtp,media=(string)audio,clock-rate=(int)48000,encoding-name=(string)OPUS' ! \
interpipesink name=t1 \
interpipesrc listen-to="t1" ! queue leaky=downstream max-size-buffers=0 max-size-time=0 max-size-bytes=0 min-threshold-time=5000000000 ! \
udpsink host=targethost1 port=10214 async=true sync=false \
interpipesrc listen-to="t1" ! queue leaky=downstream max-size-buffers=0 max-size-time=0 max-size-bytes=0 min-threshold-time=10000000000 ! \
udpsink host=targethost2 port=10215 async=true sync=false
Setting pipeline to PAUSED ...
Pipeline is live and does not need PREROLL ...
Pipeline is PREROLLED ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock
0:00:00.0 / 99:99:99..
I also tried shmsrc/shmsink, but this also more or less fails: as soon as I add a delay to the pipeline with the shmsrc, it remains stuck in the PREROLLING state:
shmsink:
% gst-launch-1.0 \
udpsrc port=10212 caps='application/x-rtp,media=(string)audio,clock-rate=(int)48000,encoding-name=(string)OPUS' ! \
queue ! shmsink socket-path=/tmp/blah shm-size=20000000 wait-for-connection=false
shmsrc (without is-live):
% gst-launch-1.0 \
shmsrc socket-path=/tmp/blah ! 'application/x-rtp,media=(string)audio,clock-rate=(int)48000,encoding-name=(string)OPUS' ! \
queue leaky=downstream max-size-buffers=0 max-size-time=0 max-size-bytes=0 min-threshold-time=5000000000 ! \
udpsink host=targethost port=10215 async=true sync=false
Setting pipeline to PAUSED ...
Pipeline is PREROLLING ...
shmsrc (with is-live):
% gst-launch-1.0 \
shmsrc is-live=true socket-path=/tmp/blah ! 'application/x-rtp,media=(string)audio,clock-rate=(int)48000,encoding-name=(string)OPUS' ! \
queue leaky=downstream max-size-buffers=0 max-size-time=0 max-size-bytes=0 min-threshold-time=50 ! \
udpsink host=targethost port=10215 async=true sync=false
Setting pipeline to PAUSED ...
Pipeline is live and does not need PREROLL ...
Pipeline is PREROLLED ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock
Depending on whether is-live is set on the source, the behavior is different, but in both cases no data is actually sent. Without the min-threshold-time on the queue element, both shmsrc commands work.
I'm writing a C++ application with GStreamer and am trying to achieve the following: connect to an RTP audio stream (Opus), write one copy of the entire stream to an audio file, and additionally, based on events triggered by the user, create a separate series of audio files consisting of segments of the RTP stream (think of a start/stop record toggle button).
Currently using udpsrc -> rtpbin -> rtpopusdepay -> queue -> tee (pipeline splits here)
tee_stream_1 -> queue -> webmmux -> filesink
tee_stream_2 -> queue -> webmmux -> filesink
tee_stream_1 should be active during the entire duration of the pipeline. tee_stream_2 is what should generate multiple files based on user toggle events.
An example scenario:
The pipeline receives the RTP audio stream; tee_stream_1 begins writing audio to full_stream.webm.
2 seconds into the RTP audio stream, the user toggles "start recording"; tee_stream_2 begins writing audio to stream_segment_1.webm.
5 seconds into the RTP audio stream, the user toggles "stop recording"; tee_stream_2 finishes writing audio to stream_segment_1.webm and closes the file.
8 seconds into the RTP audio stream, the user toggles "start recording"; tee_stream_2 begins writing audio to stream_segment_2.webm.
9 seconds into the RTP audio stream, the user toggles "stop recording"; tee_stream_2 finishes writing audio to stream_segment_2.webm and closes the file.
10 seconds into the RTP audio stream, the stream ends; full_stream.webm finishes writing audio and closes.
The end result is 3 audio files: full_stream.webm with 10 seconds of audio, stream_segment_1.webm with 3 seconds of audio, and stream_segment_2.webm with 1 second of audio.
Attempts to do this so far have been met with difficulty, since the muxers seem to require an EOS event to properly finish writing the stream_segment files; however, this EOS is propagated to the other elements of the pipeline, which has the undesired effect of ending all of the recordings. Any ideas on how best to accomplish this? I can provide code if it would be helpful.
Thank you for any and all assistance!
For such a case, I'd suggest giving a try to RidgeRun's open-source gstd and interpipe plugins, which provide high-level control of dynamic pipelines.
You may install with something like:
# Some required packages to be installed; not exhaustive...
# If something is still missing you will see errors and can figure out the other packages
sudo apt install libsoup2.4-dev libjson-glib-dev libdaemon-dev libjansson-dev libreadline-dev gtk-doc-tools python3-pip
# Get gstd sources from github
git clone --recursive https://github.com/RidgeRun/gstd-1.x.git
# Configure, build and install (meson may be better, but here using autogen/configure)
cd gstd-1.x
./autogen.sh
./configure
make -j $(nproc)
sudo make install
cd ..
# Get gst-interpipe sources from github
git clone --recursive https://github.com/RidgeRun/gst-interpipe.git
# Configure, build and install (meson may be better, but here using autogen/configure)
cd gst-interpipe
./autogen.sh
./configure
make -j $(nproc)
sudo make install
cd ..
# Tell gstreamer about the new plugins interpipesink and interpipesrc
# First clear gstreamer cache (here using arm64, you would adapt for your arch)
rm ~/.cache/gstreamer-1.0/registry.aarch64.bin
# add new plugins path
export GST_PLUGIN_PATH=/usr/local/lib/gstreamer-1.0
# now any gstreamer command would rebuild the cache, so if ok this should work
gst-inspect-1.0 interpipesink
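If you prefer Meson (as the comments above hint), I believe recent checkouts of both projects also ship a meson.build; in that case the generic Meson flow should be roughly equivalent (untested here, adapt as needed):
# Hedged alternative to autogen/configure, run from the project's source directory
meson setup build
ninja -C build
sudo ninja -C build install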
This setup relies on the gstd daemon to manage the pipelines, so in a first terminal you would just start it. It will display operations and errors, if any:
gstd
Now, in a second terminal, you would try this script (here recording into directory /home/user/tmp/tmp2; adjust for your case):
#!/bin/sh
gstd-client pipeline_create rtpopussrc udpsrc port=5004 ! application/x-rtp,media=audio,encoding-name=OPUS,clock-rate=48000 ! queue ! rtpbin ! rtpopusdepay ! opusparse ! audio/x-opus ! interpipesink name=opussrc
gstd-client pipeline_create audio_record_full interpipesrc name=audiofull listen-to=opussrc is-live=true allow-renegotiation=true stream-sync=compensate-ts ! queue ! audio/x-opus ! opusparse ! webmmux ! filesink location=/home/user/tmp/tmp2/full_stream.webm
gstd-client pipeline_play rtpopussrc
gstd-client pipeline_play audio_record_full
sleep 2
gstd-client pipeline_create audio_record_1 interpipesrc name=audio_rec1 listen-to=opussrc is-live=true allow-renegotiation=true stream-sync=compensate-ts ! queue ! audio/x-opus ! opusparse ! webmmux ! filesink location=/home/user/tmp/tmp2/stream_segment_1.webm
gstd-client pipeline_play audio_record_1
sleep 3
gstd-client pipeline_stop audio_record_1
gstd-client pipeline_delete audio_record_1
sleep 3
gstd-client pipeline_create audio_record_2 interpipesrc name=audio_rec2 listen-to=opussrc is-live=true allow-renegotiation=true stream-sync=compensate-ts ! queue ! audio/x-opus ! opusparse ! webmmux ! filesink location=/home/user/tmp/tmp2/stream_segment_2.webm
gstd-client pipeline_play audio_record_2
sleep 1
gstd-client pipeline_stop audio_record_2
gstd-client pipeline_delete audio_record_2
sleep 1
gstd-client pipeline_stop audio_record_full
gstd-client pipeline_delete audio_record_full
gstd-client pipeline_stop rtpopussrc
gstd-client pipeline_delete rtpopussrc
echo 'Done'
and check the resulting files.
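If useful, a quick way to check them is gst-discoverer-1.0, which prints each file's duration; for the scenario above they should come out at roughly 10 s, 3 s and 1 s (paths as used in the script):
gst-discoverer-1.0 /home/user/tmp/tmp2/full_stream.webm
gst-discoverer-1.0 /home/user/tmp/tmp2/stream_segment_1.webm
gst-discoverer-1.0 /home/user/tmp/tmp2/stream_segment_2.webm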
I'm trying to separate the work on frames into a few parallel pipeline branches. Let's say we have this pipeline:
some_merger_frames_by_ts ! autovideosink
v4l2src device=/dev/video2 ! tee name=t
t. ! queue ! /*some work with received frame*/ ! some_merger_frames_by_ts.
t. ! queue ! /*some work with received frame*/ ! some_merger_frames_by_ts.
The same work is being done on every branch. I couldn't find anything like some_merger_frames_by_ts with the needed functionality; I've been searching a lot, but couldn't find anything suitable.
I want to filter frames in each queue by timestamp, then merge them in some_merger_frames_by_ts and get an aligned stream.
Really, has no one tried this?
Is this possible? Any help or alternative approaches would be much appreciated.
Thanks a lot in advance.
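Not a real answer, but for comparison: the closest stock element I'm aware of is funnel, which merges N branches back into a single stream. It forwards buffers in arrival order and does not reorder, drop or align them by timestamp, so it only approximates the some_merger_frames_by_ts idea; the identity elements below are just placeholders for the per-branch work:
gst-launch-1.0 \
funnel name=f ! videoconvert ! autovideosink \
v4l2src device=/dev/video2 ! tee name=t \
t. ! queue ! identity ! f. \
t. ! queue ! identity ! f.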
This may be a silly question: how are GStreamer elements loaded multiple times in a process? When GStreamer elements are created, are they shared if one has already been created and is present in memory? In my case, one process creates multiple threads, and for each thread I create the following GStreamer elements, link them, and set the pipeline to the PLAYING state: filesrc -> Q -> filesink. This works. But when I add a GStreamer element (newly written, for processing GstBuffer data) between the Q and the filesink, all threads stop working. What might be the problem? How can I debug it? Please provide inputs. Thanks in advance. -opensid
The elements live in shared libraries, so the code will be in memory just once. Each instance will occupy some memory for its own state, though. When doing multithreaded work, you should call gst_init() just once, from your main thread. As GStreamer already creates new threads for the data processing, it is safer to create all the GStreamer pipelines from one main thread. You can run several pipelines in parallel.
I agree with ensonic's answer as far as it applies to data stored in the klass. However, it doesn't seem to apply to GstBuffers. I am working my way through versions of an IEEE1278 audio transform based on GstBaseTransform. In one version there is a filter plug-in that lets UDP packets through based on settable properties, and a plug-in for a two-way transform, IEEE1278 <-> mulaw, depending on what the pads are set to.
For a simple test I tried a loop:
gst-launch-1.0 -v filesrc \
location=IsaacAsimov-Foundation1Of8_64kb.mp3 \
! mpegaudioparse \
! mpg123audiodec \
! 'audio/x-raw,rate=8000,channels=1' \
! audioresample \
! 'audio/x-raw,rate=8000,channels=1' \
! mulawenc \
! 'audio/x-mulaw,rate=8000,channels=1' \
! dissignalaudio \
! disfilter \
! dissignalaudio \
! 'audio/x-mulaw,rate=8000,channels=1' \
! mulawdec \
! 'audio/x-raw,rate=8000,channels=1' \
! autoaudiosink
No matter what I did to the GstBuffer data or metadata in dissignalaudio_transform, the output audio had a lot of strong clicking noise. g_print output in mulawdec showed that none of my transform changes were arriving at mulawdec. I separated the loop into two launch pipelines using UDP loopback and the noise went away. Somehow the GstBuffer from the first instance of dissignalaudio was overriding the second instance.
Lesson learned:
There is a reason there are no examples of two-way transforms and all transforms have separate encode and decode plug-ins.