This may be a silly question: how are GStreamer elements loaded multiple times in a process? When GStreamer elements are created, are they shared if one has already been created and is present in memory? In my case, one process creates multiple threads, and in each thread I create the following GStreamer elements, link them, and set the pipeline to the PLAYING state: filesrc -> queue -> filesink. This works. But when I add a GStreamer element (newly written, for processing GstBuffer data) between the queue and the filesink, all threads stop working. What might be the problem, and how can I debug it? Please provide inputs. Thanks in advance. -opensid
The elements live in shared libraries, so their code is loaded into memory only once. Each instance still occupies some memory for its own state, though. When doing multithreaded work, you should call gst_init() just once, from your main thread. Since GStreamer already creates new threads for the data processing, it is safer to create all GStreamer pipelines from one main thread. You can run several pipelines in parallel.
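For illustration, a minimal sketch of that pattern: gst_init() is called once, and several independent pipelines are built from the main thread and run in parallel. The file names are placeholders, and identity stands in for your custom processing element:

#include <gst/gst.h>

int main (int argc, char *argv[])
{
  const gchar *descriptions[] = {
    /* placeholder file names; identity stands in for the custom element */
    "filesrc location=in0.dat ! queue ! identity ! filesink location=out0.dat",
    "filesrc location=in1.dat ! queue ! identity ! filesink location=out1.dat",
  };
  GstElement *pipelines[2];
  gint i;

  /* initialize GStreamer exactly once, from the main thread */
  gst_init (&argc, &argv);

  /* build every pipeline from the main thread; GStreamer spawns its own
   * streaming threads, so the pipelines run in parallel */
  for (i = 0; i < 2; i++) {
    pipelines[i] = gst_parse_launch (descriptions[i], NULL);
    gst_element_set_state (pipelines[i], GST_STATE_PLAYING);
  }

  /* wait for EOS or an error on each pipeline, then shut it down */
  for (i = 0; i < 2; i++) {
    GstBus *bus = gst_element_get_bus (pipelines[i]);
    GstMessage *msg = gst_bus_timed_pop_filtered (bus, GST_CLOCK_TIME_NONE,
        GST_MESSAGE_EOS | GST_MESSAGE_ERROR);
    if (msg)
      gst_message_unref (msg);
    gst_object_unref (bus);
    gst_element_set_state (pipelines[i], GST_STATE_NULL);
    gst_object_unref (pipelines[i]);
  }
  return 0;
}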
I agree with ensonic's answer as it applies to data stored in the klass structure. However, it does not seem to apply to GstBuffers. I am working my way through versions of an IEEE 1278 audio transform based on GstBaseTransform. In one version there is a filter plug-in that lets UDP packets through based on settable properties, and a plug-in for a two-way transform, IEEE 1278 <-> mulaw, depending on what the pads are set to.
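To make ensonic's point concrete: anything that must differ between two running instances has to live in the per-instance structure, not in the shared class structure. A rough sketch of how a GstBaseTransform subclass usually lays this out (the field names are invented for illustration, and the usual GType registration boilerplate is omitted):

#include <gst/base/gstbasetransform.h>

/* per-instance state: every element instance gets its own copy */
typedef struct _GstDisSignalAudio {
  GstBaseTransform parent;
  gboolean encode_direction;   /* hypothetical per-instance flag    */
  guint64 packets_seen;        /* hypothetical per-instance counter */
} GstDisSignalAudio;

/* class state: shared by all instances of the element in the process */
typedef struct _GstDisSignalAudioClass {
  GstBaseTransformClass parent_class;
} GstDisSignalAudioClass;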
For a simple test I tried a loop:
gst-launch-1.0 -v filesrc
location=IsaacAsimov-Foundation1Of8_64kb.mp3 \
! mpegaudioparse \
! mpg123audiodec \
! 'audio/x-raw,rate=8000,channels=1' \
! audioresample \
! 'audio/x-raw,rate=8000,channels=1' \
! mulawenc \
! 'audio/x-mulaw,rate=8000,channels=1' \
! dissignalaudio \
! disfilter \
! dissignalaudio \
! 'audio/x-mulaw,rate=8000,channels=1' \
! mulawdec \
! 'audio/x-raw,rate=8000,channels=1' \
! autoaudiosink
No matter what I did to the GstBuffer data or metadata in dissignalaudio_transform, the output audio had a lot of strong clicking noise. g_print() calls in mulawdec showed that none of my transform changes were arriving at mulawdec. I separated the loop into two launch pipelines using UDP loop-back, and the noise went away. Somehow the GstBuffer from the first instance of dissignalaudio was overriding the second instance.
Lesson learned:
There is a reason there are no examples of two-way transforms, and why all transforms have separate encode and decode plug-ins.
A VP8 stream comes from the Janus Videoroom plugin, with restreaming to ports 10002/10004 locally. From there, it's picked up with the following GStreamer pipeline:
gst-launch-1.0 -v udpsrc \
caps="application/x-rtp,media=(string)video,encoding-name=(string)VP8,payload=100" \
address=127.0.0.1 port=10004 ! \
rtpvp8depay ! rtpvp8pay ! \
udpsink host=127.0.0.1 port=5004
and sent to the Streaming plugin. As you can see, there is no transcoding here, just depayloading and payloading. The resulting video breaks down into artifacts on some keyframes, approximately once in 10 keyframes, only to be fixed on the next keyframe.
If I remove the depay and pay elements and simply forward at the RTP level, like this:
gst-launch-1.0 -v udpsrc \
caps="application/x-rtp,media=(string)video,encoding-name=(string)VP8,payload=100" \
address=127.0.0.1 port=10004 ! \
udpsink host=127.0.0.1 port=5004
the problem never happens.
I understand this is not a Janus issue but a GStreamer issue, but maybe someone has an idea what the problem could be? It has been tested very reliably: the problem is easy to reproduce in the former case and never happens in the latter.
Of course, the goal of what I am doing is transcoding, and there was a lot more in the setup and the pipeline before I boiled it down to this level. The issue was reproduced with Janus installed on a fresh Ubuntu 18.04 machine with all out-of-the-box settings.
Update:
export GST_DEBUG="rtp*:4";
revealed this error message, which shows up each time the artifacts appear:
rtpbasedepayload gstrtpbasedepayload.c:473:gst_rtp_base_depayload_handle_buffer:
<rtpvp8depay0> 12 <= 100, dropping old packet
where the number ("12" here) fluctuates, typically between 5 and 12.
This was the fix: insert
rtpjitterbuffer latency=50 !
immediately before rtpvp8depay.
Logically, the order of the packets at that point is the same as the order in which they arrived over the internet between the sending browser and Janus. If we don't depay+pay, they travel the same way to the receiving browser connected to the Streaming plugin, which has its own jitter buffer and can therefore fix the order. But if we do depay+pay here, there is no such buffer, so out-of-order packets are dropped, resulting in broken frames.
And yes, I brought back the transcoding, the rest of my pipeline and all the other bells and whistles that were around it, and it still works fine.
I have a GStreamer pipeline that records three live cameras and basically does the following: capture the 3 cameras in a first thread; then do some processing over the 3 streams in 3 separate threads; in parallel, re-scale the frames for a compositor (a videomixer adapted for live sources) in 3 other threads; and finally do the composition. The plan for each camera is the following (so x3):
[capture] -> TEE |-> QUEUE -> [someProcessing] -> _
                 |-> QUEUE -> [rescale] -> COMPOSITOR
gst-launch-1.0 \
${capture0} ! tee name='t0' ! queue ! ${someProcessing0} \
${capture1} ! tee name='t1' ! queue ! ${someProcessing1} \
${capture2} ! tee name='t2' ! queue ! ${someProcessing2} \
${someStuff} \
compositor name=compo ${compositorSinkProperties} \
t0. ! queue ! ${rescale0} ! compo.sink_0 \
t1. ! queue ! ${rescale1} ! compo.sink_1 \
t2. ! queue ! ${rescale2} ! compo.sink_2 \
-e
My pipeline works well; I just need to clarify its internal behavior:
I know how to force the use of separate threads with the queue element. However, I do not know what happens when my 3 [rescale] branches are merged into a single element such as compo in my case.
Does GStreamer create 3 threads as asked?
If yes, then in what thread(s) does compositor run?
If not, do I have only 1 thread for the whole rescaling+compositing process?
Thanks for any info you might share!
Regards
To my knowledge you are correct. You will have threads for all the queue paths downstream. And I think the aggregator has its own thread too; I lack proof of that - perhaps you can find it in the GstAggregator class.
But its aggregate function fires once all sink pads on the aggregator have data.
Taken from the GstAggregator base-class documentation:
aggregate ()
Mandatory. Called when buffers are queued on all sinkpads. Classes should
iterate the GstElement->sinkpads and peek or steal buffers from the
GstAggregatorPads. If the subclass returns GST_FLOW_EOS, sending of the
eos event will be taken care of. Once / if a buffer has been constructed
from the aggregated buffers, the subclass should call _finish_buffer.
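To make that concrete, here is a rough, hedged sketch of what such an aggregate() implementation can look like in a GstAggregator subclass (the my_ prefix is a placeholder; locking and timing details are omitted, and real subclasses such as compositor do considerably more):

#include <gst/base/gstaggregator.h>

/* called once every sink pad has a buffer queued */
static GstFlowReturn
my_aggregate (GstAggregator * agg, gboolean timeout)
{
  GstBuffer *outbuf = gst_buffer_new ();
  GList *l;

  /* walk the element's sink pads and take one buffer from each */
  for (l = GST_ELEMENT (agg)->sinkpads; l != NULL; l = l->next) {
    GstAggregatorPad *pad = GST_AGGREGATOR_PAD (l->data);
    GstBuffer *inbuf = gst_aggregator_pad_pop_buffer (pad);

    if (inbuf != NULL) {
      /* ... combine inbuf into outbuf here ... */
      gst_buffer_unref (inbuf);
    }
  }

  /* push the combined buffer downstream */
  return gst_aggregator_finish_buffer (agg, outbuf);
}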
I have this working but have been unable to get video from my Magewell card to integrate, and could use help with the correct pipeline.
gst-launch-1.0 videotestsrc ! video/x-raw,width=848,height=480,framerate=25/1 ! x264enc bitrate=700 ! video/x-h264,width=848,height=480,framerate=25/1,stream-format=byte-stream,profile=baseline ! tee name=t \
t. ! queue ! tcpclientsink host=172.18.0.3 port=8000 \
t. ! queue ! tcpclientsink host=172.18.0.4 port=8000
I do not see the receiver-side pipeline in the question description. This is required to verify that there are no issues at the receiver side. Based on your current pipeline I have the following suggestions:
You don't need to set the caps again after the x264enc element, because its output is in any case of type video/x-h264. What you do need is to add h264parse after x264enc. You also need to add h264parse before passing the data to the decoder you are using at the receiver side.
The bitrate set for x264enc is also very low. The units are kbit/s, and for video this might be too little. It is best to leave this at the default setting if you do not have any strict resource constraints; otherwise try a higher value.
Also, is there any reason why you are using TCP? Using UDP might be a better idea for video, in case video data/packet loss is not an issue.
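If it helps, here is a rough sketch of the sender side with those suggestions applied (h264parse added after the encoder, the redundant caps dropped, and the bitrate left at its default); the hosts and ports are the ones from your pipeline, and this is only a sketch, not a tested drop-in replacement:

#include <gst/gst.h>

int main (int argc, char *argv[])
{
  GstElement *pipeline;
  GMainLoop *loop;

  gst_init (&argc, &argv);

  /* x264enc left at its default bitrate; h264parse added after the encoder */
  pipeline = gst_parse_launch (
      "videotestsrc ! video/x-raw,width=848,height=480,framerate=25/1 "
      "! x264enc ! h264parse ! tee name=t "
      "t. ! queue ! tcpclientsink host=172.18.0.3 port=8000 "
      "t. ! queue ! tcpclientsink host=172.18.0.4 port=8000", NULL);
  gst_element_set_state (pipeline, GST_STATE_PLAYING);

  /* run until the process is stopped */
  loop = g_main_loop_new (NULL, FALSE);
  g_main_loop_run (loop);
  return 0;
}

The receiver side (not shown in the question) would likewise need h264parse in front of whatever H.264 decoder it uses.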
I'm looking for the correct technique, if one exists, for dynamically replacing an element in a running GStreamer pipeline. I have a GStreamer-based C++ app, and the pipeline it creates looks like this (using gst-launch syntax):
souphttpsrc location="http://localhost/local.ts" ! mpegtsdemux name=d ! queue ! mpeg2dec ! xvimagesink d. ! queue ! a52dec ! pulsesink
During the middle of playback (i.e. the pipeline is in GST_STATE_PLAYING and the user is happily watching video), I need to remove the souphttpsrc from the pipeline, create a new souphttpsrc (or even a new neonhttpsrc), immediately add that back into the pipeline, and continue playback of the same URI source stream at the same time position where playback was before we performed this operation. The user might see a small delay and that is fine.
We've barely figured out how to remove and replace the source, and we need more understanding. Here's our best attempt thus far:
gst_element_unlink(source, demuxer);
gst_element_set_state(source, GST_STATE_NULL);
gst_bin_remove(GST_BIN(pipeline), source);
source = gst_element_factory_make("souphttpsrc", "src");
g_object_set(G_OBJECT(source), "location", url, NULL);
gst_bin_add(GST_BIN(pipeline), source);
gst_element_link(source, demuxer);
gst_element_sync_state_with_parent(source);
This doesn't work perfectly: the new source plays back from the beginning, and the rest of the pipeline waits for buffers with the correct timestamps (I assume), because after several seconds playback picks back up. I tried seeking the source in multiple ways, but nothing has worked.
I need to know the correct way to do this. It would be nice to know a general technique, if one exists, as well, in case we wanted to dynamically replace the decoder or some other element.
thanks
I think this may be what you are looking for:
http://cgit.freedesktop.org/gstreamer/gstreamer/tree/docs/design/part-block.txt
(starting at line 115)
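That document describes pad blocking. For the general case you also ask about (replacing the decoder or some other mid-pipeline element), the usual pattern with the GStreamer 1.x probe API (the 0.10 equivalent described in that document is gst_pad_set_blocked_async()) is to block the src pad just upstream of the element you want to swap and do the swap from the probe callback. This is only a hedged sketch: the variable names are placeholders, error handling is dropped, and a real implementation would usually also drain the old element with an EOS event and deal with re-seeking:

#include <gst/gst.h>

/* assumed to be set up elsewhere, as in the pipeline above:
 * ... ! queue (queue_v) ! mpeg2dec (old_dec) ! xvimagesink (video_sink) */
static GstElement *pipeline, *queue_v, *old_dec, *video_sink;

static GstPadProbeReturn
on_blocked (GstPad * pad, GstPadProbeInfo * info, gpointer user_data)
{
  GstElement *new_dec;

  /* dataflow on the upstream src pad is blocked now, so the old element
   * can be taken out safely */
  gst_element_set_state (old_dec, GST_STATE_NULL);
  gst_bin_remove (GST_BIN (pipeline), old_dec);   /* remove also unlinks */

  new_dec = gst_element_factory_make ("mpeg2dec", "videodec");
  gst_bin_add (GST_BIN (pipeline), new_dec);
  gst_element_link_many (queue_v, new_dec, video_sink, NULL);
  gst_element_sync_state_with_parent (new_dec);
  old_dec = new_dec;

  /* removing the probe unblocks the dataflow through the new element */
  return GST_PAD_PROBE_REMOVE;
}

static void
replace_decoder (void)
{
  GstPad *srcpad = gst_element_get_static_pad (queue_v, "src");

  gst_pad_add_probe (srcpad, GST_PAD_PROBE_TYPE_BLOCK_DOWNSTREAM,
      on_blocked, NULL, NULL);
  gst_object_unref (srcpad);
}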
I'm constructing a GStreamer pipeline that receives two RTP streams from a networked source:
ILBC Audio stream + corresponding RTCP stream
H263 Video stream + corresponding RTCP stream
Everything is put into one GStreamer pipeline so it will use the RTCP from both streams to synchronize audio/video. So far I've come up with this (using gst-launch for prototyping):
gst-launch -vvv gstrtpbin name=rtpbin
udpsrc caps="application/x-rtp,media=(string)video,clock-rate=(int)90000,encoding-name=(string)H263-2000" port=40000 ! rtpbin.recv_rtp_sink_0
rtpbin. ! rtph263pdepay ! ffdec_h263 ! xvimagesink
udpsrc port=40001 ! rtpbin.recv_rtcp_sink_0
rtpbin.send_rtcp_src_0 ! udpsink port=40002 sync=false async=false
udpsrc caps="application/x-rtp,media=(string)audio,clock-rate=(int)8000,encoding-name=(string)PCMU,encoding-params=(string)1,octet-align=(string)1" port=60000 rtpbin.recv_rtp_sink_1
rtpbin. ! rtppcmudepay ! autoaudiosink
udpsrc port=60001 ! rtpbin.recv_rtcp_sink_1
rtpbin.send_rtcp_src_1 ! udpsink port=60002 sync=false async=false
This pipeline works well if the networked source starts out sending both video and audio. If the video stream is paused later on, GStreamer will still play back the audio and will even start playing back the video again when the networked source resumes the video stream.
My problem, however, is that if the networked source starts out with only an audio stream (video might be added later on), the pipeline seems to pause/freeze until the video stream starts as well.
Since video is optional in my application (and can be added/removed at will by the user), is there any way I can hook up, for instance, a 'videotestsrc' that provides some kind of fallback video data to keep the pipeline running when there is no networked video data?
I've tried experimenting with 'videotestsrc' and a thing called 'videomixer', but I think that mixer still requires both streams to be alive. Any feedback is greatly appreciated!
Here is a simple function for pause/resume by changing bins. In the following example I provide the logic to change the destination bin on the fly, dynamically. This does not completely stop the pipeline, which is what you are after, I believe. Similar logic could be used for source bins: here you would remove your network source bin and the related decoder/demux bins and add videotestsrc bins.
private static void dynamic_bin_replacement(Pipeline pipe, Element src_bin, Element dst_bin_new, Element dst_bin_old) {
    pipe.pause();
    src_bin.unlink(dst_bin_old);
    pipe.remove(dst_bin_old);
    pipe.add(dst_bin_new);
    dst_bin_new.syncStateWithParent();
    src_bin.link(dst_bin_new);
    pipe.ready();
    pipe.play();
}
The other logic you may want to try is pad blocking. Please take a look at the following posts:
http://cgit.freedesktop.org/gstreamer/gstreamer/tree/docs/design/part-block.txt
and
http://web.archiveorange.com/archive/v/8yxpz7FmOlGqxVYtkPb4
and
Adding and removing audio sources to/from GStreamer pipeline on-the-go
UPDATE
Try the output-selector and input-selector elements, as they seem to be a better alternative. I found them the most reliable and have had immense luck with them. I use fakesink or fakesrc respectively as the other end of the selector.
The valve element is another alternative, which I found doesn't even need fakesink or fakesrc. It is also extremely reliable.
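To sketch the input-selector idea for your case (fallback video while the networked stream is absent): feed both the network video branch and a videotestsrc into one input-selector and flip its active-pad property when the real stream appears or disappears. This is only a hedged sketch using GStreamer 1.x pad template names; net_video and video_sink are placeholders for the last element of your RTP video branch and the first element of your display branch, and caps negotiation between the test source and the real branch is glossed over:

#include <gst/gst.h>

/* hypothetical helper: put an input-selector in front of the video sink
 * branch, with a live videotestsrc as the fallback input */
static void
add_fallback_selector (GstElement * pipeline, GstElement * net_video,
    GstElement * video_sink, GstPad ** net_pad, GstPad ** test_pad)
{
  GstElement *selector = gst_element_factory_make ("input-selector", "sel");
  GstElement *testsrc = gst_element_factory_make ("videotestsrc", "fallback");
  GstPad *srcpad;

  g_object_set (testsrc, "is-live", TRUE, NULL);
  gst_bin_add_many (GST_BIN (pipeline), selector, testsrc, NULL);

  /* one request sink pad per input */
  *net_pad = gst_element_get_request_pad (selector, "sink_%u");
  *test_pad = gst_element_get_request_pad (selector, "sink_%u");

  srcpad = gst_element_get_static_pad (net_video, "src");
  gst_pad_link (srcpad, *net_pad);
  gst_object_unref (srcpad);

  srcpad = gst_element_get_static_pad (testsrc, "src");
  gst_pad_link (srcpad, *test_pad);
  gst_object_unref (srcpad);

  gst_element_link (selector, video_sink);

  /* show the test pattern until real video arrives, then switch with
   * g_object_set (selector, "active-pad", *net_pad, NULL); */
  g_object_set (selector, "active-pad", *test_pad, NULL);
}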
Also note the correct state transition order for a media file source:
NULL -> READY -> PAUSED -> PLAYING (Upwards)
PLAYING -> PAUSED -> READY -> NULL (Downwards)
My order in the above example should be corrected: ready() should come before pause(). Also, I would tend to think un-linking should be performed after the NULL state and not after pause(). I haven't tried these changes, but theoretically they should work.
See the following link for detailed info
http://cgit.freedesktop.org/gstreamer/gstreamer/tree/docs/design/part-states.txt?h=BRANCH-RELEASE-0_10_19