GStreamer RTSP server with different URLs for payloaders

I am working on a streaming device with a CSI camera input. I want to duplicate the incoming stream with tee and then access each of these streams with a different URL using gst-rtsp-server. I can have only one consumer on my camera, so it is impossible to have two standalone pipelines. Is this possible? See the pseudo pipeline below.
source -> tee name=t -> rtsp with url0 .t -> rtsp with url1
Thanks!
EDIT 1:
I tried the first solution with an appsink/appsrc pair, but I was only half successful. Now I have two pipelines.
nvv4l2camerasrc device=/dev/video0 ! video/x-raw(memory:NVMM), width=1920, height=1080, format=UYVY, framerate=50/1 ! nvvidconv name=conv ! video/x-raw(memory:NVMM), width=1280, height=720, format=NV12, framerate=50/1 ! nvv4l2h264enc control-rate=1 bitrate=10000000 preset-level=1 profile=0 disable-cabac=1 maxperf-enable=1 name=encoder insert-sps-pps=1 insert-vui=1 ! appsink name=appsink sync=false
and
appsrc name=appsrc format=3 is-live=true do-timestamp=true ! queue ! rtph264pay config-interval=1 name=pay0
The second pipeline is used to create the media factory. I push the buffers from appsink to appsrc in a callback for the new-sample signal, like this.
static GstFlowReturn
on_new_sample_from_sink (GstElement * elt, void * data)
{
  GstSample *sample;
  GstFlowReturn ret = GST_FLOW_OK;

  /* get the sample from appsink */
  sample = gst_app_sink_pull_sample (GST_APP_SINK (elt));

  if (appsrc)
  {
    /* push_sample does not take ownership, so the sample is unreffed below */
    ret = gst_app_src_push_sample (GST_APP_SRC (appsrc), sample);
  }

  gst_sample_unref (sample);
  return ret;
}
This works: video is streamed and can be seen on a different machine using GStreamer or VLC. The problem is latency. For some reason the latency is about 3 s.
When I merge these two pipelines into one and create the media factory directly, without using appsink and appsrc, it works fine without large latency.
I think that for some reason the appsrc is queuing buffers before it starts pushing them to its source pad. In the debug output below you can see the number of queued bytes it stabilizes at.
0:00:19.202295929 9724 0x7f680030f0 DEBUG appsrc gstappsrc.c:1819:gst_app_src_push_internal:<appsrc> queue filled (1113444 >= 200000)
0:00:19.202331834 9724 0x7f680030f0 DEBUG appsrc gstappsrc.c:1819:gst_app_src_push_internal:<appsrc> queue filled (1113444 >= 200000)
0:00:19.202353818 9724 0x7f680030f0 DEBUG appsrc gstappsrc.c:1863:gst_app_src_push_internal:<appsrc> queueing buffer 0x7f58039690
0:00:19.222150573 9724 0x7f680030f0 DEBUG appsrc gstappsrc.c:1819:gst_app_src_push_internal:<appsrc> queue filled (1141310 >= 200000)
0:00:19.222184302 9724 0x7f680030f0 DEBUG appsrc gstappsrc.c:1819:gst_app_src_push_internal:<appsrc> queue filled (1141310 >= 200000)
EDIT 2:
I added the max-buffers property to the appsink and the suggested properties to the queues, but it didn't help at all.
I just don't understand how it can queue so many buffers, and why. If I run my test application with GST_DEBUG=appsrc:5, I get output like this.
0:00:47.923713520 14035 0x7f68003850 DEBUG appsrc gstappsrc.c:1819:gst_app_src_push_internal:<appsrc> queue filled (2507045 >= 200000)
0:00:47.923757840 14035 0x7f68003850 DEBUG appsrc gstappsrc.c:1819:gst_app_src_push_internal:<appsrc> queue filled (2507045 >= 200000)
According to this debug output, everything is queued in appsrc even though its max-bytes property is set to 200 000 bytes. Maybe I don't understand it correctly, but it looks weird to me.
My pipelines are currently like this.
nvv4l2camerasrc device=/dev/video0 ! video/x-raw(memory:NVMM), width=1920, height=1080, format=UYVY, framerate=50/1 ! queue max-size-buffers=3 leaky=downstream ! nvvidconv name=conv ! video/x-raw(memory:NVMM), width=1280, height=720, format=NV12, framerate=50/1 ! nvv4l2h264enc control-rate=1 bitrate=10000000 preset-level=1 profile=0 disable-cabac=1 maxperf-enable=1 name=encoder insert-sps-pps=1 insert-vui=1 ! appsink name=appsink sync=false max-buffers=3
and
appsrc name=appsrc format=3 stream-type=0 is-live=true do-timestamp=true blocksize=16384 max-bytes=200000 ! queue max-size-buffers=3 leaky=no ! rtph264pay config-interval=1 name=pay0
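For context on the log above: as far as I can tell, appsrc's max-bytes limit does not drop anything by itself; once the internal queue exceeds it, appsrc only emits the enough-data signal and (at least in the version shown in the log, with block=false) keeps queueing. Below is a minimal, hypothetical sketch of hooking that signal and reading the queue level, just to confirm where the buffers pile up; the element handle name is illustrative.

#include <gst/gst.h>
#include <gst/app/gstappsrc.h>

/* Hypothetical: "appsrc" is the element from the factory pipeline above */
static void
on_enough_data (GstAppSrc * src, gpointer user_data)
{
  /* Fires once the queued data exceeds max-bytes; nothing is dropped here */
  GST_WARNING ("appsrc full: %" G_GUINT64_FORMAT " bytes queued",
      gst_app_src_get_current_level_bytes (src));
}

/* After obtaining the appsrc from the media factory:
 *   g_signal_connect (appsrc, "enough-data", G_CALLBACK (on_enough_data), NULL);
 */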

I can think of three possibilities:

1. Use appsink/appsrc (as in this example) to separate the pipeline into something like this:

                                Factory with URL 1
   Capture pipeline             .-------------------------------.
   .------------------------.   | appsrc ! encoder ! rtph264pay |
   | v4l2src ! ... ! appsink |  '-------------------------------'
   '------------------------'   .-------------------------------.
                                | appsrc ! encoder ! rtph264pay |
                                '-------------------------------'
                                Factory with URL 2

   You would manually take out buffers from the appsink and push them into the different appsrcs.

2. Build something like the above, but use something like interpipes or intervideosink in place of the appsink/appsrc to perform the buffer transfer automatically (see the sketch after this list).

3. Use something like GstRtspSink (a paid product, though).
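For possibility 2, here is a minimal sketch of what the intervideosink/intervideosrc variant could look like. The channel name "cam", the mount points /url0 and /url1, the v4l2src device and the software x264enc encoder are placeholders; intervideosrc/intervideosink come from gst-plugins-bad and carry raw video, so each factory does its own encoding.

#include <gst/gst.h>
#include <gst/rtsp-server/rtsp-server.h>

int
main (int argc, char *argv[])
{
  gst_init (&argc, &argv);
  GMainLoop *loop = g_main_loop_new (NULL, FALSE);

  /* Capture pipeline: the single consumer of the camera, publishing raw video on channel "cam" */
  GstElement *capture = gst_parse_launch (
      "v4l2src device=/dev/video0 ! videoconvert ! intervideosink channel=cam", NULL);
  gst_element_set_state (capture, GST_STATE_PLAYING);

  GstRTSPServer *server = gst_rtsp_server_new ();
  GstRTSPMountPoints *mounts = gst_rtsp_server_get_mount_points (server);

  /* Two factories, two URLs, both reading from the same channel */
  const gchar *launch =
      "( intervideosrc channel=cam ! videoconvert ! x264enc tune=zerolatency "
      "! rtph264pay config-interval=1 name=pay0 pt=96 )";

  GstRTSPMediaFactory *f0 = gst_rtsp_media_factory_new ();
  gst_rtsp_media_factory_set_launch (f0, launch);
  gst_rtsp_media_factory_set_shared (f0, TRUE);
  gst_rtsp_mount_points_add_factory (mounts, "/url0", f0);

  GstRTSPMediaFactory *f1 = gst_rtsp_media_factory_new ();
  gst_rtsp_media_factory_set_launch (f1, launch);
  gst_rtsp_media_factory_set_shared (f1, TRUE);
  gst_rtsp_mount_points_add_factory (mounts, "/url1", f1);

  g_object_unref (mounts);
  gst_rtsp_server_attach (server, NULL);

  g_main_loop_run (loop);
  return 0;
}

Here the capture pipeline is the only thing touching the camera; each RTSP mount point gets its own intervideosrc-based pipeline created by its factory.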

Related

Gstreamer. Get info about incoming data

I have a GStreamer pipeline which starts with
udpsrc port=50000 caps='application/x-rtp' ! rtpopusdepay ! decodebin ! queue ! audioconvert ...
It was made in C++ code.
How can I get info about the media data, like sampling rate, mono/stereo, etc.?
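One common way to read those values (a sketch, assuming you keep a handle to, e.g., the audioconvert element; the names here are hypothetical) is to query the negotiated caps on one of its pads once the pipeline is playing:

#include <gst/gst.h>

/* Hypothetical helper: "aconv" is the audioconvert element from the pipeline above. */
static void
print_audio_caps (GstElement * aconv)
{
  GstPad *pad = gst_element_get_static_pad (aconv, "sink");
  GstCaps *caps = gst_pad_get_current_caps (pad);   /* NULL until caps are negotiated */

  if (caps) {
    const GstStructure *s = gst_caps_get_structure (caps, 0);
    gint rate = 0, channels = 0;

    gst_structure_get_int (s, "rate", &rate);
    gst_structure_get_int (s, "channels", &channels);
    g_print ("rate=%d channels=%d\n", rate, channels);

    gst_caps_unref (caps);
  }
  gst_object_unref (pad);
}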

gstreamer udpsrc pipeline aggregate audio before appsink

I'm trying to come up with a pipeline to aggregate a few audio buffers before having the appsink callback executed.
I have tried the following:
gst-launch-1.0 udpsrc name=udpsrc address="192.168.1.33" retrieve-sender-address=false reuse=false port=16384 caps="application/x-rtp, media=(string)audio, payload=0, clock-rate=(int)8000" timeout=10000000000 ! rtppcmudepay ! rtppcmupay min-ptime=3200000000 max-ptime=3200000000 mtu=30000 ! rtppcmudepay ! udpsink host=192.168.1.8 port=16386
And that seemed to do the trick if I use GStreamer 1.20 or 1.18.
But when I run this pipeline under 'load' (~300 concurrent pipelines), I do have streams that wake up the callback with 160 bytes rather than the 25600.
Is there any other way I can achieve that?

Gstreamer rtsp isn't picking up audio through queues

I'm having an issue with pulling audio and video from an RTSP stream using gstreamer.
The command I am using to test is as follows:
gst-launch-1.0 rtspsrc location=rtsp://192.168.50.160/whp name=src src. ! queue ! rtph264depay ! h264parse ! avdec_h264 ! videoconvert ! x264enc bitrate=10000 ! rtph264pay ! udpsink host=192.168.50.164 port=8004 src. ! queue ! fakesink
The result of the above is that the pipeline follows through for the first (video) stream. The second stream, however, is untouched and seems to sit in the rtspsrc plugin.
The way I am finding this is by looking at the resulting dot file.
If I am reading this right, it looks like the queue connects correctly to rtpsession0 but seems to ignore rtpsession1, and the second queue doesn't connect to anything, resulting in the audio from my stream being completely ignored.
Am I reading this incorrectly? If not, am I missing something in my pipeline command that would rectify this issue?
I am happy to provide any more information necessary
Thanks

How to record pipeline even if sender doesn't send data in gstreamer

I'm a newbie to GStreamer, so I would appreciate it if you could help me.
I'm trying to listen to a pipeline and record frames to a file.
I have tried the following pipeline:
gst-launch-1.0 udpsrc port=5600 do-timestamp=true ! application/x-rtp, payload=96 ! rtph264depay ! avdec_h264 ! clockoverlay ! jpegenc ! avimux ! filesink location=stream.avi
I want to record the whole timeline even if the sender doesn't provide any frame data.
By default, the recorder appends frames only when the pipeline receives valid frames, but I want to see black frames when the sender doesn't send data.
I experimented a bit and I don't think you'll be able to do this with a plain gst-launch command. Unfortunately, what it would probably involve is writing an application that detects when packets/buffers are no longer coming in, and then modifies the pipeline. If you want to give it a go, I'd suggest the input-selector element in something like this:
gst-launch-1.0 videotestsrc pattern=black ! video/x-raw ! input-selector name=selector ! clockoverlay ! jpegenc ! avimux ! filesink location=stream.avi
Then I'd create a method to attach the stream to the input-selector:
udpsrc port=5600 do-timestamp=true ! application/x-rtp, payload=96 ! rtph264depay ! avdec_h264 ! identity name=buffer-checker
To detect no packets coming in, you can listen for the handoff signal on the identity element, and then remove the stream when it times out and switch over to the black test pattern from the videotestsrc by using the active-pad property on the input-selector.
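A rough sketch of that approach (the element handles, the 1 s timeout, and the sink_0 pad name are assumptions; sink_0 presumes the black videotestsrc branch was linked to the input-selector first):

#include <gst/gst.h>

/* Hypothetical globals: "selector" is the input-selector from the pipeline above,
 * "last_buffer_time" is updated from the identity's handoff callback. */
static GstElement *selector;
static gint64 last_buffer_time;

static void
on_handoff (GstElement * identity, GstBuffer * buf, gpointer user_data)
{
  /* Remember when data last arrived from the udpsrc branch */
  last_buffer_time = g_get_monotonic_time ();
}

static gboolean
check_timeout (gpointer user_data)
{
  /* If no buffer arrived for ~1 s, switch to the black videotestsrc pad */
  if (g_get_monotonic_time () - last_buffer_time > 1 * G_USEC_PER_SEC) {
    GstPad *black_pad = gst_element_get_static_pad (selector, "sink_0");
    g_object_set (selector, "active-pad", black_pad, NULL);
    gst_object_unref (black_pad);
  }
  return G_SOURCE_CONTINUE;   /* keep the periodic check running */
}

/* Somewhere after building the pipeline:
 *   g_signal_connect (buffer_checker, "handoff", G_CALLBACK (on_handoff), NULL);
 *   g_timeout_add (100, check_timeout, NULL);
 */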
Using the videomixer element almost works, but I don't believe it will handle multiple stops and starts of the stream.
Anyway, hope someone else comes up with a better idea. You could also re-analyze your top level approach and see if there is a way you can work with multiple video clips instead of the one.

How to remove a branch of tee in an active GStreamer pipeline?

Hi everyone,
The version of GStreamer I use is 1.x. I've spent a lot of time searching for a way to delete a tee branch.
In an active pipeline, a recording bin is created as below and inserted into this pipeline by branching the tee element.
"queue ! video/x-h264, width=800, height=600, framerate=10/1, stream-format=(string)byte-stream ! h264parse ! mp4mux ! filesink location=/xxxx"
It works perfectly, except that I want to dynamically delete the recording bin and get a playable mp4 file. According to some discussions and tutorials, to get a correct mp4 file we need to handle EOS properly. After trying some methods, I always got broken mp4 files.
Does anyone have sample code written in C to show me ? I'd appreciate your help.
Your best bet for cases like this may be to create two processes. The first process would run the video, and one branch of its tee would deliver the H.264 data to the second process through whatever means.
Here are two pipelines demonstrating the concept using UDP sockets.
gst-launch-1.0 videotestsrc ! x264enc ! tee name=t ! h264parse ! avdec_h264 ! videoconvert ! ximagesink t. ! queue ! h264parse ! rtph264pay ! udpsink host=localhost port=8888
gst-launch-1.0 udpsrc port=8888 num-buffers=300 ! application/x-rtp,media=video,encoding-name=H264 ! rtph264depay ! h264parse ! mp4mux ! filesink location=/tmp/264.mp4
The trick to getting that clean mp4 is to make sure an EOS event is delivered reliably.
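For example, if you stop the receiving pipeline by hand rather than with num-buffers, running it with the -e (--eos-on-shutdown) flag makes gst-launch-1.0 send EOS through the pipeline on Ctrl+C, so mp4mux can finalize the file:

gst-launch-1.0 -e udpsrc port=8888 ! application/x-rtp,media=video,encoding-name=H264 ! rtph264depay ! h264parse ! mp4mux ! filesink location=/tmp/264.mp4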
Instead of dynamically adding it, you just have it in the pipeline by default and add a probe callback at the source pad of the queue. In the probe callback you decide whether to pass the buffer or not (GST_PAD_PROBE_DROP drops the buffer, GST_PAD_PROBE_OK passes it on to the next element), so when you get an event to start/stop recording you just return the appropriate value. For the filesink you can use multifilesink instead, so as to write to a different file every time you start/stop.
Note that the queue which drops the buffers needs to be before the mux element, otherwise the file will be corrupt.
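A minimal sketch of such a probe (the recording flag, the record_queue handle, and the start/stop wiring are hypothetical):

#include <gst/gst.h>

static gboolean recording = FALSE;   /* toggled by your start/stop logic */

static GstPadProbeReturn
recording_probe (GstPad * pad, GstPadProbeInfo * info, gpointer user_data)
{
  /* Drop buffers while not recording, pass them through otherwise */
  return recording ? GST_PAD_PROBE_OK : GST_PAD_PROBE_DROP;
}

/* After building the pipeline, attach the probe to the queue's src pad:
 *   GstPad *srcpad = gst_element_get_static_pad (record_queue, "src");
 *   gst_pad_add_probe (srcpad, GST_PAD_PROBE_TYPE_BUFFER, recording_probe, NULL, NULL);
 *   gst_object_unref (srcpad);
 */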
Hope that helps!
Finally, I came up with a solution.
Let's say that there is an active pipeline including a recording bin.
"udpsrc port=4444 caps=\"application/x-rtp, media=(string)video,
clock-rate=(int)90000, encoding-name=(string)H264 ! rtph264depay !
tee name=tp tp. ! queue ! video/x-h264, width=800, height=600,
framerate=10/1 ! decodebin ! videoconvert ! video/x-raw, format=RGBA !
autovideosink"
recording bin:
"queue ! video/x-h264, width=800, height=600, framerate=10/1,
stream-format=(string)byte-stream ! h264parse ! mp4mux ! filesink
location=/xxxx"
After a period of time, we want to stop recording and save an mp4 file while the video keeps streaming.
First, I use a blocking probe to block the src pad of the tee. In this blocking probe callback, I install an event probe to catch EOS on the sink pad of the filesink, and do a busy wait.
/* if EOS is caught in the event probe callback */
self->isGotEOS = YES;

/* busy waiting in the blocking probe callback */
while (self->isGotEOS == NO) {
    usleep(100000);
}
Before entering the busy-waiting while loop, an EOS event is created and sent to the sink pad of the recording bin.
After the busy waiting is done:
usleep(200000);
[self destory_record_elements];
I think usleep(200000) is a trick. Without it, a non-playable mp4 file is usually the result. It would seem that 200 ms is long enough for the EOS to be handled.
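For comparison, here is a sketch of the same idea without the busy wait, using only probes. The handles (pipeline, tee, record_bin, filesink, tee_record_pad) are assumed to have been kept from pipeline construction, and the cleanup is deferred to the main loop with g_idle_add so nothing heavy happens on the streaming thread.

#include <gst/gst.h>

/* Hypothetical handles kept from pipeline construction; names are illustrative. */
static GstElement *pipeline, *tee, *record_bin, *filesink;
static GstPad *tee_record_pad;    /* the tee request pad feeding the recording bin */

static gboolean
remove_record_bin (gpointer user_data)
{
  /* Runs on the main loop thread, not the streaming thread */
  gst_element_set_state (record_bin, GST_STATE_NULL);
  gst_bin_remove (GST_BIN (pipeline), record_bin);
  gst_element_release_request_pad (tee, tee_record_pad);
  gst_object_unref (tee_record_pad);
  return G_SOURCE_REMOVE;
}

static GstPadProbeReturn
on_filesink_eos (GstPad * pad, GstPadProbeInfo * info, gpointer user_data)
{
  if (GST_EVENT_TYPE (GST_PAD_PROBE_INFO_EVENT (info)) != GST_EVENT_EOS)
    return GST_PAD_PROBE_OK;

  /* EOS made it through mp4mux, so the file is finalized; clean up from the main loop */
  g_idle_add (remove_record_bin, NULL);
  return GST_PAD_PROBE_OK;
}

static GstPadProbeReturn
on_tee_blocked (GstPad * pad, GstPadProbeInfo * info, gpointer user_data)
{
  GstPad *sinkpad;

  /* Watch for the EOS arriving at the filesink */
  sinkpad = gst_element_get_static_pad (filesink, "sink");
  gst_pad_add_probe (sinkpad, GST_PAD_PROBE_TYPE_EVENT_DOWNSTREAM,
      on_filesink_eos, NULL, NULL);
  gst_object_unref (sinkpad);

  /* Detach the recording branch and push EOS into it */
  sinkpad = gst_pad_get_peer (pad);
  gst_pad_unlink (pad, sinkpad);
  gst_pad_send_event (sinkpad, gst_event_new_eos ());
  gst_object_unref (sinkpad);

  return GST_PAD_PROBE_REMOVE;
}

/* To stop recording:
 *   gst_pad_add_probe (tee_record_pad, GST_PAD_PROBE_TYPE_BLOCK_DOWNSTREAM,
 *       on_tee_blocked, NULL, NULL);
 */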
I had a similar problem previously; my pipeline was:
videotestsrc do-timestamp="TRUE" ! videoflip method=0 ! tee name=t
t. ! queue ! videoconvert ! glupload ! glshader ! autovideosink async="FALSE"
t. ! queue ! identity drop-probability=1 ! videoconvert name=conv2 ! openh264enc ! h264parse ! avimux ! multifilesink async="FALSE" post-messages=true next-file=4
Then I just change the drop-probability property on the identity element:
drop-probability = 1 + gst_pad_send_event(conv2_sinkpad, gst_event_new_eos()); - stop recording
drop-probability = 0 - resume recording