I am building my pipeline like this:
gst-launch-1.0 -v filesrc location=" + video_filename + " ! qtdemux ! queue max-size-buffers=0 max-size-time=0 ! vpudec frame-drop=0 ! queue ! imxv4l2sink device=/dev/video16
This is how I stop video playback
gst_element_set_state (m_gst_pipeline, GST_STATE_NULL);
g_main_loop_quit (m_mainloop);
gst_object_unref(m_gst_pipeline);
The stopping itself works fine; the only issue is that the last video frame stays displayed even after the program exits. How do I flush the pipeline before destruction so that I end up with a black screen?
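For reference, one possible teardown order, sketched with a plain EOS drain before dropping to NULL (this is an untested sketch; whether imxv4l2sink then clears the last frame is exactly the open question):

/* Untested sketch: drain the pipeline with EOS before going to NULL. */
gst_element_send_event (m_gst_pipeline, gst_event_new_eos ());

GstBus *bus = gst_element_get_bus (m_gst_pipeline);
GstMessage *msg = gst_bus_timed_pop_filtered (bus, GST_CLOCK_TIME_NONE,
    GST_MESSAGE_EOS | GST_MESSAGE_ERROR);
if (msg)
  gst_message_unref (msg);
gst_object_unref (bus);

gst_element_set_state (m_gst_pipeline, GST_STATE_NULL);
g_main_loop_quit (m_mainloop);
gst_object_unref (m_gst_pipeline);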
I am working on a streaming device with a CSI camera input. I want to duplicate the incoming stream with tee and then access each of these streams under a different URL using gst-rtsp-server. I can have only one consumer on my camera, so it is impossible to have two standalone pipelines. Is this possible? See the pseudo-pipeline below.
source -> tee name=t
          t. -> rtsp with url0
          t. -> rtsp with url1
Thanks!
EDIT 1:
I tried the first suggested solution, the appsink | appsrc pair, but I was only half successful. Now I have two pipelines.
nvv4l2camerasrc device=/dev/video0 ! video/x-raw(memory:NVMM), width=1920, height=1080, format=UYVY, framerate=50/1 ! nvvidconv name=conv ! video/x-raw(memory:NVMM), width=1280, height=720, format=NV12, framerate=50/1 ! nvv4l2h264enc control-rate=1 bitrate=10000000 preset-level=1 profile=0 disable-cabac=1 maxperf-enable=1 name=encoder insert-sps-pps=1 insert-vui=1 ! appsink name=appsink sync=false
and
appsrc name=appsrc format=3 is-live=true do-timestamp=true ! queue ! rtph264pay config-interval=1 name=pay0
The second pipeline is used to create the media factory. I push the buffers from the appsink to the appsrc in the callback for the new-sample signal, like this:
/* Called for every new-sample signal on the appsink; "appsrc" is the
 * appsrc element of the second (factory) pipeline. */
static GstFlowReturn
on_new_sample_from_sink (GstElement *elt, gpointer data)
{
  GstSample *sample;
  GstFlowReturn ret = GST_FLOW_OK;

  /* Take the encoded sample out of the appsink. */
  sample = gst_app_sink_pull_sample (GST_APP_SINK (elt));
  if (sample == NULL)
    return GST_FLOW_ERROR;

  /* Hand it over to the appsrc of the RTSP media pipeline, if it exists. */
  if (appsrc)
    ret = gst_app_src_push_sample (GST_APP_SRC (appsrc), sample);

  /* push_sample does not take ownership of the sample. */
  gst_sample_unref (sample);
  return ret;
}
This works: video is streamed and can be seen on a different machine using GStreamer or VLC. The problem is latency; for some reason it is about 3 s.
When I merge these two pipelines into one and create the media factory directly, without the appsink and appsrc, it works fine without the large latency.
I think that for some reason the appsrc keeps queueing buffers before it starts pushing them to its source pad. In the debug output below you can see the number of queued bytes it stabilizes at.
0:00:19.202295929 9724 0x7f680030f0 DEBUG appsrc gstappsrc.c:1819:gst_app_src_push_internal:<appsrc> queue filled (1113444 >= 200000)
0:00:19.202331834 9724 0x7f680030f0 DEBUG appsrc gstappsrc.c:1819:gst_app_src_push_internal:<appsrc> queue filled (1113444 >= 200000)
0:00:19.202353818 9724 0x7f680030f0 DEBUG appsrc gstappsrc.c:1863:gst_app_src_push_internal:<appsrc> queueing buffer 0x7f58039690
0:00:19.222150573 9724 0x7f680030f0 DEBUG appsrc gstappsrc.c:1819:gst_app_src_push_internal:<appsrc> queue filled (1141310 >= 200000)
0:00:19.222184302 9724 0x7f680030f0 DEBUG appsrc gstappsrc.c:1819:gst_app_src_push_internal:<appsrc> queue filled (1141310 >= 200000)
EDIT 2:
I added the max-buffers property to the appsink and the suggested properties to the queues, but it didn't help at all.
I just don't understand how it can buffer so many buffers, and why. If I run my test application with GST_DEBUG=appsrc:5, I get output like this:
0:00:47.923713520 14035 0x7f68003850 DEBUG appsrc gstappsrc.c:1819:gst_app_src_push_internal:<appsrc> queue filled (2507045 >= 200000)
0:00:47.923757840 14035 0x7f68003850 DEBUG appsrc gstappsrc.c:1819:gst_app_src_push_internal:<appsrc> queue filled (2507045 >= 200000)
According to this debug output, everything is queued in the appsrc even though its max-bytes property is set to 200 000 bytes. Maybe I don't understand it correctly, but it looks weird to me.
My pipelines are currently like this.
nvv4l2camerasrc device=/dev/video0 ! video/x-raw(memory:NVMM), width=1920, height=1080, format=UYVY, framerate=50/1 ! queue max-size-buffers=3 leaky=downstream ! nvvidconv name=conv ! video/x-raw(memory:NVMM), width=1280, height=720, format=NV12, framerate=50/1 ! nvv4l2h264enc control-rate=1 bitrate=10000000 preset-level=1 profile=0 disable-cabac=1 maxperf-enable=1 name=encoder insert-sps-pps=1 insert-vui=1 ! appsink name=appsink sync=false max-buffers=3
and
appsrc name=appsrc format=3 stream-type=0 is-live=true do-timestamp=true blocksize=16384 max-bytes=200000 ! queue max-size-buffers=3 leaky=no ! rtph264pay config-interval=1 name=pay0
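For reference, this is roughly how the appsink drop and appsrc block properties could be set from code; whether they actually help with the queueing here is an assumption, and the pipeline handles below are placeholders:

/* Sketch only: capture_pipeline / factory_pipeline stand in for the two
 * pipelines above; drop/block are untried assumptions. */
GstElement *sink = gst_bin_get_by_name (GST_BIN (capture_pipeline), "appsink");
GstElement *src  = gst_bin_get_by_name (GST_BIN (factory_pipeline), "appsrc");

g_object_set (sink,
    "max-buffers", 3,   /* keep at most 3 samples queued in the appsink */
    "drop", TRUE,       /* drop old samples instead of letting the queue grow */
    NULL);

g_object_set (src,
    "max-bytes", (guint64) 200000,  /* cap the internal appsrc queue */
    "block", TRUE,                  /* block push-sample when the queue is full */
    NULL);

gst_object_unref (sink);
gst_object_unref (src);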
I can think of three possibilities:
1. Use appsink/appsrc (as in this example) to separate the pipeline into something like the diagram below. You would manually take buffers out of the appsink and push them into the different appsrcs.

    Capture pipeline              Factory with URL 1
   .-------------------------.   .-------------------------------.
   | v4l2src ! ... ! appsink |   | appsrc ! encoder ! rtph264pay |
   '-------------------------'   '-------------------------------'
                                  Factory with URL 2
                                 .-------------------------------.
                                 | appsrc ! encoder ! rtph264pay |
                                 '-------------------------------'
2. Build something like the above, but use something like interpipes or intervideosink/intervideosrc in place of the appsink/appsrc to perform the buffer transfer automatically (see the sketch after this list).
3. Use something like GstRtspSink (a paid product, though).
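For option 2, a rough, generic sketch of the inter-element variant (untested; it assumes the inter plugin from gst-plugins-bad is available, and the encoder below is just a placeholder for whatever the platform provides):

Capture pipeline (the single consumer of the camera):
v4l2src ! videoconvert ! intervideosink channel=cam0

Launch string for each factory (each URL encodes its own copy):
intervideosrc channel=cam0 ! videoconvert ! x264enc tune=zerolatency ! rtph264pay name=pay0

Depending on which raw formats the inter elements accept, the videoconvert on either side may or may not be strictly needed.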
Given two GStreamer pipelines:
Sender:
gst-launch-1.0 videotestsrc do-timestamp=true pattern=snow ! video/x-raw,width=640,height=480,framerate=30/1 ! x265enc ! h265parse ! rtph265pay ! udpsink host=127.0.0.1 port=5801
Receiver:
gst-launch-1.0 -v udpsrc port=5801 ! application/x-rtp,encoding-name=H265 ! rtph265depay ! decodebin ! autovideosink sync=false
If I start the Receiver first, the pipeline works fine. If I start the Sender first, the receiver pipeline never actually starts showing any output. It does print the following to the terminal:
Setting pipeline to PAUSED ...
Pipeline is live and does not need PREROLL ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock
/GstPipeline:pipeline0/GstCapsFilter:capsfilter0.GstPad:src: caps = application/x-rtp, encoding-name=(string)H265, media=(string)video, clock-rate=(int)90000
/GstPipeline:pipeline0/GstRtpH265Depay:rtph265depay0.GstPad:sink: caps = application/x-rtp, encoding-name=(string)H265, media=(string)video, clock-rate=(int)90000
Any ideas as to why this happens? I am assuming there is some form of "start" packet sent at the beginning of the stream that the receiver needs to be "awake" for, but this is based purely on intuition, not on any documentation.
I found the solution; I would have found it sooner had I read the documentation of rtph265pay: https://gstreamer.freedesktop.org/documentation/rtp/rtph265pay.html?gi-language=c
There is a parameter called config-interval, described as "Send VPS, SPS and PPS Insertion Interval in seconds". This parameter defaults to 0, which means the payloader likely sends this data only at the beginning of the stream and never again. Setting it to a positive number lets the receiver start reading the stream every time the data is sent. For my application, a value of 1 s works great.
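For reference, the only change needed on the sender side of the pipeline above is adding config-interval to the payloader:

gst-launch-1.0 videotestsrc do-timestamp=true pattern=snow ! video/x-raw,width=640,height=480,framerate=30/1 ! x265enc ! h265parse ! rtph265pay config-interval=1 ! udpsink host=127.0.0.1 port=5801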
I have a GStreamer pipeline that grabs an MJPEG webcam stream from 3 separate cameras and saves 2 frames from each. I'm executing these commands on the USB 3.0 bus of an Odroid XU4 with Ubuntu 18.04. When doing this, I discovered I would occasionally get a garbled image like this in the collection of output images:
It wouldn't always happen, but in maybe 1 out of 5 executions an image might look like that.
I then discovered that if I decode the JPEG and then re-encode it, this never happens. See the pipeline below:
gst-launch-1.0 \
  v4l2src device=/dev/video0 num-buffers=2 ! image/jpeg,width=3840,height=2160 ! \
    queue ! jpegdec ! queue ! jpegenc ! multifilesink location=/root/cam0_%02d.jpg \
  v4l2src device=/dev/video1 num-buffers=2 ! image/jpeg,width=3840,height=2160 ! \
    queue ! jpegdec ! queue ! jpegenc ! multifilesink location=/root/cam1_%02d.jpg \
  v4l2src device=/dev/video2 num-buffers=2 ! image/jpeg,width=3840,height=2160 ! \
    queue ! jpegdec ! queue ! jpegenc ! multifilesink location=/root/cam2_%02d.jpg
Now, when I run this pipeline I have about a 1 in 5 chance of getting this error:
/GstPipeline:pipeline0/GstJpegDec:jpegdec0: No valid frames decoded before end of stream
Is there a way to make GStreamer wait for the frames instead of simply failing? I attempted this by adding the ! queue ! elements in the above pipeline, without success.
Hi everyone, the version of GStreamer I use is 1.x. I've spent a lot of time searching for a way to delete a tee branch.
In an active pipeline, a recording bin is created as shown below and inserted into the pipeline by branching off the tee element.
"queue ! video/x-h264, width=800, height=600, framerate=10/1, stream-format=(string)byte-stream ! h264parse ! mp4mux ! filesink location=/xxxx"
It works perfectly, except that I want to dynamically remove the recording bin and end up with a playable mp4 file. According to some discussions and tutorials, to get a correct mp4 file we need to handle the EOS properly. After trying several methods, I always got broken mp4 files.
Does anyone have sample code written in C to show me? I'd appreciate your help.
Your best bet for cases like this may be to create two processes. The first process would run the video, and one branch of its tee would deliver the H.264 data to the second process through whatever means (UDP sockets, for example).
Here are two pipelines demonstrating the concept using UDP sockets.
gst-launch-1.0 videotestsrc ! x264enc ! tee name=t ! h264parse ! avdec_h264 ! videoconvert ! ximagesink t. ! queue ! h264parse ! rtph264pay ! udpsink host=localhost port=8888
gst-launch-1.0 udpsrc port=8888 num-buffers=300 ! application/x-rtp,media=video,encoding-name=H264 ! rtph264depay ! h264parse ! mp4mux ! filesink location=/tmp/264.mp4
The trick to getting that clean mp4 is to make sure an EOS event is delivered reliably.
Instead of adding the recording branch dynamically, keep it in the pipeline from the start and add a probe callback on the source pad of its queue. In the probe callback you decide whether to pass the buffer on or not (returning GST_PAD_PROBE_DROP drops the buffer, GST_PAD_PROBE_OK passes it on to the next element), so when you get an event to start or stop recording you just return the appropriate value. For the sink you can use multifilesink instead of filesink, so that each start/stop writes to a different file.
Note that the queue which drops the buffers needs to be placed before the mux element, otherwise the file would be corrupt.
Hope that helps!
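A minimal sketch of that probe (the queue name "recq" and the pipeline handle are placeholders, not from the original answer):

#include <gst/gst.h>

/* Toggle this from your start/stop recording handler. */
static gboolean recording_enabled = FALSE;

static GstPadProbeReturn
record_probe_cb (GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
{
  /* Pass buffers on to the muxer while recording, drop them otherwise. */
  return recording_enabled ? GST_PAD_PROBE_OK : GST_PAD_PROBE_DROP;
}

static void
install_record_probe (GstElement *pipeline)
{
  GstElement *q = gst_bin_get_by_name (GST_BIN (pipeline), "recq");
  GstPad *src = gst_element_get_static_pad (q, "src");

  /* The probe sits on the queue's source pad, i.e. before the muxer. */
  gst_pad_add_probe (src, GST_PAD_PROBE_TYPE_BUFFER, record_probe_cb, NULL, NULL);

  gst_object_unref (src);
  gst_object_unref (q);
}

Flipping recording_enabled then controls what reaches the muxer and the multifilesink.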
Finally, I came up with a solution.
Let's say that there is an active pipeline including a recording bin.
"udpsrc port=4444 caps=\"application/x-rtp, media=(string)video,
clock-rate=(int)90000, encoding-name=(string)H264 ! rtph264depay !
tee name=tp tp. ! queue ! video/x-h264, width=800, height=600,
framerate=10/1 ! decodebin ! videoconvert ! video/x-raw, format=RGBA !
autovideosink"
recording bin:
"queue ! video/x-h264, width=800, height=600, framerate=10/1,
stream-format=(string)byte-stream ! h264parse ! mp4mux ! filesink
location=/xxxx"
After a period of time, we want to stop recording and save an mp4 file while the video keeps streaming.
First, I use a blocking probe to block the src pad of the tee. In the blocking probe callback, I use an event probe to catch the EOS on the sink pad of the filesink and do a busy wait.
If EOS is caught in the event probe callback:
self->isGotEOS = YES;
Busy wait in the blocking probe callback:
while (self->isGotEOS == NO) {
    usleep(100000);
}
Before entering the busy-wait loop, an EOS event is created and sent to the sink pad of the recording bin.
After the busy waiting is done:
usleep(200000);
[self destory_record_elements];
I think the usleep(200000) is a hack: without it, a non-playable mp4 file is usually the result. It seems that 200 ms is long enough for the EOS to be handled.
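For comparison, the same sequence can be sketched without the busy wait by reacting to the EOS asynchronously; the names below (pipeline, recording_bin, file_sink, tee_src_pad, the "recording-done" message) are placeholders, not from the post above:

#include <gst/gst.h>

/* Placeholders standing in for the objects described above. */
static GstElement *pipeline, *recording_bin, *file_sink;
static GstPad *tee_src_pad;

static GstPadProbeReturn
on_filesink_eos (GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
{
  if (GST_EVENT_TYPE (GST_PAD_PROBE_INFO_EVENT (info)) == GST_EVENT_EOS) {
    /* mp4mux has finalized the file once EOS reaches the filesink;
     * tell the application so it can remove the bin from the main thread. */
    gst_element_post_message (GST_ELEMENT (user_data),
        gst_message_new_application (GST_OBJECT (pad),
            gst_structure_new_empty ("recording-done")));
  }
  return GST_PAD_PROBE_OK;
}

static GstPadProbeReturn
on_tee_branch_blocked (GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
{
  GstPad *sinkpad;

  gst_pad_remove_probe (pad, GST_PAD_PROBE_INFO_ID (info));

  /* Watch for the EOS arriving at the filesink... */
  sinkpad = gst_element_get_static_pad (file_sink, "sink");
  gst_pad_add_probe (sinkpad, GST_PAD_PROBE_TYPE_EVENT_DOWNSTREAM,
      on_filesink_eos, pipeline, NULL);
  gst_object_unref (sinkpad);

  /* ...then push EOS into the recording bin (assumes a ghost pad named "sink"). */
  sinkpad = gst_element_get_static_pad (recording_bin, "sink");
  gst_pad_send_event (sinkpad, gst_event_new_eos ());
  gst_object_unref (sinkpad);

  return GST_PAD_PROBE_OK;
}

/* To stop recording: block the tee branch feeding the bin.  A bus watch then
 * handles the "recording-done" application message by setting the bin to
 * GST_STATE_NULL, removing it and releasing the tee request pad. */
static void
stop_recording (void)
{
  gst_pad_add_probe (tee_src_pad, GST_PAD_PROBE_TYPE_BLOCK_DOWNSTREAM,
      on_tee_branch_blocked, NULL, NULL);
}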
I had a similar problem previously. My pipeline:
videotestsrc do-timestamp="TRUE" ! videoflip method=0 ! tee name=t
t. ! queue ! videoconvert ! glupload ! glshader ! autovideosink async="FALSE"
t. ! queue ! identity drop-probability=1 ! videoconvert name=conv2 ! openh264enc ! h264parse ! avimux ! multifilesink async="FALSE" post-messages=true next-file=4
Then I just change the drop-probability property on the identity element:
To stop recording: set drop-probability=1 and send an EOS event to the sink pad of conv2, i.e. gst_pad_send_event(conv2_sinkpad, gst_event_new_eos()).
To resume recording: set drop-probability=0.
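In code that would be something along these lines (a sketch; the identity element is unnamed in the pipeline above, so the name "dropper" is an assumption, as is the pipeline handle):

/* Sketch: assumes the identity element was given name=dropper in the
 * pipeline above; conv2 is named there already. */
GstElement *dropper = gst_bin_get_by_name (GST_BIN (pipeline), "dropper");
GstElement *conv2 = gst_bin_get_by_name (GST_BIN (pipeline), "conv2");
GstPad *conv2_sinkpad = gst_element_get_static_pad (conv2, "sink");

/* Stop recording: drop all buffers and let the muxer finalize the current file. */
g_object_set (dropper, "drop-probability", 1.0, NULL);
gst_pad_send_event (conv2_sinkpad, gst_event_new_eos ());

/* Resume recording later: */
g_object_set (dropper, "drop-probability", 0.0, NULL);

gst_object_unref (conv2_sinkpad);
gst_object_unref (conv2);
gst_object_unref (dropper);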
Hi, I am trying to visualize a music file in GStreamer using the following command:
gst-launch filesrc location=file.mp3 ! decodebin ! audioconvert !
tee name=myT myT. ! queue ! autoaudiosink myT. ! queue ! goom !
colorspace ! autovideosink
But I get this error: "There may be a timestamping problem, or this computer is too slow."
Pipeline is PREROLLING ...
Pipeline is PREROLLED ...
Setting pipeline to PLAYING ...
New clock: GstAudioSinkClock
WARNING: from element /GstPipeline:pipeline0/GstAutoVideoSink:autovideosink0/GstDshowVideoSink:autovideosink0-actual-sink-dshowvideo: A lot of buffers are being dropped.
Additional debug info:
..\Source\gstreamer\libs\gst\base\gstbasesink.c(2572): gst_base_sink_is_too_late (): /GstPipeline:pipeline0/GstAutoVideoSink:autovideosink0/GstDshowVideoSink:autovideosink0-actual-sink-dshowvideo:
There may be a timestamping problem, or this computer is too slow.
ERROR: from element /GstPipeline:pipeline0/GstAutoVideoSink:autovideosink0
Assuming this has something to do with threading, I tried the following command:
gst-launch filesrc location=file.mp3 ! decodebin ! audioconvert ! tee name=myT
{ ! queue ! autoaudiosink } { tee. ! queue ! goom ! colorspace ! autovideosink }
But then it gives the following link error:
** (gst-launch-0.10:5308): WARNING **: Trying to connect elements that don't share a common ancestor: tee and queue1
0:00:00.125000000 5308 003342F0 ERROR GST_PIPELINE grammar.tab.c:656:gst_parse_perform_link: could not link tee to queue1
WARNING: erroneous pipeline: could not link tee to queue1
Can anyone tell me what is wrong? Thanks.
I cannot give you an exact answer because I don't have Windows installed.
For debugging this, use your first pipeline (it works on Linux). Use the -v parameter with gst-launch and put an identity element just before autovideosink. This will print information about the buffers passing through the identity element; look for anything strange.
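For example (a sketch, using 0.10 syntax to match the pipelines above):

gst-launch -v filesrc location=file.mp3 ! decodebin ! audioconvert ! tee name=myT myT. ! queue ! autoaudiosink myT. ! queue ! goom ! colorspace ! identity silent=false ! autovideosink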
You could also try using directdrawsink instead of autovideosink. Another test I would do is to generate the audio with audiotestsrc.
Remember that if you find a bug you can open a bug report in GNOME Bugzilla so the GStreamer developers are aware of the problem. You could even fix it yourself and send a patch.
For the "There may be a timestamping problem, or this computer is too slow." error, try sync=false, like:
`gst-launch filesrc location=file.mp3 ! decodebin ! audioconvert ! tee name=myT myT. ! queue ! autoaudiosink myT. ! queue ! goom ! colorspace ! autovideosink sync=false`
or you may have to try it at both sink ends of the tee, like:
`gst-launch filesrc location=file.mp3 ! decodebin ! audioconvert ! tee name=myT myT. ! queue ! autoaudiosink sync=false myT. ! queue ! goom ! colorspace ! autovideosink sync=false`
I also observed that replacing autovideosink with xvimagesink or ximagesink apparently solves the timestamping problem.