What does 'num-buffers' do in GStreamer?

I could only find a few pages giving a one-liner explanation of num-buffers, like this one:
Number of buffers to output before sending EOS (End of Stream). Default = -1 (unlimited)
I have a dummy pipeline using gst-launch-1.0 multifilesrc with the default loop=false. The pipeline loops because num-buffers=-1 is the default.
I don't want it to loop, and it stops looping when I set num-buffers=1 or literally any other finite number.
What does num-buffers=1 (or any other value, for that matter) actually mean?
Edit: Sample pipelines with a 10-second video
# 1. With loop=false and num-buffers=1
$> GST_DEBUG=3 gst-launch-1.0 multifilesrc location=preview.h264 loop=false num-buffers=1 ! h264parse ! avdec_h264 ! fakesink
...
Got EOS from element "pipeline0".
Execution ended after 0:00:00.425738029
...
# 2. With loop=false and num-buffers=10
$> GST_DEBUG=3 gst-launch-1.0 multifilesrc location=preview.h264 loop=false num-buffers=10 ! h264parse ! avdec_h264 ! fakesink
...
Got EOS from element "pipeline0".
Execution ended after 0:00:04.256451070
...
# 3. With neither loop flag (default=false) nor num-buffers (default=-1, unlimited)
$> GST_DEBUG=3 gst-launch-1.0 multifilesrc location=preview.h264 ! h264parse ! avdec_h264 ! fakesink
...This never ends because num-buffers=-1. Why?...
I didn't get any warnings in any case.

"num-buffers" defines how many frames will be published by a given element like videotestsrc. After sending "num-buffers", EOS event is published.
I find it useful in tests when you can define number of frames and framerate and then set expectations about how many frames shall be received during given time (e.g. using probe).
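For example, a quick sanity check along these lines (a sketch; the frame count and framerate are arbitrary) should produce exactly 100 buffers, i.e. 4 seconds of video at 25 fps, and then send EOS:
gst-launch-1.0 videotestsrc num-buffers=100 ! video/x-raw,framerate=25/1 ! fakesink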
multifilesrc doesn't seem to support "num-buffers": it will read all the files and exit (or start again when loop=true). You should see a warning when setting "num-buffers" on multifilesrc.
multifilesrc inherits from the GstBaseSrc element and therefore has a num-buffers property. It is meant to be used to replay a sequence of frames as a video:
gst-launch-1.0 multifilesrc location="%08d.png" loop=true num-buffers=1000 ! decodebin ! videoconvert ! ximagesink
This replays images named 00000000.png through 99999999.png one after another.
For your purpose, just use filesrc element, not multifilesrc.
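For example, something like this (an untested sketch reusing preview.h264 from the question) should play the stream once and end with EOS when the file is exhausted:
gst-launch-1.0 filesrc location=preview.h264 ! h264parse ! avdec_h264 ! fakesink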

Which elements are contained in decodebin?

I'm looking to decode and demux an mp4 file with gst-launch-1.0. Instead of using a bin - decodebin - I'd rather work with the separate elements. Unfortunately, I could not find this information anywhere.
My question is simple: which basic elements are contained in decodebin?
If you can direct me to a place where I can find the composition of other bins or autopluggers, that would also be nice.
decodebin will use whatever elements are available in your GStreamer installation. Remember that you can launch the pipeline with decodebin in verbose mode (-v) and work out which elements decodebin is creating. For example, take the following pipeline, which successfully plays an mp4 file (video and audio):
gst-launch-1.0 -v filesrc location=/home/usuario/GST_/BigBuckBunny_320x180.mp4 ! queue ! qtdemux name=demuxer demuxer.video_0 ! queue ! decodebin ! videoconvert ! autovideosink demuxer.audio_0 ! queue ! decodebin ! audioconvert ! autoaudiosink
Watching the output I can conclude that the resulting pipeline is:
gst-launch-1.0 -v filesrc location=/home/usuario/GST_/BigBuckBunny_320x180.mp4 ! queue ! qtdemux name=demuxer demuxer.video_0 ! queue ! h264parse ! avdec_h264 ! videoconvert ! autovideosink demuxer.audio_0 ! queue ! aacparse ! avdec_aac ! audioconvert ! autoaudiosink
The playback components from GStreamer are documented here. The playbin element will give you the full pipeline (video, audio, etc.) from the uri input.
For example, if you don't even know what kind of source you have, you can use the playbin element:
gst-launch-1.0 playbin uri=file:///home/usuario/GST_/BigBuckBunny_320x180.mp4 -v
This will automatically play the file (if possible), and the verbose output will show you the plugins used and status information.
gst-launch-1.0 can create a .dot file with a pipeline diagram every time the pipeline changes state. To enable this functionality, set the GST_DEBUG_DUMP_DOT_DIR variable to the path where the generated files should be saved. In this directory gst-launch-1.0 will create files like 0.00.00.069441527-gst-launch.READY_PAUSED.dot. You can then convert them to .png files using dot from the graphviz package. To convert one file, use the following command:
dot -Tpng 0.00.00.069441527-gst-launch.READY_PAUSED.dot -o0.00.00.069441527-gst-launch.READY_PAUSED.png
You can also convert them all by using the following command in a bash shell:
ls -1 *.dot | xargs -I{} dot -Tpng {} -o{}.png
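Putting it together, enabling the dumps for the playbin example above could look like this (a sketch; /tmp is just an arbitrary output directory):
GST_DEBUG_DUMP_DOT_DIR=/tmp gst-launch-1.0 -v playbin uri=file:///home/usuario/GST_/BigBuckBunny_320x180.mp4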
You can find more details here:
How to generate a Gstreamer pipeline diagram (graph)

Synchronize two RTSP/RTP H264 video streams capture using GStreamer

I have two AXIS IP cameras streaming H264 over RTSP/RTP. Both cameras are set to synchronize with the same NTP server, so I assume both cameras will have exactly the same clock (maybe a minor difference of a few ms).
In my application, both cameras are pointing at the same view, and it is required to process images from both cameras taken at the same time. Thus, I want to synchronize the image capture using GStreamer.
I have tried invoking the two pipelines separately in different command prompts, but the videos are 2-3 seconds apart.
gst-launch rtspsrc location=rtsp://192.168.16.136:554/live ! rtph264depay ! h264parse ! splitmuxsink max-size-time=100000000 location=cam1_video_%d.mp4
gst-launch rtspsrc location=rtsp://192.168.16.186:554/live ! rtph264depay ! h264parse ! splitmuxsink max-size-time=100000000 location=cam2_video_%d.mp4
Can someone suggest a gstreamer pipeline to synchronize both H264 streams and record them into separate video files?
Thanks!
ARM
I am able to launch a pipeline using gst-launch as shown below. It shows a good improvement in captured-frame synchronization compared to launching two pipelines. Most of the time they differ by 0-500 ms. Still, I want to synchronize them to better than 150 ms accuracy.
rtspsrc location=rtsp://192.168.16.136:554/axis-media/media.amp?videocodec=h264 \
! rtph264depay ! h264parse \
! splitmuxsink max-size-time=10000000000 location=axis/video_136_%d.mp4 \
rtspsrc location=rtsp://192.168.16.186:554/axis-media/media.amp?videocodec=h264 \
! rtph264depay ! h264parse \
! splitmuxsink max-size-time=10000000000 location=axis/video_186_%d.mp4
I'd appreciate it if someone can point out other ideas!
~Arm
What do you mean by synchronize? If you record to separate video files you do not need any synchronization, as this is going to totally separate them. Each RT(S)P stream will contain different timestamps; if you want to align them somehow to the same time (I mean real human time, like "both should start from 15:00"), then you have to configure the cameras that way somehow (this is just an idea).
Also, you did not tell us what is inside those rtp/rtsp streams (is it MPEG ts or pure IP, etc.). So I will give an example with MPEG ts encapsulated RTP streams.
We will go step by step:
Suppose this is one camera, just to demonstrate how it may look:
gst-launch-1.0 -v videotestsrc ! videoconvert ! x264enc ! mpegtsmux ! rtpmp2tpay ! udpsink host=127.0.0.1 port=8888
Then this would be the receiver (it must use rtpmp2tdepay; we are encapsulating metadata inside the MPEG container):
gst-launch-1.0 udpsrc port=8888 caps=application/x-rtp\,\ media\=\(string\)video\,\ encoding-name\=\(string\)MP2T ! rtpmp2tdepay ! decodebin ! videoconvert ! autovideosink
If you test this with your camera, the autovideosink means that a new window will pop up displaying your camera.
Then you can try to store it in a file; we will use mp4mux.
So for the same camera input we do:
gst-launch-1.0 -e udpsrc port=8888 caps=application/x-rtp\,\ media\=\(string\)video\,\ encoding-name\=\(string\)MP2T ! rtpmp2tdepay ! tsdemux ! h264parse ! mp4mux ! filesink location=test.mp4
Explanation: We do not decode and re-encode (a waste of processing power), so I will just demux the MPEG ts stream and then, instead of decoding the H264, just parse it for the mp4mux, which accepts video/x-h264.
Now you could use the same pipeline for each camera, or you can just copy-paste all the elements into the same pipeline.
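For example, a combined receiver for two cameras might look roughly like this (an untested sketch; the ports and file names are placeholders, not taken from your setup):
gst-launch-1.0 -e \
udpsrc port=8888 caps="application/x-rtp, media=(string)video, encoding-name=(string)MP2T" ! rtpmp2tdepay ! tsdemux ! h264parse ! mp4mux ! filesink location=cam1.mp4 \
udpsrc port=8889 caps="application/x-rtp, media=(string)video, encoding-name=(string)MP2T" ! rtpmp2tdepay ! tsdemux ! h264parse ! mp4mux ! filesink location=cam2.mp4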
Now, as you did not provide any - at least partial - attempt to make something out of this, that part is going to be your homework :) Or make yourself clearer about the synchronization, as I do not understand it.
UPDATE
After your update to the question this answer is not very useful, but I will keep it here as a reference. I have no idea how to synchronize that.
One more piece of advice: try to look at the timestamps after udpsrc; maybe they are already synchronized. In that case you can use streamsynchronizer to synchronize the two streams, or maybe a video/audio mixer:
gst-launch-1.0 -v udpsrc port=8888 ! identity silent=false ! fakesink
This should print the timestamps (PTS, DTS, Duration ..):
/GstPipeline:pipeline0/GstIdentity:identity0: last-message = chain ******* (identity0:sink) (1328 bytes, dts: 0:00:02.707033598, pts:0:00:02.707033598, duration: none, offset: -1, offset_end: -1, flags: 00004000 tag-memory ) 0x7f57dc016400
Compare the PTS of each stream. Maybe you could combine the two udpsrc elements in one pipeline and put an identity (each with a different name, e.g. name=something1) after each udpsrc to make them start reception together.
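A rough sketch of that combined inspection pipeline (the ports and identity names are placeholders) could be:
gst-launch-1.0 -v \
udpsrc port=8888 ! identity name=something1 silent=false ! fakesink \
udpsrc port=8889 ! identity name=something2 silent=false ! fakesink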
HTH

In GStreamer how do I simultaneously playback and record an h264 AVI file of a v4l2src?

Files recorded with gstreamer-0.10 at 25 FPS in FourCIF format play back in fast-forward mode. Any solution would be appreciated. The recorded files also sometimes skip 3-4 seconds.
The pipeline I'm attempting to use is:
gst-launch v4l2src device=/dev/video2 !
'video/x-raw-yuv,width=704,height=576, framerate=25/1' ! tee
name=liveTee ! queue ! mfw_isink liveTee. ! queue ! vpuenc ! avimux !
filesink location=/home/Recording.avi
I'm gonna take a rough stab at it and reformat your question a bit. This is mostly a GStreamer and Freescale question, not so much Qt.
gst-launch-1.0 -e videotestsrc pattern=ball do-timestamp=true
is-live=true ! timeoverlay !
'video/x-raw,width=704,height=576,framerate=25/1' ! tee name=liveTee !
queue leaky=downstream ! videoconvert !
ximagesink async=false
liveTee. ! queue leaky=downstream ! videoconvert ! queue ! x264enc !
avimux ! filesink location=/tmp/test.avi
The thing to keep in mind is that your encoder has to keep up with the live playback, so your pipeline needs to handle the case where the encoder falls out of sync. On the queue elements behind the tee, use the leaky property.
Then you also want to be careful about your video source and what it's supplying. It looks like in your case you want live video, but if your source were an existing video file the pipeline would probably need some more tweaking.
NOTE: It may be even simpler than that; just adding async=false to the video sink appears to be very important.
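Adapting that back to the original v4l2src capture, an untested sketch (device, resolution, framerate and output path taken from the question; x264enc stands in for the Freescale vpuenc, and the caps are written in 1.0 syntax) might be:
gst-launch-1.0 -e v4l2src device=/dev/video2 \
! video/x-raw,width=704,height=576,framerate=25/1 ! tee name=liveTee \
! queue leaky=downstream ! videoconvert ! ximagesink async=false \
liveTee. ! queue leaky=downstream ! videoconvert ! x264enc ! avimux \
! filesink location=/home/Recording.avi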

How to remove a branch of tee in an active GStreamer pipeline?

The version of GStreamer I use is 1.x. I've spent a lot of time searching for a way to delete a tee branch.
In an active pipeline, a recording bin is created as below and inserted into this pipeline by branching the tee element.
"queue ! video/x-h264, width=800, height=600, framerate=10/1, stream-format=(string)byte-stream ! h264parse ! mp4mux ! filesink location=/xxxx"
It works perfectly, except that I want to dynamically delete the recording bin and get a playable mp4 file. According to some discussions and tutorials, to get a correct mp4 file we need to handle the EOS properly. After trying some methods, I always got broken mp4 files.
Does anyone have sample code written in C to show me ? I'd appreciate your help.
Your best bet for cases like this may be to create two processes. The first process would run the video, and one branch of its tee would deliver the h264 data to the second process through whatever means.
Here are two pipelines demonstrating the concept using UDP sockets.
gst-launch-1.0 videotestsrc ! x264enc ! tee name=t ! h264parse ! avdec_h264 ! videoconvert ! ximagesink t. ! queue ! h264parse ! rtph264pay ! udpsink host=localhost port=8888
gst-launch-1.0 udpsrc port=8888 num-buffers=300 ! application/x-rtp,media=video,encoding-name=H264 ! rtph264depay ! h264parse ! mp4mux ! filesink location=/tmp/264.mp4
The trick to getting that clean mp4 is to make sure an EOS event is delivered reliably.
Instead of dynamically adding it, just have it in the pipeline by default and add a probe callback at the source pad of the queue. In the probe callback you do the trick of either passing the buffer or not (GST_PAD_PROBE_DROP drops the buffer and GST_PAD_PROBE_OK passes the buffer on to the next element), so when you get an event to start/stop recording you just need to return the appropriate value. For the filesink you can use multifilesink instead, so as to write to a different file every time you start/stop.
Note that the queue which drops the buffers needs to be before the mux element; otherwise the file will be corrupt.
Hope that helps!
Finally, I came up with a solution.
Let's say that there is an active pipeline including a recording bin.
"udpsrc port=4444 caps=\"application/x-rtp, media=(string)video,
clock-rate=(int)90000, encoding-name=(string)H264 ! rtph264depay !
tee name=tp tp. ! queue ! video/x-h264, width=800, height=600,
framerate=10/1 ! decodebin ! videoconvert ! video/x-raw, format=RGBA !
autovideosink"
recording bin:
"queue ! video/x-h264, width=800, height=600, framerate=10/1,
stream-format=(string)byte-stream ! h264parse ! mp4mux ! filesink
location=/xxxx"
After a period of time, we want to stop recording and save as a mp4 file, and video media is still streaming.
First, I use a blocking probe to block the src pad of the tee. In this blocking probe callback, I use an event probe to catch EOS on the sink pad of the filesink and do a busy wait.
*if EOS is caught in the event probe callback
self->isGotEOS = YES;
*busy waiting in the blocking probe callback
while (self->isGotEOS == NO) {
usleep(100000);
}
Before entering the busy-waiting while loop, an EOS event is created and sent to the sink pad of the recording bin.
After the busy waiting is done:
usleep(200000);
[self destory_record_elements];
I think the usleep(200000) is a hack. Without it, a non-playable mp4 file is usually the result. It would seem that 200 ms is long enough for handling the EOS.
I had a similar problem previously; my pipeline was:
videotestsrc do-timestamp="TRUE" ! videoflip method=0 ! tee name=t
t. ! queue ! videoconvert ! glupload ! glshader ! autovideosink async="FALSE"
t. ! queue ! identity drop-probability=1 ! videoconvert name=conv2 ! openh264enc ! h264parse ! avimux ! multifilesink async="FALSE" post-messages=true next-file=4
Then I just change the drop-probability property on the identity element:
To stop recording: set drop-probability = 1 and send gst_pad_send_event(conv2_sinkpad, gst_event_new_eos());
To resume recording: set drop-probability = 0

GStreamer multifilesrc never throws EOS

It appears that the GStreamer 1.0 multifilesrc element will not automatically throw an EOS when it runs out of files. If a stop-index=N is specified then it will EOS after N frames.
gst-launch-1.0 -ev multifilesrc location="tmp/frame%04d.jpg" stop-index=20 ! image/jpeg,framerate=10/1 ! jpegdec ! videoconvert ! videorate ! xvimagesink
Is there a way to have multifilesrc automatically generate EOS when the file list is exhausted or otherwise pack the frames into a stream with an EOS? Otherwise my pipeline just hangs at the end.