Trying to implement a GStreamer pipeline with a tee, using the following elements.
gst_bin_add_many(GST_BIN (pipeline), <rpicamsrc>, <capsfilter>, <h264parse>, tee, <queue>, <rtph264pay>, <fakesink>, <queue>, <avdec_h264>, <videoconvert>, <capsfilter>, <customplugin>, <fakesink>, nullptr);
For better understanding I have provided the element names. The goal is to create the following tee pipeline:
rpicamsrc ! capsfilter ! h264parse ! tee name=t t. ! queue ! rtph264pay ! fakesink t. ! queue ! avdec_h264 ! videoconvert ! capsfilter ! customplugin ! fakesink
But it always fails without reporting any error, and no video frames are captured. After some testing I identified that even this simple pipeline fails:
gst_element_link_many(<rpicamsrc>, <capsfilter>, <h264parse>, <rtph264pay>, <fakesink>, nullptr);
Interestingly, if I remove the second fakesink from the gst_bin_add_many() call above, it works. I'm not sure what the problem is; I tried a different sink such as autovideosink, but no luck. In the failure case the bus never receives a GST_MESSAGE_ASYNC_DONE message, but in the success case it does. Both the failure and the success case get GST_STREAM_STATUS_TYPE_CREATE, GST_STREAM_STATUS_TYPE_ENTER and GST_MESSAGE_STREAM_START. What am I doing wrong? Any ideas?
gst_element_link_many() is a convenience wrapper for a non-branched pipeline: it links each element in the list to the next one. It does not know that you want to link the tee element in the middle of the pipeline to multiple elements. In your case, for example, it tries to connect the fakesink to the queue in the middle of your pipeline.
Easy way
You can use gst_parse_launch() to let GStreamer figure out what links to what.
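For example, a minimal sketch (the launch string mirrors your target pipeline above and assumes your customplugin element is registered; error handling is kept to the bare minimum):

#include <gst/gst.h>

GError *error = NULL;
GstElement *pipeline = gst_parse_launch(
    "rpicamsrc ! capsfilter ! h264parse ! tee name=t "
    "t. ! queue ! rtph264pay ! fakesink "
    "t. ! queue ! avdec_h264 ! videoconvert ! capsfilter ! customplugin ! fakesink",
    &error);
if (pipeline == NULL || error != NULL) {
    g_printerr("Failed to build pipeline: %s\n",
               error ? error->message : "unknown error");
    g_clear_error(&error);
}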
By hand
If you have an element like tee, you must use gst_element_link() or gst_element_link_pads() to tell GStreamer explicitly which element connects to which.
It is possible to create the two chains with gst_element_link_many(),
rpicamsrc → capsfilter → h264parse → tee → queue → rtph264pay → fakesink
queue → avdec_h264 → videoconvert → capsfilter → customplugin → fakesink
and then link the tee element in the first chain to the queue in the second chain with gst_element_link_pads().
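A minimal sketch of that approach, assuming variables named after the elements passed to gst_bin_add_many() (queue1/queue2, capsfilter1/capsfilter2 and fakesink1/fakesink2 are hypothetical names for the two instances of each element):

/* Chain 1: source through the tee into the RTP branch. Linking the
 * tee to queue1 here requests the tee's first src pad automatically. */
gst_element_link_many(rpicamsrc, capsfilter1, h264parse, tee,
                      queue1, rtph264pay, fakesink1, nullptr);

/* Chain 2: the decode branch, not yet attached to the tee. */
gst_element_link_many(queue2, avdec_h264, videoconvert, capsfilter2,
                      customplugin, fakesink2, nullptr);

/* Attach the second branch: passing NULL for the src pad name lets
 * GStreamer request a new src pad from the tee. */
gst_element_link_pads(tee, NULL, queue2, "sink");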
Related
I'm looking to decode and demux an mp4 file with gst-launch-1.0. Instead of using a bin, decodebin, I'd rather work with the separate elements. Unfortunately, I have not been able to find out what those are.
My question is simple: what basic elements are contained in the decodebin?
If you can direct me to a place where I can find the composition of other bins or autopluggers, that would also be nice.
The decodebin will use all available elements in your GStreamer installation. Remember that you can launch the pipeline with decodebin and the verbose flag -v to see which elements decodebin is creating. For example, in the following pipeline, which successfully plays an mp4 file (video and audio):
gst-launch-1.0 -v filesrc location=/home/usuario/GST_/BigBuckBunny_320x180.mp4 ! queue ! qtdemux name=demuxer demuxer.video_0 ! queue ! decodebin ! videoconvert ! autovideosink demuxer.audio_0 ! queue ! decodebin ! audioconvert ! autoaudiosink
Watching the output I can conclude that the resulting pipeline is:
gst-launch-1.0 -v filesrc location=/home/usuario/GST_/BigBuckBunny_320x180.mp4 ! queue ! qtdemux name=demuxer demuxer.video_0 ! queue ! h264parse ! avdec_h264 ! videoconvert ! autovideosink demuxer.audio_0 ! queue ! aacparse ! avdec_aac ! audioconvert ! autoaudiosink
The playback components from GStreamer are available here. The playbin element will give you the full pipeline (video, audio, etc.) from the URI input.
For example, if you don't even know what kind of source you have, you can use the playbin element:
gst-launch-1.0 playbin uri=file:///home/usuario/GST_/BigBuckBunny_320x180.mp4 -v
This will automatically play the file (if possible), and the verbose output will show you the plugins used and status information.
gst-launch-1.0 can create a .dot file with a pipeline diagram every time the pipeline changes state. To enable this functionality, set the GST_DEBUG_DUMP_DOT_DIR variable to the path where the generated files should be saved. In this directory gst-launch-1.0 will create files like 0.00.00.069441527-gst-launch.READY_PAUSED.dot. You can then convert them to .png files using dot from the graphviz package. To convert one file, use the following command:
dot -Tpng 0.00.00.069441527-gst-launch.READY_PAUSED.dot -o0.00.00.069441527-gst-launch.READY_PAUSED.png
You can also convert them all, using the following command in a bash shell:
ls -1 *.dot | xargs -I{} dot -Tpng {} -o{}.png
You can find more details here:
How to generate a Gstreamer pipeline diagram (graph)
I'm a newbie to GStreamer, so I would appreciate it if you could help me.
I'm trying to listen to a pipeline and record frames to a file.
I have tried the following pipeline:
gst-launch-1.0 udpsrc port=5600 do-timestamp=true ! application/x-rtp, payload=96 ! rtph264depay ! avdec_h264 ! clockoverlay ! jpegenc ! avimux ! filesink location=stream.avi
I want to record the whole timeline even if the sender doesn't provide any frame data.
By default, the recorder appends frames only when the pipeline receives valid frames, but I want to see black frames when the sender doesn't send data.
I experimented a bit, and I don't think you'll be able to do this with a plain gst-launch command. Unfortunately, it would probably involve writing an application that detects when packets/buffers stop coming in and then modifies the pipeline. If you want to give it a go, I'd suggest the input-selector element in something like this:
gst-launch-1.0 videotestsrc pattern=black ! video/x-raw ! input-selector name=selector ! clockoverlay ! jpegenc ! avimux ! filesink location=stream.avi
Then I'd create a method to attach the stream to the input-selector:
udpsrc port=5600 do-timestamp=true ! application/x-rtp, payload=96 ! rtph264depay ! avdec_h264 ! identity name=buffer-checker
To detect no packets coming in, you can listen for the handoff signal on the identity element, and then remove the stream when it times out and switch over to the black test pattern from the videotestsrc by using the active-pad property on the input-selector.
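A rough sketch of that watchdog idea (hedged: buffer-checker and selector refer to the elements in the snippets above, and the sink_0 pad name and one-second threshold are assumptions):

static gint64 last_buffer_time;  /* monotonic time of the last handoff */

/* "handoff" fires for every buffer passing through the identity element. */
static void on_handoff(GstElement *identity, GstBuffer *buffer, gpointer user_data)
{
    last_buffer_time = g_get_monotonic_time();
}

/* Runs periodically; if no buffer arrived for one second, switch the
 * input-selector over to the pad fed by the black videotestsrc. */
static gboolean check_timeout(gpointer user_data)
{
    GstElement *selector = GST_ELEMENT(user_data);
    if (g_get_monotonic_time() - last_buffer_time > G_USEC_PER_SEC) {
        GstPad *black_pad = gst_element_get_static_pad(selector, "sink_0");
        g_object_set(selector, "active-pad", black_pad, NULL);
        gst_object_unref(black_pad);
    }
    return G_SOURCE_CONTINUE;
}

/* Wiring, after building the pipeline: */
g_signal_connect(buffer_checker, "handoff", G_CALLBACK(on_handoff), NULL);
g_timeout_add(100, check_timeout, selector);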
Using the videomixer element almost works, but I don't believe it will handle multiple stops and starts of the stream.
Anyway, I hope someone else comes up with a better idea. You could also re-analyze your top-level approach and see if there is a way you can work with multiple video clips instead of a single one.
Files recorded with gstreamer-0.10 at 25 FPS in FourCIF format play back in fast-forward mode, and sometimes skip 3-4 seconds in the recorded files. Any solution would be appreciated.
The pipeline I'm attempting to use is:
gst-launch v4l2src device=/dev/video2 !
'video/x-raw-yuv,width=704,height=576, framerate=25/1' ! tee
name=liveTee ! queue ! mfw_isink liveTee. ! queue ! vpuenc ! avimux !
filesink location=/home/Recording.avi
I'm going to take a rough stab at it and reformat your question a bit. This is mostly a GStreamer and Freescale question, not so much Qt.
gst-launch-1.0 -e videotestsrc pattern=ball do-timestamp=true
is-live=true ! timeoverlay !
'video/x-raw,width=704,height=576,framerate=25/1' ! tee name=liveTee !
queue leaky=downstream ! videoconvert !
ximagesink async=false
liveTee. ! queue leaky=downstream ! videoconvert ! queue ! x264enc !
avimux ! filesink location=/tmp/test.avi
The thing to keep in mind is that your encoder has to keep up with live playback, so your pipeline needs to handle the case where the encoder falls out of sync. On the queue elements behind the tee, use the leaky property.
Then you also want to be careful about your video source and what it's supplying. It looks like in your case you want live video, but if your source was an existing video file the pipeline would probably need some more tweaking.
NOTE: It may be even simpler than that; just adding async=false to the videosink appears to be very important.
Hi everyone,
The version of GStreamer I use is 1.x. I've spent a lot of time searching for a way to delete a tee branch.
In an active pipeline, a recording bin is created as below and inserted into this pipeline by branching the tee element.
"queue ! video/x-h264, width=800, height=600, framerate=10/1, stream-format=(string)byte-stream ! h264parse ! mp4mux ! filesink location=/xxxx"
It works perfectly, except that I want to dynamically delete the recording bin and get a playable mp4 file. According to some discussions and tutorials, to get a correct mp4 file we need to handle EOS properly. After trying several methods, I always got broken mp4 files.
Does anyone have sample code written in C to show me? I'd appreciate your help.
Your best bet for cases like this may be to create two processes. The first process would run the video, and one branch of its tee would deliver the H.264 data to the second process through whatever means.
Here are two pipelines demonstrating the concept using UDP sockets.
gst-launch-1.0 videotestsrc ! x264enc ! tee name=t ! h264parse ! avdec_h264 ! videoconvert ! ximagesink t. ! queue ! h264parse ! rtph264pay ! udpsink host=localhost port=8888
gst-launch-1.0 udpsrc port=8888 num-buffers=300 ! application/x-rtp,media=video,encoding-name=H264 ! rtph264depay ! h264parse ! mp4mux ! filesink location=/tmp/264.mp4
The trick to getting that clean mp4 is to make sure an EOS event is delivered reliably.
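In application code, that could look something like this (a sketch, assuming pipeline is the receiving pipeline): send EOS and wait for it to reach the bus before going to NULL, so mp4mux gets a chance to write its index.

/* Ask the pipeline to finish cleanly... */
gst_element_send_event(pipeline, gst_event_new_eos());

/* ...and block until the EOS (or an error) reaches the bus
 * before tearing the pipeline down. */
GstBus *bus = gst_element_get_bus(pipeline);
GstMessage *msg = gst_bus_timed_pop_filtered(bus, GST_CLOCK_TIME_NONE,
    GST_MESSAGE_EOS | GST_MESSAGE_ERROR);
if (msg != NULL)
    gst_message_unref(msg);
gst_object_unref(bus);
gst_element_set_state(pipeline, GST_STATE_NULL);

With gst-launch-1.0, the -e switch serves the same purpose when you interrupt a pipeline; in the receiver above, num-buffers=300 makes udpsrc emit the EOS by itself.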
Instead of adding the recording branch dynamically, you can have it in the pipeline by default and add a probe callback at the source pad of the queue. In the probe callback you decide whether to pass the buffer or not: returning GST_PAD_PROBE_DROP drops the buffer, and GST_PAD_PROBE_OK passes it on to the next element. So when you get an event to start/stop recording, you just return the appropriate value. For the filesink, you can use multifilesink instead, so as to write to a different file every time you start/stop.
Note that the queue which drops the buffers needs to be before the mux element; otherwise the file would be corrupt.
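A minimal sketch of such a probe (queue_srcpad, the source pad of the queue in front of the mux, and the recording flag are assumed names):

static volatile gboolean recording = FALSE;  /* toggled on start/stop requests */

/* Pass buffers on to the muxer while recording, drop them otherwise. */
static GstPadProbeReturn
record_probe(GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
{
    return recording ? GST_PAD_PROBE_OK : GST_PAD_PROBE_DROP;
}

/* Installed once, after building the pipeline: */
gst_pad_add_probe(queue_srcpad, GST_PAD_PROBE_TYPE_BUFFER,
                  record_probe, NULL, NULL);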
Hope that helps!
Finally, I came up with a solution.
Let's say that there is an active pipeline including a recording bin.
"udpsrc port=4444 caps=\"application/x-rtp, media=(string)video,
clock-rate=(int)90000, encoding-name=(string)H264 ! rtph264depay !
tee name=tp tp. ! queue ! video/x-h264, width=800, height=600,
framerate=10/1 ! decodebin ! videoconvert ! video/x-raw, format=RGBA !
autovideosink"
recording bin:
"queue ! video/x-h264, width=800, height=600, framerate=10/1,
stream-format=(string)byte-stream ! h264parse ! mp4mux ! filesink
location=/xxxx"
After a period of time, we want to stop recording and save to an mp4 file while the video keeps streaming.
First, I use a blocking probe to block the src pad of the tee. In this blocking probe callback, I use an event probe to catch EOS on the sink pad of the filesink, and busy-wait.
If EOS is caught in the event probe callback:
self->isGotEOS = YES;
Busy waiting in the blocking probe callback:
while (self->isGotEOS == NO) {
    usleep(100000);
}
Before entering the busy-waiting while loop, an EOS event is created and sent to the sink pad of the recording bin.
After the busy waiting is done:
usleep(200000);
[self destory_record_elements];
I think usleep(200000) is a trick; without it, a non-playable mp4 file is usually the result. It would seem that 200 ms is long enough for handling the EOS.
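In outline, the two probes described above could look like this in C (a sketch with assumed names: tee_srcpad feeds the recording bin, recordbin_sinkpad is the bin's sink pad, filesink_sinkpad belongs to the filesink, and got_eos stands in for self->isGotEOS):

static volatile gboolean got_eos = FALSE;

/* Event probe on the filesink sink pad: flag the EOS when it arrives. */
static GstPadProbeReturn
eos_probe(GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
{
    if (GST_EVENT_TYPE(GST_PAD_PROBE_INFO_EVENT(info)) == GST_EVENT_EOS)
        got_eos = TRUE;
    return GST_PAD_PROBE_OK;
}

/* Blocking probe on the tee src pad that feeds the recording bin. */
static GstPadProbeReturn
block_probe(GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
{
    gst_pad_add_probe(filesink_sinkpad, GST_PAD_PROBE_TYPE_EVENT_DOWNSTREAM,
                      eos_probe, NULL, NULL);
    /* Dataflow is blocked here, so the mux only sees the EOS. */
    gst_pad_send_event(recordbin_sinkpad, gst_event_new_eos());
    while (!got_eos)
        g_usleep(100000);   /* the busy wait described above */
    g_usleep(200000);       /* grace period before tearing the bin down */
    /* ...unlink, remove and unref the recording bin here... */
    return GST_PAD_PROBE_REMOVE;
}

gst_pad_add_probe(tee_srcpad, GST_PAD_PROBE_TYPE_BLOCK_DOWNSTREAM,
                  block_probe, NULL, NULL);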
I had a similar problem previously; my pipeline was:
videotestsrc do-timestamp="TRUE" ! videoflip method=0 ! tee name=t
t. ! queue ! videoconvert ! glupload ! glshader ! autovideosink async="FALSE"
t. ! queue ! identity drop-probability=1 ! videoconvert name=conv2 ! openh264enc ! h264parse ! avimux ! multifilesink async="FALSE" post-messages=true next-file=4
Then I just change the drop-probability property on the identity element:
To stop recording: set drop-probability = 1 and send gst_pad_send_event(conv2_sinkpad, gst_event_new_eos());
To resume recording: set drop-probability = 0.
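In C that boils down to something like the following (the identity variable and conv2_sinkpad are assumed handles to the unnamed identity element and the sink pad of conv2 from the pipeline above):

/* Stop recording: drop all buffers and finalize the current file. */
g_object_set(identity, "drop-probability", 1.0, NULL);
gst_pad_send_event(conv2_sinkpad, gst_event_new_eos());

/* Resume recording: let buffers through again; multifilesink then
 * opens the next file according to its next-file setting. */
g_object_set(identity, "drop-probability", 0.0, NULL);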
Hi, I am trying to visualize a music file in GStreamer using the following command:
gst-launch filesrc location=file.mp3 ! decodebin ! audioconvert !
tee name=myT myT. ! queue ! autoaudiosink myT. ! queue ! goom !
colorspace ! autovideosink
But I get this error: "There may be a timestamping problem, or this computer is too slow."
Pipeline is PREROLLING ...
Pipeline is PREROLLED ...
Setting pipeline to PLAYING ...
New clock: GstAudioSinkClock
WARNING: from element /GstPipeline:pipeline0/GstAutoVideoSink:autovideosink0/GstDshowVideoSink:autovideosink0-actual-sink-dshowvideo: A lot of buffers are being dropped.
Additional debug info:
..\Source\gstreamer\libs\gst\base\gstbasesink.c(2572): gst_base_sink_is_too_late (): /GstPipeline:pipeline0/GstAutoVideoSink:autovideosink0/GstDshowVideoSink:autovideosink0-actual-sink-dshowvideo:
There may be a timestamping problem, or this computer is too slow.
ERROR: from element /GstPipeline:pipeline0/GstAutoVideoSink:autovideosink0
Assuming this has something to do with threading, I tried the following command:
gst-launch filesrc location=file.mp3 ! decodebin ! audioconvert ! tee name=myT
{ ! queue ! autoaudiosink } { tee. ! queue ! goom ! colorspace ! autovideosink }
But then it gives the following link error:
** (gst-launch-0.10:5308): WARNING **: Trying to connect elements that don't share a common ancestor: tee and queue1
0:00:00.125000000 5308 003342F0 ERROR GST_PIPELINE grammar.tab.c:656:gst_parse_perform_link: could not link tee to queue1
WARNING: erroneous pipeline: could not link tee to queue1
Can anyone tell me what is wrong? Thanks
I cannot give you an exact answer because I don't have Windows installed.
For debugging this, use your first pipeline (it works on Linux). Use the -v parameter with gst-launch and put an identity element just before autovideosink. This will print information about the buffers passing through the identity element; look for anything strange.
You could also try directdrawsink instead of autovideosink. Another test I would do is to generate the audio with audiotestsrc.
Remember that if you find a bug, you can open a bug report in GNOME Bugzilla so the GStreamer developers are aware of the problem. You could even fix it yourself and send a patch.
For the "There may be a timestamping problem, or this computer is too slow." error, try sync=false, like:
`gst-launch filesrc location=file.mp3 ! decodebin ! audioconvert ! tee name=myT myT. ! queue ! autoaudiosink myT. ! queue ! goom ! colorspace ! autovideosink sync=false`
or you may have to set it on both sink ends of the tee, like:
`gst-launch filesrc location=file.mp3 ! decodebin ! audioconvert ! tee name=myT myT. ! queue ! autoaudiosink sync=false myT. ! queue ! goom ! colorspace ! autovideosink sync=false`
I also observed that replacing autovideosink with xvimagesink or ximagesink apparently solves the timestamping problem.