In my code, I currently have a pipeline description that is a string. I use gst_parse_launch(...) to build the pipeline, and everything is working great.
However, now I am interested in setting some properties on one of the elements in the pipeline, specifically the pipeline's sink element (in my case autovideosink). I would like to set the enable-last-sample property, but autovideosink doesn't have that property. So my question is: how can I determine which video sink autovideosink has resolved to, so that I can set this property on it?
My application is written in C++.
One way to find out what it resolved to is to use the awesome pipeline graph debug feature. For example:
GST_DEBUG_BIN_TO_DOT_FILE(yourPipeline, GST_DEBUG_GRAPH_SHOW_ALL, file_name)
See GST_DEBUG_BIN_TO_DOT_FILE for details.
You can then render that graphviz graph and inspect your pipeline (including all bin-children).
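A concrete sketch (assuming pipeline is the element returned by gst_parse_launch; the dump is only written if the GST_DEBUG_DUMP_DOT_DIR environment variable points to a writable directory):

// Dump once the pipeline is PLAYING, so autovideosink has resolved its child.
// Writes $GST_DEBUG_DUMP_DOT_DIR/pipeline.dot.
GST_DEBUG_BIN_TO_DOT_FILE(GST_BIN(pipeline), GST_DEBUG_GRAPH_SHOW_ALL, "pipeline");

Render the result with graphviz, e.g. dot -Tpng pipeline.dot -o pipeline.png.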
autovideosink implements the GstChildProxy interface:
https://gstreamer.freedesktop.org/documentation/gstreamer/gstchildproxy.html?gi-language=c
You should be able to set properties directly via this interface, or hook into its signals to be notified when a new child is added.
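A minimal sketch of the callback approach (assuming the autovideosink was given name=videosink in the launch description; the callback checks that the resolved child actually has the property before setting it):

// Called whenever autovideosink adds its resolved child element.
static void on_child_added(GstChildProxy *proxy, GObject *child,
                           gchar *name, gpointer user_data)
{
    // Only set the property if the real sink supports it.
    if (g_object_class_find_property(G_OBJECT_GET_CLASS(child),
                                     "enable-last-sample"))
        g_object_set(child, "enable-last-sample", FALSE, NULL);
}

GstElement *sink = gst_bin_get_by_name(GST_BIN(pipeline), "videosink");
g_signal_connect(sink, "child-added", G_CALLBACK(on_child_added), NULL);
gst_object_unref(sink);

Connect the signal before taking the pipeline to READY/PLAYING, since that is when autovideosink picks its child.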
So my situation is: I have a GStreamer pipeline (DeepStream, more precisely) that runs from a C++ program. I have a function that checks user commands, and I'm able to pause/resume the pipeline that way. My question: is there a way to grab a screenshot from the pipeline's video stream in a similar fashion?
If you are looking to check whether the video is making it through your pipeline, use an "identity dump=1" element in your pipeline. This will display frames as a hexdump if video is actually making it through at the point where the identity element is inserted.
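For example (videotestsrc stands in here for your actual source):

gst-launch-1.0 videotestsrc num-buffers=3 ! identity dump=1 ! fakesink

Moving the identity element along the pipeline lets you bisect where the data stops flowing.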
Suppose I have an application written with GStreamer that has a pipeline for processing video data. For simplicity let's assume it looks like this:
appsrc->identity->appsink
While an application with this hardcoded pipeline provides some functionality, I could imagine that users of my application might want to replace the identity element with arbitrarily complex pipelines (still having the interface of one sink and one source with defined capabilities). Does GStreamer provide any functionality that would allow injecting whole pipelines into my application? If the pipeline could be defined with gst-launch syntax that would be great, but C code is also fine. Or do I need to resort to some generic mechanisms for writing plugins?
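For illustration, here's a sketch of the kind of injection I have in mind, assuming gst_parse_bin_from_description() from the parse API is the right tool (the "videoflip" description is just a placeholder for a user-supplied one):

GError *error = NULL;
// Wrap the user's launch-style description in a bin; TRUE ghosts the
// unlinked pads so the bin exposes one sink pad and one source pad.
GstElement *custom = gst_parse_bin_from_description(
    "videoflip method=horizontal-flip", TRUE, &error);
gst_bin_add(GST_BIN(pipeline), custom);
// Drop it in where the identity element used to sit.
gst_element_link_many(appsrc, custom, appsink, NULL);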
I've got an RTSP cam with backchannel support, and I'm trying to get it to work with the command-line tool gst-launch-1.0. The incoming streams are not an issue, but the backchannel, when enabled, doesn't produce a sink. However, I've dug through the sources and got this little hint from the developer of the rtspsrc element:
Set backchannel=onvif to enable, and use the 'push-backchannel-sample'
action signal with the correct stream id.
I can't seem to find any info about (action) signals on the command line for gst-launch-1.0.
Does anyone know if it is even possible to send signals from gst-launch-1.0?
Thanks,
Bram
I think this is meant to be called from code and not usable from gst-launch-1.0.
Just for reference, the signal is called push-backchannel-buffer (not -sample).
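From application code, emitting the action signal would look roughly like this (a sketch; the stream id must match the backchannel stream from the SDP, and sample is a GstSample you built holding the encoded audio):

GstFlowReturn ret;
// src is the rtspsrc element created with backchannel=onvif.
g_signal_emit_by_name(src, "push-backchannel-buffer", stream_id, sample, &ret);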
Also, the gst-launch-1.0 manual page says:
Please note that gst-launch-1.0 is primarily a debugging tool. You
should not build applications on top of it. For applications, use the
gst_parse_launch() function of the GStreamer API as an easy way to
construct pipelines from pipeline descriptions.
I have been doing a few experiments with GStreamer using the gst-launch utility. However, ultimately the aim is to implement this same functionality in my own application using the GStreamer libraries.
The problem is that it's quite difficult (at least for someone not used to the GStreamer API) to "port" what I test on the command line to C/C++ code.
An example of a command that I may need to port is:
gst-launch filesrc location="CLIP8.mp4" ! decodebin2 ! jpegenc ! multifilesink location="test%d.jpg"
What's the most straightforward way to take such a command and write it in C in my own app?
Also, as a side question: how could I replace the multifilesink and do this work in memory instead? (I'm using OpenCV to perform a few calculations on a given image extracted from the video.) Is it possible to decode directly to memory and use the frames right away, without first saving them to the filesystem? It could (and should) be sequential; I mean that I would only move on to the next frame after I'm done processing the current one, so that I wouldn't have to keep thousands of frames in memory.
What do you say?
I found the solution. There's a function built into GStreamer that parses gst-launch arguments and returns a pipeline. The function is called gst_parse_launch and is documented here: http://gstreamer.freedesktop.org/data/doc/gstreamer/head/gstreamer/html/gstreamer-GstParse.html
I haven't tested it yet, but it's probably the fastest way to convert what I have been testing on the command line to C/C++ code.
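For the command above, the ported code would look roughly like this (a sketch; note that the 0.10-era decodebin2 is simply decodebin in GStreamer 1.0):

#include <gst/gst.h>

int main(int argc, char *argv[])
{
    gst_init(&argc, &argv);
    GError *error = NULL;
    // Same description string as on the command line.
    GstElement *pipeline = gst_parse_launch(
        "filesrc location=CLIP8.mp4 ! decodebin ! jpegenc ! "
        "multifilesink location=test%d.jpg", &error);
    if (!pipeline) {
        g_printerr("Parse error: %s\n", error->message);
        return 1;
    }
    gst_element_set_state(pipeline, GST_STATE_PLAYING);
    // Block until the file has been fully processed (EOS) or an error occurs.
    GstBus *bus = gst_element_get_bus(pipeline);
    GstMessage *msg = gst_bus_timed_pop_filtered(bus, GST_CLOCK_TIME_NONE,
        (GstMessageType)(GST_MESSAGE_EOS | GST_MESSAGE_ERROR));
    if (msg)
        gst_message_unref(msg);
    gst_element_set_state(pipeline, GST_STATE_NULL);
    gst_object_unref(bus);
    gst_object_unref(pipeline);
    return 0;
}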
You could always pop open the source of gst-launch and grab the bits that parse out the command-line and turn it into a GStreamer pipeline.
That way you can just pass in the "command line" as a string, and the function will return a complete pipeline for you.
By the way, there is an interesting GStreamer element that provides a good way to integrate a processing pipeline into your (C/C++) application: appsink
http://gstreamer.freedesktop.org/data/doc/gstreamer/head/gst-plugins-base-libs/html/gst-plugins-base-libs-appsink.html
With this one you can basically retrieve the frames from the pipeline as a big C array and do whatever you want with them. You set up a callback function, which will be activated every time a new frame is available from the pipeline thread...
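A sketch of that setup (assuming appsink points at the appsink element of your pipeline; requires the app library from gst-plugins-base):

#include <gst/gst.h>
#include <gst/app/gstappsink.h>

// Runs on the streaming thread each time a frame reaches the appsink.
static GstFlowReturn on_new_sample(GstAppSink *appsink, gpointer user_data)
{
    GstSample *sample = gst_app_sink_pull_sample(appsink);
    GstBuffer *buffer = gst_sample_get_buffer(sample);
    GstMapInfo map;
    if (gst_buffer_map(buffer, &map, GST_MAP_READ)) {
        // map.data / map.size hold the raw frame; hand it to OpenCV here.
        gst_buffer_unmap(buffer, &map);
    }
    gst_sample_unref(sample);
    return GST_FLOW_OK;
}

// Hook it up: appsink only emits signals when asked to.
g_object_set(appsink, "emit-signals", TRUE, NULL);
g_signal_connect(appsink, "new-sample", G_CALLBACK(on_new_sample), NULL);

Since the callback handles one sample at a time on the streaming thread, this also gives you the sequential, one-frame-at-a-time behaviour asked about above.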
I want to use Qt to create a simple GUI application that can play a local video file. I could use Phonon, which does all the work behind the scenes, but I need a little more control. I have already succeeded in implementing a GStreamer pipeline using the decodebin and autovideosink elements. Now I want to channel the output to a Qt widget.
Has anyone ever succeeded in doing this? (I suppose so since there are Qt-based video players that build upon GStreamer.) Can someone point me in the right direction on how to do it?
Note: This question is similar to my previous posted question on how to connect Qt with an incoming RTP stream. This seemed to be quite challenging. This question will be easier to answer I think.
Update 1
Patrice's suggestion to use libVLC is very helpful already. Here's a somewhat cleaner version of the code found on VLC's website:
Sample for Qt + libVLC.
However, my original question remains: How do I connect GStreamer to a Qt widget?
Update 2
After some experimentation I ended up with this working sample. It depends on GstWidget.h and GstWidget.cpp from my own little GstSupport library. However, take note that it is currently only tested on the Mac version of Qt.
To connect GStreamer with your QWidget, you need to get the window handle using QWidget::winId() and pass it to gst_x_overlay_set_xwindow_id().
Rough sample code:
// Create an X11 video sink and bring it to READY so it can accept a window handle.
sink = gst_element_factory_make("xvimagesink", "sink");
gst_element_set_state(sink, GST_STATE_READY);
// Flush pending X11 requests before handing the window over to GStreamer.
QApplication::syncX();
// Tell the sink to draw into the Qt widget's native window.
gst_x_overlay_set_xwindow_id(GST_X_OVERLAY(sink), widget->winId());
Also, you will want your widget to be backed by a native window which is achieved by setting the Qt::AA_NativeWindows attribute at the application level or the Qt::WA_NativeWindow attribute at the widget level.
Since Phonon is based on GStreamer, the place to look for details is the Phonon source tree (available here: http://gitorious.org/phonon/import/trees/master). For a video player you will most likely need a video display widget, such as gstreamer/videowidget.h (cpp), which in turn uses the X11 renderer (gstreamer/x11renderer.h, cpp). The sink used is xvimagesink, falling back to ximagesink if the first cannot be created.
The basic trick is to overlay the VideoWidget with the video output. The X11 handle needed to do this is retrieved using the QWidget::winId method, which is platform specific (as are the sinks, so no biggie).
Also, if overlay is unavailable, a QWidgetVideoSink is used, which converts the video stream into individual frames for the WidgetRenderer class. This class, in turn, makes the current frame available as a QImage object, ready for any type of processing.
So to answer your question - use either overlays (as X11Renderer) or extract individual QImages from the video stream (as QWidgetVideoSink).
VLC itself is a Qt-based video player (since version 0.99). It also lets you stream or read a stream. You can find all the information you need here: http://wiki.videolan.org/Developers_Corner. You only have to create an instance of the player and associate it with a widget. Then you have full control over the player.
I have already tested it (on Linux and Windows) playing local music and video files and it works fine.
Give it a try and see by yourself.
Hope that helps.
Edit:
It seems that if you want to use VLC, you need to write or find (I do not know whether one exists) a GStreamer codec, as explained on the VideoLAN wiki. I think I would do that.