emit (action) signal from gst-launch-1.0 pipeline - gstreamer

I've got an RTSP cam with backchannel support and I'm trying to get it to work with the command-line tool gst-launch-1.0. The incoming streams are not an issue, but when the backchannel is enabled it doesn't produce a sink. However, I dug through the sources and got this little hint from the developer of the rtspsrc element:
Set backchannel=onvif to enable, and use the 'push-backchannel-sample'
action signal with the correct stream id.
I can't seem to find any info about (action) signals on the command line for gst-launch-1.0.
Does anyone know if it is even possible to send signals from gst-launch-1.0?
Thanks,
Bram

I think this is meant to be called from code and not usable from gst-launch-1.0.
Just for reference, the signal is called push-backchannel-buffer (not -sample).
Also, the gst-launch-1.0 manual page says:
Please note that gst-launch-1.0 is primarily a debugging tool. You
should not build applications on top of it. For applications, use the
gst_parse_launch() function of the GStreamer API as an easy way to
construct pipelines from pipeline descriptions.

Related

GStreamer: get child element from GstElement*

In my code, I currently have a pipeline description that is a string. I use gst_parse_launch(...) to utilize this pipeline and everything is working great.
However, now I am interested in setting some properties on one of the elements in the pipeline, specifically the pipeline's sink element (in my case autovideosink). I would like to set the enable-last-sample property, but autovideosink doesn't have that property. Thus my question is: how can I determine which video sink the autovideosink has resolved to, so I can set this property?
My application is written in C++.
One way to find out what it resolved to is to use the awesome pipeline graph debug feature. For example:
GST_DEBUG_BIN_TO_DOT_FILE(yourPipeline, GST_DEBUG_GRAPH_SHOW_ALL, file_name)
See GST_DEBUG_BIN_TO_DOT_FILE for details.
You can then render that graphviz graph and inspect your pipeline (including all bin-children).
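One practical detail: the dot files are only written when the dump directory is set in the environment before the application starts. A sketch of the workflow (the binary name is a placeholder):

```shell
# .dot files are only emitted when this is set before startup:
export GST_DEBUG_DUMP_DOT_DIR=/tmp
./your-app                      # hypothetical binary running the pipeline

# render a dump with graphviz and inspect the resolved children:
dot -Tpng /tmp/*.dot -O
```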
autovideosink implements the GstChildProxy interface:
https://gstreamer.freedesktop.org/documentation/gstreamer/gstchildproxy.html?gi-language=c
You should be able to set things directly via this interface, or hook into the callbacks directly when a new child is being added.

Gstreamer webrtc pipeline problem for open source camera

Hello everyone,
I am trying to implement low-latency video streaming using WebRTC. I write my code in C++ (websockets etc.) and use only the WebRTC signalling server, which is written in Python (ref1).
When I use a webcam, I have no problem streaming video to the client; however, when I try to use the FLIR camera, I run into a lot of problems with the implementation.
There are a few questions in my mind to clear. I hope you guys give me some recommendations.
Is there a specific data type that I should feed into webrtc as a source? I would just like to know what kind of data the pipeline should deliver to webrtc.
When I try to send a single image to check whether my WebRTC implementation works properly (without the webcam), I get the error "Pipeline is empty". What can cause this problem? This is actually the main reason I want to understand the data type, i.e. what exactly I should pipe into webrtc.
ref1: https://github.com/centricular/gstwebrtc-demos/tree/master/signalling
P.S.:
Client and Jetson Nano are on the same network
The signalling server runs on the Jetson Nano
By running gst-inspect-1.0 webrtcbin you will find that both the source and sink capabilities of this element are just application/x-rtp.
Therefore, if you want to consume webrtcbin's source pads, you will need to pipe them into an RTP depayloader such as rtph264depay for video or rtpopusdepay for audio.
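Conversely, anything you feed into webrtcbin's sink pad has to be RTP already. A hedged gst-launch-style sketch of the sending side, purely to illustrate the caps (videotestsrc stands in for the FLIR source, and the encoder/payloader choice is an assumption; a real session additionally needs the signalling code, so this line alone won't negotiate a connection):

```shell
# video must arrive at webrtcbin as application/x-rtp,
# so encode and payload it first:
gst-launch-1.0 videotestsrc ! videoconvert ! x264enc tune=zerolatency \
  ! rtph264pay \
  ! 'application/x-rtp,media=video,encoding-name=H264,payload=96' \
  ! webrtcbin name=sendrecv
```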

How to further investigate linking problems in gstreamer?

First of all, you should know this question is titled that way because that's where I ended up stuck after narrowing down my problem for quite a while. Since there are probably better approaches to my problem, I'm also explaining below what the problem is and what I've been doing to try to solve it. Suggestions on other approaches are very welcome.
The problem
I'm using a gstreamer port to Android to render videos from remote cameras through the RTSP protocol (UDP is the transport method).
Using playbin things were working quite fine until they didn't anymore for a subset of these cameras.
Unfortunately I don't have access to the cameras themselves since they belong to our company's client, but the first thing that sprung to my mind was that it's got to be a problem with them.
Then, there's another Android app which we're using as reference that is still able to play video from these cameras normally, so I'm now trying to do my best to further investigate the issue on my end (our Android app).
The problem has been quite deterministic: some cameras always fail, others always work. When they fail, the reported cause is sometimes not-linked.
I managed to dump the pipeline graph for each of these cameras when the application tries to play their video. I noticed that for each of the cameras that fail, the associated pipeline is always missing something: some are missing just the sink element, others are missing both the source and the sink:
Dump of pipeline with source only:
Dump of pipeline without a source or a sink:
Dump of pipeline with both (these are the cases where we can indeed play):
These are dumps of pipelines built by the playbin.
Attempted solution
I've been trying to test what happens if I build the pipeline manually from scratch (the same pipeline built by the playbin in the third image above) and force all cameras' video through it. Since all cameras used to work, my guess is that negotiation is now failing for some of them, so the playbin does not build the pipeline properly for those cameras; if I assemble it myself, it should all work as expected. (I'm assuming that rtspsrc in combination with glimagesink was also the pipeline chosen by the playbin for these cameras.)
This is how I'm trying to build this pipeline myself:
priv->pipeline = gst_pipeline_new("rtspstreamer");

source = gst_element_factory_make("rtspsrc", NULL);
if (!source) {
    GST_DEBUG("Source could not be created");
}

sink = gst_element_factory_make("glimagesink", NULL);
if (!sink) {
    GST_DEBUG("Sink could not be created");
}

if (!gst_bin_add(GST_BIN(priv->pipeline), source)) {
    GST_DEBUG("Could not add source to pipeline");
}
if (!gst_bin_add(GST_BIN(priv->pipeline), sink)) {
    GST_DEBUG("Could not add sink to pipeline");
}

if (!gst_element_link(source, sink)) {
    GST_DEBUG("Source and sink could not be linked");
}

g_object_set(source, "location", uri, NULL);
So, running the code above, I get the following error:
Source and sink could not be linked
This is where I'm stuck. How can I investigate further why these components fail to link? I think maybe there should be another component between them in the pipeline, but that doesn't seem to be the case judging by the dump of the successful pipeline (third image) above.
Thanks in advance for any help.

Gstreamer - Convert command line gst-launch to C code

I have been running a few experiments with GStreamer using the gst-launch utility. However, ultimately the aim is to implement this same functionality in my own application using the GStreamer libraries.
The problem is that it is difficult (at least for someone not used to the GStreamer API) to "port" what I test on the command line to C/C++ code.
An example of a command that I may need to port is:
gst-launch filesrc location="CLIP8.mp4" ! decodebin2 ! jpegenc ! multifilesink location="test%d.jpg"
What's the most straightforward way to take such a command and write it in C in my own app?
Also, as a side question: how could I replace the multifilesink and do this work in memory instead? (I'm using OpenCV to perform a few calculations on images extracted from the video.) Is it possible to decode directly to memory and use the frames right away, without first saving them to the filesystem? It could (and should) be sequential, i.e. only move on to the next frame after I'm done processing the current one, so that I wouldn't have to keep thousands of frames in memory.
What do you say?
I found the solution. There's a function built into GStreamer that parses gst-launch-style descriptions and returns a pipeline. The function is called gst_parse_launch and is documented here: http://gstreamer.freedesktop.org/data/doc/gstreamer/head/gstreamer/html/gstreamer-GstParse.html
I haven't tested it, but it's probably the fastest way to convert what you have been testing on the command line to C/C++ code.
You could always pop open the source of gst-launch and grab the bits that parse out the command-line and turn it into a GStreamer pipeline.
That way you can just pass in the "command line" as a string, and the function will return a complete pipeline for you.
By the way, there is an interesting GStreamer element that provides a good way to integrate a processing pipeline into your (C/C++) application: appsink
http://gstreamer.freedesktop.org/data/doc/gstreamer/head/gst-plugins-base-libs/html/gst-plugins-base-libs-appsink.html
With this element you can retrieve the frames from the pipeline as a plain C array and do whatever you want with them. You set up a callback function, which is invoked every time a new frame is available from the pipeline thread...

How can I create a video (from RGB and PCM) programatically with GStreamer?

My application displays video and audio and I want to add a recording feature.
I've considered FFmpeg, but I have to compile my application with VS so I can't use it. So I'm trying to do it with GStreamer, but I'm not finding any example or guide on how to create a video. Any help?
(I can also consider using any other alternatives, but they must be cross-platform).
The Application Development Manual explains very well how to use GStreamer from your code; try reading it first.
Then you can experiment with the gst-launch tool, build a pipeline, and execute it from your application using the gst_parse_launch function.
If you share more details of your problem, you'll get a more helpful answer.