GStreamer: probe after rtph265pay never called

I have an RTSP server and I want to extend the RTP buffer header. For this purpose I added a probe to the src pad of rtph265pay, but it is never called. My pipeline:
( appsrc name=vsrc ! nvvidconv ! video/x-raw(memory:NVMM),format=NV12 !
omxh265enc MeasureEncoderLatency=true bitrate=20000000 control-rate=2 !
rtph265pay name=pay0 pt=96 )
Code where I attach the probe:
static GstPadProbeReturn test_probe (GstPad *pad, GstPadProbeInfo *info,
    gpointer user_data)
{
  cout << "i'm here";
  return GST_PAD_PROBE_OK;  /* a probe callback must return a GstPadProbeReturn */
}
void mediaConfigure (GstRTSPMediaFactory *factory, GstRTSPMedia *media,
    gpointer user_data)
{
  GstElement *element, *rtph265pay;
  GstPad *pad;
  element = gst_rtsp_media_get_element (media);
  rtph265pay = gst_bin_get_by_name_recurse_up (GST_BIN (element), "pay0");
  pad = gst_element_get_static_pad (rtph265pay, "src");
  gst_pad_add_probe (pad, GST_PAD_PROBE_TYPE_BUFFER,
      (GstPadProbeCallback) test_probe, NULL, NULL);
  gst_object_unref (pad);
}
If I set "sink" instead of "src", the probe fires, but I need the "src" pad in order to change the RTP buffer header.
What is wrong here?

Maybe because the rtph265pay's src pad isn't linked to any other pad (meaning rtph265pay is the end of the pipeline), the element doesn't push any buffers out of its src pad.
Try attaching a fakesink after the rtph265pay.
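Once buffers do flow on the src pad, the probe can extend the RTP header with the GstRTPBuffer API from gst-plugins-base. A minimal, untested sketch of what the probe body could look like (the extension ID 1 and the two-byte value are arbitrary placeholders):
#include <gst/rtp/gstrtpbuffer.h>
static GstPadProbeReturn test_probe (GstPad *pad, GstPadProbeInfo *info,
    gpointer user_data)
{
  GstBuffer *buf = GST_PAD_PROBE_INFO_BUFFER (info);
  GstRTPBuffer rtp = GST_RTP_BUFFER_INIT;
  guint16 value = 0x1234;  /* hypothetical application data */
  /* probe buffers may be shared, so make the buffer writable first */
  buf = gst_buffer_make_writable (buf);
  GST_PAD_PROBE_INFO_DATA (info) = buf;
  if (gst_rtp_buffer_map (buf, GST_MAP_READWRITE, &rtp)) {
    /* append a one-byte RTP header extension (RFC 5285) with ID 1 */
    gst_rtp_buffer_add_extension_onebyte_header (&rtp, 1, &value, sizeof (value));
    gst_rtp_buffer_unmap (&rtp);
  }
  return GST_PAD_PROBE_OK;
}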

Related

gstreamer rtsp tee appsink can't emit signal new-sample

I am using GStreamer to play and process an RTSP stream.
rtspsrc location=rtspt://admin:scut123456#192.168.1.64:554/Streaming/Channels/1 ! tee name=t ! queue ! decodebin ! videoconvert ! autovideosink t. ! queue ! rtph264depay ! h264parse ! appsink name=mysink
and I wrote C++ code like this:
#include <gst/gst.h>
void printIt(GList *p) {
  if (!p) {
    g_print("p null\n");
    return;
  }
  while (p) {
    GstPad *pad = (GstPad *) p->data;
    g_print("[%s]", GST_OBJECT_NAME (pad));
    p = p->next;
  }
  g_print("\n");
}
GstFlowReturn new_sample_cb (GstElement * appsink, gpointer udata) {
g_print("new-sample cb\n");
return GST_FLOW_OK;
}
GstFlowReturn new_preroll_cb (GstElement* appsink, gpointer udata) {
g_print("new_preroll_cb cb\n");
return GST_FLOW_OK;
}
int
main (int argc, char *argv[]) {
GstElement *pipeline;
GstBus *bus;
GstMessage *msg;
/* Initialize GStreamer */
gst_init (&argc, &argv);
/* Build the pipeline */
pipeline = gst_parse_launch("rtspsrc location=rtspt://admin:scut123456#192.168.1.64:554/Streaming/Channels/1 ! tee name=t ! queue ! decodebin ! videoconvert ! autovideosink t. ! queue ! rtph264depay ! h264parse ! appsink name=mysink", NULL);
GstElement *appsink = gst_bin_get_by_name(GST_BIN(pipeline), "mysink");
printIt(appsink->pads);
g_signal_connect(appsink, "new-sample", G_CALLBACK(new_sample_cb), pipeline);
g_print("sig conn new-sample\n");
g_signal_connect(appsink, "new-preroll", G_CALLBACK(new_preroll_cb), pipeline);
g_print("sig conn new-preroll\n");
/* Start playing */
gst_element_set_state (pipeline, GST_STATE_PLAYING);
/* Wait until error or EOS */
bus = gst_element_get_bus (pipeline);
msg = gst_bus_timed_pop_filtered (bus, GST_CLOCK_TIME_NONE,
    GstMessageType (GST_MESSAGE_ERROR | GST_MESSAGE_EOS));
/* Free resources */
if (msg != NULL)
gst_message_unref (msg);
gst_object_unref (bus);
gst_element_set_state (pipeline, GST_STATE_NULL);
gst_object_unref (pipeline);
return 0;
}
When I compile and run it, the video plays in the autovideosink, but the appsink's new-sample signal is never called. What should I do if I want to process a frame in the appsink?
Thanks.
By default appsink favors callbacks over signals for performance reasons (though I wouldn't consider your use case a performance problem). For appsink to emit signals you need to set its emit-signals property to true; it defaults to false.
P.S. Apart from the above, I think you will need a GMainLoop for event processing, as demonstrated in the GStreamer examples.
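A minimal sketch of the two changes, reusing the names from the question (pull-sample is the standard appsink signal for fetching the frame):
/* before setting the pipeline to PLAYING */
g_object_set (appsink, "emit-signals", TRUE, NULL);
GstFlowReturn new_sample_cb (GstElement *appsink, gpointer udata) {
  GstSample *sample = NULL;
  /* fetch the sample that triggered the signal */
  g_signal_emit_by_name (appsink, "pull-sample", &sample);
  if (sample) {
    GstBuffer *buf = gst_sample_get_buffer (sample);  /* the H.264 data */
    g_print ("got sample, %u bytes\n", (guint) gst_buffer_get_size (buf));
    gst_sample_unref (sample);
  }
  return GST_FLOW_OK;
}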

GStreamer - Pipeline how to connect filesrc to qmlglsink

I'm new to the world of GStreamer, so I can't figure out how it works and how to pair all the GstElements.
I want to merge video (mp4 for example, or any other video format) with QML (from Qt) as an overlay.
This example works perfectly fine.
GstElement *pipeline = gst_pipeline_new(NULL);
GstElement *src = gst_element_factory_make("videotestsrc", NULL);
GstElement *glupload = gst_element_factory_make("glupload", NULL);
GstElement *sink = gst_element_factory_make("qmlglsink", NULL);
g_assert(src && glupload && sink);
gst_bin_add_many(GST_BIN(pipeline), src, glupload, sink, NULL);
gst_element_link_many(src, glupload, sink, NULL);
But that example uses videotestsrc as Source, I would prefer to use something like filesrc.
I tried this code:
GstElement *pipeline = gst_pipeline_new (NULL);
GstElement *src = gst_element_factory_make ("filesrc", "file-source");
GstElement *parser = gst_element_factory_make("h264parse",NULL);
GstElement *decoder = gst_element_factory_make("avdec_h264",NULL);
GstElement *colors = gst_element_factory_make("glcolorconvert",NULL);
GstElement *glupload = gst_element_factory_make ("glupload", NULL);
GstElement *sink = gst_element_factory_make ("qmlglsink", NULL);
g_assert (src && parser && decoder && colors && glupload && sink);
g_object_set (G_OBJECT (src), "location", "file:///home/test.mp4", NULL);
gst_bin_add_many (GST_BIN (pipeline), src, parser, decoder, glupload, colors, sink, NULL);
gst_element_link_many (src, parser, decoder, glupload, colors, sink, NULL);
It compiles, but the output is just a black screen.
Since I'm not sure how the GStreamer pipeline works, I tried this.
First, get the file from disk with filesrc, then parse it with h264parse and decode it with avdec_h264. Then forward that (I guess raw uncompressed data) to glupload, and fix the colors with glcolorconvert, since qmlglsink uses RGBA and avdec_h264 outputs I420. After the colors are adjusted, forward it to qmlglsink to be displayed in QML.
I'm missing something, and I don't know how to pair the GstElements; as I said, I need to pair filesrc (any video format) with qmlglsink.
You can try something like this:
MediaPlayer {
    id: playVideo
    source: "gst-pipeline: filesrc location=/home/root/skim-debris.mp4 ! qtdemux ! avdec_h264 ! qtvideosink"
    autoLoad: true
    autoPlay: true
    playbackRate: 1.0
    loops: 10
}
VideoOutput {
    anchors.fill: parent
    source: playVideo
}
It is easier to use a bin or one of GStreamer's auto-pluggers.
But the main issue here is that you are treating an MP4 file as a raw H.264 stream. That cannot work; you need to demux the media streams out of the container.
E.g. the pipeline should be something like this:
gst-launch-1.0 filesrc location=/home/test.mp4 ! qtdemux ! \
h264parse ! avdec_h264 ! glupload ! glcolorconvert ! qmlglsink
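In C, the extra wrinkle is that qtdemux's source pads only appear once the container has been parsed, so it cannot be linked statically; you connect to its pad-added signal and finish linking there. A rough, untested sketch along the lines of the question's code (note that filesrc's location takes a plain path, not a file:// URI):
static void on_pad_added (GstElement *demux, GstPad *pad, gpointer user_data)
{
  GstElement *parser = GST_ELEMENT (user_data);
  GstPad *sinkpad = gst_element_get_static_pad (parser, "sink");
  /* link the demuxer's newly created pad to h264parse (video pad only) */
  if (!gst_pad_is_linked (sinkpad))
    gst_pad_link (pad, sinkpad);
  gst_object_unref (sinkpad);
}
/* ... after creating the elements from the question, plus qtdemux ... */
GstElement *demux = gst_element_factory_make ("qtdemux", NULL);
g_object_set (G_OBJECT (src), "location", "/home/test.mp4", NULL);
gst_bin_add_many (GST_BIN (pipeline), src, demux, parser, decoder,
    glupload, colors, sink, NULL);
gst_element_link (src, demux);  /* static link up to the demuxer */
gst_element_link_many (parser, decoder, glupload, colors, sink, NULL);
g_signal_connect (demux, "pad-added", G_CALLBACK (on_pad_added), parser);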

In GStreamer, can't remove tee section after EOS

I am trying to create a webcam application on an embedded device and learn the GStreamer C API at the same time. I have dealt with GStreamer launch pipelines for a while, so I am already somewhat familiar with GStreamer.
My end goal is a pipeline that can dynamically stream video, record video and save pictures, all driven by external commands. I've started small with my implementation; right now I'm focusing on taking a picture in one branch of a tee while the other branch keeps flowing. The other branch is just a fakesink for now, but eventually it will be an h264 encoder with a muxer and audio, saving videos.
here is a simple view of my pipeline:
v4l2src ! capsfilter ! tee ! queue ! fakesink tee. ! queue ! videoconvert ! pngenc ! filesink
My idea was to dynamically add the picture portion of the pipeline while it's running.
The flow of my program goes like this:
1. A picture event is triggered (currently a simple timer).
2. Add a blocking probe on the tee.
3. Add the picture pipeline and link it to the tee.
4. Set it to playing.
5. Set a blocking probe on the filesink to verify it has received data.
6. Send EOS down the pipeline, starting at the videoconvert.
7. Set a blocking probe on the tee pad linked to the picture pipeline.
8. Set the picture pipeline to NULL, then remove it and the tee pad.
When the program executes, the EOS probe on the tee pad for the picture pipeline is never called; instead the whole pipeline goes to EOS, I get an internal data stream error, and there is no picture.
I want to make sure the filesink gets exactly one buffer, as I can't stop the v4l2src stream or give it num-buffers=1. I guess my problem right now is: how do I verify that the filesink gets only one buffer? Which pad should I send the EOS event on so that the picture is properly saved? And lastly, how do I make sure only this one branch sees the EOS?
I've pored over all of the GStreamer tutorials and SO questions, but most are either unanswered or haven't helped my situation.
here is my code:
#include <QDebug>
#include <QTimer>
#include "gstpipeline.hpp"
#include "gsttypes.hpp"
using namespace INSP_GST_TYPES;
gstpipeline::gstpipeline()
: mV4l2Src(NULL)
, mEncoder(NULL)
, mPngEncoder(NULL)
, mVideoFileSink(NULL)
, mPictureFileSink(NULL)
, mRawCapsFilter(NULL)
, mEncodedCapsFilter(NULL)
, mEncoderVideoConvert(NULL)
, mPngVideoConvert(NULL)
, mEncoderQueue(NULL)
, mMatroskaMux(NULL)
, mPipeline(NULL)
{
}
void gstpipeline::init()
{
mV4l2Src = gst_element_factory_make("v4l2src", V4L2SOURCE_NAME);
mRawCapsFilter = gst_element_factory_make("capsfilter", RAW_CAPS_NAME);
mRawFakesinkQueue = gst_element_factory_make("queue", RAW_FAKESINK_QUEUE_NAME);
mRawFakeSink = gst_element_factory_make("fakesink", RAW_FAKESINK_NAME);
mRawTee = gst_element_factory_make("tee", RAW_TEE_NAME);
mPipeline = gst_pipeline_new(PIPELINE_NAME);
mRawCaps = gst_caps_new_simple("video/x-raw",
"format", G_TYPE_STRING, "NV12",
"width", G_TYPE_INT, 1280,
"height", G_TYPE_INT, 720,
"framerate", GST_TYPE_FRACTION, 30, 1,
NULL);
g_object_set(mRawCapsFilter, "caps", mRawCaps, NULL);
if(!mPipeline || !mV4l2Src || !mRawCapsFilter || !mRawTee || !mRawFakesinkQueue || !mRawFakeSink)
{
qCritical() << "Failed to create main gst elements";
return;
}
else
{
qWarning() << "created the initial pipeline";
}
linkRawPipeline();
}
void gstpipeline::linkRawPipeline()
{
gst_bin_add_many(GST_BIN(mPipeline), mV4l2Src, mRawCapsFilter, mRawTee, mRawFakesinkQueue, mRawFakeSink, NULL);
g_object_set(mPipeline, "message-forward", TRUE, NULL);
if(gst_element_link_many(mV4l2Src, mRawCapsFilter, mRawTee, NULL) != TRUE)
{
qCritical() << "Failed to link raw pipeline";
return;
}
if(gst_element_link_many(mRawFakesinkQueue, mRawFakeSink, NULL) != TRUE)
{
qCritical() << "Failed to link fakesink pipeline";
return;
}
/* Manually link the Tee, which has "Request" pads */
GstPad* tee_fakesink_pad = gst_element_get_request_pad (mRawTee, "src_%u");
qWarning ("Obtained request pad %s for fakesink branch.", gst_pad_get_name (tee_fakesink_pad));
GstPad* raw_queue_pad = gst_element_get_static_pad (mRawFakesinkQueue, "sink");
if (gst_pad_link (tee_fakesink_pad, raw_queue_pad) != GST_PAD_LINK_OK)
{
qCritical ("raw Tee could not be linked.");
}
gst_object_unref(tee_fakesink_pad);
gst_object_unref(raw_queue_pad);
if (gst_element_set_state (mPipeline, GST_STATE_PLAYING) == GST_STATE_CHANGE_FAILURE)
{
qCritical() << "Unable to set the pipeline to the ready state";
gst_object_unref (mPipeline);
}
else
{
qWarning() << "set pipeline to playing";
GMainLoop* loop = g_main_loop_new (NULL, FALSE);
gst_bus_add_watch (GST_ELEMENT_BUS (mPipeline), sMainBusCallback, loop);
QTimer::singleShot(1000, this, SLOT(onBusTimeoutExpired()));
}
}
void gstpipeline::onBusTimeoutExpired()
{
blockRawPipeline();
}
void gstpipeline::blockRawPipeline()
{
qWarning() << "Blocking raw pipeline";
GstPad* srcpad = gst_element_get_static_pad(mRawFakesinkQueue, SRC_PAD);
gst_pad_add_probe(srcpad,
(GstPadProbeType)(GST_PAD_PROBE_TYPE_BLOCK | GST_PAD_PROBE_TYPE_EVENT_DOWNSTREAM | GST_PAD_PROBE_TYPE_IDLE),
sRawFakesinkQueueBlockedCallback, NULL, NULL);
g_object_unref(srcpad);
qWarning() << "added fakesink queue probe";
}
GstPadProbeReturn gstpipeline::sRawFakesinkQueueBlockedCallback(GstPad * pad, GstPadProbeInfo * info, gpointer user_data)
{
gst_pad_remove_probe (pad, GST_PAD_PROBE_INFO_ID (info));
//create the picturesink pipeline and link it to a new src pad on the raw tee
mPictureQueue = gst_element_factory_make("queue", RAW_PICTURE_QUEUE_NAME);
mPngEncoder = gst_element_factory_make("pngenc", PNG_ENC_NAME);
mPictureFileSink = gst_element_factory_make("filesink", PICTURESINK_NAME);
mPngVideoConvert = gst_element_factory_make("videoconvert", VIDEOCONVERT_PNG_NAME);
if(!mPngEncoder || !mPictureFileSink || !mPngVideoConvert)
{
qCritical() << "failed to make picturesink elements";
}
g_object_set(G_OBJECT (mPictureFileSink), "location", "/mnt/userdata/pipelinetest.png", NULL);
gst_bin_add_many (GST_BIN (mPipeline), mPictureQueue, mPngVideoConvert,
mPngEncoder, mPictureFileSink, NULL);
if(gst_element_link_many(mPictureQueue, mPngVideoConvert, mPngEncoder, mPictureFileSink, NULL) != TRUE)
{
qCritical() << "failed to link picture pipeline";
}
GstPad* tee_picturesink_pad = gst_element_get_request_pad (mRawTee, "src_%u");
qWarning ("Obtained request pad %s for picturesink branch.", gst_pad_get_name (tee_picturesink_pad));
GstPad* raw_picture_queue_pad = gst_element_get_static_pad (mPictureQueue, "sink");
if (gst_pad_link (tee_picturesink_pad, raw_picture_queue_pad) != GST_PAD_LINK_OK)
{
qCritical ("picture Tee could not be linked.");
}
gst_element_sync_state_with_parent(mPictureQueue);
gst_element_sync_state_with_parent(mPngVideoConvert);
gst_element_sync_state_with_parent(mPngEncoder);
gst_element_sync_state_with_parent(mPictureFileSink);
qWarning() << "done adding picturesink";
//set data block to see when the filesink gets data so we can send an EOS
GstPad* srcpad = gst_element_get_static_pad(mPictureFileSink, SINK_PAD);
gst_pad_add_probe(srcpad, (GstPadProbeType)(GST_PAD_PROBE_TYPE_BLOCK_DOWNSTREAM),
sPictureSinkDownstreamBlockProbe, NULL, NULL);
g_object_unref(srcpad);
return GST_PAD_PROBE_DROP;
}
GstPadProbeReturn gstpipeline::sPictureSinkDownstreamBlockProbe(GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
{
gst_pad_remove_probe (pad, GST_PAD_PROBE_INFO_ID (info));
//this is a data blocking pad probe on picture filesink
qWarning() << "setting the EOS event probe on the picturesink";
GstPad* srcpad = gst_element_get_static_pad(mPictureQueue, SRC_PAD);
gst_pad_add_probe(pad, (GstPadProbeType)(GST_PAD_PROBE_TYPE_BLOCK | GST_PAD_PROBE_TYPE_EVENT_DOWNSTREAM),sPictureSinkEOSCallback, NULL, NULL);
g_object_unref(srcpad);
qWarning() << "sending eos through videoconvert";
gst_element_send_event(mPngVideoConvert, gst_event_new_eos());
qWarning() << "exiting pad probe";
return GST_PAD_PROBE_PASS;
}
GstPadProbeReturn gstpipeline::sPictureSinkEOSCallback(GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
{
gst_pad_remove_probe (pad, GST_PAD_PROBE_INFO_ID (info));
if (GST_EVENT_TYPE (GST_PAD_PROBE_INFO_DATA (info)) == GST_EVENT_EOS)
{
qWarning() << "setting raw queue pad block";
GstPad* srcpad = gst_element_get_static_pad(mPictureQueue, SRC_PAD);
gst_pad_add_probe(pad, (GstPadProbeType)(GST_PAD_PROBE_TYPE_IDLE),sRawQueueBlockedCallback, NULL, NULL);
g_object_unref(srcpad);
}
else
{
qCritical() << "picturesink pad probe is NOT EOS";
}
return GST_PAD_PROBE_HANDLED;
}
GstPadProbeReturn gstpipeline::sRawQueueBlockedCallback(GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
{
if (GST_EVENT_TYPE (GST_PAD_PROBE_INFO_DATA (info)) == GST_EVENT_EOS)
{
gst_pad_remove_probe (pad, GST_PAD_PROBE_INFO_ID (info));
gst_element_set_state(mPictureFileSink, GST_STATE_NULL);
gst_element_set_state(mPngEncoder, GST_STATE_NULL);
gst_element_set_state(mPngVideoConvert, GST_STATE_NULL);
gst_element_set_state(mPictureQueue, GST_STATE_NULL);
//unlink the picture pipeline from the src pad of the raw tee and remove that pad
GstPad* tee_picturesink_pad = gst_element_get_static_pad(mRawTee, "src_1");
qWarning ("Obtained request pad %s for picturesink branch.", gst_pad_get_name (tee_picturesink_pad));
GstPad* raw_picture_queue_pad = gst_element_get_static_pad (mPictureQueue, "sink");
if (gst_pad_unlink (tee_picturesink_pad, raw_picture_queue_pad) != GST_PAD_LINK_OK)
{
qCritical ("picture Tee could not be linked.");
}
if(gst_element_remove_pad(mRawTee, tee_picturesink_pad) != TRUE)
{
qCritical("could not remove raw tee pad");
}
g_object_unref(tee_picturesink_pad);
g_object_unref(raw_picture_queue_pad);
gst_bin_remove_many(GST_BIN(mPipeline), mPictureQueue, mPngVideoConvert, mPngEncoder, mPictureFileSink, NULL);
qWarning() << "we have set the fakesink back up";
}
else
{
qCritical() << "picturesink pad probe is NOT EOS";
}
return GST_PAD_PROBE_PASS;
}
gboolean gstpipeline::sMainBusCallback (GstBus*bus, GstMessage *msg, gpointer user_data)
{
GMainLoop *loop = (GMainLoop*)user_data;
switch (GST_MESSAGE_TYPE (msg)) {
case GST_MESSAGE_ERROR:
{
GError *err = NULL;
gchar *dbg;
gst_message_parse_error (msg, &err, &dbg);
gst_object_default_error (msg->src, err, dbg);
g_clear_error (&err);
g_free (dbg);
g_main_loop_quit (loop);
}
break;
case GST_MESSAGE_EOS:
g_print ("we reached EOS\n");
g_main_loop_quit (loop);
break;
default:
// g_print ("msg: %s\n", GST_MESSAGE_TYPE_NAME(msg));
break;
}
return TRUE;  /* keep the bus watch installed */
}
So I managed to figure this out myself. Here are the steps I took to get it working (a condensed sketch of the final probe follows the list):
1. Add a blocking probe on the fakesink queue.
2. Add the picture pipeline.
3. Put a blocking data probe on the picture filesink.
4. Wait until a buffer reaches the filesink.
5. Put a blocking probe on the picture pipeline's queue.
6. In the queue's blocking probe, send the EOS event and remove the picture pipeline.
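For reference, a condensed, untested sketch of steps 5 and 6 using the member names from the question (the earlier probes follow the same add-probe/remove-probe pattern as the code above):
/* step 5: blocking probe installed on the picture branch's queue */
static GstPadProbeReturn on_picture_queue_blocked (GstPad *pad,
    GstPadProbeInfo *info, gpointer user_data)
{
  gst_pad_remove_probe (pad, GST_PAD_PROBE_INFO_ID (info));
  /* step 6: push EOS into the branch so pngenc/filesink finalize the file */
  gst_element_send_event (mPngVideoConvert, gst_event_new_eos ());
  /* then shut the branch down and take it out of the pipeline */
  gst_element_set_state (mPictureFileSink, GST_STATE_NULL);
  gst_element_set_state (mPngEncoder, GST_STATE_NULL);
  gst_element_set_state (mPngVideoConvert, GST_STATE_NULL);
  gst_element_set_state (mPictureQueue, GST_STATE_NULL);
  gst_bin_remove_many (GST_BIN (mPipeline), mPictureQueue, mPngVideoConvert,
      mPngEncoder, mPictureFileSink, NULL);
  return GST_PAD_PROBE_OK;
}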

gstreamer audiomixer command to code converting

I want to use audiomixer in my application, which receives audio from different sources and should play it all together through the speaker.
My final application should do something like this command:
gst-launch-1.0 audiomixer name=mix ! autoaudiosink autoaudiosrc ! \
audioconvert ! mix. udpsrc port=5001 caps="application/x-rtp" ! queue !\
rtppcmudepay ! mulawdec ! audioconvert ! audioresample ! mix.
I already wrote code that uses tee and queues, and I know how to work with them in code based on this code, but I don't know how to use the mixer in my code.
So for simplicity I just want to write code that works the way this command does:
gst-launch-1.0 audiotestsrc freq=100 ! audiomixer name=mix ! audioconvert ! autoaudiosink autoaudiosrc ! mix.
I didn't find any useful example for reaching this goal. How can I write C code to do this?
For the second part:
gst-launch-1.0 audiotestsrc freq=100 ! audiomixer name=mix ! audioconvert ! autoaudiosink autoaudiosrc ! mix.
this code works:
#include <gst/gst.h>
static GMainLoop *loop;
int bus_callback (GstBus *bus, GstMessage *message, gpointer data)
{
g_print ("Got %s message\n", GST_MESSAGE_TYPE_NAME (message));
switch (GST_MESSAGE_TYPE (message)) {
case GST_MESSAGE_ERROR: {
GError *err;
gchar *debug;
gst_message_parse_error (message, &err, &debug);
g_print ("Error: %s\n", err->message);
g_error_free (err);
g_free (debug);
g_main_loop_quit (loop);
break;
}
case GST_MESSAGE_EOS:
/* end-of-stream */
g_main_loop_quit (loop);
break;
default:
/* unhandled message */
break;
}
/* we want to be notified again the next time there is a message
* on the bus, so returning TRUE (FALSE means we want to stop watching
* for messages on the bus and our callback should not be called again)
*/
return TRUE;
}
int main(int argc, char *argv[])
{
/* Initialize GStreamer */
gst_init (nullptr, nullptr);
GstElement *pipeline, *src1,*src2, *sink, *convert1,*convert2,*audiomixer;
GstPad *conv_pad1, *conv_pad2, *mixer1_sinkpad,*mixer2_sinkpad;
gint i;
static GstBus *bus;
static guint bus_watch_id;
pipeline = gst_pipeline_new ("pipeline");
audiomixer = gst_element_factory_make ("adder", "mixer");
sink = gst_element_factory_make ("autoaudiosink", "sink");
src1 = gst_element_factory_make ("audiotestsrc", "src1");
convert1 = gst_element_factory_make ("audioconvert", "convert1");
src2 = gst_element_factory_make ("autoaudiosrc", "src2");
convert2 = gst_element_factory_make ("audioconvert", "convert2");
//g_object_set (sink, "async-handling", TRUE, NULL);
gst_bin_add_many (GST_BIN (pipeline), audiomixer ,sink, NULL);
gst_bin_add_many (GST_BIN (pipeline), src1 , convert1 , NULL);
gst_bin_add_many (GST_BIN (pipeline), src2 , convert2 , NULL);
gst_element_link (src1, convert1 );
gst_element_link (src2, convert2 );
gst_element_link(audiomixer , sink);
conv_pad1= gst_element_get_static_pad (convert1, "src");
mixer1_sinkpad = gst_element_get_request_pad (audiomixer, "sink_%u");
gst_pad_link (conv_pad1, mixer1_sinkpad);
g_object_unref(mixer1_sinkpad);
conv_pad2= gst_element_get_static_pad (convert2, "src");
mixer2_sinkpad = gst_element_get_request_pad (audiomixer, "sink_%u");
gst_pad_link (conv_pad2, mixer2_sinkpad);
g_object_unref(mixer2_sinkpad);
/* adds a watch for new message on our pipeline’s message bus to
* the default GLib main context, which is the main context that our
* GLib main loop is attached to below
*/
bus = gst_pipeline_get_bus (GST_PIPELINE (pipeline));
bus_watch_id = gst_bus_add_watch (bus, bus_callback, NULL);
gst_object_unref (bus);
/* Start playing */
gst_element_set_state (pipeline, GST_STATE_PLAYING);
loop = g_main_loop_new (NULL, FALSE);
g_main_loop_run (loop);
g_object_unref(conv_pad1);
g_object_unref(conv_pad2);
gst_element_set_state (pipeline, GST_STATE_NULL);
g_source_remove (bus_watch_id);
}
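Two notes on this example: it instantiates the adder element even though the launch line uses audiomixer; the two expose the same sink_%u request-pad interface, so swapping the factory name to "audiomixer" should give the live-stream-friendly behaviour of the command line. Also, pads obtained with gst_element_get_request_pad should normally be handed back with gst_element_release_request_pad during teardown, in addition to being unreffed.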

GStreamer add probe with playbin

The following code adds a callback when each frame is displayed and it's working well:
pipeline = gst_parse_launch("filesrc location=/path ! decodebin ! autovideosink", &error);
video_sink = gst_bin_get_by_interface(GST_BIN(pipeline), GST_TYPE_VIDEO_OVERLAY);
GstPad *pad = gst_element_get_static_pad(video_sink, "sink");
gst_pad_add_probe(pad, GST_PAD_PROBE_TYPE_BUFFER, (GstPadProbeCallback)cb_have_data, data, NULL);
The following code adds the same callback, but it never gets called:
pipeline = gst_parse_launch("playbin uri=file:///path", &error);
video_sink = gst_bin_get_by_interface(GST_BIN(pipeline), GST_TYPE_VIDEO_OVERLAY);
GstPad *pad = gst_element_get_static_pad(video_sink, "sink");
gst_pad_add_probe(pad, GST_PAD_PROBE_TYPE_BUFFER, (GstPadProbeCallback)cb_have_data, data, NULL);
Any idea why and how to fix that?
playbin has no input pads and no output pads, so you can't put a probe on it, as a probe has to go on a pad.
However, there is a get-video-pad action signal you can run on playbin, and it's possible to attach a probe to the pad it returns.
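A minimal sketch of that approach, assuming the variables from the question and video stream 0:
GstPad *video_pad = NULL;
/* playbin creates this pad during preroll, so run this after the
 * pipeline has reached PAUSED or PLAYING */
g_signal_emit_by_name (pipeline, "get-video-pad", 0, &video_pad);
if (video_pad) {
  gst_pad_add_probe (video_pad, GST_PAD_PROBE_TYPE_BUFFER,
      (GstPadProbeCallback) cb_have_data, data, NULL);
  gst_object_unref (video_pad);
}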