I'm writing my first GStreamer plugin and I cannot display my debug traces.
I used gst-template-0.10/gst-plugin/tools/make_element to generate a plugin template, which I then customized.
One of my first actions was to add a GST_LOG_OBJECT call in gst_demux_hbb_tv_chain to log the size of the buffer.
But the trace is not displayed.
I read the documentation about developing a GStreamer plugin; it has a section on debugging, and what I have in my template matches it exactly.
I launched my pipeline this way:
GST_DEBUG=demuxhbbtv=5 gst-launch fakesrc ! demuxhbbtv silent=TRUE ! fakesink
(GST_DEBUG_CATEGORY_INIT (gst_demux_hbb_tv_debug, "demuxhbbtv", 0, "Template demuxhbbtv");)
I tried putting in a g_print and that works.
What have I missed?
Here is a part of my code:
GST_DEBUG_CATEGORY_STATIC (gst_demux_hbb_tv_debug);
#define GST_CAT_DEFAULT gst_demux_hbb_tv_debug

static gboolean
demuxhbbtv_init (GstPlugin * demuxhbbtv)
{
  ...
  GST_DEBUG_CATEGORY_INIT (gst_demux_hbb_tv_debug, "demuxhbbtv", 0,
      "Template demuxhbbtv");
  ...
}

static GstFlowReturn
gst_demux_hbb_tv_chain (GstPad * pad, GstBuffer * buf)
{
  ...
  demuxHbbTv = GST_DEMUXHBBTV (gst_pad_get_parent (pad));
  GST_LOG_OBJECT (demuxHbbTv,
      "!!!!!!!!!!!!!!!!!!!!!!!!!!==> buffer size= %u ....",
      GST_BUFFER_SIZE (buf));
  ...
}
It should be GST_DEBUG=demuxhbbtv:5 (replace the second = with a :). In GST_DEBUG, the separator between a category name and its debug level is a colon, not an equals sign.
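With that fixed, the original launch line becomes:

GST_DEBUG=demuxhbbtv:5 gst-launch fakesrc ! demuxhbbtv silent=TRUE ! fakesink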
I'm a beginner at using GStreamer to handle input videos. I have already built a pipeline with GStreamer to transcode the videos, but the last part I cannot do is get the batches of frames out of the pipeline so I can apply custom image processing techniques to them.
Input Videos -----> Gstreamer Pipeline -----> Task: Apply some Image Processing Techniques
I've been searching for a solution on the Internet but cannot find one, and the more I search, the more confused I get.
appsink is the right element for this. You can enable its "emit-signals" property and listen for the "new-sample" signal; then you get access to the buffer.
Here is the full documentation:
https://gstreamer.freedesktop.org/documentation/tutorials/basic/short-cutting-the-pipeline.html?gi-language=c
You have to create the appsink element, enable "emit-signals", then register a "new-sample" callback like this:
g_signal_connect (data.app_sink, "new-sample", G_CALLBACK (new_sample), &data);
static GstFlowReturn
new_sample (GstElement *sink, CustomData *data)
{
  GstSample *sample;

  /* Retrieve the buffer */
  g_signal_emit_by_name (sink, "pull-sample", &sample);
  if (sample) {
    /* The only thing we do in this example is print a * to indicate a received buffer */
    g_print ("*");
    gst_sample_unref (sample);
    return GST_FLOW_OK;
  }
  return GST_FLOW_ERROR;
}
Now, instead of the g_print, you can retrieve the buffer from the sample with gst_sample_get_buffer:
https://gstreamer.freedesktop.org/documentation/gstreamer/gstsample.html?gi-language=c
Then read the data inside the buffer:

GstMapInfo info;
gst_buffer_map (buf, &info, GST_MAP_READ);
/* info.data points to the buffer content, info.size gives its length */
gst_buffer_unmap (buf, &info);

Note that the buffer returned by gst_sample_get_buffer is owned by the sample, so you only need to unref the sample, not the buffer.
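Putting the pieces together, the tutorial callback can map the frame data like this (a sketch; CustomData and the processing step are your own code):

static GstFlowReturn
new_sample (GstElement *sink, CustomData *data)
{
  GstSample *sample;

  g_signal_emit_by_name (sink, "pull-sample", &sample);
  if (sample) {
    GstBuffer *buf = gst_sample_get_buffer (sample); /* owned by the sample */
    GstMapInfo info;

    if (gst_buffer_map (buf, &info, GST_MAP_READ)) {
      /* info.data / info.size: run your image processing here */
      gst_buffer_unmap (buf, &info);
    }
    gst_sample_unref (sample);
    return GST_FLOW_OK;
  }
  return GST_FLOW_ERROR;
}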
Best regards.
Is it possible to pass two streams through a single element? I have two streams:
1. A data stream: I need to extract data from it; it can be destroyed in the element or passed through to a sink.
2. A video stream: it will be edited based on the data extracted from stream 1, then passed through to autovideosink.
GStreamer Core Library version 1.16.2
Written in C
chain functions:
static GstFlowReturn
gst_test2_chain (GstPad * pad, GstObject * parent, GstBuffer * buf)
{
  Gsttest2 *filter;

  filter = GST_TEST2 (parent);
  /* just push out the incoming buffer without touching it */
  return gst_pad_push (filter->srcpad, buf);
}

// second pad's chain function
static GstFlowReturn
gst_test2_chain2 (GstPad * pad, GstObject * parent, GstBuffer * buf)
{
  Gsttest2 *filter;

  g_print ("\ninside chain2\n");
  filter = GST_TEST2 (parent);
  return gst_pad_push (filter->srcpad2, buf);
}
// Pad templates:
static GstStaticPadTemplate src_factory = GST_STATIC_PAD_TEMPLATE ("src",
    GST_PAD_SRC,
    GST_PAD_ALWAYS,
    GST_STATIC_CAPS ("video/x-raw")
    );
Extracting the data from one stream and editing the other works fine. I am currently using two video/x-raw src and sink pads for testing, but the stream used for extracting the data would eventually be meta/x-klv. Using a single sink and src pad works fine with videotestsrc, but trying to use both sources and sinks results in pipeline errors (unable to link, or syntax errors). Does GStreamer support sending two streams through a single element? Would it be simpler to destroy the buffer of the no-longer-needed stream in the element?
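For reference, the kind of launch line I am attempting has to address each pad of the element by name. Assuming the pads are named sink/sink2 and src/src2 (my actual template names may differ), something like:

gst-launch-1.0 test2 name=t \
    videotestsrc ! t.sink \
    videotestsrc ! t.sink2 \
    t.src ! autovideosink \
    t.src2 ! fakesink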
What is the correct way of using splitmuxsink in a dynamic pipeline?
Previously I used filesink to record (no problem whatsoever), but there is a requirement to save the file in segments, so I have tried to use splitmuxsink in a dynamic pipeline (recording starts and stops asynchronously). In doing so I have faced two problems.
1. When I try to stop the recording, I use an idle pad probe to block the recording queue and launch a callback that unlinks the recording branch (send EOS, set the elements in the recording bin to NULL, then remove the bin). I set a downstream data probe to notify me that the EOS has reached the splitmuxsink's sink pad before I do step 2 (set elements to NULL). A sketch of this stop sequence is shown below.
However, the end result is that I still have an empty last file (0 bytes). It seems the pipeline is not yet closed, or has some other problem. As a workaround I split the video immediately when the recording stops (though I lose a few frames).
How should one stop a dynamic branch?
2. I tried to create the recording bin when the recording starts (using the pad-added signal, emitted when a pad is created, to connect the recording bin). Previously I created the recording bin up front (not inside the GLib loop I created). The previous approach works fine, but the present one leaves the splitmuxsink's filesink in a locked state.
How should I work around this? What causes the locked state?
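For reference, the stop sequence described in problem 1 looks roughly like this (a simplified sketch; the records fields match the code below, tee_src_pad is the pad requested from the tee, and error handling is omitted):

static gboolean
remove_recording_bin (gpointer user_data)
{
  /* set the recording bin to NULL and remove it from the pipeline here */
  return G_SOURCE_REMOVE;
}

static GstPadProbeReturn
eos_probe_cb (GstPad * pad, GstPadProbeInfo * info, gpointer user_data)
{
  if (GST_EVENT_TYPE (GST_PAD_PROBE_INFO_EVENT (info)) != GST_EVENT_EOS)
    return GST_PAD_PROBE_OK;
  /* EOS is about to enter splitmuxsink; let it pass so the muxer can
   * finalize the file, and schedule the teardown on the main loop,
   * never from this streaming thread. */
  g_idle_add (remove_recording_bin, user_data);
  return GST_PAD_PROBE_REMOVE;
}

static GstPadProbeReturn
idle_probe_cb (GstPad * pad, GstPadProbeInfo * info, gpointer user_data)
{
  GstPad *parser_src, *queue_sink;

  /* watch for the EOS just upstream of splitmuxsink */
  parser_src = gst_element_get_static_pad (records.parser, "src");
  gst_pad_add_probe (parser_src, GST_PAD_PROBE_TYPE_EVENT_DOWNSTREAM,
      eos_probe_cb, NULL, NULL);
  gst_object_unref (parser_src);

  /* push EOS into the blocked recording branch */
  queue_sink = gst_element_get_static_pad (records.queue, "sink");
  gst_pad_send_event (queue_sink, gst_event_new_eos ());
  gst_object_unref (queue_sink);

  return GST_PAD_PROBE_OK;
}

/* in the stop command handler: block the tee pad feeding the branch */
gst_pad_add_probe (tee_src_pad, GST_PAD_PROBE_TYPE_IDLE,
    idle_probe_cb, NULL, NULL);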
Here is my code:
/// create record bin
static void
pad_added (GstElement * self, GstPad * new_pad, gpointer user_data)
{
  gchar *pad_name = gst_pad_get_name (new_pad);

  if (g_str_has_prefix (pad_name, "src")) {
    //RECORD records;
    records.recording = gst_bin_new ("recording");
    records.queue = gst_element_factory_make ("queue", "queue");
    records.enc = gst_element_factory_make ("vpuenc_h264", "enc");
    records.parser = gst_element_factory_make ("h264parse", "parser");
    records.sink = gst_element_factory_make ("splitmuxsink", "sink");

    // Add the recording elements to the bin
    gst_bin_add_many (GST_BIN (records.recording),
        records.queue,
        records.enc,
        records.parser,
        records.sink, NULL);

    // link up the recording elements
    gst_element_link_many (records.queue,
        records.enc,
        records.parser,
        records.sink, NULL);

    g_object_set (G_OBJECT (records.sink),
        //"location", "video_%d.mp4",
        "max-size-time", (guint64) 10L * GST_SECOND,
        "async-handling", TRUE,
        "async-finalize", TRUE,
        NULL);

    records.queue_sink_pad = gst_element_get_static_pad (records.queue, "sink");
    records.ghost_pad = gst_ghost_pad_new ("sink", records.queue_sink_pad);
    gst_pad_set_active (records.ghost_pad, TRUE);
    gst_element_add_pad (GST_ELEMENT (records.recording), records.ghost_pad);

    g_signal_connect (records.sink, "format-location",
        (GCallback) format_location_callback,
        &records);
  }
  g_free (pad_name);
}
static gboolean
cmd_loop (gpointer user_data)
{
  // other cmd not shown here
  if (RECORD) {
    // request a src pad from the tee;
    // this step will trigger the pad-added signal
    tee_src_pad = gst_element_get_request_pad (tee, "src_%u");
    // ....other function
  }
  return TRUE; // keep the timeout source running
}
int
main (void)
{
  // add the pad-added signal response
  g_signal_connect (tee, "pad-added", G_CALLBACK (pad_added), NULL);

  // construct the loop source (fires every 1 s)
  GSource *source = g_timeout_source_new (1000);

  // set the function that watches for commands
  g_source_set_callback (source,
      (GSourceFunc) cmd_loop,
      NULL,
      NULL);
  g_source_attach (source, NULL);
}
I'm using the splitmuxsink element to save videos split by size. I can use the format-location signal to set the name of the next video file to be written.
static gchararray
format_location_callback (GstElement * splitmux,
    guint fragment_id,
    gpointer udata)
{
  static int i = 0;
  gchararray myarray = g_strdup_printf ("myvid%d.mp4", i);
  i += 1;
  return myarray;
}

// add a callback signal
g_signal_connect (G_OBJECT (bin->sink), "format-location",
    G_CALLBACK (format_location_callback), bin);
How do I get the name of the video file currently being written by splitmuxsink? I think that might be possible using GstMessages, but I'm not sure how to get the messages related to a particular element.
In fact, when I run with GST_DEBUG=4, I can see messages showing that the file name changes when splitmuxsink splits the video:
0:00:06.238114046 31488 0x55928d253d90 INFO splitmuxsink gstsplitmuxsink.c:2389:set_next_filename:<sink_sub_bin_sink1> Setting file to myvid0.mp4
0:00:06.238149341 31488 0x55928d253d90 INFO filesink gstfilesink.c:294:gst_file_sink_set_location:<sink> filename : myvid0.mp4
0:00:06.238160223 31488 0x55928d253d90 INFO filesink gstfilesink.c:295:gst_file_sink_set_location:<sink> uri
I'm able to get the video location using a GstMessage of type GST_MESSAGE_ELEMENT by listening on the GstBus. splitmuxsink posts a message named splitmuxsink-fragment-opened each time a video split takes place:
case GST_MESSAGE_ELEMENT:
{
  const GstStructure *s = gst_message_get_structure (message);
  if (gst_structure_has_name (s, "splitmuxsink-fragment-opened"))
  {
    gchar *str = gst_structure_to_string (s);
    g_message ("get message: %s", str);
    g_free (str);

    const gchar *location = gst_structure_get_string (s, "location");
    cout << location << endl;
  }
  break;
}
Output
** Message: 12:00:27.618: get message: splitmuxsink-fragment-opened, location=(string)myvid0.mp4, running-time=(guint64)1199530439;
myvid0.mp4
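For completeness, that case runs inside a bus watch; a minimal setup (the pipeline variable and function name are assumed) might look like:

static gboolean
bus_call (GstBus * bus, GstMessage * message, gpointer user_data)
{
  switch (GST_MESSAGE_TYPE (message)) {
    case GST_MESSAGE_ELEMENT:
      /* handle splitmuxsink-fragment-opened as shown above */
      break;
    default:
      break;
  }
  return TRUE;
}

/* in main(): */
GstBus *bus = gst_element_get_bus (pipeline);
gst_bus_add_watch (bus, bus_call, NULL);
gst_object_unref (bus);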
I have a char* buffer that I read from a video.mp4 file. The buffer has size 4096.
I tried to create a GstBuffer from the char* buffer:
GstBuffer* tmpBuf = gst_buffer_new_wrapped(data, size);
dataBuffer = gst_buffer_copy(tmpBuf);
Then I push this buffer to the appsrc:
GstElement* source = gst_bin_get_by_name (GST_BIN (consumer), "source");
gst_app_src_push_buffer (GST_APP_SRC (source), dataBuffer);
gst_object_unref (source);
The consumer pipeline was created as follows:
gchar* videoConsumerString = g_strdup_printf ("appsrc max-buffers=5 drop=false name=source ! decodebin ! xvimagesink");
consumer = gst_parse_launch (videoConsumerString, NULL);
gst_element_set_state (consumer, GST_STATE_NULL);
g_free (videoConsumerString);
After creating the pipeline I set its state to GST_STATE_NULL.
When I start playing I set its state to GST_STATE_PLAYING.
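That is:

gst_element_set_state (consumer, GST_STATE_NULL);     /* right after creating it */
gst_element_set_state (consumer, GST_STATE_PLAYING);  /* when playback starts */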
But in the output I get this error:
ERROR from element mpegvparse0: No valid frames found before end of stream
I tried changing the size of the char* buffer and using different elements in the pipeline (e.g. ffmpegcolorspace, videoconvert, and some others), but that did not resolve the issue.
If I run with GST_DEBUG=3, I get a lot of warnings:
0:00:00.064480642 4059 0x12c66d0 WARN codecparsers_mpegvideo gstmpegvideoparser.c:887:gst_mpeg_video_packet_parse_picture_header: Unsupported picture type : 0
I use GStreamer 1.0.
Has anybody faced such a problem?
P.S. I have no possibility to read the data from the file with GStreamer. I can only read buffers from the file with fread and then try to play them.
Maybe I have to use some specific fixed size for the buffers I read?
I solved this problem.
Unexpectedly for me, the issue was in the creation of the GstBuffer.
The correct way to create such a buffer from data (char*) with a known size is:
GstBuffer * buffer = gst_buffer_new_allocate(NULL, size, NULL);
gst_buffer_fill(buffer, 0, data, size);
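Putting it together, reading and pushing the file chunk by chunk looks roughly like this (a sketch; the FILE* handle and the appsrc lookup are assumed from the question):

char data[4096];
size_t size;

/* read the file in 4096-byte chunks and push each one to appsrc */
while ((size = fread (data, 1, sizeof (data), file)) > 0) {
  GstBuffer *buffer = gst_buffer_new_allocate (NULL, size, NULL);
  gst_buffer_fill (buffer, 0, data, size);
  /* push_buffer takes ownership of the buffer */
  gst_app_src_push_buffer (GST_APP_SRC (source), buffer);
}
gst_app_src_end_of_stream (GST_APP_SRC (source));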
Thank you for your help!