GStreamer Appsrc: how do I get the negotiated caps? - c++

I am currently using an appsrc element in a program I am writing.
I let the user provide a launch string such as this example:
appsrc name=mysource format=3 is-live=1 \
! video/x-raw(memory:NVMM), width=5400, height=3400, framerate=30/1, format=NV12 \
! nvv4l2h265enc bitrate=8000000 control-rate=0 iframeinterval=2 \
! h265parse \
! matroskamux \
! filesink location=myfile.mkv
The launch string could be anything, provided it has an appsrc called mysource.
In my program, I locate mysource and would like to know the format (caps) the user provided, so that I can create the right kind of data buffers. I query the src pad of my appsrc element:
GstPad *pad = gst_element_get_static_pad(m_appsrc, "src");
if (!pad) fprintf(stderr, "pad is null\n");
GstCaps *caps = gst_pad_get_current_caps(pad);
if (!caps) fprintf(stderr, "caps is null\n");
for (guint i = 0; i < gst_caps_get_size(caps); i++) {
    GstStructure *structure = gst_caps_get_structure(caps, i);
    g_print("%s%s\n", " ", gst_structure_get_name(structure));
    gst_structure_foreach(structure, print_field, (gpointer) " ");
}
pad is non-null, but caps is always returned as NULL.
Looking at the .dot graph created by GST_DEBUG_BIN_TO_DOT_FILE, I see that the caps indicated on the output of the appsrc are "ANY".
Do I have to do something special like traversing the pipeline to get the final negotiated caps?
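For reference, print_field above is assumed to be the usual helper from the GStreamer caps examples, roughly:
/* Assumed helper: prints one field of a GstStructure (as in the GStreamer caps examples). */
static gboolean print_field(GQuark field, const GValue *value, gpointer pfx)
{
    gchar *str = gst_value_serialize(value);
    g_print("%s  %15s: %s\n", (gchar *) pfx, g_quark_to_string(field), str);
    g_free(str);
    return TRUE;
}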

It looks like if I specify a caps property in the appsrc launch string, caps is no longer NULL and can be traversed to describe the user-specified capabilities. This is good enough for me:
appsrc name=mysource format=3 is-live=1 caps="video/x-raw(memory:NVMM), width=5400, height=3400, framerate=30/1, format=NV12" \
! nvv4l2h265enc bitrate=8000000 control-rate=0 iframeinterval=2 \
! h265parse \
! matroskamux \
! filesink location=myfile.mkv
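With the caps given in the launch string, they can also be read back from the element itself rather than from the pad. A minimal sketch, assuming the appsrc above (error handling omitted):
/* Read the user-specified caps back from the appsrc "caps" property
 * (gst_app_src_get_caps() is equivalent). */
GstCaps *caps = NULL;
g_object_get(m_appsrc, "caps", &caps, NULL);
if (caps) {
    GstStructure *s = gst_caps_get_structure(caps, 0);
    gint width = 0, height = 0;
    const gchar *format = gst_structure_get_string(s, "format");
    gst_structure_get_int(s, "width", &width);
    gst_structure_get_int(s, "height", &height);
    g_print("format=%s %dx%d\n", format ? format : "?", width, height);
    gst_caps_unref(caps);
}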

How to get the correct video frame number or time position from appsink buffer

Thanks in advance.
I want to record video from an RTSP camera and, at the same time, process the frames obtained from an appsink through the new-sample signal.
Then, in a separate application, I read the recorded video and show the information related to the processed frames.
The documentation says that buffer->offset holds the video frame number, but that doesn't work for me; it always has the same value.
I have this pipeline:
rtspsrc location=rtsp://10.0.0.1:554/video.sdp latency=100 ! rtph264depay ! tee name=t \
! queue ! vaapidecodebin ! vaapipostproc format=rgba ! appsink name=appsink t. \
! queue ! h264parse ! mp4mux ! filesink sync=false name=filer location=/home/VideoDB/2017-09-04_16:33:46.mp4
Code example:
GstFlowReturn GstVideoSourcePrivate::newSample(GstAppSink* sink, gpointer user_data)
{
    ....
    GstSample* sinkSample = gst_app_sink_pull_sample(GST_APP_SINK(sink));
    if (sinkSample) {
        GstBuffer* buffer = gst_sample_get_buffer(sinkSample);
        // I need this position to be the same as in the recorded video,
        // or the video frame sequence number, so that frames can be matched later
        gint64 pos;
        gst_element_query_position(self->pipeline(), GST_FORMAT_TIME, &pos);
        ...
    }
    ...
}
Thanks for your answer.
I did what you told me, but I could not get the expected result.
Then I discovered that when a videorate element is inserted into the pipeline, buffer->offset starts showing the correct video frame sequence. But the sync is still off by a few milliseconds.
So I read the docs one more time and wrote this code to get a better result. It seems there are a few latencies that need to be compensated for.
https://gstreamer.freedesktop.org/documentation/application-development/advanced/clocks.html
https://gstreamer.freedesktop.org/documentation/plugin-development/advanced/clock.html
...
int64_t timestamp = GST_BUFFER_TIMESTAMP(buffer);
GstSegment* segment = gst_sample_get_segment(sinkSample);
gint64 pos = gst_segment_to_stream_time(segment, GST_FORMAT_TIME, timestamp);
GstQuery* q = gst_query_new_latency();
if (gst_element_query(self->m_pipeline, q)) {
    gboolean live;
    GstClockTime minlat, maxlat;
    gst_query_parse_latency(q, &live, &minlat, &maxlat);
    pos += minlat;
}
gst_query_unref(q);
...
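If an actual frame number is needed rather than a time position, one option is to derive it from the stream-time position and the negotiated framerate. A sketch continuing from the snippet above (pos and sinkSample in scope), assuming the sample caps carry a framerate field:
// Derive a frame index from the stream-time position and the framerate in the caps.
GstCaps* caps = gst_sample_get_caps(sinkSample);
GstStructure* s = gst_caps_get_structure(caps, 0);
gint fps_n = 0, fps_d = 1;
if (gst_structure_get_fraction(s, "framerate", &fps_n, &fps_d) && fps_n > 0) {
    guint64 frame_index = gst_util_uint64_scale(pos, fps_n, (guint64) fps_d * GST_SECOND);
    g_print("frame %" G_GUINT64_FORMAT "\n", frame_index);
}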

GStreamer: Caps negotiation failure

I have a problem linking two elements: avdec_h264 and avenc_mpeg4. I think these elements somehow cannot negotiate the capabilities of the data.
I've tested my pipeline with gst-launch:
gst-launch-1.0 rtspsrc location="rtsp://camera" ! rtph264depay ! h264parse ! avdec_h264 ! avenc_mpeg4 ! fakesink
It worked fine.
When I use my application, where the pipeline is implemented like this:
pipeline_ = gst_pipeline_new("default");
if (!pipeline_)
{
    return false;
}
receiver_ = gst_element_factory_make("rtspsrc", "receiver");
demuxer_  = gst_element_factory_make("rtph264depay", "demuxer");
parser_   = gst_element_factory_make("h264parse", "parser");
decoder_  = gst_element_factory_make("avdec_h264", "decoder");
encoder_  = gst_element_factory_make("avenc_mpeg4", "encoder");
output_   = gst_element_factory_make("fakesink", "output");
if (!receiver_ || !demuxer_ || !parser_ ||
    !decoder_ || !encoder_ || !output_)
{
    return false;
}
g_object_set(GST_OBJECT(receiver_), "location", "rtsp://camera", nullptr);
// On this signal the source pad of the receiver is connected to
// the sink pad of the demuxer.
g_signal_connect(receiver_, "pad-added", G_CALLBACK(on_pad_added), this);
gst_bin_add_many(GST_BIN(pipeline_), receiver_, demuxer_, parser_,
                 decoder_, encoder_, output_, nullptr);
if (!gst_element_link_many(demuxer_, parser_, decoder_,
                           encoder_, output_, nullptr))
{
    return false;
}
Everything links successfully. All elements change their state to PLAYING, but I get nothing: I never receive GST_MESSAGE_STREAM_START on the pipeline's bus.
Here are the graphs from gst-launch and from my application:
If I replace avenc_mpeg4 with a videoconvert element, which is not an encoder, everything works fine. If I use another encoder, I still have the same problem.
Probably I don't know some particular detail of how to work with encoders, but I could not find a solution.
Thank you.
A few points:
The code you posted should listen for pad-added signals from decodebin. I would be surprised if it worked as-is (maybe put the full code in a gist and link it from here). See https://gstreamer.freedesktop.org/data/doc/gstreamer/head/manual/html/chapter-pads.html
Insert a videoconvert between the decoder and the encoder.
Where are you linking the receiver to the demuxer? As I understand it, that is needed (a minimal sketch of such a handler follows below).
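For reference, a rough sketch of a pad-added handler, assuming the depayloader (rtph264depay here) is what gets passed as user_data; names are illustrative, not the poster's actual code:
// Hypothetical pad-added handler: links the dynamically created src pad
// of rtspsrc (or decodebin) to the depayloader's sink pad.
static void on_pad_added(GstElement* src, GstPad* new_pad, gpointer user_data)
{
    GstElement* depay = GST_ELEMENT(user_data);
    GstPad* sink_pad = gst_element_get_static_pad(depay, "sink");

    if (!gst_pad_is_linked(sink_pad)) {
        if (GST_PAD_LINK_FAILED(gst_pad_link(new_pad, sink_pad)))
            g_printerr("Failed to link %s to the depayloader\n", GST_PAD_NAME(new_pad));
    }
    gst_object_unref(sink_pad);
}
The videoconvert from the second point would then be created with gst_element_factory_make("videoconvert", ...) and linked between decoder_ and encoder_ like the other elements.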

mux klv data with h264 by mpegtsmux

I need to mux KLV metadata into an H.264 stream. I have created an application, but the stream plays only as long as KLV data is being inserted. When I stop pushing KLV data, the whole stream stops. What is the right way to mux asynchronous KLV data with mpegtsmux?
The KLV data needs to be inserted into the following working pipeline:
v4l2src input-src=Camera ! videorate drop-only=true ! 'video/x-raw, format=(string)NV12, width=1920, height=1088, framerate=25/1' ! ce_h264enc target-bitrate=6000000 idrinterval=25 intraframe-interval=60 ! queue ! mpegtsmux alignment=7 ! udpsink host=192.168.0.1 port=3000 -v
This pipeline is assembled in the application. To insert the KLV metadata, an appsrc is created:
appSrc = gst_element_factory_make("appsrc", nullptr);
gst_app_src_set_caps (GST_APP_SRC (appSrc), gst_caps_new_simple("meta/x-klv", "parsed", G_TYPE_BOOLEAN, TRUE, "sparse", G_TYPE_BOOLEAN, TRUE, nullptr));
g_object_set(appSrc, "format", GST_FORMAT_TIME, nullptr);
Then appsrc is linked to the pipeline:
gst_bin_add(GST_BIN(pipeline), appSrc);
gst_element_link(appSrc, mpegtsmux);
Here is push function:
void AppSrc::pushData(const std::string &data)
{
    GstBuffer *buffer = gst_buffer_new_allocate(nullptr, data.size(), nullptr);
    GstMapInfo map;
    GstClock *clock;
    GstClockTime abs_time, base_time;

    gst_buffer_map(buffer, &map, GST_MAP_WRITE);
    memcpy(map.data, data.data(), data.size());
    gst_buffer_unmap(buffer, &map);

    // Timestamp the buffer with the pipeline's running time.
    GST_OBJECT_LOCK(element);
    clock = GST_ELEMENT_CLOCK(element);
    base_time = GST_ELEMENT(element)->base_time;
    gst_object_ref(clock);
    GST_OBJECT_UNLOCK(element);

    abs_time = gst_clock_get_time(clock);
    gst_object_unref(clock);

    GST_BUFFER_PTS(buffer) = abs_time - base_time;
    GST_BUFFER_DURATION(buffer) = gst_util_uint64_scale_int(1, GST_SECOND, 1);

    gst_app_src_push_buffer(GST_APP_SRC(element), buffer);
}
Gstreamer version is 1.6.1.
What can be wrong with my code? I'd appreciate your help.
I can push dummy KLV packets to keep the video stream alive, but I don't want to pollute the outgoing stream, and I am sure there must be a more delicate solution.
I have found that I can send an event with GST_STREAM_FLAG_SPARSE, which should be appropriate for subtitles, but as a result I get no output at all.
GstEvent* stream_start = gst_event_new_stream_start("klv-04");
gst_event_set_stream_flags(stream_start, GST_STREAM_FLAG_SPARSE);
GstPad* pad = gst_element_get_static_pad(GST_ELEMENT(element), "src");
gst_pad_push_event (pad, stream_start);
While debugging I found that after applying the following patch to GStreamer and using GST_STREAM_FLAG_SPARSE, the stream no longer stops when the appsrc stops pushing packets.
diff --git a/libs/gst/base/gstcollectpads.c b/libs/gst/base/gstcollectpads.c
index 8edfe41..14f9926 100644
--- a/libs/gst/base/gstcollectpads.c
+++ b/libs/gst/base/gstcollectpads.c
@@ -1440,7 +1440,8 @@ gst_collect_pads_recalculate_waiting (GstCollectPads * pads)
if (!GST_COLLECT_PADS_STATE_IS_SET (data, GST_COLLECT_PADS_STATE_WAITING)) {
/* start waiting */
gst_collect_pads_set_waiting (pads, data, TRUE);
- result = TRUE;
+ if (!GST_COLLECT_PADS_STATE_IS_SET (data, GST_COLLECT_PADS_STATE_LOCKED))
+ result = TRUE;
}
}
}
Anyway, the receiver stops updating the screen 10 seconds after the last KLV packet.
This is a bit of an old thread, but in my experience, if there is no queue between the appsrc and the muxer, you will get this behavior. I would change your:
gst_element_link(appSrc, mpegtsmux);
To this:
gst_element_link(appSrc, appSrcQueue);
gst_element_link(appSrcQueue, mpegtsmux);
I'm not sure whether mpegtsmux has the capability or not, but the muxer we used has a property called do-timestamping, and when that was set to TRUE we had a better experience.
Another tip I would give is to use the gst-inspect tool to see what options each element has.
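A minimal sketch of that change, assuming the queue element is created and added to the bin alongside the others (appSrcQueue is an illustrative name):
// Hypothetical: decouple the muxer from the sparse appsrc with a queue.
GstElement *appSrcQueue = gst_element_factory_make("queue", "appsrc-queue");
gst_bin_add(GST_BIN(pipeline), appSrcQueue);
if (!gst_element_link(appSrc, appSrcQueue) ||
    !gst_element_link(appSrcQueue, mpegtsmux)) {
    g_printerr("Failed to link appsrc ! queue ! mpegtsmux\n");
}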

Gstreamer. Write appsink to filesink

I have written code that goes from appsrc to appsink and it works; I can see the actual buffers. The stream is encoded in H264 (vpuenc codec=avc). Now I want to save it to a file (filesink). How do I approach that?
app:
int main(int argc, char *argv[]) {
    gst_init (NULL, NULL);

    GstElement *pipeline, *sink;
    gchar *descr;
    GError *error = NULL;
    GstAppSink *appsink;

    descr = g_strdup_printf (
        "mfw_v4lsrc device=/dev/video1 capture_mode=0 ! "   // grab from mipi camera
        "ffmpegcolorspace ! vpuenc codec=avc ! "
        "appsink name=sink"
    );
    pipeline = gst_parse_launch (descr, &error);
    if (error != NULL) {
        g_print ("could not construct pipeline: %s\n", error->message);
        g_error_free (error);
        exit (-1);
    }
    gst_element_set_state (pipeline, GST_STATE_PAUSED);

    sink = gst_bin_get_by_name (GST_BIN (pipeline), "sink");
    appsink = (GstAppSink *) sink;
    gst_app_sink_set_max_buffers (appsink, 2);   // limit number of buffers queued
    gst_app_sink_set_drop (appsink, true);       // drop old buffers in queue when full

    gst_element_set_state (pipeline, GST_STATE_PLAYING);

    int i = 0;
    while (!gst_app_sink_is_eos (appsink))
    {
        GstBuffer *buffer = gst_app_sink_pull_buffer (appsink);
        uint8_t *data = (uint8_t *) GST_BUFFER_DATA (buffer);
        uint32_t size = GST_BUFFER_SIZE (buffer);
        gst_buffer_unref (buffer);
    }
    return 0;
}
If, as mentioned in the comments, what you actually want to know is how to do a network video stream in GStreamer, you should probably close this question, because you're on the wrong path. You don't need an appsink or filesink for that. What you'll want to investigate are the GStreamer elements related to RTP, RTSP, RTMP, MPEGTS, or even MJPEG (if your image size is small enough).
Here are two basic send/receive video stream pipelines:
gst-launch-0.10 v4l2src ! ffmpegcolorspace ! videoscale ! video/x-raw-yuv,width=640,height=480 ! vpuenc ! h264parse ! rtph264pay ! udpsink host=localhost port=5555
gst-launch-0.10 udpsrc port=5555 ! application/x-rtp,encoding-name=H264,payload=96 ! rtph264depay ! h264parse ! ffdec_h264 ! videoconvert ! ximagesink
In this situation you don't write your own while loop. You register callbacks and wait for buffers (GStreamer 0.10) to arrive. If you're using GStreamer 1.0, you use samples instead of buffers. Samples are a huge pain in the ass compared to buffers but oh well.
Register the callback:
GstAppSinkCallbacks* appsink_callbacks = (GstAppSinkCallbacks*)malloc(sizeof(GstAppSinkCallbacks));
appsink_callbacks->eos = NULL;
appsink_callbacks->new_preroll = NULL;
appsink_callbacks->new_sample = app_sink_new_sample;
gst_app_sink_set_callbacks(GST_APP_SINK(appsink), appsink_callbacks, (gpointer)pointer_to_data_passed_to_the_callback, free);
And your callback:
GstFlowReturn app_sink_new_sample(GstAppSink *sink, gpointer user_data) {
    prog_data* pd = (prog_data*)user_data;

    GstSample* sample = gst_app_sink_pull_sample(sink);
    if (sample == NULL) {
        return GST_FLOW_ERROR;
    }

    GstBuffer* buffer = gst_sample_get_buffer(sample);
    GstMemory* memory = gst_buffer_get_all_memory(buffer);
    GstMapInfo map_info;

    if (!gst_memory_map(memory, &map_info, GST_MAP_READ)) {
        gst_memory_unref(memory);
        gst_sample_unref(sample);
        return GST_FLOW_ERROR;
    }

    //render using map_info.data

    gst_memory_unmap(memory, &map_info);
    gst_memory_unref(memory);
    gst_sample_unref(sample);

    return GST_FLOW_OK;
}
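Since the original question was about saving the stream to a file, the "render using map_info.data" line above could simply append the mapped bytes to a file. A rough sketch of what could go in its place, assuming a hypothetical pd->out_file FILE* opened before the pipeline starts and closed on EOS:
/* Hypothetical: pd->out_file was opened with fopen("out.h264", "wb") elsewhere. */
if (pd->out_file != NULL)
    fwrite(map_info.data, 1, map_info.size, pd->out_file);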
You can keep your while loop as it is--using gst_app_sink_is_eos()--but make sure to put a sleep in it. Most of the time I use something like the following instead:
GMainLoop* loop = g_main_loop_new(NULL, FALSE);
g_main_loop_run(loop);
g_main_loop_unref(loop);
Note: unless you need to do something special with the data, you can use the filesink element directly.
A simpler option would be to write to the file directly in the appsink itself, i.e. when the buffer callback fires, write the data to the file, and make sure you close it on EOS.
Hope that helps.

Gstreamer Elements not linking

I am new to Gstreamer and I have a question about why my elements will not link together. Here is my code:
CustomData data;
data.videosource = gst_element_factory_make("uridecodebin", "source");
cout << "Created source element " << data.videosource << endl;
data.demuxer = gst_element_factory_make("qtdemux", "demuxer");
cout << "Created demux element " << data.demuxer << endl;
data.decoder = gst_element_factory_make("ffdec_h264", "video-decoder");
cout << "Went to the video path " << data.decoder << endl;
data.videoconvert = gst_element_factory_make("ffmpegcolorspace", "convert");
cout << "Created convert element " << data.videoconvert << endl;
data.videosink = gst_element_factory_make("autovideosink", "sink");
cout << "Created sink element " << data.videosink << endl;
if (!data.videosource || !data.demuxer || !data.decoder || !data.videoconvert || !data.videosink)
{
    g_printerr ("Not all elements could be created.\n");
    system("PAUSE");
    return;
}
//Creating the pipeline
data.pipeline = gst_pipeline_new("video-pipeline");
if (!data.pipeline)
{
    g_printerr ("Pipeline could not be created.");
}
//Setting up the object
g_object_set(data.videosource, "uri", videoFileName[camID], NULL);
//videoFileName[camID] is a char** with the content uri=file:///C://videofiles/...mp4
//Adding elements to the pipeline
gst_bin_add_many(GST_BIN (data.pipeline), data.videosource, data.demuxer, data.decoder, data.videoconvert, data.videosink, NULL);
//This is where the issue occurs
if(!gst_element_link(data.videosource, data.demuxer)){
    g_printerr("Elements could not be linked. \n");
    system("PAUSE");
    return;
}
What I am trying to do is break down an mp4 file and display only the video content, but for some reason when I try to link the source and the demuxer, the link comes back as false.
Thank you guys so much!
Let's have a look at the pipeline you're using (I'll use gst-launch here for its brevity, but the same goes for any GStreamer pipelines):
gst-launch uridecodebin uri=file:///path/to/movie.avi \
! qtdemux ! ffdec_h264 ! ffmpegcolorspace \
! autovideosink
gst-inspect uridecodebin states:
Autoplug and decode an URI to raw media
So uridecodebin takes any audio/video source and decodes it by internally using some of GStreamer's other elements.
Its output is something like video/x-raw-rgb or audio/x-raw-int (raw audio/video)
qtdemux on the other hand takes a QuickTime stream (still encoded) and demuxes it.
But what it gets in your example is the already decoded raw video (which is why it won't link).
So, you've basically got two options:
just use uridecodebin
gst-launch uridecodebin uri=file:///path/to/movie.avi \
! autovideosink
which will allow your pipeline to decode pretty much any video file (an application-code sketch of this approach follows at the end of this answer)
just use the qtdemux ! ffdec_h264 ! ffmpegcolorspace elements:
gst-launch filesrc location=/path/to/movie.avi \
! qtdemux ! ffdec_h264 ! ffmpegcolorspace \
! autovideosink
Keep in mind however that your pipeline doesn't play audio.
To get that as well do one of the following:
Simply use playbin2
gst-launch playbin2 uri=file:///path/to/movie.avi
Connect your decodebin to an audio sink as well
gst-launch uridecodebin name=d uri=... ! autovideosink d. ! autoaudiosink
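For completeness, the first option (just use uridecodebin) in application code means dropping qtdemux and ffdec_h264 and linking uridecodebin's pads dynamically, since they only appear once the media type is known. A rough sketch, assuming a converter/sink chain that is already added to the bin and linked statically (names are illustrative, not the poster's code):
//Hypothetical pad-added handler for uridecodebin: link each newly created
//pad to the converter's sink pad; audio pads will simply fail to link here.
static void on_decodebin_pad_added(GstElement* decodebin, GstPad* new_pad, gpointer user_data)
{
    GstElement* converter = GST_ELEMENT(user_data);
    GstPad* sink_pad = gst_element_get_static_pad(converter, "sink");
    if (!gst_pad_is_linked(sink_pad))
        gst_pad_link(new_pad, sink_pad);
    gst_object_unref(sink_pad);
}
//Registration, reusing the names from the question:
//g_signal_connect(data.videosource, "pad-added",
//                 G_CALLBACK(on_decodebin_pad_added), data.videoconvert);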