I have a simple MJPEG pipeline and I want to access the buffer on the sink to get the PTS so I can calculate the latency.
Pipeline:
souphttpsrc -> jpegparse -> imxvpudec -> imxipusink
What is the best way to do this? Some code examples would be great.
The timing concepts in GStreamer confuse me a little bit.
I'd add an identity element in your pipeline where you want to analyze the PTS:
souphttpsrc ! jpegparse ! identity ! imxvpudec ! imxipusink
Then connect to the "handoff" signal:
static void pts_analysis_cb(GstElement *identity,
                            GstBuffer  *buffer,
                            gpointer    user_data)
{
    GstClockTime pts = GST_BUFFER_PTS(buffer);
    /* analysis */
}

g_signal_connect(identity, "handoff",
                 G_CALLBACK(pts_analysis_cb),
                 NULL);
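If you also want to turn that PTS into a rough latency number, one option is to compare it against the pipeline's current running time inside the callback. A minimal sketch of an expanded version of the callback above, assuming the PTS is already in running time (no segment offset, which is typical for a simple live pipeline) and that you pass your pipeline element as user_data:

static void pts_analysis_cb(GstElement *identity,
                            GstBuffer  *buffer,
                            gpointer    user_data)
{
    GstElement  *pipeline = GST_ELEMENT(user_data);
    GstClockTime pts      = GST_BUFFER_PTS(buffer);
    GstClock    *clock    = gst_element_get_clock(pipeline);

    if (clock != NULL && GST_CLOCK_TIME_IS_VALID(pts)) {
        /* running time = absolute clock time - base time */
        GstClockTime running_time =
            gst_clock_get_time(clock) - gst_element_get_base_time(pipeline);

        if (running_time > pts)
            g_print("approx. latency: %" GST_TIME_FORMAT "\n",
                    GST_TIME_ARGS(running_time - pts));
    }

    if (clock != NULL)
        gst_object_unref(clock);
}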
If you're seeing MJPEG-related latency though, you may just need to set sync=false on your tail element, or set flags to drop buffers if it's falling behind.
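For example (assuming imxipusink, like most sinks, inherits the standard basesink sync property):

souphttpsrc ! jpegparse ! identity ! imxvpudec ! imxipusink sync=false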
We are decoding RTSP stream frames using GStreamer in C++. We need to read the frame NTP timestamps, which we think reside in RTCP packets. After some documentation digging, we found an element called GstRTPBaseDepayload, which has a property called "stats" containing a field "timestamp", explained as the "last seen RTP timestamp".
Our original pipeline:
gst-launch-1.0 rtspsrc port-range=5000-5100 location="rtsp://.." latency=300 is-live=true ! queue ! rtph265depay name=depayer ! video/x-h265, stream-format=byte-stream, alignment=au ! h265parse ! video/x-h265, stream-format=byte-stream, alignment=au ! appsink name=mysink sync=true
I named the depay element as rtph265depay name=dp, then:
depayer_=gst_bin_get_by_name(GST_BIN(pipeline_), "dp");
GstStructure * stat;
g_object_get((GstRTPBaseDepayload*)depayer_,"stats",stat);
GType type = gst_structure_get_field_type(stat,"timestamp");
It gave an error saying that the stat structure does not have a field, in fact, it did not have any fields. I did not find any example usage of GstRTPBaseDepayload, and the documentation is lacking as always. I would appreciate any guidance regarding the frame timestamps.
Edit:
I also tried to check if depayer_ has a null value:
depayer_=gst_bin_get_by_name(GST_BIN(pipeline_), "dp");
if(depayer_!=nullptr){
GstStructure * stat;
// GstRTPBaseDepayload* depayload;
g_object_get(depayer_,"stats",stat,NULL);
if(gst_structure_has_field(stat,"timestamp")){ //this line causes segfault
guint timestamp;
gst_structure_get_uint(stat,"timestamp",&timestamp);
}
}
Neither the depayer nor the stat object is null; however, gst_structure_has_field(stat,"timestamp") causes a segfault. Any help is much appreciated.
I guess you can try
GstStructure *stat;
// GstRTPBaseDepayload* depayload;
g_object_get(depayer_,"stats",&stat,NULL);
Notice I used &stat in g_object_get, not stat.
https://docs.gtk.org/gobject/method.Object.get.html
I found an example:
https://github.com/pexip/gst-rtsp-server/blob/master/examples/test-mp4.c#L38-L42
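Putting it together, something like this should read the field without the segfault. A sketch only; I believe the "timestamp" field in the depayloader's stats structure is a guint holding the last RTP timestamp, but check it against your GStreamer version:

GstStructure *stat = NULL;

g_object_get(depayer_, "stats", &stat, NULL);
if (stat != NULL) {
    if (gst_structure_has_field(stat, "timestamp")) {
        guint timestamp = 0;
        gst_structure_get_uint(stat, "timestamp", &timestamp);
        g_print("last seen RTP timestamp: %u\n", timestamp);
    }
    /* g_object_get hands us a copy of the boxed structure, so free it */
    gst_structure_free(stat);
}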
I am working on a streaming device with a CSI camera input. I want to duplicate the incoming stream with tee and subsequently access each of these streams under a different URL using gst-rtsp-server. I can have only one consumer on my camera, so it is impossible to have two standalone pipelines. Is this possible? See the pseudo-pipeline below.
source -> tee name=t -> rtsp with url0 .t -> rtsp with url1
Thanks!
EDIT 1:
I tried the first solution with the appsink/appsrc pair, but I was only half successful. Now I have two pipelines.
nvv4l2camerasrc device=/dev/video0 ! video/x-raw(memory:NVMM), width=1920, height=1080, format=UYVY, framerate=50/1 ! nvvidconv name=conv ! video/x-raw(memory:NVMM), width=1280, height=720, format=NV12, framerate=50/1 ! nvv4l2h264enc control-rate=1 bitrate=10000000 preset-level=1 profile=0 disable-cabac=1 maxperf-enable=1 name=encoder insert-sps-pps=1 insert-vui=1 ! appsink name=appsink sync=false
and
appsrc name=appsrc format=3 is-live=true do-timestamp=true ! queue ! rtph264pay config-interval=1 name=pay0
The second pipeline is used to create the media factory. I push the buffers from the appsink to the appsrc in the callback for the new-sample signal, like this:
static GstFlowReturn
on_new_sample_from_sink (GstElement * elt, void * data)
{
  GstSample *sample;
  GstFlowReturn ret = GST_FLOW_OK;

  /* get the sample from appsink */
  sample = gst_app_sink_pull_sample (GST_APP_SINK (elt));
  if (appsrc)
  {
    ret = gst_app_src_push_sample (GST_APP_SRC (appsrc), sample);
  }
  gst_sample_unref (sample);
  return ret;
}
This works - the video is streamed and can be seen on a different machine using GStreamer or VLC. The problem is latency. For some reason the latency is about 3 s.
When I merge these two pipelines into one and create the media factory directly, without using appsink and appsrc, it works fine without large latency.
I think that for some reason the appsrc is queuing buffers before it starts pushing them to its source pad. In the debug output below you can see the number of queued bytes it stabilizes at.
0:00:19.202295929 9724 0x7f680030f0 DEBUG appsrc gstappsrc.c:1819:gst_app_src_push_internal:<appsrc> queue filled (1113444 >= 200000)
0:00:19.202331834 9724 0x7f680030f0 DEBUG appsrc gstappsrc.c:1819:gst_app_src_push_internal:<appsrc> queue filled (1113444 >= 200000)
0:00:19.202353818 9724 0x7f680030f0 DEBUG appsrc gstappsrc.c:1863:gst_app_src_push_internal:<appsrc> queueing buffer 0x7f58039690
0:00:19.222150573 9724 0x7f680030f0 DEBUG appsrc gstappsrc.c:1819:gst_app_src_push_internal:<appsrc> queue filled (1141310 >= 200000)
0:00:19.222184302 9724 0x7f680030f0 DEBUG appsrc gstappsrc.c:1819:gst_app_src_push_internal:<appsrc> queue filled (1141310 >= 200000)
EDIT 2:
I added the max-buffers property to the appsink and the suggested properties to the queues, but it didn't help at all.
I just don't understand how it can buffer so many buffers, and why. If I run my test application with GST_DEBUG=appsrc:5, I get output like this:
0:00:47.923713520 14035 0x7f68003850 DEBUG appsrc gstappsrc.c:1819:gst_app_src_push_internal:<appsrc> queue filled (2507045 >= 200000)
0:00:47.923757840 14035 0x7f68003850 DEBUG appsrc gstappsrc.c:1819:gst_app_src_push_internal:<appsrc> queue filled (2507045 >= 200000)
According to this debug output, everything is queued in the appsrc even though its max-bytes property is set to 200,000 bytes. Maybe I don't understand it correctly, but it looks weird to me.
My pipelines are currently like this.
nvv4l2camerasrc device=/dev/video0 ! video/x-raw(memory:NVMM), width=1920, height=1080, format=UYVY, framerate=50/1 ! queue max-size-buffers=3 leaky=downstream ! nvvidconv name=conv ! video/x-raw(memory:NVMM), width=1280, height=720, format=NV12, framerate=50/1 ! nvv4l2h264enc control-rate=1 bitrate=10000000 preset-level=1 profile=0 disable-cabac=1 maxperf-enable=1 name=encoder insert-sps-pps=1 insert-vui=1 ! appsink name=appsink sync=false max-buffers=3
and
appsrc name=appsrc format=3 stream-type=0 is-live=true do-timestamp=true blocksize=16384 max-bytes=200000 ! queue max-size-buffers=3 leaky=no ! rtph264pay config-interval=1 name=pay0
I can think of three possibilities:

1. Use appsink/appsrc (as in this example) to separate the pipeline into something like:

    Capture pipeline              Factory with URL 1
   .-------------------------.   .-------------------------------.
   | v4l2src ! ... ! appsink |   | appsrc ! encoder ! rtph264pay |
   '-------------------------'   '-------------------------------'
                                 .-------------------------------.
                                 | appsrc ! encoder ! rtph264pay |
                                 '-------------------------------'
                                  Factory with URL 2

   You would manually pull buffers out of the appsink and push them into the different appsrcs.

2. Build something like the above, but use something like interpipes or intervideosink/intervideosrc in place of the appsink/appsrc so the buffer transfer happens automatically (see the sketch after this list).

3. Use something like GstRtspSink (paid product though).
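For option 2, a rough sketch of what the intervideosink/intervideosrc variant could look like. The inter elements live in gst-plugins-bad, the channel name cam0 is made up, and you would need to verify that the conversions to and from system memory match your platform:

Capture pipeline:

nvv4l2camerasrc device=/dev/video0 ! video/x-raw(memory:NVMM), width=1920, height=1080, format=UYVY, framerate=50/1 ! nvvidconv ! video/x-raw, format=NV12 ! intervideosink channel=cam0

Per-URL media factory launch line (one per mount point):

intervideosrc channel=cam0 ! nvvidconv ! video/x-raw(memory:NVMM), format=NV12 ! nvv4l2h264enc control-rate=1 bitrate=10000000 insert-sps-pps=1 ! h264parse ! rtph264pay config-interval=1 name=pay0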
I am new to GStreamer. I wrote a simple RTSP server that generates a pipeline like:
appsrc name=vsrc is-live=true do-timestamp=true ! queue ! h264parse ! rtph264pay name=pay0 pt=96
The SDP response is generated after the DESCRIBE request, but only once a few frames have been pushed into the appsrc via the signal handler:
vsrc = gst_bin_get_by_name_recurse_up(GST_BIN(element), "vsrc"); // appsrc
if (nullptr != vsrc)
{
    gst_util_set_object_arg(G_OBJECT(vsrc), "format", "time");
    g_signal_connect(vsrc, "need-data", (GCallback)need_video_data, streamResource);
}
The time from which the video should be played is passed in the RTSP PLAY request, in the Range header, as an absolute time:
PLAY rtsp://172.19.9.65:554/Recording/ RTSP/1.0
CSeq: 4
Immediate: yes
Range: clock=20220127T082831.039Z- // Start from ...
I attached a handler to a signal on the GstRTSPClient object, in which I process this request and seek to the right time in my appsrc:
g_signal_connect(client, "pre-play-request", (GCallback)pre_play_request, NULL);
The problem is that, at this point, frames from my appsrc's start time have already entered the pipeline, so I watch them first, and only then does playback continue from the time specified in the PLAY request.
Can you please tell me how I can cut off these initial frames that came in before the PLAY call?
I've tried:
gst_element_seek - doesn't help, because of peculiarities of the appsrc implementation
Flushing didn't help either; I tried resetting the sink pad of the rtph264pay element:
gst_pad_push_event(sinkPad, gst_event_new_flush_start());
GST_PAD_STREAM_LOCK(sinkPad);
// ... seek in appsrc
gst_pad_push_event(sinkPad, gst_event_new_flush_stop(TRUE));
GST_PAD_STREAM_UNLOCK(sinkPad);
gst_object_unref(sinkPad);
Thank You!
I am designing a pipeline to encode video frames from an OpenCV application (captured from a webcam) to video/x-h264 format, send them over the network, and decode them on another device of a different type (probably a Raspberry Pi) back to a proper RGB stream for my project.
For this I am supposed to use a hardware-accelerated encoder and decoder.
Since the whole scenario is huge, the current development is performed on an Intel machine using the GStreamer VAAPI plugins (vaapiencode_h264 & vaapidecode). Also note that we must NOT use any of the networking plugins like TCPServer or UDPServer.
For this I have used the pipeline below:
On the Encoder End:
appsrc name=applicationSource ! videoconvert ! video/x-raw, format=I420, width=640, height=480,framerate=30/1, pixel-aspect-ratio=1/1,interlace-mode=progressive ! vaapiencode_h264 bitrate=600 tune=high-compression ! h264parse config-interval=1 ! appsink name=applicationSink sync=false
The appsrc part works perfectly well, while the appsink part is having some issues.
The appsink part of this pipeline has been set with the following caps:
"video/x-h264, format=(string){avc,avc3,byte-stream },alignment=(string){au,nal};video/mpeg, mpegversion=(int)2, profile=(string)simple"
The code for extracting data from my appsink is:
bool HWEncoder::grabData()
{
    // initial checks..
    if (!cameraPipeline)
    {
        GST_ERROR("ERROR AS TO NO PIPE FOUND ... Stopping FRAME GRAB HERE !! ");
        return false;
    }

    if (gst_app_sink_is_eos (GST_APP_SINK(applicationSink)))
    {
        GST_WARNING("APP SINK GAVE US AN EOS! BAILING OUT ");
        return false;
    }

    if (sample)
    {
        cout << "sample available ... unrefing it ! " << endl;
        gst_sample_unref(sample);
    }

    sample = gst_app_sink_pull_sample (GST_APP_SINK(applicationSink));
    if (!sample)
    {
        GST_WARNING("No valid sample");
        return false; // no valid sample pulled !
    }

    sink_buffer = gst_sample_get_buffer(sample);
    if (!sink_buffer)
    {
        GST_ERROR("No Valid Buffer ");
        return false;
    }

    return true;
}
After bringing up the pipeline and checking that buffers are filling up in my appsink, I get stuck indefinitely at the following line of my code:
sample = gst_app_sink_pull_sample (GST_APP_SINK(applicationSink));
I have the following questions:
1) Are my caps for the appsink correct? If not, how can I determine the right caps for them?
2) Is there something wrong in my pipeline above?
3) How can I fix this issue with the appsink?
Any kind of help would be useful!
Thanks !!
Just a guess (I had similar problems): the problem with having appsink and appsrc in the same pipeline may be that when you fill/empty one of them, it blocks the other (more on that below).
appsink and appsrc will block when they are full/empty - this is normal, desired behaviour. There is the drop option for appsink, and for appsrc there is the block option - but using these may just be a workaround and you will get glitches in your stream. The proper solution is to handle the synchronisation between appsrc and appsink in a better way.
You can react to the appsrc signals enough-data and need-data - this is what we do. We also fiddled with the appsrc properties is-live, do-timestamp and the buffer size (this may or may not help you):
g_object_set(src->appsrc,
             "stream-type", GST_APP_STREAM_TYPE_STREAM,
             "format", GST_FORMAT_TIME,
             "do-timestamp", TRUE,
             "is-live", TRUE,
             "block", TRUE,
             NULL);
Why do they block each other?
Because (I guess) you process the appsink and the appsrc at the same time in the main application thread. When one of the appsink/appsrc pair blocks the thread, there is no one left to handle the processing for the other one. So when the appsink is blocked because it does not have any data, there is no one who can feed the appsrc with new data - hence the endless deadlock.
We also implemented a non-blocking version of the appsink *_pull_sample method, but it was just a workaround and resulted in more problems than solutions.
If you want to debug what is happening, you can add a GST_DEBUG entry for appsrc/appsink (I do not remember what they were), add callbacks on the mentioned enough-data and need-data signals, or add queues and enable GST_DEBUG=queue_dataflow:5 to see which queue fills up first, etc. This is always helpful when debugging a "data deadlock".
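For reference, a minimal sketch of reacting to those signals. The flag and the producer-side check are placeholders for however your application feeds the appsrc; in real code, guard the flag with a mutex or use an atomic:

static volatile gboolean feed_enabled = FALSE;

/* appsrc ran low on data: resume pushing buffers from the producer side */
static void on_need_data (GstElement *appsrc, guint unused_size, gpointer user_data)
{
    feed_enabled = TRUE;
}

/* appsrc has queued enough: pause pushing until need-data fires again */
static void on_enough_data (GstElement *appsrc, gpointer user_data)
{
    feed_enabled = FALSE;
}

/* ... after creating the appsrc ... */
g_signal_connect (appsrc, "need-data",   G_CALLBACK (on_need_data),   NULL);
g_signal_connect (appsrc, "enough-data", G_CALLBACK (on_enough_data), NULL);

/* in the producer (e.g. the appsink new-sample callback), only push when allowed */
if (feed_enabled)
    gst_app_src_push_sample (GST_APP_SRC (appsrc), sample);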
My pipeline scheme (dynamic linking):
videotestsrc OR audiotestsrc ! decodebin ! queue ! autovideosink OR autoaudiosink
I'm trying to use this advice to check which type of data I got (video/audio), but if I use decodebin as a demuxer, then the pad is just named "src_0" instead of "audio" or "video". How can I check the pad type so I can link the right element for playback? Maybe I can use one universal element for both audio and video playback, like playsink (but it does not work for video)?
You can get the caps of the newly added pad and check if it contains audio or video caps (or something else).
Try with:
gst_pad_get_current_caps (pad);
or:
gst_pad_get_allowed_caps (pad);
If you are using GStreamer 0.10 (which is 3+ years obsolete and unmaintained), you have:
gst_pad_get_caps_reffed (pad);
Then just check whether the returned caps are audio or video by getting the structure from the caps and checking if its name starts with video or audio.
/* There might be multiple structures depending on how you do it,
 * but usually checking one in this case is enough */
structure = gst_caps_get_structure (caps, 0);
name = gst_structure_get_name (structure);

if (g_str_has_prefix (name, "video/")) {
  ...
} else if (g_str_has_prefix (name, "audio/")) {
  ...
}
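Put together, a pad-added handler could look roughly like this (a sketch; the linking in each branch is left as a comment since it depends on your pipeline):

static void
on_pad_added (GstElement *decodebin, GstPad *pad, gpointer user_data)
{
    GstCaps *caps = gst_pad_get_current_caps (pad);
    if (caps == NULL)
        caps = gst_pad_query_caps (pad, NULL);

    GstStructure *structure = gst_caps_get_structure (caps, 0);
    const gchar *name = gst_structure_get_name (structure);

    if (g_str_has_prefix (name, "video/")) {
        /* link pad to your video branch, e.g. queue ! autovideosink */
    } else if (g_str_has_prefix (name, "audio/")) {
        /* link pad to your audio branch, e.g. queue ! autoaudiosink */
    }

    gst_caps_unref (caps);
}

/* ... */
g_signal_connect (decodebin, "pad-added", G_CALLBACK (on_pad_added), NULL);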