GStreamer AppSrc AppSink latency - c++

I would like to use GStreamer to decode H264 frames. My pipeline looks like this:
appsrc name=source max-buffers=1 ! video/x-h264, framerate=10/1, stream-format=byte-stream ! h264parse name=parser ! video/x-h264, profile=baseline, width=320, height=240 ! nvh264dec name=decoder ! appsink name=sink max-buffers=1 drop=TRUE
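For reference, I build the pipeline roughly like this and fetch the named elements (a sketch using gst_parse_launch; error handling omitted):

#include <gst/gst.h>

// sketch: pipeline built with gst_parse_launch (construction details assumed)
GError *error = NULL;
GstElement *pipeline = gst_parse_launch(
    "appsrc name=source max-buffers=1 ! video/x-h264, framerate=10/1, stream-format=byte-stream ! "
    "h264parse name=parser ! video/x-h264, profile=baseline, width=320, height=240 ! "
    "nvh264dec name=decoder ! appsink name=sink max-buffers=1 drop=TRUE", &error);
GstElement *source = gst_bin_get_by_name(GST_BIN(pipeline), "source"); // the appsrc
GstElement *sink = gst_bin_get_by_name(GST_BIN(pipeline), "sink");     // the appsink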
AppSrc is configured in push mode. I have an encoding thread that pushes H264 frames to appsrc. At the end of the pipeline, I receive the decoded frames through AppSink. This works well except that there is latency between the time a frame is pushed into the pipeline and the time it reaches AppSink. More precisely, I noticed that:
the sink always receives the first frame after the 6th frame has been pushed to the source.
the sink and all upstream elements receive a "latency" event after the 6th frame has been pushed to the source.
the pipeline state changes from READY to PAUSED just after the latency event
the pipeline state changes from PAUSED to PLAYING just after the source receives the latency event, i.e. just before the sink receives the first frame.
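(For what it's worth, here is a minimal sketch of how such a latency event can be observed, via an upstream event probe; the pad variable is illustrative:)

static GstPadProbeReturn on_upstream_event(GstPad *pad, GstPadProbeInfo *info, gpointer user_data) {
  GstEvent *event = GST_PAD_PROBE_INFO_EVENT(info);
  if (GST_EVENT_TYPE(event) == GST_EVENT_LATENCY) {
    GstClockTime latency;
    gst_event_parse_latency(event, &latency);
    g_print("LATENCY event on %s: %" GST_TIME_FORMAT "\n", GST_PAD_NAME(pad), GST_TIME_ARGS(latency));
  }
  return GST_PAD_PROBE_OK;
}
// attach to e.g. the parser's src pad (latency events travel upstream):
gst_pad_add_probe(parser_src_pad, GST_PAD_PROBE_TYPE_EVENT_UPSTREAM, on_upstream_event, NULL, NULL);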
So the timeline looks like this (with an encoding thread at 10 FPS):
Time (ms)   AppSrc    AppSink
0           Frame 1   -
100         Frame 2   -
200         Frame 3   -
300         Frame 4   -
400         Frame 5   -
500         Frame 6   Frame 1
600         Frame 7   Frame 2
700         Frame 8   Frame 3
800         Frame 9   Frame 4
etc ...
It seems the delay is directly linked to the size of the pipeline. I removed the decoder in order to test this hypothesis (of course it doesn't make sense from a functional point of view):
appsrc name=mysource max-buffers=1 ! video/x-h264, framerate=10/1, stream-format=byte-stream ! h264parse name=parser ! appsink name=sink max-buffers=1 drop=TRUE sync=FALSE
With this shortened pipeline, I receive the first frame in the sink after the third frame has been pushed.
I would like to receive the first frame in the sink as soon as it is pushed into the pipeline. Actually the input framerate is not that important for me; the latency is the key point. I'm even ready to consider a solution where the encoding thread is blocked until the frame reaches the sink before pushing a new one.
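For that blocking variant, I picture something like the following sketch (assuming source and sink are the appsrc/appsink handles obtained with gst_bin_get_by_name; error handling omitted):

#include <gst/app/gstappsrc.h>
#include <gst/app/gstappsink.h>

// push one encoded frame, then block until the decoded frame comes out
GstFlowReturn ret = gst_app_src_push_buffer(GST_APP_SRC(source), buffer); // takes ownership of buffer
if (ret == GST_FLOW_OK) {
  GstSample *sample = gst_app_sink_pull_sample(GST_APP_SINK(sink)); // blocks until a sample (or EOS) arrives
  if (sample) {
    // ... use the decoded frame ...
    gst_sample_unref(sample);
  }
}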
Any help would be greatly appreciated.
Regards,
PY

Related

Gstreamer screenshot from RTSP stream is always gray

I'm trying to create a screenshot (i.e. grab one frame) from an RTSP camera stream using a GStreamer pipeline.
The pipeline used looks like this:
gst-launch-1.0 rtspsrc location=$CAM_URL is_live=true ! decodebin ! videoconvert ! jpegenc snapshot=true ! filesink location=/tmp/frame.jpg
The problem is that the resulting image is always gray, with random artifacts. It looks like it's grabbing the very first frame, and it doesn't wait for a key frame.
Is there any way how can I modify the pipeline to actually grab first valid frame of video? Or just wait long enough to be sure that there was at least one key frame already?
I'm unsure why, but after some trial and error it is now working with decodebin3 instead of decodebin. The documentation is still a bit discouraging though, stating that "decodebin3 is still experimental API and a technology preview. Its behaviour and exposed API is subject to change."
Full pipeline looks like this:
gst-launch-1.0 rtspsrc location=$CAM_URL is_live=true ! decodebin3 ! videoconvert ! jpegenc snapshot=true ! filesink location=/tmp/frame.jpg
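If you'd rather keep decodebin and explicitly wait for a keyframe, an alternative I have not tested is a buffer probe that drops delta frames until the first keyframe passes (the pad variable is illustrative):

static GstPadProbeReturn drop_until_keyframe(GstPad *pad, GstPadProbeInfo *info, gpointer user_data) {
  GstBuffer *buf = GST_PAD_PROBE_INFO_BUFFER(info);
  if (GST_BUFFER_FLAG_IS_SET(buf, GST_BUFFER_FLAG_DELTA_UNIT))
    return GST_PAD_PROBE_DROP;   // not a keyframe: discard it
  return GST_PAD_PROBE_REMOVE;   // keyframe: let it through and stop probing
}
// attach upstream of the decoder, e.g. on the depayloader's src pad:
gst_pad_add_probe(depay_src_pad, GST_PAD_PROBE_TYPE_BUFFER, drop_until_keyframe, NULL, NULL);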

Sending mkv file over udpsink gstreamer

I am trying to stream an mkv file over udpsink as an RTP payload, but when I receive the packets, almost every other frame is dropped. The stream just freezes for a couple of seconds and then shows the time-synced frame. I have a timestamp on the video file and it jumps by the same amount of time it freezes for. The setup: GStreamer runs on a Raspberry Pi 4, and I view the stream using VLC on another computer.
I have the payloading working, so it sends over udpsink successfully using:
gst-launch-1.0 filesrc location=file.mkv ! matroskademux ! rtph264pay ! udpsink host=127.0.0.1 port=8004
I tried changing the buffer size of the udpsink, but it had little effect (it might have increased the number of shown frames a bit).
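One commonly suggested variant, which I haven't verified here, is to insert h264parse before the payloader and have SPS/PPS resent periodically via config-interval:
gst-launch-1.0 filesrc location=file.mkv ! matroskademux ! h264parse ! rtph264pay config-interval=1 ! udpsink host=127.0.0.1 port=8004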

Gstreamer: Save image/jpeg using multifilesink every 5 seconds

I am trying to figure out how to save an image using multifilesink every N seconds (let's say 5). My gst-launch-1.0 pipeline is below:
gst-launch-1.0 videotestsrc ! 'video/x-raw, format=I420, width=400, height=400, framerate=1/5' ! jpegenc ! multifilesink location=/some/location/img_%06d.jpg
I was thinking the framerate option could control the capture speed, but it doesn't seem to affect anything. How can I delay this pipeline to only save a JPEG every N seconds?
Edit: I figured out that this works with videotestsrc if you set "is-live=true", but I would like to do this with nvcamerasrc or nvarguscamerasrc.
When the videotestsrc is not running as a live source, it will pump out frames as fast as it can, updating timestamps based on the output framerate configured on the source pad.
Setting it to live-mode will ensure that it actually matches the expected framerate.
This shouldn't be an issue with a true live source like a camera source.
However, something like this can force synchronization with the videotestsrc:
gst-launch-1.0.exe videotestsrc ! video/x-raw, format=I420, width=400, height=400, framerate=1/5 ! identity sync=true ! timeoverlay ! jpegenc ! multifilesink location="/some/location/img_%06d.jpg"
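For a live camera on Jetson, a sketch along these lines might work (nvvidconv and the caps are assumptions on my part, untested):
gst-launch-1.0 nvarguscamerasrc ! nvvidconv ! video/x-raw,format=I420 ! videorate drop-only=true ! video/x-raw,framerate=1/5 ! jpegenc ! multifilesink location=/some/location/img_%06d.jpg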

Change framerate in GStreamer pipeline twice

I have a problem building a pipeline in GStreamer.
My pipeline looks like this:
gst-launch-1.0 videotestsrc is-live=true ! videorate ! video/x-raw,framerate=200/1 ! videorate max-rate=50 ! videoconvert ! x264enc bitrate=500000 byte-stream=true ! h264parse ! rtph264pay mtu=1400 ! udpsink host=127.0.0.1 port=5000 sync=false async=true
At this point, I am optimizing the pipeline for my application. Instead of videotestsrc, the pipeline will use appsrc, which gets frames from the application: every time appsrc asks for a frame, the application returns one. The camera runs at about 50 FPS.
I'll explain with a picture:
The gray line means time. Let's say the camera sends a frame every 20 ms (50 FPS, red dots) and appsrc also asks every 20 ms, but always 1 ms before the camera produces a new frame (blue dots). This generates a delay of 19 ms, which I am trying to get as low as possible.
My idea is to use videorate ! video/x-raw,framerate=200/1 to make the source ask for a new frame every 5 ms, i.e. 4 times faster than the camera produces frames, which means 4 consecutive frames will be identical. After getting those "newest" frames, I want to limit the framerate back to 50 FPS (without re-encoding) using videorate max-rate=50.
The problem is, my pipeline doesn't work in the application, nor even as a gst-launch-1.0 terminal command.
How can I control framerate twice in one pipeline? Is there any other solution?
Use g_object_set (set_property in language bindings) to set/modify properties of your element. The element handle can be obtained using gst_element_factory_make.
GstElement *rate = gst_element_factory_make("videorate", "vrate");
g_object_set(G_OBJECT(rate), "max-rate", 50, NULL);
You can set/modify the values based on your requirements when the pipeline is playing.
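For example, a minimal sketch of adjusting max-rate on a playing pipeline (assuming the element was named vrate as above):

GstElement *rate = gst_bin_get_by_name(GST_BIN(pipeline), "vrate");
g_object_set(G_OBJECT(rate), "max-rate", 50, NULL); // takes effect without stopping the pipeline
gst_object_unref(rate);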

Why does decreasing the framerate with videorate incur a significant CPU performance penalty?

My understanding of the videorate element is that framerate correction is performed by simply dropping frames and no "fancy algorithm" is used. I've profiled CPU usage for a gst-launch-1.0 pipeline and I've observed that as the framerate decreases below 1 FPS, CPU usage, counter-intuitively, increases dramatically.
Sample pipeline (you can observe the performance penalty by changing the framerate fraction):
gst-launch-1.0 filesrc location=test.mp4 ! qtdemux ! h264parse ! avdec_h264 ! videorate drop-only=true ! video/x-raw,framerate=1/10 ! autovideosink
I would expect that decreasing the framerate would reduce the amount of processing required throughout the rest of the pipeline. Any insight into this phenomenon would be appreciated.
System info: Centos 7, GStreamer 1.4.5
EDIT: It seems this happens with videotestsrc as well, but only if you specify a high framerate on the source.
videotestsrc pattern=snow ! video/x-raw,width=1920,height=1080,framerate=25/1 ! videorate drop-only=true ! video/x-raw,framerate=1/10 ! autovideosink
Removing the framerate from the videotestsrc caps puts CPU usage at 1%, and usage increases as the videorate framerate increases. Meanwhile, setting the source to 25/1 FPS increases CPU usage to 50% and it lowers as the videorate framerate increases.
Tozar, I'm going to specifically address the pipeline you posted in your comment above.
If you're only going to be sending a frame once every ten seconds there's probably no need to use h264. In ten seconds time the frame will have changed completely and there will be no data similarities to be encoded for bandwidth savings. The encoder will likely just assume a new keyframe is needed. You could go with jpegenc and rtpjpegpay as alternatives.
If you're re-encoding the content you'll definitely see a CPU spike every ten seconds. It's just not avoidable.
If you want to place CPU usage as low as possible on the machine doing the transformation, you could go to the work of parsing the incoming h264 data, pulling out the key frames (IDR frames), and then passing those along to the secondary destination. That would be assuming the original transmitter sent keyframes though (no intra refresh). It would not be easy.
You may want to form a more general question about what you're trying to do. What is the role of the machine doing the transformation? Does it have to use the data at all itself? What type of machine is receiving the frames every ten seconds and what is its role?
videorate is tricky and you need to consider it in conjunction with every other element in the pipeline. You also need to be aware of how much CPU time is actually available to cut off. For example, if you're decoding a 60fps file and displaying it at 1fps, you'll still be eating a lot of CPU. You can output to fakesink with sync set to true to see how much CPU you could actually save.
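For example, with the pipeline from the question (file location assumed):
gst-launch-1.0 filesrc location=test.mp4 ! qtdemux ! h264parse ! avdec_h264 ! videorate drop-only=true ! video/x-raw,framerate=1/10 ! fakesink sync=true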
I recommend adding a bit of debug info to better understand videorate's behavior.
export GST_DEBUG=2,videorate:7
Then you can grep for "pushing buffer" for when it pushes:
gst-launch-1.0 [PIPELINE] 2>&1 | grep "pushing buffer"
..and for storing buffer when it receives data:
gst-launch-1.0 [PIPELINE] 2>&1 | grep "storing buffer"
In the case of decoding from a filesrc, you're going to see bursts of CPU activity: the decoder will run through, say, 60 frames, realize the pipeline is filled, pause, wait until a need-buffers event comes in, then burst to 100% CPU to fill the pipeline again.
There are other factors too. Like you may need to be careful that you have queue elements between certain bottlenecks, with the correct max-size attributes set. Or your sink or source elements could be behaving in unexpected ways.
To get the best possible answer for your question, I'd suggest posting the exact pipeline you intend to use, with and without the videorate. If you have something like "autovideosink" change that to the element it actually resolves to on your system.
Here are a few pipelines I tested with:
gst-launch-1.0 videotestsrc pattern=snow ! video/x-raw,width=320,height=180,framerate=60/1 ! videorate ! videoscale method=lanczos ! video/x-raw,width=1920,height=1080,framerate=60/1 ! ximagesink
(30% CPU in htop)
gst-launch-1.0 videotestsrc pattern=snow ! video/x-raw,width=320,height=180,framerate=60/1 ! videorate ! videoscale method=lanczos ! video/x-raw,width=1920,height=1080,framerate=1/10 ! ximagesink
(0% CPU with 10% spikes in htop)