GStreamer videorate framerate vs max-rate

I want to reduce the framerate of an RTSP source to 2 frames per second. I am using a GStreamer pipeline, but I don't understand the difference between the framerate field of the video/x-raw caps and the max-rate property of videorate (documentation here).
From the docs, max-rate is the "maximum framerate to pass through". So what's the difference between setting max-rate and doing videorate ! video/x-raw,framerate=25/2 !? In my tests, max-rate does not seem to work.

I'm looking into exactly the same thing. Apparently max-rate allows you to input NVMM buffers, while setting the rate via caps doesn't.
But the docs aren't clear.

Related

Gstreamer screenshot from RTSP stream is always gray

I'm trying to create a screenshot (i.e. grab one frame) from an RTSP camera stream using a GStreamer pipeline.
The pipeline used looks like this:
gst-launch-1.0 rtspsrc location=$CAM_URL is_live=true ! decodebin ! videoconvert ! jpegenc snapshot=true ! filesink location=/tmp/frame.jpg
The problem is that the resulting image is always gray, with random artifacts. It looks like it grabs the very first frame and doesn't wait for a key frame.
Is there any way I can modify the pipeline to grab the first valid frame of video? Or just wait long enough to be sure that at least one key frame has arrived?
I'm unsure why, but after some trial and error it is now working with decodebin3 instead of decodebin. The documentation is still a bit discouraging though, stating that decodebin3 "is still experimental API and a technology preview. Its behaviour and exposed API is subject to change."
Full pipeline looks like this:
gst-launch-1.0 rtspsrc location=$CAM_URL is_live=true ! decodebin3 ! videoconvert ! jpegenc snapshot=true ! filesink location=/tmp/frame.jpg

Capture still images using GStreamer on an RPi4

I am trying to get a camera that uses GStreamer to capture images at a specific time interval. While searching the web I found the following line:
gst-launch-1.0 -v videotestsrc is-live=true ! clockoverlay font-desc="Sans, 48" ! videoconvert ! videorate ! video/x-raw,framerate=1/3 ! jpegenc ! multifilesink location=file-%02d.jpg
I believe it would work great, but unfortunately I don't know how to make it work with my specific camera: I don't know how to identify my video source on the RPi4, or whether that is the only thing I need to change. I would appreciate either help with the video source or any other method to get those images.

Straightforward way to downscale too large video with GStreamer

I'm using GStreamer to process videos in my project. The input videos can have various formats (both resolution and aspect ratio): for example 400x300, 1080p, 4K, 2000x1000, etc.
I would like to automatically downscale videos which are larger than 1080p. So if a video is in 4K it should be downscaled to 1080p, but if it is 400x300 it should be kept in its original format.
I've found the videoscale element, but it does not work as I would like: it scales both up and down indiscriminately, and it does not scale proportionally when only a width or height is provided.
Do you know any straightforward way in GStreamer to downscale resolutions automatically to desired size?
GStreamer's caps allow ranges. So I believe you are looking for something like this:
video/x-raw,width=[1,1920],height=[1,1080],pixel-aspect-ratio=1/1
This will keep the same aspect ratio but scale down when required to fit into 1920x1080.
E.g.
gst-launch-1.0 videotestsrc ! video/x-raw,width=4000,height=2000 ! videoscale ! video/x-raw,width=[1,1920],height=[1,1080],pixel-aspect-ratio=1/1 ! autovideosink
Will be scaled down to 1920x960.
And:
gst-launch-1.0 videotestsrc ! video/x-raw,width=400,height=200 ! videoscale ! video/x-raw,width=[1,1920],height=[1,1080],pixel-aspect-ratio=1/1 ! autovideosink
Will stay at 400x200.

Change framerate in GStreamer pipeline twice

I have a problem building a pipeline in GStreamer.
My pipeline looks like this:
gst-launch-1.0 videotestsrc is-live=true ! videorate ! video/x-raw,framerate=200/1 ! videorate max-rate=50 ! videoconvert ! x264enc bitrate=500000 byte-stream=true ! h264parse ! rtph264pay mtu=1400 ! udpsink host=127.0.0.1 port=5000 sync=false async=true
At this point, I am optimizing the pipeline for my application. Instead of videotestsrc, the pipeline will use appsrc, which gets frames from the application: every time appsrc asks for a frame, the application returns one. The camera runs at about 50 FPS.
I'll illustrate with a picture:
The gray line represents time. Let's say the camera sends a frame every 20 ms (50 FPS, red dots) and appsrc asks every 20 ms, but always asks 1 ms before the camera produces a new frame (blue dots). This creates a delay of 19 ms, which I am trying to get as low as possible.
My idea is to use videorate ! video/x-raw,framerate=200/1 to make the source ask for a new frame every 5 ms. The blue dots then fire 4 times faster than the camera produces frames, so 4 consecutive requests return the same frame. After getting these "newest" frames, I want to limit the framerate back to 50 FPS (without encoding) using videorate max-rate=50.
The problem is that my pipeline doesn't work in the application, nor even as a gst-launch-1.0 command in the terminal.
How can I control framerate twice in one pipeline? Is there any other solution?
Use g_object_set to set/modify properties of your element. The element handle can be obtained using gst_element_factory_make.
rate = gst_element_factory_make ("videorate", "vrate");
g_object_set (rate, "max-rate", 50, NULL);
You can set/modify the values based on your requirements while the pipeline is playing.

Why does decreasing the framerate with videorate incur a significant CPU performance penalty?

My understanding of the videorate element is that framerate correction is performed by simply dropping frames and no "fancy algorithm" is used. I've profiled CPU usage for a gst-launch-1.0 pipeline and I've observed that as the framerate decreases below 1 FPS, CPU usage, counter-intuitively, increases dramatically.
Sample pipeline (you can observe the performance penalty by changing the framerate fraction):
gst-launch-1.0 filesrc location=test.mp4 ! qtdemux ! h264parse ! avdec_h264 ! videorate drop-only=true ! video/x-raw,framerate=1/10 ! autovideosink
I would expect that decreasing the framerate would reduce the amount of processing required throughout the rest of the pipeline. Any insight into this phenomenon would be appreciated.
System info: CentOS 7, GStreamer 1.4.5
EDIT: Seems this happens with the videotestsrc as well but only if you specify a high framerate on the source.
videotestsrc pattern=snow ! video/x-raw,width=1920,height=1080,framerate=25/1 ! videorate drop-only=true ! video/x-raw,framerate=1/10 ! autovideosink
Removing the framerate from the videotestsrc caps puts CPU usage at 1%, and usage increases as the videorate framerate increases. Meanwhile, setting the source to 25/1 FPS increases CPU usage to 50% and it lowers as the videorate framerate increases.
Tozar, I'm going to specifically address the pipeline you posted in your comment above.
If you're only going to be sending a frame once every ten seconds, there's probably no need to use h264. In ten seconds' time the frame will have changed completely and there will be no data similarities to exploit for bandwidth savings. The encoder will likely just decide a new keyframe is needed. You could go with jpegenc and rtpjpegpay as alternatives.
If you're re-encoding the content you'll definitely see a CPU spike every ten seconds. It's just not avoidable.
If you want to place CPU usage as low as possible on the machine doing the transformation, you could go to the work of parsing the incoming h264 data, pulling out the key frames (IDR frames), and then passing those along to the secondary destination. That would be assuming the original transmitter sent keyframes though (no intra refresh). It would not be easy.
You may want to form a more general question about what you're trying to do. What is the role of the machine doing the transformation? Does it have to use the data at all itself? What type of machine is receiving the frames every ten seconds and what is its role?
videorate is tricky and you need to consider it in conjunction with every other element in the pipeline. You also need to be aware of how much CPU time is actually available to cut off. For example, if you're decoding a 60fps file and displaying it at 1fps, you'll still be eating a lot of CPU. You can output to fakesink with sync set to true to see how much CPU you could actually save.
I recommend adding a bit of debug info to better understand videorate's behavior.
export GST_DEBUG=2,videorate:7
Then you can grep for "pushing buffer" for when it pushes:
gst-launch-1.0 [PIPELINE] 2>&1 | grep "pushing buffer"
..and for storing buffer when it receives data:
gst-launch-1.0 [PIPELINE] 2>&1 | grep "storing buffer"
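Putting it together, a hypothetical run like this counts how many buffers videorate actually pushed downstream (videotestsrc standing in for the real source):

```shell
# Count videorate's "pushing buffer" debug lines for a 2-second test stream
# being dropped from 30 fps to 1 frame per 10 seconds.
GST_DEBUG=2,videorate:7 gst-launch-1.0 videotestsrc num-buffers=60 \
  ! video/x-raw,framerate=30/1 ! videorate drop-only=true \
  ! video/x-raw,framerate=1/10 ! fakesink 2>&1 | grep -c "pushing buffer"
```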
In the case of decoding from a filesrc, you're going to see bursts of CPU activity: the decoder will run through, say, 60 frames, realize the pipeline is full, pause, wait until a need-buffers event comes in, then burst to 100% CPU to fill the pipeline again.
There are other factors too. Like you may need to be careful that you have queue elements between certain bottlenecks, with the correct max-size attributes set. Or your sink or source elements could be behaving in unexpected ways.
To get the best possible answer for your question, I'd suggest posting the exact pipeline you intend to use, with and without the videorate. If you have something like "autovideosink" change that to the element it actually resolves to on your system.
Here are a few pipelines I tested with:
gst-launch-1.0 videotestsrc pattern=snow ! video/x-raw,width=320,height=180,framerate=60/1 ! videorate ! videoscale method=lanczos ! video/x-raw,width=1920,height=1080,framerate=60/1 ! ximagesink
(30% CPU in htop)
gst-launch-1.0 videotestsrc pattern=snow ! video/x-raw,width=320,height=180,framerate=60/1 ! videorate ! videoscale method=lanczos ! video/x-raw,width=1920,height=1080,framerate=1/10 ! ximagesink
(0% CPU with 10% spikes in htop)