Change framerate in GStreamer pipeline twice

I have a problem building a pipeline in GStreamer.
My pipeline looks like this:
gst-launch-1.0 videotestsrc is-live=true ! videorate ! video/x-raw,framerate=200/1 ! videorate max-rate=50 ! videoconvert ! x264enc bitrate=500000 byte-stream=true ! h264parse ! rtph264pay mtu=1400 ! udpsink host=127.0.0.1 port=5000 sync=false async=true
At this point, I am optimizing the pipeline for my application. Instead of videotestsrc, the pipeline will use appsrc, which gets its frames from the application: every time appsrc asks for a frame, the application returns one. The camera runs at about 50 FPS.
Let me illustrate with a picture:
The gray line represents time. Let's say the camera delivers a frame every 20 ms (50 FPS, red dots) and appsrc also asks every 20 ms, but always 1 ms before the camera produces a new frame (blue dots). This creates a delay of 19 ms, which I am trying to get as low as possible.
My idea is to use videorate ! video/x-raw,framerate=200/1 so that the source asks for a new frame every 5 ms. The blue dots would then come 4 times faster than the camera produces frames, which means every 4 consecutive frames would be identical. After getting those "newest" frames, I want to limit the framerate back to 50 FPS (before encoding) using videorate max-rate=50.
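For context, the appsrc variant I have in mind looks roughly like this (a simplified sketch; the caps, the resolution and the need-data callback body are assumptions about my application, not the exact code):

#include <gst/gst.h>
#include <gst/app/gstappsrc.h>

// Called whenever appsrc wants another frame; the application would hand
// over its newest camera frame here (wrap_latest_frame() is hypothetical).
static void need_data(GstElement *src, guint /*length*/, gpointer /*user_data*/) {
    // GstBuffer *buf = wrap_latest_frame();
    // gst_app_src_push_buffer(GST_APP_SRC(src), buf);
}

int main(int argc, char **argv) {
    gst_init(&argc, &argv);

    GstElement *pipeline = gst_parse_launch(
        "appsrc name=src is-live=true format=time ! "
        "videorate ! video/x-raw,framerate=200/1 ! "
        "videorate max-rate=50 ! videoconvert ! "
        "x264enc bitrate=500000 byte-stream=true ! h264parse ! "
        "rtph264pay mtu=1400 ! udpsink host=127.0.0.1 port=5000 sync=false async=true",
        NULL);

    GstElement *src = gst_bin_get_by_name(GST_BIN(pipeline), "src");
    GstCaps *caps = gst_caps_from_string(
        "video/x-raw,format=I420,width=1280,height=720,framerate=200/1");  // assumed caps
    g_object_set(src, "caps", caps, NULL);
    gst_caps_unref(caps);
    g_signal_connect(src, "need-data", G_CALLBACK(need_data), NULL);

    gst_element_set_state(pipeline, GST_STATE_PLAYING);
    g_main_loop_run(g_main_loop_new(NULL, FALSE));
    return 0;
}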
The problem is that my pipeline doesn't work in the application, and not even as a gst-launch-1.0 command in the terminal.
How can I control framerate twice in one pipeline? Is there any other solution?

Use g_object_set to set/modify properties of your element. The element handle can be obtained using gst_element_factory_make.
GstElement *rate = gst_element_factory_make("videorate", "vrate");
g_object_set(rate, "max-rate", 50, NULL);
You can set/modify the values based on your requirements when the pipeline is playing.
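For example, a minimal sketch (assuming the pipeline is built with gst_parse_launch and the second videorate is given the name vrate, which is not part of the original pipeline):

#include <gst/gst.h>

int main(int argc, char **argv) {
    gst_init(&argc, &argv);

    // Test pipeline with the second videorate named so it can be looked up later.
    GstElement *pipeline = gst_parse_launch(
        "videotestsrc is-live=true ! videorate ! video/x-raw,framerate=200/1 ! "
        "videorate name=vrate max-rate=50 ! videoconvert ! autovideosink",
        NULL);
    gst_element_set_state(pipeline, GST_STATE_PLAYING);

    // Change the property while the pipeline is PLAYING.
    GstElement *vrate = gst_bin_get_by_name(GST_BIN(pipeline), "vrate");
    g_object_set(vrate, "max-rate", 25, NULL);   // e.g. lower the output cap to 25 fps
    gst_object_unref(vrate);

    g_main_loop_run(g_main_loop_new(NULL, FALSE));
    return 0;
}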

Related

Streaming live video after OpenCV image processing

I am trying to stream a live video feed from a camera connected to a Jetson NX to a computer on the same network. The network works as wireless Ethernet, meaning the Jetson sees it as a wired connection, but in reality it's wireless and limited by bitrate.
On the Jetson side, I am using OpenCV VideoWriter to send frames over the network using this pipeline:
cv::VideoWriter video_write(
    "appsrc ! video/x-raw,format=BGR ! queue ! videoconvert ! video/x-raw,format=BGRx ! nvvidconv "
    "! video/x-raw(memory:NVMM),format=NV12,width=640,height=360,framerate=30/1 "
    "! nvv4l2h264enc insert-sps-pps=1 insert-vui=1 idrinterval=30 bitrate=1800000 EnableTwopassCBR=1 "
    "! h264parse ! rtph264pay ! udpsink host=169.254.84.2 port=5004 auto-multicast=0",
    cv::CAP_GSTREAMER, 0, 30, cv::Size(640, 360));
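The sending loop that feeds this writer is essentially the following (simplified; the camera source and the processing step are placeholders, not my exact code):

cv::VideoCapture cam(0, cv::CAP_V4L2);            // hypothetical camera source
cv::Mat frame;
while (cam.read(frame)) {
    // ... OpenCV image processing happens here ...
    cv::resize(frame, frame, cv::Size(640, 360)); // match the appsrc caps
    video_write.write(frame);                     // push the BGR frame into the GStreamer pipeline
}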
On the receiving computer, my video capture is:
cv::VideoCapture video(
    "udpsrc port=5004 auto_multicast=0 "
    "! application/x-rtp,media=video,encoding-name=H264 ! rtpjitterbuffer latency=0 "
    "! rtph264depay ! decodebin ! videoconvert ! video/x-raw,format=BGR ! appsink drop=1",
    cv::CAP_GSTREAMER);
The problem is that if the camera jitters a lot or moves quickly in any direction, the video stream either freezes or completely pixelates. I was hoping to get suggestions for either a better encoder (I'm not limited to nvv4l2h264enc or H264 in general), a solution for the pixelation and freezing, or maybe even a better way to stream the video other than VideoWriter.
I am trying to stream 360p video at 30 fps; my bitrate is limited to either 6 Mbps or 2.5 Mbps depending on the distance I want to limit myself to. It does not seem to be a network problem, simply because if I change parameters like the codec (e.g. MJPG instead of the GStreamer pipeline), the behaviour of the video feed changes; in my case it lowers the amount of freezing but makes the pixelation worse.

GStreamer pipeline stops playing after fast and shaky camera movement

I'm working on a video streaming wearable device. During the tests, it came up that the pipeline clock and stream stop during fast walking or running. It's bizarre behaviour, because the debug messages show no errors about a broken pipeline, apart from lost frames. The stream is frozen and only restarting helps. Can you guess what causes the problem?
The pipelines I use:
streaming device:
gst-launch-1.0 -vem --gst-debug=3 v4l2src device=/dev/video0 ! video/x-raw,width=640,height=480,framerate=\(fraction\)30/1 ! v4l2h264enc extra-controls=s,video_bitrate=250000 capture-io-mode=4 output-io-mode=4 ! "video/x-h264,level=(string)4" ! rtph264pay config-interval=1 ! multiudpsink clients="127.0.0.1:5008,10.123.0.2:5008"
client:
udpsrc port=5008 do-timestamp=true ! application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264, payload=(int)96 ! rtpjitterbuffer latency=100 drop-on-latency=true drop-messages-interval=100000000 ! queue max-size-buffers=20000 ! rtph264depay ! h264parse ! avdec_h264 ! videoconvert ! glupload ! qmlglsink name=qmlglsink sync=false
The hardware I use is a PS3 Eye cam and an LTE modem to transmit the video, with a pretty low uplink of 1-2 Mbit/s, everything running on a Raspberry Pi 3B+ 1GB.
For more debug info, there are also pictures of the log file after the last registered dropped frame: every following "cycle" sends a new query, loops over the GstElements from the sink to the source (which is my camera), and ends with the max query duration (the highlighted query to v4l2src).
Do you know how to overcome this problem?
The problem has been resolved. The issue was not variable encoder bitrate.
A more detailed inspection and the pipeline that works for me can be found on this GStreamer issue page.

Gstreamer screenshot from RTSP stream is always gray

I'm trying to create a screenshot (i.e. grab one frame) from an RTSP camera stream using a GStreamer pipeline.
The pipeline used looks like this:
gst-launch-1.0 rtspsrc location=$CAM_URL is_live=true ! decodebin ! videoconvert ! jpegenc snapshot=true ! filesink location=/tmp/frame.jpg
The problem is that the resulting image is always gray, with random artifacts. It looks like it's grabbing the very first frame without waiting for a key frame.
Is there any way I can modify the pipeline to grab the first valid frame of video, or just wait long enough to be sure that at least one key frame has already arrived?
I'm unsure why, but after some trial and error it is now working with decodebin3 instead of decodebin. The documentation is still a bit discouraging though, stating that decodebin3 is "still experimental API and a technology preview. Its behaviour and exposed API is subject to change."
Full pipeline looks like this:
gst-launch-1.0 rtspsrc location=$CAM_URL is_live=true ! decodebin3 ! videoconvert ! jpegenc snapshot=true ! filesink location=/tmp/frame.jpg

GStreamer AppSrc AppSink latency

I would like to use GStreamer to decode H264 frames. My pipeline looks like this:
appsrc name=source max-buffers=1 ! video/x-h264, framerate=10/1, stream-format=byte-stream ! h264parse name=parser ! video/x-h264, profile=baseline, width=320, height=240 ! nvh264dec name=decoder ! appsink name=sink max-buffers=1 drop=TRUE
AppSrc is configured in push mode. I have an encoding thread that pushes H264 frames to appsrc. At the end of the pipeline, I receive the decoded frames through AppSink. This works well, except that there is latency between the time a frame is pushed into the pipeline and the time it reaches AppSink. More precisely, I noticed that:
the sink always receives the first frame after the 6th frame has been pushed to the source;
the sink and all upstream elements receive a "latency" event after the 6th frame has been pushed to the source;
the pipeline state changes from READY to PAUSED just after the latency event;
the pipeline state changes from PAUSED to PLAYING just after the source receives the latency event, i.e. just before the sink receives the first frame.
So the timeline looks like this (with an encoding thread at 10 fps):
Time (ms)   AppSrc    AppSink
0           Frame 1   -
100         Frame 2   -
200         Frame 3   -
300         Frame 4   -
400         Frame 5   -
500         Frame 6   Frame 1
600         Frame 7   Frame 2
700         Frame 8   Frame 3
800         Frame 9   Frame 4
etc.
It seems the delay is directly linked to the size of the pipeline. I removed the decoder in order to test this hypothesis (of course it doesn't make sense from a functional point of view):
appsrc name=mysource max-buffers=1 ! video/x-h264, framerate=10/1, stream-format=byte-stream ! h264parse name=parser ! appsink name=sink max-buffers=1 drop=TRUE sync=FALSE
With this shortened pipeline, I receive the first frame in the sink after the third frame has been pushed.
I would like to receive the first frame in the sink as soon as it is pushed into the pipeline. Actually, the input framerate is not that important for me; the latency is the key point. I'm even ready to consider a solution where the encoding thread is blocked until the frame reaches the sink before pushing a new one.
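For reference, the push and pull sides of my code look roughly like this (simplified; buffer handling and timestamping are approximations, not the exact implementation):

#include <gst/app/gstappsrc.h>
#include <gst/app/gstappsink.h>

// Encoding thread: push one H264 access unit (byte-stream) into the appsrc.
void push_frame(GstAppSrc *src, const guint8 *data, gsize size) {
    GstBuffer *buf = gst_buffer_new_allocate(NULL, size, NULL);
    gst_buffer_fill(buf, 0, data, size);
    gst_app_src_push_buffer(src, buf);            // takes ownership of buf
}

// Consumer thread: block until a decoded frame reaches the appsink.
GstSample *pull_frame(GstAppSink *sink) {
    return gst_app_sink_pull_sample(sink);        // returns NULL on EOS
}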
Any help would be greatly appreciated.
Regards,
PY

Gstreamer: Save image/jpeg using multifilesink every 5 seconds

I am trying to figure out how to save an image using multifilesink every N seconds (let's say 5). My gst-launch-1.0 pipeline is below: gst-launch-1.0 videotestsrc ! 'video/x-raw, format=I420, width=400, height=400, framerate=1/5' ! jpegenc ! multifilesink location=/some/location/img_%06d.jpg
I was thinking the framerate option could control the capture speed, but it does not seem to affect anything. How can I slow this pipeline down so that it only saves a JPEG every N seconds?
Edit: I figured out that this works with videotestsrc if you set is-live=true, but I would like to do this with nvcamerasrc or nvarguscamerasrc.
When the videotestsrc is not running as a live source, it will pump out frames as fast as it can, updating timestamps based on the output framerate configured on the source pad.
Setting it to live-mode will ensure that it actually matches the expected framerate.
This shouldn't be an issue with a true live source like a camera source.
However, something like this can force synchronization with videotestsrc:
gst-launch-1.0.exe videotestsrc ! video/x-raw, format=I420, width=400, height=400, framerate=1/5 ! identity sync=true ! timeoverlay ! jpegenc ! multifilesink location="/some/location/img_%06d.jpg"
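For a live camera such as nvarguscamerasrc, an untested variant along the same lines (element availability and caps depend on the Jetson setup, so treat this as a sketch) would drop the frame rate with videorate instead: gst-launch-1.0 nvarguscamerasrc ! 'video/x-raw(memory:NVMM),width=1920,height=1080,framerate=30/1' ! nvvidconv ! video/x-raw ! videorate ! video/x-raw,framerate=1/5 ! jpegenc ! multifilesink location=/some/location/img_%06d.jpg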