How to use GStreamer interlace element?

I have a source of progressive video that I want to convert to interlaced fields. From one progressive frame, I want to generate two interlaced fields, each containing either the odd or the even lines.
In GStreamer terminology, each input buffer contains one progressive frame that I want to convert into two buffers, each containing one field.
I was expecting the first pipeline below to produce 20 files of size 640 * 240 * 2 and the second one to produce 10 files of size 640 * 480 * 2.
But they both produce 10 files of size 640 * 480 * 2, so the interlace element seems to do nothing.
Where am I going wrong here?
gst-launch-1.0 -v videotestsrc pattern=ball num-buffers=10 ! video/x-raw,format=YUY2,width=640,height=480 ! interlace ! multifilesink location=c:\x_%d.raw
gst-launch-1.0 -v videotestsrc pattern=ball num-buffers=10 ! video/x-raw,format=YUY2,width=640,height=480 ! multifilesink location=c:\x_%d.raw
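A quick way to see what interlace actually outputs here is to swap the sink for a fakesink and read the caps printed by -v; the caps on the interlace src pad show the negotiated width, height and interlace-mode. This is only a diagnostic sketch using the same elements as above plus fakesink:
gst-launch-1.0 -v videotestsrc pattern=ball num-buffers=10 ! video/x-raw,format=YUY2,width=640,height=480 ! interlace ! fakesink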

Related

Straightforward way to downscale too large video with GStreamer

I'm using GStreamer to process videos in my project. The input videos can come in various formats (both resolution and aspect ratio), for example 400x300, 1080p, 4K, 2000x1000, etc.
I would like to automatically downscale videos that are larger than 1080p. So a 4K video should be downscaled to 1080p, but a 400x300 video should be kept at its original size.
I've found the videoscale plugin, but it does not work the way I would like: it scales both up and down indiscriminately, and it does not scale proportionally when only the width or the height is provided.
Is there a straightforward way in GStreamer to automatically downscale videos to a target maximum size?
GStreamer's caps allow ranges. So I believe you are looking for something like this:
video/x-raw,width=[1,1920],height=[1,1080],pixel-aspect-ratio=1/1
This keeps the aspect ratio but scales down whenever needed to fit within 1920x1080.
E.g.
gst-launch-1.0 videotestsrc ! video/x-raw,width=4000,height=2000 ! videoscale ! video/x-raw,width=[1,1920],height=[1,1080],pixel-aspect-ratio=1/1 ! autovideosink
Will be scaled down to 1920x960.
And:
gst-launch-1.0 videotestsrc ! video/x-raw,width=400,height=200 ! videoscale ! video/x-raw,width=[1,1920],height=[1,1080],pixel-aspect-ratio=1/1 ! autovideosink
Will stay at 400x200.
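The same range caps should also work on a real file after a decoder; a rough sketch, where the file URI and the trailing videoconvert are just placeholders for illustration:
gst-launch-1.0 uridecodebin uri=file:///path/to/input.mp4 ! videoscale ! video/x-raw,width=[1,1920],height=[1,1080],pixel-aspect-ratio=1/1 ! videoconvert ! autovideosink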

Need to dump yuv frames from mp4 file

I dumped JPEG frames using the command below:
gst-launch-1.0 filesrc location=<FILE LOCATION> ! qtdemux name=demux demux.video_0 ! queue ! decodebin ! videoconvert ! jpegenc ! multifilesink location=frame%d.yuv
The JPEG images display fine.
But when I try to dump raw YUV frames from the mp4 file using the command below:
gst-launch-1.0 filesrc location=[FILE LOCATION] ! qtdemux name=demux demux.video_0 ! queue ! decodebin ! videoconvert ! multifilesink location=frame%d.yuv
I am not able to view the image, because the dumped frame carries no header information (width & height).
After converting this frame to PPM format using the command below:
yuvtoppm [WIDTH] [HEIGHT] frame1.yuv > frame1.ppm
I am able to see the frame, but it is not in YUV format.
So what is your expectation? Raw YUV files by definition come with no header; they are raw. Applications reading these files usually require you to supply the width, height and format, just as yuvtoppm does.
You may want to check out y4menc, which adds a header to these files that some applications can read.
Or you can store it as a video file: matroskamux and avimux should be able to handle uncompressed video.
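For example, a rough sketch of the y4menc route, with input.mp4 as a placeholder source file; since Y4M writes its header once at the start of the stream, the output goes to a single file with filesink instead of one file per frame:
gst-launch-1.0 filesrc location=input.mp4 ! qtdemux name=demux demux.video_0 ! queue ! decodebin ! videoconvert ! y4menc ! filesink location=frames.y4m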

Change framerate in GStreamer pipeline twice

I have a problem building a pipeline in GStreamer.
My pipeline looks like this:
gst-launch-1.0 videotestsrc is-live=true ! videorate ! video/x-raw,framerate=200/1 ! videorate max-rate=50 ! videoconvert ! x264enc bitrate=500000 byte-stream=true ! h264parse ! rtph264pay mtu=1400 ! udpsink host=127.0.0.1 port=5000 sync=false async=true
At this point I am optimizing the pipeline for my application. Instead of videotestsrc there will be an appsrc that pulls frames from the application: every time appsrc asks for a frame, the application returns one. The camera runs at about 50 FPS.
I'll illustrate with a picture:
The gray line is time. Say the camera sends a frame every 20 ms (50 FPS, red dots) and appsrc also asks every 20 ms, but always 1 ms before the camera produces a new frame (blue dots). This creates a delay of 19 ms, which I am trying to get as low as possible.
My idea is to use videorate ! video/x-raw,framerate=200/1 so the source is asked for a new frame every 5 ms; the blue dots then come 4 times faster than the camera produces frames, so 4 consecutive frames will be identical. After getting those "newest" frames, I want to limit the framerate back down to 50 FPS (without encoding the duplicates) using videorate max-rate=50.
The problem is that my pipeline doesn't work in the application, and not even as a gst-launch-1.0 command in the terminal.
How can I control the framerate twice in one pipeline? Is there another solution?
Use g_object_set (the C equivalent of set_property) to set or modify the properties of your element. The element handle is the pointer returned by gst_element_factory_make:
GstElement *rate = gst_element_factory_make("videorate", "vrate");
g_object_set(rate, "max-rate", 50, NULL);
You can set or modify the values as needed, even while the pipeline is playing.
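To see which properties videorate exposes (including the max-rate used above), and any state restrictions on changing them, gst-inspect-1.0 lists them:
gst-inspect-1.0 videorate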

Gstreamer videoconvert color conversion wrong?

I'm launching a gst-launch-1.0 pipeline that captures camera images with nvcamerasrc. The images are encoded to VP9 video. The video is tee'd to a filesink that saves it in a WebM container and to a VP9 decoder that pipes the images into an appsink.
Later, I want to extract frames from the saved video and run them through the application again. It is important that the frames are absolutely identical to the ones that were piped into the appsink during video capture.
Unfortunately, the decoded frames look slightly different, depending on how you extract them.
A minimal working example:
Recording:
$ gst-launch-1.0 nvcamerasrc ! "video/x-raw(memory:NVMM), format=NV12" ! omxvp9enc ! tee name=splitter \
splitter. ! queue ! webmmux ! filesink location="record.webm" \
splitter. ! queue ! omxvp9dec ! nvvidconv ! "video/x-raw,format=RGBA" ! pngenc ! multifilesink location="direct_%d.png"
Replaying with nvvidconv element:
$ gst-launch-1.0 filesrc location=record.webm ! matroskademux ! omxvp9dec \
! nvvidconv ! pngenc ! multifilesink location="extracted_nvvidconv_%d.png"
Replaying with videoconvert element:
$ gst-launch-1.0 filesrc location=record.webm ! matroskademux ! omxvp9dec \
! videoconvert ! pngenc ! multifilesink location="extracted_videoconvert_%d.png"
Testing image differences:
$ compare -metric rmse direct_25.png extracted_nvvidconv_25.png null
0
$ compare -metric rmse direct_25.png extracted_videoconvert_25.png null
688.634 (0.0105079)
[Screenshots: nvvidconv output vs. videoconvert output]
My guess is that this has to do with the I420 to RGB conversion. So videoconvert seems to use a different color conversion than nvvidconv.
Launching the pipelines with gst-launch -v shows that the element capabilities are basically the same for both replay pipelines; the only difference is that videoconvert uses RGB by default, while nvvidconv uses RGBA. However, adding the caps string "video/x-raw,format=RGBA" after videoconvert makes no difference to the color conversion.
Note that this is on an Nvidia Jetson TX2 and I would like to use hardware accelerated gstreamer plugins during recording (omxvp9enc, nvvidconv), but not during replay on another machine.
How can I extract images from the video that are identical to the images running through the pipeline during recording, but without the use of Nvidia's Jetson-specific plugins?
Check for colorimetry information - https://developer.gnome.org/gst-plugins-libs/stable/gst-plugins-base-libs-gstvideo.html#GstVideoColorimetry
videoconvert, for example, takes these into account when converting images, depending on the caps found at its input and output.
You probably have to check what the Tegra is doing here. Most likely the difference is whether the signal is interpreted as full range or TV range, or the matrices differ between BT.601 and BT.709.
Depending on precision there may still be some loss during the conversion. For video codec metrics it may make sense to stay in the YUV color space and use RGB only for display, if you must.
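A quick way to see what each converter assumes is to run the replay pipelines with -v and compare the colorimetry field in the caps printed around the converter; a sketch for the videoconvert path (the nvvidconv path can be checked the same way on the Jetson):
gst-launch-1.0 -v filesrc location=record.webm ! matroskademux ! omxvp9dec ! videoconvert ! video/x-raw,format=RGBA ! fakesink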

Setting resolution of video in gstreamer pipeline

I am trying to play a YUV file (IP_Traffic_12820x720_32QP.yuv) with GStreamer. I can only view the file in a YUV file player by setting the width and height to 1280 and 720 respectively. How can I set this resolution in a GStreamer pipeline to view the video?
Please help
You can set the resolution of your raw video using a capsfilter. Try something like this from the command line:
gst-launch-0.10 filesrc location=input.yuv ! video/x-raw-yuv,width=1280,height=720,framerate=30/1 ! ffmpegcolorspace ! autovideosink
From code, you need to create a capsfilter element and then set its caps property (see: http://gstreamer.freedesktop.org/data/doc/gstreamer/head/gstreamer-plugins/html/gstreamer-plugins-capsfilter.html).
Note: for gstreamer-1.0 you would use video/x-raw instead of video/x-raw-yuv, and videoconvert instead of ffmpegcolorspace.
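For example, a rough GStreamer 1.0 equivalent of the pipeline above, assuming the file is I420 and keeping input.yuv as the placeholder filename; instead of a bare capsfilter this uses rawvideoparse (from gst-plugins-bad) to cut the headerless file into correctly sized, timestamped frames:
gst-launch-1.0 filesrc location=input.yuv ! rawvideoparse width=1280 height=720 format=i420 framerate=30/1 ! videoconvert ! autovideosink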