Setting resolution of video in gstreamer pipeline

I am trying to play a YUV file (IP_Traffic_12820x720_32QP.yuv) with GStreamer. I can only view the file in a YUV file player by setting the width and height to 1280 and 720 respectively. How can I set this resolution in a GStreamer pipeline so I can view the video?
Please help.

You can set the resolution of your raw video using a capsfilter. Try something like this from the command line:
gst-launch-0.10 filesrc location=input.yuv ! video/x-raw-yuv,width=1280,height=720,framerate=30/1 ! ffmpegcolorspace ! autovideosink
From code, you need to create a capsfilter element and then set its caps property. (see: http://gstreamer.freedesktop.org/data/doc/gstreamer/head/gstreamer-plugins/html/gstreamer-plugins-capsfilter.html)
Note: for gstreamer-1.0 you would use video/x-raw instead of video/x-raw-yuv, and videoconvert instead of ffmpegcolorspace.
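A minimal sketch of the code path, assuming GStreamer 1.0, an I420 file, and no error handling; the point is creating the capsfilter and setting its caps property, everything else mirrors the pipeline above:
#include <gst/gst.h>

int main (int argc, char *argv[])
{
  gst_init (&argc, &argv);

  GstElement *pipeline = gst_pipeline_new ("yuv-player");
  GstElement *src     = gst_element_factory_make ("filesrc", NULL);
  GstElement *filter  = gst_element_factory_make ("capsfilter", NULL);
  GstElement *convert = gst_element_factory_make ("videoconvert", NULL);
  GstElement *sink    = gst_element_factory_make ("autovideosink", NULL);

  g_object_set (src, "location", "input.yuv", NULL);

  /* Raw video has no headers, so the caps must describe it completely.
   * I420 is an assumption about the file's pixel format. */
  GstCaps *caps = gst_caps_new_simple ("video/x-raw",
      "format", G_TYPE_STRING, "I420",
      "width", G_TYPE_INT, 1280,
      "height", G_TYPE_INT, 720,
      "framerate", GST_TYPE_FRACTION, 30, 1,
      NULL);
  g_object_set (filter, "caps", caps, NULL);
  gst_caps_unref (caps);

  gst_bin_add_many (GST_BIN (pipeline), src, filter, convert, sink, NULL);
  gst_element_link_many (src, filter, convert, sink, NULL);

  gst_element_set_state (pipeline, GST_STATE_PLAYING);
  /* ... run a GMainLoop or wait on the bus here ... */
  gst_element_set_state (pipeline, GST_STATE_NULL);
  gst_object_unref (pipeline);
  return 0;
}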

Related

Gstreamer screenshot from RTSP stream is always gray

I'm trying to create a screenshot (i.e. grab one frame) from an RTSP camera stream using a GStreamer pipeline.
The pipeline used looks like this:
gst-launch-1.0 rtspsrc location=$CAM_URL is_live=true ! decodebin ! videoconvert ! jpegenc snapshot=true ! filesink location=/tmp/frame.jpg
The problem is that the resulting image is always gray, with random artifacts. It looks like it grabs the very first frame and doesn't wait for a key frame.
Is there any way to modify the pipeline so that it grabs the first valid frame of video, or at least waits long enough to be sure there has been at least one key frame?
I'm unsure why, but after some trial and error it now works with decodebin3 instead of decodebin. The documentation is still a bit discouraging, though, stating that decodebin3 "is still experimental API and a technology preview. Its behaviour and exposed API is subject to change."
Full pipeline looks like this:
gst-launch-1.0 rtspsrc location=$CAM_URL is_live=true ! decodebin3 ! videoconvert ! jpegenc snapshot=true ! filesink location=/tmp/frame.jpg
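For reference, a minimal sketch of running the same snapshot pipeline from C with gst_parse_launch(), waiting on the bus for the EOS that jpegenc snapshot=true posts after the first encoded frame; the RTSP URL is a placeholder:
#include <gst/gst.h>

int main (int argc, char *argv[])
{
  gst_init (&argc, &argv);

  GError *error = NULL;
  GstElement *pipeline = gst_parse_launch (
      "rtspsrc location=rtsp://CAM_URL is-live=true ! decodebin3 ! "
      "videoconvert ! jpegenc snapshot=true ! "
      "filesink location=/tmp/frame.jpg",
      &error);
  if (!pipeline) {
    g_printerr ("Parse error: %s\n", error->message);
    return 1;
  }

  gst_element_set_state (pipeline, GST_STATE_PLAYING);

  /* Block until the single frame has been written (EOS) or an error occurs. */
  GstBus *bus = gst_element_get_bus (pipeline);
  GstMessage *msg = gst_bus_timed_pop_filtered (bus, GST_CLOCK_TIME_NONE,
      GST_MESSAGE_ERROR | GST_MESSAGE_EOS);
  if (msg)
    gst_message_unref (msg);

  gst_object_unref (bus);
  gst_element_set_state (pipeline, GST_STATE_NULL);
  gst_object_unref (pipeline);
  return 0;
}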

Gstreamer: Save image/jpeg using multifilesink every 5 seconds

I am trying to figure out how to save an image using multifilesink every N seconds (let's say 5). My gst-launch-1.0 pipeline is below:
gst-launch-1.0 videotestsrc ! 'video/x-raw, format=I420, width=400, height=400, framerate=1/5' ! jpegenc ! multifilesink location=/some/location/img_%06d.jpg
I thought the framerate option would control the capture rate, but it doesn't seem to affect anything. How can I throttle this pipeline so it only saves a JPEG every N seconds?
Edit: I figured out that this works with videotestsrc if you set "is-live=true", but I would like to do this with nvcamerasrc or nvarguscamerasrc.
When the videotestsrc is not running as a live source, it will pump out frames as fast as it can, updating timestamps based on the output framerate configured on the source pad.
Setting it to live mode ensures that it actually produces frames at the expected framerate.
This shouldn't be an issue with a true live source like a camera source.
However, something like this can force synchronization with videotestsrc:
gst-launch-1.0.exe videotestsrc ! 'video/x-raw, format=I420, width=400, height=400, framerate=1/5' ! identity sync=true ! timeoverlay ! jpegenc ! multifilesink location="/some/location/img_%06d.jpg"
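A minimal sketch of the same idea from C, building the throttled pipeline with gst_parse_launch() and forcing videotestsrc into live mode; the caps and output path are taken from the question, and with a real live source such as nvarguscamerasrc the is-live flag should not be needed:
#include <gst/gst.h>

int main (int argc, char *argv[])
{
  gst_init (&argc, &argv);

  GError *error = NULL;
  GstElement *pipeline = gst_parse_launch (
      "videotestsrc is-live=true ! "
      "video/x-raw,format=I420,width=400,height=400,framerate=1/5 ! "
      "jpegenc ! multifilesink location=/some/location/img_%06d.jpg",
      &error);
  if (!pipeline) {
    g_printerr ("Parse error: %s\n", error->message);
    return 1;
  }

  gst_element_set_state (pipeline, GST_STATE_PLAYING);

  /* Run until interrupted; one JPEG is written every 5 seconds (1/5 fps). */
  GMainLoop *loop = g_main_loop_new (NULL, FALSE);
  g_main_loop_run (loop);

  g_main_loop_unref (loop);
  gst_element_set_state (pipeline, GST_STATE_NULL);
  gst_object_unref (pipeline);
  return 0;
}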

How to use gstreamer to play gif file on windows

As the title says, I use this command to play a GIF on Windows, but it just shows the first frame and then closes.
gst-launch-1.0 filesrc location=demo.gif ! gdkpixbufdec ! videoconvert ! autovideosink
I want to play the whole GIF file. Is there some GStreamer element or parameter I forgot to set up?
Since GStreamer 1.14, you can use elements from the libav library to create a GIF pipeline.
This is a sample command:
gst-launch-1.0 filesrc location=demo.gif ! avdemux_gif ! avdec_gif ! autovideosink

Gstreamer videoconvert color conversion wrong?

I'm launching a gst-launch-1.0 pipeline that captures camera images with nvgstcamera. The images are encoded to VP9 video. The video is tee'd to a filesink, which saves it in a WebM container, and to a VP9 decoder, which pipes the images into an appsink.
Later, I want to extract frames from the saved video and run them through the application again. It is important that the frames are absolutely identical to the ones that were piped into the appsink during video capture.
Unfortunately, the decoded frames look slightly different, depending on how you extract them.
A minimal working example:
Recording:
$ gst-launch-1.0 nvcamerasrc ! "video/x-raw(memory:NVMM), format=NV12" ! omxvp9enc ! tee name=splitter \
splitter. ! queue ! webmmux ! filesink location="record.webm" \
splitter. ! queue ! omxvp9dec ! nvvidconv ! "video/x-raw,format=RGBA" ! pngenc ! multifilesink location="direct_%d.png"
Replaying with nvvidconv element:
$ gst-launch-1.0 filesrc location=record.webm ! matroskademux ! omxvp9dec \
! nvvidconv ! pngenc ! multifilesink location="extracted_nvvidconv_%d.png"
Replaying with videoconvert element:
$ gst-launch-1.0 filesrc location=record.webm ! matroskademux ! omxvp9dec \
! videoconvert ! pngenc ! multifilesink location="extracted_videoconvert_%d.png"
Testing image differences:
$ compare -metric rmse direct_25.png extracted_nvvidconv_25.png null
0
$ compare -metric rmse direct_25.png extracted_videoconvert_25.png null
688.634 (0.0105079)
(The comparison images of the nvvidconv and videoconvert output are not reproduced here.)
My guess is that this has to do with the I420 to RGB conversion. So videoconvert seems to use a different color conversion than nvvidconv.
Launching the pipeline with gst-launch -v shows that the element capabilities are basically the same for both replay pipelines; the only difference is that videoconvert uses RGB by default, while nvvidconv uses RGBA. However, adding the caps string "video/x-raw,format=RGBA" after videoconvert makes no difference in the color conversion.
Note that this is on an Nvidia Jetson TX2 and I would like to use hardware accelerated gstreamer plugins during recording (omxvp9enc, nvvidconv), but not during replay on another machine.
How can I extract images from the video that are identical to the images running through the pipeline during recording, but without the use of Nvidia's Jetson-specific plugins?
Check for colorimetry information - https://developer.gnome.org/gst-plugins-libs/stable/gst-plugins-base-libs-gstvideo.html#GstVideoColorimetry
videoconvert, for example, takes this into account when converting images, depending on the caps found at its input and output.
You probably have to check what the Tegra is doing here. Most likely the signal is interpreted as full range in one case and TV (limited) range in the other, or the matrices differ between BT.601 and BT.709.
Depending on the precision, there may still be some loss during the conversion. For video codec metrics it may make sense to stay in the YUV color space and use RGB only for display if you must.
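If you want to see what colorimetry was actually negotiated, here is a minimal sketch in C that reads it from a pad's current caps; which element and pad you probe (e.g. videoconvert's source pad after preroll) is up to you, this is illustrative rather than part of the original answer:
#include <gst/gst.h>
#include <gst/video/video.h>

static void print_colorimetry (GstElement *element, const gchar *pad_name)
{
  GstPad *pad = gst_element_get_static_pad (element, pad_name);
  GstCaps *caps = gst_pad_get_current_caps (pad);

  GstVideoInfo info;
  if (caps && gst_video_info_from_caps (&info, caps)) {
    /* e.g. "bt709" or "bt601"; range/matrix are the enum values behind it */
    gchar *colorimetry = gst_video_colorimetry_to_string (&info.colorimetry);
    g_print ("%s:%s colorimetry: %s (range %d, matrix %d)\n",
        GST_ELEMENT_NAME (element), pad_name, colorimetry,
        info.colorimetry.range, info.colorimetry.matrix);
    g_free (colorimetry);
  }

  if (caps)
    gst_caps_unref (caps);
  gst_object_unref (pad);
}
If the ranges or matrices turn out to differ, you can also try pinning the colorimetry explicitly in the caps behind videoconvert, e.g. "video/x-raw,format=RGBA,colorimetry=bt709".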

gstreamer: display various image on top of video

I wrote a video player based on GStreamer. Now I need to display status images on top of the playing video when some event occurs. I tried the following pipeline for testing purposes:
gst-launch-1.0 videotestsrc ! videomixer name=mix ! videoconvert ! autovideosink filesrc location=pic.jpg ! jpegdec ! videoconvert ! imagefreeze ! mix.
to display the image (implemented in C). To hide the image I set the pipeline to GST_STATE_READY, unlink and remove the filesrc, jpegdec, videoconvert, and imagefreeze elements, and set the pipeline back to the playing state, but that doesn't work (the video is not playing anymore).
Could someone suggest the right way of showing and hiding images on top of a playing video?