GStreamer: Save image/jpeg using multifilesink every 5 seconds - C++

I am trying to figure out how to save an image using multifilesink every N seconds (let's say 5). My gst-launch-1.0 pipeline is below:
gst-launch-1.0 videotestsrc ! 'video/x-raw, format=I420, width=400, height=400, framerate=1/5' ! jpegenc ! multifilesink location=/some/location/img_%06d.jpg
I was thinking the framerate option could control the capture speed, but it does not seem to affect anything. How can I get this pipeline to save a JPEG only every N seconds?
Edit: I figured out that this works with videotestsrc if you set "is-live=true", but I would like to do this with nvcamerasrc or nvarguscamerasrc.

When the videotestsrc is not running as a live source, it will pump out frames as fast as it can, updating timestamps based on the output framerate configured on the source pad.
Setting it to live-mode will ensure that it actually matches the expected framerate.
This shouldn't be an issue with a true live source like a camera source.
However, something like this can force synchronization with videotestsrc:
gst-launch-1.0.exe videotestsrc ! 'video/x-raw, format=I420, width=400, height=400, framerate=1/5' ! identity sync=true ! timeoverlay ! jpegenc ! multifilesink location="/some/location/img_%06d.jpg"
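For a live camera such as nvarguscamerasrc, one option is to let the source run at its native rate and have videorate drop frames down to one every five seconds. A rough, untested sketch (the camera caps, resolution and framerate here are assumptions and will need adjusting):
gst-launch-1.0 nvarguscamerasrc ! 'video/x-raw(memory:NVMM), format=NV12, width=1920, height=1080, framerate=30/1' ! \
nvvidconv ! 'video/x-raw, format=I420' ! videorate drop-only=true ! 'video/x-raw, framerate=1/5' ! \
jpegenc ! multifilesink location=/some/location/img_%06d.jpg
Since the camera is a true live source, the identity sync=true trick should not be needed in that case.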

Related

GStreamer pipeline stops playing after fast and shaky camera movement

I'm working on a video streaming wearable device. During testing, it came up that the pipeline clock and stream stop while fast walking or running. It's bizarre behaviour, because the debug messages show no errors about a broken pipeline, apart from lost frames. The pipeline is frozen and only restarting helps. Can you guess what causes the problem?
The pipelines I use:
streaming device:
gst-launch-1.0 -vem --gst-debug=3 v4l2src device=/dev/video0 ! video/x-raw,width=640,height=480,framerate=\(fraction\)30/1 ! v4l2h264enc extra-controls=s,video_bitrate=250000 capture-io-mode=4 output-io-mode=4 ! "video/x-h264,level=(string)4" ! rtph264pay config-interval=1 ! multiudpsink clients="127.0.0.1:5008,10.123.0.2:5008"
client:
udpsrc port=5008 do-timestamp=true ! application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264, payload=(int)96 ! rtpjitterbuffer latency=100 drop-on-latency=true drop-messages-interval=100000000 ! queue max-size-buffers=20000 ! rtph264depay ! h264parse ! avdec_h264 ! videoconvert ! glupload ! qmlglsink name=qmlglsink sync=false
The hardware I use is a PS3 Eye cam and an LTE modem to transmit the video, with a pretty low uplink of 1-2 Mbit/s, and everything runs on a Raspberry Pi 3B+ (1 GB).
For more debug info, there are also pictures of the log file: after the last registered dropped frame, every next "cycle" sends a new query, loops over the GstElements from the sink to the source (which is my camera), and ends with the maximum query duration (the highlighted query to v4l2src).
Do you know how to overcome this problem?
The problem has been resolved. The issue was not the variable encoder bitrate.
A more detailed inspection and a pipeline that works for me are in this GStreamer issue page.

How to record video (1080p 30fps) from the Raspberry Pi camera and stream the 'recording in progress' file simultaneously?

The objective I am trying to achieve is to stream 1080p video from the Raspberry Pi camera and record the video simultaneously.
I tried recording with the HTTP stream as the source, but it didn't work at 30 fps; a lot of frames were missing and I only got about 8 fps.
As a second approach, I am trying to record the file directly from the camera and then stream the "recording in progress"/buffer file. For this I am trying to use GStreamer. Please suggest whether this is a good option or whether I should try something else.
For Recording using GStreamer I used
gst-launch-1.0 -v v4l2src device=/dev/video0 ! capsfilter caps="video/x-raw,width=1920,height=1080,framerate=30/1" ! \
videoflip method=clockwise ! videoflip method=clockwise ! videoconvert ! videorate ! x264enc ! avimux ! filesink location=test_video.h264
Result: the recorded video shows 1080p and 30 fps, but frames are dropping heavily.
For streaming the video buffer I have used UDP in GStreamer:
gst-launch-1.0 -v v4l2src device=/dev/video0 ! capsfilter caps="video/x-raw,width=640,height=480,framerate=30/1" ! x264enc ! queue ! rtph264pay ! udpsink host=192.168.5.1 port=8080
Result: no specific errors on the terminal, but I can't get the stream in VLC.
Please suggest the best method here.
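One option worth trying (an untested sketch, not a verified Raspberry Pi configuration) is to do both jobs in a single pipeline with tee, so the camera is captured and encoded once and the encoded stream is both written to a file and payloaded for UDP:
gst-launch-1.0 -e v4l2src device=/dev/video0 ! video/x-raw,width=1920,height=1080,framerate=30/1 ! \
videoconvert ! x264enc tune=zerolatency speed-preset=ultrafast bitrate=2000 ! tee name=t \
t. ! queue ! h264parse ! matroskamux ! filesink location=test_video.mkv \
t. ! queue ! rtph264pay config-interval=1 ! udpsink host=192.168.5.1 port=8080
On a Pi, software x264enc at 1080p30 will likely struggle, so swapping it for a hardware encoder (e.g. v4l2h264enc, as in the wearable-device pipeline above) is probably necessary; and to play a raw RTP/UDP stream in VLC you generally need to open an SDP file describing the stream rather than the UDP port itself.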

Resample and depayload audio RTP using GStreamer

I am developing an application where a wave file from a given location feeds one end of a pipeline and a udpsink sits at the other end.
gst-launch-1.0 filesrc location=/path/to/wave/file/Tornado.wav ! wavparse ! audioconvert ! audio/x-raw,channels=1,depth=16,width=16,rate=44100 ! rtpL16pay ! udpsink host=xxx.xxx.xxx.xxx port=5000
The above wave file has a sampling rate of 44100 Hz and a single channel (mono).
On the same PC, I am using a C++ application to catch these packets and depayload them to a headerless audio file (say Tornado.raw).
The pipeline I am creating for this is basically
gst-launch-1.0 udpsrc port=5000 ! "application/x-rtp,media=(string)audio, clock-rate=(int)44100, width=16, height=16, encoding-name=(string)L16, encoding-params=(string)1, channels=(int)1, channel-positions=(int)1, payload=(int)96" ! rtpL16depay ! filesink location=Tornado.raw
Now this works fine. I get the headerless data, and when I play it in Audacity it plays great!
I am trying to resample this audio, while it is in the pipeline, from 44100 Hz to 8000 Hz.
Simply changing clock-rate=(int)44100 to clock-rate=(int)8000 does not help (and is logically absurd anyway).
I am looking for how to get the headerless file at the pipeline output with 8000 Hz sampling.
Also, the data I am getting now is big-endian, but I want little-endian output. How do I set that in the pipeline?
You might relate this to one of my earlier questions.
First, you have some weird caps in your pipeline: width and height are for video here. They will probably just be ignored, but still. I'm not sure about some of the others there either.
For the actual question: just use GStreamer's audioresample and audioconvert elements to convert to your desired format.
E.g.
[..] ! rtpL16depay ! audioresample ! audioconvert ! \
audio/x-raw, rate=8000, format=S16LE ! filesink location=Tornado.raw
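Plugged into the full receiving command, that could look something like this (untested; the RTP caps follow the question, minus the width/height fields flagged as bogus above, and audioconvert is placed before audioresample so the big-endian samples from rtpL16depay are converted before resampling):
gst-launch-1.0 udpsrc port=5000 ! "application/x-rtp, media=(string)audio, clock-rate=(int)44100, encoding-name=(string)L16, encoding-params=(string)1, channels=(int)1, payload=(int)96" ! \
rtpL16depay ! audioconvert ! audioresample ! \
audio/x-raw, rate=8000, format=S16LE ! filesink location=Tornado.raw
The capsfilter at the end forces both the 8000 Hz rate and the little-endian S16LE format, which covers the endianness part of the question as well.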

GStreamer images to video

I have 4 PNG files with different resolutions (1920x1080, 1280x720, 1920x1200) and I am trying to convert them into a video slideshow.
This pipeline works:
gst-launch-1.0.exe -e multifilesrc location="multi_img_%d.png" index=0 caps="image/png,framerate=(fraction)1/2,width=1920,height=1080" ! pngdec ! videoconvert ! videoscale ! video/x-raw,width=1920,height=1080 ! autovideosink
but when I try to force the framerate, it only reads the first image.
I tried :
gst-launch-1.0.exe -e multifilesrc location="multi_img_%d.png" index=0 caps="image/png,framerate=(fraction)1/2,width=1920,height=1080" ! pngdec ! videoconvert ! videoscale ! video/x-raw,width=1920,height=1080 ! videorate ! video/x-raw,width=1920,height=1080,framerate=25/1 ! autovideosink
and
gst-launch-1.0.exe -e multifilesrc location="multi_img_%d.png" index=0 caps="image/png,framerate=(fraction)1/2,width=854,height=480" ! pngdec ! videoconvert ! videoscale ! videorate ! video/x-raw,width=1920,height=1080,framerate=25/1 ! autovideosink
I don't understand why adding a framerate would cause my pipeline to ignore some pictures.
(I am on Windows 10 with the brand new GStreamer 1.14.0.)
EDIT: I forgot to mention that when I manually resize my pictures so they all have the same resolution, all the above pipelines work!
I suspect it is a timing issue. You are running a real-time pipeline, and most likely the PNG decoding is not fast enough to deliver frames at 25/1 fps, so the videosink drops them as they arrive too late. Maybe adding max-lateness=-1 to the videosink prevents the dropping of frames in your case.
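For example (a sketch only; max-lateness is a GstBaseSink property, so it may need to be set on a concrete sink element, here the Windows d3dvideosink, rather than on the autovideosink bin):
gst-launch-1.0.exe -e multifilesrc location="multi_img_%d.png" index=0 caps="image/png,framerate=(fraction)1/2,width=1920,height=1080" ! \
pngdec ! videoconvert ! videoscale ! video/x-raw,width=1920,height=1080 ! \
videorate ! video/x-raw,width=1920,height=1080,framerate=25/1 ! \
d3dvideosink max-lateness=-1
Alternatively, sync=false on the sink disables clock synchronization altogether, which also avoids late-frame dropping (at the cost of not playing back in real time).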

GStreamer videoconvert color conversion wrong?

I'm launching a gst-launch-1.0 pipeline that captures camera images with nvcamerasrc. The images are encoded to VP9 video. The video is tee'd to a filesink that saves the video in a webm container and to a VP9 decoder that pipes the images into an appsink.
Later, I want to extract frames from the saved video and run them through the application again. It is important that the frames are absolutely identical to the ones that were piped into the appsink during video capture.
Unfortunately, the decoded frames look slightly different, depending on how you extract them.
A minimal working example:
Recording:
$ gst-launch-1.0 nvcamerasrc ! "video/x-raw(memory:NVMM), format=NV12" ! omxvp9enc ! tee name=splitter \
splitter. ! queue ! webmmux ! filesink location="record.webm" \
splitter. ! queue ! omxvp9dec ! nvvidconv ! "video/x-raw,format=RGBA" ! pngenc ! multifilesink location="direct_%d.png"
Replaying with nvvidconv element:
$ gst-launch-1.0 filesrc location=record.webm ! matroskademux ! omxvp9dec \
! nvvidconv ! pngenc ! multifilesink location="extracted_nvvidconv_%d.png"
Replaying with videoconvert element:
$ gst-launch-1.0 filesrc location=record.webm ! matroskademux ! omxvp9dec \
! videoconvert ! pngenc ! multifilesink location="extracted_videoconvert_%d.png"
Testing image differences:
$ compare -metric rmse direct_25.png extracted_nvvidconv_25.png null
0
$ compare -metric rmse direct_25.png extracted_videoconvert_25.png null
688.634 (0.0105079)
(Screenshots of the nvvidconv and videoconvert outputs are omitted here.)
My guess is that this has to do with the I420 to RGB conversion. So videoconvert seems to use a different color conversion than nvvidconv.
Launching the pipeline with gst-launch -v shows that the element capabilities are basically the same for both replay pipelines; the only difference is that videoconvert uses RGB by default, while nvvidconv uses RGBA. However, adding the caps string "video/x-raw,format=RGBA" after videoconvert makes no difference in the color conversion.
Note that this is on an Nvidia Jetson TX2 and I would like to use hardware accelerated gstreamer plugins during recording (omxvp9enc, nvvidconv), but not during replay on another machine.
How can I extract images from the video that are identical to the images running through the pipeline during recording, but without the use of Nvidia's Jetson-specific plugins?
Check for colorimetry information - https://developer.gnome.org/gst-plugins-libs/stable/gst-plugins-base-libs-gstvideo.html#GstVideoColorimetry
videoconvert, for example, takes these into account when converting images, depending on the caps found at its input and output.
You probably have to check what the Tegra is doing here. Most likely there is a difference in whether the signal is interpreted as full range or TV range, or the matrices differ between BT.601 and BT.709.
Depending on the precision, there may still be some loss during the conversion. For metrics on video codecs it may make sense to stay in the YUV color space and use RGB only for display, if you must.
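If the difference is indeed in how the colorimetry is interpreted, one thing to try (an unverified sketch; whether bt601 or bt709 is the right value depends on what the Tegra pipeline actually used) is to override the colorimetry field reported by the decoder with capssetter before videoconvert, using the software vp9dec instead of the Jetson-specific omxvp9dec:
gst-launch-1.0 filesrc location=record.webm ! matroskademux ! vp9dec ! \
capssetter caps="video/x-raw,colorimetry=bt601" ! videoconvert ! \
pngenc ! multifilesink location="extracted_videoconvert_%d.png"
capssetter only rewrites the caps, not the pixel data, so videoconvert then applies the matrix/range given in the overridden caps.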