I wrote a video player based on GStreamer. Now I need to display status images on top of the playing video when some event occurs. I tried the following pipeline for testing purposes:
gst-launch-1.0 videotestsrc ! videomixer name=mix ! videoconvert ! autovideosink filesrc location=pic.jpg ! jpegdec ! videoconvert ! imagefreeze ! mix.
to display the image (implemented in C). To hide the image I set the pipeline to GST_STATE_READY, unlink and remove the filesrc, jpegdec, videoconvert and imagefreeze elements, and set the pipeline back to the playing state, but that doesn't work (the video is not playing anymore).
Could someone suggest the right way of showing and hiding images on top of a playing video?
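For what it's worth, one commonly suggested alternative is to leave the image branch linked and toggle the mixer pad's alpha property instead of tearing elements down. A minimal sketch, assuming the image branch ended up on the videomixer request pad named sink_1 (that pad name is an assumption):

/* Hide/show the overlay by changing the videomixer pad's "alpha"
 * property instead of removing the image branch. */
GstElement *mix = gst_bin_get_by_name (GST_BIN (pipeline), "mix");
GstPad *image_pad = gst_element_get_static_pad (mix, "sink_1"); /* assumed pad name */

g_object_set (image_pad, "alpha", 0.0, NULL);  /* hide the image */
/* ... later, when the event clears ... */
g_object_set (image_pad, "alpha", 1.0, NULL);  /* show it again */

gst_object_unref (image_pad);
gst_object_unref (mix);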
I'm trying to create a screenshot (i.e. grab one frame) from an RTSP camera stream using a GStreamer pipeline.
The pipeline used looks like this:
gst-launch-1.0 rtspsrc location=$CAM_URL is_live=true ! decodebin ! videoconvert ! jpegenc snapshot=true ! filesink location=/tmp/frame.jpg
The problem is that the resulting image is always gray, with random artifacts. It looks like it's grabbing the very first frame without waiting for a key frame.
Is there any way to modify the pipeline so that it grabs the first valid frame of video? Or just waits long enough to be sure there has been at least one key frame?
I'm unsure why, but after some trial and error it is now working with decodebin3 instead of decodebin. The documentation is still a bit discouraging, though, stating that decodebin3 is "still experimental API and a technology preview. Its behaviour and exposed API is subject to change."
The full pipeline looks like this:
gst-launch-1.0 rtspsrc location=$CAM_URL is_live=true ! decodebin3 ! videoconvert ! jpegenc snapshot=true ! filesink location=/tmp/frame.jpg
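If the same grab is needed from C rather than gst-launch, a minimal sketch built around the pipeline string above (the RTSP URL below is a placeholder) could look like this:

#include <gst/gst.h>

int
main (int argc, char *argv[])
{
  gst_init (&argc, &argv);

  /* Same pipeline as above; replace the URL with your camera's. */
  GstElement *pipeline = gst_parse_launch (
      "rtspsrc location=rtsp://camera.example/stream is_live=true "
      "! decodebin3 ! videoconvert ! jpegenc snapshot=true "
      "! filesink location=/tmp/frame.jpg", NULL);
  gst_element_set_state (pipeline, GST_STATE_PLAYING);

  /* jpegenc snapshot=true posts EOS after the first encoded frame,
   * so waiting for EOS (or an error) is enough. */
  GstBus *bus = gst_element_get_bus (pipeline);
  GstMessage *msg = gst_bus_timed_pop_filtered (bus, GST_CLOCK_TIME_NONE,
      GST_MESSAGE_EOS | GST_MESSAGE_ERROR);

  gst_message_unref (msg);
  gst_object_unref (bus);
  gst_element_set_state (pipeline, GST_STATE_NULL);
  gst_object_unref (pipeline);
  return 0;
}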
The objective I am trying to achieve is to stream 1080p video from the Raspberry Pi camera and record the video simultaneously.
I tried recording with the HTTP stream as the source, but it didn't work at 30 fps: a lot of frames were missing and I got only about 8 fps.
As a second approach, I am trying to record the file directly from the camera and then stream the "recording in progress/buffer" file. For this I am trying to use GStreamer. Please suggest whether this is a good option or whether I should try another.
For recording with GStreamer I used:
gst-launch-1.0 -v v4l2src device=/dev/video0 ! capsfilter caps="video/x-raw,width=1920,height=1080,framerate=30/1" !
videoflip method=clockwise ! videoflip method=clockwise ! videoconvert ! videorate ! x264enc ! avimux ! filesink location=test_video.h264
Result: the recorded video shows 1080p and 30 fps, but frames are dropping heavily.
For streaming the video buffer I have used UDP in GStreamer:
gst-launch-1.0 -v v4l2src device=/dev/video0 ! capsfilter caps="video/x-raw,width=640,height=480,framerate=30/1" ! x264enc ! queue ! rtph264pay ! udpsink host=192.168.5.1 port=8080
Result: no specific errors in the terminal, but I can't get the stream in VLC.
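Before blaming the sender, it may be easier to verify the stream with a GStreamer receiver on the target host, since VLC needs an SDP file to play a raw RTP/UDP stream. A receiver sketch (the caps assume the default payload type 96):

gst-launch-1.0 udpsrc port=8080 caps="application/x-rtp,media=video,clock-rate=90000,encoding-name=H264,payload=96" ! rtpjitterbuffer ! rtph264depay ! avdec_h264 ! videoconvert ! autovideosink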
Please suggest the best method here.
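One option worth evaluating, as a sketch I have not tested on a Pi: do both jobs in a single pipeline with tee, so the camera is opened and the video is encoded only once. Software x264 at 1080p30 is often the bottleneck on a Pi, hence the fast preset here:

gst-launch-1.0 -v v4l2src device=/dev/video0 ! video/x-raw,width=1920,height=1080,framerate=30/1 ! x264enc speed-preset=ultrafast tune=zerolatency ! tee name=t t. ! queue ! rtph264pay ! udpsink host=192.168.5.1 port=8080 t. ! queue ! h264parse ! matroskamux ! filesink location=test_video.mkv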
As the title says, I use this command to play a GIF on Windows, but it just shows the first frame and then closes it.
gst-launch-1.0 filesrc location=demo.gif ! gdkpixbufdec ! videoconvert ! autovideosink
I want to play the whole GIF file. Is there some GStreamer element or parameter I forgot to set up?
Since GStreamer 1.14, you can use elements from the libav library to build a GIF pipeline.
This is a sample command:
gst-launch-1.0 filesrc location=demo.gif ! avdemux_gif ! avdec_gif ! autovideosink
I have a collection of RGBA PNG files and have verified the presence of an alpha layer in each file:
gst-launch-1.0 multifilesrc location="pics/%d.png" ! decodebin ! videorate ! videoconvert ! video/x-raw,format=BGRA,framerate=60/1 ! videomixer background=checker ! videoconvert ! ximagesink
I want to take these files and make them into a video file (in any format that GStreamer will readily handle with a simple decodebin). What would be a good set of encoders, containers, and elements to use for this?
I've tried avimux, but no alpha data was saved. I also tried avenc_huffyuv, and that would decode fine as raw data using avdec_huffyuv, but decodebin could not detect it.
Nothing like a good night's sleep to solve an issue...
Apparently the huffyuv encoder and AVI muxer work nicely together to preserve transparency:
gst-launch-1.0 multifilesrc location="pics/%d.png" ! decodebin ! videorate ! videoconvert ! video/x-raw,format=BGRA,framerate=60/1 ! avenc_huffyuv ! avimux ! filesink location=/tmp/test.avi
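To double-check that the alpha really survives the round trip, the file can be decoded back against a checker background, mirroring the preview pipeline from the question:

gst-launch-1.0 filesrc location=/tmp/test.avi ! decodebin ! videoconvert ! video/x-raw,format=BGRA ! videomixer background=checker ! videoconvert ! ximagesink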
Can anyone please suggest a working pipeline to display text over live streaming using v4l2src and mfw_isink?
I have a working pipeline to display textoverlay, clockoverlay and timeoverlay, but with a test source and test sink, not with the desired source and sink.
gst-launch videotestsrc pattern=blue ! textoverlay font-desc="San 32" text="CAM1 Disconnected" valign=top halign=left ! ximagesink
You can just replace your elements with the needed ones and insert videoconvert if needed. I suggest you learn how to write pipeline descriptions, though.
Here I replaced videotestsrc with v4l2src and added a videoconvert element, because v4l2src and ximagesink have no common video format in their lists of supported formats.
gst-launch-1.0 v4l2src ! videoconvert ! textoverlay font-desc="San 32" text="CAM1 Disconnected" ! ximagesink
I used GStreamer 1.x here.
You can replace ximagesink with your custom sink.
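Since the text ("CAM1 Disconnected") looks like a status message, note that if you build the pipeline in C you can change it at runtime. A sketch, assuming the textoverlay element was created with name=overlay:

/* "overlay" is the name we assume was given to the textoverlay element. */
GstElement *overlay = gst_bin_get_by_name (GST_BIN (pipeline), "overlay");
g_object_set (overlay, "text", "CAM1 Connected", NULL);  /* update the message */
g_object_set (overlay, "silent", TRUE, NULL);            /* or hide the text entirely */
gst_object_unref (overlay);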