I am trying to use GStreamer to save captured images into a video in real time, as they are being captured. I already have a command that saves the images:
gst-launch -e v4l2src device=/dev/video0 ! 'image/jpeg,width=640,height=480,framerate=30/1' ! jpegdec ! timeoverlay halign=right valign=bottom ! clockoverlay halign=left valign=bottom time-format="%Y/%m/%d %H:%M:%S" ! tee name=t ! queue ! sdlvideosink t. ! queue ! videorate ! capsfilter caps="video/x-raw-yuv,framerate=1/1" ! ffmpegcolorspace ! jpegenc ! multifilesink location="./Desktop/frames/frame%06d.jpg"
This command saves the images to a folder. I wrote a second command that takes those pictures and muxes them into a video:
gst-launch -e multifilesrc location=./Desktop/frames/frame%06d.jpg ! image/jpeg,framerate=30/1 ! decodebin ! videoscale ! video/x-raw-yuv ! progressreport name=progress ! avimux ! filesink location=test.avi
I need a way to combine these two commands so that the video is saved in real time, but I cannot figure out how to do it.
Thanks!
I removed the multifilesink element from your first command and appended the avimux and filesink from your second (and reformatted it for this forum), producing this:
gst-launch -e v4l2src device=/dev/video0 ! \
'image/jpeg,width=640,height=480,framerate=30/1' ! \
jpegdec ! timeoverlay halign=right valign=bottom ! \
clockoverlay halign=left valign=bottom time-format="%Y/%m/%d %H:%M:%S" ! \
tee name=t ! \
queue ! \
sdlvideosink t. ! \
queue ! \
videorate ! \
capsfilter caps="video/x-raw-yuv,framerate=1/1" ! \
ffmpegcolorspace ! \
jpegenc ! \
avimux ! \
filesink location=test.avi
I'm not sure it will work, and it also drops the progressreport element (I'm not sure how that one behaves here). If the command fails, please post the gst-launch console error messages.
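If you also want to keep writing the individual JPEG frames to disk, one option is a second tee after jpegenc, feeding both the multifilesink and the muxer. An untested sketch in the same 0.10-era syntax as your original:

```shell
gst-launch -e v4l2src device=/dev/video0 ! \
  'image/jpeg,width=640,height=480,framerate=30/1' ! \
  jpegdec ! timeoverlay halign=right valign=bottom ! \
  clockoverlay halign=left valign=bottom time-format="%Y/%m/%d %H:%M:%S" ! \
  tee name=t ! queue ! sdlvideosink \
  t. ! queue ! videorate ! 'video/x-raw-yuv,framerate=1/1' ! \
  ffmpegcolorspace ! jpegenc ! tee name=enc ! \
  queue ! multifilesink location="./Desktop/frames/frame%06d.jpg" \
  enc. ! queue ! avimux ! filesink location=test.avi
```

Each tee branch gets its own queue so one slow sink cannot stall the others.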
I have one camera and I would like to clone it to be able to use it in two different apps.
The following two things work ok, but I'm not able to combine them:
Read from /dev/video0 and clone to /dev/video1 and /dev/video2
gst-launch-1.0 v4l2src name=vsrc device=/dev/video0 ! \
video/x-raw,width=1920,height=1080,framerate=60/1,format=RGB ! \
tee name=t ! queue ! v4l2sink device=/dev/video1 t. ! \
queue ! v4l2sink device=/dev/video2
Read from /dev/video0 and rescale it and output to /dev/video1
gst-launch-1.0 v4l2src name=vsrc device=/dev/video0 ! \
video/x-raw,width=1920,height=1080,framerate=60/1,format=RGB ! \
videoscale ! video/x-raw,width=178,height=100 ! videoconvert ! \
v4l2sink device=/dev/video1
But the following (reading -> rescaling -> cloning) does not work:
gst-launch-1.0 v4l2src name=vsrc device=/dev/video0 ! \
video/x-raw,width=1920,height=1080,framerate=60/1,format=RGB ! \
videoscale ! video/x-raw,width=178,height=100 ! videoconvert ! \
tee name=t ! queue ! v4l2sink device=/dev/video1 t. ! \
queue ! v4l2sink device=/dev/video2
It fails with the following error:
ERROR: from element /GstPipeline:pipeline0/GstVideoScale:videoscale0: Failed to configure the buffer pool
Additional debug info:
gstbasetransform.c(904): gst_base_transform_default_decide_allocation (): /GstPipeline:pipeline0/GstVideoScale:videoscale0:
Configuration is most likely invalid, please report this issue.
Thanks!
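A workaround worth trying for this kind of decide_allocation failure is to do the format conversion after the tee, in each branch, so that each v4l2sink negotiates its own buffer pool instead of sharing videoscale's allocation. An untested sketch:

```shell
gst-launch-1.0 v4l2src name=vsrc device=/dev/video0 ! \
  video/x-raw,width=1920,height=1080,framerate=60/1,format=RGB ! \
  videoscale ! video/x-raw,width=178,height=100 ! \
  tee name=t \
  t. ! queue ! videoconvert ! v4l2sink device=/dev/video1 \
  t. ! queue ! videoconvert ! v4l2sink device=/dev/video2
```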
I am trying to get a stream from the webcam and use tee to feed two sinks (filesink and autovideosink), so I can view the video in a window and save it to a file at the same time. When I run this command I only get a frozen image in the window, not a video stream. It works with two autovideosinks (I get two windows with two video streams), so I suspect the problem is in the filesink branch. The filesink branch works fine on its own.
gst-launch-1.0 -v v4l2src device=/dev/video0 ! tee name=t \
t. ! queue ! videoscale ! video/x-raw,framerate=30/1,width=320,height=240 ! \
videoconvert ! autovideosink \
t. ! queue ! video/x-raw,framerate=30/1,width=320,height=240 ! \
x264enc ! mpegtsmux ! filesink location=~/Videos/test1.mp4
Try adding the async=0 property to filesink:
gst-launch-1.0 -v v4l2src device=/dev/video0 ! tee name=t \
t. ! queue ! videoscale ! video/x-raw,framerate=30/1,width=320,height=240 ! \
videoconvert ! autovideosink \
t. ! queue ! video/x-raw,framerate=30/1,width=320,height=240 ! \
x264enc ! mpegtsmux ! filesink async=0 location=~/Videos/test1.mp4
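Two side notes: mpegtsmux writes an MPEG-TS stream, so the .mp4 extension is misleading; and x264enc's default encoder latency can also stall a live preview. An untested variant that uses tune=zerolatency and a real MP4 container (mp4mux needs a clean EOS, hence the -e flag):

```shell
gst-launch-1.0 -e -v v4l2src device=/dev/video0 ! tee name=t \
  t. ! queue ! videoscale ! video/x-raw,framerate=30/1,width=320,height=240 ! \
  videoconvert ! autovideosink \
  t. ! queue ! video/x-raw,framerate=30/1,width=320,height=240 ! \
  videoconvert ! x264enc tune=zerolatency ! mp4mux ! \
  filesink async=0 location=~/Videos/test1.mp4
```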
I am trying to write a GStreamer pipeline that captures the screen, overlays a box in the corner showing the webcam, and records audio, all at the same time.
If I hit Ctrl+C to stop after ten seconds, for example, I find I have only recorded about 2 seconds of video (and audio). I don't actually care whether the recording happens in real time; I just want GStreamer to record the full length it should.
This is the pipeline I have so far:
gst-launch-1.0 --gst-debug=3 ximagesrc use-damage=0 \
! video/x-raw,width=1366,height=768,framerate=30/1 ! videoconvert \
! videomixer name=mix sink_0::alpha=1 sink_1::alpha=1 sink_1::xpos=1046 sink_1::ypos=528 \
! videoconvert ! queue max-size-buffers=0 max-size-time=0 max-size-bytes=0 \
! vp8enc ! webmmux name=mux ! filesink location="out.webm" \
pulsesrc ! audioconvert ! queue max-size-buffers=0 max-size-time=0 max-size-bytes=0 ! vorbisenc ! mux. \
v4l2src do-timestamp=true ! video/x-raw,width=320,height=240,framerate=30/1 ! mix.
I hope someone has a solution. Thank you.
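One likely culprit is vp8enc: its default deadline targets best quality and usually cannot encode a 1366x768 screen at 30 fps in real time, so frames pile up and only a fraction of the session gets written. A sketch of the same pipeline with the encoder forced into realtime mode (deadline=1; cpu-used=4 is an assumption to tune for your machine; untested):

```shell
gst-launch-1.0 --gst-debug=3 ximagesrc use-damage=0 \
  ! video/x-raw,width=1366,height=768,framerate=30/1 ! videoconvert \
  ! videomixer name=mix sink_0::alpha=1 sink_1::alpha=1 sink_1::xpos=1046 sink_1::ypos=528 \
  ! videoconvert ! queue max-size-buffers=0 max-size-time=0 max-size-bytes=0 \
  ! vp8enc deadline=1 cpu-used=4 ! webmmux name=mux ! filesink location="out.webm" \
  pulsesrc ! audioconvert ! queue max-size-buffers=0 max-size-time=0 max-size-bytes=0 ! vorbisenc ! mux. \
  v4l2src do-timestamp=true ! video/x-raw,width=320,height=240,framerate=30/1 ! queue ! mix.
```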
I need to compose a pipeline for "picture-in-picture" effect to combine media from two files:
1) video from the first file fills the whole window,
2) video from the second file is scaled down and shown in the top-left corner of the window,
3) audio from both files is mixed,
4) the content of both files plays simultaneously.
So far I got the following pipeline:
gst-launch-1.0 -e \
filesrc name="src0" location=$FILE0 \
! decodebin name="decodebin0" ! queue ! videoscale ! capsfilter caps="video/x-raw,width=120" ! videoconvert ! videomixer.sink_0 decodebin0. ! queue ! audioconvert ! audiomixer.sink_0 \
filesrc name="src1" location=$FILE1 \
! decodebin name="decodebin1" ! queue ! videoscale ! capsfilter caps="video/x-raw" ! videoconvert ! videomixer.sink_1 decodebin1. ! queue ! audioconvert ! audiomixer.sink_1 \
videomixer name="videomixer" ! autovideosink \
audiomixer name="audiomixer" ! autoaudiosink
However, it plays the streams one after another, not in parallel. Does anyone know what should be changed here to play the streams simultaneously?
P.S.: attaching a diagram of this pipeline, visualized.
Surprisingly, the order of the sources in the pipeline matters. After a slight modification, placing the source with the larger frame first, I was able to get the expected result:
gst-launch-1.0 -ev \
filesrc name="src1" location=$FILE1 \
! decodebin name="decodebin1" ! queue ! videoscale ! capsfilter caps="video/x-raw,framerate=15/1" ! videoconvert ! videomixer.sink_1 decodebin1. ! queue ! audioconvert name="ac1" \
filesrc name="src0" location=$FILE0 \
! decodebin name="decodebin0" ! queue ! videoscale ! capsfilter caps="video/x-raw,width=120,framerate=15/1" ! videoconvert ! videomixer.sink_0 decodebin0. ! queue ! audioconvert name="ac0" \
ac0. ! audiomixer.sink_0 \
ac1. ! audiomixer.sink_1 \
videomixer name="videomixer" ! autovideosink \
audiomixer name="audiomixer" ! autoaudiosink
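If relying on source order feels fragile, the overlay position and stacking can also be pinned explicitly with videomixer pad properties (xpos, ypos, zorder; higher zorder is drawn on top). The videomixer line would then become something like this (untested):

```shell
videomixer name="videomixer" \
  sink_0::xpos=0 sink_0::ypos=0 sink_0::zorder=2 \
  sink_1::zorder=1 ! autovideosink \
```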
I am using the following pipeline to convert an flv file to mp4.
gst-launch-1.0 -vvv -e filesrc location="c.flv" ! flvdemux name=demux \
demux.audio ! queue ! decodebin ! audioconvert ! faac bitrate=32000 ! mux. \
demux.video ! queue ! decodebin ! videoconvert ! video/x-raw,format=I420 ! x264enc speed-preset=superfast tune=zerolatency psy-tune=grain sync-lookahead=5 bitrate=480 key-int-max=50 ref=2 ! mux. \
mp4mux name=mux ! filesink location="c.mp4"
The problem is that when a track is missing (audio, for example), the pipeline gets stuck. (The same thing happens if I just hook a fakesink to demux.audio.)
I need a way for the pipeline to ignore missing tracks, or to produce empty ones.
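gst-launch alone cannot make flvdemux's sometimes-pads optional, so a practical workaround is to probe the file first and pick a pipeline accordingly. A sketch using gst-discoverer-1.0 (assumes the gst-plugins-base tools are installed; the grep pattern may need adjusting to your discoverer output):

```shell
#!/bin/sh
# Probe c.flv for an audio stream, then build the pipeline to match.
if gst-discoverer-1.0 c.flv 2>/dev/null | grep -qi "audio:"; then
  gst-launch-1.0 -e filesrc location="c.flv" ! flvdemux name=demux \
    demux.audio ! queue ! decodebin ! audioconvert ! faac bitrate=32000 ! mux. \
    demux.video ! queue ! decodebin ! videoconvert ! video/x-raw,format=I420 ! \
    x264enc speed-preset=superfast bitrate=480 ! mux. \
    mp4mux name=mux ! filesink location="c.mp4"
else
  # No audio track: skip the audio branch entirely.
  gst-launch-1.0 -e filesrc location="c.flv" ! flvdemux name=demux \
    demux.video ! queue ! decodebin ! videoconvert ! video/x-raw,format=I420 ! \
    x264enc speed-preset=superfast bitrate=480 ! mux. \
    mp4mux name=mux ! filesink location="c.mp4"
fi
```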