GStreamer pipeline, videorate not working as intended

I'm using GStreamer to take four videos (MKV container, MJPEG codec, 25 frames per second, 5 minutes long) and generate a "wall" of videos (basically a 2x2 matrix). I'm using the following pipeline:
#!/bin/sh
gst-launch -e videomixer name=mix ! ffmpegcolorspace ! jpegenc ! avimux ! filesink location=$1.avi \
uridecodebin uri="file://${PWD}/$1/1.mkv" ! videoscale ! videorate ! video/x-raw-yuv,width=300,height=200, framerate=25/1 ! videobox border-alpha=0 top=0 left=0 ! mix. \
uridecodebin uri="file://${PWD}/$1/2.mkv" ! videoscale ! videorate ! video/x-raw-yuv,width=300,height=200,framerate=25/1 ! videobox border-alpha=0 top=0 left=-300 ! mix. \
uridecodebin uri="file://${PWD}/$1/3.mkv" ! videoscale ! videorate ! video/x-raw-yuv,width=300,height=200,framerate=25/1 ! videobox border-alpha=0 top=-200 left=0 ! mix. \
uridecodebin uri="file://${PWD}/$1/4.mkv" ! videoscale ! videorate ! video/x-raw-yuv,width=300,height=200,framerate=25/1 ! videobox border-alpha=0 top=-200 left=-300 ! mix. \
The code works, but the end result is only 17 seconds long instead of 5 minutes like the source videos, and it doesn't seem like I'm using the videorate element properly: the output video randomly "speeds up", reading frames as they become available instead of maintaining the speed of the original videos.
Interestingly enough, when the source files are .wmv (Windows Media 9 codec) everything appears to be working just fine. Any ideas?

Try putting your capsfilter in quotes: ... videorate ! "video/x-raw-yuv,width=300,height=200, framerate=25/1" ! videobox ...
Also try videomixer2.
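For reference, a sketch of the question's script with each caps filter quoted so the shell passes it to gst-launch as a single argument (same paths and sizes as in the question; untested here):

```shell
#!/bin/sh
gst-launch -e videomixer name=mix ! ffmpegcolorspace ! jpegenc ! avimux ! filesink location=$1.avi \
  uridecodebin uri="file://${PWD}/$1/1.mkv" ! videoscale ! videorate ! "video/x-raw-yuv,width=300,height=200,framerate=25/1" ! videobox border-alpha=0 top=0 left=0 ! mix. \
  uridecodebin uri="file://${PWD}/$1/2.mkv" ! videoscale ! videorate ! "video/x-raw-yuv,width=300,height=200,framerate=25/1" ! videobox border-alpha=0 top=0 left=-300 ! mix. \
  uridecodebin uri="file://${PWD}/$1/3.mkv" ! videoscale ! videorate ! "video/x-raw-yuv,width=300,height=200,framerate=25/1" ! videobox border-alpha=0 top=-200 left=0 ! mix. \
  uridecodebin uri="file://${PWD}/$1/4.mkv" ! videoscale ! videorate ! "video/x-raw-yuv,width=300,height=200,framerate=25/1" ! videobox border-alpha=0 top=-200 left=-300 ! mix.
```

Quoting keeps the caps string as one token even with spaces after commas, so the parser sees the framerate field instead of the shell splitting it off.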

Related

Gstreamer Combine Video and Sound from different sources and Broadcast to RTMP

I have googled everywhere but couldn't find a solution to my problem. I would be happy to hear from anyone who had a similar need and resolved it somehow.
I stream to an RTMP server with the following command. It captures video from an HDMI encoder, then crops and rotates it.
gst-launch-1.0 -e v4l2src device=/dev/v4l/by-path/platform-fe801000.csi-video-index0 ! video/x-raw,format=UYVY,framerate=20/1 ! videoconvert ! videoscale ! video/x-raw,width=1280,height=720 ! videocrop top=0 left=0 right=800 bottom=0 ! videoflip method=counterclockwise ! omxh264enc ! h264parse ! flvmux name=mux streamable=true ! rtmpsink sync=true async=true location='rtmp://XXXXX live=true'
I want to add audio from the microphone attached to the Raspberry Pi. For example, I can record microphone input to a WAV file with the pipeline below.
gst-launch-1.0 alsasrc num-buffers=1000 device="hw:1,0" ! audio/x-raw,format=S16LE ! wavenc ! filesink location=a.wav
My question is: how can I add audio to my existing command line that streams to the RTMP server? Also, when I capture audio to a file, there is a lot of noise. How can I avoid it?
Thank you
I have combined audio and video, but I still have noise on the audio.
gst-launch-1.0 -e v4l2src device=/dev/v4l/by-path/platform-fe801000.csi-video-index0 ! video/x-raw,format=UYVY,framerate=20/1 ! videoconvert ! videoscale ! video/x-raw,width=1280,height=720 ! videocrop top=0 left=0 right=800 bottom=0 ! videoflip method=counterclockwise ! omxh264enc ! h264parse ! flvmux name=mux streamable=true ! rtmpsink sync=true async=true location='rtmp://XXXXXXXXXXXXXXXX' alsasrc device="hw:1,0" ! queue ! audioconvert ! audioresample ! audio/x-raw,rate=44100 ! queue ! voaacenc bitrate=128000 ! audio/mpeg ! aacparse ! audio/mpeg,mpegversion=4 ! mux.
I have partially resolved the noise with the following ffmpeg command, but it is still not great.
"ffmpeg -ar 48000 -ac 1 -f alsa -i hw:1,0 -acodec aac -ab 128k -af 'highpass=f=200, lowpass=f=200' -f flv rtmp://XXXXX.XXXXXXX.XXXXX/LiveApp/"+ str(Id) + "-" + str(deviceId)+"-Audio"

How to improve performance on screencasts with audio using GStreamer?

I'm trying to write a GStreamer pipeline that captures the screen, overlays a corner box showing the webcam, and records audio, all at the same time.
If I hit Ctrl+C to stop after, say, ten seconds, I find that only about 2 seconds of video (and audio) were recorded. I don't actually care whether the recording happens in real time; I just want GStreamer to record the full length it should.
This is the pipeline I have so far:
gst-launch-1.0 --gst-debug=3 ximagesrc use-damage=0 \
! video/x-raw,width=1366,height=768,framerate=30/1 ! videoconvert \
! videomixer name=mix sink_0::alpha=1 sink_1::alpha=1 sink_1::xpos=1046 sink_1::ypos=528 \
! videoconvert ! queue max-size-buffers=0 max-size-time=0 max-size-bytes=0 \
! vp8enc ! webmmux name=mux ! filesink location="out.webm" \
pulsesrc ! audioconvert ! queue max-size-buffers=0 max-size-time=0 max-size-bytes=0 ! vorbisenc ! mux. \
v4l2src do-timestamp=true ! video/x-raw,width=320,height=240,framerate=30/1 ! mix.
I hope someone has a solution, thank you.

Gstreamer picture-in-picture - two files playing in parallel

I need to compose a pipeline for a "picture-in-picture" effect combining media from two files:
1) video content from the first file is shown in the full window,
2) video from the second file is scaled down and shown in the top-left corner of the window,
3) audio from both files is mixed,
4) the content of both files plays simultaneously.
So far I got the following pipeline:
gst-launch-1.0 -e \
filesrc name="src0" location=$FILE0 \
! decodebin name="decodebin0" ! queue ! videoscale ! capsfilter caps="video/x-raw,width=120" ! videoconvert ! videomixer.sink_0 decodebin0. ! queue ! audioconvert ! audiomixer.sink_0 \
filesrc name="src1" location=$FILE1 \
! decodebin name="decodebin1" ! queue ! videoscale ! capsfilter caps="video/x-raw" ! videoconvert ! videomixer.sink_1 decodebin1. ! queue ! audioconvert ! audiomixer.sink_1 \
videomixer name="videomixer" ! autovideosink \
audiomixer name="audiomixer" ! autoaudiosink
However, it plays the streams one after another, not in parallel. Does anyone know what should be changed here to make the streams play simultaneously?
P.S. Attaching a visualization of this pipeline:
Surprisingly, the order of the sources in the pipeline does matter. After a slight modification, placing the source with the "larger" frame first, I was able to get the expected result:
gst-launch-1.0 -ev \
filesrc name="src1" location=$FILE1 \
! decodebin name="decodebin1" ! queue ! videoscale ! capsfilter caps="video/x-raw,framerate=15/1" ! videoconvert ! videomixer.sink_1 decodebin1. ! queue ! audioconvert name="ac1" \
filesrc name="src0" location=$FILE0 \
! decodebin name="decodebin0" ! queue ! videoscale ! capsfilter caps="video/x-raw,width=120,framerate=15/1" ! videoconvert ! videomixer.sink_0 decodebin0. ! queue ! audioconvert name="ac0" \
ac0. ! audiomixer.sink_0 \
ac1. ! audiomixer.sink_1 \
videomixer name="videomixer" ! autovideosink \
audiomixer name="audiomixer" ! autoaudiosink

GStreamer images to video in real time

I am trying to use GStreamer to save images into a video in real time, as they are being captured. I have the command that saves the images:
gst-launch -e v4l2src device=/dev/video0 ! 'image/jpeg,width=640,height=480,framerate=30/1' ! jpegdec ! timeoverlay halign=right valign=bottom ! clockoverlay halign=left valign=bottom time-format="%Y/%m/%d %H:%M:%S" ! tee name=t ! queue ! sdlvideosink t. ! queue ! videorate ! capsfilter caps="video/x-raw-yuv,framerate=1/1" ! ffmpegcolorspace ! jpegenc ! multifilesink location="./Desktop/frames/frame%06d.jpg"
This command saves the images to a folder. I wrote another command that takes those pictures and saves them to a video. This command is:
gst-launch -e multifilesrc location=./Desktop/frames/frame%06d.jpg ! image/jpeg,framerate=30/1 ! decodebin ! videoscale ! video/x-raw-yuv ! progressreport name=progress ! avimux ! filesink location=test.avi
I need a way of combining these two commands so that the video can be saved in real time. I cannot seem to figure out how this is done.
Thanks!
I removed the multifilesink element from your first command and appended the avimux and filesink from your second command (and formatted it better for this forum) to produce this:
gst-launch -e v4l2src device=/dev/video0 ! \
'image/jpeg,width=640,height=480,framerate=30/1' ! \
jpegdec ! timeoverlay halign=right valign=bottom ! \
clockoverlay halign=left valign=bottom time-format="%Y/%m/%d %H:%M:%S" ! \
tee name=t ! \
queue ! \
sdlvideosink t. ! \
queue ! \
videorate ! \
capsfilter caps="video/x-raw-yuv,framerate=1/1" ! \
ffmpegcolorspace ! \
jpegenc ! \
avimux ! \
filesink location=test.avi
I'm not sure if it will work, and it also discards the progressreport element (I'm not sure what it does). If the command fails, please post the gst-launch console error messages.

How to encode a video matrix using the shortest clip?

I am producing a video matrix from input streams of varying lengths.
How can I stop writing to the file upon completion of the shortest clip?
This is important because I don't want to see empty videobox elements once a clip has finished playing.
Does GStreamer provide some functionality to stop processing after a timeout period?
GST_DEBUG=2 gst-launch-0.10 -e videomixer2 name=mix ! ffmpegcolorspace ! jpegenc ! avimux ! filesink location=test.avi \
uridecodebin uri="file:///home/me/1.wmv" ! videoscale ! videorate ! "video/x-raw-yuv,width=300,height=200, framerate=25/1" ! videobox border-alpha=0 top=0 left=0 ! mix. \
uridecodebin uri="file:///home/me/2.wmv" ! videoscale ! videorate ! "video/x-raw-yuv,width=300,height=200,framerate=25/1" ! videobox border-alpha=0 top=0 left=-300 ! mix. \
uridecodebin uri="file:///home/me/3.wmv" ! videoscale ! videorate ! "video/x-raw-yuv,width=300,height=200,framerate=25/1" ! videobox border-alpha=0 top=-200 left=0 ! mix. \
uridecodebin uri="file:///home/me/4.wmv" ! videoscale ! videorate ! "video/x-raw-yuv,width=300,height=200,framerate=25/1" ! videobox border-alpha=0 top=-200 left=-300 ! mix. \
If you knew the shortest duration in advance, you could use a seek event with a stop time. EOS is by definition emitted when the whole pipeline is done. You might be able to use pad probes to catch the EOS on the sink pads of videomixer and then stop.
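A rough sketch of the pad-probe idea in Python (PyGObject assumed installed; the pipeline below is a small stand-in for the question's videomixer graph, and injecting a pipeline-wide EOS once the first input finishes is one possible way to make the muxer finalize the file):

```python
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

# Stand-in for the question's pipeline; in practice this would be the
# uridecodebin/videomixer/avimux graph from the gst-launch line above.
pipeline = Gst.parse_launch(
    "videotestsrc num-buffers=50 ! videomixer name=mix ! fakesink"
)
mix = pipeline.get_by_name("mix")

def on_sink_event(pad, info):
    event = info.get_event()
    if event.type == Gst.EventType.EOS:
        # The first input finished: push EOS through the whole pipeline
        # so downstream elements (e.g. the muxer) finalize their output.
        pipeline.send_event(Gst.Event.new_eos())
    return Gst.PadProbeReturn.OK

# Watch every sink pad of the mixer for a downstream EOS event.
for pad in mix.sinkpads:
    pad.add_probe(Gst.PadProbeType.EVENT_DOWNSTREAM, on_sink_event)

pipeline.set_state(Gst.State.PLAYING)
```

This is only a sketch of the probe mechanics; a real script would also run a GLib main loop and watch the bus for the final EOS before shutting down.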