How to mix a live source and a non-live source with GStreamer videomixer plug-in?
gst-launch shows nothing when mixing uridecodebin (some MPEG video) and videotestsrc:
gst-launch \
videomixer name=mix sink_0::zorder=0 sink_1::zorder=1 ! ffmpegcolorspace ! autovideosink \
uridecodebin uri=file:///test.mpg ! timeoverlay ! videoscale ! video/x-raw-yuv,width=704,height=576 ! queue ! mix.sink_0 \
videotestsrc ! video/x-raw-yuv, width=176,height=144 ! queue ! mix.sink_1
But it works if I change both sources to the mpeg video:
gst-launch
videomixer name=mix sink_0::zorder=0 sink_1::zorder=1 ! ffmpegcolorspace ! autovideosink
uridecodebin uri=file:///test.mpg ! timeoverlay ! videoscale ! video/x-raw-yuv,width=704,height=576 ! queue ! mix.sink_0
uridecodebin uri=file:///test.mpg ! timeoverlay ! videoscale ! video/x-raw-yuv,width=176,height=144 ! queue ! mix.sink_1
It turns out I made a silly mistake. Here is the answer to my own question:
With the above command and the tested video clip, the two sources output different video formats.
After forcing the video format of videotestsrc to I420, it works fine.
Here is a command that mixes:
gst-launch -v
videomixer name=mix sink_0::zorder=0 sink_1::zorder=1 ! ffmpegcolorspace ! autovideosink
uridecodebin uri=file:///media/sf_share/test.mpg ! timeoverlay ! videoscale ! video/x-raw-yuv,width=704,height=576 ! videorate force-fps=-1 ! queue ! mix.sink_0
videotestsrc ! video/x-raw-yuv,width=352,height=288,format=(fourcc)I420 ! timeoverlay ! queue ! mix.sink_1
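For reference, a roughly equivalent pipeline on GStreamer 1.0 might look like this (an untested sketch: ffmpegcolorspace became videoconvert, the caps format is video/x-raw, and format is a plain string rather than a fourcc):

```shell
# GStreamer 1.0 sketch of the same mix: live videotestsrc over a decoded file
gst-launch-1.0 -v \
  videomixer name=mix sink_0::zorder=0 sink_1::zorder=1 ! videoconvert ! autovideosink \
  uridecodebin uri=file:///media/sf_share/test.mpg ! timeoverlay ! videoscale ! video/x-raw,width=704,height=576 ! queue ! mix.sink_0 \
  videotestsrc ! video/x-raw,format=I420,width=352,height=288 ! timeoverlay ! queue ! mix.sink_1
```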
Related
I am experiencing latency with GStreamer. Can anyone recommend an optimised command for the two web cameras I am trying to use?
Any recommendations on what the command would look like with audio added as well?
Logitech Brio - Send
gst-launch-1.0 -v -e v4l2src device=$CAMERA ! image/jpeg,format=MJPG,width=1920,height=1080,framerate=60/1 ! jpegparse ! rtpjpegpay ! udpsink host=$HOSTIP port=$PORT sync=false
Logitech Brio - Receive (OBS)
udpsrc port=8555 caps = "application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)JPEG, payload=(int)26" ! rtpjpegdepay ! jpegparse ! jpegdec ! video.
Logitech c920 - Send
gst-launch-1.0 -v -e v4l2src device=$CAMERA ! video/x-h264,width=320,height=240,framerate=30/1 ! x264enc bitrate=6000 pass=pass1 speed-preset=ultrafast tune=zerolatency sliced-threads=true threads=6 ! h264parse ! rtph264pay ! udpsink host=$HOSTIP port=$PORT sync=false
Logitech c920 - Receive (OBS)
udpsrc port=8554 caps = "application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264, payload=(int)96" ! rtph264depay ! h264parse ! avdec_h264 ! video.
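For the audio part of the question, one common approach is a parallel RTP branch using Opus. This is an untested sketch; the pulsesrc capture source, the port 5002, and the payload number are assumptions you would adapt to your setup:

```shell
# Send: capture audio (assumed PulseAudio source), encode with Opus, send RTP on a second port
gst-launch-1.0 -v pulsesrc ! audioconvert ! audioresample ! opusenc ! rtpopuspay ! udpsink host=$HOSTIP port=5002 sync=false

# Receive: depayload and decode the Opus RTP stream
gst-launch-1.0 -v udpsrc port=5002 caps="application/x-rtp,media=(string)audio,clock-rate=(int)48000,encoding-name=(string)OPUS,payload=(int)96" ! rtpopusdepay ! opusdec ! audioconvert ! autoaudiosink
```

Keeping audio on its own UDP port is simpler than muxing; if you need lip-sync you would move to RTP sessions with rtpbin instead.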
The documentation for some software I'm using says to use this gstreamer pipeline to stream video from a camera:
gst-launch-1.0 v4l2src device=/dev/video5 ! video/x-raw ! videoconvert ! v4l2h264enc ! h264parse config-interval=3 ! rtph264pay mtu=1024 ! udpsink host=127.0.0.1 port=5600
If I wanted to adapt this to pipe to a .mp4, I thought something like this would work:
gst-launch-1.0 v4l2src device=/dev/video5 ! video/x-raw ! videoconvert ! v4l2h264enc ! h264parse config-interval=3 ! filesink location=test.mp4
but the resulting file is not playable in vlc.
What am I missing?
Thanks in advance.
You would use a container muxer (such as qtmux here):
# For recording 100 frames:
gst-launch-1.0 v4l2src device=/dev/video5 num-buffers=100 ! video/x-raw ! videoconvert ! v4l2h264enc ! h264parse config-interval=3 ! qtmux ! filesink location=test.mp4
# If you want to stop manually with Ctrl-C, add EOS handling with -e:
gst-launch-1.0 -e v4l2src device=/dev/video5 ! video/x-raw ! videoconvert ! v4l2h264enc ! h264parse config-interval=3 ! qtmux ! filesink location=test.mp4
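If an unclean stop is a concern, a Matroska container is more forgiving than MP4, since an MP4 only becomes playable once qtmux writes its index on EOS, while an MKV stays readable if the pipeline dies mid-recording. An untested sketch along the same lines:

```shell
# Same capture pipeline, but muxed into Matroska for crash resilience
gst-launch-1.0 -e v4l2src device=/dev/video5 ! video/x-raw ! videoconvert ! v4l2h264enc ! h264parse config-interval=3 ! matroskamux ! filesink location=test.mkv
```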
I have a problem with picture-in-picture using GStreamer.
I'm using this command to play the stream.
gst-launch -v souphttpsrc location='http://mjpeg.sanford.io/count.mjpeg' ! multipartdemux ! jpegdec ! videomixer name=mix ! autovideosink souphttpsrc location='http://mjpeg.sanford.io/count.mjpeg' ! multipartdemux ! jpegdec ! mix.
But I get the following error:
http://pastebin.com/7Xry2Q8x
Does anybody have an idea?
The videomixer wants framerate information to be delivered to it from each of the streams, and the MJPEG stream carries none. Here is a sample that works but assumes a framerate of 30fps.
I also added a queue element to each stream before it connects to the mixer.
gst-launch-1.0 -v souphttpsrc location='http://mjpeg.sanford.io/count.mjpeg' ! multipartdemux ! image/jpeg,framerate=30/1 ! jpegdec ! queue ! videomixer name=mix ! autovideosink sync=false souphttpsrc location='http://mjpeg.sanford.io/count.mjpeg' ! multipartdemux ! image/jpeg,framerate=30/1 ! jpegdec ! queue ! mix.
This kind of pipeline can be tricky to build. What kind of MJPEGs are you trying to mix?
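To get an actual picture-in-picture layout rather than one stream fully covering the other, you can scale the second stream down and place it with the mixer's sink-pad properties. An untested sketch based on the working command above (the 160x120 size and 20-pixel offsets are arbitrary choices):

```shell
# Second stream scaled down and positioned as an inset via sink_1 pad properties
gst-launch-1.0 -v videomixer name=mix sink_1::xpos=20 sink_1::ypos=20 sink_1::zorder=1 ! autovideosink sync=false \
  souphttpsrc location='http://mjpeg.sanford.io/count.mjpeg' ! multipartdemux ! image/jpeg,framerate=30/1 ! jpegdec ! queue ! mix.sink_0 \
  souphttpsrc location='http://mjpeg.sanford.io/count.mjpeg' ! multipartdemux ! image/jpeg,framerate=30/1 ! jpegdec ! videoscale ! video/x-raw,width=160,height=120 ! queue ! mix.sink_1
```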
I would like to receive an RTMP stream and create a pipe with v4l2sink as output:
gst-launch rtmpsrc location="rtmp://localhost/live/test" ! "video/x-raw-yuv,width=640,height=480,framerate=30/1,format=(fourcc)YUY2" ! videorate ! v4l2sink device=/dev/video1
But I get only a green screen: https://www.dropbox.com/s/yq9oqi9m62c5afo/screencast1422465570.webm?dl=0
Your pipeline is telling GStreamer to treat encoded, muxed RTMP data as YUV video buffers.
Instead you need to parse, demux, and decode the video part of the RTMP data. I don't have a sample stream to test on, but you may be able to just use decodebin (which in GStreamer 0.10 was called decodebin2 for whatever reason). You'll also want to reorder the videorate to be before the framerate caps, so it knows what to convert to.
Wild stab in the dark:
gst-launch rtmpsrc location="rtmp://localhost/live/test" ! decodebin2 ! videoscale ! ffmpegcolorspace ! videorate ! "video/x-raw-yuv,width=640,height=480,framerate=30/1,format=(fourcc)YUY2" ! v4l2sink device=/dev/video1
Now it works:
gst-launch rtmpsrc location="rtmp://localhost/live/test" ! decodebin2 ! videoscale ! ffmpegcolorspace ! videorate ! "video/x-raw-yuv,width=1920,height=1080,framerate=30/1,format=(fourcc)YUY2" ! v4l2sink device=/dev/video1
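On GStreamer 1.0 the same idea would look roughly like this (an untested sketch: decodebin2 and ffmpegcolorspace became decodebin and videoconvert, the caps name is video/x-raw, and format is a plain string):

```shell
# GStreamer 1.0 sketch: decode the RTMP stream and feed a v4l2 loopback device
gst-launch-1.0 rtmpsrc location="rtmp://localhost/live/test" ! decodebin ! videoscale ! videoconvert ! videorate ! "video/x-raw,width=1920,height=1080,framerate=30/1,format=YUY2" ! v4l2sink device=/dev/video1
```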
I have two avi files (one contains only video, and the second contains both a video and an audio stream). I would like to composite/mix the two avi files into one video file (side by side) and preserve the audio.
I can compose video streams (using videomixer) but I don't know how to add audio stream:
gst-launch filesrc location=test01.avi ! decodebin ! videoscale ! ffmpegcolorspace ! video/x-raw-yuv,width=640,height=480 ! videobox border-alpha=0 left=-640 ! videomixer name=mix ! ffmpegcolorspace ! queue ! avimux ! filesink location=videoTestMix666.avi filesrc location=test02.avi ! decodebin ! videoscale ! ffmpegcolorspace ! video/x-raw-yuv,width=640,height=480 ! videobox right=-640 ! mix.
Could you advise me how to do this - how to add the audio? Many thanks.
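One way to attempt this (an untested sketch): decodebin also exposes an audio pad for test02.avi, so if you name that decodebin and the avimux, you can run the audio branch through audioconvert into the same muxer:

```shell
# Video from both files into videomixer as before; audio branch from the named
# decodebin (dec) routed into the named avimux (mux)
gst-launch filesrc location=test01.avi ! decodebin ! videoscale ! ffmpegcolorspace ! video/x-raw-yuv,width=640,height=480 ! videobox border-alpha=0 left=-640 ! videomixer name=mix ! ffmpegcolorspace ! queue ! avimux name=mux ! filesink location=videoTestMix666.avi \
  filesrc location=test02.avi ! decodebin name=dec \
  dec. ! videoscale ! ffmpegcolorspace ! video/x-raw-yuv,width=640,height=480 ! videobox right=-640 ! mix. \
  dec. ! audioconvert ! queue ! mux.
```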