GStreamer - How to compose three streams (two video files with audio)

I have two AVI files (one contains only video; the second contains both a video and an audio stream). I would like to composite/mix the two AVI files into one video file (side by side) and preserve the audio.
I can compose the video streams (using videomixer), but I don't know how to add the audio stream:
gst-launch filesrc location=test01.avi ! decodebin ! videoscale ! ffmpegcolorspace ! video/x-raw-yuv,width=640,height=480 ! videobox border-alpha=0 left=-640 ! videomixer name=mix ! ffmpegcolorspace ! queue ! avimux ! filesink location=videoTestMix666.avi \
filesrc location=test02.avi ! decodebin ! videoscale ! ffmpegcolorspace ! video/x-raw-yuv,width=640,height=480 ! videobox right=-640 ! mix.
Could you advise me how to do this, i.e. how to add the audio? Many thanks.
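For what it's worth, a minimal untested sketch of one way to do it (0.10 syntax, same elements as above): name the second file's decodebin so that both its video and audio pads can be linked, name the avimux, and feed the decoded audio into it next to the mixed video. The audioconvert is an assumption about what avimux will accept from your source:
# Sketch: mix both videos side by side and mux test02.avi's audio into the output
gst-launch filesrc location=test01.avi ! decodebin ! videoscale ! ffmpegcolorspace ! video/x-raw-yuv,width=640,height=480 ! videobox border-alpha=0 left=-640 ! videomixer name=mix ! ffmpegcolorspace ! queue ! avimux name=mux ! filesink location=videoTestMix666.avi \
filesrc location=test02.avi ! decodebin name=dec \
dec. ! videoscale ! ffmpegcolorspace ! video/x-raw-yuv,width=640,height=480 ! videobox right=-640 ! mix. \
dec. ! queue ! audioconvert ! mux.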

Related

Gstreamer x-raw to h264 mp4

The documentation for some software I'm using says to use this gstreamer pipeline to stream video from a camera:
gst-launch-1.0 v4l2src device=/dev/video5 ! video/x-raw ! videoconvert ! v4l2h264enc ! h264parse config-interval=3 ! rtph264pay mtu=1024 ! udpsink host=127.0.0.1 port=5600
If I wanted to adapt this to pipe to a .mp4 file, I thought something like this would work:
gst-launch-1.0 v4l2src device=/dev/video5 ! video/x-raw ! videoconvert ! v4l2h264enc ! h264parse config-interval=3 ! filesink location=test.mp4
but the resulting file is not playable in VLC.
What am I missing?
Thanks in advance.
You would use a container muxer (such as qtmux here):
# For recording 100 frames:
gst-launch-1.0 v4l2src device=/dev/video5 num-buffers=100 ! video/x-raw ! videoconvert ! v4l2h264enc ! h264parse config-interval=3 ! qtmux ! filesink location=test.mp4
# If you want to stop manually with Ctrl-C, add -e so an EOS is sent and qtmux can finalize the file:
gst-launch-1.0 -e v4l2src device=/dev/video5 ! video/x-raw ! videoconvert ! v4l2h264enc ! h264parse config-interval=3 ! qtmux ! filesink location=test.mp4
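If in doubt about the result, gst-discoverer-1.0 (shipped with the gst-plugins-base tools) can sanity-check the file; it should report an MP4/Quicktime container with an H.264 video stream:
# Inspect the container and streams of the recorded file
gst-discoverer-1.0 test.mp4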

combine two gstreamer pipelines

I have two GStreamer pipelines. One displays a scaled live video captured from a camera on the screen, and the other takes the video in its original format and saves it to a file on disk after encoding it in the H264 format. The two pipelines are as follows:
# Capture and display scaled camera feed
gst-launch-1.0 -v -e --gst-debug-level=3 autovideosrc ! videoscale ! \
video/x-raw,width=480,height=270 ! xvimagesink sync=false
# Save the camera feed in its original format to disk
gst-launch-1.0 -v -e autovideosrc ! omxh264enc ! \
'video/x-h264,stream-format=(string)byte-stream' ! h264parse ! qtmux ! \
filesink location=test.mp4
These two pipelines work by themselves, and I was wondering how I could combine them into one, i.e. show the scaled video on the screen AND record the video in its original format to a file?
Looks like I needed the tee element. Not sure if I am doing this right, but it seems to work:
gst-launch-1.0 -v -e autovideosrc ! tee name=t ! queue ! omxh264enc ! \
'video/x-h264,stream-format=(string)byte-stream' ! h264parse ! qtmux ! \
filesink location=test.mp4 \
t. ! queue ! videoscale ! video/x-raw,width=480,height=270 ! xvimagesink sync=false
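Note that the queue after each tee branch is what makes this work: tee does not decouple its branches by itself, so without a queue per branch one blocking sink can stall the other branch and deadlock the whole pipeline.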

RTMPSrc to v4l2sink

I would like to receive an RTMP stream and create a pipeline with v4l2sink as its output:
gst-launch rtmpsrc location="rtmp://localhost/live/test" ! "video/x-raw-yuv,width=640,height=480,framerate=30/1,format=(fourcc)YUY2" ! videorate ! v4l2sink device=/dev/video1
But I get only a green screen: https://www.dropbox.com/s/yq9oqi9m62c5afo/screencast1422465570.webm?dl=0
Your pipeline is telling GStreamer to treat encoded, muxed RTMP data as YUV video buffers.
Instead you need to parse, demux, and decode the video part of the RTMP data. I don't have a sample stream to test on, but you may be able to just use decodebin (which in GStreamer 0.10 was called decodebin2 for whatever reason). You'll also want to reorder the videorate to be before the framerate caps, so it knows what to convert to.
Wild stab in the dark:
gst-launch rtmpsrc location="rtmp://localhost/live/test" ! decodebin2 ! videoscale ! ffmpegcolorspace ! videorate ! "video/x-raw-yuv,width=640,height=480,framerate=30/1,format=(fourcc)YUY2" ! v4l2sink device=/dev/video1
Now it works:
gst-launch rtmpsrc location="rtmp://localhost/live/test" ! decodebin2 ! videoscale ! ffmpegcolorspace ! videorate ! "video/x-raw-yuv,width=1920,height=1080,framerate=30/1,format=(fourcc)YUY2" ! v4l2sink device=/dev/video1
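For completeness: /dev/video1 has to be a writable virtual capture device for v4l2sink to open it. With the v4l2loopback kernel module that would look something like this (the device number is an assumption chosen to match the pipeline above):
# Create /dev/video1 as a loopback video device (requires v4l2loopback to be installed)
sudo modprobe v4l2loopback video_nr=1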

Use gstreamer to stream video and audio of Logitech C920

I'm quite a newbie at using GStreamer. I want to stream video and audio from my C920 webcam to another PC, but I keep getting it wrong when combining things.
I can now stream h264 video from my C920 to another PC using:
gst-launch-1.0 v4l2src device=/dev/video1 ! video/x-h264,width=1280,height=720,framerate=30/1 ! h264parse ! rtph264pay pt=127 config-interval=4 ! udpsink host=172.19.3.103
And view it with:
gst-launch-1.0 udpsrc port=1234 ! application/x-rtp, payload=127 ! rtph264depay ! avdec_h264 ! xvimagesink sync=false
I can also get the audio from the C920 and record it to a file together with a test-image:
gst-launch videotestsrc ! videorate ! video/x-raw-yuv,framerate=5/1 ! queue ! theoraenc ! queue ! mux. pulsesrc device="alsa_input.usb-046d_HD_Pro_Webcam_C920_F1894590-02-C920.analog-stereo" ! audio/x-raw-int,rate=48000,channels=2,depth=16 ! queue ! audioconvert ! queue ! vorbisenc ! queue ! mux. oggmux name=mux ! filesink location=stream.ogv
But I'm trying to get something like this (below) to work. It is not working; presumably the combination I made is simply bad!
gst-launch v4l2src device=/dev/video1 ! video/x-h264,width=1280,height=720,framerate=30/1 ! queue ! mux. pulsesrc device="alsa_input.usb-046d_HD_Pro_Webcam_C920_F1894590-02-C920.analog-stereo" ! audio/x-raw-int,rate=48000,channels=2,depth=16 ! queue ! audioconvert ! queue ! x264enc ! queue ! udpsink host=127.0.0.1 port=1234
You should encode your video before linking it against the mux. Also, I do not see you declaring the type of muxer you are using, and you never actually put the audio into the mux.
I am not sure it is even possible to send audio AND video over the same RTP stream in this manner in GStreamer. I know that the RTSP server implementation in GStreamer allows audio and video together, but even there I am not sure whether it is still two streams just being abstracted away from the implementation.
You may want to just use two separate streams and pass them through a gstrtpbin element.
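As a rough illustration of the two-streams route, an untested sketch in 1.0 syntax (the Opus codec choice and port 1235 are my assumptions; keep the working H.264 video pipeline from above running alongside):
# Sender: audio as its own RTP stream next to the existing video stream
gst-launch-1.0 pulsesrc device="alsa_input.usb-046d_HD_Pro_Webcam_C920_F1894590-02-C920.analog-stereo" ! audioconvert ! audioresample ! opusenc ! rtpopuspay pt=96 ! udpsink host=172.19.3.103 port=1235
# Receiver: depayload and decode the audio
gst-launch-1.0 udpsrc port=1235 caps="application/x-rtp,media=audio,encoding-name=OPUS,clock-rate=48000,payload=96" ! rtpopusdepay ! opusdec ! audioconvert ! autoaudiosink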

Can't mix videotestsrc and uridecodebin with GStreamer/videomixer

How do you mix a live source and a non-live source with the GStreamer videomixer plug-in?
gst-launch shows nothing when mixing uridecodebin (some MPEG video) and videotestsrc:
gst-launch \
videomixer name=mix sink_0::zorder=0 sink_1::zorder=1 ! ffmpegcolorspace ! autovideosink \
uridecodebin uri=file:///test.mpg ! timeoverlay ! videoscale ! video/x-raw-yuv,width=704,height=576 ! queue ! mix.sink_0 \
videotestsrc ! video/x-raw-yuv, width=176,height=144 ! queue ! mix.sink_1
But it works if I change both of the sources to the MPEG video:
gst-launch \
videomixer name=mix sink_0::zorder=0 sink_1::zorder=1 ! ffmpegcolorspace ! autovideosink \
uridecodebin uri=file:///test.mpg ! timeoverlay ! videoscale ! video/x-raw-yuv,width=704,height=576 ! queue ! mix.sink_0 \
uridecodebin uri=file:///test.mpg ! timeoverlay ! videoscale ! video/x-raw-yuv,width=176,height=144 ! queue ! mix.sink_1
It seems that I made a silly mistake. Here is the answer to my own question:
With the above command and the tested video clip, the two sources output different video formats.
After forcing the video format of videotestsrc to I420, it works fine.
Here is a command that mixes:
gst-launch -v \
videomixer name=mix sink_0::zorder=0 sink_1::zorder=1 ! ffmpegcolorspace ! autovideosink \
uridecodebin uri=file:///media/sf_share/test.mpg ! timeoverlay ! videoscale ! video/x-raw-yuv,width=704,height=576 ! videorate force-fps=-1 ! queue ! mix.sink_0 \
videotestsrc ! video/x-raw-yuv,width=352,height=288,format=(fourcc)I420 ! timeoverlay ! queue ! mix.sink_1
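For anyone on GStreamer 1.0, a rough, untested equivalent using compositor (which negotiates a common format itself, so the explicit fourcc should no longer be necessary):
gst-launch-1.0 -v \
compositor name=mix sink_0::zorder=0 sink_1::zorder=1 ! videoconvert ! autovideosink \
uridecodebin uri=file:///media/sf_share/test.mpg ! timeoverlay ! videoscale ! video/x-raw,width=704,height=576 ! videorate ! queue ! mix.sink_0 \
videotestsrc ! video/x-raw,width=352,height=288 ! timeoverlay ! queue ! mix.sink_1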