Combine two GStreamer pipelines

I have 2 GStreamer pipelines. One displays a scaled live video feed captured from a camera on the screen, and the other takes the video in its original format, encodes it as H.264, and saves it to a file on disk. The two pipelines are as follows:
# Capture and display scaled camera feed
gst-launch-1.0 -v -e --gst-debug-level=3 autovideosrc ! videoscale ! video/x-raw,width=480,height=270 ! xvimagesink sync=false
# Save the camera feed in its original format to disk
gst-launch-1.0 -v -e autovideosrc ! omxh264enc ! 'video/x-h264,stream-format=(string)byte-stream' ! h264parse ! qtmux ! filesink location=test.mp4
These two pipelines work on their own, and I was wondering how I could combine them into one, i.e. show the scaled video on the screen AND record the video in its original format to a file.

Looks like I needed the tee element. Not sure if I am doing this right, but it seems to work:
gst-launch-1.0 -v -e autovideosrc ! tee name=t ! queue ! omxh264enc ! 'video/x-h264,stream-format=(string)byte-stream' ! h264parse ! qtmux ! filesink location=test.mp4 t. ! queue ! videoscale ! video/x-raw,width=480,height=270 ! xvimagesink sync=false
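That is indeed the standard pattern: tee splits the stream, and each branch needs its own queue so a slow branch (usually the encoder) cannot stall the other. For reference, a minimal sketch of the same structure that runs without a camera or OMX hardware (videotestsrc and x264enc are stand-ins for autovideosrc and omxh264enc):
gst-launch-1.0 -e videotestsrc ! tee name=t t. ! queue ! videoscale ! video/x-raw,width=480,height=270 ! autovideosink sync=false t. ! queue ! x264enc ! h264parse ! qtmux ! filesink location=test.mp4
The -e flag matters for the recording branch: it forwards EOS on Ctrl-C so qtmux can finalize the MP4 headers and the file stays playable.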

Related

GStreamer pipeline saves my camera stream to a file, but I need a pipeline to stream it live to my monitor

This GStreamer pipeline works well for saving my camera's video stream to a file on my Raspberry Pi:
gst-launch-1.0 v4l2src device=/dev/video0 ! 'video/x-raw,framerate=30/1,format=UYVY' ! v4l2h264enc ! 'video/x-h264,level=(string)4' ! filesink location=test_video6.h264
But what is the correct pipeline to display the live video stream from my camera on my monitor in real time, instead of just saving it to a file to view later with VLC?
For example, I have tried adding
! videoconvert ! autovideosink
to the above pipeline, but it does not work.
Try this:
gst-launch-1.0 v4l2src device=/dev/video0 ! 'video/x-raw,framerate=30/1,format=UYVY' ! v4l2h264enc ! 'video/x-h264,level=(string)4' ! decodebin ! videoconvert ! autovideosink
If this doesn't work, you can fall back to the general example of a video pipeline and use:
gst-launch-1.0 v4l2src ! decodebin ! videoconvert ! autovideosink
From there you can add the settings you want.
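For example (assuming the same camera as above), you can pin the device and raw caps back onto that generic pipeline; since display needs only raw video, the encode/decode round trip can be skipped entirely:
gst-launch-1.0 v4l2src device=/dev/video0 ! 'video/x-raw,framerate=30/1,format=UYVY' ! videoconvert ! autovideosink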
EDIT: Another implementation is to tee the stream, writing one branch to the file and sending the other through a queue for playback. In this case you do:
gst-launch-1.0 v4l2src device=/dev/video0 ! 'video/x-raw,framerate=30/1,format=UYVY' ! v4l2h264enc ! 'video/x-h264,level=(string)4' ! tee name=source ! queue ! filesink location=test_video6.h264 source. ! queue ! decodebin ! videoconvert ! autovideosink
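One hedged variation: the file this writes is a raw H.264 byte stream, which some players handle awkwardly. If VLC struggles with it, muxing into a container (Matroska here, purely for illustration) makes the file seekable; note the -e so the muxer can finalize on Ctrl-C:
gst-launch-1.0 -e v4l2src device=/dev/video0 ! 'video/x-raw,framerate=30/1,format=UYVY' ! v4l2h264enc ! 'video/x-h264,level=(string)4' ! h264parse ! matroskamux ! filesink location=test_video6.mkv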

Using v4l2loopback and GStreamer with MJPEG cameras

I have one 4k camera which has MJPEG and YUY2 formats. Currently, I can run
$ gst-launch-1.0 v4l2src device=/dev/video1 ! "video/x-raw,format=YUY2,width=640,height=480,framerate=30/1" ! tee name=t ! queue ! v4l2sink device=/dev/video20 t. ! queue ! v4l2sink device=/dev/video21
Setting pipeline to PAUSED ...
Pipeline is live and does not need PREROLL ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock
This streams the video1 image to two different devices.
Q: How do I pass the MJPEG image from video1 to both video20 and video21, which are in YUY2 format?
In the MJPEG case you need to add image/jpeg caps to v4l2src; after v4l2src you then need to convert to raw video.
GStreamer has jpegdec and avdec_mjpeg plugins. In my current version jpegdec does not support YUY2 output, so I would use avdec_mjpeg. Alternatively you can use jpegdec with videoconvert (i.e. ... ! jpegdec ! videoconvert ! ...).
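Concretely, that jpegdec alternative would look like this (a sketch using the same caps as the avdec_mjpeg line below; videoconvert produces the YUY2 the loopback devices expect):
gst-launch-1.0 v4l2src device=/dev/video1 ! "image/jpeg,width=3840,height=2160,framerate=30/1" ! jpegdec ! videoconvert ! "video/x-raw,format=YUY2" ! tee name=t ! queue ! v4l2sink device=/dev/video20 t. ! queue ! v4l2sink device=/dev/video21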
The following line should do it:
gst-launch-1.0 v4l2src device=/dev/video1 ! "image/jpeg,width=3840,height=2160,framerate=30/1" ! avdec_mjpeg ! "video/x-raw,format=YUY2,width=3840,height=2160,framerate=30/1" ! tee name=t ! queue ! v4l2sink device=/dev/video20 t. ! queue ! v4l2sink device=/dev/video21
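If you are unsure which MJPEG resolutions and framerates the camera actually advertises at 4K, you can list them first with v4l-utils:
v4l2-ctl -d /dev/video1 --list-formats-ext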

GStreamer images to video

I have 4 PNGs with different resolutions (1920x1080, 1280x720, 1920x1200), and I am trying to convert them into a video slideshow.
This pipeline works:
gst-launch-1.0.exe -e multifilesrc location="multi_img_%d.png" index=0 caps="image/png,framerate=(fraction)1/2,width=1920,height=1080" ! pngdec ! videoconvert ! videoscale ! video/x-raw,width=1920,height=1080 ! autovideosink
but when I try to force the framerate, it only reads the first image.
I tried :
gst-launch-1.0.exe -e multifilesrc location="multi_img_%d.png" index=0 caps="image/png,framerate=(fraction)1/2,width=1920,height=1080" ! pngdec ! videoconvert ! videoscale ! video/x-raw,width=1920,height=1080 ! videorate ! video/x-raw,width=1920,height=1080,framerate=25/1 ! autovideosink
and
gst-launch-1.0.exe -e multifilesrc location="multi_img_%d.png" index=0 caps="image/png,framerate=(fraction)1/2,width=854,height=480" ! pngdec ! videoconvert ! videoscale ! videorate ! video/x-raw,width=1920,height=1080,framerate=25/1 ! autovideosink
I don't understand why adding a framerate would cause my pipeline to ignore some pictures.
(I am on Windows 10 with the brand-new GStreamer 1.14.0.)
EDIT: I forgot to mention that when I manually resize my pictures so they all have the same resolution, all of the above pipelines work!
I suspect it is a timing issue. You are running a real-time pipeline, but most likely the PNG decoding is not fast enough to deliver frames at 25 fps, and the videosink drops them as they arrive too late. Maybe adding max-lateness=-1 to the videosink prevents the dropping of frames in your case.
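A sketch of that suggestion, with one caveat: autovideosink is a wrapper bin and does not expose max-lateness itself, so the property has to go on a concrete sink. d3dvideosink is assumed here because the question is on Windows (max-lateness is a GstBaseSink property, so any video sink should accept it):
gst-launch-1.0.exe -e multifilesrc location="multi_img_%d.png" index=0 caps="image/png,framerate=(fraction)1/2,width=1920,height=1080" ! pngdec ! videoconvert ! videoscale ! video/x-raw,width=1920,height=1080 ! videorate ! video/x-raw,width=1920,height=1080,framerate=25/1 ! d3dvideosink max-lateness=-1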

MJPG picture in picture

I have a problem with picture-in-picture using GStreamer.
I'm using this command to play the stream:
gst-launch -v souphttpsrc location='http://mjpeg.sanford.io/count.mjpeg' ! multipartdemux ! jpegdec ! videomixer name=mix ! autovideosink souphttpsrc location='http://mjpeg.sanford.io/count.mjpeg' ! multipartdemux ! jpegdec ! mix.
But I get the following error:
http://pastebin.com/7Xry2Q8x
Does anybody have an idea?
The videomixer wants some sort of framerate information to be delivered to it from each of the streams, and the MJPEG format has none. Here is a sample that works but assumes a framerate of 30 fps.
I also added queue elements to each stream before they connect to the mixer.
gst-launch-1.0 -v souphttpsrc location='http://mjpeg.sanford.io/count.mjpeg' ! multipartdemux ! image/jpeg,framerate=30/1 ! jpegdec ! queue ! videomixer name=mix ! autovideosink sync=false souphttpsrc location='http://mjpeg.sanford.io/count.mjpeg' ! multipartdemux ! image/jpeg,framerate=30/1 ! jpegdec ! queue ! mix.
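For actual picture-in-picture (rather than both streams drawn at the same position), videomixer's sink pads expose xpos/ypos properties that can be set from gst-launch. A hedged sketch that scales the second stream down and offsets it; the pad-property syntax (sink_1::xpos=...) needs a reasonably recent GStreamer 1.x:
gst-launch-1.0 -v videomixer name=mix sink_1::xpos=20 sink_1::ypos=20 ! autovideosink sync=false souphttpsrc location='http://mjpeg.sanford.io/count.mjpeg' ! multipartdemux ! image/jpeg,framerate=30/1 ! jpegdec ! queue ! mix.sink_0 souphttpsrc location='http://mjpeg.sanford.io/count.mjpeg' ! multipartdemux ! image/jpeg,framerate=30/1 ! jpegdec ! videoscale ! video/x-raw,width=160,height=120 ! queue ! mix.sink_1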
This kind of pipeline can be tricky to build. What kind of MJPEGs are you trying to mix?

RTMPSrc to v4l2sink

I would like to receive an RTMP stream and create a pipeline with v4l2sink as the output:
gst-launch rtmpsrc location="rtmp://localhost/live/test" ! "video/x-raw-yuv,width=640,height=480,framerate=30/1,format=(fourcc)YUY2" ! videorate ! v4l2sink device=/dev/video1
But I get only a green screen: https://www.dropbox.com/s/yq9oqi9m62c5afo/screencast1422465570.webm?dl=0
Your pipeline is telling GStreamer to treat encoded, muxed RTMP data as YUV video buffers.
Instead you need to demux, parse, and decode the video part of the RTMP data. I don't have a sample stream to test on, but you may be able to just use decodebin (which in GStreamer 0.10 was called decodebin2, for whatever reason). You'll also want to move videorate to before the framerate caps, so it knows what to convert to.
Wild stab in the dark:
gst-launch rtmpsrc location="rtmp://localhost/live/test" ! decodebin2 ! videoscale ! ffmpegcolorspace ! videorate ! "video/x-raw-yuv,width=640,height=480,framerate=30/1,format=(fourcc)YUY2" ! v4l2sink device=/dev/video1
Now it works:
gst-launch rtmpsrc location="rtmp://localhost/live/test" ! decodebin2 ! videoscale ! ffmpegcolorspace ! videorate ! "video/x-raw-yuv,width=1920,height=1080,framerate=30/1,format=(fourcc)YUY2" ! v4l2sink device=/dev/video1
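Since GStreamer 0.10 is long end-of-life, an untested 1.0 translation of the same pipeline may be useful (decodebin2 became decodebin, ffmpegcolorspace became videoconvert, and the raw caps are now video/x-raw with format=YUY2):
gst-launch-1.0 rtmpsrc location="rtmp://localhost/live/test" ! decodebin ! videoscale ! videoconvert ! videorate ! "video/x-raw,width=1920,height=1080,framerate=30/1,format=YUY2" ! v4l2sink device=/dev/video1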