GStreamer images to video

I have 4 PNGs with different resolutions (1920x1080, 1280x720, 1920x1200) and I am trying to convert them into a video slideshow.
This pipeline works:
gst-launch-1.0.exe -e multifilesrc location="multi_img_%d.png" index=0 caps="image/png,framerate=(fraction)1/2,width=1920,height=1080" ! pngdec ! videoconvert ! videoscale ! video/x-raw,width=1920,height=1080 ! autovideosink
but when I try to force the framerate, it only reads the first image.
I tried:
gst-launch-1.0.exe -e multifilesrc location="multi_img_%d.png" index=0 caps="image/png,framerate=(fraction)1/2,width=1920,height=1080" ! pngdec ! videoconvert ! videoscale ! video/x-raw,width=1920,height=1080 ! videorate ! video/x-raw,width=1920,height=1080,framerate=25/1 ! autovideosink
and
gst-launch-1.0.exe -e multifilesrc location="multi_img_%d.png" index=0 caps="image/png,framerate=(fraction)1/2,width=854,height=480" ! pngdec ! videoconvert ! videoscale ! videorate ! video/x-raw,width=1920,height=1080,framerate=25/1 ! autovideosink
I don't understand why forcing a framerate would cause my pipeline to ignore some of the pictures.
(I am under Windows 10 with the brand new GStreamer 1.14.0)
EDIT: I forgot to mention that when I manually resize my pictures so they all have the same resolution, all of the above pipelines work!

I suspect it is a timing issue. You are running a real-time pipeline, but most likely the PNG decoding is not fast enough to deliver frames at 25/1 fps, and the videosink drops them as they arrive too late. Maybe adding max-lateness=-1 to the videosink prevents the dropping of frames in your case.
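For example, something along these lines (an untested sketch; since autovideosink is a wrapper bin that may not expose base-sink properties, a concrete Windows sink such as d3dvideosink is assumed here):
gst-launch-1.0.exe -e multifilesrc location="multi_img_%d.png" index=0 caps="image/png,framerate=(fraction)1/2,width=1920,height=1080" ! pngdec ! videoconvert ! videoscale ! video/x-raw,width=1920,height=1080 ! videorate ! video/x-raw,width=1920,height=1080,framerate=25/1 ! d3dvideosink max-lateness=-1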

Related

How to form a gstreamer pipeline to encode mp4 video from tiff files?

I'm new to GStreamer and am stuck trying to form a GStreamer pipeline to encode MP4 video from TIFF files on the NVIDIA Jetson platform. Here is the pipeline I've come up with:
gst-launch-1.0 multifilesrc location=%03d.tiff index=0 start-index=0 stop-index=899 blocksize=720000 num-buffers=900 do-timestamp=true typefind=true ! 'video/x-raw,format=(string)RGB,width=(int)1280,height=(int)720,framerate=(fraction)30/1' ! videoconvert ! 'video/x-raw,format=(string)I420,framerate=(fraction)30/1' ! omxh264enc ! 'video/x-h264,stream-format=(string)byte-stream,framerate=(fraction)30/1' ! h264parse ! filesink sync=true location=test.mp4 -e
With this, the mp4 file gets created successfully and plays, but the actual video content is all garbled. Any idea what I am doing wrong? Thank you
You are not doing any demuxing/decoding of your TIFF data, so you are effectively throwing random bytes at the encoder.
Also, you are setting a lot of caps without the proper elements in between that could actually convert between those formats.
You should use decodebin to let GStreamer handle most of the things automatically. E.g. something like that:
multifilesrc ! decodebin ! videoconvert ! omxh264enc ! h264parse ! filesink
Depending on your encoder, you may want to force the color format to 4:2:0 subsampling so that it does not accidentally encode 4:4:4 (which is not very common and not supported by many encoders):
multifilesrc ! decodebin ! videoconvert ! video/x-raw, format=I420 ! omxh264enc ! h264parse ! filesink
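Expanded into a complete command, that might look like the following (untested sketch: the image/tiff caps on multifilesrc and the mp4mux element are assumptions on my part, the latter added because writing a raw h264 byte-stream into a .mp4 file does not produce a real MP4 container):
gst-launch-1.0 -e multifilesrc location=%03d.tiff index=0 start-index=0 stop-index=899 caps="image/tiff,framerate=30/1" ! decodebin ! videoconvert ! video/x-raw,format=I420 ! omxh264enc ! h264parse ! mp4mux ! filesink location=test.mp4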

How do I save a video with an alpha channel in GStreamer?

I have a collection of RGBA png files, and have verified the presence of an alpha layer on each file:
gst-launch-1.0 multifilesrc location="pics/%d.png" ! decodebin ! videorate ! videoconvert ! video/x-raw,format=BGRA,framerate=60/1 ! videomixer background=checker ! videoconvert ! ximagesink
I want to take these files and make them into a video file (in any format that GStreamer will readily handle with a simple decodebin). What would be a good set of encoders, containers, and elements to use for this?
I've tried avimux, but no alpha data was saved. I also tried avenc_huffyuv, and that would decode fine as raw data using avdec_huffyuv, but decodebin could not detect it.
Nothing like a good night's sleep to solve an issue.
Apparently the huffyuv encoder and the AVI muxer work nicely together to preserve transparency:
gst-launch-1.0 multifilesrc location="pics/%d.png" ! decodebin ! videorate ! videoconvert ! video/x-raw,format=BGRA,framerate=60/1 ! avenc_huffyuv ! avimux ! filesink location=/tmp/test.avi
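To check that the alpha channel actually survived, the resulting file can be composited over a checker background, mirroring the verification pipeline from the question (a minimal sketch, assuming decodebin now detects the Huffyuv stream inside the AVI container):
gst-launch-1.0 filesrc location=/tmp/test.avi ! decodebin ! videoconvert ! video/x-raw,format=BGRA ! videomixer background=checker ! videoconvert ! ximagesink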

MJPG picture in picture

I have a problem with picture-in-picture using GStreamer.
I'm using this command to play the stream:
gst-launch -v souphttpsrc location='http://mjpeg.sanford.io/count.mjpeg' ! multipartdemux ! jpegdec ! videomixer name=mix ! autovideosink souphttpsrc location='http://mjpeg.sanford.io/count.mjpeg' ! multipartdemux ! jpegdec ! mix.
But I get the following error:
http://pastebin.com/7Xry2Q8x
Does anybody have an idea?
The videomixer wants some sort of framerate information to be delivered to it from each of the streams. The mjpeg format has none. Here is a sample that works but assumes a framerate of 30fps.
I also added queue elements to each stream before they connect to the mixer.
gst-launch-1.0 -v souphttpsrc location='http://mjpeg.sanford.io/count.mjpeg' ! multipartdemux ! image/jpeg,framerate=30/1 ! jpegdec ! queue ! videomixer name=mix ! autovideosink sync=false souphttpsrc location='http://mjpeg.sanford.io/count.mjpeg' ! multipartdemux ! image/jpeg,framerate=30/1 ! jpegdec ! queue ! mix.
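For an actual picture-in-picture layout, the inset stream should also be scaled down and offset on its mixer pad; here is a sketch using videomixer's per-pad xpos/ypos properties (the 160x120 inset size and the 20-pixel offsets are arbitrary values of mine):
gst-launch-1.0 -v videomixer name=mix sink_1::xpos=20 sink_1::ypos=20 ! autovideosink sync=false souphttpsrc location='http://mjpeg.sanford.io/count.mjpeg' ! multipartdemux ! image/jpeg,framerate=30/1 ! jpegdec ! queue ! mix.sink_0 souphttpsrc location='http://mjpeg.sanford.io/count.mjpeg' ! multipartdemux ! image/jpeg,framerate=30/1 ! jpegdec ! videoscale ! video/x-raw,width=160,height=120 ! queue ! mix.sink_1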
This kind of pipeline can be tricky to build. What kind of MJPEGs are you trying to mix?

RTMPSrc to v4l2sink

I would like to receive an RTMP stream and create a pipeline with v4l2sink as output:
gst-launch rtmpsrc location="rtmp://localhost/live/test" ! "video/x-raw-yuv,width=640,height=480,framerate=30/1,format=(fourcc)YUY2" ! videorate ! v4l2sink device=/dev/video1
But I get only a green screen: https://www.dropbox.com/s/yq9oqi9m62c5afo/screencast1422465570.webm?dl=0
Your pipeline is telling GStreamer to treat encoded, muxed RTMP data as YUV video buffers.
Instead you need to parse, demux, and decode the video part of the RTMP data. I don't have a sample stream to test on, but you may be able to just use decodebin (which in GStreamer 0.10 was called decodebin2 for whatever reason). You'll also want to reorder the videorate to be before the framerate caps, so it knows what to convert to.
Wild stab in the dark:
gst-launch rtmpsrc location="rtmp://localhost/live/test" ! decodebin2 ! videoscale ! ffmpegcolorspace ! videorate ! "video/x-raw-yuv,width=640,height=480,framerate=30/1,format=(fourcc)YUY2" ! v4l2sink device=/dev/video1
Now it works:
gst-launch rtmpsrc location="rtmp://localhost/live/test" ! decodebin2 ! videoscale ! ffmpegcolorspace ! videorate ! "video/x-raw-yuv,width=1920,height=1080,framerate=30/1,format=(fourcc)YUY2" ! v4l2sink device=/dev/video1
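For reference, a GStreamer 1.x equivalent of that pipeline would look roughly like this (untested sketch; in 1.0, decodebin2 became decodebin, ffmpegcolorspace became videoconvert, and the raw caps are written as video/x-raw with a format field):
gst-launch-1.0 rtmpsrc location="rtmp://localhost/live/test" ! decodebin ! videoscale ! videoconvert ! videorate ! "video/x-raw,width=1920,height=1080,framerate=30/1,format=YUY2" ! v4l2sink device=/dev/video1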

Problems with video playback

I have an h264 video track and an AAC audio track inside an mp4 container and I want to play it, but when I run my pipeline only the first frame is shown and there is no sound.
Here's my pipeline:
gst-launch filesrc location=/home/dmitry/Downloads/big_buck_bunny.mp4 ! qtdemux name=demux \
demux.audio_00 ! queue ! faad ! audioconvert ! audioresample ! autoaudiosink \
demux.video_00 ! queue ! ffdec_h264 ! ffmpegcolorspace ! autovideosink
Your queues might not be large enough for this scenario. You should try using playbin2 or decodebin for decoding; they will automatically adjust the queue sizes for playback.
If you have to stick to this pipeline, try setting larger values for the max-size-* properties on the queues.
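For example (a sketch of the same pipeline; setting a max-size-* property to 0 disables that particular limit, and the 10 MiB byte limit is an arbitrary value chosen for illustration):
gst-launch filesrc location=/home/dmitry/Downloads/big_buck_bunny.mp4 ! qtdemux name=demux \
demux.audio_00 ! queue max-size-buffers=0 max-size-time=0 max-size-bytes=10485760 ! faad ! audioconvert ! audioresample ! autoaudiosink \
demux.video_00 ! queue max-size-buffers=0 max-size-time=0 max-size-bytes=10485760 ! ffdec_h264 ! ffmpegcolorspace ! autovideosink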
On a side note: please move to the 1.x series; 0.10 has been obsolete for two years now.