I'm trying to get used to the GStreamer compositor.
I have this basic boilerplate example working (compositing two videotestsrc elements next to each other):
gst-launch-1.0 compositor name=comp \
sink_0::alpha=1 sink_0::xpos=0 sink_0::ypos=0 \
sink_1::alpha=0.5 sink_1::xpos=320 sink_1::ypos=0 ! \
queue2 ! video/x-raw, width=800, height=600 ! videoconvert ! xvimagesink \
videotestsrc pattern=1 ! "video/x-raw" ! comp.sink_0 \
videotestsrc pattern=8 ! "video/x-raw" ! comp.sink_1
Then I tried changing one of the videotestsrc elements to an MP4 file.
I know that this command line works:
gst-launch-1.0 filesrc location=tst.mp4 ! decodebin ! videoconvert ! autovideosink
So I tried combining these two working pipelines:
gst-launch-1.0 compositor name=comp \
sink_0::alpha=1 sink_0::xpos=0 sink_0::ypos=0 \
sink_1::alpha=0.5 sink_1::xpos=320 sink_1::ypos=0 ! \
queue2 ! decodebin ! video/x-raw, width=800, height=600 ! videoconvert ! xvimagesink \
videotestsrc pattern=1 ! "video/x-raw" ! comp.sink_0 \
filesrc location=tst.mp4 ! "video/x-raw" ! comp.sink_1
When I run this, I get an error saying that the filter caps do not completely specify the output format ("output caps are unfixed").
I'm positive this must be a simple syntax error. Does anyone know how to fix my pipeline?
No, you need to use most of the elements that made the standalone command line work, e.g.:
gst-launch-1.0 compositor name=comp \
sink_0::alpha=1 sink_0::xpos=0 sink_0::ypos=0 \
sink_1::alpha=0.5 sink_1::xpos=320 sink_1::ypos=0 ! \
queue2 ! video/x-raw, width=800, height=600 ! videoconvert ! xvimagesink \
videotestsrc pattern=1 ! "video/x-raw" ! comp.sink_0 \
filesrc location=tst.mp4 ! decodebin ! videoconvert ! comp.sink_1
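The point is that compositor only accepts raw video, so the file branch has to be fully decoded before it reaches comp.sink_1; that is why decodebin and videoconvert move into that branch. If the decoded frames are larger than the layout expects, you can also scale that branch down; a sketch, where the 320x240 target size is just an example:
# 320x240 below is only an example target size, adjust to your layout
gst-launch-1.0 compositor name=comp \
sink_0::alpha=1 sink_0::xpos=0 sink_0::ypos=0 \
sink_1::alpha=0.5 sink_1::xpos=320 sink_1::ypos=0 ! \
queue2 ! video/x-raw, width=800, height=600 ! videoconvert ! xvimagesink \
videotestsrc pattern=1 ! "video/x-raw" ! comp.sink_0 \
filesrc location=tst.mp4 ! decodebin ! videoconvert ! videoscale ! video/x-raw, width=320, height=240 ! comp.sink_1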
I use the following pipeline to play my video on screen
gst-launch-1.0 filesrc location=01_189_libxvid_1920x1080_6M.mp4 ! \
qtdemux ! mpeg4videoparse ! omxmpeg4videodec ! videobalance brightness=100 ! \
video/x-raw,format=BGRA ! waylandsink --gst-debug=*:2
but now, instead of playing it directly, I want to encode it and save it to a file. Please suggest a pipeline.
It should be something like this (an example with the H.264 codec):
gst-launch-1.0 -e --gst-debug=3 \
filesrc location="/path/input/sample_in.mp4" \
! qtdemux \
! mpeg4videoparse \
! omxmpeg4videodec \
! queue \
! x264enc \
! qtmux \
! filesink location="/path/output/sample_out.mp4"
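One note on the flags: -e tells gst-launch to send EOS into the pipeline when you interrupt it, which lets qtmux finalize the MP4 headers; without it, a recording stopped with Ctrl-C is usually unplayable. If you also need to control the output quality, x264enc has a bitrate property (in kbit/s); a minimal variant, where the 2048 value is only an example:
# bitrate=2048 (kbit/s) is only an example value
gst-launch-1.0 -e \
filesrc location="/path/input/sample_in.mp4" \
! qtdemux \
! mpeg4videoparse \
! omxmpeg4videodec \
! queue \
! x264enc bitrate=2048 \
! qtmux \
! filesink location="/path/output/sample_out.mp4"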
I used GStreamer 1.2.4 to send a stream from v4l2src to multicast and to write it to shared memory. The pipeline is:
v4l2src device=${device} do-timestamp=true blocksize=400000 typefind=true \
! "video/x-h264, width=(int)${width}, height=(int)${height}, framerate=(fraction)${framerate}, interlace-mode=(string)progressive, pixel-aspect-ratio=(fraction)1/1" \
! h264parse \
! "video/x-h264, alignment=(string)au, stream-format=(string)avc, parsed=(boolean)true" \
! tee name=video1tee \
video1tee. ! queue \
! rtph264pay config-interval=1 \
! application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264, payload=(int)96 \
! queue \
! udpsink multicast-iface=${IFACE_OUT} host=${MULTICAST_OUT_IP_ADDR} port=${VIDEO_OUT_UDP_PORT} ttl-mc=10 auto-multicast=false sync=true async=false
video1tee. ! queue leaky=downstream \
! shmsink socket-path=\"/tmp/camera_shmsink\" shm-size=20000000 wait-for-connection=false max-lateness=5000000000 sync=false async=false > ${logfile} 2>&1
Now I have updated to GStreamer 1.8.3 and this pipeline doesn't work: when I try to view the multicast stream, I don't get anything. The hardware has not changed.
I also get these warnings:
v4l2src gstv4l2src.c:862:gst_v4l2src_create:<v4l2src0> lost frames detected: count = 18446744073709551615 - ts: 0:00:09.8444373543
h264parse gsth264parse.c:1205:gst_h264_parse_handle_frame:<h264parse0> broken/invalid nal Type: 1 Slice, Size: 22638 will be dropped
If I delete
! h264parse \
! "video/x-h264, alignment=(string)au, stream-format=(string)avc, parsed=(boolean)true" \
I get a green screen.
Pipeline to view video:
udpsrc do-timestamp=true buffer-size=30000000 address=227.1.1.11 auto-multicast=true port=51012 \
caps="application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264" \
! rtph264depay ! h264parse ! avdec_h264 output-corrupt=false \
! autovideoconvert ! videocrop ! videoscale ! videobalance \
! video/x-raw, format=(string)YV12, colorimetry=(string)bt601 \
! autovideoconvert ! fpsdisplaysink
Why did it work in the old version but not in the new one?
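For what it's worth, one way to narrow this down is to test the capture branch in isolation, before the tee and the network/shm branches get involved; a sketch, assuming the same ${device} and caps variables as in the script above:
# isolate the capture/parse stage; same ${device}/${width}/${height}/${framerate} as the original script
gst-launch-1.0 -v v4l2src device=${device} do-timestamp=true \
! "video/x-h264, width=(int)${width}, height=(int)${height}, framerate=(fraction)${framerate}" \
! h264parse \
! fakesink silent=false
With -v and silent=false you can see whether v4l2src and h264parse still produce sane buffers on 1.8.3; if the lost-frames and broken-NAL warnings already appear here, the regression is in the capture/parse stage rather than in the RTP or shmsink branches.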
This command adds a text overlay to the video, but the audio is missing from the output MP4 file:
gst-launch-1.0 filesrc location=input.mp4 name=src ! decodebin ! textoverlay text="My Text" ! x264enc ! h264parse ! mp4mux ! filesink location=output.mp4
How can I fix this, so that the audio is preserved?
Thanks
This works:
gst-launch-1.0 \
filesrc location=input.mp4 name=src \
! decodebin name=demuxer \
demuxer. ! queue \
! textoverlay text="My Text" \
! x264enc ! muxer. \
demuxer. ! queue \
! audioconvert ! voaacenc ! muxer. \
mp4mux name=muxer \
! filesink location=output.mp4
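Two details worth noting: mp4mux is given name=muxer so that both the video and audio branches can link to it by name, and the queue in each branch is required because decodebin feeds two paths into the same muxer; without the queues, one branch blocks the other and the pipeline deadlocks. If voaacenc is not available on your system, avenc_aac from gst-libav should work as a drop-in replacement for the audio branch, e.g.:
demuxer. ! queue ! audioconvert ! avenc_aac ! muxer.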
I'm just trying to get an RTP sample working, but every example I've seen fails to run due to missing plugins or incorrect pads.
This one seems the most promising, but although the server and client both seem to launch properly and go to PLAYING, nothing happens:
Server:
gst-launch -v videotestsrc ! \
video/x-raw-rgb, format=\(fourcc\)RGB, width=4, height=4, frame-rate=1/1 ! rtpvrawpay ! \
udpsink host=127.0.0.1
Server output:
/GstPipeline:pipeline0/GstUDPSink:udpsink0.GstPad:sink: caps = application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)RAW, sampling=(string)RGB, depth=(string)8, width=(string)4, height=(string)4, colorimetry=(string)SMPTE240M, payload=(int)96, ssrc=(uint)3779397700, clock-base=(uint)1161131286, seqnum-base=(uint)43390
Pipeline is PREROLLED ...
Setting pipeline to PLAYING ...
Client:
gst-launch-0.10 -v udpsrc caps="application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)RAW, sampling=(string)RGB, depth=(string)8, width=(string)4, height=(string)4, colorimetry=(string)SMPTE240M, payload=(int)96, ssrc=(uint)3779397700, clock-base=(uint)1161131286, seqnum-base=(uint)43390" ! rtpvrawdepay ! xvimagesink
Client output:
Setting pipeline to PAUSED ...
Pipeline is live and does not need PREROLL ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock
/GstPipeline:pipeline0/GstRtpVRawDepay:rtpvrawdepay0.GstPad:src: caps = video/x-raw-rgb, width=(int)4, height=(int)4, format=(fourcc)0x00000000, framerate=(fraction)0/1, endianness=(int)4321, red_mask=(int)16711680, green_mask=(int)65280, blue_mask=(int)255, bpp=(int)24, depth=(int)24
/GstPipeline:pipeline0/GstRtpVRawDepay:rtpvrawdepay0.GstPad:sink: caps = application/x-rtp, media=(string)video, payload=(int)96, clock-rate=(int)90000, encoding-name=(string)RAW, sampling=(string)RGB, depth=(string)8, width=(string)4, height=(string)4, colorimetry=(string)SMPTE240M, ssrc=(uint)3779397700, clock-base=(uint)1161131286, seqnum-base=(uint)43390
These work:
Server:
gst-launch-0.10 -v \
gstrtpbin name=rtpbin1 \
videotestsrc ! x264enc ! rtph264pay ! rtpbin1.send_rtp_sink_0 \
rtpbin1.send_rtp_src_0 ! udpsink host=127.0.0.1 port=5011 \
rtpbin1.send_rtcp_src_0 ! udpsink host=127.0.0.1 port=5012 \
udpsrc port=5013 ! rtpbin1.recv_rtcp_sink_0
Client:
gst-launch-0.10 -v \
videomixer name=mix ! ffmpegcolorspace ! autovideosink sync=false async=false \
gstrtpbin name=rtpbin1 \
udpsrc port=5011 caps="application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264, sprop-parameter-sets=(string)\"Z0LAFdkBQfsBEAAAAwAXc1lAAPFi5IAA\\,aMuMsg\\=\\=\", ssrc=(uint)595281375, payload=(int)96, clock-base=(uint)3105254905, seqnum-base=(uint)59233" ! rtpbin1.recv_rtp_sink_0 \
rtpbin1. ! rtph264depay ! queue ! ffdec_h264 ! videobox border-alpha=0 top=0 left=0 ! mix. \
udpsrc port=5012 ! rtpbin1.recv_rtcp_sink_0 \
rtpbin1.send_rtcp_src_0 ! udpsink port=5013 host=127.0.0.1
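Note that these are 0.10 pipelines. On GStreamer 1.x several element names change: gstrtpbin becomes rtpbin, ffdec_h264 becomes avdec_h264, and ffmpegcolorspace becomes videoconvert. A rough 1.x translation of the server side (a sketch, untested; the receiver caps must be adapted to match):
# rough GStreamer 1.x equivalent of the 0.10 server above
gst-launch-1.0 -v \
rtpbin name=rtpbin1 \
videotestsrc ! videoconvert ! x264enc tune=zerolatency ! rtph264pay ! rtpbin1.send_rtp_sink_0 \
rtpbin1.send_rtp_src_0 ! udpsink host=127.0.0.1 port=5011 \
rtpbin1.send_rtcp_src_0 ! udpsink host=127.0.0.1 port=5012 \
udpsrc port=5013 ! rtpbin1.recv_rtcp_sink_0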