Here is the command that plays 2 MP4 videos side by side using videobox:
gst-launch-1.0 filesrc location=1.mp4 ! decodebin ! queue ! videoconvert ! videobox border-alpha=0 right=-100 ! videomixer name=mix ! videoconvert ! autovideosink \
filesrc location=2.mp4 ! decodebin ! queue ! videoconvert ! videobox border-alpha=0 left=-100 ! mix.
I have tried to extend this command to play 3 videos:
gst-launch-1.0 filesrc location=Downloads/1.mp4 ! decodebin ! queue !
videoconvert ! videobox border-alpha=0 right=-100 ! videomixer
name=mix !
videoconvert ! autovideosink filesrc location=Downloads/2.mp4 !
decodebin ! queue ! videoconvert ! videobox border-alpha=0 left=-100 !
mix !
videoconvert ! autovideosink filesrc location=Downloads/3.mp4 !
decodebin ! queue ! videoconvert ! videobox border-alpha=0 left=-200 !
mix.
I get a syntax error :(
Something like this with videomixer:
gst-launch-1.0 -e \
videomixer name=mix background=0 \
sink_1::xpos=0 sink_1::ypos=0 \
sink_2::xpos=200 sink_2::ypos=0 \
sink_3::xpos=100 sink_3::ypos=100 \
! autovideosink \
uridecodebin uri='file:///data/big_buck_bunny_trailer-360p.mp4' \
! videoscale \
! video/x-raw,width=200,height=100 \
! mix.sink_1 \
uridecodebin uri='file:///data/sintel_trailer-480p.webm' \
! videoscale \
! video/x-raw,width=200,height=100 \
! mix.sink_2 \
uridecodebin uri='file:///data/the_daily_dweebs-720p.mp4' \
! videoscale \
! video/x-raw,width=200,height=100 \
! mix.sink_3
Once you instantiate an element with a name (e.g. videomixer name=mix), you can later connect to it by that name followed by a dot (e.g. mix.). You don't need to repeat autovideosink three times after that.
gst-launch-1.0 filesrc location=Downloads/1.mp4 ! decodebin ! queue ! videoconvert ! videobox border-alpha=0 right=-100 ! videomixer name=mix ! videoconvert ! autovideosink \
filesrc location=Downloads/2.mp4 ! decodebin ! queue ! videoconvert ! videobox border-alpha=0 left=-100 ! mix. \
filesrc location=Downloads/3.mp4 ! decodebin ! queue ! videoconvert ! videobox border-alpha=0 left=-200 ! mix.
Here, we have set up three branches and merged them with the mix element.
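As a variant, the same three files can be laid out with pad positions instead of videobox borders, like the videomixer example above. A rough sketch (untested; it assumes each clip is 640 pixels wide, so adjust the xpos values to your clips):
gst-launch-1.0 videomixer name=mix \
    sink_0::xpos=0 sink_0::ypos=0 \
    sink_1::xpos=640 sink_1::ypos=0 \
    sink_2::xpos=1280 sink_2::ypos=0 \
  ! videoconvert ! autovideosink \
  filesrc location=Downloads/1.mp4 ! decodebin ! queue ! videoconvert ! mix.sink_0 \
  filesrc location=Downloads/2.mp4 ! decodebin ! queue ! videoconvert ! mix.sink_1 \
  filesrc location=Downloads/3.mp4 ! decodebin ! queue ! videoconvert ! mix.sink_2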
I need to record 4 RTSP streams into a single Kinesis Video Streams stream.
The streams must be laid out in the video like this:
 ---------- ----------
|          |          |
| STREAM 1 | STREAM 2 |
|          |          |
|----------|----------|
|          |          |
| STREAM 3 | STREAM 4 |
|          |          |
 ---------- ----------
I was able to insert a single stream and make it work perfectly, using the command below:
gst-launch-1.0 rtspsrc user-id="admin" user-pw="password" location="rtsp://admin:password#192.168.0.1:554/cam/realmonitor?channel=1&subtype=0" short-header=TRUE ! rtph265depay ! h265parse ! video/x-h265, alignment=au ! kvssink stream-name="test-stream" storage-size=512 access-key="access-key" secret-key="secret-key" aws-region="us-east-1"
However, my goal is to combine an array of streams into the same Kinesis Video Streams stream.
For this I found the videomixer example below:
gst-launch-1.0 -e rtspsrc location=rtsp_url1 ! rtph264depay ! h264parse ! decodebin ! videoconvert ! m.sink_0 \
rtspsrc location=rtsp_url2 ! rtph264depay ! h264parse ! decodebin ! videoconvert ! m.sink_1 \
rtspsrc location=rtsp_url3 ! rtph264depay ! h264parse ! decodebin ! videoconvert ! m.sink_2 \
rtspsrc location=rtsp_url4 ! rtph264depay ! h264parse ! decodebin ! videoconvert ! m.sink_3 \
videomixer name=m sink_1::xpos=1280 sink_2::ypos=720 sink_3::xpos=1280 sink_3::ypos=720 ! x264enc ! mp4mux ! filesink location=./out.mp4 sync=true
I adapted the example to just two streams and made it work inside the container, using a command like the one below:
gst-launch-1.0 -e rtspsrc user-id="admin" user-pw="password" location="rtsp://password#192.168.0.1:554/cam/realmonitor?channel=1&subtype=0" short-header=TRUE ! rtph265depay ! h265parse ! video/x-h265, alignment=au ! libde265dec ! videoconvert ! m.sink_0 \
rtspsrc user-id="admin" user-pw="password" location="rtsp://password#192.168.0.2:554/cam/realmonitor?channel=1&subtype=0" short-header=TRUE ! rtph265depay ! h265parse ! video/x-h265, alignment=au ! libde265dec ! videoconvert ! m.sink_1 \
videomixer name=m sink_0::xpos=1080 sink_1::ypos=1080 ! x265enc ! h265parse ! video/x-h265, alignment=au ! kvssink stream-name="test-stream" storage-size=512 access-key="access-key" secret-key="secret-key" aws-region="us-east-1"
And in another way:
gst-launch-1.0 -e videomixer name=mix sink_0::xpos=0 sink_0::ypos=0 sink_0::alpha=0 sink_1::xpos=0 sink_1::ypos=0 \
rtspsrc user-id="admin" user-pw="password" location="rtsp://password#192.168.0.1:554/cam/realmonitor?channel=1&subtype=0" short-header=TRUE ! rtph265depay ! h265parse ! video/x-h265, alignment=au ! libde265dec ! videoconvert ! videoscale ! video/x-raw,width=1920,height=1080 ! mix.sink_0 \
rtspsrc user-id="admin" user-pw="password" location="rtsp://password#192.168.0.2:554/cam/realmonitor?channel=1&subtype=0" short-header=TRUE ! rtph265depay ! h265parse ! video/x-h265, alignment=au ! libde265dec ! videoconvert ! videoscale ! video/x-raw,width=1920,height=1080 ! mix.sink_1 \
mix. ! queue ! videoconvert ! x265enc ! queue ! kvssink stream-name="test-stream" storage-size=512 access-key="access-key" secret-key="secret-key" aws-region="us-east-1"
The container in question is from: https://github.com/awslabs/amazon-kinesis-video-streams-producer-sdk-cpp
However, when I log into Kinesis Video Streams and try to download a clip via GetClip, in both cases I get this error:
MissingCodecPrivateDataException
Missing codec private data in fragment for track 1.
Status code: 400
The logs with GST_DEBUG=1 can be found at https://gist.github.com/vbbandeira/b15ec8af6986237a4cd7e382e4ede261
And the logs with GST_DEBUG=4 can be found at https://gist.github.com/vbbandeira/6bd4b7a014a69da5f46cd036eaf32aec
Can you guys please let me know what is going on there?
Or if possible, help me find the solution to this error.
Thanks!
For those looking for the same solution: I managed to make it work by replacing videomixer, which is deprecated, with compositor. Below is an example of the command I used, and it worked:
gst-launch-1.0 rtspsrc location="rtsp://password#192.168.0.1:554/cam/realmonitor?channel=1&subtype=0" short-header=TRUE ! decodebin ! videoconvert ! comp.sink_0 \
rtspsrc location="rtsp://password#192.168.0.2:554/cam/realmonitor?channel=1&subtype=0" short-header=TRUE ! decodebin ! videoconvert ! comp.sink_1 \
compositor name=comp sink_0::xpos=0 sink_1::xpos=1280 ! x264enc ! kvssink stream-name="test-stream" storage-size=512 access-key="access-key" secret-key="secret-key" aws-region="us-east-1"
However, I was only able to do this using h264.
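Extending that to the original 2x2 layout should just be a matter of adding two more branches and setting xpos/ypos per compositor pad. A sketch (untested; it assumes the cameras deliver 1280x720 and uses placeholder RTSP URLs):
gst-launch-1.0 rtspsrc location="rtsp://camera1/stream" short-header=TRUE ! decodebin ! videoconvert ! comp.sink_0 \
  rtspsrc location="rtsp://camera2/stream" short-header=TRUE ! decodebin ! videoconvert ! comp.sink_1 \
  rtspsrc location="rtsp://camera3/stream" short-header=TRUE ! decodebin ! videoconvert ! comp.sink_2 \
  rtspsrc location="rtsp://camera4/stream" short-header=TRUE ! decodebin ! videoconvert ! comp.sink_3 \
  compositor name=comp \
    sink_0::xpos=0 sink_0::ypos=0 \
    sink_1::xpos=1280 sink_1::ypos=0 \
    sink_2::xpos=0 sink_2::ypos=720 \
    sink_3::xpos=1280 sink_3::ypos=720 \
  ! x264enc ! kvssink stream-name="test-stream" storage-size=512 access-key="access-key" secret-key="secret-key" aws-region="us-east-1"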
I'm trying to get used to using the GStreamer compositor.
I have this basic boilerplate example working (compositing two videotestsrc elements next to each other):
gst-launch-1.0 compositor name=comp \
sink_0::alpha=1 sink_0::xpos=0 sink_0::ypos=0 \
sink_1::alpha=0.5 sink_1::xpos=320 sink_1::ypos=0 ! \
queue2 ! video/x-raw, width=800, height=600 ! videoconvert ! xvimagesink \
videotestsrc pattern=1 ! "video/x-raw" ! comp.sink_0 \
videotestsrc pattern=8 ! "video/x-raw" ! comp.sink_1
Then I tried changing one of the videotestsrc elements to an MP4 file.
I know that this command line works:
gst-launch-1.0 filesrc location=tst.mp4 ! decodebin ! videoconvert ! autovideosink
So I tried combining these two working pipelines:
gst-launch-1.0 compositor name=comp \
sink_0::alpha=1 sink_0::xpos=0 sink_0::ypos=0 \
sink_1::alpha=0.5 sink_1::xpos=320 sink_1::ypos=0 ! \
queue2 ! decodebin ! video/x-raw, width=800, height=600 ! videoconvert ! xvimagesink \
videotestsrc pattern=1 ! "video/x-raw" ! comp.sink_0 \
filesrc location=tst.mp4 ! "video/x-raw" ! comp.sink_1
When I run this I get an error saying that the filter caps do not completely specify the output format ("output caps are unfixed").
I'm positive this must be a simple syntax error. Does anyone know how to fix my pipeline?
No, you need to use most of the elements that made the standalone command line work. E.g.
gst-launch-1.0 compositor name=comp \
sink_0::alpha=1 sink_0::xpos=0 sink_0::ypos=0 \
sink_1::alpha=0.5 sink_1::xpos=320 sink_1::ypos=0 ! \
queue2 ! video/x-raw, width=800, height=600 ! videoconvert ! xvimagesink \
videotestsrc pattern=1 ! "video/x-raw" ! comp.sink_0 \
filesrc location=tst.mp4 ! decodebin ! videoconvert ! comp.sink_1
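If the combined pipeline stalls or will not preroll, it may also help to decouple the branches with queues. A sketch based on the pipeline above (untested):
gst-launch-1.0 compositor name=comp \
  sink_0::alpha=1 sink_0::xpos=0 sink_0::ypos=0 \
  sink_1::alpha=0.5 sink_1::xpos=320 sink_1::ypos=0 ! \
  queue2 ! video/x-raw, width=800, height=600 ! videoconvert ! xvimagesink \
  videotestsrc pattern=1 ! "video/x-raw" ! queue ! comp.sink_0 \
  filesrc location=tst.mp4 ! decodebin ! videoconvert ! queue ! comp.sink_1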
This command adds a text to the video, but the audio is missing in the output MP4 file:
gst-launch-1.0 filesrc location=input.mp4 name=src ! decodebin ! textoverlay text="My Text" ! x264enc ! h264parse ! mp4mux ! filesink location=output.mp4
How can I fix this, so that the audio is preserved?
Thanks
This works:
gst-launch-1.0 \
filesrc location=input.mp4 name=src \
! decodebin name=demuxer \
demuxer. ! queue \
! textoverlay text="My Text" \
! x264enc ! muxer. \
demuxer. ! queue \
! audioconvert ! voaacenc ! muxer. \
mp4mux name=muxer \
! filesink location=output.mp4
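If the source audio is already AAC, it could in principle be passed through without re-encoding by demuxing with qtdemux instead of decodebin. A sketch (untested; it assumes an MP4 with one H.264 video track and one AAC audio track):
gst-launch-1.0 \
  filesrc location=input.mp4 \
  ! qtdemux name=demuxer \
  demuxer.video_0 ! queue ! decodebin ! videoconvert ! textoverlay text="My Text" ! x264enc ! h264parse ! muxer. \
  demuxer.audio_0 ! queue ! aacparse ! muxer. \
  mp4mux name=muxer \
  ! filesink location=output.mp4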
I'm trying to build a mosaic of images that I fetch from several URIs. I have succeeded in displaying one image with videomixer and uridecodebin plus a videoscale cap.
gst-launch -e videomixer name=mixer \
sink_0::xpos=0 sink_0::ypos=0 \
! xvimagesink \
uridecodebin uri=http://www.logotheque.fr/6396-2/logo+RMC+INFO.jpg \
! ffmpegcolorspace ! imagefreeze ! videoscale method=1 \
! video/x-raw-yuv,width=100,height=100 ! queue ! mixer.sink_0.
But when I add the same image URI at another position in the videomixer, with the same videoscale cap:
gst-launch -e videomixer name=mixer \
sink_0::xpos=0 sink_0::ypos=0 \
sink_1::xpos=100 sink_1::ypos=0 \
! xvimagesink \
uridecodebin uri=http://www.logotheque.fr/6396-2/logo+RMC+INFO.jpg \
! ffmpegcolorspace ! imagefreeze ! videoscale ! \
video/x-raw-yuv,width=100,height=100 ! queue2 ! mixer.sink_0. \
uridecodebin uri=http://www.logotheque.fr/6396-2/logo+RMC+INFO.jpg \
! ffmpegcolorspace ! imagefreeze ! videoscale ! \
video/x-raw-yuv,width=100,height=100 ! queue2 ! mixer.sink_1.
I get this error:
videoscale1: not negotiated
gstbasetransform.c(2541): gst_base_transform_handle_buffer (): /GstPipeline:pipeline0/GstVideoScale:videoscale1
So I don't understand why this error appears on the second sink, because this is the same process in both cases.
Edit:
I have found a partial solution for those interested.
gst-launch -e videomixer name=mix ! ffmpegcolorspace ! xvimagesink \
uridecodebin uri=http://www.logotheque.fr/6396-2/logo+RMC+INFO.jpg ! videoscale ! video/x-raw-yuv,width=100,height=100 \
! videobox top=0 left=0 ! imagefreeze ! mix. \
uridecodebin uri=http://upload.wikimedia.org/wikipedia/fr/1/14/Logo_vibration.JPG ! videoscale ! video/x-raw-yuv,width=100,height=100 \
! videobox top=0 left=-100 ! imagefreeze ! mix.
But this solution doesn't work with PNG files, and I don't know why, because uridecodebin is a universal decoder...
If anybody has an idea...
OK, try this pipeline. With this pipeline you can add a PNG file if you need to:
gst-launch -e videomixer2 name=mixer sink_0::xpos=0 sink_0::ypos=0 sink_1::xpos=100 sink_1::ypos=0 ! ffmpegcolorspace ! xvimagesink \
uridecodebin uri=http://www.logotheque.fr/6396-2/logo+RMC+INFO.jpg ! ffmpegcolorspace ! imagefreeze ! videoscale ! "video/x-raw-yuv, format=(fourcc)AYUV, width=100, height=100" ! queue2 ! mixer.sink_0. \
uridecodebin uri=http://www.logotheque.fr/6396-2/logo+RMC+INFO.jpg ! ffmpegcolorspace ! imagefreeze ! videoscale ! "video/x-raw-yuv, format=(fourcc)AYUV, width=100, height=100" ! queue2 ! mixer.sink_1. -v
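Note that this is the old GStreamer 0.10 syntax (gst-launch, ffmpegcolorspace, video/x-raw-yuv). On a 1.x install, a rough equivalent would use compositor and videoconvert instead; a sketch (untested), with the same image URLs:
gst-launch-1.0 -e compositor name=mixer sink_0::xpos=0 sink_0::ypos=0 sink_1::xpos=100 sink_1::ypos=0 \
  ! videoconvert ! xvimagesink \
  uridecodebin uri=http://www.logotheque.fr/6396-2/logo+RMC+INFO.jpg ! imagefreeze ! videoconvert ! videoscale \
  ! video/x-raw,width=100,height=100 ! queue2 ! mixer.sink_0 \
  uridecodebin uri=http://upload.wikimedia.org/wikipedia/fr/1/14/Logo_vibration.JPG ! imagefreeze ! videoconvert ! videoscale \
  ! video/x-raw,width=100,height=100 ! queue2 ! mixer.sink_1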
I am trying to convert a DVD to an MKV file with GStreamer. The pipeline I use is:
gst-launch -evv multifilesrc location="VTS_01_%d.VOB" index=1 ! dvddemux name=demuxer \
matroskamux name=mux ! filesink location=test.mkv \
demuxer.current_video ! queue ! mpeg2dec ! x264enc ! mux. \
demuxer.current_audio ! queue ! ffdec_ac3 ! lamemp3enc ! mux.
Unfortunately, the pipeline does not get beyond prerolling. When I replace x264enc with, for instance, ffenc_mpeg4, everything works fine.
This may work:
gst-launch filesrc location=file.vob \
! queue \
! dvddemux name=demuxer matroskamux name=mux \
! queue \
! filesink location=test.mkv demuxer.current_video \
! queue \
! ffdec_mpeg2video \
! ffdeinterlace \
! x264enc \
! 'video/x-h264, width=720, height=576, framerate=25/1' \
! mux. demuxer.current_audio \
! queue max-size-bytes=0 max-size-buffers=0 max-size-time=10000000000 \
! ffdec_ac3 \
! audioconvert \
! lamemp3enc \
! mux.
Byte-stream should be 0 - sorry for that earlier.
You need to give the caps of the video after x264enc, and you need to increase the limits on the audio queue to handle the delay introduced by x264enc.
These two changes got the pipeline running at my end.
The deinterlacer is optional but desirable for interlaced content.
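For context on the byte-stream remark: in the 0.10 x264enc this is a boolean property, and matroskamux generally wants the AVC (non-byte-stream) form, so it can be spelled out explicitly on the video branch. A sketch of just that branch (untested):
demuxer.current_video \
! queue \
! ffdec_mpeg2video \
! ffdeinterlace \
! x264enc byte-stream=0 \
! 'video/x-h264, width=720, height=576, framerate=25/1' \
! mux.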