GStreamer: add textoverlay in C++

I am trying to add a textoverlay to an mp4 movie with gstreamer-0.10. Yes, I know it's old, but I only need to make a few changes to the mp4. I know how to do it with gst-launch-0.10:
gst-launch-0.10 filesrc location=input.mp4 name=src ! decodebin
name=demuxer demuxer. ! queue ! textoverlay text="My Text" ! x264enc !
muxer. demuxer. ! queue ! audioconvert ! voaacenc ! muxer. mp4mux
name=muxer ! filesink location=output.mp4
This creates a text overlay movie for me. But now I need to add the textoverlay to the following bin in C++ - this is my working pipeline creating an mp4:
QGst::BinPtr m_encBin = QGst::Bin::fromDescription(
"filesrc location=\""+path+"videoname.raw.mkv\" ! queue ! matroskademux name=\"demux\" "
"demux.video_00 ! queue ! ffmpegcolorspace ! queue ! x264enc ! queue ! mux.video_00 "
"demux.audio_00 ! queue ! audioconvert ! queue ! faac ! queue ! mux.audio_00 "
"mp4mux name=\"mux\" ! queue ! filesink name=\"filesink\" sync=false ",
QGst::Bin::NoGhost);
Does anyone know how I can add the textoverlay into the bin?
Cheers Fredrik

I think you should add queue and textoverlay elements to your pipeline description between ffmpegcolorspace and queue elements:
QGst::BinPtr m_encBin = QGst::Bin::fromDescription(
"filesrc location=\""+path+"videoname.raw.mkv\" ! queue ! matroskademux name=\"demux\" "
"demux.video_00 ! queue ! ffmpegcolorspace ! queue ! textoverlay text=\"My Text\" ! queue ! x264enc ! queue ! mux.video_00 "
"demux.audio_00 ! queue ! audioconvert ! queue ! faac ! queue ! mux.audio_00 "
"mp4mux name=\"mux\" ! queue ! filesink name=\"filesink\" sync=false ",
QGst::Bin::NoGhost);
I think you received a downvote because you didn't try to understand the GStreamer pipeline description syntax and asked for a ready-to-use solution.
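If the overlay text needs to change per build or at runtime, the description string can also be assembled programmatically instead of hand-editing it. A minimal sketch of that idea (the insertAfter helper and the anchor name are hypothetical, not part of QtGStreamer):

```cpp
#include <cassert>
#include <string>

// Insert an extra element into a gst-launch style description string right
// after a given anchor element. Returns the description unchanged if the
// anchor is not found. `anchor` and `element` are plain pipeline fragments.
std::string insertAfter(const std::string &desc,
                        const std::string &anchor,
                        const std::string &element)
{
    std::string::size_type pos = desc.find(anchor);
    if (pos == std::string::npos)
        return desc;                    // anchor not present: leave as-is
    pos += anchor.size();
    return desc.substr(0, pos) + " ! " + element + desc.substr(pos);
}
```

The resulting string can then be passed to QGst::Bin::fromDescription, e.g. insertAfter(desc, "ffmpegcolorspace", "textoverlay text=\"My Text\"").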

Related

Multi-Camera in WebRTC application with Gstreamer C++

In my pipeline I would like to use 2 different v4l2 sources. But when I use two v4l2src elements as in the code below, I get an error like "ERROR GST_PIPELINE grammar.y:740:gst_parse_perform_link: could not link h264parse1 to payloader".
pipe1 =
gst_parse_launch ("webrtcbin name=sendrecv stun-server=stun://" STUN_SERVER " "
"v4l2src device=/dev/video0 "
"! videorate "
"! video/x-raw,width=640,height=360,framerate=15/1 "
"! videoconvert "
"! queue max-size-buffers=1 "
"! x264enc bitrate=600 speed-preset=ultrafast tune=zerolatency key-int-max=15 "
"! video/x-h264,profile=constrained-baseline "
"! queue max-size-time=100000000 ! h264parse "
"! rtph264pay config-interval=-1 name=payloader "
"! sendrecv. ", &error);
I can also run a multi-camera gst-launch command directly and record videos from 2 different cameras:
gst-launch-1.0 -e v4l2src device=/dev/video0 ! videoconvert ! videoscale ! 'video/x-raw, width=(int)640, height=(int)480' ! tee name=c c. ! queue ! omxvp8enc bitrate=1500000 ! rtpvp8pay ! udpsink bind-port=8574 host=192.168.1.110 port=8574 loop=false c. ! queue ! omxh264enc bitrate=1500000 ! mp4mux ! queue ! filesink location=test-RightFacingCamera.mp4 v4l2src device=/dev/video1 ! videoconvert ! videoscale ! 'video/x-raw, width=(int)640, height=(int)480' ! tee name=b b. ! queue ! omxvp8enc bitrate=1500000 ! rtpvp8pay ! udpsink bind-port=8564 host=192.168.1.110 port=8564 loop=false b. ! queue ! omxh264enc bitrate=1500000 ! mp4mux ! queue ! filesink location=test-LeftFacingCamera.mp4
But I couldn't adapt my pipeline for my WebRTC project directly; I need to add one more v4l2src. Can you help me with it?
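One likely cause of the "could not link h264parse1 to payloader" error is that both branches reuse the same explicit element name (name=payloader), which is ambiguous in a parse-launch string. A hedged sketch of building each camera branch with a unique name suffix (cameraBranch is a hypothetical helper; the element properties are copied from the pipeline in the question):

```cpp
#include <cassert>
#include <string>

// Build one camera branch of the parse-launch string with a unique index
// suffix on the named payloader, so two branches don't collide on element
// names when both are pasted into the same gst_parse_launch description.
std::string cameraBranch(const std::string &device, int idx)
{
    std::string i = std::to_string(idx);
    return "v4l2src device=" + device +
           " ! videorate ! video/x-raw,width=640,height=360,framerate=15/1"
           " ! videoconvert ! queue max-size-buffers=1"
           " ! x264enc bitrate=600 speed-preset=ultrafast tune=zerolatency key-int-max=15"
           " ! video/x-h264,profile=constrained-baseline"
           " ! queue max-size-time=100000000 ! h264parse"
           " ! rtph264pay config-interval=-1 name=payloader" + i +
           " ! sendrecv. ";
}
```

The two branch strings can then be concatenated after the webrtcbin declaration before calling gst_parse_launch.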

Gstreamer multiudpsink broadcast over all subnets

Below are my actual pipelines for sender and receiver. I would like to stream over the whole subnet (from 192.168.1.1 to 192.168.1.255), so that the receiver can decode the stream whatever its IP is: 192.168.1.10, 192.168.1.235, ...
How do I have to use multiudpsink to do that?
SENDER
appsrc ! video/x-raw, format=BGR ! queue ! videoconvert ! video/x-raw, format=BGRx ! nvvidconv ! omxh264enc ! video/x-h264, stream-format=byte-stream ! h264parse ! rtph264pay pt=96 config-interval=1 ! udpsink host=192.168.1.2 port=5200 sync=false
RECEIVER
udpsrc ! rtpjitterbuffer mode=0 ! rtph264depay ! h264parse ! avdec_h264 ! videoconvert ! d3dvideosink sync=false
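multiudpsink takes a comma-separated clients property ("host:port,host:port,..."), so one option, sketched below, is to generate an entry for every host in the /24 (buildClients is a hypothetical helper). Note that per-client unicast duplicates the stream's bandwidth for every host; pointing a plain udpsink at the subnet broadcast address 192.168.1.255 may be the cheaper option where broadcast traffic is allowed.

```cpp
#include <cassert>
#include <sstream>
#include <string>

// Build a multiudpsink "clients" property value covering every usable host
// address in a /24 subnet, e.g. "192.168.1.1:5200,192.168.1.2:5200,...".
std::string buildClients(const std::string &prefix, int port)
{
    std::ostringstream out;
    for (int host = 1; host <= 254; ++host) {
        if (host > 1)
            out << ",";
        out << prefix << host << ":" << port;
    }
    return out.str();
}
```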

GStreamer Playing 3 videos side by side

Here is the code for 2 mp4 videos playing in videoboxes.
gst-launch-1.0 filesrc location=1.mp4 ! decodebin ! queue !
videoconvert ! videobox border-alpha=0 right=-100 ! videomixer
name=mix ! videoconvert ! autovideosink filesrc location=2.mp4 !
decodebin ! queue ! videoconvert ! videobox border-alpha=0 left=-100 !
mix.
I have tried to play 3 videos with this code:
gst-launch-1.0 filesrc location=Downloads/1.mp4 ! decodebin ! queue !
videoconvert ! videobox border-alpha=0 right=-100 ! videomixer
name=mix !
videoconvert ! autovideosink filesrc location=Downloads/2.mp4 !
decodebin ! queue ! videoconvert ! videobox border-alpha=0 left=-100 !
mix !
videoconvert ! autovideosink filesrc location=Downloads/3.mp4 !
decodebin ! queue ! videoconvert ! videobox border-alpha=0 left=-200 !
mix.
I get a syntax error :(
Something like this with videomixer:
gst-launch-1.0 -e \
videomixer name=mix background=0 \
sink_1::xpos=0 sink_1::ypos=0 \
sink_2::xpos=200 sink_2::ypos=0 \
sink_3::xpos=100 sink_3::ypos=100 \
! autovideosink \
uridecodebin uri='file:///data/big_buck_bunny_trailer-360p.mp4' \
! videoscale \
! video/x-raw,width=200,height=100 \
! mix.sink_1 \
uridecodebin uri='file:///data/sintel_trailer-480p.webm' \
! videoscale \
! video/x-raw,width=200,height=100 \
! mix.sink_2 \
uridecodebin uri='file:///data/the_daily_dweebs-720p.mp4' \
! videoscale \
! video/x-raw,width=200,height=100 \
! mix.sink_3
Once you instantiate an element with a name (e.g. videomixer name=mix), you can later link to it with its name followed by a dot (e.g. mix.). You don't need to repeat autovideosink 3 times after that.
gst-launch-1.0 filesrc location=Downloads/1.mp4 ! decodebin ! queue ! videoconvert ! videobox border-alpha=0 right=-100 ! videomixer name=mix ! videoconvert ! autovideosink
filesrc location=Downloads/2.mp4 ! decodebin ! queue ! videoconvert ! videobox border-alpha=0 left=-100 ! mix.
filesrc location=Downloads/3.mp4 ! decodebin ! queue ! videoconvert ! videobox border-alpha=0 left=-200 ! mix.
Here we have initialized 3 branches and merged them with the mix element.
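The videobox arithmetic in the corrected command is simple: each additional input is pushed right by giving it a negative left border equal to its index times the tile width (a negative border grows the canvas in that direction), which is why the branches use left=-100 and left=-200. A small sketch of that rule (videoboxLeft is a hypothetical helper):

```cpp
#include <cassert>

// For the videobox approach above, video i (0-based) placed side by side
// gets a left border of -i * width, shifting it i tiles to the right.
int videoboxLeft(int index, int width)
{
    return -index * width;
}
```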

WARNING: erroneous pipeline: no element "voaacenc"

I try to execute:
gst-launch-1.0 -em rtpbin name=rtpbin latency=5 udpsrc port=5102 caps="application/x-rtp,media=(string)audio,clock-rate=(int)48000,encoding-name=(string)OPUS" ! rtpbin.recv_rtp_sink_0 rtpbin. ! queue ! rtpopusdepay ! opusdec ! audioconvert ! audioresample ! voaacenc ! mux. udpsrc port=5104 caps="application/x-rtp,media=(string)video,clock-rate=(int)90000,encoding-name=(string)H264" ! rtpbin.recv_rtp_sink_1 rtpbin. ! queue ! rtph264depay ! h264parse ! mux. flvmux name=mux streamable=true ! rtmpsink sync=false location="rtmp://127.0.0.1:1935/show/stream live=1"
Unfortunately it raises an error: WARNING: erroneous pipeline: no element "voaacenc"
Did you try running gst-inspect-1.0 voaacenc?
Try installing gst-plugins-bad; that should solve it.

interleaving 4 channels of audio into vorbisenc or opusenc in gstreamer

I'm trying to interleave 4 channels of audio into one audio file.
I have managed to successfully save them to WAV with wavenc:
gst-launch-1.0 interleave name=i filesrc location=FourMICS_RR_long.wav !
decodebin ! audioconvert ! audio/x-raw,format=(string)F32LE ! queue ! i.sink_0
filesrc location=FourMICS_CR_long.wav ! decodebin ! audioconvert !
audio/x-raw,format=(string)F32LE ! queue ! i.sink_1
filesrc location=FourMICS_CL_long.wav ! decodebin ! audioconvert ! audio/x-raw,format=(string)F32LE ! queue ! i.sink_2
filesrc location=FourMICS_LL_long.wav ! decodebin ! audioconvert ! audio/x-raw,format=(string)F32LE ! queue ! i.sink_3
i.src ! queue ! audio/x-raw,rate=48000,channels=4,format=F32LE,layout=interleaved ! queue !
wavenc ! queue ! filesink location=out2.wav
but when I encode it with vorbisenc and mux with oggmux:
gst-launch-1.0 interleave name=i filesrc location=FourMICS_RR_long.wav ! decodebin ! audioconvert ! audio/x-raw,format=(string)F32LE ! queue ! i.sink_0
filesrc location=FourMICS_CR_long.wav ! decodebin ! audioconvert ! audio/x-raw,format=(string)F32LE ! queue ! i.sink_1
filesrc location=FourMICS_CL_long.wav ! decodebin ! audioconvert ! audio/x-raw,format=(string)F32LE ! queue ! i.sink_2
filesrc location=FourMICS_LL_long.wav ! decodebin ! audioconvert ! audio/x-raw,format=(string)F32LE ! queue ! i.sink_3
i.src ! queue ! audio/x-raw,rate=48000,channels=4,format=F32LE,layout=interleaved ! queue !
wavenc ! queue ! wavparse ! audioconvert ! audio/x-raw,rate=48000,channels=4,format=F32LE,layout=interleaved !
vorbisenc ! oggmux ! filesink location=out2.ogg
the channels get completely messed up when I play the file or look at it in Audacity.
I have also tried using
channel-positions=GST_AUDIO_CHANNEL_POSITION_REAR_LEFT
channel-mask=(bitmask)0x4
for each channel, like this:
gst-launch-1.0 interleave name=i filesrc location=FourMICS_RR_long.wav ! decodebin ! audioconvert ! audio/x-raw,format=(string)F32LE,channel-position=GST_AUDIO_CHANNEL_POSITION_REAR_RIGHT,channel-mask=(bitmask)0x1 ! queue ! i.
filesrc location=FourMICS_CR_long.wav ! decodebin ! audioconvert ! audio/x-raw,format=(string)F32LE,channels=(int)1,channel-position=GST_AUDIO_CHANNEL_POSITION_FRONT_RIGHT,channel-mask=(bitmask)0x2 ! queue ! i.
filesrc location=FourMICS_CL_long.wav ! decodebin ! audioconvert ! audio/x-raw,format=(string)F32LE,channels=(int)1,channel-position=GST_AUDIO_CHANNEL_POSITION_FRONT_LEFT,channel-mask=(bitmask)0x3 ! queue ! i.
filesrc location=FourMICS_LL_long.wav ! decodebin ! audioconvert ! audio/x-raw,format=(string)F32LE,channels=(int)1,channel-position=GST_AUDIO_CHANNEL_POSITION_REAR_LEFT,channel-mask=(bitmask)0x4 ! queue ! i.
i.src ! queue ! audio/x-raw,rate=48000,channels=4,format=F32LE,layout=interleaved ! queue !
wavenc ! queue ! wavparse ! audioconvert ! audio/x-raw,rate=48000,channels=4,format=F32LE,layout=interleaved !
vorbisenc ! oggmux ! filesink location=out2.ogg
Same problem.
Any suggestions on how to solve this?
I am not restricted to Vorbis; in fact I have similar issues with opusenc too.
Thanks.
Mar
So, I got it working:
gst-launch-1.0 interleave name=i filesrc location=FourMICS_RR_long.wav ! decodebin ! audioconvert ! audioresample ! audio/x-raw,rate=24000,format=F32LE ! queue ! i.sink_0
filesrc location=FourMICS_CR_long.wav ! decodebin ! audioconvert ! audioresample ! audio/x-raw,channels=(int)1,rate=24000,format=F32LE ! queue ! i.sink_1
filesrc location=FourMICS_CL_long.wav ! decodebin ! audioconvert ! audioresample ! audio/x-raw,channels=(int)1,rate=24000,format=F32LE ! queue ! i.sink_2
filesrc location=FourMICS_LL_long.wav ! decodebin ! audioconvert ! audioresample ! audio/x-raw,channels=(int)1,rate=24000,format=F32LE ! queue ! i.sink_3
i.src ! capssetter caps=audio/x-raw,channels=4,channel-mask=(bitmask)0x33 ! audioconvert ! audioresample ! vorbisenc ! oggmux ! filesink location=out2.ogg
There were two issues:
1. the caps need to be set on the interleave output
2. vorbisenc could not cope with 4 channels at 48 kHz
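The 0x33 mask used in the working pipeline is not arbitrary: GStreamer channel masks set one bit per GstAudioChannelPosition number (front-left = bit 0, front-right = bit 1, rear-left = bit 4, rear-right = bit 5, per the GstAudio docs), so 0x33 selects exactly a quad layout. A quick check of that arithmetic:

```cpp
#include <cassert>
#include <cstdint>

// One bit per GstAudioChannelPosition enum value; ORing the four quad
// positions yields the 0x33 channel-mask used with capssetter above.
constexpr std::uint64_t FRONT_LEFT  = 1ull << 0;
constexpr std::uint64_t FRONT_RIGHT = 1ull << 1;
constexpr std::uint64_t REAR_LEFT   = 1ull << 4;
constexpr std::uint64_t REAR_RIGHT  = 1ull << 5;
constexpr std::uint64_t QUAD_MASK =
    FRONT_LEFT | FRONT_RIGHT | REAR_LEFT | REAR_RIGHT;
```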