WARNING: erroneous pipeline: no element "voaacenc" - gstreamer

I am trying to execute:
gst-launch-1.0 -em rtpbin name=rtpbin latency=5 udpsrc port=5102 caps="application/x-rtp,media=(string)audio,clock-rate=(int)48000,encoding-name=(string)OPUS" ! rtpbin.recv_rtp_sink_0 rtpbin. ! queue ! rtpopusdepay ! opusdec ! audioconvert ! audioresample ! voaacenc ! mux. udpsrc port=5104 caps="application/x-rtp,media=(string)video,clock-rate=(int)90000,encoding-name=(string)H264" ! rtpbin.recv_rtp_sink_1 rtpbin. ! queue ! rtph264depay ! h264parse ! mux. flvmux name=mux streamable=true ! rtmpsink sync=false location="rtmp://127.0.0.1:1935/show/stream live=1"
Unfortunately, it raises an error:
WARNING: erroneous pipeline: no element "voaacenc"

Did you try running gst-inspect-1.0 voaacenc?
Try installing gst-plugins-bad; that should solve it.
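If it helps, a quick way to confirm the element is missing and to pull in the plugin set that provides it (the package name below is a Debian/Ubuntu assumption; it differs on other distributions):
gst-inspect-1.0 voaacenc
sudo apt-get install gstreamer1.0-plugins-bad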

Related

Recording multiple RTSP streams h265 format to Kinesis Video Streams using Gstreamer and Kvssink

I need to record 4 RTSP streams into a single Kinesis Video Streams stream.
The streams must be placed in the video like this:
 ---------- ----------
|          |          |
| STREAM 1 | STREAM 2 |
|          |          |
|----------|----------|
|          |          |
| STREAM 3 | STREAM 4 |
|          |          |
 ---------- ----------
I was able to insert a single stream and make it work perfectly, using the command below:
gst-launch-1.0 rtspsrc user-id="admin" user-pw="password" location="rtsp://admin:password#192.168.0.1:554/cam/realmonitor?channel=1&subtype=0" short-header=TRUE ! rtph265depay ! h265parse ! video/x-h265, alignment=au ! kvssink stream-name="test-stream" storage-size=512 access-key="access-key" secret-key="secret-key" aws-region="us-east-1"
However, my goal is to insert several streams into the same Kinesis Video Streams stream.
For this, I found the videomixer example below:
gst-launch-1.0 -e rtspsrc location=rtsp_url1 ! rtph264depay ! h264parse ! decodebin ! videoconvert ! m.sink_0 \
rtspsrc location=rtsp_url2 ! rtph264depay ! h264parse ! decodebin ! videoconvert ! m.sink_1 \
rtspsrc location=rtsp_url3 ! rtph264depay ! h264parse ! decodebin ! videoconvert ! m.sink_2 \
rtspsrc location=rtsp_url4 ! rtph264depay ! h264parse ! decodebin ! videoconvert ! m.sink_3 \
videomixer name=m sink_1::xpos=1280 sink_2::ypos=720 sink_3::xpos=1280 sink_3::ypos=720 ! x264enc ! mp4mux ! filesink location=./out.mp4 sync=true
I adapted the example to just two streams and made it work inside the container, using a command like the one below:
gst-launch-1.0 -e rtspsrc user-id="admin" user-pw="password" location="rtsp://password#192.168.0.1:554/cam/realmonitor?channel=1&subtype=0" short-header=TRUE ! rtph265depay ! h265parse ! video/x-h265, alignment=au ! libde265dec ! videoconvert ! m.sink_0 \
rtspsrc user-id="admin" user-pw="password" location="rtsp://password#192.168.0.2:554/cam/realmonitor?channel=1&subtype=0" short-header=TRUE ! rtph265depay ! h265parse ! video/x-h265, alignment=au ! libde265dec ! videoconvert ! m.sink_1 \
videomixer name=m sink_0::xpos=1080 sink_1::ypos=1080 ! x265enc ! h265parse ! video/x-h265, alignment=au ! kvssink stream-name="test-stream" storage-size=512 access-key="access-key" secret-key="secret-key" aws-region="us-east-1"
And in another way:
gst-launch-1.0 -e videomixer name=mix sink_0::xpos=0 sink_0::ypos=0 sink_0::alpha=0 sink_1::xpos=0 sink_1::ypos=0 \
rtspsrc user-id="admin" user-pw="password" location="rtsp://password#192.168.0.1:554/cam/realmonitor?channel=1&subtype=0" short-header=TRUE ! rtph265depay ! h265parse ! video/x-h265, alignment=au ! libde265dec ! videoconvert ! videoscale ! video/x-raw,width=1920,height=1080 ! mix.sink_0 \
rtspsrc user-id="admin" user-pw="password" location="rtsp://password#192.168.0.2:554/cam/realmonitor?channel=1&subtype=0" short-header=TRUE ! rtph265depay ! h265parse ! video/x-h265, alignment=au ! libde265dec ! videoconvert ! videoscale ! video/x-raw,width=1920,height=1080 ! mix.sink_1 \
mix. ! queue ! videoconvert ! x265enc ! queue ! kvssink stream-name="test-stream" storage-size=512 access-key="access-key" secret-key="secret-key" aws-region="us-east-1"
The container in question is from: https://github.com/awslabs/amazon-kinesis-video-streams-producer-sdk-cpp
However, when I log into Kinesis Video Streams and try to download a clip with GetClip, in both cases I get this error:
MissingCodecPrivateDataException
Missing codec private data in fragment for track 1.
Status code: 400
The logs with GST_DEBUG=1 can be found at https://gist.github.com/vbbandeira/b15ec8af6986237a4cd7e382e4ede261
And the logs with GST_DEBUG=4 can be found at https://gist.github.com/vbbandeira/6bd4b7a014a69da5f46cd036eaf32aec
Can you please let me know what is going on there or, if possible, help me find a solution to this error?
Thanks!
For those looking for the same solution: I managed to make it work by replacing videomixer (which is deprecated) with compositor. Below is an example of the command I used, and it worked:
gst-launch-1.0 rtspsrc location="rtsp://password#192.168.0.1:554/cam/realmonitor?channel=1&subtype=0" short-header=TRUE ! decodebin ! videoconvert ! comp.sink_0 \
rtspsrc location="rtsp://password#192.168.0.2:554/cam/realmonitor?channel=1&subtype=0" short-header=TRUE ! decodebin ! videoconvert ! comp.sink_1 \
compositor name=comp sink_0::xpos=0 sink_1::xpos=1280 ! x264enc ! kvssink stream-name="test-stream" storage-size=512 access-key="access-key" secret-key="secret-key" aws-region="us-east-1"
However, I was only able to get this to work with H.264.
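Not part of the original answer, but if the MissingCodecPrivateDataException comes back, a common adjustment is to put a parser directly before kvssink so that the codec private data (SPS/PPS) ends up in the caps; an untested sketch based on the command above:
gst-launch-1.0 rtspsrc location="rtsp://password#192.168.0.1:554/cam/realmonitor?channel=1&subtype=0" short-header=TRUE ! decodebin ! videoconvert ! comp.sink_0 \
rtspsrc location="rtsp://password#192.168.0.2:554/cam/realmonitor?channel=1&subtype=0" short-header=TRUE ! decodebin ! videoconvert ! comp.sink_1 \
compositor name=comp sink_0::xpos=0 sink_1::xpos=1280 ! x264enc ! h264parse ! video/x-h264,stream-format=avc,alignment=au ! kvssink stream-name="test-stream" storage-size=512 access-key="access-key" secret-key="secret-key" aws-region="us-east-1"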

using gstreamer srtp for audio streaming

I tried the pipelines below, but I can't hear any audio:
gst-launch-1.0 udpsrc port=6000 ! "application/x-srtp,media=(string)audio, clock-rate=(int)44100, encoding-name=(string)L16, channels=(int)2, payload=(int)96, srtp-key=(buffer)012345678901234567890123456789012345678901234567890123456789, srtp-cipher=(string)aes-128-icm, srtp-auth=(string)hmac-sha1-80, srtcp-cipher=(string)aes-128-icm, srtcp-auth=(string)hmac-sha1-80, roc=(uint)0" ! srtpdec ! rtpL16depay ! audioconvert ! alsasink
gst-launch-1.0 -v alsasrc ! audioconvert ! audio/x-raw,channels=2,depth=16,width=16,rate=44100 ! rtpL16pay ! srtpenc key="012345678901234567890123456789012345678901234567890123456789" ! udpsink host=3.204.26.22 port=6000
That is because you haven't set the ssrc. Try the following pipelines.
Sender pipeline:
gst-launch-1.0 -v alsasrc ! audioconvert ! audio/x-raw,channels=2,depth=16,width=16,rate=44100 ! rtpL16pay ! 'application/x-rtp, ssrc=(uint)3412089386' ! srtpenc key="012345678901234567890123456789012345678901234567890123456789" ! udpsink host=3.204.26.22 port=6000
Receiver pipeline:
gst-launch-1.0 udpsrc port=6000 ! "application/x-srtp,media=(string)audio, clock-rate=(int)44100, encoding-name=(string)L16, channels=(int)2, payload=(int)96,ssrc=(uint)3412089386, srtp-key=(buffer)012345678901234567890123456789012345678901234567890123456789, srtp-cipher=(string)aes-128-icm, srtp-auth=(string)hmac-sha1-80, srtcp-cipher=(string)aes-128-icm, srtcp-auth=(string)hmac-sha1-80, roc=(uint)0" ! srtpdec ! rtpL16depay ! audioconvert ! alsasink

GStreamer Playing 3 videos side by side

Here is a pipeline that plays 2 MP4 videos side by side using videobox:
gst-launch-1.0 filesrc location=1.mp4 ! decodebin ! queue !
videoconvert ! videobox border-alpha=0 right=-100 ! videomixer
name=mix ! videoconvert ! autovideosink filesrc location=2.mp4 !
decodebin ! queue ! videoconvert ! videobox border-alpha=0 left=-100 !
mix.
I have tried this to play 3 videos:
gst-launch-1.0 filesrc location=Downloads/1.mp4 ! decodebin ! queue !
videoconvert ! videobox border-alpha=0 right=-100 ! videomixer
name=mix !
videoconvert ! autovideosink filesrc location=Downloads/2.mp4 !
decodebin ! queue ! videoconvert ! videobox border-alpha=0 left=-100 !
mix !
videoconvert ! autovideosink filesrc location=Downloads/3.mp4 !
decodebin ! queue ! videoconvert ! videobox border-alpha=0 left=-200 !
mix.
I get a syntax error :(
Something like this with videomixer:
gst-launch-1.0 -e \
videomixer name=mix background=0 \
sink_1::xpos=0 sink_1::ypos=0 \
sink_2::xpos=200 sink_2::ypos=0 \
sink_3::xpos=100 sink_3::ypos=100 \
! autovideosink \
uridecodebin uri='file:///data/big_buck_bunny_trailer-360p.mp4' \
! videoscale \
! video/x-raw,width=200,height=100 \
! mix.sink_1 \
uridecodebin uri='file:///data/sintel_trailer-480p.webm' \
! videoscale \
! video/x-raw,width=200,height=100 \
! mix.sink_2 \
uridecodebin uri='file:///data/the_daily_dweebs-720p.mp4' \
! videoscale \
! video/x-raw,width=200,height=100 \
! mix.sink_3
Once you instantiate an element with a name (e.g. videomixer name=mix), you can later link to it by referring to that name followed by a dot (e.g. mix.). You don't need to repeat autovideosink 3 times after that.
gst-launch-1.0 filesrc location=Downloads/1.mp4 ! decodebin ! queue ! videoconvert ! videobox border-alpha=0 right=-100 ! videomixer name=mix ! videoconvert ! autovideosink
filesrc location=Downloads/2.mp4 ! decodebin ! queue ! videoconvert ! videobox border-alpha=0 left=-100 ! mix.
filesrc location=Downloads/3.mp4 ! decodebin ! queue ! videoconvert ! videobox border-alpha=0 left=-200 ! mix.
Here, we have built three branches and merged them with the mix element.

Working example of rtpvrawpay in GStreamer

Can someone paste a working pair of gst-launch pipelines that use rtpvrawpay and rtpvrawdepay?
Here's my first stab at it:
gst-launch-1.0 videotestsrc ! videoconvert ! video/x-raw,width=128,height=128,format=BGR ! rtpvrawpay ! application/x-rtp,payload=96 ! udpsink host=... port=...
gst-launch-1.0 udpsrc port=9999 ! application/x-rtp,media=video,payload=96,clock-rate=90000,encoding-name=RAW,sampling=BGR,depth=16 ! rtpvrawdepay ! video/x-raw,width=128,height=128,format=BGR,framerate=30/1 ! videoconvert ! ximagesink
Pay: gst-launch-1.0 -v videotestsrc ! rtpvrawpay ! udpsink host="127.0.0.1" port="5000"
Depay: gst-launch-1.0 udpsrc port="5000" caps = "application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)RAW, sampling=(string)YCbCr-4:2:0, depth=(string)8, width=(string)320, height=(string)240, colorimetry=(string)BT601-5, payload=(int)96, ssrc=(uint)1103043224, timestamp-offset=(uint)1948293153, seqnum-offset=(uint)27904" ! rtpvrawdepay ! videoconvert ! queue ! xvimagesink sync=false
Check the caps on your pipeline again: run the sender with -v and copy the negotiated application/x-rtp caps into the receiver's udpsrc caps, as in the depay example above.
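For the 128x128 BGR pipeline in the question, the payloader will advertise depth=(string)8 (not 16), and width/height must be present in the receiver caps (as strings). A sketch of a matching receiver, assuming defaults otherwise; the exact values should be copied from the sender's -v output:
gst-launch-1.0 udpsrc port=9999 ! "application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)RAW, sampling=(string)BGR, depth=(string)8, width=(string)128, height=(string)128, payload=(int)96" ! rtpvrawdepay ! videoconvert ! ximagesink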

interleaving 4 channels of audio into vorbisenc or opusenc in gstreamer

I’m trying to interleave 4 channels of audio into one audio file.
I have managed to successfully save them to WAV with wavenc:
gst-launch-1.0 interleave name=i filesrc location=FourMICS_RR_long.wav !
decodebin ! audioconvert ! audio/x-raw,format=(string)F32LE ! queue ! i.sink_0
filesrc location=FourMICS_CR_long.wav ! decodebin ! audioconvert !
audio/x-raw,format=(string)F32LE ! queue ! i.sink_1
filesrc location=FourMICS_CL_long.wav ! decodebin ! audioconvert ! audio/x-raw,format=(string)F32LE ! queue ! i.sink_2
filesrc location=FourMICS_LL_long.wav ! decodebin ! audioconvert ! audio/x-raw,format=(string)F32LE ! queue ! i.sink_3
i.src ! queue ! audio/x-raw,rate=48000,channels=4,format=F32LE,layout=interleaved ! queue !
wavenc ! queue ! filesink location=out2.wav
but when I encode it with vorbisenc and mux it with oggmux,
gst-launch-1.0 interleave name=i filesrc location=FourMICS_RR_long.wav ! decodebin ! audioconvert ! audio/x-raw,format=(string)F32LE ! queue ! i.sink_0
filesrc location=FourMICS_CR_long.wav ! decodebin ! audioconvert ! audio/x-raw,format=(string)F32LE ! queue ! i.sink_1
filesrc location=FourMICS_CL_long.wav ! decodebin ! audioconvert ! audio/x-raw,format=(string)F32LE ! queue ! i.sink_2
filesrc location=FourMICS_LL_long.wav ! decodebin ! audioconvert ! audio/x-raw,format=(string)F32LE ! queue ! i.sink_3
i.src ! queue ! audio/x-raw,rate=48000,channels=4,format=F32LE,layout=interleaved ! queue !
wavenc ! queue ! wavparse ! audioconvert ! audio/x-raw,rate=48000,channels=4,format=F32LE,layout=interleaved !
vorbisenc ! oggmux ! filesink location=out2.ogg
the channels get completely messed up when I play the file or look at it in Audacity.
I have also tried setting
channel-position=GST_AUDIO_CHANNEL_POSITION_REAR_LEFT
channel-mask=(bitmask)0x4
for each channel, like this:
gst-launch-1.0 interleave name=i filesrc location=FourMICS_RR_long.wav ! decodebin ! audioconvert ! audio/x-raw,format=(string)F32LE,channel-position=GST_AUDIO_CHANNEL_POSITION_REAR_RIGHT,channel-mask=(bitmask)0x1 ! queue ! i.
filesrc location=FourMICS_CR_long.wav ! decodebin ! audioconvert ! audio/x-raw,format=(string)F32LE,channels=(int)1,channel-position=GST_AUDIO_CHANNEL_POSITION_FRONT_RIGHT,channel-mask=(bitmask)0x2 ! queue ! i.
filesrc location=FourMICS_CL_long.wav ! decodebin ! audioconvert ! audio/x-raw,format=(string)F32LE,channels=(int)1,channel-position=GST_AUDIO_CHANNEL_POSITION_FRONT_LEFT,channel-mask=(bitmask)0x3 ! queue ! i.
filesrc location=FourMICS_LL_long.wav ! decodebin ! audioconvert ! audio/x-raw,format=(string)F32LE,channels=(int)1,channel-position=GST_AUDIO_CHANNEL_POSITION_REAR_LEFT,channel-mask=(bitmask)0x4 ! queue ! i.
i.src ! queue ! audio/x-raw,rate=48000,channels=4,format=F32LE,layout=interleaved ! queue !
wavenc ! queue ! wavparse ! audioconvert ! audio/x-raw,rate=48000,channels=4,format=F32LE,layout=interleaved !
vorbisenc ! oggmux ! filesink location=out2.ogg
Same problem.
Any suggestions on how to solve this?
I am not restricted to Vorbis; in fact, I have similar issues with opusenc as well.
Thanks.
So, I got it working:
gst-launch-1.0 interleave name=i filesrc location=FourMICS_RR_long.wav ! decodebin ! audioconvert ! audioresample ! audio/x-raw,rate=24000,format=F32LE ! queue ! i.sink_0
filesrc location=FourMICS_CR_long.wav ! decodebin ! audioconvert ! audioresample ! audio/x-raw,channels=(int)1,rate=24000,format=F32LE ! queue ! i.sink_1
filesrc location=FourMICS_CL_long.wav ! decodebin ! audioconvert ! audioresample ! audio/x-raw,channels=(int)1,rate=24000,format=F32LE ! queue ! i.sink_2
filesrc location=FourMICS_LL_long.wav ! decodebin ! audioconvert ! audioresample ! audio/x-raw,channels=(int)1,rate=24000,format=F32LE ! queue ! i.sink_3
i.src ! capssetter caps=audio/x-raw,channels=4,channel-mask=(bitmask)0x33 ! audioconvert ! audioresample ! vorbisenc ! oggmux ! filesink location=out2.ogg
There were two issues:
1. the caps (channel count and channel-mask) need to be set on the interleave output, and
2. vorbisenc could not cope with 4 channels at 48 kHz, so the inputs are resampled to 24 kHz.
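Since opusenc was mentioned as well: the same capssetter trick should carry over, with only the encoder and output file changed. An untested sketch along the lines of the working command above:
gst-launch-1.0 interleave name=i filesrc location=FourMICS_RR_long.wav ! decodebin ! audioconvert ! audioresample ! audio/x-raw,rate=24000,format=F32LE ! queue ! i.sink_0 \
filesrc location=FourMICS_CR_long.wav ! decodebin ! audioconvert ! audioresample ! audio/x-raw,channels=(int)1,rate=24000,format=F32LE ! queue ! i.sink_1 \
filesrc location=FourMICS_CL_long.wav ! decodebin ! audioconvert ! audioresample ! audio/x-raw,channels=(int)1,rate=24000,format=F32LE ! queue ! i.sink_2 \
filesrc location=FourMICS_LL_long.wav ! decodebin ! audioconvert ! audioresample ! audio/x-raw,channels=(int)1,rate=24000,format=F32LE ! queue ! i.sink_3 \
i.src ! capssetter caps=audio/x-raw,channels=4,channel-mask=(bitmask)0x33 ! audioconvert ! audioresample ! opusenc ! oggmux ! filesink location=out2_opus.ogg
(The audioconvert/audioresample before opusenc take care of the S16 sample format and the 24 kHz/48 kHz rates that opusenc accepts.)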