Resample and depayload audio RTP using GStreamer - C++

I am developing an application where a wave file is read from a given location at one end of a pipeline and sent out through udpsink at the other end:
gst-launch-1.0 filesrc location=/path/to/wave/file/Tornado.wav ! wavparse ! audioconvert ! audio/x-raw,channels=1,depth=16,width=16,rate=44100 ! rtpL16pay ! udpsink host=xxx.xxx.xxx.xxx port=5000
The above wave file has a sampling rate of 44100 Hz and a single channel (mono).
On the same PC I am using a C++ application to catch these packets and depayload them to a headerless audio file (say Tornado.raw).
The pipeline I am creating for this is essentially:
gst-launch-1.0 udpsrc port=5000 ! "application/x-rtp,media=(string)audio, clock-rate=(int)44100, width=16, height=16, encoding-name=(string)L16, encoding-params=(string)1, channels=(int)1, channel-positions=(int)1, payload=(int)96" ! rtpL16depay ! filesink location=Tornado.raw
Now this works fine. I get the headerless data, and when I play it using Audacity it plays great!
I am trying to resample this audio while it is in the pipeline, from 44100 Hz down to 8000 Hz.
Simply changing clock-rate=(int)44100 to clock-rate=(int)8000 does not help (and is logically absurd anyway, since it merely mislabels the incoming stream).
I am looking for a way to get the headerless file at the pipeline output with 8000 Hz sampling.
Also, the data I am getting now is big-endian, but I want little-endian as output. How do I set that in the pipeline?
You might relate this to one of my earlier questions.

First, you have some weird caps in your pipeline: width and height are video properties and do not belong in audio RTP caps. They will probably just be ignored, and I am not sure about some of the other fields either, but it is worth cleaning them up.
For the actual question: just use GStreamer's audioresample and audioconvert elements to convert the stream to your desired format.
E.g.
[..] ! rtpL16depay ! audioresample ! audioconvert ! \
audio/x-raw, rate=8000, format=S16LE ! filesink location=Tornado.raw
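Here audioresample handles the 44100 Hz to 8000 Hz rate change, and audioconvert handles the big-endian to little-endian swap required by the S16LE cap.
Since you are receiving in a C++ application anyway, the same pipeline can be built there with gst_parse_launch. A minimal sketch, assuming GStreamer 1.x, with error handling trimmed to the essentials:

#include <gst/gst.h>

int main(int argc, char *argv[]) {
  gst_init(&argc, &argv);

  /* Receive RTP L16, depayload, resample to 8 kHz, convert the
   * big-endian network byte order to S16LE, write raw samples. */
  GError *error = NULL;
  GstElement *pipeline = gst_parse_launch(
      "udpsrc port=5000 caps=\"application/x-rtp,media=(string)audio,"
      "clock-rate=(int)44100,encoding-name=(string)L16,"
      "encoding-params=(string)1,channels=(int)1,payload=(int)96\" ! "
      "rtpL16depay ! audioresample ! audioconvert ! "
      "audio/x-raw,rate=8000,format=S16LE ! "
      "filesink location=Tornado.raw",
      &error);
  if (pipeline == NULL) {
    g_printerr("Failed to build pipeline: %s\n", error->message);
    g_clear_error(&error);
    return 1;
  }

  gst_element_set_state(pipeline, GST_STATE_PLAYING);

  /* Block until an error or end-of-stream is posted on the bus. */
  GstBus *bus = gst_element_get_bus(pipeline);
  GstMessage *msg = gst_bus_timed_pop_filtered(
      bus, GST_CLOCK_TIME_NONE,
      (GstMessageType)(GST_MESSAGE_ERROR | GST_MESSAGE_EOS));
  if (msg != NULL)
    gst_message_unref(msg);
  gst_object_unref(bus);

  gst_element_set_state(pipeline, GST_STATE_NULL);
  gst_object_unref(pipeline);
  return 0;
}

Build with something like: g++ receiver.cpp -o receiver $(pkg-config --cflags --libs gstreamer-1.0)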

Related

GStreamer pipeline stops playing after fast and shaky camera movement

I'm working on a video-streaming wearable device. During tests it came up that the pipeline clock and stream stop during fast walking or running. It's bizarre behaviour, because the debug messages show no errors about a broken pipeline, apart from lost frames. The stream is frozen and only restarting helps. Can you guess what causes the problem?
The pipelines I use:
streaming device:
gst-launch-1.0 -vem --gst-debug=3 v4l2src device=/dev/video0 ! video/x-raw,width=640,height=480,framerate=\(fraction\)30/1 ! v4l2h264enc extra-controls=s,video_bitrate=250000 capture-io-mode=4 output-io-mode=4 ! "video/x-h264,level=(string)4" ! rtph264pay config-interval=1 ! multiudpsink clients="127.0.0.1:5008,10.123.0.2:5008"
client:
udpsrc port=5008 do-timestamp=true ! application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264, payload=(int)96 ! rtpjitterbuffer latency=100 drop-on-latency=true drop-messages-interval=100000000 ! queue max-size-buffers=20000 ! rtph264depay ! h264parse ! avdec_h264 ! videoconvert ! glupload ! qmlglsink name=qmlglsink sync=false
The hardware I use is a PS3 Eye camera and an LTE modem with a pretty low uplink of 1-2 Mbit/s, everything running on a Raspberry Pi 3B+ with 1 GB of RAM.
For more debug info, from the log file: after the last registered dropped frame, every subsequent "cycle" sends a new query, loops over the GstElements from the sink back to the source (my camera), and ends with the maximum query duration (the highlighted query goes to v4l2src).
Do you know how to overcome this problem?
The problem has been resolved. The issue was not variable encoder bitrate.
A more detailed inspection, and the pipeline that works for me, can be found in this GStreamer issue page.

How to form a gstreamer pipeline to encode mp4 video from tiff files?

I'm new to GStreamer and am stuck trying to form a GStreamer pipeline to encode mp4 video from TIFF files on the Nvidia Jetson platform. Here is the pipeline I've come up with:
gst-launch-1.0 multifilesrc location=%03d.tiff index=0 start-index=0 stop-index=899 blocksize=720000 num-buffers=900 do-timestamp=true typefind=true ! 'video/x-raw,format=(string)RGB,width=(int)1280,height=(int)720,framerate=(fraction)30/1' ! videoconvert ! 'video/x-raw,format=(string)I420,framerate=(fraction)30/1' ! omxh264enc ! 'video/x-h264,stream-format=(string)byte-stream,framerate=(fraction)30/1' ! h264parse ! filesink sync=true location=test.mp4 -e
With this, the mp4 file gets created successfully and plays, but the actual video content is all garbled. Any idea what I am doing wrong? Thank you.
You are not demuxing/decoding your TIFF data, so you are throwing random bytes at the encoder.
You are also setting a lot of caps without the proper elements in between that could actually convert the formats.
You should use decodebin to let GStreamer handle most of this automatically, e.g. something like this:
multifilesrc ! decodebin ! videoconvert ! omxh264enc ! h264parse ! filesink
Depending on your encoder, you may want to force the color format to 4:2:0 so that it does not accidentally encode in 4:4:4 (which is not very common and is not supported by many encoders):
multifilesrc ! decodebin ! videoconvert ! video/x-raw, format=I420 ! omxh264enc ! h264parse ! filesink
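If you build this from code instead of gst-launch, the only non-obvious part is that decodebin creates its source pad at runtime, after typefinding. A rough sketch under those assumptions (file locations carried over from the question, omxh264enc being Jetson-specific; like the answer's sketch it writes a raw H.264 byte stream, so the output is named test.h264 here - put an mp4mux before the filesink if you want a playable .mp4):

#include <gst/gst.h>

/* decodebin's source pad only appears once the TIFF data has been
 * typefound, so link it to videoconvert from the pad-added callback. */
static void on_pad_added(GstElement *dec, GstPad *pad, gpointer user_data) {
  GstElement *conv = GST_ELEMENT(user_data);
  GstPad *sinkpad = gst_element_get_static_pad(conv, "sink");
  if (!gst_pad_is_linked(sinkpad))
    gst_pad_link(pad, sinkpad);
  gst_object_unref(sinkpad);
}

int main(int argc, char *argv[]) {
  gst_init(&argc, &argv);

  GstElement *pipeline = gst_pipeline_new("tiff-encode");
  GstElement *src   = gst_element_factory_make("multifilesrc", NULL);
  GstElement *dec   = gst_element_factory_make("decodebin", NULL);
  GstElement *conv  = gst_element_factory_make("videoconvert", NULL);
  GstElement *capsf = gst_element_factory_make("capsfilter", NULL);
  GstElement *enc   = gst_element_factory_make("omxh264enc", NULL);
  GstElement *parse = gst_element_factory_make("h264parse", NULL);
  GstElement *sink  = gst_element_factory_make("filesink", NULL);
  if (!pipeline || !src || !dec || !conv || !capsf || !enc || !parse || !sink) {
    g_printerr("Could not create all elements\n");
    return 1;
  }

  g_object_set(src, "location", "%03d.tiff",
               "start-index", 0, "stop-index", 899, NULL);
  /* Force 4:2:0 so the encoder does not pick 4:4:4. */
  GstCaps *caps = gst_caps_from_string("video/x-raw,format=I420");
  g_object_set(capsf, "caps", caps, NULL);
  gst_caps_unref(caps);
  g_object_set(sink, "location", "test.h264", NULL);

  gst_bin_add_many(GST_BIN(pipeline), src, dec, conv, capsf, enc, parse, sink, NULL);
  gst_element_link(src, dec);
  g_signal_connect(dec, "pad-added", G_CALLBACK(on_pad_added), conv);
  gst_element_link_many(conv, capsf, enc, parse, sink, NULL);

  gst_element_set_state(pipeline, GST_STATE_PLAYING);
  GstBus *bus = gst_element_get_bus(pipeline);
  GstMessage *msg = gst_bus_timed_pop_filtered(
      bus, GST_CLOCK_TIME_NONE,
      (GstMessageType)(GST_MESSAGE_ERROR | GST_MESSAGE_EOS));
  if (msg != NULL)
    gst_message_unref(msg);
  gst_object_unref(bus);
  gst_element_set_state(pipeline, GST_STATE_NULL);
  gst_object_unref(pipeline);
  return 0;
}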

How do I save a video with an alpha channel in GStreamer?

I have a collection of RGBA png files, and have verified the presence of an alpha layer on each file:
gst-launch-1.0 multifilesrc location="pics/%d.png" ! decodebin ! videorate ! videoconvert ! video/x-raw,format=BGRA,framerate=60/1 ! videomixer background=checker ! videoconvert ! ximagesink
I want to take these files and make them into a video file (in any format that GStreamer will readily handle with a simple decodebin). What would be a good set of encoders, containers, and elements to use for this?
I've tried avimux, but no alpha data was saved. I also tried avenc_huffyuv, and its output would decode fine as raw data using avdec_huffyuv, but decodebin could not detect it.
Nothing like a good night's sleep to solve an issue...
Apparently the huffyuv encoder and the AVI muxer work nicely together to preserve transparency:
gst-launch-1.0 multifilesrc location="pics/%d.png" ! decodebin ! videorate ! videoconvert ! video/x-raw,format=BGRA,framerate=60/1 ! avenc_huffyuv ! avimux ! filesink location=/tmp/test.avi
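To double-check that the alpha actually survives the round trip, the checkerboard trick from the question should work on the result too (an assumption, not something verified in the answer): feed the file back in with filesrc location=/tmp/test.avi ! decodebin in place of the multifilesrc branch, keep the videomixer background=checker stage, and look for the checker pattern showing through the transparent regions.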

Gstreamer streaming multiple cameras over RTP while saving each stream

I am currently working on a project that utilizes an Nvidia Jetson. We need to stream 3 cameras over RTP/UDP to a single destination (unicast), while saving the contents of all three cameras.
I am having issues with my pipeline; it is probably a simple mistake somewhere that I am simply not seeing.
gst-launch-1.0 -e v4l2src device=/dev/video0 ! 'video/x-raw, width=(int)640, height=(int)480' ! tee name=c c. ! queue ! omxvp8enc bitrate=1500000 ! rtpvp8pay ! udpsink bind-port=8574 host=129.21.57.204 port=8574 loop=false c. ! queue ! omxh264enc bitrate=1500000 ! mp4mux ! queue ! filesink location=test-RightFacingCamera.mp4 v4l2src device=/dev/video1 ! 'video/x-raw, width=(int)640, height=(int)480' ! tee name=b b. ! queue ! omxvp8enc bitrate=1500000 ! rtpvp8pay ! udpsink bind-port=8564 host=129.21.57.204 port=8564 loop=false b. ! queue ! omxh264enc bitrate=1500000 ! mp4mux ! queue ! filesink location=test-LeftFacingCamera.mp4 v4l2src device=/dev/video2 ! 'video/x-raw, width=(int)640, height=(int)480' ! tee name=a a. ! queue ! omxvp8enc bitrate=1500000 ! rtpvp8pay ! udpsink bind-port=8554 host=129.21.57.204 port=8554 loop=false a. ! queue ! omxh264enc bitrate=1500000 ! mp4mux ! queue ! filesink location=test-FrontFacingCamera.mp4
Now the issue here is that 2 of the 3 streams will simply stop without cause; there is no debug information at all, they simply cease to stream and write to file after about 2 minutes of uptime.
Additionally, I have considered converting this into C/C++ with GStreamer, but I would not know where to begin if someone would like to point me in a direction. Currently I have JavaScript code written up that detects each camera by serial number, assigns a port to the given camera, and then runs this command.
Thanks for any help.
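As for where to begin with C/C++: the usual starting point is gst_parse_launch on a single camera branch plus a GLib main loop that watches the pipeline bus. A minimal sketch, assuming GStreamer 1.x; the device, host, port and file name below are placeholders, not your real values:

#include <gst/gst.h>

static gboolean on_bus_message(GstBus *bus, GstMessage *msg, gpointer user_data) {
  GMainLoop *loop = (GMainLoop *)user_data;
  switch (GST_MESSAGE_TYPE(msg)) {
    case GST_MESSAGE_ERROR: {
      GError *err = NULL;
      gchar *debug = NULL;
      gst_message_parse_error(msg, &err, &debug);
      g_printerr("Error from %s: %s\n", GST_OBJECT_NAME(msg->src), err->message);
      g_error_free(err);
      g_free(debug);
      g_main_loop_quit(loop);
      break;
    }
    case GST_MESSAGE_EOS:
      g_main_loop_quit(loop);
      break;
    default:
      break;
  }
  return TRUE;
}

int main(int argc, char *argv[]) {
  gst_init(&argc, &argv);
  GMainLoop *loop = g_main_loop_new(NULL, FALSE);

  /* One camera branch: tee into an RTP/UDP leg and an mp4 recording leg. */
  GError *error = NULL;
  GstElement *pipeline = gst_parse_launch(
      "v4l2src device=/dev/video0 ! video/x-raw,width=640,height=480 ! "
      "tee name=t "
      "t. ! queue ! omxvp8enc bitrate=1500000 ! rtpvp8pay ! "
      "udpsink host=127.0.0.1 port=8574 "
      "t. ! queue ! omxh264enc bitrate=1500000 ! mp4mux ! "
      "filesink location=cam0.mp4",
      &error);
  if (pipeline == NULL) {
    g_printerr("Parse error: %s\n", error->message);
    g_clear_error(&error);
    return 1;
  }

  GstBus *bus = gst_element_get_bus(pipeline);
  gst_bus_add_watch(bus, on_bus_message, loop);
  gst_object_unref(bus);

  gst_element_set_state(pipeline, GST_STATE_PLAYING);
  g_main_loop_run(loop);

  /* To stop recording cleanly from your own code, send
   * gst_element_send_event(pipeline, gst_event_new_eos()) and wait for
   * the EOS message before this point, so mp4mux can finalize the file
   * (this is what gst-launch's -e flag does for you). */
  gst_element_set_state(pipeline, GST_STATE_NULL);
  gst_object_unref(pipeline);
  g_main_loop_free(loop);
  return 0;
}

Your JavaScript logic that maps serial numbers to ports can then simply format this launch string per camera (e.g. with g_strdup_printf) and spawn one pipeline per device.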
This issue was caused by the cameras themselves. It turns out that ECON-brand cameras have an issue where three identical cameras will not work in v4l2. My team and I bought new cameras, all of an identical model, to test, and it works fine.
We were using the ECONs because of their supposed scientific quality and USB 3 speeds. Unfortunately we do not get USB 3 speeds or bandwidth, so we are stuck at a lower resolution.
Hope that helps anyone who runs into a similar problem; the current cameras that all seem to work asynchronously over USB 2.0 are Logitech C922s.
This is a USB bandwidth limitation of the Jetson. You can support 3 cameras at a time by compromising on the frame rate. Compare with the Logitech camera: it is an H.264 camera (it delivers compressed frames), so it can afford to give 60 fps within the available bandwidth.

Problems with video playback

I have an H.264 video track and an AAC audio track inside an mp4 container and I want to play it, but when I run my pipeline only the first frame is shown and there is no sound.
Here's my pipeline:
gst-launch filesrc location=/home/dmitry/Downloads/big_buck_bunny.mp4 ! qtdemux name=demux \
demux.audio_00 ! queue ! faad ! audioconvert ! audioresample ! autoaudiosink \
demux.video_00 ! queue ! ffdec_h264 ! ffmpegcolorspace ! autovideosink
Your queues might not be large enough for this scenario. You should try using playbin2 or decodebin for decoding; they will automatically adjust the queue sizes for playback.
If you have to stick with this pipeline, try setting larger values for the max-size-* properties on the queues.
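For reference (illustrative numbers, not from the answer): a queue defaults to max-size-buffers=200, max-size-bytes=10485760 (10 MB) and max-size-time=1000000000 (1 second, since the property is in nanoseconds). Raising max-size-time to e.g. 10000000000 allows ten seconds of buffering, and setting all three properties to 0 removes the limits entirely, at the cost of unbounded memory use.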
On a side note: please move to the 1.2 version; 0.10 has been obsolete for 2 years now.