GStreamer streaming multiple cameras over RTP while saving each stream - C++

I am currently working on a project that uses an NVIDIA Jetson. We need to stream three cameras over RTP/UDP to a single host (unicast), while saving the contents of all three cameras.
I am having issues with my pipeline; it is probably a simple mistake somewhere that I am simply not seeing.
gst-launch-1.0 -e \
  v4l2src device=/dev/video0 ! 'video/x-raw, width=(int)640, height=(int)480' ! tee name=c \
    c. ! queue ! omxvp8enc bitrate=1500000 ! rtpvp8pay ! udpsink bind-port=8574 host=129.21.57.204 port=8574 loop=false \
    c. ! queue ! omxh264enc bitrate=1500000 ! mp4mux ! queue ! filesink location=test-RightFacingCamera.mp4 \
  v4l2src device=/dev/video1 ! 'video/x-raw, width=(int)640, height=(int)480' ! tee name=b \
    b. ! queue ! omxvp8enc bitrate=1500000 ! rtpvp8pay ! udpsink bind-port=8564 host=129.21.57.204 port=8564 loop=false \
    b. ! queue ! omxh264enc bitrate=1500000 ! mp4mux ! queue ! filesink location=test-LeftFacingCamera.mp4 \
  v4l2src device=/dev/video2 ! 'video/x-raw, width=(int)640, height=(int)480' ! tee name=a \
    a. ! queue ! omxvp8enc bitrate=1500000 ! rtpvp8pay ! udpsink bind-port=8554 host=129.21.57.204 port=8554 loop=false \
    a. ! queue ! omxh264enc bitrate=1500000 ! mp4mux ! queue ! filesink location=test-FrontFacingCamera.mp4
Now the issue here is that two of the three streams will simply stop without cause, and there is no debug information at all; they simply cease to stream and write to the file after about two minutes of uptime.
Additionally, I have considered converting this into C/C++ with GStreamer, but I would not know where to begin, so I would appreciate it if someone could point me in the right direction. Currently I have JavaScript code written up that detects each camera by serial number, assigns a port to the given camera, and then runs this command.
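In case it helps frame the question, here is a minimal sketch of what I believe the C version might look like, simply handing the same pipeline description to gst_parse_launch (untested; shortened to one camera, with the bind-port/loop options dropped for brevity):

#include <gst/gst.h>

int main(int argc, char *argv[]) {
  GError *error = NULL;
  GstElement *pipeline;
  GstBus *bus;
  GstMessage *msg;

  gst_init(&argc, &argv);

  /* One camera shown for brevity; the full string would repeat the
     v4l2src/tee branch for /dev/video1 and /dev/video2 as above. */
  pipeline = gst_parse_launch(
      "v4l2src device=/dev/video0 ! video/x-raw,width=640,height=480 ! tee name=c "
      "c. ! queue ! omxvp8enc bitrate=1500000 ! rtpvp8pay ! "
      "udpsink host=129.21.57.204 port=8574 "
      "c. ! queue ! omxh264enc bitrate=1500000 ! mp4mux ! queue ! "
      "filesink location=test-RightFacingCamera.mp4",
      &error);
  if (pipeline == NULL) {
    g_printerr("Failed to build pipeline: %s\n", error->message);
    g_error_free(error);
    return 1;
  }

  gst_element_set_state(pipeline, GST_STATE_PLAYING);

  /* Block until an error or end-of-stream; this replaces gst-launch's loop. */
  bus = gst_element_get_bus(pipeline);
  msg = gst_bus_timed_pop_filtered(bus, GST_CLOCK_TIME_NONE,
                                   GST_MESSAGE_ERROR | GST_MESSAGE_EOS);
  if (msg != NULL)
    gst_message_unref(msg);
  gst_object_unref(bus);

  /* Like gst-launch -e, a real application should push EOS before shutdown,
     e.g. gst_element_send_event(pipeline, gst_event_new_eos()), and wait for
     it to drain so that mp4mux can write a playable file. */
  gst_element_set_state(pipeline, GST_STATE_NULL);
  gst_object_unref(pipeline);
  return 0;
}

gst_parse_launch keeps the C code close to the already-tested gst-launch string; once it works, the pipeline can be rebuilt element by element with gst_element_factory_make for finer control.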
Thanks for any help.

This issue was caused by the cameras themselves. It turns out that ECON brand cameras have an issue where three identical cameras will not work together under v4l2. My team and I bought new cameras, all of an identical model, to test, and everything works fine.
We were using the ECONs because of their supposed scientific quality and USB 3 speeds. Unfortunately we do not have USB 3 speeds or bandwidth, so we are stuck at a lower resolution.
Hope that helps anyone who runs into a similar problem; the cameras that currently all work simultaneously over USB 2.0 are Logitech C922s.

This is a USB bandwidth limitation of the Jetson. It can support three cameras at a time by compromising on the frame rate. The Logitech camera, by comparison, is an H.264 camera (it delivers already-compressed frames), so it can afford 60 fps within the available bandwidth.
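For comparison, if a camera advertises a compressed format, it can be captured without raw frames ever crossing the USB bus; a sketch, assuming the camera exposes an H.264 format the way the C920 pipeline further down this page does (the output file name is arbitrary):

gst-launch-1.0 -e v4l2src device=/dev/video0 ! video/x-h264,width=1280,height=720,framerate=30/1 ! h264parse ! mp4mux ! filesink location=cam0.mp4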

Related

GStreamer streaming pipeline optimisation to reduce latency

I am currently streaming my entire Surface Pro's desktop to another device on my local network.
I would like to optimize and reduce lag as much as possible between the two devices.
I currently have around 500 ms of latency between what is shown on the screen of the streaming device and what I see on the device receiving and displaying the stream.
I am using UDP instead of TCP, and the video is encoded with H264.
gst-launch-1.0.exe dxgiscreencapsrc width=2880 height=1920 monitor=0 ! video/x-raw,framerate=30/1 ! queue ! videoconvert ! videoscale ! video/x-raw,format=NV12,width=1440,height=960 ! videoconvert ! videocrop top=50 left=20 bottom=280 right=20 ! queue ! mfh264enc rc-mode=0 low-latency=true bitrate=1500 gop-size=10 ! queue ! rtph264pay ! udpsink host=127.0.0.1 port=49800
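On the receiving side, much of the end-to-end delay typically sits in the jitter buffer and in sink synchronization. A sketch of a low-latency receiver matching the sender above (the caps, jitter-buffer latency, and payload value are assumptions, not from the original post):

gst-launch-1.0 udpsrc port=49800 caps="application/x-rtp,media=(string)video,clock-rate=(int)90000,encoding-name=(string)H264,payload=(int)96" ! rtpjitterbuffer latency=50 ! rtph264depay ! h264parse ! avdec_h264 ! videoconvert ! autovideosink sync=false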

GStreamer pipeline stops playing after fast and shaky camera movement

I'm working on a video-streaming wearable device. During testing, it came up that the pipeline clock and the stream stop while fast walking or running. It's bizarre behaviour, because the debug messages show no errors about a broken pipeline, apart from lost frames. It just freezes, and only a restart helps. Can you guess what causes the problem?
The pipelines I use:
streaming device:
gst-launch-1.0 -vem --gst-debug=3 v4l2src device=/dev/video0 ! video/x-raw,width=640,height=480,framerate=\(fraction\)30/1 ! v4l2h264enc extra-controls=s,video_bitrate=250000 capture-io-mode=4 output-io-mode=4 ! "video/x-h264,level=(string)4" ! rtph264pay config-interval=1 ! multiudpsink clients="127.0.0.1:5008,10.123.0.2:5008"
client:
udpsrc port=5008 do-timestamp=true ! application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264, payload=(int)96 ! rtpjitterbuffer latency=100 drop-on-latency=true drop-messages-interval=100000000 ! queue max-size-buffers=20000 ! rtph264depay ! h264parse ! avdec_h264 ! videoconvert ! glupload ! qmlglsink name=qmlglsink sync=false
The hardware I use is a PS3 Eye cam and an LTE modem to transmit the video, with a pretty low uplink of 1-2 Mbit/s, everything running on a Raspberry Pi 3B+ with 1 GB of RAM.
For more debug info, there are also pictures of the log file after the last registered dropped frame: every subsequent "cycle" sends a new query, loops over the GstElements from the sink to the source (which is my camera), and ends with the maximum query duration (the highlighted query to v4l2src).
Do you know how to overcome this problem?
The problem has been resolved. The issue was not variable encoder bitrate.
A more detailed inspection, and the pipeline that works for me, can be found on this GStreamer issue page.

Resample and depayload audio RTP using GStreamer

I am developing an application that reads a wave file from a given location at one end of a pipeline and has a udpsink at the other end.
gst-launch-1.0 filesrc location=/path/to/wave/file/Tornado.wav ! wavparse ! audioconvert ! audio/x-raw,channels=1,depth=16,width=16,rate=44100 ! rtpL16pay ! udpsink host=xxx.xxx.xxx.xxx port=5000
The above wave file has a sampling rate of 44100 Hz and a single channel (mono).
On the same PC, I am using a C++ application to catch these packets and depayload them to a headerless audio file (say, Tornado.raw).
The pipeline I am creating for this is basically
gst-launch-1.0 udpsrc port=5000 ! "application/x-rtp,media=(string)audio, clock-rate=(int)44100, width=16, height=16, encoding-name=(string)L16, encoding-params=(string)1, channels=(int)1, channel-positions=(int)1, payload=(int)96" ! rtpL16depay ! filesink location=Tornado.raw
Now this works fine. I get the headerless data, and when I play it using Audacity it plays great!
I am trying to resample this audio file, while it is in the pipeline, from 44100 Hz to 8000 Hz.
Simply changing clock-rate=(int)44100 to clock-rate=(int)8000 does not help (and is logically absurd anyway).
I am looking for how to get the headerless file at the pipeline output with 8000 Hz sampling.
Also, the data that I am getting now is big-endian, but I want little-endian as output. How do I set that in the pipeline?
You might relate this to one of my earlier questions.
First, you have some weird caps in your pipeline: width and height belong to video, so here they will most likely just be ignored. I'm not sure about some of the others either.
For the actual question: just use GStreamer's audioresample and audioconvert elements to convert to your desired format.
E.g.
[..] ! rtpL16depay ! audioresample ! audioconvert ! \
audio/x-raw, rate=8000, format=S16LE ! filesink location=Tornado.raw
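Putting that together with the receiving pipeline from the question (and dropping the stray width/height fields from the caps), the whole thing would look something like:

gst-launch-1.0 udpsrc port=5000 ! "application/x-rtp, media=(string)audio, clock-rate=(int)44100, encoding-name=(string)L16, encoding-params=(string)1, channels=(int)1, payload=(int)96" ! rtpL16depay ! audioresample ! audioconvert ! audio/x-raw, rate=8000, format=S16LE ! filesink location=Tornado.raw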

Use GStreamer to stream video and audio of a Logitech C920

I'm quite a newbie at using GStreamer. I want to stream video and audio from my C920 webcam to another PC, but I keep getting things wrong when combining the pieces.
I can now stream h264 video from my C920 to another PC using:
gst-launch-1.0 v4l2src device=/dev/video1 ! video/x-h264,width=1280,height=720,framerate=30/1 ! h264parse ! rtph264pay pt=127 config-interval=4 ! udpsink host=172.19.3.103
And view it with:
gst-launch-1.0 udpsrc port=1234 ! application/x-rtp, payload=127 ! rtph264depay ! avdec_h264 ! xvimagesink sync=false
I can also get the audio from the C920 and record it to a file together with a test image:
gst-launch videotestsrc ! videorate ! video/x-raw-yuv,framerate=5/1 ! queue ! theoraenc ! queue ! mux. pulsesrc device="alsa_input.usb-046d_HD_Pro_Webcam_C920_F1894590-02-C920.analog-stereo" ! audio/x-raw-int,rate=48000,channels=2,depth=16 ! queue ! audioconvert ! queue ! vorbisenc ! queue ! mux. oggmux name=mux ! filesink location=stream.ogv
But I'm trying to get something like this (below) to work. It is not working; presumably it's even a very bad combination I've made!
gst-launch v4l2src device=/dev/video1 ! video/x-h264,width=1280,height=720,framerate=30/1 ! queue ! mux. pulsesrc device="alsa_input.usb-046d_HD_Pro_Webcam_C920_F1894590-02-C920.analog-stereo" ! audio/x-raw-int,rate=48000,channels=2,depth=16 ! queue ! audioconvert ! queue ! x264enc ! queue ! udpsink host=127.0.0.1 port=1234
You should encode your video before linking it to the mux. Also, I do not see you declaring the type of muxer you are using, and you do not put the audio into the mux.
I am not sure it is even possible to send audio AND video over the same RTP stream in this manner in GStreamer. I know that the RTSP server implementation in GStreamer allows audio and video together, but even there I am not sure whether it is still two streams that are just abstracted away from the implementation.
You may want to just use two separate streams and pass them through to a gstrtpbin element.
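A sketch of that two-stream approach in 1.0 syntax, reusing the working H.264 video pipeline from above and sending the audio as a second RTP stream on its own port (the second port number and the L16 payloading are illustrative choices; gstrtpbin/rtpbin would additionally give you proper RTCP):

gst-launch-1.0 v4l2src device=/dev/video1 ! video/x-h264,width=1280,height=720,framerate=30/1 ! h264parse ! rtph264pay pt=127 config-interval=4 ! udpsink host=172.19.3.103 port=1234 \
  pulsesrc device="alsa_input.usb-046d_HD_Pro_Webcam_C920_F1894590-02-C920.analog-stereo" ! audioconvert ! rtpL16pay ! udpsink host=172.19.3.103 port=1236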

Problems with video playback

I have an H264 video track and an AAC audio track inside an MP4 container, and I want to play the file, but when I run my pipeline only the first frame is shown and there is no sound.
Here's my pipeline:
gst-launch filesrc location=/home/dmitry/Downloads/big_buck_bunny.mp4 ! qtdemux name=demux \
demux.audio_00 ! queue ! faad ! audioconvert ! audioresample ! autoaudiosink \
demux.video_00 ! queue ! ffdec_h264 ! ffmpegcolorspace ! autovideosink
Your queues might not be large enough for this scenario. You should try using playbin2 or decodebin for decoding; they will automatically adjust the queue sizes for playback.
If you have to stick to this pipeline, try setting larger values for the max-size-* properties on the queues.
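For example (a sketch; setting a max-size-* property to 0 removes that particular limit):

demux.audio_00 ! queue max-size-buffers=0 max-size-bytes=0 max-size-time=0 ! faad ! audioconvert ! audioresample ! autoaudiosink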
On a plus side: please move to the 1.2 version; 0.10 has been obsolete for 2 years now.
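With playbin (the 1.x successor of playbin2), the whole pipeline reduces to:

gst-launch-1.0 playbin uri=file:///home/dmitry/Downloads/big_buck_bunny.mp4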