I am currently streaming my entire Surface Pro's desktop to another device on my local network.
I would like to optimize the stream and reduce lag between the two devices as much as possible.
I currently have around 500 ms of latency between what is shown on the screen of the streaming device and what appears on the device receiving and displaying the stream.
I am using UDP instead of TCP and encoding with H.264:
gst-launch-1.0.exe dxgiscreencapsrc width=2880 height=1920 monitor=0 ! video/x-raw,framerate=30/1 ! queue ! \
  videoconvert ! videoscale ! video/x-raw,format=NV12,width=1440,height=960 ! videoconvert ! \
  videocrop top=50 left=20 bottom=280 right=20 ! queue ! mfh264enc rc-mode=0 low-latency=true \
  bitrate=1500 gop-size=10 ! queue ! rtph264pay ! udpsink host=127.0.0.1 port=49800
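The receiving side is not shown above; for reference, a minimal low-latency receiver on the other device might look like the sketch below. The decoder and sink are assumptions (a hardware decoder such as d3d11h264dec may do better on Windows), and the caps simply mirror what rtph264pay sends by default.
# Receiver sketch: port matches the sender above; jitter-buffer latency, decoder and sink are assumptions
gst-launch-1.0 udpsrc port=49800 caps="application/x-rtp,media=video,encoding-name=H264,clock-rate=90000,payload=96" ! \
  rtpjitterbuffer latency=50 ! rtph264depay ! h264parse ! avdec_h264 ! videoconvert ! autovideosink sync=false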
Related
I am trying to stream a live video feed from a camera connected to a Jetson NX to a computer on the same network. The network works as wireless Ethernet, meaning the Jetson sees it as a wired connection, but in reality it is wireless and limited in bitrate.
On the Jetson side, I am using OpenCV VideoWriter to send frames over the network with this pipeline:
cv::VideoWriter video_write("appsrc ! video/x-raw,format=BGR ! queue ! videoconvert ! video/x-raw,format=BGRx ! nvvidconv "
    "! video/x-raw(memory:NVMM),format=NV12,width=640,height=360,framerate=30/1 ! nvv4l2h264enc insert-sps-pps=1 insert-vui=1 idrinterval=30 "
    "bitrate=1800000 EnableTwopassCBR=1 ! h264parse ! rtph264pay ! udpsink host=169.254.84.2 port=5004 auto-multicast=0",
    cv::CAP_GSTREAMER, 0, 30.0, cv::Size(640, 360), true);
On the receiving computer my video capture is:
cv::VideoCapture video("udpsrc port=5004 auto-multicast=0 ! "
    "application/x-rtp,media=video,encoding-name=H264 ! rtpjitterbuffer latency=0 ! "
    "rtph264depay ! decodebin ! videoconvert ! video/x-raw,format=BGR ! appsink drop=1",
    cv::CAP_GSTREAMER);
The problem is that if the camera jitters or moves a lot in any direction, the video stream either freezes or completely pixelates. I was hoping to get suggestions for a better encoder (I'm not limited to nvv4l2h264enc or to H.264 in general), a fix for the pixelation and freezing, or maybe even a better way to stream the video than VideoWriter.
I am trying to stream 360p video at 30 fps; my bitrate is limited to either 6 Mbps or 2.5 Mbps, depending on the distance I want to allow for. It does not seem to be a network problem, simply because changing parameters such as the codec (MJPG instead of the GStreamer H.264 pipeline, for example) changes the behaviour of the video feed; in my case it reduces the freezing but makes the pixelation worse.
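For reference, a plain gst-launch receiver for the same stream can help rule out the OpenCV side. This is only a sketch; the non-zero jitter-buffer latency is an assumption about absorbing bursts, not a verified fix.
# Test receiver sketch; port and payload match the sender above, latency=100 is an assumed value
gst-launch-1.0 udpsrc port=5004 auto-multicast=0 caps="application/x-rtp,media=video,encoding-name=H264,clock-rate=90000,payload=96" ! \
  rtpjitterbuffer latency=100 ! rtph264depay ! h264parse ! avdec_h264 ! videoconvert ! autovideosink sync=false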
I'm working on a video-streaming wearable device. During tests it turned out that the pipeline clock and the stream stop while fast walking or running. It's bizarre behaviour, because the debug messages show no errors about a broken pipeline, only lost frames. The stream is frozen and only restarting helps. Can you guess what causes the problem?
The pipelines I use:
streaming device:
gst-launch-1.0 -vem --gst-debug=3 v4l2src device=/dev/video0 ! video/x-raw,width=640,height=480,framerate=\(fraction\)30/1 ! v4l2h264enc extra-controls=s,video_bitrate=250000 capture-io-mode=4 output-io-mode=4 ! "video/x-h264,level=(string)4" ! rtph264pay config-interval=1 ! multiudpsink clients="127.0.0.1:5008,10.123.0.2:5008"
client:
udpsrc port=5008 do-timestamp=true ! application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264, payload=(int)96 ! rtpjitterbuffer latency=100 drop-on-latency=true drop-messages-interval=100000000 ! queue max-size-buffers=20000 ! rtph264depay ! h264parse ! avdec_h264 ! videoconvert ! glupload ! qmlglsink name=qmlglsink sync=false
The hardware I use is a PS3 Eye camera and an LTE modem to transmit the video, with a pretty low uplink of 1-2 Mbit/s, everything running on a Raspberry Pi 3B+ with 1 GB of RAM.
For more debug info there are also pictures of the log file after the last registered dropped frame: every subsequent "cycle" sends a new query, loops over the GStreamer elements from sink to source (which is my camera), and ends with the maximum query duration (the highlighted query to v4l2src).
Do you know how to overcome this problem?
The problem has been resolved. The issue was not variable encoder bitrate.
A more detailed inspection and the pipeline that works for me are in this GStreamer issue page.
I am trying to display the camera data, store the video and stream the video over wifi at the same time.
test-launch "(v4l2src device=/dev/video0 ! video/x-raw,framerate=30/1,width=720,height=480 ! tee name=splitter ! queue ! imxvpuenc_h264 bitrate=2000 ! rtph264pay name=pay0 pt=96 splitter. ! queue ! videoconvert ! video/x-raw,width=720,height=480 ! fbdevsink)" &
This command works fine, with one problem: the video on the display does not appear until I connect the Wi-Fi and start the streaming. I would like the playback on the display to start regardless; I don't want it to depend on the Wi-Fi streaming.
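For comparison, a stand-alone pipeline that displays, records and streams over plain UDP does not depend on an RTSP client connecting. This is only a sketch: the host, port and output path are placeholders, and whether one encode can feed both the file and the network this way on the i.MX VPU is an assumption.
# Stand-alone sketch: one capture, one encode, display + record + UDP stream (addresses and paths are placeholders)
gst-launch-1.0 -e v4l2src device=/dev/video0 ! video/x-raw,framerate=30/1,width=720,height=480 ! tee name=raw \
  raw. ! queue ! videoconvert ! fbdevsink \
  raw. ! queue ! imxvpuenc_h264 bitrate=2000 ! h264parse ! tee name=enc \
    enc. ! queue ! mp4mux ! filesink location=/tmp/record.mp4 \
    enc. ! queue ! rtph264pay pt=96 ! udpsink host=192.168.1.10 port=5000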
I am currently working on a project that uses an NVIDIA Jetson. We need to stream three cameras over UDP RTP to a single source (unicast), while saving the contents of all three cameras.
I am having issues with my pipeline; it is probably a simple mistake somewhere that I am simply not seeing.
gst-launch-1.0 -e \
  v4l2src device=/dev/video0 ! 'video/x-raw, width=(int)640, height=(int)480' ! tee name=c \
    c. ! queue ! omxvp8enc bitrate=1500000 ! rtpvp8pay ! udpsink bind-port=8574 host=129.21.57.204 port=8574 loop=false \
    c. ! queue ! omxh264enc bitrate=1500000 ! mp4mux ! queue ! filesink location=test-RightFacingCamera.mp4 \
  v4l2src device=/dev/video1 ! 'video/x-raw, width=(int)640, height=(int)480' ! tee name=b \
    b. ! queue ! omxvp8enc bitrate=1500000 ! rtpvp8pay ! udpsink bind-port=8564 host=129.21.57.204 port=8564 loop=false \
    b. ! queue ! omxh264enc bitrate=1500000 ! mp4mux ! queue ! filesink location=test-LeftFacingCamera.mp4 \
  v4l2src device=/dev/video2 ! 'video/x-raw, width=(int)640, height=(int)480' ! tee name=a \
    a. ! queue ! omxvp8enc bitrate=1500000 ! rtpvp8pay ! udpsink bind-port=8554 host=129.21.57.204 port=8554 loop=false \
    a. ! queue ! omxh264enc bitrate=1500000 ! mp4mux ! queue ! filesink location=test-FrontFacingCamera.mp4
The issue is that two of the three streams simply stop without apparent cause; there is no debug information at all. They cease to stream and to write to file after about two minutes of uptime.
Additionally, I have considered converting this into C/C++ with GStreamer, but I would not know where to begin, if someone would like to point me in a direction. Currently I have JavaScript code written up that detects each camera by serial number, assigns a port to the given camera, and then runs this command.
Thanks for any help.
This issue was caused by the cameras themselves. It turns out that ECON-brand cameras have an issue where three identical cameras will not work in v4l2. My team and I bought new cameras, all of the same model, to test, and it works fine.
We were using the ECONs because of their supposed scientific quality and USB 3 speeds. Unfortunately we do not get USB 3 speeds or bandwidth, so we are stuck at a lower resolution.
Hope that helps anyone who runs into a similar problem; the current cameras that all seem to work together over USB 2.0 are Logitech C922s.
This is a USB bandwidth limitation of the Jetson. We can support three cameras at a time by compromising on the frame rate. By comparison, the Logitech camera is an H.264 camera (it delivers compressed frames), so it can afford to provide 60 fps within the available bandwidth.
I have an H.264 video track and an AAC audio track inside an MP4 container, and I want to play it, but when I run my pipeline only the first frame is shown and there is no sound.
Here's my pipeline:
gst-launch filesrc location=/home/dmitry/Downloads/big_buck_bunny.mp4 ! qtdemux name=demux \
demux.audio_00 ! queue ! faad ! audioconvert ! audioresample ! autoaudiosink \
demux.video_00 ! queue ! ffdec_h264 ! ffmpegcolorspace ! autovideosink
Your queues might not be large enough for this scenario. You should try using playbin2 or decodebin for decoding and it will automatically adjust the queue sizes for playback.
If you have to stick to this pipeline, try setting larger values to the max-size-* properties on the queues.
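For example (keeping the 0.10 elements from your pipeline; the values are only illustrative), you can lift the queue limits entirely, or hand the whole file to playbin2:
# Unlimited queues (0 = no limit) on the pipeline from the question
gst-launch filesrc location=/home/dmitry/Downloads/big_buck_bunny.mp4 ! qtdemux name=demux \
  demux.audio_00 ! queue max-size-buffers=0 max-size-bytes=0 max-size-time=0 ! faad ! audioconvert ! audioresample ! autoaudiosink \
  demux.video_00 ! queue max-size-buffers=0 max-size-bytes=0 max-size-time=0 ! ffdec_h264 ! ffmpegcolorspace ! autovideosink

# Or let playbin2 manage buffering by itself
gst-launch playbin2 uri=file:///home/dmitry/Downloads/big_buck_bunny.mp4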
On the plus side: please move to the 1.2 version; 0.10 has been obsolete for two years now.