I’m trying to create the following pipeline:
On Jetson:
1.1) camera -> … -> udpsink
1.2) udpsrc -> rtspserver
On Host PC:
2.1) rtspsrc -> jitterbuffer -> detection -> tracker -> analytics
The main question is:
My Jetson is connected to the Host PC over a local WiFi network; I've chosen a Tenda Nova MW3 mesh WiFi system. When the Jetson roams from one WiFi access point to another, I lose some frames (from 0.5 to 10 seconds of stream). I understand that we can't get a perfectly seamless WiFi network, and that the system will "lose some frames" during reconnection.
I've tried setting a buffer on udpsrc and udpsink, and I've tried enabling do-retransmission on rtspsrc, but it didn't work, or maybe I did it wrong.
How do I set up a buffer in the RTSP server so that the Jetson keeps frames while it reconnects to another WiFi access point, then resumes sending from the last point to the Host PC?
Should I set up the buffer on udpsink, udpsrc, or the RTSP server?
How do I configure rtspsrc to recover the frames from the lost interval?
I lose some frames (from 0.5 to 10 seconds of stream)
Maybe it's because the decoder in the Host pipeline misses an I-frame and has to wait for the next one; the 0.5-10 s range depends on where the first received frame sits in the GOP (the GOP length can be changed via an encoder property).
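A quick sanity check on that 0.5-10 s range: if the stream resumes right after an I-frame, the decoder waits roughly one full GOP for the next one. The frame rate and GOP lengths below are assumed values for illustration, not taken from the question:

```python
# Worst-case decoder recovery time after joining a stream mid-GOP:
# the decoder cannot produce pictures until the next I-frame arrives.
def worst_case_wait(gop_frames: int, fps: float) -> float:
    """Seconds until the next I-frame, assuming we join just after one."""
    return gop_frames / fps

fps = 30.0
print(worst_case_wait(15, fps))   # short GOP  -> 0.5 s blackout
print(worst_case_wait(300, fps))  # long GOP   -> 10.0 s blackout
```

This matches the observed spread: the blackout scales directly with the encoder's GOP setting.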
You can use an RTMP server instead of RTSP; SRS is a good choice, and it lets you enable the GOP cache. The pipeline would look like: camera -> encoder -> flvmux -> rtmpsink
vhost __defaultVhost__ {
gop_cache on;
}
Latency will increase while the GOP cache is enabled, so the encoder's GOP should not be that large; maybe 2 s is good.
The pipeline on the Host PC may receive video data it had already processed before the WiFi reconnect, and latency is higher compared with the GOP cache disabled. If either matters, you should drop the outdated frames after decoding.
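Dropping outdated frames after decoding could look like the sketch below; the 2 s threshold and the frame/timestamp representation are assumptions for illustration, not part of SRS or the pipeline above:

```python
def drop_outdated(frames, now, max_age=2.0):
    """Keep only frames whose presentation timestamp (seconds) is within
    max_age of the current clock; anything older is replayed GOP-cache
    history from before the reconnect."""
    return [f for f in frames if now - f["pts"] <= max_age]

decoded = [{"pts": 10.0}, {"pts": 11.5}, {"pts": 12.9}]
# With the clock at 13.0 s, the 10.0 s frame is stale cache replay.
print(drop_outdated(decoded, now=13.0))
```

The max_age value should be tuned to the GOP cache length, so that only the pre-reconnect replay is discarded, not fresh frames.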
Related
I'm trying to figure out how RTSP works once the handshake is complete.
Does the client have to query the server for each new piece of data? Or does the server send data continuously, regardless of how the client receives it?
The reason I ask is my GStreamer pipeline receiving an RTSP stream from an IP camera:
uridecodebin -> nvstreammux -> queue1 -> nvinfer -> queue2 -> fakesink
The IP camera delivers 30 FPS but the nvinfer element can process only 10 FPS. So I assumed that pending frames are stored in the queue1 element waiting to be processed. However, the current number of buffers in queue1 is 1 all the time.
So one possible answer is that frames or packets are queued in one of the elements inside uridecodebin, but I did not find any such element there. Alternatively, uridecodebin may have told the IP camera to decrease its FPS. Or, if uridecodebin has to ask for each new piece of data, it may simply request new data only after all frames in the pipeline have been processed.
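One more hypothesis worth considering: a bounded, leaky queue drops its oldest buffer when full, so its fill level stays flat even with a 30 FPS producer and a 10 FPS consumer. This plain-Python sketch (a deque standing in for such a queue; the capacity and frame counts are made up) shows the effect:

```python
from collections import deque

# A bounded deque behaves like a leaky queue: once full, appending a
# new frame silently evicts the oldest one. The fill level never
# exceeds the capacity even though the producer outruns the consumer.
q = deque(maxlen=5)
for frame_id in range(30):      # 30 FPS producer pushes every tick
    q.append(frame_id)
    if frame_id % 3 == 0:       # 10 FPS consumer services every 3rd tick
        q.popleft()

print(len(q), list(q))          # capped at 5, holding only recent frames
```

If something upstream of queue1 behaves this way (or the jitterbuffer discards late packets), queue1 would indeed sit near-empty while older frames quietly disappear.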
Do you have any idea?
First, it is worth mentioning that RTSP is a control protocol - the actual video media is usually sent over the RTP protocol in most 'RTSP' video streaming cases.
RTSP is a streaming control protocol, used to control streaming servers (whoami's remote-control analogy here is nice: https://stackoverflow.com/a/43045354/334402). It defines how both ends of the connection should behave to set up, play, pause, and tear down a stream.
So RTSP does not actually transport the media data itself - as mentioned above it is usually RTP (Real Time Transport) that does this.
Back to your question: RTP and RTCP can be carried over UDP or TCP in IP networks. In the UDP case, a socket is opened, the stream is established, and the client can sit back and wait for the packets to arrive. In the TCP case, delivery is reliable and flow-controlled, so the receiver effectively paces the sender - closer to the request-driven model you outline.
In practice you will likely find TCP is more commonly used, even though it is not necessarily the best match for streamed video, because it is better suited to traversing the many firewalls and NATs on the internet.
In fact, it sometimes gets more complicated, with the RTSP data interleaved with RTP and RTCP (Real-Time Control Protocol, which collects statistics about the flow) packets over TCP - you can see the full detail in the IETF RFC: https://datatracker.ietf.org/doc/html/rfc2326
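The transport choice is negotiated in the RTSP SETUP request's Transport header (RFC 2326). A minimal sketch of the two variants; the URL and port numbers are made up:

```python
def setup_request(url: str, transport: str, cseq: int = 3) -> str:
    """Build a minimal RTSP SETUP request selecting the RTP transport."""
    return (
        f"SETUP {url} RTSP/1.0\r\n"
        f"CSeq: {cseq}\r\n"
        f"Transport: {transport}\r\n"
        "\r\n"
    )

# RTP over UDP: the server pushes packets to the given client ports.
udp = setup_request("rtsp://example.com/stream/track1",
                    "RTP/AVP;unicast;client_port=5000-5001")

# RTP interleaved over the existing RTSP TCP connection (channels 0-1).
tcp = setup_request("rtsp://example.com/stream/track1",
                    "RTP/AVP/TCP;unicast;interleaved=0-1")

print(udp)
print(tcp)
```

In both cases, once PLAY is issued the server pushes media; the SETUP exchange only decides which path the packets take.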
Summary: How do I stream high quality video using WebRTC native?
I have an h264 stream that's 1920x1080 at about 30fps. I can currently stream this from a server on localhost to a native client on localhost just fine.
I wrote a WebRTC server using Google's WebRTC native library. I've written a VideoEncoder and VideoEncoderFactory that take frames consisting of already-encoded data and broadcast them over a video track. Using this I can send my h264 stream to the WebRTC server over a pipe, and I can see the video stream in a browser.
However, any time something moves the video gets corrupted. It continues to play but is full of artifacts. Eventually I discovered that WebRTC is dropping some of my frames. When I attach a sequentially increasing ID to each frame before I pass it to rtc::AdaptedVideoTrackSource::OnFrame and I log this same ID in webrtc::VideoEncoder::Encode I can see that some of my frames simply disappear.
This kind of makes sense: I'm trying to stream high-quality video over something meant for video chat, and lowering my framerate fixes the corruption. However, I'm not asking the WebRTC library to do a lot; it's just forwarding already-encoded data to a client on localhost. I have a native app that does this fine, and I've seen one browser WebRTC client that can do it. Is there a field in the SDP or some configuration change that will allow me to stream my video?
This was the solution: How to control bandwidth in WebRTC video call?
I had heard about changing the offer SDP but dismissed it, because I was told the browser accepts unlimited bandwidth by default and that you'd only need to do this if you want to limit bandwidth. However, adding "b=AS:<high number>" has fixed all of my problems.
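That SDP edit can be sketched as follows. Per RFC 4566, a `b=AS:<kbps>` bandwidth line belongs after the `c=` line of the media section; the 10000 kbps figure and the sample SDP are illustrative values, not taken from the question:

```python
def set_video_bandwidth(sdp: str, kbps: int) -> str:
    """Insert (or replace) a b=AS line in the video media section of an SDP."""
    out, in_video = [], False
    for line in sdp.split("\r\n"):
        if line.startswith("m="):
            in_video = line.startswith("m=video")
        if in_video and line.startswith("b=AS:"):
            continue                      # drop any existing bandwidth line
        out.append(line)
        if in_video and line.startswith("c="):
            out.append(f"b=AS:{kbps}")    # Application-Specific max, in kbps
    return "\r\n".join(out)

sdp = "v=0\r\nm=video 9 UDP/TLS/RTP/SAVPF 96\r\nc=IN IP4 0.0.0.0\r\na=mid:0"
print(set_video_bandwidth(sdp, 10000))
```

Apply this munging to the offer SDP before passing it to SetLocalDescription, so the negotiated bandwidth cap is high enough for the 1080p stream.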
Here's something I can't understand. I developed a C++ video streaming Windows app. When streaming multiple video streams between 2 PCs on a local network, I get some latency and frame drops. However, if I add a TeamViewer connection between the 2 machines, there is no more latency and frame drops.
The opposite would seem logical, right? What am I doing wrong?
To me, it looks like there is some buffering on the connection. Adding a TeamViewer connection seems to force a "push" of the data onto the network.
I tried with a VNC connection instead of TeamViewer, but the latency and frame drops remain.
My streamer can use TCP or UDP. I only get the lag with TCP, not with UDP.
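One TCP-only suspect worth ruling out is Nagle's algorithm, which coalesces small writes into larger segments and can add latency to a stream of small video packets (the concurrent TeamViewer traffic could plausibly be masking this, though that is a guess). Disabling it on the sending socket is cheap to try; this is a Python sketch, and the C++ equivalent is the same setsockopt call with TCP_NODELAY:

```python
import socket

# Disable Nagle's algorithm so small writes are pushed onto the wire
# immediately instead of being coalesced into larger TCP segments.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
nodelay = s.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY)
s.close()
print("TCP_NODELAY enabled:", bool(nodelay))
```

If the lag disappears with TCP_NODELAY set, the buffering you observed was the sender's TCP stack, not the network.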
I am using a VGA camera on the input side and a framegrabber for H.264 compression. I am getting an RTSP stream from the framegrabber over Ethernet; this stream is connected to the server laptop via a point-to-point connection.
When I request the RTSP stream from the client side using GStreamer (sometimes VLC), I get the stream for at most 1 minute. After 1 minute the network on the server side goes down - only the server's WiFi connection gets disturbed. The client stays alive in this case too.
I am unable to troubleshoot the exact problem.
I did some wireshark testing with different inputs:
1. Framegrabber with VGA camera
2. Surveillance camera
It works perfectly fine with the surveillance camera.
One thing I have seen is that even when the network breaks down, the framegrabber keeps sending frames to the server. Normally it should stop, but it keeps sending. I am confused here as well.
Configuration:
Framegrabber:
Bitrate - 1 Mbps
Resolution - 720 x 480
Framerate - 30 fps (cannot be changed because of the use of PAL)
Same with the surveillance camera, except the framerate is 25 fps.
Please guide me for solving network breakdown issue.
Thanks in advance!!!
I want to create server and client applications for controlling an electronic device over the network.
The server application should stream video from a web camera (RGB, 320 x 240) together with some information about the current state of the device, and control the microcontroller device via RS-232.
The client application should allow adjusting the control process and show the video from the web camera along with some information about the device.
What I have done: I use the Qt framework to create the GUIs of both applications and the TCP sockets, and I use OpenCV to get images from the camera. The server application compresses images to JPEG, adds some information about the device (~60 bytes), and sends it to the client.
Problem: on the local network everything works fine, but over the Internet I can get only about 15 fps because the JPEG images are too large. With stronger JPEG compression I can get a suitable fps, but with bad image quality. So I wonder: is there a better way to stream video along with some extra information about the current state of the device? Maybe with FFmpeg or something else.
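The bandwidth gap between per-frame JPEG and a real video codec is easy to estimate. The ~30 KB-per-frame figure below is an assumed value for a decent-quality 320 x 240 JPEG, not a measurement:

```python
def mjpeg_kbps(jpeg_bytes: int, fps: int) -> float:
    """Bandwidth of a Motion-JPEG-style stream in kilobits per second."""
    return jpeg_bytes * 8 * fps / 1000

print(mjpeg_kbps(30_000, 25))   # full frame rate: ~6000 kbps
print(mjpeg_kbps(30_000, 15))   # the observed 15 fps: ~3600 kbps
# An inter-frame codec such as H.264 is commonly budgeted at roughly
# 300-500 kbps for 320 x 240 video - an order of magnitude less for
# comparable quality, because it encodes only the changes between frames.
```

This is why switching to an inter-frame codec (e.g. H.264 via FFmpeg's libraries or a GStreamer pipeline) helps far more than cranking up JPEG compression.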
Thanks for your replies, and sorry for my English!