C++ TCP socket speed changes

Here's something I can't understand. I developed a C++ video streaming Windows app. When streaming multiple video streams between two PCs on a local network, I get some latency and frame drops. However, if I add a TeamViewer connection between the two machines, there is no more latency and no more frame drops.
The opposite would be logical, right? What am I doing wrong?
To me, it looks like there is some buffering on the connection. Adding a TeamViewer connection seems to force a "push" of the data onto the network.
I tried with a VNC connection instead of TeamViewer, but the latency and frame drops remain.
My streamer can use TCP or UDP. I only get the lag with TCP, not with UDP.
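One thing worth ruling out, given the buffering hypothesis above: Nagle's algorithm coalesces small TCP writes and can add exactly this kind of delay to small, frequent video packets. Below is a minimal Winsock sketch of disabling it on the sending socket; this is an experiment to try, not a confirmed fix for this particular app.

    // Hedged sketch: disable Nagle's algorithm on an already-created TCP socket.
    // Winsock is used because the streamer is a Windows app; the function name
    // disable_nagle is a placeholder, not part of the original code.
    #include <winsock2.h>
    #include <ws2tcpip.h>

    bool disable_nagle(SOCKET s) {
        BOOL flag = TRUE;  // TRUE = send small packets immediately instead of coalescing
        return setsockopt(s, IPPROTO_TCP, TCP_NODELAY,
                          reinterpret_cast<const char*>(&flag), sizeof(flag)) == 0;
    }

If the lag disappears with TCP_NODELAY set, the buffering was most likely on the sending side (Nagle interacting with delayed ACKs) rather than anywhere in the network itself.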

Related

Accumulative delay in sending a non-stop serial stream over IP - real-time issues?

I've written a program in C++/Qt that reads data from the serial port (ttyS0) and writes it over the network (on a specific TCP port).
At the other end of the network, I read the data from the corresponding socket and write it back out over the serial port.
I've essentially created a transparent serial-to-serial bridge over the IP network.
So far so good.
The problem starts when the input serial data comes in non-stop. Sending the non-stop serial stream (at 19200 bps) over the network (at 100 Mbps) and back out over the remote serial port (at 19200 bps) creates an accumulative delay.
The first serial packets are sent over the remote serial interface with negligible delay (i.e. the total delay from the time I receive the serial data at the local side to the time I write it over the remote serial port is negligible).
But the delay just adds up, and after some time the serial packets received at the local side show up at the remote serial port with huge delays (on the order of minutes and counting)!
Some notes:
I am using two Linux boxes that are directly connected (so network delay is not an issue).
No packet/serial data is lost in the process (the only problem is the accumulative delay).
My OS is not real-time; does this have to do with real-time issues?
I believe the problem stems from small delays that result in the remote serial port becoming idle for short amounts of time.
Since data can leave the system at only 19200 bps, any delay in the process that causes the remote serial port to become idle will lead to an accumulative delay.
But I have no idea how to measure the delays or how to make the program work in a timely manner (one rough way to measure them is sketched below).
Any help/hints are highly appreciated.
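One rough, hypothetical way to measure the delay (none of the names below come from the original program): prepend a wall-clock timestamp to each chunk before it goes onto the TCP socket, and compute the chunk's age on the remote side just before writing it to the serial port. This assumes the two Linux boxes' clocks are roughly synchronized (e.g. via NTP); even if they are not, the growth of the measured value over time still shows the accumulation.

    #include <chrono>
    #include <cstdint>
    #include <cstring>
    #include <vector>

    static int64_t now_us() {
        using namespace std::chrono;
        return duration_cast<microseconds>(
            system_clock::now().time_since_epoch()).count();
    }

    // Local side: frame a serial chunk as [8-byte timestamp][payload].
    std::vector<uint8_t> stamp_chunk(const uint8_t* data, size_t len) {
        int64_t sent = now_us();
        std::vector<uint8_t> out(sizeof(sent) + len);
        std::memcpy(out.data(), &sent, sizeof(sent));
        std::memcpy(out.data() + sizeof(sent), data, len);
        return out;
    }

    // Remote side: call right before the serial write; the result is the
    // end-to-end bridge delay for this chunk, in microseconds.
    int64_t chunk_age_us(const uint8_t* framed) {
        int64_t sent;
        std::memcpy(&sent, framed, sizeof(sent));
        return now_us() - sent;
    }

If the measured age keeps climbing, data really is queueing somewhere in the bridge (socket buffers, the Qt event loop, or the serial driver): since the output drains at exactly the 19200 bps input rate, any idle time on the remote port can never be recovered.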

Concurrent UDP connection limit in C/C++

I wrote a server in C that receives UDP data from clients on port X. I used an epoll (non-blocking) socket for UDP listening and have only one worker thread. The pseudocode is as follows:
    on_data_receive(socket) {
        process();              // takes 2-4 milliseconds
        send_response(socket);
    }
But when I send 5000 concurrent requests (using threads), the server misses 5-10% of them: on_data_receive() is never called for those requests. I am testing on a local network, so you can assume there is no packet loss on the wire. My question is: why isn't on_data_receive() called for some requests? What is the connection limit for a socket? As the number of concurrent requests increases, the loss ratio also increases.
Note: I used a random sleep of up to 200 milliseconds before sending each request to the server.
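For reference, here is a minimal self-contained sketch (not the poster's actual code) of the setup the question describes: one non-blocking UDP socket registered with epoll, drained until EAGAIN on each wakeup so a burst of datagrams is pulled out of the kernel buffer as quickly as possible. The port number is arbitrary.

    #include <sys/epoll.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <cerrno>

    int main() {
        int sock = socket(AF_INET, SOCK_DGRAM, 0);
        fcntl(sock, F_SETFL, fcntl(sock, F_GETFL, 0) | O_NONBLOCK);

        sockaddr_in addr{};
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = INADDR_ANY;
        addr.sin_port = htons(9000);        // "port X" from the question, arbitrary here
        bind(sock, reinterpret_cast<sockaddr*>(&addr), sizeof(addr));

        int epfd = epoll_create1(0);
        epoll_event ev{};
        ev.events = EPOLLIN;
        ev.data.fd = sock;
        epoll_ctl(epfd, EPOLL_CTL_ADD, sock, &ev);

        char buf[2048];
        epoll_event events[16];
        for (;;) {
            int ready = epoll_wait(epfd, events, 16, -1);
            for (int i = 0; i < ready; ++i) {
                // Drain everything queued on the socket before going back to epoll_wait.
                for (;;) {
                    sockaddr_in peer{};
                    socklen_t peer_len = sizeof(peer);
                    ssize_t got = recvfrom(sock, buf, sizeof(buf), 0,
                                           reinterpret_cast<sockaddr*>(&peer), &peer_len);
                    if (got < 0) {
                        if (errno == EAGAIN || errno == EWOULDBLOCK) break;  // drained
                        break;                                               // real error
                    }
                    // process() and send_response() from the pseudocode would go here.
                }
            }
        }
    }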
There is no 'connection' for UDP. All packets are just sent between peers, and the OS does some magic buffering to avoid packet loss to some degree.
But when too many packets arrive, or if the receiving application is too slow reading the packets, some packets get dropped without notice. This is not an error.
For example, Linux has a UDP receive buffer which is about 128 KB by default (I think). You can probably change that (a sketch follows), but it is unlikely to solve the underlying problem that UDP may expose packet loss.
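For illustration, a minimal Linux/POSIX sketch of raising the receive buffer with SO_RCVBUF and reading back what the kernel actually granted (the 4 MB request is an arbitrary example; the kernel caps it at net.core.rmem_max):

    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <unistd.h>
    #include <cstdio>

    int main() {
        int sock = socket(AF_INET, SOCK_DGRAM, 0);

        int requested = 4 * 1024 * 1024;    // arbitrary 4 MB request
        setsockopt(sock, SOL_SOCKET, SO_RCVBUF, &requested, sizeof(requested));

        int granted = 0;
        socklen_t len = sizeof(granted);
        getsockopt(sock, SOL_SOCKET, SO_RCVBUF, &granted, &len);
        std::printf("effective receive buffer: %d bytes\n", granted);  // Linux reports twice the value set
        close(sock);
        return 0;
    }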
With UDP there is no congestion control like there is for TCP. The raw behaviour of the underlying transport (Ethernet, local network) is exposed. Your 5000 senders probably get more CPU time in total than your receiver, so they can send more packets than the receiver can receive. With UDP, senders do not get blocked (e.g. in sendto()) when the receiver cannot keep up receiving the packets. With UDP the sender always needs to control and limit the data rate explicitly (a simple pacing sketch follows the footnote below); there is no back pressure from the network side (*).
(*) Theoretically there is no back pressure in UDP. But on some operating systems (e.g. Linux) you can observe something like back pressure, at least to some degree, when sending over a local Ethernet: the OS blocks in sendto() when the network driver of the physical interface reports that it is busy (or that its buffer is full). This back pressure stops working as soon as the local network adapter cannot determine that the network is "busy" for the whole network path. See also "Ethernet flow control (pause frames)": through this mechanism the sending side can end up blocking the sending application even though it is the receive buffer on the receiving side that is full. This explains why there often seems to be UDP back pressure similar to TCP back pressure, even though there is nothing in the UDP protocol itself to support it.
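A minimal sketch of what "control and limit the data rate explicitly" can look like on the sending side (the function name, rate, and payload are placeholders, not part of the original test client):

    #include <chrono>
    #include <thread>
    #include <cstddef>
    #include <sys/socket.h>

    // Sends `count` datagrams at a fixed packets-per-second budget by sleeping
    // between sends; sleep_until keeps the average rate even if sendto is slow.
    void paced_send(int sock, const sockaddr* dest, socklen_t dest_len,
                    const char* payload, size_t len,
                    int packets_per_second, int count) {
        using namespace std::chrono;
        const auto interval = duration_cast<steady_clock::duration>(
            duration<double>(1.0 / packets_per_second));
        auto next = steady_clock::now();
        for (int i = 0; i < count; ++i) {
            sendto(sock, payload, len, 0, dest, dest_len);
            next += interval;
            std::this_thread::sleep_until(next);
        }
    }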

Network going down on the server side one minute after the client requests RTSP

I am using a VGA camera on the input side and a framegrabber for H.264 compression. I get an RTSP stream from the framegrabber over Ethernet. This stream is connected to the server laptop with a point-to-point connection.
When I request the RTSP stream from the client side using GStreamer (sometimes VLC), I get the stream for at most one minute. After one minute the network on the server side goes down; only the server's WiFi connection gets disturbed. The client stays alive in this case.
I am unable to troubleshoot the exact problem.
I did some Wireshark testing with different inputs:
1. Framegrabber with VGA camera
2. Surveillance camera
It works perfectly fine with the surveillance camera.
One thing I have noticed is that even when the network breaks down, the framegrabber keeps sending frames to the server. Normally it should stop, but it keeps sending; I am confused about this as well.
Configuration:
Framegrabber:
Bitrate - 1 Mbps
Resolution - 720 x 480
Framerate - 30 fps (cannot be changed because of the use of PAL)
Same for the surveillance camera, except the framerate is 25 fps.
Please guide me in solving this network breakdown issue.
Thanks in advance!

TCP streams on iOS don't show up on a wireless network

I am trying to send and receive TCP streams from an iPad via a wireless connection to a laptop. I create the sockets with boost::asio. This project is a port of a data streaming library that I maintain, which works quite well on Windows, OS X, and Linux.
I can get the app to send and receive streams to/from other computers on a wired LAN when I run it in the simulator. But when I target the device itself, I can't see any streams.
As I say, I am communicating over wireless between an iPad and a laptop. I create a wireless network on the laptop and then give the iPad a static IP. The connection appears to be fine, because I can ping the iPad with no packet loss. I have also tried connecting the devices by putting them on the same wireless LAN (although I'm not supposed to use wireless routers at work), and this also does not work.
According to Apple, setting up streams like this with NSStream should just work. Maybe there is some permissions magic happening under the hood that I am not doing with my calls to the boost::asio functions. In any case, I can't see the streams.
Actually, it turns out the only thing that was wrong was that I needed to set up my routing table so that it pointed multicast traffic to the wireless card:
> sudo route -nv add -net 224.0.0.183 -interface en1
I got the IP from inspecting packets in Wireshark -- it is the address that my device is multicasting to on my laptop. Sending works (from device to laptop); receiving is still silent, though. This may be something else that needs to be set in the routing table (I really don't understand much at all about multicasting), or else I can fiddle with some config settings in my library.
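Since the silent direction is receiving, one assumption worth checking is whether the receiving socket actually joins the multicast group. Below is a minimal boost::asio sketch (not the library's actual code, and assuming Boost 1.66+); the port 16571 is a placeholder, and the group address is the one seen in Wireshark above.

    #include <boost/asio.hpp>
    #include <array>
    #include <iostream>

    int main() {
        using boost::asio::ip::udp;
        boost::asio::io_context io;

        // Bind to the multicast port on all interfaces.
        udp::endpoint listen_ep(boost::asio::ip::address_v4::any(), 16571);
        udp::socket sock(io);
        sock.open(listen_ep.protocol());
        sock.set_option(boost::asio::socket_base::reuse_address(true));
        sock.bind(listen_ep);

        // Explicitly join the group; on a multi-homed machine the local
        // interface address can be passed as a second argument to join_group.
        sock.set_option(boost::asio::ip::multicast::join_group(
            boost::asio::ip::make_address_v4("224.0.0.183")));

        std::array<char, 1500> buf;
        udp::endpoint sender;
        std::size_t n = sock.receive_from(boost::asio::buffer(buf), sender);
        std::cout << "received " << n << " bytes from " << sender << std::endl;
        return 0;
    }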

C++ UDP socket corrupts packets above a certain frequency

I am developing a simple file transfer protocol based on UDP.
To make sure that packets are sent correctly, I checksum them, and corrupt packets are dropped at the receiving end. I began by testing my protocol within my home network. I have seen it support several MB/s of upload bandwidth to the internet, so I expected it to perform nicely with two computers connected to the same WiFi router.
What happens is that when I reach about 10000 packets per second (packets of a few bytes only!), packets start arriving massively corrupt: about 40% to 60% fail the checksum. What could be the cause of this problem? Any help would be really appreciated!
UDP is a connectionless protocol - meaning you can send UDP packets at any time; if someone is listening, they'll get the packet. If they don't, they don't. Packets are NOT guaranteed to arrive.
You cannot send UDP packets the same way you do with TCP. You have to handle each packet on its own. For example, with a TCP socket you can write as much data as you want and TCP will get it over there, unless you overflow the socket itself. It's reliable.
UDP is not. If you send a UDP packet and it gets lost, it's lost forever and there is no way to recover it - you'll have to do the recovery yourself in your own protocol above that layer. There is no resend; it's not a reliable connection.
UDP does have a checksum, but it is optional in IPv4 and it only detects corruption; it does nothing about loss.
UDP is great for streaming data, such as music, voice, etc. There are protocols such as RTP above the UDP layer that add sequence numbers and timestamps so that voice coders can conceal lost data.
I bet that if you put a counter in each UDP packet, you'll notice that some of them do not arrive once you exceed a certain bandwidth, and you will definitely run into this if you are connecting through a switch/network. With a direct connection between the two computers it may work at a very high bandwidth, though.
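A minimal sketch of the counter suggestion: a sequence number at the front of each datagram lets the receiver count dropped packets separately from checksum failures. The header layout is hypothetical, and reordering is ignored for simplicity.

    #include <cstdint>

    struct PacketHeader {
        uint32_t sequence;      // incremented by the sender for every datagram
        uint32_t checksum;      // whatever checksum the protocol already uses
    };

    // Receiver-side bookkeeping: call once per datagram that arrives intact.
    void track_sequence(uint32_t received_seq, uint32_t& expected_seq, uint64_t& lost) {
        if (received_seq != expected_seq) {
            // Everything between expected and received was dropped in flight
            // (assumes in-order delivery; reordering would need extra handling).
            lost += static_cast<uint32_t>(received_seq - expected_seq);
        }
        expected_seq = received_seq + 1;
    }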