I'm writing two programs (in Visual Studio), one sending and one receiving UDP packets (1,000 bytes each). Currently my sender sends a UDP packet, sleeps for 1 ms, then sends another packet. To increase the rate, I made the sender sleep for 1 ms only after sending about 20 packets. The problem is that if I increase the number of packets before each sleep, the receiver tends to miss packets. So I thought maybe if I increase the socket buffer size it will be better?
I know TCP is a safer choice, but let's assume I can only use UDP, and I need to ensure I can send packets at a high rate with no missing packets. What should I do? And how do I check what my socket buffer size is? Currently my program only uses 'sethost' and 'bind', followed by 'sendto' to send UDP packets.
You can set the socket buffer sizes with setsockopt() using the SO_RCVBUF and SO_SNDBUF options, and you can read them back via getsockopt().
However I agree with @JonathonReinhart that this is most unlikely to fix your real problem.
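If you do want to experiment with it anyway, a minimal Winsock sketch looks something like this (the 1 MB request is an arbitrary illustration, adjustBuffers is a hypothetical helper, and sock is assumed to be your already-created UDP socket):

#include <winsock2.h>

// Sketch: request larger socket buffers and read back what was actually granted.
void adjustBuffers(SOCKET sock)
{
    int bufSize = 1 << 20;  // ask for 1 MB; the OS may clamp this
    setsockopt(sock, SOL_SOCKET, SO_SNDBUF, (const char*)&bufSize, (int)sizeof(bufSize));
    setsockopt(sock, SOL_SOCKET, SO_RCVBUF, (const char*)&bufSize, (int)sizeof(bufSize));

    int actual = 0;
    int len = (int)sizeof(actual);
    getsockopt(sock, SOL_SOCKET, SO_RCVBUF, (char*)&actual, &len);
    // actual now holds the effective receive buffer size in bytes
}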
Related
I have a UDP socket in blocking mode; I receive bursts of packets and some are getting lost.
How can I find out current used size in receive buffer in winsock?
How can I understand whether system is discarding packets?
WSAIoctl with FIONREAD is documented this way:
If the socket passed in the s parameter is message oriented (for example, type SOCK_DGRAM), FIONREAD reports the total number of bytes available to read, not the size of the first datagram (message) queued on the socket.
I think this answers your first question. As for the second, I see no way to programmatically figure that out. You should use sequence numbers in your application to detect gaps, and then look at the receive buffer size and guess that if it's close to full, the losses are due to running out of buffer space.
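For illustration, the same FIONREAD query can be issued through the simpler ioctlsocket() call (bytesQueued is a hypothetical helper; sock is assumed to be your datagram socket):

#include <winsock2.h>

// Sketch: ask Winsock how many bytes are currently queued on the socket.
u_long bytesQueued(SOCKET sock)
{
    u_long queued = 0;
    if (ioctlsocket(sock, FIONREAD, &queued) != 0)
        return 0;   // error handling omitted
    // For a SOCK_DGRAM socket this is the total number of bytes queued,
    // not the size of the first datagram.
    return queued;
}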
The receive buffer size for any socket is given by calling getsockopt() with the SO_RCVBUF option.
I don't see how you can distinguish at the receiver between a packet lost in the network and a packet discarded at the local host. All you can tell is that it didn't arrive, and you need a higher-level protocol than UDP to tell you so, a sequence-numbered protocol with ACKs or NACKs.
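A minimal receiver-side sketch of that sequence-number idea (the 32-bit counter at the start of each datagram is an assumption about your packet format, not something UDP provides):

#include <cstdint>
#include <cstring>

uint32_t expectedSeq = 0;

void onDatagram(const char* buf, int len)
{
    if (len < (int)sizeof(uint32_t))
        return;                           // malformed datagram
    uint32_t seq;
    std::memcpy(&seq, buf, sizeof(seq));  // byte-order handling omitted
    if (seq != expectedSeq)
    {
        // seq - expectedSeq datagrams were lost (or reordered) somewhere
        // between the sender's sendto() and this receive.
    }
    expectedSeq = seq + 1;
}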
I have some doubts about increasing the TCP Window Size in my application. In my C++ application, we send data packets of around 1K from client to server over a blocking TCP/IP socket. Recently I came across the concept of TCP Window Size, so I tried increasing the value to 64K using setsockopt() for both SO_SNDBUF and SO_RCVBUF. After increasing this value, I get some performance improvement over a WAN connection but not over a LAN connection.
As per my understanding of TCP Window Size:
The client sends data packets to the server. Upon filling the TCP Window Size, it waits until an ACK is received from the server for the first packet in the window. Over a WAN connection the ACK is delayed because of an RTT of around 100 ms, so in that case increasing the TCP Window Size compensates for the ACK wait time and thereby improves performance.
I want to understand how the performance improves in my application.
In my application, even though the TCP Window Size (both the send and receive buffers) is increased using setsockopt() at socket level, we still keep the same packet size of 1K (i.e. the number of bytes we send from client to server in a single socket send). We also disabled the Nagle algorithm (the built-in option that consolidates small packets into a larger one instead of sending them individually).
My doubts are as follows:
Since I am using a blocking socket, each 1K send should block if an ACK doesn't come back from the server. How, then, does the performance improve after increasing the TCP Window Size for the WAN connection alone? If I have misunderstood the concept of TCP Window Size, please correct me.
For sending 64K of data, I believe I still need to call the socket send function 64 times (since I am sending 1K per send through a blocking socket) even though I increased my TCP Window Size to 64K. Please confirm this.
What is the maximum limit of the TCP window size with window scaling (RFC 1323) enabled?
My English is not that good; if you couldn't understand any of the above, please let me know.
First of all, there is a big misconception evident from your question: that the TCP window size is what is controlled by SO_SNDBUF and SO_RCVBUF. This is not true.
What is the TCP window size?
In a nutshell, the TCP window size determines how much follow-up data (packets) your network stack is willing to put on the wire before receiving acknowledgement for the earliest packet that has not been acknowledged yet.
The TCP stack has to live with and account for the fact that once a packet has been determined to be lost or mangled during transmission, every packet sent, from that one onwards, has to be re-sent since packets may only be acknowledged in order by the receiver. Therefore, allowing too many unacknowledged packets to exist at the same time consumes the connection's bandwidth speculatively: there is no guarantee that the bandwidth used will actually produce anything useful.
On the other hand, not allowing multiple unacknowledged packets at the same time would simply kill the bandwidth of connections that have a high bandwidth-delay product. Therefore, the TCP stack has to strike a balance between using up bandwidth for no benefit and not driving the pipe aggressively enough (and thus allowing some of its capacity to go unused).
The TCP window size determines where this balance is struck.
What do SO_SNDBUF and SO_RCVBUF do?
They control the amount of buffer space that the network stack has reserved for servicing your socket. These buffers serve to accumulate outgoing data that the stack has not yet been able to put on the wire and data that has been received from the wire but not yet read by your application respectively.
If one of these buffers is full you won't be able to send or receive more data until some space is freed. Note that these buffers only affect how the network stack handles data on the "near" side of the network interface (before they have been sent or after they have arrived), while the TCP window affects how the stack manages data on the "far" side of the interface (i.e. on the wire).
Answers to your questions
No. If that were the case then you would incur a roundtrip delay for each packet sent, which would totally destroy the bandwidth of connections with high latency.
Yes, but that has nothing to do with either the TCP window size or with the size of the buffers allocated to that socket.
According to all sources I have been able to find (example), scaling allows the window to reach a maximum size of 1GB.
Since I am using blocking socket, for each data packet send of 1k, it should block if ACK doesn't come from the server.
Wrong. Sending in TCP is asynchronous. send() just transfers the data to the socket send buffer and returns. It only blocks while the socket send buffer is full.
Then how does the performance improve after improving the TCP window Size in WAN connection alone?
Because you were wrong about it blocking until it got an ACK.
For sending 64K of data, I believe I still need to call socket send function 64 times
Why? You could just call it once with the 64k data buffer.
( since i am sending 1k per send through blocking socket)
Why? Or is this a repetition of your misconception under (1)?
even though I increased my TCP Window Size to 64K. Please confirm this.
No. You can send it all at once. No loop required.
What is the maximum limit of TCP window size with windows scaling enabled with RFC 1323 algorithm?
Much bigger than you will ever need.
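To illustrate the point under (2), here is a POSIX-flavored sketch of handing the whole 64K to a blocking socket in one call, with a loop only to cover the (rare) short write; sendBlock is a hypothetical helper, not part of any API:

#include <sys/socket.h>

// Sketch: hand the whole 64K to a blocking TCP socket, looping only to cover
// the case where the stack accepts fewer bytes than requested.
bool sendBlock(int sock, const char* buffer, size_t total)
{
    size_t sent = 0;
    while (sent < total)
    {
        ssize_t n = send(sock, buffer + sent, total - sent, 0);
        if (n <= 0)
            return false;      // error handling omitted
        sent += (size_t)n;
    }
    return true;
}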
I am writing a C++ application on Linux. My application has a UDP server which sends data to clients on some events. The UDP server also receives some feedback/acknowledgement back from the clients.
To implement this application I used a single UDP socket (e.g. int fdSocket) to send and receive data from all the clients. I bound this socket to port 8080 and set it to non-blocking mode.
I created two threads. In one thread I wait for some event to happen; if an event occurs, I use fdSocket to send data to all the clients (in a for loop).
In the other thread I use fdSocket to receive data from the clients (recvfrom()). This thread is scheduled to run every 4 seconds (i.e. every 4 seconds it calls recvfrom() to retrieve data from the socket buffer; since the socket is non-blocking, recvfrom() returns immediately if no UDP data is available, and then I go back to sleep for 4 seconds).
The UDP feedback/acknowledgement from each client has a fixed payload size of 20 bytes.
Now I have two questions related to this implementation:
Is it correct to use the same socket for sending/receiving UDP data with multiple clients?
How do I find the maximum number of UDP feedback/acknowledgement packets my application can handle without overflowing the UDP socket buffer? (Since I am reading only every 4 seconds, if I receive a lot of packets within those 4 seconds I might lose some, i.e. I need to find the rate in packets/sec I can handle safely.)
I tried to get the Linux socket buffer size for my socket (fdSocket) using the call getsockopt(fdSocket, SOL_SOCKET, SO_RCVBUF, (void *)&n, &m);. From this I discovered that my socket buffer size is 110592. But I am not clear on what data is stored in this socket buffer: only the UDP payload, the entire UDP packet, or even the entire Ethernet frame? I referred to this link to get some idea but got confused.
Currently my code is a little bit dirty; I will clean it up and post it here soon.
The following are the links I have referred before posting this question.
Linux Networking
UDP SentTo and Recvfrom Max Buffer Size
UDP Socket Buffer Overflow Detection
UDP broadcast and unicast through the same socket?
Sending from the same UDP socket in multiple threads
How to flush Input Buffer of an UDP Socket in C?
How to find the socket buffer size of linux
How do I get amount of queued data for UDP socket?
Reading the socket at a fixed interval of four seconds definitely sets you up for losing packets. The conventional, tried-and-true approach to non-blocking I/O is the de-multiplexing system calls select(2)/poll(2)/epoll(7). See if you can use these to capture/react to your other events as well.
On the other hand, since you are already using threads, you can just do a blocking recv(2) without that four-second sleep.
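For example, a sketch of the poll(2) approach (receiveLoop is a hypothetical helper; fdSocket is kept non-blocking, as in your setup, so the inner drain loop stops when the queue is empty):

#include <poll.h>
#include <sys/socket.h>

void receiveLoop(int fdSocket)
{
    char buf[2048];
    struct pollfd pfd = { fdSocket, POLLIN, 0 };
    for (;;)
    {
        // Sleep until at least one datagram is queued, however long that takes.
        if (poll(&pfd, 1, -1) > 0 && (pfd.revents & POLLIN))
        {
            // Drain everything that is currently queued.
            ssize_t n;
            while ((n = recvfrom(fdSocket, buf, sizeof(buf), 0, nullptr, nullptr)) > 0)
            {
                // process the 20-byte acknowledgement in buf
            }
        }
    }
}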
Read Stevens for an explanation of SO_RCVBUF.
You can see the maximum allowed buffer size:
sysctl net.core.rmem_max
You can set the maximum buffer size you can use by:
sysctl -w net.core.rmem_max=8388608
You can also set the buffer size at run-time (not exceeding the max above) by using setsockopt and changing SO_RCVBUF. You can see the buffer level by looking at /proc/net/udp.
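A small Linux sketch of that run-time adjustment (enlargeReceiveBuffer is a hypothetical helper and the 4 MB request is just an example; the kernel silently clamps it to net.core.rmem_max):

#include <sys/socket.h>

// Sketch: enlarge the UDP receive buffer at run time and verify the result.
void enlargeReceiveBuffer(int fdSocket)
{
    int requested = 4 * 1024 * 1024;   // silently clamped to net.core.rmem_max
    setsockopt(fdSocket, SOL_SOCKET, SO_RCVBUF, &requested, sizeof(requested));

    int granted = 0;
    socklen_t optLen = sizeof(granted);
    getsockopt(fdSocket, SOL_SOCKET, SO_RCVBUF, &granted, &optLen);
    // Note: Linux reports roughly double the requested value, to account
    // for kernel bookkeeping overhead.
}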
The socket buffer stores the UDP header and the application data; the remaining headers belong to the lower layers.
Q: Is it correct to use the same socket for sending/receiving UDP data with multiple clients?
A: Yes, it is correct.
Q: How do I find the maximum number of UDP feedback/acknowledgement packets my application can handle without overflowing the UDP socket buffer (since I am reading only every 4 seconds, if I receive a lot of packets within those 4 seconds I might lose some, i.e. I need to find the rate in packets/sec I can handle safely)?
A: The bottleneck might be the network bandwidth, the CPU, or memory. You could simply run a test, using a client that sends ACKs with consecutive sequence numbers to the server, and verify whether there is packet loss at the server.
I have a big 1 GB file which I am trying to send to another node. After the sender sends 200 packets (before sending the complete file), the code bails out with the error "sendto: no send space available". What can the problem be and how do I take care of it?
Apart from this, we need maximum throughput for this transfer. What send buffer size should we use to be efficient?
What is the maximum MTU we can use to transfer the file without fragmentation?
Thanks
Ritu
Thank you for the answers. Actually, our project specifies using UDP plus some additional code to take care of lost packets.
Now I am able to send the complete file, using blocking UDP sockets.
I am running the whole setup on an Emulab-like environment called DETER. I have set the link loss to 0, but some of my packets are still getting lost. What could be the possible reason behind that? Even if I add a delay after sending every packet (assuming the receiver drops packets when its buffer is full), the packet loss persists.
It's possible to use UDP for high speed data transfer, but you have to make sure not to send() the data out faster than your network card can pump it onto the wire. In practice that means either using blocking I/O, or blocking on select() and only sending the next packet when select() indicates that the socket is ready-for-write. (ideally you'd also not send the data faster than the receiving machine can receive it, but that's less of an issue these days since modern CPU speeds are generally much faster than modern network I/O speeds)
Once you have that logic working properly, the size of your send-buffer isn't terribly important. (i.e. your send buffer will never be large enough to hold a 1GB file anyway, so making sure your program doesn't overflow the send buffer is the key issue whether the send buffer is large or small) The size of the receive-buffer on the receiver is important though... best to make that as large as possible, so the receiving computer won't drop packets if the receiving process gets held off of the CPU by another program.
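A sketch of the select()-based pacing mentioned above (sendPaced is a hypothetical helper; sock, packet, and the destination address are assumed to exist, and this is not a complete sender):

#include <sys/select.h>
#include <sys/socket.h>

// Sketch: block until the send buffer has room, then send one packet.
bool sendPaced(int sock, const char* packet, size_t len,
               const sockaddr* dest, socklen_t destLen)
{
    fd_set writeSet;
    FD_ZERO(&writeSet);
    FD_SET(sock, &writeSet);
    if (select(sock + 1, nullptr, &writeSet, nullptr, nullptr) <= 0)
        return false;          // error handling omitted
    return sendto(sock, packet, len, 0, dest, destLen) == (ssize_t)len;
}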
Regarding MTU, if you want to avoid packet fragmentation (and assuming your packets are traveling over Ethernet), then you shouldn't place more than 1472 bytes into each UDP packet (or 1452 bytes if you're using IPv6). (Calculated by subtracting the 20-byte IPv4 or 40-byte IPv6 header and the 8-byte UDP header from Ethernet's 1500-byte frame size.)
I also agree with @jonfen: no UDP for high-speed file transfer.
UDP incurs less protocol overhead. However, at the maximum transfer rate, transmission errors (such as packet loss) are inevitable, so you must incorporate a TCP-like error correction scheme. The end result is performance lower than TCP's.
I wrote two simple programs, a server and a client, using sockets in C++ (Linux). Initially it was a sample client-server application (sending an echo message and receiving the answer). Next, I changed the client to perform an HTTP GET (I no longer use my sample server). It works, but whatever buffer size I set, the client receives only 1440 bytes. I want to receive the whole page into the buffer. I think this is related to how TCP works, and that I should implement some kind of loop inside my client's code to capture all the parts of the answer, but I don't know what exactly I should do.
This is my code:
...
int bytesSent = send(sock, tmpCharArr, message.size()+1, 0);
// Wait for the answer. Receive it into the buffer defined.
int bytesRecieved = recv(sock, resultBuf, 2048*100, 0);
...
2048*100 is the buffer size, and I think this is more than enough for the relatively small web page used for testing. But as I mentioned, I receive only 1440 bytes.
What can I do with the recv() call to capture all the reply "parts" when the server's response is larger than 1440 bytes?
Thanks in advance.
How much data a single recv() returns is determined by factors outside your control (routers, ADSL links, IP stacks, etc.). The standard way to receive large volumes of data is to call recv() repeatedly.
HTTP works over TCP, and to understand the working of TCP sockets better you have to think of them as a stream rather than packets.
For further clarity, read my earlier post: recombine split TCP packet with flash sockets
As to why you receive only 1400 (or so) bytes, you have to understand MTU and fragmentation. To sum it up, the MTU (Maximum Transmission Unit) is the maximum size of a single packet that the network can carry. The MTU of an entire path is the lowest MTU of all the routers involved. Fragmentation is the splitting up of a packet when you try to send a single packet larger than the MTU of that network.
For a better understanding of MTU and Fragmentation, read: http://www.miislita.com/internet-engineering/ip-packet-fragmentation-tutorial.pdf
Now, as for how to receive the entire page into the buffer: one alternative is to keep calling recv() and appending the data you get to a buffer until recv() returns zero. This works because a web server will typically close the TCP connection after it sends you the response. However, this technique will fail if the web server doesn't close the connection (for example when keep-alives are configured).
Therefore, the correct solution would be to keep receiving until you have the complete HTTP header, peek at it to determine the length of the entire HTTP response (Content-Length:), and then keep receiving until you have received exactly the number of bytes you were supposed to receive.
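As a minimal sketch, the simpler read-until-close variant looks like this (readWholeResponse is a hypothetical helper; sock is your connected socket, and a real client would parse the headers and stop after Content-Length body bytes instead):

#include <string>
#include <sys/socket.h>

// Sketch: keep calling recv() and appending until the server closes the connection.
std::string readWholeResponse(int sock)
{
    std::string response;
    char chunk[4096];
    for (;;)
    {
        ssize_t n = recv(sock, chunk, sizeof(chunk), 0);
        if (n <= 0)
            break;            // 0 = connection closed, < 0 = error
        response.append(chunk, (size_t)n);
    }
    return response;          // headers plus as much of the body as was sent
}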