I have some doubts about increasing the TCP window size in an application. In my C++ software application, we send data packets of around 1 KB from client to server over a blocking TCP/IP socket. Recently I came across the concept of TCP window size, so I tried increasing the value to 64 KB using setsockopt() for both SO_SNDBUF and SO_RCVBUF. After increasing this value, I see some performance improvement on the WAN connection but not on the LAN connection.
As per my understanding of the TCP window size:
The client sends data packets to the server. Once the amount of unacknowledged data reaches the TCP window size, the client waits for the ACK of the first packet in the window. On a WAN connection the ACK from the server is delayed because of an RTT of around 100 ms, so in this case increasing the TCP window size compensates for the ACK wait time and thereby improves performance.
I want to understand how the performance improves in my application.
In my application, even though the TCP window size (both send and receive buffer) is increased using setsockopt() at the socket level, we still keep the same packet size of 1 KB (i.e. the number of bytes we send from client to server in a single socket send). We also disabled the Nagle algorithm (the built-in option that consolidates small packets into a larger one, thereby avoiding frequent socket calls).
My doubts are as follows:
1. Since I am using a blocking socket, each 1 KB data packet send should block if the ACK doesn't come back from the server. Then how does the performance improve after increasing the TCP window size on the WAN connection alone? If I have misunderstood the concept of TCP window size, please correct me.
2. For sending 64 KB of data, I believe I still need to call the socket send function 64 times (since I am sending 1 KB per send through the blocking socket) even though I increased my TCP window size to 64 KB. Please confirm this.
3. What is the maximum limit of the TCP window size with window scaling (RFC 1323) enabled?
My English is not very good; if you couldn't understand any of the above, please let me know.
First of all, there is a big misconception evident from your question: that the TCP window size is what is controlled by SO_SNDBUF and SO_RCVBUF. This is not true.
What is the TCP window size?
In a nutshell, the TCP window size determines how much follow-up data (packets) your network stack is willing to put on the wire before receiving acknowledgement for the earliest packet that has not been acknowledged yet.
The TCP stack has to live with and account for the fact that once a packet has been determined to be lost or mangled during transmission, every packet sent, from that one onwards, has to be re-sent since packets may only be acknowledged in order by the receiver. Therefore, allowing too many unacknowledged packets to exist at the same time consumes the connection's bandwidth speculatively: there is no guarantee that the bandwidth used will actually produce anything useful.
On the other hand, not allowing multiple unacknowledged packets at the same time would simply kill the bandwidth of connections that have a high bandwidth-delay product. Therefore, the TCP stack has to strike a balance between using up bandwidth for no benefit and not driving the pipe aggressively enough (and thus allowing some of its capacity to go unused).
The TCP window size determines where this balance is struck.
What do SO_SNDBUF and SO_RCVBUF do?
They control the amount of buffer space that the network stack has reserved for servicing your socket. These buffers serve to accumulate outgoing data that the stack has not yet been able to put on the wire and data that has been received from the wire but not yet read by your application respectively.
If one of these buffers is full you won't be able to send or receive more data until some space is freed. Note that these buffers only affect how the network stack handles data on the "near" side of the network interface (before they have been sent or after they have arrived), while the TCP window affects how the stack manages data on the "far" side of the interface (i.e. on the wire).
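For illustration only, here is a minimal sketch of setting both buffers on a POSIX-style socket (the 64 KB figure and the helper name are just placeholders; on Windows the value is passed as a const char*):

#include <sys/socket.h>
#include <cstdio>

// Enlarge the kernel buffers backing this socket. Note that on Linux the
// kernel may double the value you ask for, and getsockopt() reports the
// doubled figure.
bool enlarge_socket_buffers(int sock, int bytes = 64 * 1024)
{
    // SO_SNDBUF: space for outgoing data not yet put on the wire.
    if (setsockopt(sock, SOL_SOCKET, SO_SNDBUF, &bytes, sizeof(bytes)) != 0) {
        perror("setsockopt(SO_SNDBUF)");
        return false;
    }
    // SO_RCVBUF: space for data received but not yet read by the application.
    if (setsockopt(sock, SOL_SOCKET, SO_RCVBUF, &bytes, sizeof(bytes)) != 0) {
        perror("setsockopt(SO_RCVBUF)");
        return false;
    }
    return true;
}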
Answers to your questions
1. No. If that were the case then you would incur a roundtrip delay for each packet sent, which would totally destroy the bandwidth of connections with high latency.
2. Yes, but that has nothing to do with either the TCP window size or with the size of the buffers allocated to that socket.
3. According to all sources I have been able to find (example), scaling allows the window to reach a maximum size of 1 GB.
Since I am using a blocking socket, each 1 KB data packet send should block if the ACK doesn't come back from the server.
Wrong. Sending in TCP is asynchronous. send() just transfers the data to the socket send buffer and returns. It only blocks while the socket send buffer is full.
Then how does the performance improve after increasing the TCP window size on the WAN connection alone?
Because you were wrong about it blocking until it got an ACK.
For sending 64 KB of data, I believe I still need to call the socket send function 64 times
Why? You could just call it once with the 64k data buffer.
(since I am sending 1 KB per send through the blocking socket)
Why? Or is this a repetition of your misconception under (1)?
even though I increased my TCP window size to 64 KB. Please confirm this.
No. You can send it all at once. No loop required.
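To make that concrete, here is a hedged sketch (POSIX sockets; send_all is a made-up helper name). The application hands the whole 64 KB over in one logical call; the small loop is only defensive, for the case where send() accepts fewer bytes than requested:

#include <sys/types.h>
#include <sys/socket.h>
#include <cstddef>

bool send_all(int sock, const char* data, size_t len)
{
    size_t sent = 0;
    while (sent < len) {
        ssize_t n = send(sock, data + sent, len - sent, 0);
        if (n <= 0)
            return false;                 // error or connection closed
        sent += static_cast<size_t>(n);
    }
    return true;                          // all bytes are now in the kernel's send buffer
}

// Usage: char buf[64 * 1024]; ... ; send_all(sock, buf, sizeof buf);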
What is the maximum limit of the TCP window size with window scaling (RFC 1323) enabled?
Much bigger than you will ever need.
Related
To achieve an effective data transfer mechanism, I need to find out how many bits can fill up the network link.
Let me explain the situation.
Once I send data (an application-protocol message), the peer replies with an ACK after it has processed the data (in the application layer). If the RTT is high (like a 500 ms RTT), it takes too long for the ACK to come back; until the ACK is received, no more data is sent and the link sits idle. To rectify the situation, I need to keep some data in flight during those intervals.
So I decided to keep transferring data until the amount sent reaches the bandwidth-delay product (how many bits can fill up the network link):
BDP = bandwidth (bits per second) x RTT (in seconds)
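(For example, with purely illustrative numbers: a 100 Mbps link and the 500 ms RTT mentioned above give BDP = 100,000,000 bit/s x 0.5 s = 50,000,000 bits, i.e. roughly 6 MB that can usefully be in flight before the first ACK comes back.)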
How do I find the network bandwidth of the device? Is there any Windows API or other way to find the bandwidth of the link?
PS: I am a newbie to network programming.
You do not calculate bandwidth. Bandwidth is a property of the network interface: a 100 Mbps Ethernet interface always has a 100 Mbps bandwidth. You are using the incorrect term.
If you are using TCP, the sender keeps increasing its send/congestion window until there is a problem (loss), then it multiplicatively reduces the window (typically halving it) and starts increasing it again until there is another problem, repeating that over and over. Only the sender knows this window.
The receiver has a buffer that is the receive window, and it will communicate the current window size to the sender in every acknowledgement. The receive window will shrink as the buffer is filled, and grows as the buffer is emptied. The receive window determines how much data the sender is allowed to send before stopping to wait for an acknowledgement.
TCP handles all of that automatically, calculating the SRTT and adjusting itself to give you good throughput for the conditions. You seem to want to control what TCP already does for you. You can tweak things like the receive buffer to increase throughput, but to pace the data yourself as you propose you would need to write your own transport protocol; otherwise you will overrun the receive buffer, losing data or crashing the receiving host.
Also, remember that TCP creates a connection between two equal TCP peers. Both are senders and both are receivers. Either side can send and receive, and either side can initiate closing the connection or kill it with a RST.
The Win32 API GetIpNetworkConnectionBandwidthEstimates() returns historical bandwidth estimates for a network connection on the specified interface (which is more relevant than the raw link speed of the whole interface/link).
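A rough sketch of calling it is below (Windows 8 or later; the struct and field names are taken from my reading of netioapi.h, so treat them as assumptions and verify them against the SDK headers; how you obtain the interface index is up to you, e.g. if_nametoindex() or GetBestInterface()):

#include <winsock2.h>
#include <iphlpapi.h>   // pulls in netioapi.h
#include <cstdio>
#pragma comment(lib, "iphlpapi.lib")

void print_bandwidth_estimates(NET_IFINDEX ifIndex)
{
    MIB_IP_NETWORK_CONNECTION_BANDWIDTH_ESTIMATES est = {};
    if (GetIpNetworkConnectionBandwidthEstimates(ifIndex, AF_INET, &est) != NO_ERROR) {
        std::printf("bandwidth estimate query failed\n");
        return;
    }
    // Bandwidth values are reported in bits per second.
    std::printf("inbound  ~%llu bit/s\n",
        static_cast<unsigned long long>(est.InboundBandwidthInformation.Bandwidth));
    std::printf("outbound ~%llu bit/s\n",
        static_cast<unsigned long long>(est.OutboundBandwidthInformation.Bandwidth));
}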
I am using C++ TCP/IP sockets. According to my requirements, my client has to connect to a server and read the messages sent by it (that's something really new, isn't it), but... in my application I have to wait for some time (typically 1-2 hours) before I actually start reading messages (through recv() or read()), while the server keeps on sending messages.
I want to know whether there is a limit on the capacity of the buffer that holds those messages while they are not being read, and whose physical memory is used to buffer them: the sender's or the receiver's?
TCP data is buffered at both sender and receiver. The size of the receiver's socket receive buffer determines how much data can be in flight without acknowledgement, and the size of the sender's send buffer determines how much data can be sent before the sender blocks or gets EAGAIN/EWOULDBLOCK, depending on blocking/non-blocking mode. You can set these socket buffers as large as you like up to 2^32-1 bytes, but if you set the client receive buffer higher than 2^16-1 you must do so before connecting the socket, so that TCP window scaling can be negotiated in the connect handshake, so that the upper 16 bits can come into play. [The server receive buffer isn't relevant here, but if you set it >= 64k you need to set it on the listening socket, from where it will be inherited by accepted sockets, again so the handshake can negotiate window scaling.]
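For illustration, here is a minimal sketch of that ordering on the client side (POSIX-style sockets; the helper name, the IPv4-only handling, and the 256 KB figure are just assumptions for the example): SO_RCVBUF is set before connect() so the handshake can negotiate window scaling.

#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <unistd.h>

int connect_with_big_window(const char* ip, unsigned short port)
{
    int sock = socket(AF_INET, SOCK_STREAM, 0);
    if (sock < 0) return -1;

    int rcvbuf = 256 * 1024;            // > 64 KB, so scaling matters
    // Must happen BEFORE connect(): window scaling is negotiated in the
    // SYN/SYN-ACK exchange and cannot be enabled afterwards.
    setsockopt(sock, SOL_SOCKET, SO_RCVBUF, &rcvbuf, sizeof(rcvbuf));

    sockaddr_in addr = {};
    addr.sin_family = AF_INET;
    addr.sin_port   = htons(port);
    if (inet_pton(AF_INET, ip, &addr.sin_addr) != 1) {
        close(sock);
        return -1;
    }

    if (connect(sock, reinterpret_cast<sockaddr*>(&addr), sizeof(addr)) != 0) {
        close(sock);
        return -1;
    }
    return sock;
}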
However I agree entirely with Martin James that this is a silly requirement. It wastes a thread, a thread stack, a socket, a large socket send buffer, an FD, and all the other associated resources at the server for two hours, and possibly affects other threads and therefore other clients. It also falsely gives the server the impression that two hours' worth of data has been received, when it has really only been transmitted to the receive buffer, which may lead to unknown complications in recovery situations: for example, the server may be unable to reconstruct the data sent so far ahead. You would be better off not connecting until you are ready to start receiving the data, or else reading and spooling the data to yourself at the client for processing later.
I have a big 1 GB file which I am trying to send to another node. After the sender has sent about 200 packets (well before the complete file has been sent), the code bails out with the error "sendto: no send space available". What can be the problem, and how do I take care of it?
Apart from this, we need maximum throughput in this transfer, so what send buffer size should we use to be efficient?
And what is the maximum MTU we can use to transfer the file without fragmentation?
Thanks
Ritu
Thank you for the answers. Actually, our project requires us to use UDP, with some additional code on top to take care of lost packets.
Now I am able to send the complete file using blocking UDP sockets.
I am running the whole setup on an emulab-like environment called deter. I have set the link loss to 0, but some of my packets are still getting lost. What could be the possible reason behind that? Even if I add a delay after sending every packet (assuming the receiver drops packets when its buffer is full), the packet loss persists.
It's possible to use UDP for high speed data transfer, but you have to make sure not to send() the data out faster than your network card can pump it onto the wire. In practice that means either using blocking I/O, or blocking on select() and only sending the next packet when select() indicates that the socket is ready-for-write. (ideally you'd also not send the data faster than the receiving machine can receive it, but that's less of an issue these days since modern CPU speeds are generally much faster than modern network I/O speeds)
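Here is a rough sketch of that select()-before-send pacing (POSIX sockets, a connected UDP socket; all names are illustrative):

#include <sys/types.h>
#include <sys/select.h>
#include <sys/socket.h>
#include <cstddef>

// Wait until the socket is reported writable, then hand one datagram to the
// kernel. On some platforms send()/sendto() can still fail with ENOBUFS when
// the interface queue is full; the caller should back off and retry then.
bool paced_send(int udpSock, const char* pkt, size_t len)
{
    fd_set wfds;
    FD_ZERO(&wfds);
    FD_SET(udpSock, &wfds);

    // No timeout: block until the send buffer has room again.
    if (select(udpSock + 1, nullptr, &wfds, nullptr, nullptr) <= 0)
        return false;

    return send(udpSock, pkt, len, 0) == static_cast<ssize_t>(len);
}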
Once you have that logic working properly, the size of your send-buffer isn't terribly important. (i.e. your send buffer will never be large enough to hold a 1GB file anyway, so making sure your program doesn't overflow the send buffer is the key issue whether the send buffer is large or small) The size of the receive-buffer on the receiver is important though... best to make that as large as possible, so the receiving computer won't drop packets if the receiving process gets held off of the CPU by another program.
Regarding MTU: if you want to avoid packet fragmentation (and assuming your packets travel over standard Ethernet with its 1500-byte MTU), you shouldn't place more than 1472 bytes into each UDP packet over IPv4 (1500 minus the 20-byte IP header and the 8-byte UDP header), or 1452 bytes over IPv6 (1500 minus the 40-byte IPv6 header and the 8-byte UDP header).
Also agree with #jonfen: no UDP for high-speed file transfer.
UDP incurs less protocol overhead. However, at the maximum transfer rate, transmission errors such as packet loss are inevitable, so one must incorporate a TCP-like error correction scheme. The end result is performance lower than TCP's.
I wrote two simple programs, a server and a client, using sockets in C++ (Linux). Initially it was a sample client-server application (sending an echo message and receiving the answer). Then I changed the client to perform an HTTP GET (I no longer use my sample server). It works, but whatever buffer size I set, the client receives only 1440 bytes. I want to receive the whole page into the buffer. I think this is related to how TCP works and that I should implement some kind of loop inside my client's code to capture all the parts of the response, but I don't know exactly what I should do.
This is my code:
...
int bytesSent = send(sock, tmpCharArr, message.size()+1, 0);
// Wait for the answer. Receive it into the buffer defined.
int bytesRecieved = recv(sock, resultBuf, 2048*100, 0);
...
2048*100 is the buffer size, and I think this is more than enough for the relatively small web page used for testing. But as I mentioned, I receive only 1440 bytes.
What can I do with the recv() call to capture all the reply "parts" when the server's response is larger than 1440 bytes?
Thanks in advance.
The amount of data a single recv() call returns is determined by factors outside your control (the MTUs of routers, ADSL links, IP stacks, etc.). The standard way to receive large volumes of data is to call recv() repeatedly until you have everything.
HTTP works over TCP, and to understand the working of TCP sockets better you have to think of them as a stream rather than packets.
For further clarity, read my earlier post: recombine split TCP packet with flash sockets
As to why you receive only 1440 (or so) bytes, you have to understand MTU and fragmentation. To sum it up, the MTU (Maximum Transmission Unit) is the maximum size of a single packet that a link can carry, and the MTU of an entire path is the lowest MTU of all the links and routers involved. Fragmentation is the splitting up of a packet when you try to send a single packet larger than the MTU of that path.
For a better understanding of MTU and Fragmentation, read: http://www.miislita.com/internet-engineering/ip-packet-fragmentation-tutorial.pdf
Now, as for how to receive the entire page into the buffer: one alternative is to keep calling recv() and appending the data you get to a buffer until recv() returns zero. This works because a web server will typically close the TCP connection after it has sent you the response. However, this technique will fail if the web server doesn't close the connection (e.g. if keep-alive is configured).
Therefore, the correct solution is to keep receiving until you have the complete HTTP header, peek at it to determine the length of the body (Content-Length:), and then keep receiving until you have read exactly that many bytes.
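A hedged sketch of that approach (blocking POSIX sockets, HTTP/1.x without chunked transfer encoding; the parsing is deliberately naive and the helper name is made up):

#include <sys/types.h>
#include <sys/socket.h>
#include <string>
#include <cstdlib>

// Read until the whole response (header + Content-Length body) has arrived.
// Returns the raw response, or an empty string on error.
std::string recv_http_response(int sock)
{
    std::string resp;
    char chunk[4096];

    // 1. Keep reading until the end of the header ("\r\n\r\n") shows up.
    size_t headerEnd;
    while ((headerEnd = resp.find("\r\n\r\n")) == std::string::npos) {
        ssize_t n = recv(sock, chunk, sizeof(chunk), 0);
        if (n <= 0) return std::string();      // error or premature close
        resp.append(chunk, static_cast<size_t>(n));
    }

    // 2. Naive Content-Length parse (real code should be case-insensitive
    //    and handle chunked transfer encoding).
    size_t bodyLen = 0;
    size_t pos = resp.find("Content-Length:");
    if (pos != std::string::npos)
        bodyLen = std::strtoul(resp.c_str() + pos + 15, nullptr, 10);

    // 3. Keep reading until header plus body have all been received.
    size_t total = headerEnd + 4 + bodyLen;
    while (resp.size() < total) {
        ssize_t n = recv(sock, chunk, sizeof(chunk), 0);
        if (n <= 0) break;
        resp.append(chunk, static_cast<size_t>(n));
    }
    return resp;
}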
Using two PCs with Windows XP, a 64 KB TCP window size, connected with a crossover cable
Using Qt 4.5.3, QTcpServer and QTcpSocket
Sending 2000 messages of 40kB takes 2 seconds (40MB/s)
Sending 1 message of 80MB takes 80 seconds (1MB/s)
Anyone has an explanation for this? I would expect the larger message to go faster, since the lower layers can then fill the Tcp packets more efficiently.
This is hard to comment on without seeing your code.
How are you timing this on the sending side? When do you know you're done?
How does the client read the data? Does it read into fixed-size buffers and throw the data away, or does it somehow know (from the framing) that the "message" is 80MB and try to build up the "message" into a single data buffer to pass up to the application layer?
It's unlikely to be the underlying Windows sockets code that's making this work poorly.
TCP, from the application side, is stream-based which means there are no packets, just a sequence of bytes. The kernel may collect multiple writes to the connection before sending it out and the receiving side may make any amount of the received data available to each "read" call.
TCP, on the IP side, is packets. Since standard Ethernet has an MTU (maximum transfer unit) of 1500 bytes and both TCP and IP have 20-byte headers, each packet transferred over Ethernet will pass 1460 bytes (or less) of the TCP stream to the other side. 40KB or 80MB writes from the application will make no difference here.
How long it appears to take data to transfer will depend on how and where you measure it. Writing 40KB will likely return immediately since that amount of data will simply get dropped in TCP's "send window" inside the kernel. An 80MB write will block waiting for it all to get transferred (well, all but the last 64KB which will fit, pending, in the window).
TCP transfer speed is also affected by the receiver. It has a "receive window" that contains everything received from the peer but not fetched by the application. The amount of space available in this window is passed to the sender with every return ACK so if it's not being emptied quickly enough by the receiving application, the sender will eventually pause. WireShark may provide some insight here.
In the end, both methods should transfer in the same amount of time since an application can easily fill the outgoing window faster than TCP can transfer it no matter how that data is chunked.
I can't speak for the operation of QT, however.
Bug in Qt 4.5.3