Maximum TCP packet size using C++ socket programming

I am writing C++ socket code and I need some help!
In my program I don't know what the size of the message will be; it may send part of a file or the whole file, and the file may be huge. Should I specify a maximum size for the packet, and split the data into more than one packet if it exceeds that maximum?

It's never constructive to think about "packets" and "messages" when using TCP because:
The network engine has its own ways of deciding the best segment size
Segment size makes no difference to application code: the receiving TCP is free to coalesce segments before passing the data to the receiving process
You should view TCP the way it was designed: a reliable, in-order byte-stream service. So just write large-enough blocks and the engine, with its myriad of rules, will take care of it.
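To make that concrete, here is a minimal POSIX-style sketch of the "just write blocks" idea: a loop that keeps calling send() until the whole buffer has been handed to the kernel, since a single send() may accept fewer bytes than requested. The helper name send_all and the error handling are illustrative assumptions, not code from the answer.

#include <sys/socket.h>
#include <sys/types.h>
#include <cstddef>

// Write an entire buffer to a connected TCP socket.
// send() may accept fewer bytes than requested, so loop until done.
bool send_all(int sock, const char* data, size_t len)
{
    size_t sent = 0;
    while (sent < len) {
        ssize_t n = send(sock, data + sent, len - sent, 0);
        if (n <= 0)
            return false;                 // treat errors (or a closed peer) as failure
        sent += static_cast<size_t>(n);
    }
    return true;
}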

The problem is a little vague, but the approach seems universal. The transmitter should send an indication of how many bytes the receiver should expect. The receiver should expect to see this indication, and then prepare to receive that many bytes.
As far as packet size, generally an application does not worry about how the bytes are delivered on the network per se, but the application may care about not calling send and recv system calls too many times. This is particularly important on a concurrent server, when efficiency is key to scalability. So, you want a buffer that is big enough to avoid making too many system calls, but not so big as to cause you to block for a long time waiting for the data to drain into the kernel buffer. Matching the send/recv socket buffer size is usually sufficient for that, but it depends on other factors, like the bandwidth and latency of the network, and how quickly the receiver is draining the data, and the timeslice you want to allow per connection being handled during concurrency.
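The length-prefix idea above might look something like the following sketch: the sender writes a 4-byte length in network byte order followed by the payload, and the receiver reads the prefix and then loops until it has exactly that many bytes. The fixed 32-bit prefix and the helper names are assumptions for illustration; send_all is the helper sketched in the previous answer.

#include <arpa/inet.h>   // htonl/ntohl
#include <sys/socket.h>
#include <sys/types.h>
#include <cstdint>
#include <vector>

bool send_all(int sock, const char* data, size_t len);  // from the previous sketch

// Read exactly len bytes; recv() may return less than asked, so loop.
bool recv_exact(int sock, char* buf, size_t len)
{
    size_t got = 0;
    while (got < len) {
        ssize_t n = recv(sock, buf + got, len - got, 0);
        if (n <= 0) return false;         // error or peer closed
        got += static_cast<size_t>(n);
    }
    return true;
}

// Sender: announce the payload size, then send the payload.
// Assumes the message fits in 32 bits.
bool send_message(int sock, const std::vector<char>& payload)
{
    uint32_t len = htonl(static_cast<uint32_t>(payload.size()));
    return send_all(sock, reinterpret_cast<const char*>(&len), sizeof len)
        && send_all(sock, payload.data(), payload.size());
}

// Receiver: read the 4-byte prefix, then read that many bytes.
bool recv_message(int sock, std::vector<char>& payload)
{
    uint32_t len = 0;
    if (!recv_exact(sock, reinterpret_cast<char*>(&len), sizeof len))
        return false;
    payload.resize(ntohl(len));
    return recv_exact(sock, payload.data(), payload.size());
}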

Related

Should I send a file over sockets in one read/write or in multiple chunks?

I have a basic question regarding how to implement a simple file transfer client/server in C++.
I am not sure why it is not a good idea to send the file in one read/write, and why the better way is to send it in small buffer-sized chunks.
If you have a file with a size of 2 TB, you would first need to allocate that much RAM and load the whole file into a single buffer, then write all of that buffer out. With a 2 TB file this will probably fail with an out-of-memory error anyway, but even for smaller files it would be a waste of resources. Since the reads from the disk and the writes to the network card are done internally in chunks anyway, you would not get better performance even if the whole file fit into RAM.
A good compromise might be to read/write in chunks of somewhere between 4 KB and 32 KB; the optimal size depends on the OS, disk buffer, socket buffer, speed of the disk and network, etc.
If you use an IP-based protocol to send this file you are in any case limited by the maximum IP packet size, which is around 64 KB.
If you use UDP, your transfer is not reliable, and it is also better to limit the packet size to around 450 bytes to reduce the chance that parts of a packet are lost. You would also have to devise some kind of sequence number in each send to detect when a chunk gets lost, and perhaps an acknowledgement mechanism. TFTP already has such a mechanism in place.
If you use TCP, which is reliable, you don't have to chop the data into 450-byte pieces; you can use 32,000 bytes or even 64 KB (not a limit of TCP, but reading and sending in blocks of 32 KB is perfectly reasonable and avoids needing a large chunk of memory). TCP is going to send according to the MTU, so be prepared that on the receiving side you will not receive reads of 32,000 bytes at a time.
For performance reasons you want your disk access to happen in bigger chunks. Reading and sending byte by byte will probably work but will be terribly slow.
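As a rough illustration of the chunked approach recommended above, the sketch below streams a file through a fixed 32 KB buffer instead of loading it all into memory. The function name, buffer size, and minimal error handling are assumptions, not code from the answer.

#include <fstream>
#include <sys/socket.h>
#include <sys/types.h>

// Read the file in 32 KB chunks and push each chunk out the socket.
bool send_file(int sock, const char* path)
{
    std::ifstream in(path, std::ios::binary);
    if (!in) return false;

    char buf[32 * 1024];
    while (in) {
        in.read(buf, sizeof buf);
        std::streamsize got = in.gcount();     // may be < 32 KB on the last read
        const char* p = buf;
        while (got > 0) {                      // send() may write less than asked
            ssize_t n = send(sock, p, static_cast<size_t>(got), 0);
            if (n <= 0) return false;
            p += n;
            got -= n;
        }
    }
    return true;
}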

Unexpected Socket CPU Utilization

I'm having a performance issue that I don't understand. The system I'm working on has two threads that look something like this:
Version A:
Thread 1: Data Processing -> Data Selection -> Data Formatting -> FIFO
Thread 2: FIFO -> Socket
Where 'Selection' thins down the data and the FIFO at the end of thread 1 is the FIFO at the beginning of thread 2 (the FIFOs are actually TBB Concurrent Queues). For performance reasons, I've altered the threads to look like this:
Version B:
Thread 1: Data Processing -> Data Selection -> FIFO
Thread 2: FIFO -> Data Formatting -> Socket
Initially, this optimization proved to be successful. Thread 1 is capable of much higher throughput. I didn't look too hard at Thread 2's performance because I expected the CPU usage would be higher and (due to data thinning) it wasn't a major concern. However, one of my colleagues asked for a performance comparison of version A and version B. To test the setup I had thread 2's socket (a boost asio tcp socket) write to an instance of iperf on the same box (127.0.0.1) with the goal of showing the maximum throughput.
To compare the two setups I first tried forcing the system to write data out of the socket at 500 Mbps. As part of the performance testing I monitored top. What I saw surprised me. Version A did not show up on 'top -H', nor did iperf (this was actually as I suspected). However, version B (my 'enhanced' version) was showing up on 'top -H' with ~10% CPU utilization and (oddly) iperf was showing up with 8%.
Obviously, that implied to me that I was doing something wrong. I can't seem to prove that I am though! Things I've confirmed:
Both versions are giving the socket 32k chunks of data
Both versions are using the same boost library (1.45)
Both have the same optimization setting (-O3)
Both receive the exact same data, write out the same data, and write it at the same rate.
Both use the same blocking write call.
I'm testing from the same box with the exact same setup (Red Hat)
The 'formatting' part of thread 2 is not the issue (I removed it and reproduced the problem)
Small packets across the network are not the issue (I'm using TCP_CORK and have confirmed via Wireshark that the TCP segments are all ~16k).
Putting a 1 ms sleep right after the socket write makes the CPU usage on both the socket thread and iperf(?!) go back to 0%.
Poor man's profiler reveals very little (the socket thread is almost always sleeping).
Callgrind reveals very little (the socket write barely even registers)
Switching iperf for netcat (writing to /dev/null) doesn't change anything (actually netcat's cpu usage was ~20%).
The only thing I can think of is that I've introduced a tighter loop around the socket write. However, at 500 Mbps I wouldn't expect the CPU usage of both my process and iperf to increase, would I?
I'm at a loss to why this is happening. My coworkers and I are basically out of ideas. Any thoughts or suggestions? I'll happily try anything at this point.
This is going to be very hard to analyze without code snippets or actual data quantities.
One thing that comes to mind: if the pre-formatted data stream is significantly larger than post-format, you may be expending more bandwidth/cycles copying a bunch more data through the FIFO (socket) boundary.
Try estimating or measuring the data rate at each stage. If the data rate is higher at the output of 'selection', consider the effects of moving formatting to the other side of the boundary. Is it possible that no copy is required for the select->format transition in configuration A, and configuration B imposes lots of copies?
... just guesses without more insight into the system.
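One hypothetical way to do the per-stage measurement suggested above is to bump an atomic byte counter wherever data crosses a stage boundary and sample it once a second. The names and structure here are purely illustrative, not taken from the poster's system.

#include <atomic>
#include <chrono>
#include <cstddef>
#include <cstdio>
#include <thread>

std::atomic<unsigned long long> bytes_through_fifo{0};

// Call this wherever one stage hands data to the next.
void count_bytes(size_t n) { bytes_through_fifo += n; }

// Run this in a monitoring thread; prints the rate once per second.
void report_loop()
{
    unsigned long long last = 0;
    for (;;) {
        std::this_thread::sleep_for(std::chrono::seconds(1));
        unsigned long long now = bytes_through_fifo.load();
        std::printf("FIFO throughput: %.1f MB/s\n", (now - last) / 1e6);
        last = now;
    }
}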
What if the FIFO was the bottleneck in version A? Then both threads would sit and wait for the FIFO most of the time, while in version B you'd be handing the data off to iperf faster.
What exactly do you store in the FIFO queues? Do you store packets of data, i.e. buffers?
In version A, you were writing formatted data (probably bytes) to the queue. So, sending it on the socket involved just writing out a fixed-size buffer.
However, in version B you are storing higher-level data in the queues. Formatting it now creates bigger buffers that are written directly to the socket, which causes the TCP/IP stack to spend CPU cycles on fragmentation and other overhead...
This is my theory based on what you have said so far.

Sockets: large message performance

I have a question regarding the use of Berkeley sockets under Ubuntu. In terms of performance and reliability, which option is best: sending a large number of short messages, or a small number of large messages? I do not know which design rule I should follow here.
Thank you all!
In terms of reliability, unless you have very specific requirements it isn't worth worrying about much. If you are talking about TCP, it is going to do a better job than you at managing things until you come across some edge case that really requires you to fiddle with some knobs, in which case a more specific question would be in order. In terms of packet size, with TCP, unless you circumvent Nagle's algorithm, you don't really have the control you might think.
With UDP, arguably the best thing to do is use path MTU discovery, which TCP does for you automatically, but as a general rule you are fine just using something in the 500-byte range. If you start to get too fancy you will find yourself reinventing parts of TCP.
With TCP, one option is to use the TCP_CORK socket option. See the getsockopt man page. Set TCP_CORK on the socket, write a batch of small messages, then remove the TCP_CORK option and they will be transmitted in a minimum number of network packets. This can increase throughput at the cost of increased latency.
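A minimal Linux-specific sketch of that corking pattern might look like this; the message arguments and lack of error handling are simplifications for illustration.

#include <netinet/in.h>
#include <netinet/tcp.h>   // TCP_CORK (Linux-specific)
#include <sys/socket.h>
#include <cstddef>

// Batch several small writes so they go out in as few segments as possible.
void send_batch(int sock, const char* hdr, size_t hdr_len,
                const char* body, size_t body_len)
{
    int on = 1, off = 0;
    setsockopt(sock, IPPROTO_TCP, TCP_CORK, &on, sizeof on);   // hold small writes
    send(sock, hdr, hdr_len, 0);
    send(sock, body, body_len, 0);
    setsockopt(sock, IPPROTO_TCP, TCP_CORK, &off, sizeof off); // uncork: flush the batch
}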
Each message carries roughly 40 bytes of TCP/IP header overhead, but large messages are harder to route and easier to lose. If you are talking about UDP, the best message size is one that fits in an Ethernet frame, whose payload (MTU) is typically 1500 bytes; if you are using TCP, leave it up to the network layer to decide how much to send at a time.
Performance you can measure yourself with iperf; run a few experiments and you will see. As for reliability, as far as I understand, if you use TCP the connection guarantees that data will be delivered, provided of course the connection is not broken.
"In terms of performance and reliability which option is best"
On a lossy layer, performance and reliability are almost a straight trade-off against each other, and greater experts than us have put years of work into finding sweet spots, and techniques which beat the straight trade-off and improve both at once.
You have two basic options:
1) Use stream sockets (TCP). Any "Messages" that your application knows about are defined at the application layer rather than at the sockets. For example, you might consider an HTTP request to be a message, and the response to be another in the opposite direction. You should see your job as being to keep the output buffer as full as possible, and the input buffer as empty as possible, at all times. Reliability has pretty much nothing to do with message length, and for a fixed size of data performance is mostly determined by the number of request-response round-trips performed rather than the number of individual writes on the socket. Obviously if you're sending one byte at a time with TCP_NODELAY then you'd lose performance, but that's pretty extreme.
2) Use datagrams (UDP). "Messages" are socket-layer entities. Performance is potentially better than TCP, but you have to invent your own system for reliability, and potentially this will hammer performance by requiring data to be re-sent. TCP has the same issue, but greater minds, etc. Datagram length can interact very awkwardly with both performance and reliability, hence the MTU discovery mentioned by Duck. If you send a large packet and it's fragmented, then if any fragment goes astray then your message won't arrive. There's a size N, where if you send N-size datagrams they won't fragment, but if you send N+1-size datagrams they will. Hence that +1 doubles the number of failed messages. You don't know N until you know the network route (and maybe not even then). So it's basically impossible to say at compile time what sizes will give good performance: even if you measure it, it'll be different for different users. If you want to optimise, there's no alternative to knowing your stuff.
UDP is also more work than TCP if you need reliability, which is built in to TCP. Potentially UDP has big payoffs, but should probably be thought of as the assembler programming of sockets.
There's also (3): use a protocol for improved UDP reliability, such as RUDP. This isn't part of Berkeley-style sockets APIs, so you'll need a library to help.

Networking Method

Hey guys, I've noticed that when I send a complete packet (collect its data in a buffer and send it) it is much slower than sending the packet byte by byte.
Will it be okay if I make an online game using this method?
Sounds like a Nagle-related problem.
You have to disable Nagle's algorithm for latency-demanding applications. (See setsockopt, TCP_NODELAY.)
Explanation:
The TCP stack behaves differently for small chunks, trying to combine them in bizarre ways on the way to IP datagrams. This is a performance optimization suggested by J. Nagle (hence "Nagling"). Keep in mind that enabling NODELAY will make every send() call a kernel-mode transition, so you may wish to pack your stream into chunks yourself, by means of memory copying, before feeding them into send() if performance is an issue for what you are doing.
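For reference, disabling Nagle's algorithm on a POSIX-style TCP socket is a one-line setsockopt call; the wrapper function here is just an illustrative sketch with error handling omitted.

#include <netinet/in.h>
#include <netinet/tcp.h>   // TCP_NODELAY
#include <sys/socket.h>

// Disable Nagle's algorithm so small writes go out immediately.
void disable_nagle(int sock)
{
    int flag = 1;
    setsockopt(sock, IPPROTO_TCP, TCP_NODELAY, &flag, sizeof flag);
}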
I think you need to define what are your measurement points (what exactly are you measuring). By the way is this TCP or UDP?
Anyway Winsock has its own internal buffers that you can modify by calls to setsockopt.
That sounds bizarre. There is much more overhead in sending data byte by byte. Your transport headers will far outweigh the payload! Not to mention O(n) send calls (where n is the number of bytes).
You're doing something wrong if that's what you experience.
Well, I didn't really measure anything; I'm pretty sure it has something to do with sending the data and not with collecting it.
I'm using C# for the server side and C++ for the client side. On the server side I wrapped the socket with a BinaryWriter and BinaryReader, and in the client I just used send and recv to send every byte.

What is the optimal size of a UDP packet for maximum throughput?

I need to send packets from one host to another over a potentially lossy network. In order to minimize packet latency, I'm not considering TCP/IP. But I wish to maximize the throughput using UDP. What should be the optimal size of a UDP packet to use?
Here are some of my considerations:
The MTU size of the switches in the network is 1500. If I use a large packet, for example 8192, this will cause fragmentation. Loss of one fragment will result in the loss of the entire packet, right?
If I use smaller packets, I'll incur the overhead of the UDP and IP header
If I use a really large packet, what is the largest that I can use? I read that the largest datagram size is 65507. What is the buffer size I should use to allow me to send such sizes? Would that help to bump up my throughput?
What is the typical maximum datagram size supported by the common OSes (e.g. Windows, Linux, etc.)?
Updated:
Some of the receivers of the data are embedded systems for which no TCP/IP stack is implemented.
I know that this place is filled with people who are very adamant about using what's available, but I hope for better answers than ones focusing on the MTU alone.
The best way to find the ideal datagram size is to do exactly what TCP itself does to find the ideal packet size: Path MTU discovery.
TCP also has a widely used option where both sides tell the other what their MSS (basically, MTU minus headers) is.
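On Linux, one way to perform path MTU discovery for UDP yourself is to set IP_MTU_DISCOVER so the kernel sets the DF bit, and then read the kernel's current path MTU estimate back with IP_MTU on a connected socket. The sketch below assumes IPv4 and omits error handling; right after connect() the value reported is the route MTU, and it shrinks as ICMP "fragmentation needed" replies arrive on later sends.

#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstdint>

// Return the kernel's current path MTU estimate toward dest_ip:port.
int discover_path_mtu(const char* dest_ip, uint16_t port)
{
    int sock = socket(AF_INET, SOCK_DGRAM, 0);

    int pmtu_mode = IP_PMTUDISC_DO;          // always set DF; never fragment locally
    setsockopt(sock, IPPROTO_IP, IP_MTU_DISCOVER, &pmtu_mode, sizeof pmtu_mode);

    sockaddr_in dest{};
    dest.sin_family = AF_INET;
    dest.sin_port = htons(port);
    inet_pton(AF_INET, dest_ip, &dest.sin_addr);
    connect(sock, reinterpret_cast<sockaddr*>(&dest), sizeof dest);

    // Initially the route MTU; updated as ICMP "frag needed" errors come back.
    int mtu = 0;
    socklen_t len = sizeof mtu;
    getsockopt(sock, IPPROTO_IP, IP_MTU, &mtu, &len);

    close(sock);
    return mtu;   // subtract 28 bytes (IP + UDP headers) to get the max payload
}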
Alternative answer: be careful to not reinvent the wheel.
TCP is the product of decades of networking experience. There is a reason for everything, or almost everything, it does. It has several algorithms most people do not think about often (congestion control, retransmission, buffer management, dealing with reordered packets, and so on).
If you start reimplementing all the TCP algorithms, you risk ending up with a (paraphrasing Greenspun's Tenth Rule) "ad hoc, informally-specified, bug-ridden, slow implementation of TCP".
If you have not done so yet, it could be a good idea to look at some recent alternatives to TCP/UDP, like SCTP or DCCP. They were designed for niches where neither TCP nor UDP was a good match, precisely to allow people to use an already "debugged" protocol instead of reinventing the wheel for every new application.
The easiest workaround to find the MTU in C# is to send UDP packets with the DontFragment flag set to true. If it throws an exception, reduce the packet size; do this until no exception is thrown. You can start with a packet size of 1500.
Another thing to consider is that some network devices don't handle fragmentation very well. We've seen many routers that drop fragmented UDP packets or packets that are too big. The suggestion by CesarB to use path MTU discovery is a good one.
Maximum throughput is not driven only by the packet size (though this contributes, of course). Minimizing latency and maximizing throughput are often at odds with one another. In TCP you have the Nagle algorithm, which is designed (in part) to increase overall throughput. However, some protocols (e.g. telnet) often disable Nagle (i.e. set the no-delay option) in order to improve latency.
Do you have real-time constraints on the data? Streaming audio is different from pushing non-real-time data (e.g. logging information), as the former benefits more from low latency while the latter benefits from increased throughput and perhaps reliability. Are there reliability requirements? If you can't miss packets and have to have a protocol to request retransmission, this will reduce overall throughput.
There are a myriad of other factors that go into this, and (as was suggested in another response) at some point you end up with a bad implementation of TCP. That being said, if you want to achieve low latency and can tolerate loss, using UDP with an overall packet size set to the path MTU (being sure to set the payload size to account for headers) is likely the optimal solution (especially if you can ensure that UDP can get from one end to the other).
Well, I've got a non-MTU answer for you. Using a connected UDP socket should speed things up for you. There are two reasons to call connect on your UDP socket. The first is efficiency. When you call sendto on an unconnected UDP socket what happens is that the kernel temporarily connects the socket, sends the data and then disconnects it. I read about a study indicating that this takes up nearly 30% of processing time when sending. The other reason to call connect is so that you can get ICMP error messages. On an unconnected UDP socket the kernel doesn't know what application to deliver ICMP errors to and so they just get discarded.
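A sketch of that connected-UDP idea, assuming IPv4 and placeholder addressing: call connect() once, then use plain send()/recv() instead of sendto()/recvfrom().

#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <cstdint>

// Create a UDP socket with a fixed peer address.
int make_connected_udp(const char* dest_ip, uint16_t port)
{
    int sock = socket(AF_INET, SOCK_DGRAM, 0);

    sockaddr_in dest{};
    dest.sin_family = AF_INET;
    dest.sin_port = htons(port);
    inet_pton(AF_INET, dest_ip, &dest.sin_addr);

    // Fixes the peer address in the kernel once, so later send() calls skip the
    // per-packet connect/disconnect work, and ICMP errors for this peer are
    // reported back on this socket (e.g. as errno from a later send/recv).
    connect(sock, reinterpret_cast<sockaddr*>(&dest), sizeof dest);
    return sock;
}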
The IP header is >= 20 bytes, but usually 20, and the UDP header is 8 bytes. This leaves you 1500 - 28 = 1472 bytes for your data. Path MTU discovery finds the smallest MTU on the way to the destination, but using the smallest MTU does not necessarily give you the best possible performance. I think the best way is to do a benchmark. Or maybe you should not care about the smallest MTU on the way at all: a network device may very well use a small MTU and still transfer packets very fast, and its value may very well change in the future, so you cannot discover it once and save it somewhere to use later on; you have to do it periodically. If I were you, I would set the MTU to something like 1440 and benchmark the application...
Even though the MTU at the switch is 1500, you can have situations (like tunnelling through a VPN) that wrap a few extra headers around the packet; you may do better to reduce the size slightly and go with 1450 or so.
Can you simulate the network and test performance with different packet sizes?