Sockets: large message performance - c++

I have a question regarding the use of Berkeley sockets under Ubuntu. In terms of performance and reliability, which option is best: sending a high number of short messages, or a low number of large ones? I do not know which design rule I should follow here.
Thank you all!

In terms of reliability, unless you have very specific requirements it isn't worth worrying about much. If you are talking about TCP, it is going to do a better job than you at managing things, until you come across some edge case that really requires you to fiddle with some knobs, in which case a more specific question would be in order. In terms of packet size, with TCP, unless you circumvent Nagle's algorithm, you don't really have the control you might think.
With UDP, arguably the best thing to do is use path MTU discovery, which TCP does for you automatically, but as a general rule you are fine just using something in the 500-byte range. If you start to get too fancy you will find yourself reinventing parts of TCP.

With TCP, one option is to use the TCP_CORK socket option. See the tcp(7) man page. Set TCP_CORK on the socket with setsockopt(), write a batch of small messages, then remove the TCP_CORK option and they will be transmitted in a minimum number of network packets. This can increase throughput at the cost of increased latency.
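Not from the original answer, but roughly what the corking pattern looks like on Linux (the connected socket and the message batch are assumptions; error checking is omitted):

    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <sys/socket.h>
    #include <string>
    #include <vector>

    // Sketch: batch several small writes into as few packets as possible.
    // 'sock' is assumed to be a connected TCP socket.
    void send_batch(int sock, const std::vector<std::string>& msgs) {
        int on = 1;
        setsockopt(sock, IPPROTO_TCP, TCP_CORK, &on, sizeof(on));   // cork: hold partial frames

        for (const auto& m : msgs)
            send(sock, m.data(), m.size(), 0);                      // queued, not yet flushed

        int off = 0;
        setsockopt(sock, IPPROTO_TCP, TCP_CORK, &off, sizeof(off)); // uncork: flush coalesced data
    }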

Each TCP/IP message carries about 40 bytes of header overhead (20 bytes IP plus 20 bytes TCP), but large messages are more likely to be fragmented and therefore easier to lose. If you are talking about UDP, the best message size is one that fits in a single Ethernet frame: with the usual 1500-byte MTU, that leaves 1472 bytes for UDP payload after the 20-byte IP and 8-byte UDP headers. If you are using TCP, leave it up to the network layer to decide how much to send at a time.

You can measure the performance yourself with iperf: run a few experiments and you will see. As for reliability, as far as I understand, if you use TCP the connection guarantees that data will be delivered, provided of course that the connection is not broken.

"In terms of performance and reliability which option is best"
On a lossy layer, performance and reliability are almost a straight trade-off against each other, and greater experts than us have put years of work into finding sweet spots, and techniques which beat the straight trade-off and improve both at once.
You have two basic options:
1) Use stream sockets (TCP). Any "messages" that your application knows about are defined at the application layer rather than at the socket layer. For example, you might consider an HTTP request to be a message, and the response to be another in the opposite direction. You should see your job as keeping the output buffer as full as possible, and the input buffer as empty as possible, at all times. Reliability has pretty much nothing to do with message length, and for a fixed size of data, performance is mostly determined by the number of request-response round-trips performed rather than by the number of individual writes on the socket (see the framing sketch after this list). Obviously if you're sending one byte at a time with TCP_NODELAY then you'd lose performance, but that's pretty extreme.
2) Use datagrams (UDP). "Messages" are socket-layer entities. Performance is potentially better than TCP, but you have to invent your own system for reliability, and potentially this will hammer performance by requiring data to be re-sent. TCP has the same issue, but greater minds, etc. Datagram length can interact very awkwardly with both performance and reliability, hence the MTU discovery mentioned by Duck. If you send a large packet and it's fragmented, then your message won't arrive if any fragment goes astray. There's a size N where N-sized datagrams won't fragment but (N+1)-sized datagrams will, so that +1 roughly doubles the number of failed messages. You don't know N until you know the network route (and maybe not even then). So it's basically impossible to say at compile time what sizes will give good performance: even if you measure it, it will be different for different users. If you want to optimise, there's no alternative to knowing your stuff.
UDP is also more work than TCP if you need reliability, which is built into TCP. UDP potentially has big payoffs, but it should probably be thought of as the assembler programming of sockets.
There's also (3): use a protocol for improved UDP reliability, such as RUDP. This isn't part of Berkeley-style sockets APIs, so you'll need a library to help.
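To make option (1) concrete, here is a minimal, hypothetical framing sketch: each application-layer message is prefixed with its length so the receiver can find boundaries in the byte stream (the 4-byte big-endian prefix is an assumption, not a standard, and error handling is abbreviated):

    #include <arpa/inet.h>
    #include <sys/socket.h>
    #include <sys/types.h>
    #include <cstdint>
    #include <string>

    // Send one length-prefixed message; send() may accept fewer bytes
    // than asked for, so loop until the whole frame is written.
    bool send_message(int sock, const std::string& payload) {
        uint32_t len = htonl(static_cast<uint32_t>(payload.size()));
        std::string frame(reinterpret_cast<const char*>(&len), sizeof(len));
        frame += payload;

        size_t sent = 0;
        while (sent < frame.size()) {
            ssize_t n = send(sock, frame.data() + sent, frame.size() - sent, 0);
            if (n < 0) return false;
            sent += static_cast<size_t>(n);
        }
        return true;
    }

The receiver does the mirror image: read the 4-byte length, then keep reading until that many payload bytes have arrived.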

Related

Data distribution fairness: is TCP and websocket a good choice?

I am learning about servers and data distribution. Much of what I have read from various sources (here is just one) talks about how market data is distributed over UDP to take advantage of multicasting. Indeed, in this video at this point about building a trading exchange, the presenter mentions how TCP is not the optimal choice to distribute data because it means having to "loop over" every client then send the data to each in turn, meaning that the "first in the list" of clients has a possibly unfair advantage.
I was very surprised then when I learned that I could connect to the Binance feed of market data using a websocket connection, which is TCP, using a command such as
websocat_linux64 wss://stream.binance.com:9443/ws/btcusdt@trade --protocol ws
Many other sources mention Websockets, so they certainly seem to be a common method of delivering market data, indeed this states
"Cryptocurrency trading applications often have real-time market data
streamed to trader front-ends via websockets"
I am confused. If Binance distributes over TCP, is "fairness" really a problem as the YouTube video seems to suggest?
So, overall, my main question is that if I want to distribute data (of any kind generally, but we can keep the market data theme if it helps) to multiple clients (possibly thousands) over the internet, should I use UDP or TCP, and is there any specific technique that could be employed to ensure "fairness" if that is relevant?
I've added the C++ tag as I would use C++, lots of high performance servers are written in C++, and I feel there's a good chance that someone will have done something similar and/or accessed the Binance feeds using C++.
The argument on fairness due to looping, in code, is ridiculous.
The whole field of trading where decisions need to be made quickly, where you need to use new information before someone else does, is called low-latency trading.
This tells you what's important: reducing the latency to a minimum. This is why UDP is used over TCP. TCP has flow control, re-sends data and buffers traffic to deliver it in order. This would make it terrible for low-latency trading.
WebSockets, in addition to being built on top of TCP, are heavier and slower simply due to the extra amount of data (and the processing needed to read and write it).
So even though the looping would be a tiny marginal latency cost, there's plenty of other reasons to pick UDP over TCP and even more over WebSockets.
So why does Binance do it? Their market is not institutional traders with hardware located at the exchanges; it's traders who are willing to accept some latency. If you don't trade to the millisecond, then some extra latency is acceptable, and it makes it much easier to integrate different pieces of software together. It also makes fairness, in latency, not so important. If Alice is 0.253 seconds away and Bob is 0.416 seconds away, does it make any difference who I tell first (by a few microseconds)? Probably not.

How can I recv TCP socket data in one package without dividing

Since I created a TCP socket, everything is fine when sending small amounts of data: no fragmentation, all the data arrives in one packet. But when the data gets bigger and bigger, the TCP stream is divided into pieces, which is really annoying. Is there any option I can set on the socket so that it automatically reassembles the pieces into one packet for me?
It's a byte stream. All the bytes will arrive correctly and in the right order, but not necessarily when you want them. If you need to send anything more complex than one byte, you need another protocol on top of TCP. That's why there are all those other TCP/IP protocols like HTTP, SMTP etc.
No, there is not. There are even situations where you might receive just 1 byte.
Consider using higher level messaging libraries like ZMQ. It handles all the message packing and unpacking for you.
TCP provides you reliable bi-directional byte stream. It takes care of sequencing, transport-layer packetization, retransmission, and flow-control. Decades of research went into optimizing its performance. Pretty nifty. The small price you pay for all this convenience is that you have to write and read the stream in a loop, watching for a complete application protocol message you can process when receiving, and flushing yet unbuffered bytes when sending.
Welcome to socket programming!
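As a concrete sketch of that read-in-a-loop pattern (assuming a hypothetical 4-byte length-prefixed message format; recv() may return fewer bytes than requested, so both reads loop):

    #include <arpa/inet.h>
    #include <sys/socket.h>
    #include <sys/types.h>
    #include <cstdint>
    #include <string>

    // Read exactly 'count' bytes, looping over short reads.
    static bool recv_all(int sock, char* buf, size_t count) {
        size_t got = 0;
        while (got < count) {
            ssize_t n = recv(sock, buf + got, count - got, 0);
            if (n <= 0) return false;            // error or peer closed
            got += static_cast<size_t>(n);
        }
        return true;
    }

    // Read one length-prefixed application message from the stream.
    // A real protocol would sanity-check the length before resizing.
    bool recv_message(int sock, std::string& payload) {
        uint32_t len_net;
        if (!recv_all(sock, reinterpret_cast<char*>(&len_net), sizeof(len_net)))
            return false;
        payload.resize(ntohl(len_net));
        return payload.empty() || recv_all(sock, &payload[0], payload.size());
    }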
I'll chime in here and say that there's pretty much nothing you can do to solve your issue without adding extra dependencies on libraries which handle application protocols for you. There are some lower-level message packing libraries (Google's Protocol Buffers, among others) which may help.
It's probably most beneficial to get used to reading and writing TCP data in a loop. It's proven and very portable, even if you pay a small price in actually writing the streaming codecs yourself.
Try it a few times. It's a useful experience which you can re-use, and it's really not as difficult and annoying once you get the hang of it (like anything else, really).
Furthermore, it's fairly easy to unit-test (rather than dealing with esoteric libraries and uncommon protocols with badly or sparsely documented options).
You can optimize socket reads to return larger chunks, on platforms that support it, by setting a low watermark with setsockopt() and SO_RCVLOWAT. But you will still have to handle the possibility of getting fewer bytes than the watermark.
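A one-liner sketch of that option (where supported, a blocking recv() will then not wake up until at least this many bytes are queued):

    #include <sys/socket.h>

    // Ask the kernel to hold blocking reads until 'lowat' bytes are
    // available; short reads remain possible on errors or close.
    int set_receive_low_watermark(int sock, int lowat) {
        return setsockopt(sock, SOL_SOCKET, SO_RCVLOWAT, &lowat, sizeof(lowat));
    }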
I think you want SOCK_SEQPACKET (or possibly SOCK_RDM). See socket(2).
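For what it's worth, a SOCK_SEQPACKET pair is easy to try locally on Linux (AF_UNIX supports it; over a network you would need something like SCTP):

    #include <sys/socket.h>

    // A local socket pair that preserves message boundaries, unlike a
    // TCP byte stream: each write is received as one whole record.
    int make_seqpacket_pair(int fds[2]) {
        return socketpair(AF_UNIX, SOCK_SEQPACKET, 0, fds);
    }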

Reducing network latency in communication of high volume intranet applications

We have a set of server applications which receive measurement data from equipment/tools. Message transfer time is currently our main bottleneck, so we are interested in reducing it to improve the process. The communication between the tools and server applications is via TCP/IP sockets made using C++ on Redhat Linux.
Is it possible to reduce the message transfer time using hardware, by changing the TCP/IP configuration settings or by tweaking tcp kernel functions? (we can sacrifice security for speed, since communication is on a secure intranet)
Depending on the workload, disabling Nagle's Algorithm on the socket connection can help a lot.
When working with high volumes of small messages, I found this made a huge difference.
From memory, I believe the socket option is called TCP_NODELAY.
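For reference, setting it looks roughly like this (a sketch; the already-created TCP socket is assumed, and the return value should be checked):

    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <sys/socket.h>

    // Disable Nagle's algorithm so small writes are sent immediately
    // instead of being coalesced while waiting for ACKs.
    int disable_nagle(int sock) {
        int on = 1;
        return setsockopt(sock, IPPROTO_TCP, TCP_NODELAY, &on, sizeof(on));
    }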
As @Jerry Coffin proposed, you can switch to UDP. UDP is an unreliable protocol: you can lose packets, they can arrive in the wrong order, or be duplicated, so you need to handle these cases at the application level. Since you can lose some data (as you stated in your comment), there is no need for retransmission (the most complicated part of any reliable protocol). You just need to drop outdated packets, so simple sequence numbering is enough (a minimal sketch follows below).
Yes, you could use RTP (it has sequence numbering), but you don't need it: RTP looks like overkill for this simple case. It has many other features and is used mainly for multimedia streaming.
[EDIT] and similar question here
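A minimal sketch of that drop-outdated scheme (the 4-byte sequence header at the start of each datagram is an assumption; initialize last_seen to 0):

    #include <arpa/inet.h>
    #include <cstdint>
    #include <cstring>

    // Receiver side: keep a packet only if its sequence number is newer
    // than the newest one seen so far; otherwise drop it as outdated.
    bool accept_packet(const char* datagram, uint32_t& last_seen) {
        uint32_t seq_net;
        std::memcpy(&seq_net, datagram, sizeof(seq_net)); // first 4 bytes: sequence number
        uint32_t seq = ntohl(seq_net);

        // Wrap-aware "newer than" test: treats differences up to 2^31-1 as newer.
        if (static_cast<int32_t>(seq - last_seen) <= 0)
            return false;                                 // duplicate or out of date
        last_seen = seq;
        return true;
    }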
On the hardware side, try Intel server NICs and make sure the TCP Offload Engine (TOE) is enabled.
There is also an important decision to make between latency and goodput: if you want better latency at the expense of goodput, consider reducing the interrupt coalescing period. Consult the Intel documentation for further details, as they offer quite a number of configurable parameters.
If you can, the obvious step to reduce latency would be to switch from TCP to UDP.
Yes.
Google "TCP frame size" for details.

Communication between processes

I'm looking for some data to help me decide which would be the better/faster for communication between two independent processes on Linux:
TCP
Named Pipes
Which is worse: the system overhead for the pipes or the tcp stack overhead?
Updated exact requirements:
only local IPC needed
will mostly be a lot of short messages
no cross-platform needed, only Linux
In the past I've used local domain sockets for that sort of thing. My library determined whether the other process was local to the system or remote and used TCP/IP for remote communication and local domain sockets for local communication. The nice thing about this technique is that local/remote connections are transparent to the rest of the application.
Local domain sockets use the same mechanism as pipes for communication and don't have the TCP/IP stack overhead.
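A sketch of the client side of such a local-domain connection (the socket path is a made-up example; a server would bind() and listen() on the same path):

    #include <sys/socket.h>
    #include <sys/un.h>
    #include <cstring>
    #include <unistd.h>

    // Connect to a local (Unix) domain socket at a filesystem path,
    // e.g. "/tmp/myapp.sock" (hypothetical).
    int connect_local(const char* path) {
        int sock = socket(AF_UNIX, SOCK_STREAM, 0);
        if (sock < 0) return -1;

        sockaddr_un addr{};
        addr.sun_family = AF_UNIX;
        std::strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);

        if (connect(sock, reinterpret_cast<sockaddr*>(&addr), sizeof(addr)) < 0) {
            close(sock);
            return -1;
        }
        return sock;   // read/write like any other stream socket
    }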
I don't really think you should worry about the overhead (which will be ridiculously low). Have you made sure, using profiling tools, that TCP overhead is likely to be the bottleneck of your application?
Anyways, as Carl Smotricz said, I would go with sockets because it will be really trivial to separate the applications in the future.
I discussed this in an answer to a previous post. I had to compare socket, pipe, and shared memory communications. Pipes were definitely faster than sockets (maybe by a factor of 2 if I recall correctly ... I can check those numbers when I return to work). But those measurements were just for the pure communication. If the communication is a very small part of the overall work, then the difference will be negligible between the two types of communication.
Edit
Here are some numbers from the test I did a few years ago. Your mileage may vary (particularly if I made stupid programming errors). In this specific test, a "client" and "server" on the same machine echoed 100 bytes of data back and forth. It made 10,000 requests. In the document I wrote up, I did not indicate the specs of the machine, so it is only the relative speeds that may be of any value. But for the curious, the times given here are the average cost per request:
TCP/IP: .067 ms
Pipe with I/O Completion Ports: .042 ms
Pipe with Overlapped I/O: .033 ms
Shared Memory with Named Semaphore: .011 ms
There will be more overhead using TCP - that will involve breaking the data up into packets, calculating checksums and handling acknowledgement, none of which is necessary when communicating between two processes on the same machine. Using a pipe will just copy the data into and out of a buffer.
I don't know if this suits you, but a very common way of doing IPC (interprocess communication) under Linux is shared memory. It's extremely fast (I haven't profiled it, but it is just shared data in RAM with synchronization logic around it).
The main problem with this approach is synchronization: you must build a little semaphore-based system around it to make sure one process is not writing while the other is trying to read.
A very simple starter tutorial is here.
This is not as portable as using sockets, but the concept would be the same, so if you're migrating this to Windows, you will just have to change the shared memory create/attach layer.
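A rough POSIX sketch of the idea (the names "/myapp_shm" and "/myapp_sem" and the 4 KiB size are made up; on Linux, link with -lrt -pthread; error handling is omitted):

    #include <fcntl.h>
    #include <semaphore.h>
    #include <sys/mman.h>
    #include <cstring>
    #include <unistd.h>

    int main() {
        const size_t kSize = 4096;

        // Create (or open) a named shared memory region and map it in.
        int fd = shm_open("/myapp_shm", O_CREAT | O_RDWR, 0600);
        ftruncate(fd, kSize);
        char* mem = static_cast<char*>(
            mmap(nullptr, kSize, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0));

        // The named semaphore keeps one process from writing while the
        // other reads, which is the hazard the answer warns about.
        sem_t* sem = sem_open("/myapp_sem", O_CREAT, 0600, 1);

        sem_wait(sem);                             // lock
        std::strcpy(mem, "hello from process A");  // critical section
        sem_post(sem);                             // unlock

        munmap(mem, kSize);
        close(fd);
        sem_close(sem);
        return 0;
    }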
Two things to consider:
Connection setup cost
Continuous Communication cost
On TCP:
(1) more costly - 3-way handshake overhead is required for a (potentially) unreliable channel.
(2) more costly - IP-level overhead (checksums etc.) plus TCP overhead (sequence numbers, acknowledgements, checksums etc.), pretty much all of which is unnecessary on the same machine, because the channel is supposed to be reliable and not introduce network-related impairments (e.g. packet reordering).
But I would still go with TCP provided it makes sense (i.e. depends on the situation) because of its ubiquity (read: easy cross-platform support) and the overhead shouldn't be a problem in most cases (read: profile, don't do premature optimization).
Update: if cross-platform support isn't required and the accent is on performance, then go with named/domain pipes, as I am pretty sure the platform developers will have optimized out the functionality only needed for handling network-level impairments.
A Unix domain socket is a very good compromise: it doesn't have the overhead of TCP, but it is more flexible than the pipe solution. A point you did not consider is that sockets are bidirectional, while named pipes are unidirectional.
I think the pipes will be a little lighter, but I'm just guessing.
But since pipes are a local thing, there's probably a lot less complicated code involved.
Other people might tell you to try and measure both to find out. It's hard to go wrong with this answer, but you may not be willing to invest the time. That would leave you hoping my guess is correct ;)

What is the optimal size of a UDP packet for maximum throughput?

I need to send packets from one host to another over a potentially lossy network. In order to minimize packet latency, I'm not considering TCP/IP. But I wish to maximize the throughput using UDP. What should be the optimal size of a UDP packet to use?
Here are some of my considerations:
The MTU size of the switches in the network is 1500. If I use a large packet, for example 8192, this will cause fragmentation. Loss of one fragment will result in the loss of the entire packet, right?
If I use smaller packets, I'll incur the overhead of the UDP and IP header
If I use a really large packet, what is the largest that I can use? I read that the largest datagram size is 65507. What is the buffer size I should use to allow me to send such sizes? Would that help to bump up my throughput?
What is the typical maximum datagram size supported by common OSes (e.g. Windows, Linux, etc.)?
Updated:
Some of the receivers of the data are embedded systems for which TCP/IP stack is not implemented.
I know that this place is filled with people who are very adamant about using what's available. But I hope to get better answers than ones focusing on the MTU alone.
The best way to find the ideal datagram size is to do exactly what TCP itself does to find the ideal packet size: Path MTU discovery.
TCP also has a widely used option where both sides tell the other what their MSS (basically, MTU minus headers) is.
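On Linux you can also let the kernel do the discovery for you on a connected UDP socket and then ask what it has learned (a sketch; IP_MTU_DISCOVER and IP_MTU are Linux-specific, and error checking is omitted):

    #include <netinet/in.h>
    #include <sys/socket.h>

    // Set the don't-fragment policy, then query the path MTU the kernel
    // currently believes; this only works on a connect()ed socket.
    int query_path_mtu(int sock) {
        int val = IP_PMTUDISC_DO;   // send with DF set; ICMP "fragmentation needed" updates the estimate
        setsockopt(sock, IPPROTO_IP, IP_MTU_DISCOVER, &val, sizeof(val));

        int mtu = 0;
        socklen_t len = sizeof(mtu);
        getsockopt(sock, IPPROTO_IP, IP_MTU, &mtu, &len);
        return mtu;                 // subtract 28 (20 IP + 8 UDP) for the maximum payload
    }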
Alternative answer: be careful to not reinvent the wheel.
TCP is the product of decades of networking experience. There is a reason for every, or almost every, thing it does. It has several algorithms most people do not think about often (congestion control, retransmission, buffer management, dealing with reordered packets, and so on).
If you start reimplementing all the TCP algorithms, you risk ending up with (paraphrasing Greenspun's Tenth Rule) an "ad hoc, informally-specified, bug-ridden, slow implementation of TCP".
If you have not done so yet, it could be a good idea to look at some recent alternatives to TCP/UDP, like SCTP or DCCP. They were designed for niches where neither TCP nor UDP was a good match, precisely to allow people to use an already "debugged" protocol instead of reinventing the wheel for every new application.
The easiest workaround to find the MTU in C# is to send UDP packets with the DontFragment flag set to true. If that throws an exception, reduce the packet size, and repeat until no exception is thrown. You can start with a packet size of 1500.
Another thing to consider is that some network devices don't handle fragmentation very well. We've seen many routers that drop fragmented UDP packets or packets that are too big. The suggestion by CesarB to use Path MTU is a good one.
Maximum throughput is not driven only by the packet size (though this contributes, of course). Minimizing latency and maximizing throughput are often at odds with each other. In TCP you have the Nagle algorithm, which is designed (in part) to increase overall throughput. However, some protocols (e.g., telnet) often disable Nagle (i.e., set the No Delay bit) in order to improve latency.
Do you have some real time constraints for the data? Streaming audio is different than pushing non-realtime data (e.g., logging information) as the former benefits more from low latency while the latter benefits from increased throughput and perhaps reliability. Are there reliability requirements? If you can't miss packets and have to have a protocol to request retransmission, this will reduce overall throughput.
There are a myriad of other factors that go into this and (as was suggested in another response) at some point you get a bad implementation of TCP. That being said, if you want to achieve low latency and can tolerate loss, using UDP with an overall packet size set to the path MTU (be sure to set the payload size to account for headers) is likely the optimal solution (especially if you can ensure that UDP gets from one end to the other).
Well, I've got a non-MTU answer for you. Using a connected UDP socket should speed things up for you. There are two reasons to call connect on your UDP socket. The first is efficiency. When you call sendto on an unconnected UDP socket what happens is that the kernel temporarily connects the socket, sends the data and then disconnects it. I read about a study indicating that this takes up nearly 30% of processing time when sending. The other reason to call connect is so that you can get ICMP error messages. On an unconnected UDP socket the kernel doesn't know what application to deliver ICMP errors to and so they just get discarded.
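A sketch of what that looks like (the address and port parameters are placeholders):

    #include <arpa/inet.h>
    #include <sys/socket.h>
    #include <cstdint>
    #include <unistd.h>

    // connect() fixes the peer address on a UDP socket: plain send()
    // and recv() can then be used, and ICMP errors are reported back
    // to this socket instead of being discarded.
    int connect_udp(const char* ip, uint16_t port) {
        int sock = socket(AF_INET, SOCK_DGRAM, 0);
        if (sock < 0) return -1;

        sockaddr_in peer{};
        peer.sin_family = AF_INET;
        peer.sin_port = htons(port);
        inet_pton(AF_INET, ip, &peer.sin_addr);

        if (connect(sock, reinterpret_cast<sockaddr*>(&peer), sizeof(peer)) < 0) {
            close(sock);
            return -1;
        }
        return sock;   // use send()/recv() instead of sendto()/recvfrom()
    }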
The IP header is >= 20 bytes (but usually 20) and the UDP header is 8 bytes. This leaves you 1500 - 28 = 1472 bytes for your data. Path MTU discovery finds the smallest MTU on the way to the destination, but this does not necessarily mean that using the smallest MTU will give you the best possible performance. I think the best way is to do a benchmark. Or maybe you should not care about the smallest MTU on the path at all: a network device may very well use a small MTU and also transfer packets very fast, and the value may change in the future, so you cannot discover it once and save it for later use; you have to do it periodically. If I were you, I would set the MTU to something like 1440 and benchmark the application.
Even though the MTU at the switch is 1500, you can have situations (like tunneling through a VPN) that wrap a few extra headers around the packet; you may do better to reduce the size slightly and go with 1450 or so.
Can you simulate the network and test performance with different packet sizes?