Data distribution fairness: are TCP and WebSockets a good choice? - c++

I am learning about servers and data distribution. Much of what I have read from various sources (here is just one) talks about how market data is distributed over UDP to take advantage of multicasting. Indeed, in this video at this point about building a trading exchange, the presenter mentions that TCP is not the optimal choice for distributing data because it means having to "loop over" every client and send the data to each in turn, meaning that the client "first in the list" has a possibly unfair advantage.
I was very surprised, then, when I learned that I could connect to the Binance market data feed over a WebSocket connection, which runs on top of TCP, using a command such as
websocat_linux64 wss://stream.binance.com:9443/ws/btcusdt#trade --protocol ws
Many other sources mention WebSockets, so they certainly seem to be a common method of delivering market data; indeed, this states:
"Cryptocurrency trading applications often have real-time market data
streamed to trader front-ends via websockets"
I am confused. If Binance distributes over TCP, is "fairness" really a problem as the YouTube video seems to suggest?
So, overall, my main question is: if I want to distribute data (of any kind generally, but we can keep the market data theme if it helps) to multiple clients (possibly thousands) over the internet, should I use UDP or TCP, and is there any specific technique that could be employed to ensure "fairness", if that is relevant?
I've added the C++ tag as I would use C++; lots of high-performance servers are written in C++, and I feel there's a good chance that someone has done something similar and/or accessed the Binance feeds using C++.

The argument that looping over clients in code causes unfairness is ridiculous.
The field of trading where decisions need to be made quickly, and where you need to act on new information before someone else does, is called low-latency trading.
This tells you what's important: reducing the latency to a minimum. This is why UDP is used over TCP. TCP has flow control, re-sends data and buffers traffic to deliver it in order. This would make it terrible for low-latency trading.
WebSockets, in addition to being built on top of TCP, are heavier and slower simply due to the extra framing data (and the processing needed to read and write it).
So even though the looping would only be a tiny, marginal latency cost, there are plenty of other reasons to pick UDP over TCP, and even more to pick it over WebSockets.
So why does Binance do it? Their market is not institutional traders with hardware co-located at the exchange; it's traders who are willing to accept some latency. If you don't trade to the millisecond, then some extra latency is acceptable, and it makes it much easier to integrate different pieces of software together. It also makes fairness in latency much less important: if Alice is 0.253 seconds away and Bob is 0.416 seconds away, does it make any difference who I tell first (by a few microseconds)? Probably not.
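For contrast with the per-client TCP loop, here is a minimal sketch (assuming Linux/Berkeley sockets) of the UDP multicast approach the exchanges favour: the publisher sends each tick once to a group address and the network fans it out to every subscriber, so there is no client list to iterate over. The group address, port and payload below are invented purely for illustration.

    // Publish one datagram to a UDP multicast group; every subscriber that has
    // joined the group receives it, with no per-client send loop.
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <unistd.h>
    #include <string>

    int main() {
        int fd = socket(AF_INET, SOCK_DGRAM, 0);
        if (fd < 0) return 1;

        sockaddr_in group{};
        group.sin_family = AF_INET;
        group.sin_port = htons(30001);                    // illustrative port
        inet_pton(AF_INET, "239.1.1.1", &group.sin_addr); // illustrative group address

        unsigned char ttl = 1;                            // keep it on the local segment
        setsockopt(fd, IPPROTO_IP, IP_MULTICAST_TTL, &ttl, sizeof(ttl));

        std::string tick = "BTCUSDT,42000.00,0.5";        // fake market-data tick
        sendto(fd, tick.data(), tick.size(), 0,
               reinterpret_cast<sockaddr*>(&group), sizeof(group));

        close(fd);
        return 0;
    }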

Related

How to use ZeroMQ for multiple Server-Client pairs?

I'm implementing a performance-heavy two-party protocol in C++14 utilising multithreading, and am currently using ZeroMQ as the network layer.
The application has the following simple architecture:
One main server-role,
One main client-role,
Both server and client spawn a fixed number n of threads
All n concurrent thread-pairs execute a performance- and communication-heavy, mutual but exclusive, protocol exchange; i.e. they run in n fixed pairs and should not mix or interchange any data with anyone but their fixed pairwise opponent.
My current design uses a single ZeroMQ Context()-instance on both server and client, shared between all n local threads, and each client/server thread-pair creates a ZMQ_PAIR socket (I just increment the port #) on the local, shared context for communication.
My question
Is there a smarter or more efficient way of doing this?
i.e.: is there a natural way of using ROUTERS and DEALERS that might increase performance?
I do not have much experience with socket programming, and with my approach the number of sockets scales directly with n (the number of client-server thread-pairs). This might reach a couple of thousand, and I'm unsure whether this is a problem or not.
I have control of both the server and client machines and source code and I have no outer restrictions that I need to worry about. All I care about is performance.
I've looked through all the patterns here, but I cannot find any that match the case where the client-server pairs are fixed, i.e. I cannot use load-balancing and the like.
Happy man!
ZeroMQ is a lovely and powerful tool for highly scalable, low-overhead Formal Communication Patterns (behavioural, yes: emulating some sort of mutual peer behaviour, "One Asks, the other Replies", et al.).
Your pattern is quite simple, behaviourally-unrestricted and ZMQ_PAIR may serve well for this.
Performance
There ought to be some more detail on the quantitative nature of this attribute:
a process-to-process latency [us]
a memory-footprint of a System-under-Test (SuT) architecture [MB]
a peak-amount of data-flow a SuT can handle [MB/s]
Performance Tips ( if quantitatively supported by observed performance data )
may increase I/O-performance by increasing Context( nIOthreads ) on instantiation
may fine-tune I/O-performance by hard-mapping individual thread# -> Context.IO-thread#, which is helpful both for distributing the workload and for keeping "separate" localhost IO-thread(s) free / ready for higher-priority signalling and other such needs (a sketch of both tips follows this list)
shall set up application-specific ToS-labelling of prioritised types of traffic, so as to allow advanced processing at the network layer along the route-segments between the client and server
if the memory footprint hurts (ZeroMQ is not zero-copy in its TCP-protocol handling at the operating-system kernel level), one may try moving to a younger sister of ZeroMQ -- authored by Martin SUSTRIK, a co-father of ZeroMQ -- the POSIX-compliant nanomsg, with similar motivation and attractive performance figures. Worth knowing about, at least.
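As a rough, hedged illustration of the first two tips, here is a sketch using the plain libzmq C API: the Context is created with more I/O threads than the default single one, and a socket is pinned to a specific I/O thread via ZMQ_AFFINITY. The thread count and endpoint are illustrative only.

    #include <zmq.h>
    #include <cstdint>

    int main() {
        void* ctx = zmq_ctx_new();
        zmq_ctx_set(ctx, ZMQ_IO_THREADS, 4);     // more I/O threads than the default 1

        void* push = zmq_socket(ctx, ZMQ_PUSH);
        std::uint64_t affinity = 1ULL << 0;      // pin this socket to I/O thread #0
        zmq_setsockopt(push, ZMQ_AFFINITY, &affinity, sizeof(affinity));

        zmq_bind(push, "tcp://*:5555");          // illustrative endpoint

        zmq_close(push);
        zmq_ctx_term(ctx);
        return 0;
    }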
Could ROUTER or DEALER increase overall performance?
No, they could not. Bearing in mind your stated architecture (declared to be communication-heavy), other, even more sophisticated Scalable Formal Communication Pattern behaviours that suit other needs do not add any performance benefit; on the contrary, they would cost you additional processing overhead without delivering any justifying improvement.
While your Formal Communication remains as defined, no additional bells and whistles are needed.
One point may be noted on the ZMQ_PAIR archetype: some sources describe it as rather experimental. If, SuT-testing observations aside, your gut feeling does not make you happy to live with this, do not mind a slight re-engineering step that keeps all the freedom of an un-prescribed Formal Communication Pattern behaviour while putting "non"-experimental pipes under the hood -- just replace the solo ZMQ_PAIR with a pair of ZMQ_PUSH + ZMQ_PULL sockets and use messages with just a one-way ticket. Having the stated full control of the SuT design and implementation, this is all within your competence.
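A minimal sketch of that re-engineering step, showing one side of a single fixed thread-pair built from two one-way ZMQ_PUSH -> ZMQ_PULL pipes instead of a solo ZMQ_PAIR. The port numbers are illustrative; in the real design they would be derived from the pair index, as in the question.

    #include <zmq.h>
    #include <cstdio>

    int main() {
        void* ctx = zmq_ctx_new();

        // "Server" side of one pair: sends on :6000, receives on :6001.
        void* tx = zmq_socket(ctx, ZMQ_PUSH);
        void* rx = zmq_socket(ctx, ZMQ_PULL);
        zmq_bind(tx, "tcp://*:6000");
        zmq_bind(rx, "tcp://*:6001");

        // The matching "client" thread connects a PULL to :6000 and a PUSH to
        // :6001, so the exchange stays strictly within this fixed pair.
        const char msg[] = "step-1 payload";
        zmq_send(tx, msg, sizeof(msg), 0);

        char buf[256];
        int n = zmq_recv(rx, buf, sizeof(buf), 0);   // blocks until the peer replies
        if (n > 0) std::printf("got %d bytes\n", n);

        zmq_close(tx);
        zmq_close(rx);
        zmq_ctx_term(ctx);
        return 0;
    }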
How fast could I go?
There are benchmark test records published for both the ZeroMQ and nanomsg performance / latency envelopes, for unloaded network transports across traffic-free route-segments (sure).
If your SuT design strives to go even faster -- say under some 800 ns end-to-end -- there are other means to achieve this, but your design will have to follow a distributed-computing strategy other than message-based data exchange, and your project budget will have to accommodate the additional expenditure for the necessary ultra-low-latency hardware infrastructure.
It may be surprising, but it is definitely doable, and pretty attractive for systems where hundreds of nanoseconds are a must-have target within a colocation data centre.

C/C++ technologies involved in sending data across networks very fast

In terms of low latency (I am thinking about financial exchanges / co-location -- people who care about microseconds), what options are there for sending packets between C++ programs on two Unix computers?
I have heard about kernel-bypass network cards, but does this mean you program against some sort of API for the card? I presume this would be a faster option in comparison to using standard Unix Berkeley sockets?
I would really appreciate any contribution, especially from persons who are involved in this area.
EDITED from milliseconds to microseconds
EDITED I am kinda hoping to receive answers based more upon C/C++, rather than network hardware technologies. It was intended as a software question.
UDP sockets are fast, low latency, and reliable enough when both machines are on the same LAN.
TCP is much slower than UDP but when the two machines are not on the same LAN, UDP is not reliable.
Software profiling will stamp out the obvious problems in your program. However, when you are talking about network performance, network latency is likely to be your largest bottleneck. If you are using TCP, then you want to do things that avoid congestion and loss on your network, to prevent retransmissions. There are a few things you can do to cope:
Use a network with bandwidth and reliability guarantees.
Properly size your TCP parameters to maximize utilization without incurring loss (a buffer-sizing sketch follows below).
Use error correction in your data transmission to correct for the small amount of loss you might encounter.
Or you can avoid using TCP altogether. But if reliability is required, you will end up implementing much of what is already in TCP.
But you can leverage existing projects that have already thought through a lot of these issues. The UDT project is one I am aware of that seems to be gaining traction.
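To make the buffer-sizing point concrete, here is a hedged sketch of enlarging the kernel send/receive buffers on a Linux TCP socket. The 4 MiB figure is arbitrary; the right value depends on your bandwidth-delay product and is capped by the kernel's net.core.rmem_max / wmem_max settings.

    #include <sys/socket.h>

    // Enlarge the socket buffers on an already-created TCP socket.
    // Returns 0 on success, -1 on error.
    int tune_tcp_buffers(int fd) {
        int buf_bytes = 4 * 1024 * 1024;   // illustrative: size to bandwidth * RTT
        if (setsockopt(fd, SOL_SOCKET, SO_SNDBUF, &buf_bytes, sizeof(buf_bytes)) < 0)
            return -1;
        if (setsockopt(fd, SOL_SOCKET, SO_RCVBUF, &buf_bytes, sizeof(buf_bytes)) < 0)
            return -1;
        return 0;
    }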
At some point in the past, I worked with a packet-sending driver that was loaded into the Windows kernel. Using this driver it was possible to generate a packet stream some 10-15 times faster (I do not remember the exact number) than from an app using the sockets layer.
The advantage is simple: the sending request comes directly from the kernel and bypasses multiple layers of software: sockets, protocol handling (even for a UDP packet, some protocol-driver processing is still needed), context switches, etc.
Usually reduced latency comes at a cost of reduced robustness. Compare for example the (often greatly advertised) fastpath option for ADSL. The reduced latency due to shorter packet transfer times comes at a cost of increased error susceptibility. Similar technologies might exist for a large number of network media. So it very much depends on the hardware technologies involved. Your question suggests you're referring to Ethernet, but it is unclear whether the link is Ethernet-only or something else (ATM, ADSL, …), and whether some other network technology would be an option as well. It also very much depends on geographical distances.
EDIT:
I got a bit carried away with the hardware aspects of this question. To provide at least one aspect tangible at the level of application design: have a look at zero-copy network operations like sendfile(2). They can be used to eliminate one possible cause of latency, although only in cases where the original data came from some source other than the application memory.
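For concreteness, a small sketch of that sendfile(2) path on Linux: the file's bytes go from the page cache straight to the connected socket without a round trip through user-space buffers. The helper name is made up and error handling is kept to a minimum.

    #include <sys/sendfile.h>
    #include <sys/stat.h>
    #include <fcntl.h>
    #include <unistd.h>

    // Send an entire file over an already-connected TCP socket with zero copies
    // through user space. Returns bytes sent, or -1 on error.
    ssize_t send_file_zero_copy(int sock_fd, const char* path) {
        int file_fd = open(path, O_RDONLY);
        if (file_fd < 0) return -1;

        struct stat st{};
        fstat(file_fd, &st);

        off_t offset = 0;
        ssize_t sent = sendfile(sock_fd, file_fd, &offset, st.st_size);

        close(file_fd);
        return sent;
    }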
In my day job, I work for a certain stock exchange. The answer below is my own opinion, drawn from the software solutions we provide for exactly this kind of high-throughput, low-latency data transfer. It is not intended in any way as a marketing pitch (please, I am a dev); it is just to outline the essential components of the software stack in this kind of solution for fast data (the data could be stock/trading market data or, in general, any data):
1] Physical layer - a network interface card in the case of a TCP/UDP/IP-based Ethernet network, or a very fast / high-bandwidth interface called an InfiniBand Host Channel Adapter. In the IP/Ethernet case the software stack is part of the OS. For InfiniBand the card manufacturers (Intel, Mellanox) provide their drivers, firmware and an API library against which one has to implement the socket code (even InfiniBand uses its own 'socket-ish' protocol for network communication between two nodes).
2] The next layer above the physical layer is a middleware which abstracts the lower network-protocol nitty-gritty and provides some kind of interface for data I/O from the physical layer to the application layer. This layer also provides some kind of network data-quality assurance (if using TCP).
3] The last layer is an application which we provide on top of the middleware. Anyone who gets 1] and 2] from us can develop a low-latency / high-throughput 'data transfer over network' kind of app for stock trading, algorithmic trading and similar applications, using a choice of programming-language interfaces - C, C++, Java, C#.
Basically a client like you can develop their own application in C/C++ using the APIs we provide, which take care of interacting with the NIC or HCA (i.e. the actual physical network interface) to send and receive data fast, really fast.
We have a comprehensive solution catering to the different quality and latency profiles demanded by our clients - some are fine with microsecond latency but need high data quality / very few errors; some can tolerate a few errors but need nanosecond latency; some need microsecond latency with no errors tolerable; ...
If you need, or are in any way interested in, this kind of solution, ping me offline at the contacts mentioned here on SO.

Reducing network latency in communication of high volume intranet applications

We have a set of server applications which receive measurement data from equipment/tools. Message transfer time is currently our main bottleneck, so we are interested in reducing it to improve the process. The communication between the tools and server applications is via TCP/IP sockets made using C++ on Redhat Linux.
Is it possible to reduce the message transfer time using hardware, by changing the TCP/IP configuration settings, or by tweaking TCP kernel parameters? (We can sacrifice security for speed, since communication is on a secure intranet.)
Depending on the workload, disabling Nagle's Algorithm on the socket connection can help a lot.
When working with high volumes of small messages, I found this made a huge difference.
From memory, I believe the socket option is called TCP_NODELAY.
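For reference, a minimal sketch of setting that option on an existing, connected socket; it is a plain POSIX/Linux socket option rather than anything C++-specific.

    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <sys/socket.h>

    // Disable Nagle's algorithm so small writes go out immediately.
    // Returns 0 on success, -1 on error.
    int disable_nagle(int fd) {
        int one = 1;
        return setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, &one, sizeof(one));
    }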
As @Jerry Coffin proposed, you can switch to UDP. UDP is an unreliable protocol, which means you can lose packets, or they can arrive in the wrong order, or be duplicated, so you need to handle these cases at the application level. Since you can afford to lose some data (as you stated in your comment), there is no need for retransmission (the most complicated part of any reliable protocol); you just need to drop outdated packets. Use simple sequence numbering and you're done (a sketch follows below).
Yes, you could use RTP (it has sequence numbering), but you don't need it. RTP looks like overkill for your simple case; it has many other features and is used mainly for multimedia streaming.
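Here is a hedged sketch of that sequence-numbering idea: each datagram carries a monotonically increasing sequence number and the receiver simply drops anything older than the newest value it has already seen. The packet layout and class names are invented for illustration.

    #include <cstdint>
    #include <cstring>
    #include <optional>
    #include <vector>

    struct Measurement {
        std::uint64_t seq;   // sender increments this for every datagram
        double value;        // the actual payload
    };

    class LatestOnlyReceiver {
    public:
        // Returns the measurement if it is newer than anything seen so far,
        // otherwise std::nullopt (duplicate or out-of-date packet).
        std::optional<Measurement> accept(const std::vector<char>& datagram) {
            if (datagram.size() < sizeof(Measurement)) return std::nullopt;
            Measurement m{};
            std::memcpy(&m, datagram.data(), sizeof(m));
            if (have_last_ && m.seq <= last_seq_) return std::nullopt;  // stale
            last_seq_ = m.seq;
            have_last_ = true;
            return m;
        }

    private:
        std::uint64_t last_seq_ = 0;
        bool have_last_ = false;
    };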
[EDIT] and similar question here
On the hardware side, try Intel server NICs and make sure the TCP Offload Engine (TOE) is enabled.
There is also an important trade-off to make between latency and goodput: if you want better latency at the expense of goodput, consider reducing the interrupt-coalescing period. Consult the Intel documentation for further details, as they offer quite a number of configurable parameters.
If you can, the obvious step to reduce latency would be to switch from TCP to UDP.
Yes.
Google "TCP frame size" for details.

Sockets: large message performance

I have a question regarding the use of Berkeley sockets under Ubuntu. In terms of performance and reliability, which option is best: sending a large number of short messages, or a small number of large ones? I do not know which design rule I should follow here.
Thank you all!
In terms of reliability, unless you have very specific requirements it isn't worth worrying about much. If you are talking about TCP, it is going to do a better job than you at managing things until you come across some edge case that really requires you to fiddle with some knobs, in which case a more specific question would be in order. In terms of packet size, with TCP, unless you circumvent Nagle's algorithm, you don't really have the control you might think.
With UDP, arguably the best thing to do is use path MTU discovery, which TCP does for you automatically, but as a general rule you are fine just using something in the 500-byte range. If you start to get too fancy you will find yourself reinventing parts of TCP.
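A hedged sketch of letting the Linux kernel do that path MTU discovery for a connected UDP socket: enable the "do" discovery mode so datagrams are never fragmented locally, then read back IP_MTU. The helper name is made up and error handling is omitted.

    #include <netinet/in.h>
    #include <sys/socket.h>

    // udp_fd must already be connect()ed to the peer for IP_MTU to be meaningful.
    int query_path_mtu(int udp_fd) {
        int pmtu_mode = IP_PMTUDISC_DO;   // set DF; never fragment locally
        setsockopt(udp_fd, IPPROTO_IP, IP_MTU_DISCOVER, &pmtu_mode, sizeof(pmtu_mode));

        int mtu = 0;
        socklen_t len = sizeof(mtu);
        getsockopt(udp_fd, IPPROTO_IP, IP_MTU, &mtu, &len);
        return mtu;                       // e.g. 1500 on plain Ethernet
    }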
With TCP, one option is to use the TCP_CORK socket option; see the tcp(7) man page. Set TCP_CORK on the socket, write a batch of small messages, then remove the TCP_CORK option, and they will be transmitted in a minimum number of network packets. This can increase throughput at the cost of increased latency.
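A small sketch of that batching pattern on Linux; the helper and the messages it sends are purely illustrative.

    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <sys/socket.h>
    #include <unistd.h>
    #include <cstring>

    // Cork the socket, queue several small messages, then uncork so they leave
    // in as few packets as possible (at the cost of some added latency).
    void send_batch_corked(int fd, const char* const* msgs, int count) {
        int on = 1, off = 0;
        setsockopt(fd, IPPROTO_TCP, TCP_CORK, &on, sizeof(on));

        for (int i = 0; i < count; ++i)
            write(fd, msgs[i], std::strlen(msgs[i]));   // queued, not yet flushed

        setsockopt(fd, IPPROTO_TCP, TCP_CORK, &off, sizeof(off));
    }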
Each network message carries roughly 40 bytes of TCP/IP header overhead, but large messages are harder to route and easier to lose. If you are talking about UDP, the best message size is one that fits in a single Ethernet frame (a 1500-byte MTU, leaving about 1472 bytes of UDP payload); if you are using TCP, leave it up to the network layer to decide how much to send.
Performance you can find out for yourself with iperf: run a few experiments and you will see. As for reliability, as far as I understand it, if you use TCP then the connection guarantees that data will be delivered, provided of course that the connection is not broken.
"In terms of performance and reliability which option is best"
On a lossy layer, performance and reliability are almost a straight trade-off against each other, and greater experts than us have put years of work into finding sweet spots, and techniques which beat the straight trade-off and improve both at once.
You have two basic options:
1) Use stream sockets (TCP). Any "Messages" that your application knows about are defined at the application layer rather than at the sockets. For example, you might consider an HTTP request to be a message, and the response to be another in the opposite direction. You should see your job as being to keep the output buffer as full as possible, and the input buffer as empty as possible, at all times. Reliability has pretty much nothing to do with message length, and for a fixed size of data performance is mostly determined by the number of request-response round-trips performed rather than the number of individual writes on the socket. Obviously if you're sending one byte at a time with TCP_NODELAY then you'd lose performance, but that's pretty extreme.
2) Use datagrams (UDP). "Messages" are socket-layer entities. Performance is potentially better than TCP, but you have to invent your own system for reliability, and potentially this will hammer performance by requiring data to be re-sent. TCP has the same issue, but greater minds, etc. Datagram length can interact very awkwardly with both performance and reliability, hence the MTU discovery mentioned by Duck. If you send a large packet and it's fragmented, then if any fragment goes astray then your message won't arrive. There's a size N, where if you send N-size datagrams they won't fragment, but if you send N+1-size datagrams they will. Hence that +1 doubles the number of failed messages. You don't know N until you know the network route (and maybe not even then). So it's basically impossible to say at compile time what sizes will give good performance: even if you measure it, it'll be different for different users. If you want to optimise, there's no alternative to knowing your stuff.
UDP is also more work than TCP if you need reliability, which is built in to TCP. Potentially UDP has big payoffs, but should probably be thought of as the assembler programming of sockets.
There's also (3): use a protocol for improved UDP reliability, such as RUDP. This isn't part of Berkeley-style sockets APIs, so you'll need a library to help.

C++ Socket Server - Unable to saturate CPU

I've developed a mini HTTP server in C++, using boost::asio, and now I'm load testing it with multiple clients, and I've been unable to get close to saturating the CPU. I'm testing on an Amazon EC2 instance, and getting about 50% usage of one CPU, 20% of another, and the remaining two are idle (according to htop).
Details:
The server fires up one thread per core
Requests are received, parsed, processed, and responses are written out
The requests are for data, which is read out of memory (read-only for this test)
I'm 'loading' the server using two machines, each running a java application, running 25 threads, sending requests
I'm seeing about 230 requests/sec throughput (this is application requests, which are composed of many HTTP requests)
So, what should I look at to improve this result? Given the CPU is mostly idle, I'd like to leverage that additional capacity to get a higher throughput, say 800 requests/sec or whatever.
Ideas I've had:
The requests are very small and often fulfilled in a few ms; I could modify the client to send/compose bigger requests (perhaps using batching)
I could modify the HTTP server to use the Select design pattern; is this appropriate here?
I could do some profiling to try to understand what the bottleneck(s) are
boost::asio is not as thread-friendly as you would hope - there is a big lock around the epoll code in boost/asio/detail/epoll_reactor.hpp which means that only one thread can call into the kernel's epoll syscall at a time. And for very small requests this makes all the difference (meaning you will only see roughly single-threaded performance).
Note that this is a limitation of how boost::asio uses the Linux kernel facilities, not necessarily the Linux kernel itself. The epoll syscall does support multiple threads when using edge-triggered events, but getting it right (without excessive locking) can be quite tricky.
BTW, I have been doing some work in this area (combining a fully-multithreaded edge-triggered epoll event loop with user-scheduled threads/fibers) and made some code available under the nginetd project.
As you are using EC2, all bets are off.
Try it using real hardware, and then you might be able to see what's happening. Trying to do performance testing in VMs is basically impossible.
I have not yet worked out what EC2 is useful for; if someone finds out, please let me know.
From your comments on network utilization, you do not seem to have much network movement.
3 + 2.5 MiB/sec is around the 50Mbps ball-park (compared to your 1Gbps port).
I'd say you are having one of the following two problems,
Insufficient work-load (low request-rate from your clients)
Blocking in the server (interfered response generation)
Looking at cmeerw's notes and your CPU utilization figures (50% + 20% + 0% + 0% across the four cores), it seems most likely to be a limitation in your server implementation.
I second cmeerw's answer (+1).
230 requests/sec seems very low for such simple async requests. As such, using multiple threads is probably premature optimisation - get it working properly and tuned in a single thread, and see if you still need them. Just getting rid of unneeded locking may get things up to speed.
This article has some detail and discussion on I/O strategies for web server-style performance circa 2003. Anyone got anything more recent?
ASIO is fine for small to medium tasks, but it isn't very good at leveraging the power of the underlying system. Neither are raw socket calls, or even IOCP on Windows, but if you are experienced you will always do better than ASIO. Either way there is a lot of overhead with all of those methods, just more with ASIO.
For what it is worth, using raw socket calls, my custom HTTP server can serve 800K dynamic requests per second on a 4-core i7. It is serving from RAM, which is where you need to be for that level of performance. At this level of performance the network driver and OS are consuming about 40% of the CPU. Using ASIO I can get around 50 to 100K requests per second; its performance is quite variable and mostly bound in my app. The post by @cmeerw mostly explains why.
One way to improve performance is by implementing a UDP proxy. By intercepting HTTP requests and then routing them over UDP to your backend UDP-HTTP server, you can bypass a lot of TCP overhead in the operating-system stacks. You can also have front ends which pipe through on UDP themselves, which shouldn't be too hard to do yourself. An advantage of an HTTP-UDP proxy is that it allows you to use any good frontend without modification, and you can swap them out at will without any impact. You just need a couple more servers to implement it. This modification on my example lowered the OS CPU usage to 10%, which increased my requests per second to just over a million on that single backend. And FWIW, you should always have a frontend-backend setup for any performant site, because the frontends can cache data without slowing down the more important dynamic-request backend.
The future seems to be writing your own driver that implements its own network stack, so you can get as close to the requests as possible and implement your own protocol there - which probably isn't what most programmers want to hear, as it is more complicated. In my case I would be able to use 40% more CPU and move to over 1 million dynamic requests per second. The UDP-proxy method can get you close to optimal performance without needing to do this; however, you will need more servers - though if you are doing this many requests per second you will usually need multiple network cards and multiple frontends to handle the bandwidth anyway, so having a couple of lightweight UDP proxies in there isn't that big a deal.
Hope some of this can be useful to you.
How many instances of io_service do you have? Boost.Asio has an example that creates an io_service per CPU and uses them in a round-robin manner.
You can still create four threads and assign one per CPU, but each thread can poll on its own io_service.
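A rough sketch of that io_service-per-CPU, round-robin arrangement (using the older io_service naming; the pool shape is illustrative rather than the exact Boost example):

    #include <boost/asio.hpp>
    #include <cstddef>
    #include <memory>
    #include <thread>
    #include <vector>

    class IoServicePool {
    public:
        explicit IoServicePool(std::size_t n) : next_(0) {
            for (std::size_t i = 0; i < n; ++i) {
                services_.push_back(std::make_unique<boost::asio::io_service>());
                work_.emplace_back(*services_.back());   // keeps run() from returning
            }
        }

        void run() {
            for (auto& s : services_) {
                boost::asio::io_service* svc = s.get();
                threads_.emplace_back([svc] { svc->run(); });  // one thread per io_service
            }
            for (auto& t : threads_) t.join();
        }

        // Round-robin: each new connection is given the next io_service.
        boost::asio::io_service& next() {
            boost::asio::io_service& s = *services_[next_];
            next_ = (next_ + 1) % services_.size();
            return s;
        }

    private:
        std::vector<std::unique_ptr<boost::asio::io_service>> services_;
        std::vector<boost::asio::io_service::work> work_;
        std::vector<std::thread> threads_;
        std::size_t next_;
    };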