How to use ZeroMQ for multiple Server-Client pairs? - c++

I'm implementing a performance heavy two-party protocol in C++14 utilising multithreading and am currently using ZeroMQ as a network layer.
The application has the following simple architecture:
One main server-role,
One main client-role,
Both server and client spawn a fixed number n of threads
All n parallel, concurrent thread-pairs execute a performance- and communication-heavy, mutually exclusive protocol exchange, i.e. they run in n fixed pairs and must not mix or interchange any data with anyone but their pairwise fixed opponent.
My current design uses a single ZeroMQ Context() instance on both server and client, shared between all n local threads, and each respective client/server thread-pair creates a ZMQ_PAIR socket ( I just increment the port# ) on the local, shared context for communication.
My question
Is there a smarter or more efficient way of doing this?
i.e.: is there a natural way of using ROUTER and DEALER sockets that might increase performance?
I do not have much experience with socket programming, and with my approach the number of sockets scales directly with n ( the number of client-server thread-pairs ). This might go up to a couple of thousand, and I'm unsure whether this is a problem or not.
I have control of both the server and client machines and source code and I have no outer restrictions that I need to worry about. All I care about is performance.
I've looked through all the patterns here, but I cannot find any that match the case where the client-server pairs are fixed, i.e. I cannot use load-balancing and such.

Happy man!
ZeroMQ is a lovely and powerful tool for highly scalable, low-overhead Formal Communication Patterns ( behavioural patterns, yes, emulating some sort of mutual peer behaviour such as "One Asks, the other Replies" et al ).
Your pattern is quite simple, behaviourally unrestricted, and ZMQ_PAIR may serve it well.
Performance
There ought to be some more details on the quantitative nature of this attribute:
a process-to-process latency [us]
a memory-footprint of a System-under-Test (SuT) architecture [MB]
a peak-amount of data-flow a SuT can handle [MB/s]
Performance Tips ( if quantitatively supported by observed performance data )
may increase I/O-performance by increasing Context( nIOthreads ) on instantiation
may fine-tune I/O-performance by hard-mapping individual thread# -> Context.IO-thread#, which is helpful both for distributing the workload and for keeping "separate" localhost IO-thread(s) free / ready for higher-priority signalling and other such needs
shall set up application-specific ToS-labelling of the prioritised types of traffic, so as to allow advanced processing on the network layer along the route-segments between the client and server
if the memory footprint hurts ( ZeroMQ is not zero-copy on TCP-protocol handling at the operating-system kernel level ), one may try moving to the younger sister of ZeroMQ -- authored by Martin Sustrik, a co-father of ZeroMQ -- the POSIX-compliant nanomsg, with similar motivation and attractive performance figures. Worth knowing about, at least.
Could ROUTER or DEALER increase the overall performance?
No, it could not. Keeping in mind your stated architecture ( declared to be communication-heavy ), the other, even more sophisticated Scaleable Formal Communication Pattern behaviours suit other needs; they do not add any performance benefit but, on the contrary, would cost you additional processing overheads without delivering any justifying improvement.
While your Formal Communication remains as defined, no additional bells and whistles are needed.
One point may be noted on the ZMQ_PAIR archetype: some sources quote it as being rather experimental. If your gut feeling ( besides SuT-testing observations ) does not make you happy to live with this, do not mind a slight re-engineering step. It keeps you all the freedom of an un-pre-scribed Formal Communication Pattern behaviour, while putting "non"-experimental pipes under the hood: just replace the solo ZMQ_PAIR with a pair of ZMQ_PUSH + ZMQ_PULL sockets and use messages with just a one-way ticket. Having the stated full control of the SuT design and implementation, this would be all within your competence.
How fast could I go?
There are some benchmark test records published for both the ZeroMQ and nanomsg performance / latency envelopes, for un-loaded network transports across traffic-free route-segments.
If your SuT design strives to go even faster -- say under some 800 ns end-to-end -- there are other means to achieve this, but your design will have to follow a distributed-computing strategy other than message-based data exchange, and your project budget will have to adjust for the additional expenditure on the necessary ultra-low-latency hardware infrastructure.
It may come as a surprise, but this is definitely doable and pretty attractive for systems where hundreds of nanoseconds are a must-have target within a colocation data centre.

Related

Data distribution fairness: is TCP and websocket a good choice?

I am learning about servers and data distribution. Much of what I have read from various sources (here is just one) talks about how market data is distributed over UDP to take advantage of multicasting. Indeed, in this video at this point about building a trading exchange, the presenter mentions how TCP is not the optimal choice to distribute data because it means having to "loop over" every client then send the data to each in turn, meaning that the "first in the list" of clients has a possibly unfair advantage.
I was very surprised then when I learned that I could connect to the Binance feed of market data using a websocket connection, which is TCP, using a command such as
websocat_linux64 wss://stream.binance.com:9443/ws/btcusdt@trade --protocol ws
Many other sources mention WebSockets, so they certainly seem to be a common method of delivering market data; indeed this states:
"Cryptocurrency trading applications often have real-time market data streamed to trader front-ends via websockets"
I am confused. If Binance distributes over TCP, is "fairness" really a problem as the YouTube video seems to suggest?
So, overall, my main question is that if I want to distribute data (of any kind generally, but we can keep the market data theme if it helps) to multiple clients (possibly thousands) over the internet, should I use UDP or TCP, and is there any specific technique that could be employed to ensure "fairness" if that is relevant?
I've added the C++ tag as I would use C++, lots of high performance servers are written in C++, and I feel there's a good chance that someone will have done something similar and/or accessed the Binance feeds using C++.
The argument on fairness due to looping, in code, is ridiculous.
The whole field of trading where decisions need to be made quickly, where you need to use new information before someone else does, is called low-latency trading.
This tells you what's important: reducing latency to a minimum. This is why UDP is used over TCP. TCP has flow control, re-sends data, and buffers traffic to deliver it in order. That makes it terrible for low-latency trading.
WebSockets, in addition to being built on top of TCP, are heavier and slower simply due to the extra amount of data (and the processing needed to read and write it).
So even though the looping would be a tiny, marginal latency cost, there are plenty of other reasons to pick UDP over TCP, and even more over WebSockets.
So why does Binance do it? Their market is not institutional traders with hardware located at the exchanges; it's traders that are willing to accept some latency. If you don't trade to the millisecond, then some extra latency is acceptable. It makes it much easier to integrate different pieces of software together. It also makes fairness, in latency, not so important. If Alice is 0.253 seconds away and Bob is 0.416 seconds away, does it make any difference who I tell first (by a few microseconds)? Probably not.

Can Boost ASIO be used to build low-latency applications?

Can Boost ASIO be used to build low-latency applications, such as HFT (High Frequency Trading)?
Boost.ASIO uses the optimal platform-specific demultiplexing mechanism: IOCP, epoll, kqueue, pollset, /dev/poll.
An Ethernet adapter with a TOE (TCP/IP offload engine) and OpenOnload (kernel-bypass BSD sockets) can also be used.
But can a low-latency application be built using Boost.ASIO + TOE + OpenOnload?
This is the advice from the Asio author, posted to the public SG-14 Google Group (which unfortunately is having issues, and they have moved to another mailing list system):
I do work on ultra low latency financial markets systems. Like many
in the industry, I am unable to divulge project specifics. However, I
will attempt to answer your question.
In general:
At the lowest latencies you will find hardware based solutions.
Then: Vendor-specific kernel bypass APIs. For example where you encode and decode frames, or use a (partial) TCP/IP stack
implementation that does not follow the BSD socket API model.
And then: Vendor-supplied drop-in (i.e. LD_PRELOAD) kernel bypass libraries, which re-implement the BSD socket API in a way that is
transparent to the application.
Asio works very well with drop-in kernel bypass libraries. Using
these, Asio-based applications can implement standard financial
markets protocols, handle multiple concurrent connections, and expect
median 1/2 round trip latencies of ~2 usec, low jitter and high
message rates.
My advice to those using Asio for low latency work can be summarised
as: "Spin, pin, and drop-in".
Spin: Don't sleep. Don't context switch. Use io_service::poll()
instead of io_service::run(). Prefer single-threaded scheduling.
Disable locking and thread support. Disable power management. Disable
C-states. Disable interrupt coalescing.
Pin: Assign CPU affinity. Assign interrupt affinity. Assign memory to
NUMA nodes. Consider the physical location of NICs. Isolate cores from
general OS use. Use a system with a single physical CPU.
Drop-in: Choose NIC vendors based on performance and availability of
drop-in kernel bypass libraries. Use the kernel bypass library.
This advice is decoupled from the specific protocol implementation
being used. Thus, as a Beast user you could apply these techniques
right now, and if you did you would have an HTTP implementation with
~10 usec latency (N.B. number plucked from air, no actual benchmarking
performed). Of course, a specific protocol implementation should still
pay attention to things that may affect latency, such as encoding and
decoding efficiency, memory allocations, and so on.
As far as the low latency space is concerned, the main things missing
from Asio and the Networking TS are:
Batching datagram syscalls (i.e. sendmmsg, recvmmsg).
Certain socket options.
These are not included because they are (at present) OS-specific and
not part of POSIX. However, Asio and the Networking TS do provide an
escape hatch, in the form of the native_*() functions and the
"extensible" type requirements.
Cheers, Chris
I evaluated Boost Asio for use in high frequency trading a few years ago. To the best of my knowledge the basics are still the same today. Here are some reasons why I decided not to use it:
Asio relies on bind() style callbacks. There is some overhead here.
It is not obvious how to arrange certain low-level operations to occur at the right moment or in the right way.
There is rather a lot of complex code in an area which is important to optimize. It is harder to optimize complex, general code for specific use cases. Thinking that you will not need to look under the covers would be a mistake.
There is little to no need for portability in HFT applications. In particular, having "automatic" selection of a multiplexing mechanism is contrary to the mission, because each mechanism must be tested and optimized separately--this creates more work rather than reducing it.
If a third-party library is to be used, others such as libev, libevent, and libuv are more battle-hardened and avoid some of these downsides.
Related: C++ Socket Server - Unable to saturate CPU

Are actor based solutions to classic concurrency problems easier to prove correct?

I'm trying to grasp whether Actor Model, especially popular frameworks like Akka provides measurable benefits to software designs.
While I'm not a CS theorist, I would feel more confident in this model if it were true that it allows one to construct simpler correctness proofs and better specifications. Is reliability a strong point of the Actor Model? Would it be a good fit for mission-critical software (health care, avionics, etc.)?
My worry is that without some strong, and possibly unattainable, "global" guarantees like fairness (as I believe actors may exhaust the threads available to others) or delivery in finite time, using this model can lead to a distributed system that is an order of magnitude more complex to design, describe and debug than the alternatives.
As with any other model, Akka's reliability depends on the developer. Provided that you use Akka correctly (with the recommended best practices), it is very reliable. And using the actor model correctly is easy and intuitive once you learn the basics.
For example: an actor will hold a thread for too long only if you send it messages that are too large (large == take long to process). If you use more, smaller messages, the dispatcher will guarantee pretty good fairness. Or, if you need absolute fairness, just use a round-robin router.
Delivery in finite time... depends highly on how you choose the mailbox. For example, you can prioritize some important messages over others.
Another Akka advantage is easy scalability. If something works for 5 actors, it will work for 10,000 actors. So I believe distributed systems in Akka are easier to design and manage than with the traditional models.

C/C++ technologies involved in sending data across networks very fast

In terms of low latency (I am thinking about financial exchanges/co-location- people who care about microseconds) what options are there for sending packets from a C++ program on two Unix computers?
I have heard about kernel-bypass network cards, but does this mean you program against some sort of API for the card? I presume this would be a faster option in comparison to using standard Unix Berkeley sockets?
I would really appreciate any contribution, especially from persons who are involved in this area.
EDITED from milliseconds to microseconds
EDITED I am kinda hoping to receive answers based more upon C/C++, rather than network hardware technologies. It was intended as a software question.
UDP sockets are fast, low-latency, and reliable enough when both machines are on the same LAN.
TCP is much slower than UDP, but when the two machines are not on the same LAN, UDP is not reliable.
Software profiling will expose obvious problems with your program. However, when you are talking about network performance, network latency is likely to be your largest bottleneck. If you are using TCP, then you want to do things that avoid congestion and loss on your network to prevent retransmissions. There are a few things to do to cope:
Use a network with bandwidth and reliability guarantees.
Properly size your TCP parameters to maximize utilization without incurring loss.
Use error correction in your data transmission to correct for the small amount of loss you might encounter.
Or you can avoid using TCP altogether. But if reliability is required, you will end up implementing much of what is already in TCP.
But you can leverage existing projects that have already thought through a lot of these issues. The UDT project is one I am aware of that seems to be gaining traction.
At some point in the past, I worked with a packet-sending driver that was loaded into the Windows kernel. Using this driver it was possible to generate a stream of packets some 10-15 times stronger (I do not remember the exact number) than from an app using the sockets layer.
The advantage is simple: the sending request comes directly from the kernel and bypasses multiple layers of software: sockets, the protocol driver (even for a UDP packet, simple protocol-driver processing is still needed), context switches, etc.
Usually reduced latency comes at the cost of reduced robustness. Compare, for example, the (often greatly advertised) fastpath option for ADSL: the reduced latency due to shorter packet transfer times comes at the cost of increased error susceptibility. Similar technologies might exist for a large number of network media, so it very much depends on the hardware technologies involved. Your question suggests you're referring to Ethernet, but it is unclear whether the link is Ethernet-only or something else (ATM, ADSL, …), and whether some other network technology would be an option as well. It also very much depends on geographical distances.
EDIT:
I got a bit carried away with the hardware aspects of this question. To provide at least one aspect tangible at the level of application design: have a look at zero-copy network operations like sendfile(2). They can be used to eliminate one possible cause of latency, although only in cases where the original data came from some source other than the application memory.
As my day job, I work for a certain stock exchange. The answer below is my own opinion, based on the software solutions we provide for exactly this kind of high-throughput, low-latency data transfer. It is not intended in any way as a marketing pitch (please, I am a dev). It is just to outline the essential components of the software stack for this kind of fast data (the data could be stock/trading market data or, in general, any data):
1] Physical layer - a network interface card in the case of a TCP-UDP/IP based Ethernet network, or a very fast / high-bandwidth interface called an InfiniBand Host Channel Adapter. In the case of IP/Ethernet, the software stack is part of the OS. For InfiniBand, the card manufacturers (Intel, Mellanox) provide their drivers, firmware and an API library against which one has to implement the socket code (even InfiniBand uses its own 'socketish' protocol for network communication between two nodes).
2] The next layer above the physical layer is a middleware which basically abstracts the lower network-protocol nitty-gritties and provides some kind of interface for data I/O from the physical layer to the application layer. This layer also provides some kind of network data-quality assurance (if using TCP).
3] The last layer is the application we provide on top of the middleware. Anyone who gets 1] and 2] from us can develop a low-latency / high-throughput 'data transfer over network' kind of app for stock trading, algorithmic trading and similar applications, using a choice of programming-language interfaces - C, C++, Java, C#.
Basically, a client like you can develop their own application in C or C++ using the APIs we provide, which will take care of interacting with the NIC or HCA (i.e. the actual physical network interface) to send and receive data fast, really fast.
We have a comprehensive solution catering to the different quality and latency profiles demanded by our clients: some are fine with microseconds of latency but need high data quality / very few errors; some can tolerate a few errors but need nanoseconds of latency; some need microseconds of latency with no errors tolerable; ...
If you need/or are interested in any way in this kind of solution , ping me offline at my contacts mentioned here at SO.

MPI or Sockets?

I'm working on a loosely coupled cluster for some data processing. The network code and processing code is in place, but we are evaluating different methodologies in our approach. Right now, as we should be, we are I/O bound on performance issues, and we're trying to decrease that bottleneck. Obviously, faster switches like Infiniband would be awesome, but we can't afford the luxury of just throwing out what we have and getting new equipment.
My question is this: all traditional and serious HPC applications done on clusters are typically implemented with message passing rather than sending over sockets directly. What are the performance benefits to this? Should we see a speedup if we switched from sockets?
MPI MIGHT use sockets. But there are also MPI implementations to be used with a SAN (System Area Network) that use direct distributed shared memory. That is, of course, if you have the hardware for it. So MPI allows you to use such resources in the future. In that case you can gain massive performance improvements (in my experience with clusters back at university, you can reach gains of a few orders of magnitude). So if you are writing code that can be ported to higher-end clusters, using MPI is a very good idea.
Even discarding performance issues, using MPI can save you a lot of time, which you can use to improve performance of other parts of your system, or simply save your sanity.
I would recommend using MPI instead of rolling your own, unless you are very good at that sort of thing. Having written some distributed computing-esque applications using my own protocols, I always find myself reproducing (and poorly reproducing) features found within MPI.
Performance-wise I would not expect MPI to give you any tangible network speedups - it uses sockets just like you. MPI will however provide you with much of the functionality you would need for managing many nodes, e.g. synchronisation between nodes.
Performance is not the only consideration in this case, even on high performance clusters. MPI offers a standard API, and is "portable." It is relatively trivial to switch an application between the different versions of MPI.
Most MPI implementations use sockets for TCP based communication. Odds are good that any given MPI implementation will be better optimized and provide faster message passing, than a home grown application using sockets directly.
In addition, should you ever get a chance to run your code on a cluster that has InfiniBand, the MPI layer will abstract any of those code changes. This is not a trivial advantage - coding an application to directly use OFED (or another IB Verbs) implementation is very difficult.
Most MPI implementations include small test apps that can be used to verify the correctness of the networking setup independently of your application. This is a major advantage when it comes time to debug your application. The MPI standard includes the PMPI profiling interface for intercepting MPI calls. This interface also allows you to easily add checksums or other data verification to all the message-passing routines.
Message passing is a paradigm, not a technology. In the most general installation, MPI will use sockets to communicate. You could see a speedup by switching to MPI, but only insofar as you haven't optimized your socket communication.
How is your application I/O bound? Is it bound on transferring the data blocks to the work nodes, or is it bound because of communication during computation?
If the answer is "because of communication" then the problem is you are writing a tightly-coupled application and trying to run it on a cluster designed for loosely coupled tasks. The only way to gain performance will be to get better hardware (faster switches, infiniband, etc. )... maybe you could borrow time on someone else's HPC?
If the answer is "data block transfers" then consider assigning workers multiple data blocks (so they stay busy longer) and compressing the data blocks before transfer. This is a strategy that can help in a loosely coupled application.
MPI has the benefit that you can do collective communications. Doing broadcasts/reductions in O(log p) /* p is your number of processors*/ instead of O(p) is a big advantage.
I'll have to agree with OldMan and freespace. Unless you know of a specific improvement in some useful metric (performance, maintainability, etc.) over MPI, why reinvent the wheel? MPI represents a large amount of shared knowledge regarding the problem you are trying to solve.
There is a huge number of issues you need to address beyond just sending data. Connection setup and maintenance will all become your responsibility. If MPI is the exact abstraction you need (and it sounds like it is), use it.
At the very least, using MPI and later refactoring it out in favour of your own system is a good approach, costing you only the installation and dependency of MPI.
I especially like OldMan's point that MPI gives you much more beyond simple socket communication. You get a slew of parallel and distributed computing implementations with a transparent abstraction.
I have not used MPI, but I have used sockets quite a bit. There are a few things to consider for high-performance sockets. Are you sending many small packets, or large ones? If many small packets, consider turning off the Nagle algorithm for faster response:
int flag = 1;
setsockopt(m_socket, IPPROTO_TCP, TCP_NODELAY, (char*)&flag, sizeof(flag));
Also, using signals can actually be much slower when trying to push a high volume of data through. Long ago I made a test program where the reader would wait for a signal and then read a packet - it got about 100 packets/sec. Then I just did blocking reads, and got 10000 reads/sec.
The point is: look at all these options and actually test them. Different conditions will make different techniques faster or slower. It's important not just to collect opinions, but to put them to the test. Steve Maguire talks about this in "Writing Solid Code". He uses many examples that are counter-intuitive, and tests them to find out what makes better/faster code.
MPI uses sockets underneath, so really the only difference should be the API that your code interfaces with. You could fine-tune the protocol if you are using sockets directly, but that's about it. What exactly are you doing with the data?
MPI uses sockets, and if you know what you are doing you can probably get more bandwidth out of sockets because you need not send as much metadata.
But you have to know what you are doing, and it's likely to be more error-prone. Essentially you'd be replacing MPI with your own messaging protocol.
For high-volume, low-overhead business messaging you might want to check out
AMQP, which has several products. The open-source variant OpenAMQ supposedly runs the trading at JP Morgan, so it should be reliable, shouldn't it?