In terms of low latency (I am thinking about financial exchanges/co-location- people who care about microseconds) what options are there for sending packets from a C++ program on two Unix computers?
I have heard about kernel bypass network cards, but does this mean you program against some sort of API for the card? I presume this would be a faster option in comparison to using standard Unix Berkeley sockets?
I would really appreciate any contribution, especially from persons who are involved in this area.
EDITED from milliseconds to microseconds
EDITED I am kinda hoping to receive answers based more upon C/C++, rather than network hardware technologies. It was intended as a software question.
UDP sockets are fast, low latency, and reliable enough when both machines are on the same LAN.
TCP is much slower than UDP but when the two machines are not on the same LAN, UDP is not reliable.
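As a point of reference, a plain UDP sender over standard Berkeley sockets is only a few lines. This is a minimal sketch; the destination address and port are arbitrary examples.

// Minimal UDP sender sketch using plain Berkeley sockets.
#include <arpa/inet.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstring>

int main() {
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    sockaddr_in dest{};
    dest.sin_family = AF_INET;
    dest.sin_port = htons(4000);                       // example port
    inet_pton(AF_INET, "192.168.1.20", &dest.sin_addr); // example peer

    const char msg[] = "tick";
    sendto(fd, msg, sizeof(msg), 0,
           reinterpret_cast<sockaddr*>(&dest), sizeof(dest));
    close(fd);
}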
Software profiling will catch obvious problems in your program. However, when you are talking about network performance, network latency is likely to be your largest bottleneck. If you are using TCP, you want to avoid congestion and loss on your network to prevent retransmissions. There are a few things you can do to cope:
Use a network with bandwidth and reliability guarantees.
Properly size your TCP parameters to maximize utilization without incurring loss (a buffer-sizing sketch follows this list).
Use error correction in your data transmission to correct for the small amount of loss you might encounter.
Or you can avoid using TCP altogether. But if reliability is required, you will end up implementing much of what is already in TCP.
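For the buffer-sizing point, here is a minimal sketch of enlarging the per-socket send and receive buffers on Linux. The 4 MB figure is an arbitrary example and should be tuned to your bandwidth-delay product; the kernel may clamp or double the value you request.

// Hedged sketch: enlarge per-socket buffers on an already-created socket fd.
#include <sys/socket.h>

void size_tcp_buffers(int fd) {
    int bytes = 4 * 1024 * 1024;   // example value, tune to your link
    setsockopt(fd, SOL_SOCKET, SO_SNDBUF, &bytes, sizeof(bytes));
    setsockopt(fd, SOL_SOCKET, SO_RCVBUF, &bytes, sizeof(bytes));
}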
But, you can leverage existing projects that have already thought through a lot of these issues. The UDT project is one I am aware of, that seems to be gaining traction.
At some point in the past, I worked with a packet-sending driver that was loaded into the Windows kernel. Using this driver it was possible to generate a packet stream roughly 10-15 times denser (I do not remember the exact number) than from an application using the sockets layer.
The advantage is simple: the sending request comes directly from the kernel and bypasses multiple layers of software: sockets, protocol handling (even for a UDP packet, simple protocol driver processing is still needed), context switches, etc.
Usually reduced latency comes at the cost of reduced robustness. Compare, for example, the (often heavily advertised) fastpath option for ADSL: the reduced latency due to shorter packet transfer times comes at the cost of increased error susceptibility. Similar trade-offs might exist for a large number of network media, so it very much depends on the hardware technologies involved. Your question suggests you're referring to Ethernet, but it is unclear whether the link is Ethernet-only or something else (ATM, ADSL, …), and whether some other network technology would be an option as well. It also depends very much on geographical distance.
EDIT:
I got a bit carried away with the hardware aspects of this question. To provide at least one aspect tangible at the level of application design: have a look at zero-copy network operations like sendfile(2). They can be used to eliminate one possible cause of latency, although only in cases where the original data came from some source other than the application memory.
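As a hedged illustration of sendfile(2), here is a minimal sketch that streams a whole file over an already-connected socket; error handling is trimmed for brevity and the helper name is made up.

// Sketch: zero-copy transfer of a file's contents to a connected socket.
#include <sys/sendfile.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>

bool send_file(int sock_fd, const char* path) {
    int file_fd = open(path, O_RDONLY);
    if (file_fd < 0) return false;
    struct stat st{};
    fstat(file_fd, &st);
    off_t offset = 0;
    while (offset < st.st_size) {
        // The kernel copies data directly from the page cache to the socket.
        ssize_t n = sendfile(sock_fd, file_fd, &offset, st.st_size - offset);
        if (n <= 0) { close(file_fd); return false; }
    }
    close(file_fd);
    return true;
}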
As my day job, I work for a certain stock exchange. The answer below is my own opinion about the software solutions we provide for exactly this kind of high-throughput, low-latency data transfer. It is not intended in any way as a marketing pitch (please, I am a dev). It is just to outline the essential components of the software stack in this kind of fast-data solution (the data could be stock/trading market data or, in general, any data):
1] Physical layer - a network interface card in the case of a TCP-UDP/IP based Ethernet network, or a very fast / high-bandwidth interface called an InfiniBand Host Channel Adapter. In the case of the IP/Ethernet software stack, it is part of the OS. For InfiniBand, the card manufacturers (Intel, Mellanox) provide their drivers, firmware, and an API library against which one has to implement the socket code (even InfiniBand uses its own 'socket-ish' protocol for network communication between two nodes).
2] The next layer above the physical layer is a middleware which abstracts the lower network protocol nitty-gritty and provides some kind of interface for data I/O from the physical layer to the application layer. This layer also provides some kind of network data quality assurance (if using TCP).
3] The last layer is the application which we provide on top of the middleware. Anyone who gets 1] and 2] from us can develop a low-latency/high-throughput 'data transfer over the network' kind of app for stock trading, algorithmic trading, and similar applications using a choice of programming language interfaces - C, C++, Java, C#.
Basically, a client like you can develop their own application in C or C++ using the APIs we provide, which take care of interacting with the NIC or HCA (i.e. the actual physical network interface) to send and receive data fast, really fast.
We have a comprehensive solution catering to the different quality and latency profiles demanded by our clients - some are fine with microsecond latency but need very high data quality/very few errors; some can tolerate a few errors but need nanosecond latency; some need microsecond latency with no errors tolerable; and so on.
If you need, or are interested in, this kind of solution in any way, ping me offline at my contacts mentioned here at SO.
Related
I am learning about servers and data distribution. Much of what I have read from various sources (here is just one) talks about how market data is distributed over UDP to take advantage of multicasting. Indeed, in this video at this point about building a trading exchange, the presenter mentions how TCP is not the optimal choice to distribute data because it means having to "loop over" every client then send the data to each in turn, meaning that the "first in the list" of clients has a possibly unfair advantage.
I was very surprised then when I learned that I could connect to the Binance feed of market data using a websocket connection, which is TCP, using a command such as
websocat_linux64 wss://stream.binance.com:9443/ws/btcusdt@trade --protocol ws
Many other sources mention WebSockets, so they certainly seem to be a common method of delivering market data; indeed, this states:
"Cryptocurrency trading applications often have real-time market data streamed to trader front-ends via websockets"
I am confused. If Binance distributes over TCP, is "fairness" really a problem as the YouTube video seems to suggest?
So, overall, my main question is that if I want to distribute data (of any kind generally, but we can keep the market data theme if it helps) to multiple clients (possibly thousands) over the internet, should I use UDP or TCP, and is there any specific technique that could be employed to ensure "fairness" if that is relevant?
I've added the C++ tag as I would use C++, lots of high performance servers are written in C++, and I feel there's a good chance that someone will have done something similar and/or accessed the Binance feeds using C++.
The argument about fairness due to looping in code is ridiculous.
The whole field of trading where decisions need to be made quickly, where you need to use new information before someone else does is called: low-latency trading.
This tells you what's important: reducing the latency to a minimum. This is why UDP is used over TCP. TCP has flow control, re-sends data and buffers traffic to deliver it in order. This would make it terrible for low-latency trading.
WebSockets, in addition to being built on top of TCP, are heavier and slower simply due to the extra amount of data (and the processing needed to read/write it).
So even though the looping would be a tiny marginal latency cost, there's plenty of other reasons to pick UDP over TCP and even more over WebSockets.
So why does Binance do it? Their market is not institutional traders with hardware colocated at the exchange; it's traders who are willing to accept some latency. If you don't trade down to the millisecond, some extra latency is acceptable. It makes it much easier to integrate different pieces of software together. It also makes fairness, in latency, less important. If Alice is 0.253 seconds away and Bob is 0.416 seconds away, does it make any difference who I tell first (by a few microseconds)? Probably not.
Can Boost ASIO be used to build low-latency applications, such as HFT (High Frequency Trading)?
So, Boost.ASIO uses the optimal platform-specific demultiplexing mechanism on each system: IOCP, epoll, kqueue, poll_set, /dev/poll.
An Ethernet adapter with a TOE (TCP/IP offload engine) can also be used, as can OpenOnload (kernel-bypass BSD sockets).
But can a low-latency application be built using Boost.ASIO + TOE + OpenOnload?
This is the advice from the Asio author, posted to the public SG-14 Google Group (which unfortunately is having issues, and they have moved to another mailing list system):
I do work on ultra low latency financial markets systems. Like many
in the industry, I am unable to divulge project specifics. However, I
will attempt to answer your question.
In general:
At the lowest latencies you will find hardware based solutions.
Then: Vendor-specific kernel bypass APIs. For example where you encode and decode frames, or use a (partial) TCP/IP stack
implementation that does not follow the BSD socket API model.
And then: Vendor-supplied drop-in (i.e. LD_PRELOAD) kernel bypass libraries, which re-implement the BSD socket API in a way that is
transparent to the application.
Asio works very well with drop-in kernel bypass libraries. Using
these, Asio-based applications can implement standard financial
markets protocols, handle multiple concurrent connections, and expect
median 1/2 round trip latencies of ~2 usec, low jitter and high
message rates.
My advice to those using Asio for low latency work can be summarised
as: "Spin, pin, and drop-in".
Spin: Don't sleep. Don't context switch. Use io_service::poll()
instead of io_service::run(). Prefer single-threaded scheduling.
Disable locking and thread support. Disable power management. Disable
C-states. Disable interrupt coalescing.
Pin: Assign CPU affinity. Assign interrupt affinity. Assign memory to
NUMA nodes. Consider the physical location of NICs. Isolate cores from
general OS use. Use a system with a single physical CPU.
Drop-in: Choose NIC vendors based on performance and availability of
drop-in kernel bypass libraries. Use the kernel bypass library.
This advice is decoupled from the specific protocol implementation
being used. Thus, as a Beast user you could apply these techniques
right now, and if you did you would have an HTTP implementation with
~10 usec latency (N.B. number plucked from air, no actual benchmarking
performed). Of course, a specific protocol implementation should still
pay attention to things that may affect latency, such as encoding and
decoding efficiency, memory allocations, and so on.
As far as the low latency space is concerned, the main things missing
from Asio and the Networking TS are:
Batching datagram syscalls (i.e. sendmmsg, recvmmsg).
Certain socket options.
These are not included because they are (at present) OS-specific and
not part of POSIX. However, Asio and the Networking TS do provide an
escape hatch, in the form of the native_*() functions and the
"extensible" type requirements.
Cheers, Chris
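To make the "spin, pin, and drop-in" advice a little more concrete, here is a minimal sketch of a single-threaded Asio receive loop pinned to one core. It uses the modern io_context name (the quoted advice predates the rename from io_service); the port number, core index, and the packet handler are arbitrary assumptions, not part of any real trading system.

// Sketch: spin (io_context::poll in a busy loop) and pin (CPU affinity).
#include <boost/asio.hpp>
#include <pthread.h>
#include <sched.h>
#include <array>
#include <cstdio>

namespace asio = boost::asio;

std::array<char, 2048> buf;
asio::ip::udp::endpoint sender;

// Re-arm the asynchronous receive each time a datagram arrives.
void arm_receive(asio::ip::udp::socket& sock) {
    sock.async_receive_from(
        asio::buffer(buf), sender,
        [&sock](boost::system::error_code ec, std::size_t n) {
            if (!ec)
                std::printf("got %zu bytes\n", n);   // hypothetical packet handler
            arm_receive(sock);
        });
}

int main() {
    // Pin: bind this single thread to an isolated core (core 3 here, arbitrary).
    cpu_set_t mask;
    CPU_ZERO(&mask);
    CPU_SET(3, &mask);
    pthread_setaffinity_np(pthread_self(), sizeof(mask), &mask);

    // Concurrency hint of 1 tells Asio it may skip internal locking.
    asio::io_context io(1);
    asio::ip::udp::socket sock(
        io, asio::ip::udp::endpoint(asio::ip::udp::v4(), 9000));
    arm_receive(sock);

    // Spin: poll() never blocks in the kernel, so the thread never sleeps.
    for (;;)
        io.poll();
}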
I evaluated Boost Asio for use in high frequency trading a few years ago. To the best of my knowledge the basics are still the same today. Here are some reasons why I decided not to use it:
Asio relies on bind() style callbacks. There is some overhead here.
It is not obvious how to arrange certain low-level operations to occur at the right moment or in the right way.
There is rather a lot of complex code in an area which is important to optimize. It is harder to optimize complex, general code for specific use cases. Thinking that you will not need to look under the covers would be a mistake.
There is little to no need for portability in HFT applications. In particular, having "automatic" selection of a multiplexing mechanism is contrary to the mission, because each mechanism must be tested and optimized separately--this creates more work rather than reducing it.
If a third-party library is to be used, others such as libev, libevent, and libuv are more battle-hardened and avoid some of these downsides.
Related: C++ Socket Server - Unable to saturate CPU
My application aims to detect the network throughput. Putting the C++ code aside, I am looking for a reliable method of throttling my network that returns the exact value of the maximum accepted baud rate. I could write the code later.
After checking several ideas from the internet, I didn't find one that suits my case.
I tried to send data as much as possible using TCP/IP and then check the baudrate on each sending of 10MB. Please find here the pseudo code for my algorithm:
while (send(...)) {
    if (tempSendBytes > 10Mb)
        if (baudrate > predefinedThreshold)
            usleep(calculateNeededTime());
    tempSendBytes += sentBytes;
}
But when the predefinedThreshold is reached, the buffers fill up and my program gets stuck without returning any error. As a matter of fact, checking the baudrate on every sent message would decrease my bandwidth to its minimum, so I preferred to check every 10 MB.
PS: There are no other technical problems in my code, nor a memory leak. In addition, my program runs normally (sending and receiving data 100%) if I decrease predefinedThreshold.
My question:
Is there a way to detect the maximum bandwidth (on both loopback and a real network) without the buffers overflowing or the program getting stuck?
Yes, you can detect maximum throughput on both loopback and real interface. The real interface may require a TCP server running remotely with sufficient bandwidth to provide an accurate estimate of max throughput.
If you are strictly looking for theoretical, you may be able to run the test on the real interface the same way you run it on localhost with the server bound to the real interface's IP and the client running on the same computer. I'm not sure though what your OS's networking stack will do to this traffic but it should treat it like it was coming off box.
A lot of factors contribute to max theoretical throughput for TCP. TCP MTU, OS send buffer, OS receive buffer, etc. Wikipedia has a good high level overview, but it sounds like you may have already read it. http://en.wikipedia.org/wiki/Measuring_network_throughput You might also find this TCP tuning overview helpful, http://en.wikipedia.org/wiki/TCP_tuning
IPerf is commonly used to accurately measure bandwidth and the techniques it uses are rather comprehensive. It is written in C++ and its code base may be a good starting point for you. https://github.com/esnet/iperf
I know none of this provides an exact discussion of the theory, but hopefully helps clarify some things.
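As a rough illustration of the measuring approach (and not of iperf's actual implementation), here is a minimal sketch that blasts data at a sink for a fixed window and reports the achieved rate. The host, port, and 10-second window are arbitrary assumptions; it expects something like an iperf server listening on the other end.

// Sketch: send as fast as possible for 10 seconds, then report Mbit/s.
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <chrono>
#include <cstdio>
#include <vector>

int main() {
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(5201);                     // assumed sink port
    inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr); // loopback example
    if (connect(fd, reinterpret_cast<sockaddr*>(&addr), sizeof(addr)) != 0) {
        perror("connect");
        return 1;
    }

    std::vector<char> buf(64 * 1024, 'x');           // 64 KiB chunks
    long long total = 0;
    auto start = std::chrono::steady_clock::now();
    while (std::chrono::steady_clock::now() - start < std::chrono::seconds(10)) {
        ssize_t n = send(fd, buf.data(), buf.size(), 0); // blocks when buffers fill
        if (n <= 0) break;
        total += n;
    }
    double secs = std::chrono::duration<double>(
                      std::chrono::steady_clock::now() - start).count();
    std::printf("%.1f Mbit/s\n", total * 8 / secs / 1e6);
    close(fd);
}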
We have a set of server applications which receive measurement data from equipment/tools. Message transfer time is currently our main bottleneck, so we are interested in reducing it to improve the process. The communication between the tools and server applications is via TCP/IP sockets made using C++ on Redhat Linux.
Is it possible to reduce the message transfer time using hardware, by changing the TCP/IP configuration settings or by tweaking tcp kernel functions? (we can sacrifice security for speed, since communication is on a secure intranet)
Depending on the workload, disabling Nagle's Algorithm on the socket connection can help a lot.
When working with high volumes of small messages, I found this made a huge difference.
From memory, I believe the socket option for C++ was called TCP_NODELAY
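For illustration, a minimal sketch of disabling Nagle's algorithm on an existing socket file descriptor; the helper name is made up and error handling is omitted for brevity.

// Sketch: disable Nagle so small messages are sent immediately.
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

void disable_nagle(int fd) {
    int flag = 1;   // 1 = do not coalesce small segments
    setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, &flag, sizeof(flag));
}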
As #Jerry Coffin proposed, you can switch to UDP. UDP is an unreliable protocol: packets can be lost, arrive in the wrong order, or be duplicated, so you need to handle these cases at the application level. Since you can lose some data (as you stated in your comment), there is no need for retransmission (the most complicated part of any reliable protocol). You just need to drop outdated packets. Use simple sequence numbering and you're done.
Yes, you could use RTP (it has sequence numbering), but you don't need it. RTP looks like overkill for your simple case; it has many other features and is used mainly for multimedia streaming.
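A minimal sketch of that sequence-numbering idea follows. The Packet layout and field choices are made-up examples, and sequence wraparound is ignored for brevity.

// Sketch: accept only packets newer than anything seen so far.
#include <cstddef>
#include <cstdint>
#include <cstring>

struct Packet {
    uint32_t seq;      // monotonically increasing sequence number
    double   value;    // example measurement payload
};

uint32_t last_seen = 0;

// Returns true if the packet is newer than the last accepted one.
bool accept(const char* buf, std::size_t len, Packet& out) {
    if (len < sizeof(Packet)) return false;
    std::memcpy(&out, buf, sizeof(Packet));
    if (out.seq <= last_seen) return false;   // stale or duplicate: drop it
    last_seen = out.seq;
    return true;
}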
[EDIT] and similar question here
On the hardware side try Intel Server NICs and make sure the TCP offload Engine (ToE) is enabled.
There is also an important decision to make between latency and goodput: if you want better latency at the expense of goodput, consider reducing the interrupt coalescing period. Consult the Intel documentation for further details, as they offer quite a number of configurable parameters.
If you can, the obvious step to reduce latency would be to switch from TCP to UDP.
Yes.
Google "TCP frame size" for details.
I'm looking for some data to help me decide which would be the better/faster for communication between two independent processes on Linux:
TCP
Named Pipes
Which is worse: the system overhead for the pipes or the TCP stack overhead?
Updated exact requirements:
only local IPC needed
will mostly be a lot of short messages
no cross-platform needed, only Linux
In the past I've used local domain sockets for that sort of thing. My library determined whether the other process was local to the system or remote and used TCP/IP for remote communication and local domain sockets for local communication. The nice thing about this technique is that local/remote connections are transparent to the rest of the application.
Local domain sockets use the same mechanism as pipes for communication and don't have the TCP/IP stack overhead.
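For illustration, a minimal sketch of the client side of a Unix (local) domain socket connection; the socket path is an arbitrary example.

// Sketch: connect to a local domain socket and send a short message.
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>
#include <cstdio>
#include <cstring>

int main() {
    int fd = socket(AF_UNIX, SOCK_STREAM, 0);
    sockaddr_un addr{};
    addr.sun_family = AF_UNIX;
    std::strncpy(addr.sun_path, "/tmp/example.sock", sizeof(addr.sun_path) - 1);
    if (connect(fd, reinterpret_cast<sockaddr*>(&addr), sizeof(addr)) != 0) {
        perror("connect");
        return 1;
    }
    const char msg[] = "hello";
    write(fd, msg, sizeof(msg));
    close(fd);
}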
I don't really think you should worry about the overhead (which will be ridiculously low). Did you confirm with profiling tools that TCP overhead is actually likely to be your application's bottleneck?
Anyway, as Carl Smotricz said, I would go with sockets because it will be really trivial to separate the applications in the future.
I discussed this in an answer to a previous post. I had to compare socket, pipe, and shared memory communications. Pipes were definitely faster than sockets (maybe by a factor of 2 if I recall correctly ... I can check those numbers when I return to work). But those measurements were just for the pure communication. If the communication is a very small part of the overall work, then the difference will be negligible between the two types of communication.
Edit
Here are some numbers from the test I did a few years ago. Your mileage may vary (particularly if I made stupid programming errors). In this specific test, a "client" and "server" on the same machine echoed 100 bytes of data back and forth. It made 10,000 requests. In the document I wrote up, I did not indicate the specs of the machine, so it is only the relative speeds that may be of any value. But for the curious, the times given here are the average cost per request:
TCP/IP: .067 ms
Pipe with I/O Completion Ports: .042 ms
Pipe with Overlapped I/O: .033 ms
Shared Memory with Named Semaphore: .011 ms
There will be more overhead using TCP - that will involve breaking the data up into packets, calculating checksums and handling acknowledgement, none of which is necessary when communicating between two processes on the same machine. Using a pipe will just copy the data into and out of a buffer.
I don't know if this suits you, but a very common way of doing IPC (inter-process communication) under Linux is shared memory. It's actually ultra fast (I haven't profiled this, but it is just shared data in RAM with very little processing around it).
The main problem with this approach is synchronization: you must build a little system around a semaphore to make sure one process is not writing at the same time the other one is trying to read.
A very simple starter tutorial is here.
This is not as portable as using sockets, but the concept would be the same, so if you're migrating this to Windows, you will just have to change the shared memory create/attach layer.
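For illustration, a minimal sketch of the writer side using POSIX shared memory plus a named semaphore; the names /demo_shm and /demo_sem and the 4 KiB size are arbitrary, and on older glibc you may need to link with -lrt and -pthread.

// Sketch: create a shared segment, write into it, signal a reader.
#include <fcntl.h>
#include <semaphore.h>
#include <sys/mman.h>
#include <unistd.h>
#include <cstring>

int main() {
    int fd = shm_open("/demo_shm", O_CREAT | O_RDWR, 0600);
    ftruncate(fd, 4096);
    char* mem = static_cast<char*>(
        mmap(nullptr, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0));
    sem_t* sem = sem_open("/demo_sem", O_CREAT, 0600, 0);

    std::strcpy(mem, "hello from writer");
    sem_post(sem);   // signal the reader that data is ready

    // A reader process would sem_wait(sem) and then read from mem.
    munmap(mem, 4096);
    close(fd);
}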
Two things to consider:
Connection setup cost
Continuous Communication cost
On TCP:
(1) more costly - the 3-way handshake overhead required for a (potentially) unreliable channel.
(2) more costly - IP level overhead (checksum etc.), TCP overhead (sequence number, acknowledgement, checksum etc.) pretty much all of which aren't necessary on the same machine because the channel is supposed to be reliable and not introduce network related impairments (e.g. packet reordering).
But I would still go with TCP provided it makes sense (i.e. depends on the situation) because of its ubiquity (read: easy cross-platform support) and the overhead shouldn't be a problem in most cases (read: profile, don't do premature optimization).
Updated: if cross-platform support isn't required and the accent is on performance, then go with named/domain pipes, as I am pretty sure the platform developers will have optimized away the functionality that is only needed for handling network-level impairments.
A Unix domain socket is a very good compromise: it doesn't have the overhead of TCP, but it is more flexible than the pipe solution. A point you did not consider is that sockets are bidirectional, while named pipes are unidirectional.
I think the pipes will be a little lighter, but I'm just guessing.
But since pipes are a local thing, there's probably a lot less complicated code involved.
Other people might tell you to try and measure both to find out. It's hard to go wrong with this answer, but you may not be willing to invest the time. That would leave you hoping my guess is correct ;)