If I have two Linux boxes and I am writing a C/C++ program to send a message on one box and receive on the other box, what is the fastest approach?
I am not sure whether the various socket/networking technologies I hear bandied around are simply wrappers around an underlying technology, or whether they are genuine alternatives. I just want to know what would be the closest to "bare metal" that I could implement from my application.
I was thinking the fastest method would involve writing my program as a driver and loading it into the kernel. However, I would still need to know the fastest socket implementation to use with this approach.
Any modern PC will be able to keep the ethernet chip buffers fully loaded so "bare metal" programming will provide no benefit. The added latency through the kernel is so small compared to the network latency (i.e. speed of light limitations) that it's not worth optimizing.
For "fast" as in high-bandwidth data-moving between two connected Linux boxes, TCP is your friend as it will optimize itself to the maximum network ability without you having to detect and adjust yourself. Direct connect will have negligible packet-loss and generally low latency so you don't have to worry about window sizes, etc.
If you want "fast" as in quick turnaround to small requests, use UDP.
If you have some other definition of "fast", well, you need to elaborate.
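For the "quick turnaround on small requests" case, here is a minimal sketch of a UDP round trip in C++; the peer address 192.168.1.2:9000 is just a placeholder and error handling is mostly omitted.

// Minimal UDP request/response sketch (placeholder address/port, trimmed error handling).
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstdio>

int main() {
    int fd = socket(AF_INET, SOCK_DGRAM, 0);

    sockaddr_in peer{};
    peer.sin_family = AF_INET;
    peer.sin_port = htons(9000);                          // placeholder port
    inet_pton(AF_INET, "192.168.1.2", &peer.sin_addr);    // placeholder address

    const char request[] = "ping";
    sendto(fd, request, sizeof(request), 0,
           reinterpret_cast<sockaddr*>(&peer), sizeof(peer));

    char reply[1500];
    ssize_t n = recvfrom(fd, reply, sizeof(reply), 0, nullptr, nullptr);
    if (n > 0)
        printf("got %zd bytes back\n", n);

    close(fd);
    return 0;
}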
The question is incomplete, as you do not specify any requirements except that it must be fast. There are lots of aspects to consider here, such as the protocol to use (TCP for reliability, UDP for streaming, etc), serialization (what kind of data do you plan to send over the network, can you use a serialization library such as Google Protobuf?), and so on.
My suggestion would be to have a look at various RPC frameworks such as Apache Thrift, Apache Etch or ZeroC Ice and benchmark them before you decide that you really need to use the BSD sockets API or a similar low-level abstraction.
Well, unless you want to build a kernel module for custom communications over Ethernet, the fastest userspace API from libc is the Berkeley sockets API. Yes, that is a wrapper over the kernel's TCP and UDP, which are a layer over IP, which is a layer over WWAN, LAN, and Ethernet, which are layers over something else, but unless you need such incredible and exact performance, I suggest staying with the simple stuff in userland rather than writing the kernel modules you'd need to use anything lower. Unless I'm completely wrong, there is no way to access raw Ethernet, WWAN, or LAN from userspace, let alone actually accessing the hardware.
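As a point of reference, a rough sketch of that plain Berkeley sockets path from userspace (a TCP client that connects and pushes a buffer); the address and port are placeholders and most error handling is omitted.

#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstdio>

int main() {
    int fd = socket(AF_INET, SOCK_STREAM, 0);

    sockaddr_in server{};
    server.sin_family = AF_INET;
    server.sin_port = htons(5000);                        // placeholder port
    inet_pton(AF_INET, "10.0.0.2", &server.sin_addr);     // placeholder address

    if (connect(fd, reinterpret_cast<sockaddr*>(&server), sizeof(server)) != 0) {
        perror("connect");
        return 1;
    }

    const char msg[] = "hello from box A";
    send(fd, msg, sizeof(msg), 0);

    close(fd);
    return 0;
}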
Note: If you have a few years to rewrite the entire UNIX networking stack and the network card drivers, you can get x86 I/O port access from userspace when running as root with the ioperm() call, but I don't suggest rewriting the entire UNIX networking stack. That's almost two decades of work. Also, direct hardware access from a third-party application is a security disaster waiting to happen.
Note: If you are OK with not using any traditional networking hardware, you could write a custom driver for double-ended USB cables and create a custom network protocol over that, as Linux USB device drivers are probably the easiest kind of driver to write, given the large API for them. I really don't know how the speed would stack up here, though: USB 2.0 is faster than older Ethernet standards, but now there is 1 Gbps Ethernet and USB 3.0, so this could be faster or slower depending on the available hardware. This is more about ease of use.
EDIT: Please, never, ever, ever put code in the kernel for the sake of speed. Please. The huge security hole you put in a machine is not worth the small boost in performance. There was a time when system calls were very expensive and you wanted to minimize them, so adding code to the kernel was an option, but with newer mechanisms like Intel's sysenter/sysexit and AMD's syscall/sysret, they are cheap enough not to warrant the security hole.
I have to develop a server that has to make a lot of connections to receive and send small files. The question is whether the performance gain from C++ is worth the time spent developing the code, or whether it is better to use Python and profile and tune the code from time to time to speed it up. Maybe it's a bit of an abstract question without giving a number of connections, but I don't really know; at least 10,000 connections/minute to update client status.
With that many connections, your server will be I/O bound. The frequently cited speed differences between languages like C and C++ and languages like Python and (say) Ruby lie in the interpreter and boxing overhead which slow down computation, not in the realm of I/O.
Not only can you make good and reasonable use of concurrency in Python (both via processes and threads; the GIL is released during I/O and thus does not matter much for I/O-bound programs), there is also a wealth of asynchronous servers. In addition, web servers in general have much better Python integration (e.g. mod_wsgi for Apache) than C and C++. This frees you from writing your own server loop, socket management, etc., which you likely won't do as well as the major servers anyway. This is assuming we're talking about a web service, and not something more arcane which Apache etc. cannot do out of the box.
I'd expect that the server time would be dominated by I/O: network, disk, etc. You'd want to prove that the CPU consumption of the Python program is problematic and that you've grabbed all the low-hanging CPU fruit before considering a change.
In terms of low latency (I am thinking about financial exchanges/co-location: people who care about microseconds), what options are there for sending packets from a C++ program on two Unix computers?
I have heard about kernel-bypass network cards, but does this mean you program against some sort of API for the card? I presume this would be a faster option in comparison to using the standard Unix Berkeley sockets?
I would really appreciate any contribution, especially from persons who are involved in this area.
EDITED from milliseconds to microseconds
EDITED I am kinda hoping to receive answers based more upon C/C++, rather than network hardware technologies. It was intended as a software question.
UDP sockets are fast, low latency, and reliable enough when both machines are on the same LAN.
TCP is much slower than UDP but when the two machines are not on the same LAN, UDP is not reliable.
Software profiling will stamp out obvious problems in your program. However, when you are talking about network performance, network latency is likely to be your largest bottleneck. If you are using TCP, then you want to do things that avoid congestion and loss on your network to prevent retransmissions. There are a few things to do to cope:
Use a network with bandwidth and reliability guarantees.
Properly size your TCP parameters to maximize utilization without incurring loss (see the buffer-sizing sketch after this list).
Use error correction in your data transmission to correct for the small amount of loss you might encounter.
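For the second item, a small illustrative sketch of one such parameter, the per-socket buffer sizes; the 4 MB figure is an arbitrary example, real sizing depends on the bandwidth-delay product of your link, and the kernel caps these values via net.core.rmem_max / net.core.wmem_max.

#include <sys/socket.h>
#include <cstdio>

// Enlarge the send/receive buffers so the TCP window can grow on a
// high bandwidth-delay-product link. Example value only.
void tune_buffers(int fd) {
    int size = 4 * 1024 * 1024;
    if (setsockopt(fd, SOL_SOCKET, SO_SNDBUF, &size, sizeof(size)) != 0)
        perror("SO_SNDBUF");
    if (setsockopt(fd, SOL_SOCKET, SO_RCVBUF, &size, sizeof(size)) != 0)
        perror("SO_RCVBUF");
}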
Or you can avoid using TCP altogether. But if reliability is required, you will end up implementing much of what is already in TCP.
But, you can leverage existing projects that have already thought through a lot of these issues. The UDT project is one I am aware of, that seems to be gaining traction.
At some point in the past, I worked with a packet-sending driver that was loaded into the Windows kernel. Using this driver it was possible to generate a packet stream roughly 10-15 times faster (I do not remember the exact number) than from an app using the sockets layer.
The advantage is simple: The sending request comes directly from the kernel and bypasses multiple layers of software: sockets, protocol (even for UDP packet simple protocol driver processing is still needed), context switch, etc.
Usually reduced latency comes at the cost of reduced robustness. Compare, for example, the (often greatly advertised) fastpath option for ADSL: the reduced latency due to shorter packet transfer times comes at the cost of increased error susceptibility. Similar technologies might exist for a large number of network media, so it very much depends on the hardware technologies involved. Your question suggests you're referring to Ethernet, but it is unclear whether the link is Ethernet-only or something else (ATM, ADSL, …), and whether some other network technology would be an option as well. It also very much depends on geographical distances.
EDIT:
I got a bit carried away with the hardware aspects of this question. To provide at least one aspect tangible at the level of application design: have a look at zero-copy network operations like sendfile(2). They can be used to eliminate one possible cause of latency, although only in cases where the original data came from some source other than the application memory.
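A hedged sketch of what that looks like with sendfile(2): stream a file straight to an already-connected TCP socket without copying it through userspace. sock_fd is assumed to be a connected socket and "payload.bin" is a placeholder file name.

#include <sys/sendfile.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>

void send_file(int sock_fd) {
    int file_fd = open("payload.bin", O_RDONLY);   // placeholder file
    struct stat st;
    fstat(file_fd, &st);

    off_t offset = 0;
    while (offset < st.st_size) {
        // Kernel copies file pages directly to the socket; no userspace buffer.
        ssize_t sent = sendfile(sock_fd, file_fd, &offset, st.st_size - offset);
        if (sent <= 0)
            break;                                 // error handling omitted
    }
    close(file_fd);
}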
As my day job, I work for a certain stock exchange. The answer below is my own opinion, based on the software solutions we provide for exactly this kind of high-throughput, low-latency data transfer. It is not intended in any way as a marketing pitch (please, I am a dev). It is just to describe the essential components of the software stack in this kind of solution for fast data (the data could be stock/trading market data or, in general, any data):
1] Physical layer - a network interface card in the case of a TCP-UDP/IP based Ethernet network, or a very fast / high-bandwidth interface called an InfiniBand Host Channel Adapter. In the IP/Ethernet case, the software stack is part of the OS. For InfiniBand, the card manufacturers (Intel, Mellanox) provide their drivers, firmware and an API library against which one has to implement the socket code (even InfiniBand uses its own 'socketish' protocol for network communication between two nodes).
2] The next layer above the physical layer is a middleware which basically abstracts the nitty-gritty of the lower network protocol and provides some kind of interface for data I/O from the physical layer to the application layer. This layer also provides some kind of network data quality assurance (if using TCP).
3] The last layer is the application which we provide on top of the middleware. Anyone who gets 1] and 2] from us can develop a low-latency/high-throughput 'data transfer over the network' kind of app for stock trading, algorithmic trading and similar applications, using a choice of programming language interfaces - C, C++, Java, C#.
Basically a client like you can develop their own application in C/C++ using the APIs we provide, which take care of interacting with the NIC or HCA (i.e. the actual physical network interface) to send and receive data fast, really fast.
We have a comprehensive solution catering to the different quality and latency profiles demanded by our clients - some are fine with microsecond latency but need very high data quality/very few errors; some can tolerate a few errors but need nanosecond latency; some need microsecond latency with no errors tolerable, ...
If you need, or are interested in any way in, this kind of solution, ping me offline at my contacts mentioned here at SO.
I'm looking for some data to help me decide which would be the better/faster for communication between two independent processes on Linux:
TCP
Named Pipes
Which is worse: the system overhead for the pipes or the TCP stack overhead?
Updated exact requirements:
only local IPC needed
will mostly be a lot of short messages
no cross-platform needed, only Linux
In the past I've used local domain sockets for that sort of thing. My library determined whether the other process was local to the system or remote and used TCP/IP for remote communication and local domain sockets for local communication. The nice thing about this technique is that local/remote connections are transparent to the rest of the application.
Local domain sockets use the same mechanism as pipes for communication and don't have the TCP/IP stack overhead.
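A minimal sketch of the local (Unix) domain socket side of that idea, assuming a server is already listening on the placeholder path /tmp/my_app.sock:

#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>
#include <cstring>
#include <cstdio>

int main() {
    int fd = socket(AF_UNIX, SOCK_STREAM, 0);

    sockaddr_un addr{};
    addr.sun_family = AF_UNIX;
    std::strncpy(addr.sun_path, "/tmp/my_app.sock", sizeof(addr.sun_path) - 1);  // placeholder path

    if (connect(fd, reinterpret_cast<sockaddr*>(&addr), sizeof(addr)) != 0) {
        perror("connect");
        return 1;
    }

    const char msg[] = "short message";
    write(fd, msg, sizeof(msg));
    close(fd);
    return 0;
}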
I don't really think you should worry about the overhead (which will be ridiculously low). Did you make sure using profiling tools that the bottleneck of your application is likely to be TCP overhead?
Anyways as Carl Smotricz said, I would go with sockets because it will be really trivial to separate the applications in the future.
I discussed this in an answer to a previous post. I had to compare socket, pipe, and shared memory communications. Pipes were definitely faster than sockets (maybe by a factor of 2 if I recall correctly ... I can check those numbers when I return to work). But those measurements were just for the pure communication. If the communication is a very small part of the overall work, then the difference will be negligible between the two types of communication.
Edit
Here are some numbers from the test I did a few years ago. Your mileage may vary (particularly if I made stupid programming errors). In this specific test, a "client" and "server" on the same machine echoed 100 bytes of data back and forth. It made 10,000 requests. In the document I wrote up, I did not indicate the specs of the machine, so it is only the relative speeds that may be of any value. But for the curious, the times given here are the average cost per request:
TCP/IP: .067 ms
Pipe with I/O Completion Ports: .042 ms
Pipe with Overlapped I/O: .033 ms
Shared Memory with Named Semaphore: .011 ms
There will be more overhead using TCP - that will involve breaking the data up into packets, calculating checksums and handling acknowledgement, none of which is necessary when communicating between two processes on the same machine. Using a pipe will just copy the data into and out of a buffer.
I don't know if this suits you, but a very common way of doing IPC (interprocess communication) under Linux is to use shared memory. It's actually ultra fast (I haven't profiled this, but it's just shared data in RAM with some coordination around it).
The main problem with this approach is synchronization: you must build a little semaphore system around it to make sure one process is not writing at the same time another one is trying to read.
A very simple starter tutorial is here.
This is not as portable as using sockets, but the concept would be the same, so if you're migrating this to Windows, you will just have to change the shared memory create/attach layer.
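If you go this route, here is a sketch of the shared-memory-plus-semaphore idea using POSIX shm_open and a named semaphore; the names /my_shm and /my_sem and the 4 KB size are placeholders (on older glibc you may need to link with -lrt -lpthread).

#include <fcntl.h>
#include <semaphore.h>
#include <sys/mman.h>
#include <unistd.h>
#include <cstring>

int main() {
    const size_t kSize = 4096;                          // placeholder region size

    int fd = shm_open("/my_shm", O_CREAT | O_RDWR, 0600);
    ftruncate(fd, kSize);
    char* mem = static_cast<char*>(
        mmap(nullptr, kSize, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0));

    sem_t* sem = sem_open("/my_sem", O_CREAT, 0600, 1); // placeholder semaphore name

    sem_wait(sem);                                      // lock: keep writer and reader apart
    std::strcpy(mem, "hello via shared memory");
    sem_post(sem);                                      // unlock

    munmap(mem, kSize);
    close(fd);
    sem_close(sem);
    return 0;
}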
Two things to consider:
Connection setup cost
Continuous Communication cost
On TCP:
(1) more costly - 3-way handshake overhead required for a (potentially) unreliable channel.
(2) more costly - IP level overhead (checksum etc.), TCP overhead (sequence number, acknowledgement, checksum etc.) pretty much all of which aren't necessary on the same machine because the channel is supposed to be reliable and not introduce network related impairments (e.g. packet reordering).
But I would still go with TCP provided it makes sense (i.e. depends on the situation) because of its ubiquity (read: easy cross-platform support) and the overhead shouldn't be a problem in most cases (read: profile, don't do premature optimization).
Updated: if cross-platform support isn't required and the accent is on performance, then go with named/domain pipes, as I am pretty sure the platform developers will have optimized out the unnecessary functionality needed for handling network-level impairments.
A Unix domain socket is a very good compromise: not the overhead of TCP, but more flexible than the pipe solution. A point you did not consider is that sockets are bidirectional, while named pipes are unidirectional.
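To illustrate the bidirectional point, a small sketch using socketpair(), where parent and child each write and read on their own end of a single Unix domain socket pair (something a single pipe cannot do):

#include <sys/socket.h>
#include <sys/wait.h>
#include <unistd.h>
#include <cstdio>

int main() {
    int sv[2];
    socketpair(AF_UNIX, SOCK_STREAM, 0, sv);

    if (fork() == 0) {                   // child: uses sv[1]
        close(sv[0]);
        write(sv[1], "ping", 5);
        char buf[16];
        read(sv[1], buf, sizeof(buf));   // same fd is also readable
        printf("child got: %s\n", buf);
        _exit(0);
    }

    close(sv[1]);                        // parent: uses sv[0]
    char buf[16];
    read(sv[0], buf, sizeof(buf));
    printf("parent got: %s\n", buf);
    write(sv[0], "pong", 5);
    wait(nullptr);
    return 0;
}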
I think the pipes will be a little lighter, but I'm just guessing.
But since pipes are a local thing, there's probably a lot less complicated code involved.
Other people might tell you to try and measure both to find out. It's hard to go wrong with this answer, but you may not be willing to invest the time. That would leave you hoping my guess is correct ;)
Over the last couple of months I've been working on some implementations of sockets servers in C++ and Java. I wrote a small server in Java that would handle & process input from a flash application hosted on a website and I managed to successfully write a server that handles input from a 2D game client with multiple players in C++. I used TCP in one project & UDP in the other one. Now, I do have some questions that I couldn't really find on the net and I hope that some of the experts could help me. :)
Let's say I would like to build a server in C++ that would handle the input from thousands of standalone and/or web applications, how should I design my server then? So far, I usually create a new & unique thread for each user that connects, but I doubt this is the way to go.
Also, how does one determine the layout of packets sent over the network? Is data usually sent over the network in a binary or text form? How do you handle serialized objects when you send data to different media (e.g. a C++ server to a Flash application)?
And last, is there any easy-to-use, commonly used library that supports portability (e.g. development on a Windows machine & deployment on a Linux box), other than Boost.Asio?
Thank you.
Sounds like you have a couple of questions here. I'll do my best to answer what I can see.
1. How should I handle threading in my network server?
I would take a good look at what kind of work you're doing on the worker threads that are being spawned by your server. Spawning a new thread for each request isn't a good idea... but it might not hurt anything if the number of parallel requests is small and the tasks performed on each thread are fast-running.
If you really want to do things the right way, you could have a configurable/dynamic thread pool that would recycle the worker threads as they became free. That way you could set a max thread pool size. Your server would then work up to the pool size...and then make further requests wait until a worker thread was available.
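A hedged sketch of that worker-pool idea in C++: a bounded set of worker threads draining a queue of accepted connection descriptors, instead of one thread per client. handle_client() is a placeholder for whatever protocol handling you do.

#include <condition_variable>
#include <cstddef>
#include <mutex>
#include <queue>
#include <thread>
#include <unistd.h>

std::queue<int> g_pending;            // accepted socket fds waiting for a worker
std::mutex g_mtx;
std::condition_variable g_cv;

void handle_client(int fd) {          // placeholder: your protocol handling
    close(fd);
}

void worker() {
    for (;;) {
        int fd;
        {
            std::unique_lock<std::mutex> lock(g_mtx);
            g_cv.wait(lock, [] { return !g_pending.empty(); });
            fd = g_pending.front();
            g_pending.pop();
        }
        handle_client(fd);
    }
}

void start_pool(std::size_t n_threads) {
    for (std::size_t i = 0; i < n_threads; ++i)
        std::thread(worker).detach();
}

// The accept loop just enqueues each new connection:
void enqueue(int fd) {
    {
        std::lock_guard<std::mutex> lock(g_mtx);
        g_pending.push(fd);
    }
    g_cv.notify_one();
}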
2. How do I format the data in my packets?
Unless you're developing an entirely new protocol...this isn't something you really need to worry about. Unless you're dealing with streaming media (or another application where packet loss/corruption is acceptable), you probably won't be using UDP for this application. TCP/IP is probably going to be your best bet...and that will dictate the packet design for you.
3. Which format do I use for serialization?
The way you serialize your data over the wire depends on what kind of applications are going to be consuming your service. Binary serialization is usually faster and results in a smaller amount of data that needs to be transferred over the network. The downside to using binary serialization is that the binary serialization in one language may not work in another. Therefore the clients connecting to your server are, most likely, going to have to be written in the same language you are using.
XML Serialization is another option. It will take longer and have a larger amount of data to be transmitted over the network. The upside to using something like XML serialization is that you won't be limited to the types of clients that can connect to your server and consume your service.
You have to choose what fits your needs the best.
...play around with the different options and figure out what works best for you. Hopefully you'll find something that can perform faster and more reliably than anything I've mentioned here.
As far as server design concern, I would say that you are right: although ONE-THREAD-PER-SOCKET is a simple and easy approach, it is not the way to go since it won't scale as well as other server design patterns.
I personally like the COMMUNICATION-THREADS/WORKER-THREADS approach, where a pool of a dynamic number of worker threads handle all the work generated by producer threads.
In this model, you will have a number of threads in a pool waiting for tasks that are going to be generated from another set of threads handling network I/O.
I found UNIX Network Programming by Richard Stevens an amazing source for these kinds of network programming approaches. And, despite its name, it will be very useful in Windows environments as well.
Regarding the layout of the packets (you should have posted a different question for this, since it is a totally different question in my opinion), there are tradeoffs when selecting a TEXT vs. BINARY approach.
TEXT (i.e. XML) is probably easier to parse and document, and simpler in general, while a BINARY protocol should give you better performance in terms of processing speed and network packet size, but you will have to deal with more complicated issues such as the endianness of the words and things like that.
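For the binary route, a small sketch of packing a message header in network byte order so both endpoints agree on endianness regardless of their CPUs; the field layout (a 32-bit id plus a 16-bit payload length) is just an illustrative example.

#include <arpa/inet.h>
#include <cstddef>
#include <cstdint>
#include <cstring>

// Pack a 32-bit id and a 16-bit payload length into buf; returns bytes written.
std::size_t pack_header(uint8_t* buf, uint32_t id, uint16_t payload_len) {
    uint32_t id_be = htonl(id);          // host to network byte order
    uint16_t len_be = htons(payload_len);
    std::memcpy(buf, &id_be, sizeof(id_be));
    std::memcpy(buf + 4, &len_be, sizeof(len_be));
    return 6;
}

// Unpack the same header on the receiving side.
void unpack_header(const uint8_t* buf, uint32_t* id, uint16_t* payload_len) {
    uint32_t id_be;
    uint16_t len_be;
    std::memcpy(&id_be, buf, sizeof(id_be));
    std::memcpy(&len_be, buf + 4, sizeof(len_be));
    *id = ntohl(id_be);                  // network back to host byte order
    *payload_len = ntohs(len_be);
}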
Hope it helps.
Though previous answers provide good direction, just for completeness, I'd like to point out that threads are not an absolute requirement for great socket server performance. Some examples are here. There are many approaches to scalability too - thread pools, pre-forked processes, server pools, etc.
1) And last, is there any easy to use library which is commonly used that supports portability (eg development on a windows machine & deployment on a linux box) other than boost asio.
The ACE library is another alternative. It's very mature (been around since the early 90s) and widely deployed. A brief discussion about how it compares to Boost ASIO is available on the Riverace website here. Keep in mind that ACE has had to support a large number of legacy platforms for long time so it doesn't utilize modern C++ features as much as Boost ASIO, for example.
2) Let's say I would like to build a server in C++ that would handle the input from thousands of standalone and/or web applications, how should I design my server then? So far, I usually create a new & unique thread for each user that connects, but I doubt this is the way to go.
There are a number of commonly used approaches including but not limited to: thread-per-connection (the approach you describe) and thread pool (the approach Justin described). Each has its pros and cons. Many have looked at the trade-offs. A good starting point might be the links on the Thread Pool Pattern Wikipedia page.
Dan Kegel's "The C10K Problem" web page has lots of useful notes about improving scalability as well.
3) Also, how does one determine the layout of packets sent over the network; is data usually sent over the network in a binary or text state? How do you handle serialized objects when you send data to different media (eg C++ server to flash application)?
I agree with others that sending binary data is generally going to be most efficient. The boost serialization library can be used to marshal data into a binary form (as well as text). Mature binary formats include XDR and CDR. CDR is the format used by CORBA, for instance. The company ZeroC defines the ICE encoding, which is supposed to be much more efficient than CDR.
There are lots of binary formats to choose from. My suggestion would be to avoid reinventing the wheel by at least reading about some of these binary formats so that you don't end up running into the same pitfalls these existing binary formats were designed to address.
That said, lots of middleware exists that already provides a canned solution for most of your needs. For example, OpenSplice and OpenDDS are both implementations of the OMG Data Distribution Service standard. DDS focuses on efficient distribution of data such as through a publish-subscribe model, rather than remote invocation of functions. I'm more familiar with the OMG defined technologies but I'm sure there are other middleware implementations that will fit your needs.
You're still going to need a socket to handle every client, but the idea would be to create a pool of X sockets (say 50) and then, when you get close (say 90%) to consuming all those sockets, create another pool of X sockets. At some point, after clients have connected, sent data and disconnected, some of your sockets will become available for reuse (Google "socket pools" for more info).
The layout of data is always difficult. If all your clients and servers will be using the same hardware and operating system, you can send data in binary format, but there are many tricks and traps there (byte alignment is at the top of the list). Sending formatted text is always easier, but certainly more expensive in terms of bandwidth and processing power, because you have to convert from machine format to text before sending and, of course, back again at the receiver.
re: serialized, I'm sorry, I can't help you, nor with libraries (I'm too embedded to have used much of these)
About server sockets and serialization (marshaling). The most important problem is the growing number of sockets in a readable or writable state in select(). I am not talking about the limitation of FD_SET; that is simply solvable. I am talking about the growth in signaling time, and the problem of data accumulating in sockets that have not yet been read while you process the data available on the socket currently being handled. So the solution may even lie outside software boundaries and require a multi-processor model, where the roles of the processors are limited: one reads and writes, N do the processing. In that case all available socket data should be read when select() returns and handed off to the other processing units.
The same applies to incoming data.
About marshaling: of course a binary format is preferable because of performance. By the way, XML in terms of Unicode has the same problem. But, comrades, it is not simply a matter of copying a long or integer value into a socket stream. Even here, htons and htonl can help (they send/receive in network byte order and the OS is responsible for the data conversion). But it is safer to send the data preceded by a representation header which exposes the placement of the most/least significant bits, the byte order and the IEEE data type. This works; I have not had a case where it did not.
Kind regards and great success for everyone.
Simon Cantor
I'm working on a loosely coupled cluster for some data processing. The network code and processing code is in place, but we are evaluating different methodologies in our approach. Right now, as we should be, we are I/O bound on performance issues, and we're trying to decrease that bottleneck. Obviously, faster switches like Infiniband would be awesome, but we can't afford the luxury of just throwing out what we have and getting new equipment.
My question posed is this: all traditional and serious HPC applications done on clusters are typically implemented with message passing, versus sending over sockets directly. What are the performance benefits of this? Should we see a speedup if we switched from sockets?
MPI MIGHT use sockets. But there are also MPI implementations to be used with a SAN (system area network) that use direct distributed shared memory. That is, of course, if you have the hardware for it. So MPI allows you to use such resources in the future. In that case you can gain massive performance improvements (in my experience with clusters back at university, you can reach gains of a few orders of magnitude). So if you are writing code that can be ported to higher-end clusters, using MPI is a very good idea.
Even discarding performance issues, using MPI can save you a lot of time, that you can use to improve performance of other parts of your system or simply save your sanity.
I would recommend using MPI instead of rolling your own, unless you are very good at that sort of thing. Having written some distributed computing-esque applications using my own protocols, I always find myself reproducing (and poorly reproducing) features found within MPI.
Performance-wise I would not expect MPI to give you any tangible network speedups - it uses sockets just like you do. MPI will however provide you with much of the functionality you would need for managing many nodes, i.e. synchronisation between nodes.
Performance is not the only consideration in this case, even on high performance clusters. MPI offers a standard API, and is "portable." It is relatively trivial to switch an application between the different versions of MPI.
Most MPI implementations use sockets for TCP-based communication. Odds are good that any given MPI implementation will be better optimized and provide faster message passing than a home-grown application using sockets directly.
In addition, should you ever get a chance to run your code on a cluster that has InfiniBand, the MPI layer will abstract any of those code changes. This is not a trivial advantage - coding an application to directly use OFED (or another IB Verbs) implementation is very difficult.
Most MPI applications include small test apps that can be used to verify the correctness of the networking setup independently of your application. This is a major advantage when it comes time to debug your application. The MPI standard includes the "pMPI" interfaces, for profiling MPI calls. This interface also allows you to easily add checksums, or other data verification to all the message passing routines.
Message Passing is a paradigm not a technology. In the most general installation, MPI will use sockets to communicate. You could see a speed up by switching to MPI, but only in so far as you haven't optimized your socket communication.
How is your application I/O bound? Is it bound on transferring the data blocks to the work nodes, or is it bound because of communication during computation?
If the answer is "because of communication" then the problem is you are writing a tightly-coupled application and trying to run it on a cluster designed for loosely coupled tasks. The only way to gain performance will be to get better hardware (faster switches, infiniband, etc. )... maybe you could borrow time on someone else's HPC?
If the answer is "data block" transfers then consider assigning workers multiple data blocks (so they stay busy longer) & compress the data blocks before transfer. This is a strategy that can help in a loosely coupled application.
MPI has the benefit that you can do collective communications. Doing broadcasts/reductions in O(log p) /* p is your number of processors*/ instead of O(p) is a big advantage.
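A minimal sketch of such a collective, where rank 0 broadcasts a value to all other ranks in O(log p) steps rather than p separate sends; compile with mpicc/mpic++ and launch with mpirun.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    int payload = 0;
    if (rank == 0)
        payload = 42;                    /* data only the root holds initially */

    /* after the broadcast every rank holds payload == 42 */
    MPI_Bcast(&payload, 1, MPI_INT, 0, MPI_COMM_WORLD);
    printf("rank %d has %d\n", rank, payload);

    MPI_Finalize();
    return 0;
}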
I'll have to agree with OldMan and freespace. Unless you know of a specific improvement to some useful metric (performance, maintainability, etc.) over MPI, why reinvent the wheel? MPI represents a large amount of shared knowledge regarding the problem you are trying to solve.
There are a huge number of issues you need to address which is beyond just sending data. Connection setup and maintenance will all become your responsibility. If MPI is the exact abstraction (it sounds like it is) you need, use it.
At the very least, using MPI and later refactoring it out for your own system is a good approach, costing you only the installation of and dependency on MPI.
I especially like OldMan's point that MPI gives you much more beyond simple socket communication. You get a slew of parallel and distributed computing implementations with a transparent abstraction.
I have not used MPI, but I have used sockets quite a bit. There are a few things to consider on high performance sockets. Are you doing many small packets, or large packets? If you are doing many small packets consider turning off the Nagle algorithm for faster response:
int flag = 1;
setsockopt(m_socket, IPPROTO_TCP, TCP_NODELAY, (char *)&flag, sizeof(flag));
Also, using signals can actually be much slower when trying to get a high volume of data through. Long ago I made a test program where the reader would wait for a signal and then read a packet - it would get about 100 packets/sec. Then I just did blocking reads, and got 10,000 reads/sec.
The point is look at all these options, and actually test them out. Different conditions will make different techniques faster/slower. It's important to not just get opinions, but to put them to the test. Steve Maguire talks about this in "Writing Solid Code". He uses many examples that are counter-intuitive, and tests them to find out what makes better/faster code.
MPI uses sockets underneath, so really the only difference should be the API that your code interfaces with. You could fine-tune the protocol if you are using sockets directly, but that's about it. What exactly are you doing with the data?
MPI uses sockets, and if you know what you are doing you can probably get more bandwidth out of sockets because you need not send as much metadata.
But you have to know what you are doing, and it's likely to be more error-prone. Essentially you'd be replacing MPI with your own messaging protocol.
For high-volume, low-overhead business messaging you might want to check out
AMQP, which has several implementations. The open-source variant OpenAMQ supposedly runs the trading at JP Morgan, so it should be reliable, shouldn't it?