I'm looking for a way for two programs to efficiently transmit large amounts of data to each other, in C++, working on both Linux and Windows. The context is a P2P network program that acts as a node on the network and runs continuously; other applications (which could be games, hence the need for a fast solution) will use it to communicate with other nodes in the network. If there's a better solution for this, I would be interested.
boost::asio is a cross-platform library for asynchronous I/O over sockets. You can combine it with, for instance, Google Protocol Buffers for your actual messages.
Boost also provides boost::interprocess for interprocess communication on the same machine, but asio lets you do your communication asynchronously, and you can easily use the same handlers for both local and remote connections.
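For flavour, here is a minimal asio sketch: resolve a peer, connect asynchronously, and read some bytes. The hostname "peer.example" and port 9000 are placeholders, and real code would add error handling and message framing on top.

#include <boost/asio.hpp>
#include <array>
#include <iostream>

namespace asio = boost::asio;
using asio::ip::tcp;

int main() {
    asio::io_context io;
    tcp::socket socket(io);
    tcp::resolver resolver(io);

    // Resolve and connect asynchronously; handlers run inside io.run().
    asio::async_connect(socket, resolver.resolve("peer.example", "9000"),
        [&socket](boost::system::error_code ec, const tcp::endpoint&) {
            if (ec) return;
            static std::array<char, 4096> buf; // static so it outlives the handler
            socket.async_read_some(asio::buffer(buf),
                [](boost::system::error_code ec2, std::size_t n) {
                    if (!ec2) std::cout << "got " << n << " bytes\n";
                });
        });

    io.run(); // the event loop; the same pattern serves local and remote peers
}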
I have been using ICE by ZeroC (www.zeroc.com), and it has been fantastic. Super easy to use, and it's not only cross-platform but supports many languages as well (Python, Java, etc.), and there is even an embedded version of the library.
Well, if we can assume the two processes are running on the same machine, then the fastest way for them to transfer large quantities of data back and forth is by keeping the data inside a shared memory region; with that setup, the data is never copied at all, since both processes can access it directly. (If you wanted to go even further, you could combine the two programs into one program, with each former 'process' now running as a thread inside the same process space instead. In that case they would be automatically sharing 100% of their memory with each other)
Of course, just having a shared memory area isn't sufficient in most cases: you would also need some sort of synchronization mechanism so that the processes can read and update the shared data safely, without tripping over each other. The way I would do that would be to create two double-ended queues in the shared memory region (one for each process to send with). Either use a lockless FIFO-queue class, or give each double-ended queue a semaphore/mutex that you can use to serialize pushing data items into the queue and popping data items out of the queue. (Note that the data items you'd be putting into the queues would only be pointers to the actual data buffers, not the data itself... otherwise you'd be back to copying large amounts of data around, which you want to avoid. It's a good idea to use shared_ptrs instead of plain C pointers, so that "old" data will be automatically freed when the receiving process is done using it). Once you have that, the only other thing you'd need is a way for process A to notify process B when it has just put an item into the queue for B to receive (and vice versa)... I typically do that by writing a byte into a pipe that the other process is select()-ing on, to cause the other process to wake up and check its queue, but there are other ways to do it as well.
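To make the shape of this concrete, here is a rough sketch of one such queue using Boost.Interprocess. The segment name "p2p_shm", the 64-slot ring, and the payload are illustrative assumptions; a production version would add the pipe-notification trick described above, plus cleanup of the segment.

#include <boost/interprocess/managed_shared_memory.hpp>
#include <boost/interprocess/sync/interprocess_mutex.hpp>
#include <boost/interprocess/sync/scoped_lock.hpp>
#include <boost/interprocess/offset_ptr.hpp>
#include <cstring>

namespace bip = boost::interprocess;

struct Slot {
    bip::offset_ptr<char> data; // offset_ptr stays valid across processes
    std::size_t           size;
};

struct SharedQueue {
    bip::interprocess_mutex mutex;
    Slot slots[64];
    int  head = 0, tail = 0;

    bool push(bip::offset_ptr<char> p, std::size_t n) {
        bip::scoped_lock<bip::interprocess_mutex> lock(mutex);
        int next = (tail + 1) % 64;
        if (next == head) return false; // queue full
        slots[tail] = Slot{p, n};       // only the pointer is queued, not the data
        tail = next;
        return true;
    }

    bool pop(Slot& out) {
        bip::scoped_lock<bip::interprocess_mutex> lock(mutex);
        if (head == tail) return false; // queue empty
        out = slots[head];
        head = (head + 1) % 64;
        return true;
    }
};

int main() {
    // Producer side: allocate the payload *inside* the shared segment and
    // enqueue a pointer to it; the consumer maps the same segment and pops.
    bip::managed_shared_memory shm(bip::open_or_create, "p2p_shm", 1 << 20);
    SharedQueue* q = shm.find_or_construct<SharedQueue>("queue")();

    char* payload = static_cast<char*>(shm.allocate(1024));
    std::memcpy(payload, "large data lives here", 22);
    q->push(payload, 22);
}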
This is a hard problem.
The bottlenecks are the internet itself, and the fact that your clients might be behind NAT.
If you are not talking about the internet, or if you explicitly don't have clients behind carrier-grade evil NATs, you need to say so.
Because it boils down to: use TCP. Suck it up.
I would strongly suggest Protocol Buffers on top of TCP or UDP sockets.
So, while the other answers cover part of the problem (socket libraries), they're not telling you about the NAT issue. Rather than have your users tinker with their routers, it's better to use some techniques that should get you through a vaguely sane router with no extra configuration. You need to use all of these to get the best compatibility.
First, use an ICE library: ICE (Interactive Connectivity Establishment, unrelated to ZeroC's Ice mentioned above) is a NAT traversal technique that works with STUN and/or TURN servers out in the network. You may have to provide some infrastructure for this to work, although there are some public STUN servers.
Second, use both UPnP and NAT-PMP; libraries exist for both.
Third, use IPv6. Teredo, which is one way of running IPv6 over IPv4, often works when none of the above do, and who knows, your users may have working IPv6 by some other means. It takes very little code to implement, and it's increasingly important. I find about half of BitTorrent data arrives over IPv6, for example.
I am trying to setup a TCP communication framework between two computers. I would like each computer to send data to the other. So computer A would perform a calculation, and send it to computer B. Computer B would then read this data, perform a calculation using it, and send a result back to computer A. Computer A would wait until it receives something from computer B before proceeding with performing another calculation, and sending it to computer B.
This seems conceptually straightforward, but I haven't been able to locate an example that details two-way (bidirectional) communication via TCP. I've only found one-way server-client communication, where a server sends data to a client. These are some examples that I have looked at closely so far:
Server-Client communication
Synchronized server-client communication
I'm basically looking to have two "servers" communicate with each other. The synchronized approach above is, I believe, important for what I'm trying to do. But I'm struggling to set up a two-way communication framework via a single socket.
I would appreciate it greatly if someone could point me to examples that describe how to set up bidirectional communication with TCP, or give me some pointers on how to set this up from the examples I have linked above. I am very new to TCP and network communication frameworks and there might be a lot that I could be misunderstanding, so it would be great if I could get some clear pointers on how to proceed.
This answer does not go into specifics, but it should give you a general idea, since that's what you really seem to be asking for. I've never used Qt before; I do all my networking code with BSD-style sockets directly, or with my own wrappers.
Stuff to think about:
Protocol. Hand-rolled or existing?
Existing protocols can be heavyweight, depending on what your payload looks like. Examples include HTTP and Google ProtoBuf; there are many more.
Hand-rolled might mean more work, but gives you more control. There are two general approaches: length-based and sentinel-based.
Length-based means embedding the length into the first bytes. Requires caring about endianness. Requires thinking about what happens if a message is longer than can be represented in the length field. If you do this, I strongly recommend that you define your packet formats in some data file, and then generate the low-level packet encoding logic (see the framing sketch below).
Sentinel-based means ending the message when some character (or sequence) is seen. Common sentinels are '\0', '\n', and "\r\n". If the rest of your protocol is also text-based, this means it is much easier to debug.
For both designs, you have to think about what happens if the other side tries to send more data than you are willing (or able) to store in memory. In either case, limiting the payload size to a 16-bit unsigned integer is probably a good idea; you can stream longer replies as multiple packets. Note that serious protocols (based on UDP + crypto) typically have a protocol-layer size limit of 512-1500 bytes, though application-layer messages may of course be larger.
For both designs, EOF on the socket in the middle of a message (before the sentinel or the full length arrives) means you must drop the message and log an error.
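To illustrate the length-based option, a small framing sketch with a 2-byte big-endian prefix (which also enforces the 16-bit payload cap suggested above); the function names are arbitrary.

#include <cstdint>
#include <optional>
#include <string>

// Encode: 2-byte big-endian length prefix, then the payload. The explicit
// byte shifts sidestep host-endianness questions. The caller must ensure
// the payload fits in 16 bits.
std::string encode(const std::string& payload) {
    uint16_t len = static_cast<uint16_t>(payload.size());
    std::string out;
    out.push_back(char(len >> 8));
    out.push_back(char(len & 0xFF));
    return out + payload;
}

// Decode: returns a complete message if `buf` holds one (consuming it);
// otherwise nullopt, and the caller keeps buffering (see the notes on
// buffering further down).
std::optional<std::string> decode(std::string& buf) {
    if (buf.size() < 2) return std::nullopt;
    std::size_t len = (std::size_t(uint8_t(buf[0])) << 8) | uint8_t(buf[1]);
    if (buf.size() < 2 + len) return std::nullopt;
    std::string msg = buf.substr(2, len);
    buf.erase(0, 2 + len);
    return msg;
}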
Main loop. Qt probably has one you can use, but I don't know about it.
It's possible to develop simple programs using solely blocking operations, but I don't recommend it. Always assume the other end of a network connection is a dangerous psychopath who knows where you live.
There are two fundamental operations in a main loop:
Socket events: a socket reports being ready for read, or ready to write. There are also other sorts of events that you probably won't use, since most useful information can be found separately in the read/write handlers: exceptional/priority, (write)hangup, read-hangup, error.
Timer events: when a certain time delta has passed, interrupt the wait-for-socket-events syscall and dispatch to the timer heap. If you don't have any timers, pass the syscall's notion of "infinity". But if you have long sleeps, you might want some arbitrary, relatively short cap like "10 seconds" or "10 minutes" depending on your application, because long timer intervals can do all sorts of weird things with clock changes, hibernation, and such. It's possible to avoid those if you're careful enough and use the right APIs, but most people don't.
Choice of multiplex syscall:
The p versions below include atomic signal-mask changing. I don't recommend using them; instead, if you need signals, either add a signalfd to the set or emulate one using signal handlers and a (nonblocking, be careful!) pipe.
select/pselect is the classic, available everywhere. It cannot handle more than FD_SETSIZE file descriptors, which may be very small (but can be #defined on the command line if you're careful enough). It is inefficient with sparse sets. The timeout is in microseconds for select and nanoseconds for pselect, but chances are you can't actually get that granularity. Only use this if you have no other choice.
poll/ppoll solves the problem of sparse sets, and more significantly the problem of listening to more than FD_SETSIZE file descriptors. It does use more memory, but it is simpler to use. poll is POSIX; ppoll is GNU-specific. poll's timeout has millisecond granularity; ppoll's API provides nanosecond granularity, but you probably can't get that. I recommend this if you need BSD compatibility and don't need massive scalability, or if you only have one socket and don't want to deal with epoll's headaches.
epoll solves the problem of having to respecify the file descriptor and event list every time, by keeping the list of file descriptors in the kernel. Among other things, this means that when the low-level kernel event occurs, epoll can be made aware of it immediately, regardless of whether the user program is already in a syscall or not. It supports edge-triggered mode, but don't use it unless you're sure you understand it. Its API only provides millisecond granularity for the timeout, but that's probably all you can rely on anyway. If you are able to target only Linux, I strongly suggest you use this (see the event-loop sketch after this list), except possibly if you can guarantee only a single socket at once, in which case poll is simpler.
kqueue is found on BSD-derived systems, including Mac OS X. It is supposed to solve the same problems as epoll, but instead of keeping things simple by using file descriptors, it has all sorts of strange structures and does not follow the "do only one thing" principle. I have never used it. Use this if you need massive scalability on BSD.
IOCP. This only exists on Windows and some obscure Unixen. I have never used it, and it has significantly different semantics. Use it if you must target Windows, but be aware that much of this post is not applicable there, because Windows is weird. But why would you use Windows for any sort of serious system?
io_uring. A new API in Linux 5.1 that significantly reduces the number of syscalls and memory copies. It is worth it if you have a lot of sockets, but since it's so new, you must provide a fallback path.
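As a point of reference for the epoll recommendation, a stripped-down level-triggered loop (Linux only; listen_fd is assumed to be a non-blocking, already-listening TCP socket, and error handling is elided):

#include <sys/epoll.h>
#include <sys/socket.h>
#include <unistd.h>

void event_loop(int listen_fd) {
    int ep = epoll_create1(0);
    epoll_event ev{};
    ev.events = EPOLLIN;   // level-triggered read-readiness
    ev.data.fd = listen_fd;
    epoll_ctl(ep, EPOLL_CTL_ADD, listen_fd, &ev);

    epoll_event ready[64];
    for (;;) {
        // Millisecond timeout, per the granularity note above.
        int n = epoll_wait(ep, ready, 64, 10000);
        for (int i = 0; i < n; ++i) {
            if (ready[i].data.fd == listen_fd) {
                int client = accept(listen_fd, nullptr, nullptr);
                epoll_event cev{};
                cev.events = EPOLLIN;
                cev.data.fd = client;
                epoll_ctl(ep, EPOLL_CTL_ADD, client, &cev);
            } else {
                char buf[4096];
                ssize_t r = read(ready[i].data.fd, buf, sizeof buf);
                if (r <= 0)
                    close(ready[i].data.fd); // EOF or error: drop the connection
                // otherwise feed buf[0..r) to the protocol parser
            }
        }
    }
}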
Handler implementation:
When the multiplex syscall signifies an event, look up the handler for that file descriptor (some class with virtual functions) and call the relevant event methods (note there may be more than one per wakeup).
Make sure all your sockets have O_NONBLOCK set, and also disable Nagle's algorithm (since you're doing buffering yourself), except possibly for connect()s before the connection is made, since nonblocking connect requires confusing logic, especially if you want to play nice with multiple DNS results.
For TCP sockets, all you need is accept in the listening socket's handler, and the read/write family in the accepted/connected sockets' handlers. For other sorts of sockets, you need the send/recv family. See the "see also" sections of their man pages for more info; chances are one of them will be useful to you sometimes, so do this before you hard-code too much into your API design.
You need to think hard about buffering. Buffering reads means you need to be able to check the header of a packet to see if there are enough bytes to do anything with it, or whether you have to store the bytes until next time. Also remember that you might receive more than one packet at once (I suggest you rethink your design so that you don't mandate blocking until you get the reply before sending the next packet). Buffering writes is harder than you think, since you don't want to be woken when there is a "can write" event on a socket for which you have no data to write. The application should never write directly; it should only queue a write (a sketch follows below). Though TCP_CORK might imply a different design, I haven't used it.
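A sketch of the queue-don't-write rule, under the assumption of an epoll-style loop; the Connection type and the loop hooks are made up for illustration:

#include <deque>
#include <string>
#include <sys/socket.h>
#include <sys/types.h>

struct Connection {
    int fd;
    std::deque<std::string> out; // pending writes

    void queue_write(std::string bytes) {
        bool was_idle = out.empty();
        out.push_back(std::move(bytes));
        if (was_idle) { /* enable "can write" events for fd in the loop */ }
    }

    void on_writable() { // called by the event loop
        while (!out.empty()) {
            std::string& front = out.front();
            ssize_t n = send(fd, front.data(), front.size(), MSG_NOSIGNAL);
            if (n < 0) return;               // EAGAIN: try again next wakeup
            if (std::size_t(n) < front.size()) {
                front.erase(0, n);           // partial write: keep the rest
                return;
            }
            out.pop_front();
        }
        /* queue drained: disable "can write" events so we aren't woken idly */
    }
};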
Do not provide a network-level public API of iterating over all sockets. If needed, implement this at a higher level; remember that you may have all sorts of internal file descriptors with special purposes.
All of the above applies to both the server and the client. As others have said, there is no real difference once the connection is set up.
Edit 2019:
The documentation of D-Bus and 0MQ is worth reading, whether you use them or not. In particular, it's worth thinking about 3 kinds of conversations:
request/reply: a "client" makes a request and the "server" does one of 3 things: 1. replies meaningfully, 2. replies that it doesn't understand the request, 3. fails to reply (either due to a disconnect, or due to a buggy/hostile server). Don't let un-acknowledged requests DoS the "client"! This can be difficult, but this is a very common workflow.
publish/subscribe: a "client" tells the "server" that it is interested in certain events. Every time the event happens, the "server" publishes a message to all registered "clients". One variation: the subscription expires after one use. This workflow has simpler failure modes than request/reply, but consider: 1. the server publishes an event that the client didn't ask for (either because it didn't know about it, or because it doesn't want it yet, or because it was supposed to be a one-shot, or because the client sent an unsubscribe but the server hasn't processed it yet), 2. this might be a magnification attack (though that is also possible for request/reply; consider requiring requests to be padded), 3. the client might have disconnected, so the server must take care to unsubscribe it, 4. (especially if using UDP) the client might not have received an earlier notification. Note that it might be perfectly legal for a single client to subscribe multiple times; if there isn't naturally discriminating data, you may need to keep a cookie to unsubscribe.
distribute/collect: a "master" distributes work to multiple "slaves", then collects the results, aka map/reduce and many other reinvented terms for the same thing. This is similar to a combination of the above (a client subscribes to work-available events, then the server makes a unique request to each client instead of a normal notification). Note the following additional cases: 1. some slaves are very slow while others sit idle because they've already completed their tasks, and the master might have to store the incomplete combined output, 2. some slaves might return a wrong answer, 3. there might not be any slaves at all.
D-Bus in particular makes a lot of decisions that seem quite strange at first, but do have justifications (which may or may not be relevant, depending on the use case). Normally, it is only used locally.
0MQ is lower-level and most of its "downsides" are solved by building on top of it. Beware of the MxN problem; you might want to artificially create a broker node just for messages that are prone to it.
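For a feel of the request/reply pattern, here is a minimal 0MQ sketch using the plain libzmq C API (which compiles fine as C++; the endpoint is an arbitrary choice):

#include <zmq.h>
#include <cstdio>

int main() {
    void* ctx = zmq_ctx_new();
    void* rep = zmq_socket(ctx, ZMQ_REP); // the "server" side of request/reply
    zmq_bind(rep, "tcp://*:5555");

    char buf[256];
    int n = zmq_recv(rep, buf, sizeof buf - 1, 0); // blocks for a request
    if (n >= 0) {
        buf[n] = '\0';
        std::printf("request: %s\n", buf);
        zmq_send(rep, "ack", 3, 0); // a REP socket must reply before the next recv
    }

    zmq_close(rep);
    zmq_ctx_destroy(ctx);
}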
#include <QAbstractSocket>
#include <QtNetwork>
#include <QTcpServer>
#include <QTcpSocket>
QTcpSocket* m_pTcpSocket = nullptr; // class member; must start out null for the check below
Connect to host: set up the connections on the TCP socket and implement your slots. When data bytes are available, the readyRead() signal will be emitted.
void connectToHost(QString hostname, int port){
    if(!m_pTcpSocket)
    {
        m_pTcpSocket = new QTcpSocket(this);
        m_pTcpSocket->setSocketOption(QAbstractSocket::KeepAliveOption, 1);
    }
    // Forward socket signals to this class's slots/signals; Qt::UniqueConnection
    // keeps repeated calls from connecting twice.
    connect(m_pTcpSocket, SIGNAL(readyRead()), SLOT(readSocketData()), Qt::UniqueConnection);
    connect(m_pTcpSocket, SIGNAL(error(QAbstractSocket::SocketError)), SIGNAL(connectionError(QAbstractSocket::SocketError)), Qt::UniqueConnection);
    connect(m_pTcpSocket, SIGNAL(stateChanged(QAbstractSocket::SocketState)), SIGNAL(tcpSocketState(QAbstractSocket::SocketState)), Qt::UniqueConnection);
    connect(m_pTcpSocket, SIGNAL(disconnected()), SLOT(onConnectionTerminated()), Qt::UniqueConnection);
    connect(m_pTcpSocket, SIGNAL(connected()), SLOT(onConnectionEstablished()), Qt::UniqueConnection);
    if(m_pTcpSocket->state() != QAbstractSocket::ConnectedState){
        m_pTcpSocket->connectToHost(hostname, port, QIODevice::ReadWrite);
    }
}
Write:
void sendMessage(QString msgToSend){
    // Frame the message: 4-byte little-endian byte count, then the UTF-8
    // payload. Prefixing the *byte* count (not QString::length(), which
    // counts UTF-16 code units) keeps the prefix correct for non-ASCII text.
    QByteArray payload = msgToSend.toUtf8();
    QByteArray l_vDataToBeSent;
    QDataStream l_vStream(&l_vDataToBeSent, QIODevice::WriteOnly);
    l_vStream.setByteOrder(QDataStream::LittleEndian);
    l_vStream << qint32(payload.size());
    l_vDataToBeSent.append(payload);
    m_pTcpSocket->write(l_vDataToBeSent);
}
Read:
void readSocketData(){
    // readAll() may deliver a partial message or several messages at once, so
    // accumulate into a buffer (m_receiveBuffer, a QByteArray member you would
    // add) and parse complete length-prefixed frames out of it.
    while(m_pTcpSocket->bytesAvailable()){
        m_receiveBuffer.append(m_pTcpSocket->readAll());
    }
}
TCP is inherently bidirectional. Get one way working (client connects to server). After that both ends can use send and recv in exactly the same way.
Have a look at QWebSocket; the WebSocket protocol is bootstrapped over HTTP, and it also allows for encrypted (wss) connections.
The boost::interprocess::message_queue mechanism seems primarily designed for just that: interprocess communication.
The problem is that it copies raw bytes rather than sending objects:
"A message queue just copies raw bytes between processes and does not send objects."
This makes it completely unsuitable for fast and repeated interthread communication with large composite objects being passed.
I want to create a message with a ref/shared_ptr/pointer to a known and previously-created object and safely pass it from one thread to the next.
You CAN use asio::io_service and post with bind completions, but that's rather clunky AND requires that the thread in question be using asio, which seems a bit odd.
I've already written my own, sadly based on asio::io_service, but would prefer to switch over to a Boost-supported general mechanism.
You need a mechanism that is designed for interprocess communication, because separate processes have separate address spaces and you cannot simply pass pointers except in very special cases. For thread communication you can use standard containers like std::stack, std::queue, and std::priority_queue; you just need to provide proper synchronization through mutexes. Or you can use lock-free containers, which Boost also provides. What else would you need for interthread communication?
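As an illustration of that suggestion, a minimal mutex-guarded queue of shared_ptrs for interthread hand-off; only the pointer is copied, never the payload:

#include <condition_variable>
#include <memory>
#include <mutex>
#include <queue>

template <typename T>
class InterthreadQueue {
    std::queue<std::shared_ptr<T>> q_;
    std::mutex m_;
    std::condition_variable cv_;
public:
    void push(std::shared_ptr<T> item) {
        {
            std::lock_guard<std::mutex> lock(m_);
            q_.push(std::move(item));
        }
        cv_.notify_one();
    }

    std::shared_ptr<T> pop() { // blocks until an item arrives
        std::unique_lock<std::mutex> lock(m_);
        cv_.wait(lock, [this] { return !q_.empty(); });
        auto item = std::move(q_.front());
        q_.pop();
        return item;
    }
};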
Whilst I'm no expert in Boost per se, there is a fundamental difficulty in communicating between processes and threads via a pipe, message queue, etc., especially if it is assumed that a program's data consists of classes containing dynamically allocated memory (which is pretty much the case for things written with Boost; a string is not a simple object like it is in C...).
Copying of Data in Classes
Message queues and pipes are indeed just a way of passing a collection of bytes from one thread/process to another thread/process. Generally when you use them you're looking for the destination thread to end up with a copy of the original data, not just a copy of the references to the data (which would be pointing back at the original data).
With a simple C struct containing no pointers at all it's easy; a copy of the struct contains all the data, no problem. But a C++ class with complex data types like strings is now a structure containing references / pointers to allocated memory. Copy that structure and you haven't actually copied the data in the allocated memory.
That's where serialisation comes in. For interprocess communications, where the two processes can't ordinarily share the same memory, serialisation serves as a way of parcelling up the structure to be sent, plus all the data it refers to, into a stream of bytes that can be unpacked at the other end. For threads it's no different if you don't want the two threads accessing the same memory at the same time. Serialisation is a convenient way of saving yourself from having to navigate through a class to see exactly what needs to be copied.
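By way of illustration, a hedged Boost.Serialization sketch; the Message type is invented, a stringstream stands in for the pipe or socket, and note that Boost's binary archives are not portable across platforms:

#include <boost/archive/binary_iarchive.hpp>
#include <boost/archive/binary_oarchive.hpp>
#include <sstream>
#include <string>
#include <vector>

struct Message {
    std::string name;          // heap-allocated internals...
    std::vector<int> readings; // ...that a raw memcpy would not capture

    template <class Archive>
    void serialize(Archive& ar, unsigned /*version*/) {
        ar & name & readings;  // parcels up the data the members refer to
    }
};

int main() {
    Message in{"sensor-7", {1, 2, 3}};
    std::stringstream pipe;    // stands in for a real pipe or socket

    {
        boost::archive::binary_oarchive oa(pipe);
        oa << in;              // flatten into a byte stream
    }

    Message out;
    boost::archive::binary_iarchive ia(pipe);
    ia >> out;                 // unpack at the "other end"
}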
Efficiency
I don't know what Boost uses for serialisation, but clearly serialising to XML would be painfully inefficient. A binary serialisation like ASN.1 BER would be much faster.
Also, copying data through pipes, message queues is no longer as inefficient as it used to be. Traditionally programmers don't do it because of the perceived waste of time spent copying the data repeatedly just to share it with another thread. With a single core machine that involves a lot of slow and wasteful memory accesses.
However, if one considers what "memory access" is in these days of QPI, Hypertransport, and so forth, it's not so very different to just copying the data in the first place. In both cases it involves data being sent over a serial bus from one core's memory controller to another core's cache.
Today's CPUs are really NUMA machines with memory access protocols layered on top of serial networks to fake an SMP environment. Programming in the style of copying messages through pipes, message queues, etc. is definitely edging towards saying that one is content with the idea of NUMA, and that really you don't need SMP at all.
Also, if you do all your inter-thread communications as message queues, they're not so very different to pipes, and pipes aren't so different to network sockets (at least that's the case on Not-Windows). So if you write your code carefully you can end up with a program that can be redeployed across a distributed network of computers or across a number of threads within a single process. That's a nice way of getting scalability because you're not changing the shape or feel of your program in any significant way when you scale up.
Fringe Benefits
Depending on the serialisation technology used, there can be some fringe benefits. With ASN.1 you specify a message schema in which you set out the valid ranges of the message's contents. You can say, for example, that a message contains an integer, and it can have values between 0 and 10. The encoders and decoders generated by decent ASN.1 tools will automatically check that the data you're sending or receiving meets that constraint, and return errors if not.
I would be surprised if other serialisers like Google Protocol Buffers didn't do a similar constraints check for you.
The benefit is that if you have a bug in your program and you try and send an out of spec message, the serialiser will automatically spot that for you. That can save a ton of time in debugging. Also it is something you definitely don't get if you share a memory buffer and protect it with a semaphore instead of using a message queue.
CSP
Communicating Sequential Processes and the Actor model are based on sending copies of data through message queues, pipes, etc. just like you're doing. CSP in particular is worth paying attention to because it's a good way of avoiding a lot of the pitfalls of multi-threaded software that can lurk undetected in source code.
There are some CSP implementations you can just use. There's JCSP, a class library for Java, and C++CSP, built on top of Boost to do CSP for C++. They're both from the University of Kent.
C++CSP looks quite interesting. It has a template class called csp::mobile, which is kind of like a Boost smart pointer. If you send one of these from one thread to another via a channel (CSP's word for a message queue) you're sending the reference, not the data. However, the template records which thread 'owns' the data. So a thread receiving a mobile now owns the data (which hasn't actually moved), and the thread that sent it can no longer access it. So you get the benefits of CSP without the overhead of copying the data.
It also looks like C++CSP is able to do channels over TCP; that's a very attractive feature, making scaling up a really simple possibility. JCSP works over network connections too.
I am attempting to rewrite my current project to include more features and stability, and need some help designing it. Here is the gist of it (for Linux):
TCP_SERVER receives connection (auth packet)
TCP_SERVER starts a new (thread/fork) to handle the new client
TCP_SERVER will be receiving many packets from the client, which will be added to a circular buffer
A separate thread will be created for that client to process those packets and build a list of objects
Another thread should be created to send parts of the list of objects to another client
The reason to separate all the processing into threads is that the server will be getting many packets and the processing won't be able to keep up otherwise (it needs to be quick, as it's time-sensitive; I'm not sure if TCP will drop packets if the internal buffer gets too large?), and another thread sends to the other client to keep the processing as fast as possible.
So for each new connection, 3 threads should be created: 1 to receive packets, 1 to process them, and 1 to send the processed data to another client (which is technically the same person/IP, just on a different device).
And I need help designing this: how to structure it, what to use (forks/threads), and what libraries to use.
Trying to do this yourself is going to cause you a world of pain. Focus on your actual application, and leverage an existing socket handling framework. For example, you said:
for each new connection, 3 threads should be created
That statement says the following:
1. You haven't done this before, at scale, and haven't realized the impact all these threads will have.
2. You've never benchmarked thread creation or synchronous operations.
3. The number of things that can go wrong with this approach is pretty overwhelming.
Give some serious thought to using an existing library that does most of this for you. Getting the scaffolding right around this can literally take years, and you're better off focusing on your code rather than all the random plumbing.
The Boost C++ libraries seem to have a nice Async C++ socket handling infrastructure. Combine this with some of the existing C++ thread pools and you could likely have a highly performant solution up fairly quickly.
I would also question your use of C++ for this. Java and C# both do highly scalable socket servers pretty well, and some of the higher-level language tooling (Spring, Guava, etc.) can be very, very valuable. If you ever want to secure this, via TLS or another mechanism, you'll also probably find it much easier in Java or C# than in C++.
Some of the major things you'll care about:
1. True async I/O will be a huge perf and scalability win. Try really hard to do this. The Boost.Asio library looks pretty nice (see the sketch after this list).
2. Focus on your features and stability, rather than building a new socket handling platform.
3. Threads are expensive, avoid creating them. Thread pools are your friend.
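To make points 1 and 3 concrete, here is a rough Boost.Asio sketch: one io_context acting as the async engine, serviced by a small pool of threads. Port 9000 and the session hand-off are placeholder assumptions.

#include <boost/asio.hpp>
#include <thread>
#include <vector>

namespace asio = boost::asio;
using asio::ip::tcp;

void accept_loop(tcp::acceptor& acceptor) {
    acceptor.async_accept(
        [&acceptor](boost::system::error_code ec, tcp::socket socket) {
            if (!ec) {
                // hand `socket` to a session object that does async reads/writes
            }
            accept_loop(acceptor); // keep accepting
        });
}

int main() {
    asio::io_context io;
    tcp::acceptor acceptor(io, tcp::endpoint(tcp::v4(), 9000));
    accept_loop(acceptor);

    // A small pool of threads all running the same event loop: no
    // per-connection threads are ever created.
    std::vector<std::thread> pool;
    for (unsigned i = 0; i < std::thread::hardware_concurrency(); ++i)
        pool.emplace_back([&io] { io.run(); });
    for (auto& t : pool) t.join();
}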
You plan to create one or more threads for every connection your server handles. Threads are not free; they come with memory and CPU overhead, and when you have many active threads you also begin to have resource contention.
What usage pattern do you anticipate? Do you expect that when you have 8 connections, all 8 network threads will be consuming 100% of a cpu core pushing/pulling packets? Or do you expect them to have a relatively low turn-around?
As you add more threads, you will begin to have to spend more time competing for resources in things like mutexes etc.
A better pattern is to have one or more threads for network I/O. Most OSes have mechanisms for saying "tell me when one or more of these network connections has I/O", which is an efficiency saving over having lots of individual threads all doing the same thing for just one connection.
Then, for actual processing, spin up a pool of worker threads to do the actual work, allowing you to minimize the competition for resources. You can monitor the workload to determine whether you need to spin up more threads to meet delivery requirements.
You might also want to look into something to implement the network IO infrastructure for you; I've had really good performance results with libevent but then I've only had to deal with very high performance/reliability networking systems.
In C you can create a multi-process application using fork(), and you can then communicate using a FIFO pipe. I have learned that C++ only supports multithreaded applications, and if you want a multi-process application you have to rely on fork().
But in C++ type checking is crucial, so I can't just pipe objects through a pipe without any risk. You could cast to void*, take sizeof, send everything through a pipe, and typecast it back to the original object on the other side.
So why does this feel so wrong? Is multi-process architecture not used in C++, or are there libraries or better ways of doing things? I have googled a little, and the only things you find are multithreaded C++ or multi-process C.
The reason for wanting more processes is that I want my application to be as robust as possible. If my web service crashes, I want my main process to restart it. There is no way of doing this multithreaded, since you never know whether one thread has corrupted memory in another thread; if you get an error in one thread, you have to restart for safety.
I assume by multi-process programs you mean concurrently running two separate instances of a program with some shared data between them. Communicating between processes can be tricky. I have always gotten by with the pipe-typecasting method you described, or by using a local socket to send data back and forth; however, there do exist libraries for higher-level communication between processes: see boost.interprocess. I would give that a look and see if it can fit your needs.
As far as I understand your question, your main problem is passing data from one process to another via some kind of serial connection.
If this is the case, and you are just passing flat data structures up and down the line, there should be no problem using the method you already described. Just shoot the whole bunch of bits down the line and cast to the corresponding type on the other end. As long as you're only communicating between processes running on the same machine, with executables generated by the same compiler and the same compiler switches, there won't be any problem with byte order, member alignment, or the like. A sketch follows.
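A sketch of that flat-structure approach over a pipe (POD data only, same machine, same compiler, as cautioned above; the Sample type is made up):

#include <unistd.h>
#include <cstdio>

struct Sample {   // flat: no pointers, no dynamically allocated members
    int    id;
    double value;
};

int main() {
    int fds[2];
    pipe(fds);
    if (fork() == 0) {              // child: reader
        Sample s;
        read(fds[0], &s, sizeof s); // the bits come back out as the same type
        std::printf("id=%d value=%f\n", s.id, s.value);
        return 0;
    }
    Sample s{42, 3.14};             // parent: writer
    write(fds[1], &s, sizeof s);    // shoot the raw bits down the line
    return 0;
}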
If on the other hand your data is more complex, containing references to other objects, lists of dynamic length, or the like, then you'll soon get into big trouble using the simple cast-to-void*-and-pump-the-data-bit-by-bit strategy. You'll definitely need a more sophisticated approach to "serialization" and "deserialization" of your objects.
These ("serialization" and "deserialization") are the two terms you might want to investigate further to find the approach that best fits your problem.
As you'll soon find out, these are problems for which "solutions" are invented over and over again, with such a zoo of standards (XDR, used by Sun RPC, and ASN.1, to name just a few) that it is hard to tell which one will best fit your use case.
If you're going with C++, you might want to take a look at what Boost has to offer for serialization (see: http://www.boost.org/doc/libs/1_39_0/libs/serialization/doc/index.html)
Yet again: if it's just flat data structures you're passing back and forth, don't bother with any overhead of that kind and just shoot the data down the line bit by bit.
I am currently involved in the development of software that uses distributed computing to detect different events.
The current approach is: a dozen threads are running simultaneously on different (physical) computers. Each event is assigned a number, and every thread broadcasts its detected events to the others and filters the relevant events from the incoming stream.
I feel very bad about that, because it looks awful, is hard to maintain, and could lead to performance issues when the system is upgraded.
So I am looking for a flexible and elegant way to handle this IPC, and I think Boost::Signals seems a good candidate; but I have never used it, and I would like to know whether it is possible to provide encapsulation for network communication.
Since I don't know any solution that will do that, other than Open MPI, if I had to do it I would first use Google's Protocol Buffers as my message container. With it, I could just create an abstract base message with stuff like source, dest, type, id, etc. Then I would use Boost.Asio to distribute those across the network, or over a named pipe/loopback for local messages. Maybe, on each physical computer, a dedicated process could run just for distribution. Each thread registers with it which types of messages it is interested in, and what its named pipe is called. This process would know the IPs of all the other services.
If you need IPC over the network then boost::signals won't help you, at least not entirely by itself.
You could try using Open MPI.