Hi, I'm working on a C++ project that I'm trying to keep OS-independent, and I have two processes that need to communicate. I was thinking about setting up a third process (possibly as a service?) to coordinate the other two asynchronously.
Client 1 will tell the intermediate process when data is ready, and send the data to it. The intermediate process will then hold this data until client 2 tells it that it is ready for the data. If the intermediate process has not received new data from client 1, it will tell client 2 to wait.
Since I am trying to keep this OS-independent, I don't really know what to use. I have looked into using MPI, but it doesn't really seem to fit this purpose. I have also looked into Boost.Asio, named pipes, RPCs, and RCF. I'm currently programming on Windows, but I'd like to avoid the Win32 API so that the code could potentially be compiled on Linux.
Here's a little more detail on the two processes.
We have a back-end process/model (client 1) that will receive initial inputs from a GUI (client 2, written in Qt) via the intermediate process. The model will then proceed to work until its end condition is met, sending data to the server as it becomes ready. The GUI will ask the intermediate process for data at regular intervals and will be told to wait if the model has not updated the data. As the data becomes available from the model, we also want to keep any previous data from the current session for exporting to a file if the user chooses to do so (i.e., we'll want the GUI to be able to issue a command to the intermediate process to export (or load) the data).
My modification privileges for the back end/model are minimal, other than to adhere to the design outlined above. I have a decent amount of C++ experience, but not much parallel/asynchronous application experience. Any help or direction is greatly appreciated.
Standard BSD TCP/IP sockets are mostly platform-independent. They work, with some minor differences, on both Windows and Unix-like systems (such as Linux).
P.S. Windows does not support AF_UNIX sockets.
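To show how small the differences are, here is a rough sketch of a blocking TCP client that compiles on both sides; the port and address are placeholders and most error checking is trimmed:

```cpp
#ifdef _WIN32
  #include <winsock2.h>
  #include <ws2tcpip.h>
  #pragma comment(lib, "ws2_32.lib")
  typedef SOCKET socket_t;
#else
  #include <sys/socket.h>
  #include <netinet/in.h>
  #include <arpa/inet.h>
  #include <unistd.h>
  typedef int socket_t;
#endif
#include <string>

bool send_message(const std::string& msg) {
#ifdef _WIN32
    WSADATA wsa;                                     // Winsock must be initialised explicitly
    if (WSAStartup(MAKEWORD(2, 2), &wsa) != 0) return false;
#endif
    socket_t s = socket(AF_INET, SOCK_STREAM, 0);
    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_port   = htons(5555);                   // placeholder port
    addr.sin_addr.s_addr = inet_addr("127.0.0.1");   // placeholder address
    bool ok = connect(s, reinterpret_cast<sockaddr*>(&addr), sizeof(addr)) == 0
           && send(s, msg.data(), static_cast<int>(msg.size()), 0) >= 0;
#ifdef _WIN32
    closesocket(s); WSACleanup();                    // cleanup also differs per platform
#else
    close(s);
#endif
    return ok;
}
```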
I'd check out the Boost.Interprocess library. If the two processes are on the same machine, it offers a number of different ways to communicate between processes, and does so in a platform-independent manner.
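For example (just a sketch, with a placeholder queue name and sizes), a Boost.Interprocess message_queue could carry the "data ready"/"wait" exchange you describe:

```cpp
#include <boost/interprocess/ipc/message_queue.hpp>
#include <string>

namespace bip = boost::interprocess;

// Producer (client 1): create the queue if needed and push a message.
void produce(const std::string& data) {
    bip::message_queue mq(bip::open_or_create,
                          "demo_queue",   // placeholder queue name
                          100,            // max number of messages
                          1024);          // max message size in bytes
    mq.send(data.data(), data.size(), 0); // last argument is the priority
}

// Consumer (client 2): pop a message if one is available.
bool consume(std::string& out) {
    bip::message_queue mq(bip::open_only, "demo_queue");
    char buf[1024];
    bip::message_queue::size_type received = 0;
    unsigned int priority = 0;
    if (!mq.try_receive(buf, sizeof(buf), received, priority))
        return false;                      // nothing ready yet -> "wait"
    out.assign(buf, received);
    return true;
}
```

When the session is over, one side should call bip::message_queue::remove("demo_queue") to clean up.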
I am not sure whether you have considered the messaging side of this, but if you are sending structured data between processes, you should look at Google Protocol Buffers.
These relate to the content of the messages (what is passed) rather than how it is passed.
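As a rough sketch of what that looks like in C++ (assuming a hypothetical SensorData message generated by protoc from a .proto file not shown here):

```cpp
// Hypothetical message generated by protoc from something like:
//   message SensorData { int32 id = 1; repeated double values = 2; }
#include <string>
#include <vector>
#include "sensor_data.pb.h"   // hypothetical generated header

std::string serialize(int id, const std::vector<double>& values) {
    SensorData msg;                      // generated class
    msg.set_id(id);
    for (double v : values) msg.add_values(v);
    std::string out;
    msg.SerializeToString(&out);         // wire format, safe to send over any transport
    return out;
}

bool deserialize(const std::string& bytes, SensorData* msg) {
    return msg->ParseFromString(bytes);  // returns false if the payload is malformed
}
```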
boost::asio is platform-independent, although it doesn't require C++ at both ends. Of course, when you are using C++ at both ends, you can use boost::asio as your transport.
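A minimal synchronous sketch of what the sending side might look like with boost::asio (host and port are placeholders; newer Boost uses io_context, older releases call it io_service):

```cpp
#include <boost/asio.hpp>
#include <string>

using boost::asio::ip::tcp;

// Send a block of data to the intermediate process.
void send_to_server(const std::string& payload) {
    boost::asio::io_context io;
    tcp::resolver resolver(io);
    tcp::socket socket(io);
    boost::asio::connect(socket, resolver.resolve("127.0.0.1", "5555"));
    boost::asio::write(socket, boost::asio::buffer(payload));
}   // errors are reported as exceptions in this synchronous form
```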
Related
I'm using C++ and have a simple client .exe that, when handed a file name, processes it and returns a success or error code. I want to create a Windows C++ .exe that does the following, and I was looking for sample code to do it:
Start 4 (or x) client .exe processes (for example, using CreateProcess).
While the list of files is not empty, send work to the clients: each client will process a sent file name and return either a success or an error code.
Once the list of files to process is empty (or the producer .exe shuts down), close the 4 clients (so they shut down).
I did some research on this and found that pipes can be used to communicate between processes. I found this sample app that demonstrates communication between a server and a client in C++: https://code.msdn.microsoft.com/windowsapps/CppNamedPipeServer-d1778534
The sample app, however, sends a request from the client to the server and gets a response. I wanted to modify it, or use a different sample app, to do batch processing through a common queue of work (or a pipe that stores this queue or batch of work) and send work to the clients. I want to synchronize this work so that as soon as a client is done with one file, I'll send it another file to process.
Basically, I want to create a sample application .exe that starts multiple clients and sends them work through inter-process communication. Any sample C++ code to do this is appreciated.
Thanks
Jeff Lacoste
You could have a look at Boost. It has boost::interprocess, where you can read about a lot of ideas for what IPC methods there are.
I personally never use boost::interprocess, as I'm a huge fan of boost::asio, and for purposes like yours it has everything you need (except creating a process).
And there are many, many more to be found on Google. Which library to use, or whether to use the native OS API directly, is entirely opinion-based, which is why I wonder why this question has not been closed yet.
As for your request for "code samples", those two links contain samples for everything you listed regarding IPC, and they're open source, so you can look at how the libraries communicate with the native OS API.
I'm working on a large program (C++/Qt on Linux) organized in different parts: an inner engine plus a number of different UIs (some of them graphical). So far I've organized this division by creating one process per UI plus one for the engine. Every "user" process communicates with the core engine via two pipes (opposite directions).
What I would like is for every single process to run as a standalone one that doesn't block while communicating with the engine process, but simply uses an internal custom "message buffer" (already built and tested) to store messages and process them when free.
The solution is (I guess) to design every process to spawn an additional thread that takes care of communicating with the engine process (and another one for the GUI). I am using the pthread.h library (POSIX). Is that right? Could someone provide a simple example of how to achieve communication between a single pair of processes?
Thanks in advance.
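A minimal sketch of that reader-thread idea (the MessageBuffer here is a hypothetical stand-in for the custom buffer mentioned above):

```cpp
#include <pthread.h>
#include <unistd.h>
#include <deque>
#include <string>

// Minimal stand-in for the custom message buffer described in the question.
class MessageBuffer {
public:
    MessageBuffer() { pthread_mutex_init(&mutex_, nullptr); }
    void push(const std::string& msg) {
        pthread_mutex_lock(&mutex_);
        queue_.push_back(msg);
        pthread_mutex_unlock(&mutex_);
    }
    bool pop(std::string& out) {
        pthread_mutex_lock(&mutex_);
        bool ok = !queue_.empty();
        if (ok) { out = queue_.front(); queue_.pop_front(); }
        pthread_mutex_unlock(&mutex_);
        return ok;
    }
private:
    pthread_mutex_t mutex_;
    std::deque<std::string> queue_;
};

struct ReaderArgs {
    int            read_fd;   // read end of the pipe coming from the engine
    MessageBuffer* buffer;
};

// Thread body: block on the pipe so the rest of the process never has to.
void* pipe_reader(void* p) {
    ReaderArgs* args = static_cast<ReaderArgs*>(p);
    char chunk[512];
    ssize_t n;
    while ((n = read(args->read_fd, chunk, sizeof(chunk))) > 0)
        args->buffer->push(std::string(chunk, n));
    return nullptr;           // pipe closed or read error: thread exits
}

// In the UI process, after the pipes are set up:
//   ReaderArgs args = { engine_read_fd, &buffer };
//   pthread_t tid;
//   pthread_create(&tid, nullptr, pipe_reader, &args);
```

The main (GUI) loop then just calls buffer.pop() whenever it is free, so it never blocks on the pipe itself.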
I am trying to make an AIR application that needs to pass an image (.jpg/.png) to a C++ app that does number crunching (this needs to be done very often, like every 2-3 seconds). I've managed to pass the image by saving it to disk via AIR and then opening this file with the C++ program (passing the filename as an argument to the C++ program), but this method is really slow because it involves lots of disk I/O.
Is there a method to send an image directly to a native process?
Edit: There is a good Flash-C++ communication example at http://www.marijnspeelman.nl/blog/2008/03/06/face-detection-using-flash-and-c-revisited/ using sockets. The big problem with this method is that some firewall settings can block the communication (I get a Windows Firewall warning when I start the app).
There are several ways to transmit data between two processes.
One of the most efficient, and one of the easiest to set up, is TCP sockets.
It means that your C/C++ program will listen for (TCP/HTTP) requests, and that your AIR program will send the request with all the data inside.
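Roughly, the receiving C++ side could look like this POSIX-flavoured sketch (the port is a placeholder, error checking is omitted, and on Windows the usual WSAStartup/closesocket calls apply):

```cpp
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <unistd.h>
#include <vector>

// Accept one connection on a placeholder port and read the whole image into memory.
std::vector<char> receive_image(unsigned short port = 5555) {
    int listener = socket(AF_INET, SOCK_STREAM, 0);
    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = INADDR_ANY;
    addr.sin_port = htons(port);
    bind(listener, reinterpret_cast<sockaddr*>(&addr), sizeof(addr));
    listen(listener, 1);
    int conn = accept(listener, nullptr, nullptr);

    std::vector<char> image;
    char chunk[4096];
    ssize_t n;
    while ((n = recv(conn, chunk, sizeof(chunk), 0)) > 0)
        image.insert(image.end(), chunk, chunk + n);   // sender closes the socket when done

    close(conn);
    close(listener);
    return image;
}
```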
I have a C++ program that is constantly generating a large amount of data that needs to be sent to a Rails server. Both the program and the server are on the same machine running SUSE Linux.
What is the most efficient and simple solution for this?
Sockets are the way to go. If you want good asynchronous and cross-platform sockets in C++, your best bet will probably be boost::asio.
You could store the data any way you want (file or database).
The only tough point is making your Rails app aware that the C++ program has completed.
I'd strongly advise you to store this information in a cache so that it won't cost much to check it as often as you need.
You could use sockets, since both your programs reside on the same local machine, and in general it should be pretty straightforward to send the serialized data over a local socket. Since the socket uses an internal buffer, the transfer time should be very fast. Your C++ program can either push data to the Rails server, or you can have the Rails server poll the C++ program, provided you set up a cache in your C++ program to store the data between polling calls. The push method would probably work best, though.
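As a sketch of the push approach (the /data endpoint is hypothetical, and in practice an HTTP client library such as libcurl would be less fragile than hand-rolled HTTP):

```cpp
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <unistd.h>
#include <string>

// Push one chunk of data to a hypothetical Rails endpoint (POST /data) on localhost:3000.
bool push_to_rails(const std::string& body) {
    int s = socket(AF_INET, SOCK_STREAM, 0);
    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(3000);                      // default Rails dev port
    addr.sin_addr.s_addr = inet_addr("127.0.0.1");
    if (connect(s, reinterpret_cast<sockaddr*>(&addr), sizeof(addr)) != 0) {
        close(s);
        return false;
    }
    std::string request =
        "POST /data HTTP/1.1\r\n"
        "Host: 127.0.0.1\r\n"
        "Content-Type: application/octet-stream\r\n"
        "Content-Length: " + std::to_string(body.size()) + "\r\n"
        "Connection: close\r\n\r\n" + body;
    bool ok = send(s, request.data(), request.size(), 0)
              == static_cast<ssize_t>(request.size());
    close(s);
    return ok;
}
```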
I'm working on a build tool that launches thousands of processes (compiles, links, etc.). It also distributes executables to remote machines so that the build can be run across hundreds of slave machines. I'm implementing DLL injection to monitor the child processes of my build process so that I can see that they opened/closed the resources I expected them to. That way I can tell if my users aren't specifying dependency information correctly.
My question is:
I've got the DLL injection working, but I'm not all that familiar with Windows programming. What would be the best/fastest way to call back to the parent build process with all the millions of file I/O reports that the children will be generating? I've thought about having them write to a non-blocking socket, but I've been wondering whether pipes, shared memory, or maybe COM would be better.
First, since you're apparently dealing with communication between machines, not just within one machine, I'd rule out shared memory immediately.
I'd think hard about trying to minimize the amount of data instead of worrying a lot about how fast you can send it. Instead of sending millions of file I/O reports, I'd batch together a few kilobytes of that data (or something on that order) and send a hash of that packet. With a careful choice of packet size, you should be able to reduce your data transmission to the point that you can simply use whatever method you find most convenient, rather than trying to pick the one that's the fastest.
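A small sketch of that batching idea, with the actual transport left as a callback you supply:

```cpp
#include <cstddef>
#include <functional>
#include <string>

// Accumulate individual file-I/O reports and hand them off in ~4 KB batches,
// so the transport (socket, pipe, ...) sees a few large writes instead of millions of tiny ones.
class ReportBatcher {
public:
    explicit ReportBatcher(std::function<void(const std::string&)> flush_fn,
                           std::size_t threshold = 4096)
        : flush_(std::move(flush_fn)), threshold_(threshold) {}

    void add(const std::string& report) {
        pending_ += report;
        pending_ += '\n';
        if (pending_.size() >= threshold_)
            flush();
    }

    void flush() {                 // also call this when the child process exits
        if (pending_.empty()) return;
        flush_(pending_);          // e.g. write the batch to whichever channel you chose
        pending_.clear();
    }

private:
    std::function<void(const std::string&)> flush_;
    std::size_t threshold_;
    std::string pending_;
};
```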
If you stay in the Windows world (none of your machines is Linux or anything else), named pipes are a good choice, because they are fast and can be accessed across the machine boundary. I think shared memory is out of the race, because it can't cross the machine boundary. Distributed COM lets you formulate the contract in IDL, but I think XML messages via pipes are also fine. The XML messages have the benefit of working completely independently of the channel. If you need Linux later, you can switch to TCP/IP transport and send your XML messages.
Some additional techniques with limitations:
Another often forgotten but hot candidate is RPC (remote procedure calls). A lot of Windows services rely on it, but I think RPC is hard to program.
If you are on the same machine and you only need to send some status information, you can register a Windows message via RegisterWindowMessage() and send messages via SendMessage().
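A rough sketch of that last option (the message name and window class are placeholders for your own):

```cpp
#include <windows.h>

// Sender side (inside the injected DLL or child): notify the build tool's window.
void notify_parent(int status_code) {
    UINT msg = RegisterWindowMessage(TEXT("MyBuildTool_Status"));       // placeholder name
    HWND target = FindWindow(TEXT("MyBuildToolWindowClass"), nullptr);  // placeholder class
    if (target != nullptr)
        SendMessage(target, msg, static_cast<WPARAM>(status_code), 0);
}

// Receiver side: inside the parent's window procedure.
//   static const UINT kStatusMsg = RegisterWindowMessage(TEXT("MyBuildTool_Status"));
//   ...
//   if (uMsg == kStatusMsg) { /* wParam carries the status code */ }
```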
Apart from all the suggestions from Thomas, you might also just use a common database to store the results. And if that is too slow, use one of the more modern (and fast) key/value databases (like Tokyo Cabinet, MemcacheDB, etc.).
This sounds like a lot of overkill for the task of verifying the files used in a build. How about just scanning the build files, or capturing the output from the build tools?