Decoupling IPC in C++

I'm working on a large program (C++/Qt on Linux) organized into different parts: an inner engine and several UIs (some of them graphical). So far I've organized this division by creating one process per UI, plus one for the engine. Every "user" process communicates with the core engine via two pipes (one for each direction).
What I would like is for every process to run as a stand-alone one that doesn't block while communicating with the engine process, but instead uses an internal custom "message buffer" (already built and tested) to store messages and process them when it is free.
The solution, I guess, is to design every process to spawn an additional thread which takes care of communicating with the engine process (and another one for the GUI). I am using the pthread.h library (POSIX). Is that right? Could someone provide a simple example of how to achieve communication between a single pair of processes?
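Something like the following is what I have in mind (a rough sketch only, where MessageBuffer stands in for my existing, already-tested buffer class and the file descriptor comes from the pipe set up with the engine):

```cpp
#include <pthread.h>
#include <unistd.h>
#include <string>

// Stand-in for the custom "message buffer" (the real class is assumed thread-safe).
struct MessageBuffer {
    void push(const std::string&) { /* the real class queues the message under a lock */ }
};

struct ReaderArgs {
    int engine_fd;          // read end of the pipe coming from the engine
    MessageBuffer* buffer;  // where incoming messages are parked
};

// Runs in its own thread: it blocks on the pipe so the rest of the process
// never has to, and hands every chunk it reads to the message buffer.
void* engineReader(void* p)
{
    ReaderArgs* args = static_cast<ReaderArgs*>(p);
    char chunk[512];
    ssize_t n;
    while ((n = read(args->engine_fd, chunk, sizeof(chunk))) > 0)
        args->buffer->push(std::string(chunk, n));
    return nullptr;   // pipe closed or read error: the thread ends
}

// In the UI process, once the pipes are set up:
//   ReaderArgs args{engine_read_fd, &buffer};
//   pthread_t tid;
//   pthread_create(&tid, nullptr, engineReader, &args);
//   ... main/GUI loop drains `buffer` whenever it is free ...
//   pthread_join(tid, nullptr);
```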
Thanks in advance.

Related

TraCIScenarioManagerForker vs veins-launchd

I currently use TraCIScenarioManagerForker to spawn SUMO for each simulation, the "forker" method. However, the official VEINS documentation recommends launching the SUMO daemon separately using the veins-launchd script and then running simulations, the "launchd" method.
Using the forker method makes running simulations just a one-command job, since SUMO is killed when the simulation ends. However, with the launchd method, one has to take care of setting up the SUMO daemon and killing it when the simulation ends.
What are the advantages and disadvantages of each method? I'm trying to understand the recommended best practices when using VEINS.
Indeed, Veins 5.1 provides three (four, if you count an experimental one) ways of connecting a running OMNeT++ to SUMO:
assuming SUMO is already running and connecting there directly (TraCIScenarioManager)
running SUMO directly from the process - on Linux: as a fork, on Windows: as a process in the same context (TraCIScenarioManagerForker)
connecting to a Proxy (veins_launchd) that launches an isolated instance of SUMO for every client that connects to it (TraCIScenarioManagerLaunchd)
if you are feeling adventurous, the veins_libsumo fork of Veins offers a fourth option: including the SUMO engine directly in your OMNeT++ simulation and using it via method calls (instead of remote procedure calls via a network socket). Contrast, for example, TraCI based code vs. libsumo based code. This can be orders of magnitude faster with none of the drawbacks discussed below. At the time of writing (Mar 2021) this fork is just a proof of concept, though.
Each of these has unique benefits and drawbacks:
TraCIScenarioManager is the most flexible: you can connect to a long-running instance of SUMO which is only rolled backwards/forwards in time through the use of snapshots, connect multiple clients to a single instance, etc, but
requires you to manually take care of running exactly as many instances of SUMO as you need at exactly the time when you need them
TraCIScenarioManagerForker is very convenient, but
requires the simulation (as opposed to the person running the simulation) to "know" how to launch SUMO - so a simulation that works on one machine won't work on another because SUMO might be installed in a different path there etc.
launches SUMO in the directory of the simulation, so file output from multiple SUMO instances overwrites each other and file output is stored in the directory storing the simulation (which might be a slow or write protected disk, etc.)
results in both SUMO and OMNeT++ writing console output into what is potentially the same console window, requiring experience in telling one from the other when debugging crashes (and things get even more messy if one wants to debug SUMO)
TraCIScenarioManagerLaunchd does not suffer from any of these problems, but
requires the user to remember starting the proxy before starting the simulations

What's the best way to connect a Qt4 and a Qt5 process by IPC?

I want to build an application which is based on two separate processes. One of them (Process 1) is using Qt4 for accessing the functionalities of a legacy code base. The other one (Process 2) is the UI layer of the application using Qt5.
I'll need to access the functions of Process 1 from Process 2, and I'll need to access the results of Process 2 from Process 1.
Can anyone suggest a best practice for connecting the two processes via IPC?
http://doc.qt.io/qt-4.8/ipc.html
According to the link you have to choose between TCP/IP (QNetworkAccessManager etc.) or Shared Memory (QSharedMemory). In your case D-Bus would not be a good idea as you are working on Windows.
I can also suggest having a look at QProcess; with that you can make your Qt5 application execute your Qt4 application and collect the result from standard output.
It depends a lot on how much data you need to exchange and how flexible you are with your legacy stuff.
Personally, if it is possible, I would go for QProcess.
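For illustration, a minimal sketch of the QProcess route (the path and arguments of the Qt4 executable are placeholders):

```cpp
#include <QCoreApplication>
#include <QProcess>
#include <QDebug>

int main(int argc, char *argv[])
{
    QCoreApplication app(argc, argv);

    QProcess legacy;
    // Placeholder path/arguments for the Qt4-based helper executable.
    legacy.start("/opt/legacy/qt4_backend", QStringList() << "--compute" << "input.dat");
    if (!legacy.waitForStarted())
        return 1;

    // Block (for brevity) until the legacy process exits, then collect
    // whatever it printed on standard output.
    legacy.waitForFinished(-1);
    const QByteArray result = legacy.readAllStandardOutput();
    qDebug() << "legacy process replied:" << result;

    return 0;
}
```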

How to communicate between two processes

Hi, I'm working on a C++ project that I'm trying to keep OS-independent, and I have two processes which need to communicate. I was thinking about setting up a third process (possibly as a service?) to coordinate the other two, asynchronously.
Client 1 will tell the intermediate process when data is ready, and send the data to it. The intermediate process will then hold this data until client 2 tells it that it is ready for the data. If the intermediate process has not received new data from client 1, it will tell client 2 to wait.
Since I am trying to keep this OS-independent I don't really know what to use. I have looked into using MPI but it doesn't really seem to fit this purpose. I have also looked into Boost.Asio, named pipes, RPCs and RCF. I'm currently programming on Windows but I'd like to avoid using the Windows API so that the code could potentially be compiled on Linux.
Here's a little more detail on the two processes.
We have a back end process/model (client 1) that will receive initial inputs from a GUI (client 2, written in Qt) via the intermediate process. The model will then proceed to work until the end condition is met, sending data to the server as it becomes ready. The GUI will ask the intermediate process for data on regular intervals and will be told to wait if the model has not updated the data. As the data becomes available from the model we also want to be able to keep any previous data from the current session for exporting to a file if the user chooses to do so (i.e., we'll want the GUI to issue a command to the interface to export (or load) the data).
My modification privileges for the back end/model are minimal, other than to adhere to the design outlined above. I have a decent amount of C++ experience but not much parallel/asynchronous application experience. Any help or direction is greatly appreciated.
Standard BSD TCP/IP sockets are mostly platform independent. They work, with some minor differences, on both Windows and Unices (like Linux).
PS: Windows does not support AF_UNIX sockets.
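For illustration, a minimal TCP client that builds on both platforms, with the small Windows-specific bits isolated (address, port and message are placeholders, error handling omitted; on Windows you also link against ws2_32):

```cpp
#ifdef _WIN32
#include <winsock2.h>
#include <ws2tcpip.h>
#else
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <unistd.h>
#endif

int main()
{
#ifdef _WIN32
    WSADATA wsa;                                 // Winsock needs explicit startup
    WSAStartup(MAKEWORD(2, 2), &wsa);
#endif
    auto fd = socket(AF_INET, SOCK_STREAM, 0);   // SOCKET on Windows, int elsewhere

    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(5555);                 // placeholder port
    inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);

    connect(fd, reinterpret_cast<sockaddr*>(&addr), sizeof(addr));
    const char msg[] = "data ready";             // placeholder message
    send(fd, msg, sizeof(msg), 0);

#ifdef _WIN32
    closesocket(fd);
    WSACleanup();
#else
    close(fd);
#endif
    return 0;
}
```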
I'd check out the Boost.Interprocess library. If the two processes are on the same machine it has a number of different ways to communicate between processes, and does so in a platform-independent manner.
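For example, one of those mechanisms is a named message queue; here is a minimal sketch with both ends shown in one program for brevity (queue name and sizes are arbitrary placeholders):

```cpp
#include <boost/interprocess/ipc/message_queue.hpp>
#include <iostream>

namespace bip = boost::interprocess;

int main()
{
    // "Client 1" side: create a named queue and push a small message.
    bip::message_queue::remove("demo_queue");
    bip::message_queue sender(bip::create_only, "demo_queue",
                              100 /*max messages*/, 256 /*max message size*/);
    const char msg[] = "data ready";
    sender.send(msg, sizeof(msg), 0 /*priority*/);

    // "Client 2" side (normally a separate process): open the queue and read.
    bip::message_queue receiver(bip::open_only, "demo_queue");
    char buf[256];
    bip::message_queue::size_type received = 0;
    unsigned int priority = 0;
    receiver.receive(buf, sizeof(buf), received, priority);
    std::cout << "received: " << buf << std::endl;

    bip::message_queue::remove("demo_queue");
    return 0;
}
```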
I am not sure if you have considered the message format, but if you are sending structured data between processes you should consider looking at Google Protocol Buffers.
These relate to the content of the messages (what is passed) rather than how they are passed.
boost::asio is platform independent although it doesn't imply C++ at both ends. Of course, when you are using C++ you can use boost::asio as your form of transport.

Crossplatform background service + GUI

This seems to be a typical application:
1. One part of the program should scan for audio files in the background and write tags to the database.
2. The other part makes search queries and shows results.
The application should be cross-platform.
So, the main search loop, including adding data to the database, is not a problem. The questions are:
1. What is the best way to implement this background working service? Boost (Asio) or Qt (the services framework)?
2. What is the best approach: to make a native service wrapper using the mentioned libraries, or to emulate it using a non-GUI application?
3. Should I connect the GUI to the service (how would they communicate, using Boost or Qt?) or directly to the database (could there be locking issues)?
4. Will the decision in point 1 consume all the CPU? And how do I avoid that? How can I make scanning for files less CPU-intensive?
I like to use Poco, which has a convenient ServerApplication class that lets the same application run as a normal command-line application, as a Windows service, or as a *nix daemon without having to touch the code.
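For illustration, a minimal skeleton of that approach might look like this (the scanner start/stop calls are placeholders for the actual tagging code):

```cpp
#include <Poco/Util/ServerApplication.h>
#include <string>
#include <vector>

class ScannerService : public Poco::Util::ServerApplication
{
protected:
    int main(const std::vector<std::string>& /*args*/) override
    {
        // startScannerThread();        // placeholder: kick off the background scan
        waitForTerminationRequest();    // blocks until Ctrl-C / service stop / SIGTERM
        // stopScannerThread();         // placeholder: flush the database and shut down
        return Application::EXIT_OK;
    }
};

// Expands to the proper entry point for a console app, Windows service or daemon.
POCO_SERVER_MAIN(ScannerService)
```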
If you use a "real" database (MySQL, PostgreSQL, SQL Server), then querying the database from the GUI application is probably fine and easier to do. If you use another type of database that isn't necessarily multi-user friendly, then you should communicate with the service using loopback sockets or pipes.
As far as CPU usage goes, you could just use a bunch of "sleep" calls within the code that searches files to make sure it doesn't hog the CPU and I/O. Or use some kind of interval notification to allow it to search in chunks periodically.
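A rough sketch of that chunked, throttled approach (the two helpers are placeholders for the real file walking and tagging code):

```cpp
#include <chrono>
#include <string>
#include <thread>
#include <vector>

// Placeholder implementations standing in for the real work.
std::vector<std::string> collectAudioFiles() { return {"a.mp3", "b.flac"}; }
void writeTagsToDatabase(const std::string& /*path*/) { /* real tagging goes here */ }

void scanInBackground()
{
    const std::vector<std::string> files = collectAudioFiles();
    std::size_t i = 0;
    while (i < files.size()) {
        // Handle a small chunk, then sleep so the scan never saturates
        // a CPU core or hammers the disk continuously.
        for (std::size_t n = 0; n < 50 && i < files.size(); ++n, ++i)
            writeTagsToDatabase(files[i]);
        std::this_thread::sleep_for(std::chrono::milliseconds(200));
    }
}
```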

DLL Injection/IPC question

I'm working on a build tool that launches thousands of processes (compiles, links, etc.). It also distributes executables to remote machines so that the build can be run across hundreds of slave machines. I'm implementing DLL injection to monitor the child processes of my build process so that I can see that they opened/closed the resources I expected them to. That way I can tell if my users aren't specifying dependency information correctly.
My question is:
I've got the DLL injection working but I'm not all that familiar with Windows programming. What would be the best/fastest way to call back to the parent build process with all the millions of file I/O reports that the children will be generating? I've thought about having them write to a non-blocking socket, but have been wondering if maybe pipes/shared memory or maybe COM would be better?
First, since you're apparently dealing with communication between machines, not just within one machine, I'd rule out shared memory immediately.
I'd think hard about trying to minimize the amount of data instead of worrying a lot about how fast you can send it. Instead of sending millions of file I/O reports, I'd batch together a few kilobytes of that data (or something on that order) and send a hash of that packet. With a careful choice of packet size, you should be able to reduce your data transmission to the point that you can simply use whatever method you find most convenient, rather than trying to pick the one that's the fastest.
If you stay in the Windows world (none of your machines is Linux or whatever), named pipes are a good choice, because they are fast and can be accessed across the machine boundary. I think shared memory is out of the race, because it can't cross the machine boundary. Distributed COM allows you to formulate the contract in IDL, but I think XML messages via pipes are also OK. The XML messages have the benefit of working completely independently of the channel. If you need Linux later you can switch to TCP/IP transport and send your XML messages.
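For illustration, the parent (build tool) end of such a named pipe might look roughly like this (pipe name, buffer sizes and report format are placeholders):

```cpp
#include <windows.h>
#include <iostream>

int main()
{
    // Parent/build-tool side: create the pipe the injected DLLs will write to.
    HANDLE pipe = CreateNamedPipeA(
        "\\\\.\\pipe\\build_file_io",       // placeholder pipe name
        PIPE_ACCESS_INBOUND,                // the parent only reads reports
        PIPE_TYPE_MESSAGE | PIPE_READMODE_MESSAGE | PIPE_WAIT,
        PIPE_UNLIMITED_INSTANCES,
        4096, 4096, 0, nullptr);
    if (pipe == INVALID_HANDLE_VALUE)
        return 1;

    if (ConnectNamedPipe(pipe, nullptr)) {  // blocks until a child connects
        char buf[4096];
        DWORD bytesRead = 0;
        while (ReadFile(pipe, buf, sizeof(buf) - 1, &bytesRead, nullptr) && bytesRead > 0) {
            buf[bytesRead] = '\0';
            std::cout << "file I/O report: " << buf << '\n';
        }
    }
    CloseHandle(pipe);
    return 0;
}

// Injected DLL side (sketch): open the same pipe and write one report.
//   HANDLE h = CreateFileA("\\\\.\\pipe\\build_file_io", GENERIC_WRITE, 0,
//                          nullptr, OPEN_EXISTING, 0, nullptr);
//   DWORD written = 0;
//   const char report[] = "open: C:/src/foo.h";
//   WriteFile(h, report, sizeof(report), &written, nullptr);
//   CloseHandle(h);
```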
Some additional techniques with limitations:
Another forgotten but hot candidate is RPC (remote procedure calls). A lot of Windows services rely on this, but I think RPC is hard to program.
If you are on the same machine and you only need to send some status information, you can register a Windows message via RegisterWindowMessage() and send messages via SendMessage().
Apart from all the suggestions from Thomas, you might also just use a common database to store the results. And if that is too slow, use one of the more modern (and fast) key/value databases (like Tokyo Cabinet, memcachedb, etc.).
This sounds like a lot of overkill for the task of verifying the files used in a build. How about just scanning the build files, or capturing the output from the build tools?