IPC between Qt and C/C++

I need to send/receive data between two processes. One of them will be using Qt (4 or 5).
That process will be running all the time (like a background process).
The other process will be launched and then it should be able to send argv to the
first process and receive some answer from it.
The second process must start up as fast as possible, so using QtCore is something of a last resort. I need it to be as small and fast as possible, ideally plain
C/C++ without any external libraries.
Any ideas how it could be done?
If that's not possible, I'll have to use QtCore in the second process. Do you know how much
slower that would be, QtCore vs. plain C/C++, in terms of startup time?
Regards
EDIT:
I can't use D-Bus (QtDBus), as this must be Mac/Linux/Windows compatible.

If it needs to be fully cross-platform, your best bet is likely local sockets or named pipes, both of which are available on each platform. You'll still need some socket-handling code in your plain C++ application, but it should carry significantly less overhead than QtCore plus QtNetwork.
You could also do it with shared memory, but I prefer the socket method for its simplicity.
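As a sketch of how small the plain-C++ side can be, here is a minimal POSIX TCP client that forwards its argv to the resident process over the loopback interface and prints the reply. The port number 5555 and the line-per-argument framing are illustrative assumptions; on Windows you would use Winsock (WSAStartup plus the same calls), and the Qt side could accept connections with a QTcpServer from QtNetwork.

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <unistd.h>
    #include <cstdio>
    #include <cstring>

    int main(int argc, char** argv) {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) return 1;

        sockaddr_in addr{};
        addr.sin_family = AF_INET;
        addr.sin_port = htons(5555);            // illustrative port
        inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);

        if (connect(fd, reinterpret_cast<sockaddr*>(&addr), sizeof(addr)) != 0)
            return 1;

        // Forward the command line, one argument per line.
        for (int i = 1; i < argc; ++i) {
            write(fd, argv[i], std::strlen(argv[i]));
            write(fd, "\n", 1);
        }
        shutdown(fd, SHUT_WR);                  // signal end of request

        // Print whatever the resident process answers.
        char buf[256];
        ssize_t n;
        while ((n = read(fd, buf, sizeof(buf))) > 0)
            std::fwrite(buf, 1, static_cast<std::size_t>(n), stdout);

        close(fd);
        return 0;
    }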

Related

FFTW reentrancy in plug-in based programs

I'm developing a cross-platform application (Win / Mac / Linux). This application loads plug-ins that I don't control as dynamic libraries, which may do various things, mostly audio and image processing.
Some of these plug-ins may use FFTW as part of their implementation details. (This is not a hypothetical case; I already have three of those.)
But FFTW's fftw_plan family of functions is not reentrant per the docs: they may only be called from a single thread. The problem is that some of the plug-ins I load may call fftw_plan deep inside some thread that they create themselves.
Is there something I can do to still make sure that things work in that case, or should I just accept that this will end up crashing? (Putting each plug-in in its own process is not an acceptable solution for me, sadly.)
It turns out that FFTW provides the function void fftw_make_planner_thread_safe(void), which ensures that plug-ins can run planner calls from separate threads.
Calling it once at the beginning of the program is enough.
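A minimal sketch of the fix (fftw_make_planner_thread_safe() is available from FFTW 3.3.5 onward and requires an FFTW build with threading support):

    #include <fftw3.h>

    int main() {
        // Serializes all planner (fftw_plan_*) calls internally, so plug-ins
        // may create and destroy plans from threads they spawn themselves.
        fftw_make_planner_thread_safe();

        // ... load plug-ins and run the host application as usual ...
        return 0;
    }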

Capping allocated memory in multi-threaded C++ library

I've developed a library in C++ that allows multi-threaded usage. I want to support an option for the caller to specify a cap on the memory allocated by a given thread. (We can ignore the case of one thread allocating memory and others using it.)
Possibly making this more complicated is that my library uses various open source components (boost, ICU, etc), some of which are statically linked and others dynamically.
One option I've been looking into is overriding the allocation functions (new/delete/etc) to do the bookkeeping per thread ID. Natural concerns come up around the bookkeeping: performance, etc.
But an even bigger question/concern is whether this approach will work with the open source components without code changes to them?
I can't seem to find pre-existing solutions for this, though it seems to me like it's not very unusual.
Any suggestions on this approach, or another approach?
EDIT: More background: depending on the input provided, the library can allocate anywhere from KBs to GBs per calling thread.
So the goal of this request is to support running in RAM-constrained environments more gracefully and deterministically. This is not for a hard-real-time environment with strict memory limits; it's to support a number of concurrent threads that each have a "safe" allocation cap, to avoid engaging the page/swap file.
Basic example use case: a system with 32GB RAM, 20GB free, the application using my library may configure itself to use a max of 10 threads and configure the library to use a max of 1GB per thread.
Upon hitting the cap the current thread's call into the library will cease further work and return a suitable error. (The code is already fully RAII so unwinding cleanly is easy.)
BTW I found some interesting content on the web already; sadly, none of it offers much hope for a "simple & effective" solution, but this one is especially insightful.
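To make the override idea from the question concrete, here is a minimal sketch of per-thread bookkeeping through the global allocation functions. Everything here (set_thread_alloc_cap, the thread_local counters) is illustrative, not an existing API; the array and nothrow overloads are omitted, and, as the question assumes, memory is freed on the thread that allocated it.

    #include <cstdlib>
    #include <limits>
    #include <new>

    namespace {
        // Illustrative per-thread counters; not an existing API.
        thread_local std::size_t t_allocated = 0;
        thread_local std::size_t t_cap =
            std::numeric_limits<std::size_t>::max();   // no cap by default
    }

    void set_thread_alloc_cap(std::size_t bytes) { t_cap = bytes; }

    void* operator new(std::size_t size) {
        if (t_allocated + size > t_cap)
            throw std::bad_alloc();             // cap hit: unwind via RAII
        void* p = std::malloc(size);
        if (!p)
            throw std::bad_alloc();
        t_allocated += size;
        return p;
    }

    // Sized delete (C++14) can decrement without a bookkeeping header,
    // assuming memory is freed on the thread that allocated it.
    void operator delete(void* p, std::size_t size) noexcept {
        std::free(p);
        t_allocated -= size;
    }

    // The unsized form cannot adjust the counter without extra bookkeeping
    // (e.g. stashing the size in a header in front of each block).
    void operator delete(void* p) noexcept {
        std::free(p);
    }

As for the open-source components: statically linked ones generally pick up a global operator new/delete replacement without code changes; whether dynamically linked ones do is platform-dependent (symbol interposition on Linux versus per-DLL CRT heaps on Windows), which is exactly the open question in the post.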

How a process can broadcast data locally

I am looking for an existing way of broadcasting data locally (like IPC, but in an unconnected way).
The need:
I currently have a computation program that has no HMI (and won't have one), and I would like it to send information about its progress so that another program can display it (for example in an HMI). If no other program is listening, the computation must not be interrupted. I would also like to embed the minimum amount of logic in the computation program.
I have found material about IPC, but it seems to work only in a client-server configuration.
So I have identified that my need is a way of broadcasting the data, where clients may or may not listen to the broadcast.
How can I do this?
EDIT:
I would like either a very light solution (like a standalone set of .h files, not more than 5) or even a way of doing it myself: as I said, IPC seems OK, but it works in a connected way.
For example, 0MQ (http://zguide.zeromq.org/page:all#Getting-the-Message-Out) does exactly what I need, but embeds too much functionality.
You could try the MPI library for this purpose.
Have a look at this
For now, shared memory (on UNIX) seems to do the job; a minimal writer sketch follows below. Several points remain that I have not investigated yet:
compatibility between OSes (it's C++ and I would like it to build on any platform without having to change the code)
sharing complex objects whose size is undetermined at compile time; dynamic sizes might make an efficient implementation really complicated
So I am still open to, and waiting for, a better solution.
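To make the shared-memory approach concrete, here is a minimal POSIX writer sketch (Linux/macOS; Windows would need CreateFileMapping instead). The segment name "/progress_demo" and the single-double payload are illustrative assumptions, and on Linux you may need to link with -lrt.

    #include <fcntl.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>
    #include <cstring>

    int main() {
        const char* name = "/progress_demo";    // illustrative segment name
        int fd = shm_open(name, O_CREAT | O_RDWR, 0666);
        if (fd < 0) return 1;
        if (ftruncate(fd, sizeof(double)) != 0) return 1;

        void* mem = mmap(nullptr, sizeof(double),
                         PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (mem == MAP_FAILED) return 1;

        for (int step = 0; step <= 100; ++step) {
            double progress = step / 100.0;
            std::memcpy(mem, &progress, sizeof progress);  // publish, never block
            usleep(100 * 1000);                            // simulate work
        }

        munmap(mem, sizeof(double));
        close(fd);
        shm_unlink(name);                       // remove segment on exit
        return 0;
    }

The writer never blocks: any number of readers can shm_open the same name, mmap it read-only and poll the value, or no reader may exist at all, which matches the "broadcast with optional listeners" requirement.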

MPI Fundamentals

I have a basic question regarding MPI, to get a better understanding of it (I am new to MPI and multiple processes, so please bear with me on this one). I am using a simulation environment in C++ (RepastHPC) that makes extensive use of MPI (via the Boost libraries) to allow parallel operations. In particular, the simulation consists of multiple instances of the respective classes (i.e. agents) that are supposed to interact with each other, exchange information, etc. Given that this takes place across multiple processes (and given my rudimentary understanding of MPI), my natural fear is that agents on different processes can no longer interact because they cannot connect (I know this contradicts the entire idea of MPI).
After reading the manual, my understanding is this: the libraries of Boost.MPI (and also those of the above-mentioned package) take care of all of the communication, sending packages back and forth between processes. Each process holds copies of the instances from other processes (I guess this is some form of call by value, because the original instance cannot be changed from a process that only has a copy); then an update takes place to ensure that the copies carry the same information as the originals, and so on.
Does this mean that, in terms of the final outcomes of the simulation runs, I get the same as if I were doing the entire thing in one process? Put differently, are the multiple processes just supposed to speed things up without changing the design of the simulation (so I don't have to worry about it)?
I think you have a fundamental misunderstanding of MPI here. MPI is not an automatic parallelization library. It isn't a distributed shared memory mechanism. It doesn't do any magic for you.
What it does do is make it simpler to communicate between different processes on the same or different machines. Each process has its own address space, which does not overlap with the other processes' (unless you're doing something else outside of MPI). Assuming you set up your MPI installation correctly, it takes care of all the pain of setting up the communication channels between your processes for you. It also gives you some higher-level abstractions, like collective communication.
When you use MPI, you compile your code differently than normal. Instead of g++ -o code code.cpp (or whatever your compiler is), you use mpicxx -o code code.cpp. This automatically links all of the necessary MPI pieces. Then when you run your application, you use mpiexec -n <num_processes> ./code (your MPI implementation may need additional arguments, but this is the essential form). The num_processes argument tells MPI how many processes to launch; this isn't done at compile/link time.
You will also have to rewrite your code to use MPI. MPI has lots of functions (the standard is available here and there are lots of tutorials available on the web that are easier to understand) that you can use. The basics are MPI_Send() and MPI_Recv(), but there's lots and lots more. You'll have to find a tutorial for that.
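For illustration, here is a minimal sketch of the MPI_Send()/MPI_Recv() pair (a standard two-rank exchange, not code from the question):

    #include <mpi.h>
    #include <cstdio>

    int main(int argc, char** argv) {
        MPI_Init(&argc, &argv);

        int rank = 0;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) {                        // rank 0 sends one int...
            int payload = 42;
            MPI_Send(&payload, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {                 // ...and rank 1 receives it
            int payload = 0;
            MPI_Recv(&payload, 1, MPI_INT, 0, 0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            std::printf("rank 1 received %d\n", payload);
        }

        MPI_Finalize();
        return 0;
    }

Build and launch it exactly as described above: mpicxx -o code code.cpp, then mpiexec -n 2 ./code.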

Interaction of two c/c++ programs

I completely lack understanding of this topic. Maybe this is too broad for Stack Overflow, but here it goes:
Suppose I have two programs (written in C/C++) running simultaneously, say A and B, with different PIDs.
What are the options to make them interact with each other? For instance, how do I pass information from one to the other, e.g. having one wait for a signal from the other and respond accordingly?
I know MPI, but MPI normally works for programs compiled from the same source, so it's geared more towards parallel computing than towards interaction between completely different programs built to talk to each other.
Thanks
You should look into "IPC" (inter-process communication). There are several types:
pipes
signals
shared memory
message queues
semaphores
files (per suggestion of @JonathanLeffler :-)
RPC (suggested by @sftrabbit), which is usually more geared towards client/server
CORBA
D-Bus
You use one of the many inter-process communication mechanisms, like pipes (one application writes bytes into a pipe, the other reads from it; think of stdin/stdout) or shared memory (a region of memory is mapped into both programs' virtual address spaces, and they can communicate through it).
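As a minimal illustration of the pipe mechanism, here is a POSIX sketch where a parent writes bytes and a child reads them. Note that an anonymous pipe only works between related processes; two independently started programs (different PIDs, as in the question) would use a named pipe created with mkfifo instead.

    #include <sys/wait.h>
    #include <unistd.h>
    #include <cstdio>
    #include <cstring>

    int main() {
        int fds[2];                 // fds[0] = read end, fds[1] = write end
        if (pipe(fds) != 0) return 1;

        pid_t pid = fork();
        if (pid == 0) {                         // child: the reader
            close(fds[1]);
            char buf[64] = {0};
            ssize_t n = read(fds[0], buf, sizeof(buf) - 1);
            if (n > 0) std::printf("child read: %s\n", buf);
            close(fds[0]);
            return 0;
        }

        close(fds[0]);                          // parent: the writer
        const char* msg = "hello from parent";
        write(fds[1], msg, std::strlen(msg));
        close(fds[1]);                          // EOF for the reader
        wait(nullptr);
        return 0;
    }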
The same source doesn't matter - once your programs are compiled the system doesn't know or care where they came from.
There are different ways to communicate between them, depending on how much data, how fast, one-way or bidirectional, whether the rate is predictable, etc.
The simplest is possibly just to use the network. Note that if both processes are on the same machine, loopback traffic never leaves the machine; the network stack typically serves it through a heavily optimized local path, so the cost is far below that of a real network hop.