How to load a shared object multiple times independently - c++

I'm tasked with designing a small daemon (on Debian Linux) which will use a blackbox libfoo.so to communicate with an external EFT terminal. There are several, identical EFT terminals (around 100), and one libfoo.so instance can only communicate with a single terminal. There is an init call which essentially binds the instance to a terminal.
We're mainly using Java in our company, but this probably calls for a C++ implementation. The programming language is not yet defined.
As we'll need to handle concurrent communication with multiple terminals (maybe around 10 concurrent threads), we'll need to load several instances of libfoo.so. I'm looking for design principles for solving such a requirement (dlopen will only load an SO once, and the same goes for JNI). Do I need to spawn child processes? Copy the SO and call it libfoo_1.so, libfoo_2.so, etc. (aargh!)? Are there other solutions?
Thanks
Simon

If the library has no API, meaning it runs its code via the .init mechanism, then you have no better choice than forking from a parent process and dlopen()ing the library in each child process.
This is pretty simple actually, as long as you remember to wait for your child processes to terminate when needed.
If you need to handle communication between your parent and child processes, there are several Inter-process Communication methods available such as pipes.
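A minimal sketch of that fork-then-dlopen pattern (note that "foo_init" is a hypothetical symbol name standing in for whatever init call libfoo actually exports to bind an instance to a terminal):

```cpp
#include <dlfcn.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>
#include <cstdio>
#include <cstdlib>

// Fork one worker per terminal; each child dlopen()s its own copy of
// the library, so the library's static state is never shared between
// terminals. "foo_init" is a hypothetical name for libfoo's real
// init call.
pid_t spawn_terminal_worker(const char* lib_path, int terminal_id) {
    pid_t pid = fork();
    if (pid != 0) return pid;              // parent: child's pid (or -1)

    // child: load a fresh, private instance of the library
    void* handle = dlopen(lib_path, RTLD_NOW | RTLD_LOCAL);
    if (!handle) {
        std::fprintf(stderr, "dlopen failed: %s\n", dlerror());
        std::exit(1);
    }
    typedef int (*init_fn)(int);
    init_fn init = reinterpret_cast<init_fn>(dlsym(handle, "foo_init"));
    if (init) init(terminal_id);           // bind this instance to one terminal
    // ... serve requests for this terminal, e.g. over a pipe to the parent ...
    std::exit(0);
}
```

The parent then simply waitpid()s each child to reap it when it exits.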

Related

Difference between 2 apps running a secondary thread for communication and 2 apps using the same communication process

I am new to multi-processing. I am creating an app in modern C++ that communicates with PLCs over Modbus TCP. I have created my own library to achieve that: I run the communication process on a different thread, and the main thread communicates with it through a shared queue and events. It works well.
Now I would like to design another app, with a different purpose, that would sometimes run in parallel to the first one. It still needs to communicate with the PLCs. Should I also implement it with multithreading, opening a new communication channel in a thread? Or should I implement a single, shared communication process that every executable would connect to via some pipe? What is the difference between the two, and is there a rationale for choosing or excluding one of these solutions?
I see advantages in the multi-threading solution:
If the client fails, it only affects one program until reconnection,
Shared memory is useful,
It is already implemented in my case
But I also see several advantages to the multi-processing approach: one communication client could be used by many apps, and it appears to be better for optimization.
I thank you in advance for your help.

C++ best way to launch another process?

It's been a while since I've had to do this, and in the past I've used "spawn" to create processes.
Now I want to launch processes from my application asynchronously so my application continues to execute in the background and does not get held up by launching the process.
I also want to be able to communicate with the launched processes. When I launch a process I will send it the launcher's process id, so that the launched process can communicate with the launcher using its pid.
What is the best method to use that is not specific to any platform or operating system? I'm looking for a multi-platform solution.
I'm writing this in C++, I don't want a solution that ties me to any third party licensed product.
I don't want to use threads, the solution must be for creating new processes.
A portable way to launch a new process is std::system:
#include <cstdlib>

int main() {
    std::system("./myapp");
    return 0;
}
If you use Linux and you want to share handles/memory between processes, fork is what you are looking for.
Try Boost.Process.
Boost.Process provides a flexible framework for the C++ programming language to manage running programs, also known as processes. It empowers C++ developers to do what Java developers can do with java.lang.Runtime/java.lang.Process and .NET developers can do with System.Diagnostics.Process. Among other functionality, this includes the ability to manage the execution context of the currently running process, the ability to spawn new child processes, and a way to communicate with them using standard C++ streams and asynchronous I/O.
The library is designed in a way to transparently abstract all process management details to the user, allowing for painless development of cross-platform applications. However, as such abstractions often restrict what the developer can do, the framework allows direct access to operating system specific functionality - obviously losing the portability features of the library.
Example code from the site that runs a child process and waits for it to finish:

#include <boost/process.hpp>
namespace bp = boost::process;

bp::child c(bp::search_path("g++"), "main.cpp");
while (c.running())
    do_some_stuff();
c.wait(); // wait for the process to exit
int result = c.exit_code();
I'll plug my own little (single header) library:
PStreams allows you to run another program from your C++ application and to transfer data between the two programs similar to shell pipelines.
In the simplest case, a PStreams class is like a C++ wrapper for the POSIX.2 functions popen(3) and pclose(3), using C++ iostreams instead of C's stdio library.
The library provides class templates in the style of the standard iostreams that can be used with any ISO C++ compiler on a POSIX platform. The classes use a streambuf class that uses fork(2) and the exec(2) family of functions to create a new process and creates up to three pipes to write/read data to/from the process.
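For comparison, here is a rough sketch of what those classes wrap underneath, using raw popen(3) directly instead of the PStreams iostream interface:

```cpp
#include <cstdio>
#include <string>

// popen(3) runs a shell command with its stdout connected to a FILE*
// in the calling process; PStreams wraps this same idea (built on
// fork/exec/pipes) behind C++ iostreams.
std::string read_command_output(const char* cmd) {
    std::string out;
    FILE* pipe = popen(cmd, "r");          // "r": read the child's stdout
    if (!pipe) return out;
    char buf[256];
    while (std::fgets(buf, sizeof buf, pipe))
        out += buf;
    pclose(pipe);                          // waits for and reaps the child
    return out;
}
```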

How to run a C++ program inside another C++ program?

I will sketch the scenario I would like to get working below.
I have one main application.
That application, based on user interactions, can load other applications inside a secure environment/shell. This means these child applications cannot interact with the OS anymore, nor with each other.
The parent program can at any time call functions of these child programs.
The child programs can at any time call functions of the parent program.
Does anyone know how to implement this in C++? Preferably both parent and child should be written in C++.
The performance of loading the child applications inside the parent application doesn't matter. The only thing that matters is the performance of the communication between child and parent when calling functions of each other.
You will have to write your own compiler.
Consider: No normal OS supports what you want. You want both executables to run inside a single process, yet that process may or may not make OS calls depending on some weirdness inside the process which the OS doesn't understand at all.
This is no longer a problem with your custom compiler, as it simply will not create the offending instructions. It's similar to Java and .Net, which also prevent such OS calls outside their control.
A portable solution: Google Native Client
One possible Linux solution:
Make AppArmor profile with "hats" (a "hat" is a sandboxing configuration to which the application can switch programmatically with libapparmor),
have the main application create a "pipe",
have the main application "fork",
change into a "hat" corresponding to the child application,
"exec" the child application,
the main application and the child application communicate via the "pipe" created earlier.
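Steps 2-6 above can be sketched as follows. The aa_change_hat() call from libapparmor appears only as a comment so the sketch builds without -lapparmor, and "cat" stands in for the real child application:

```cpp
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>
#include <string>

// Create two pipes, fork, switch the child into its sandbox "hat",
// exec the child application, and converse over the pipes.
std::string relay_through_child(const std::string& msg) {
    int to_child[2], from_child[2];
    if (pipe(to_child) != 0 || pipe(from_child) != 0) return "";
    pid_t pid = fork();
    if (pid == 0) {
        dup2(to_child[0], STDIN_FILENO);     // child reads requests on stdin
        dup2(from_child[1], STDOUT_FILENO);  // and answers on stdout
        close(to_child[0]);   close(to_child[1]);
        close(from_child[0]); close(from_child[1]);
        // aa_change_hat("child_hat", magic_token);  // sandbox switch goes here
        execlp("cat", "cat", (char*)nullptr);        // "cat" echoes stdin back
        _exit(127);
    }
    close(to_child[0]);
    close(from_child[1]);
    write(to_child[1], msg.data(), msg.size());
    close(to_child[1]);                      // EOF lets the child finish
    std::string out;
    char buf[128];
    ssize_t n;
    while ((n = read(from_child[0], buf, sizeof buf)) > 0)
        out.append(buf, n);
    close(from_child[0]);
    waitpid(pid, nullptr, 0);
    return out;
}
```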
If you want a (semi-)cross-platform way to do this, you can use RPC to call functions in another process. It will work on anything that supports the Distributed Computing Environment (DCE). It's been around for some time, and the MSDN documentation states that parts of Windows use it for inter-process communication, so it's probably fast enough. Here's a tutorial on MSDN that should get you up and running: http://msdn.microsoft.com/en-us/library/windows/desktop/aa379010.aspx The bad part is that I haven't been able to find a tutorial about using it on Linux.
If you don't want to use RPC, or find it too hard to find good documentation on the subject, you can use the standard IPC (inter-process communication) mechanisms from Unix systems to signal your process that it should call a certain function. I'd recommend a message queue because it's very fast and lightweight. You can find a tutorial here: http://www.cs.cf.ac.uk/Dave/C/node25.html
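A minimal sketch of that message-queue approach using the System V API (the 64-byte payload and the use of message type 1 are arbitrary choices for the example):

```cpp
#include <sys/ipc.h>
#include <sys/msg.h>
#include <cstring>
#include <string>

// One process msgsnd()s a request; the other msgrcv()s it and
// dispatches to the right function based on the payload.
struct msgbuf_t { long mtype; char mtext[64]; };

bool send_request(int qid, const std::string& text) {
    msgbuf_t m{};
    m.mtype = 1;                               // receivers filter on this type
    std::strncpy(m.mtext, text.c_str(), sizeof m.mtext - 1);
    return msgsnd(qid, &m, sizeof m.mtext, 0) == 0;
}

std::string receive_request(int qid) {
    msgbuf_t m{};
    if (msgrcv(qid, &m, sizeof m.mtext, 1, 0) < 0) return "";
    return m.mtext;
}
```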
I am not familiar with the OS restrictions mentioned in the answers above. However, I found an easy way to solve this problem, and I hope it helps. I used Linux. Suppose I want to call C++ program B from another C++ program A. I wrote a Perl script (say, PerlScript.pl) that contains a system call to run program B. Then in A, I make a system call like system("perl PerlScript.pl"), which asks Perl to run B for me.

How to provide a function for the other program to call it?

Assume I use a C++ program to maintain a queue in Linux and do some things with the data in the queue. Now I want to run it in the background and provide a function, so that other programs can simply call it to put data into my queue.
What's the best way to do this?
If your programs are running as two separate processes, you cannot call functions in the other process directly; you will need an inter-process communication (IPC) mechanism to communicate between the two processes.
Usually, this is done as follows:
The process you want to communicate with provides a client-side library, and the process or application that wants to communicate with it links against this library. The client-side library provides simple functions that your calling process/application can call directly, and it implements the necessary IPC mechanism to communicate with the remote process.
What I understand is that you want a client API that wraps communication with the queue.
You need to create a separate library that contains and exports the API, and include it in the programs that want to use it.
class Communicator
{
public:
    bool putData(Data* data);
    bool getData(Data*& data);
};
The implementation of Communicator does the actual communication with the queue via IPC, but you abstract that layer out.
There are a variety of mechanisms to do this, from creating your own server, to using IPC, RPC, or CORBA, to name a few.
As to the best it depends on a variety of factors.
In the OP you mentioned you want a queue with one process processing it - perhaps using shared memory and a mutex would be a simple solution, with a library to access the queue for both processes.
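A sketch of that shared-memory idea: a fixed-size ring buffer plus a process-shared mutex, placed in a POSIX shared-memory object that both processes can map. The object name, slot count, and slot size are all arbitrary choices for the example:

```cpp
#include <pthread.h>
#include <sys/mman.h>
#include <fcntl.h>
#include <unistd.h>
#include <cstring>

// The queue lives entirely inside one shared-memory object, so the
// producer and the consumer process see the same head/tail/items.
struct SharedQueue {
    pthread_mutex_t lock;   // PTHREAD_PROCESS_SHARED so both sides may lock it
    int head, tail;
    char items[16][64];
};

SharedQueue* open_queue(const char* name, bool create) {
    int fd = shm_open(name, create ? (O_CREAT | O_RDWR) : O_RDWR, 0600);
    if (fd < 0) return nullptr;
    if (create) ftruncate(fd, sizeof(SharedQueue));   // size (and zero) it
    void* p = mmap(nullptr, sizeof(SharedQueue),
                   PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    close(fd);
    SharedQueue* q = static_cast<SharedQueue*>(p);
    if (create) {
        pthread_mutexattr_t a;
        pthread_mutexattr_init(&a);
        pthread_mutexattr_setpshared(&a, PTHREAD_PROCESS_SHARED);
        pthread_mutex_init(&q->lock, &a);
        q->head = q->tail = 0;
    }
    return q;
}

bool push(SharedQueue* q, const char* data) {
    pthread_mutex_lock(&q->lock);
    bool ok = (q->tail - q->head) < 16;              // room in the ring?
    if (ok) { std::strncpy(q->items[q->tail % 16], data, 63); ++q->tail; }
    pthread_mutex_unlock(&q->lock);
    return ok;
}

bool pop(SharedQueue* q, char* out) {
    pthread_mutex_lock(&q->lock);
    bool ok = q->head < q->tail;                     // anything queued?
    if (ok) { std::strcpy(out, q->items[q->head % 16]); ++q->head; }
    pthread_mutex_unlock(&q->lock);
    return ok;
}
```

One process calls open_queue(name, true) to create and initialize the queue; the other opens it with create = false and uses the same push/pop calls.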

c++ calls to fortran and back

In my c++ code (my_app) I need to launch external app (app_ext) that dynamically loads my library (dll,so) written in fortran (lib_fort). From this library (lib_fort) I need to call back to some method from my_app, synchronously.
So it's like this:
(my_app) --launches--> (app_ext) --loads--> (lib_fort) --"calls"--> (my_app)
app_ext is not developed by me.
Do you have any suggestions for how to do it, and, most importantly, how to do it efficiently?
Edit:
Clarification. Launching external app (app_ext) and loading my library from it (lib_fort) will happen only once per whole program execution. So that part doesn't need to be ultra-efficient. Communication between lib_fort and my_app is performance critical. Lib_fort needs to "call" my_app millions of times.
The whole point is about efficient inter-process communication.
My_app role after launching app_ext is to wait and serve "calls" from lib_fort. The tricky part is that solution needs to work both for distributed and shared memory environment, i.e. both my_app and app_ext+lib_fort on single host (1) and my_app and app_ext+lib_fort on different machines (2).
In (1) scenario I was thinking about MPI, but I'm not sure if it is possible to communicate with MPI between two different applications (in contrast to single, multi-process, MPI application).
In (2) scenario probably some kind of inter-process communication using shared memory? (or maybe also MPI?)
OK, the real issue is how to communicate between processes. (Forget MPI, that's for a different kind of problem.) You may be talking about COM (Component Object Model) or RPC (Remote Procedure Call) or pipes, but underneath it's going to be using sockets. IME the simplest and most efficient thing is to open the socket connections yourself and converse over those. That will be the rate-limiter and there really isn't anything faster.
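A sketch of that raw-socket conversation, framed as fixed-size request/reply messages. Here a socketpair keeps the example self-contained; across machines the identical read/write loop would run over a connected TCP socket, and doubling the argument stands in for the real my_app callback:

```cpp
#include <sys/socket.h>
#include <unistd.h>
#include <cstdint>

// One "call" = a fixed-size request written to the socket, followed by
// a fixed-size reply read back. Fixed-size frames keep the per-call
// overhead minimal, which matters when lib_fort calls millions of times.
bool send_call(int fd, int32_t arg) {
    return write(fd, &arg, sizeof arg) == (ssize_t)sizeof arg;
}

bool recv_result(int fd, int32_t* result) {
    return read(fd, result, sizeof *result) == (ssize_t)sizeof *result;
}

// The serving side (my_app): read a request, run the callback, reply.
void serve_one(int fd) {
    int32_t arg;
    if (read(fd, &arg, sizeof arg) == (ssize_t)sizeof arg) {
        int32_t r = arg * 2;               // stand-in for the real callback
        write(fd, &r, sizeof r);
    }
}
```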