Methods of sharing class instances between processes - C++

I have written a C++ class that I need to share an instance of between at least two windows processes. What are the various ways to do this?
Initially I looked into #pragma data_seg only to be disappointed when I realised that it will not work on classes or with anything that allocates on the heap.
The instance of the class must be accessible via a dll because existing, complete applications already use this dll.

You can potentially use memory-mapped files to share data between processes. If you need to call functions on your object, you'd have to use COM or something similar, or you'd have to implement your own RPC protocol.
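For the plain-data part, a minimal sketch of a named file mapping on Windows might look like the following. The mapping name and the SharedState struct are invented for illustration, and synchronization (e.g. a named mutex) is left out:

    // Minimal sketch: share a POD struct through a named Windows file mapping.
    // The mapping name and SharedState layout are illustrative, not from the question.
    #include <windows.h>
    #include <cstdio>

    struct SharedState {        // must be trivially copyable: no pointers, no heap
        int  counter;
        char label[64];
    };

    int main() {
        HANDLE hMap = CreateFileMappingA(
            INVALID_HANDLE_VALUE,      // back the mapping with the page file
            nullptr,
            PAGE_READWRITE,
            0, sizeof(SharedState),
            "Local\\MySharedState");   // both processes must use the same name
        if (!hMap) return 1;

        void* view = MapViewOfFile(hMap, FILE_MAP_ALL_ACCESS, 0, 0, sizeof(SharedState));
        if (!view) { CloseHandle(hMap); return 1; }

        auto* state = static_cast<SharedState*>(view);
        state->counter += 1;           // real code needs a named mutex around this
        std::printf("counter = %d\n", state->counter);

        UnmapViewOfFile(view);
        CloseHandle(hMap);
        return 0;
    }

Each process that runs this (with the same mapping name) sees the same SharedState bytes; anything with pointers or heap allocations still needs the Boost.Interprocess-style machinery described below.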

Look into Boost::interprocess. It takes a bit of getting used to, but it works very well. I've made relatively complex data structures in shared memory that worked fine between processes.
edit: it works with memory-mapped files too. The point is you can use data in a structured way; you don't have to treat the memory blocks (in files or shared memory) as just raw data that you have to carefully read/write to leave in a valid state. Boost::interprocess takes care of that part and you can use STL containers like trees, lists, etc.
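As a rough illustration of what that looks like (segment and object names here are invented, and error handling is omitted), one process can construct a container in a managed shared-memory segment and another can look it up by name:

    // Sketch of Boost.Interprocess usage (names like "MySegment" are made up).
    // One process constructs a vector in shared memory; another finds it by name.
    #include <boost/interprocess/managed_shared_memory.hpp>
    #include <boost/interprocess/containers/vector.hpp>
    #include <boost/interprocess/allocators/allocator.hpp>

    namespace bip = boost::interprocess;

    using ShmAllocator = bip::allocator<int, bip::managed_shared_memory::segment_manager>;
    using ShmVector    = bip::vector<int, ShmAllocator>;

    int main() {
        // Writer side: create (or open) a 64 KiB segment and build a vector in it.
        bip::managed_shared_memory segment(bip::open_or_create, "MySegment", 65536);
        ShmAllocator alloc(segment.get_segment_manager());

        ShmVector* vec = segment.find_or_construct<ShmVector>("MyVector")(alloc);
        vec->push_back(42);

        // A second process would run the same find_or_construct call and see
        // the same vector; concurrent access still needs a bip::named_mutex.
        return 0;
    }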

You can use placement new to create the object in a shared memory region. As long as the object doesn't use any pointers, that should be fine.
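For example, a sketch of that idea, assuming region points at memory obtained from MapViewOfFile or an equivalent shared mapping (the Telemetry type is made up):

    // Sketch: construct an object directly in an already-mapped shared region
    // with placement new. 'region' is assumed to come from MapViewOfFile/mmap.
    #include <new>
    #include <cstddef>

    struct Telemetry {            // illustrative type: fixed size, no pointers
        double temperature;
        int    sampleCount;
    };

    Telemetry* create_in_region(void* region, std::size_t regionSize) {
        if (regionSize < sizeof(Telemetry)) return nullptr;
        // The creating process runs the constructor once; other processes just
        // cast their own mapped base address to Telemetry*.
        return new (region) Telemetry{0.0, 0};
    }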

Is it a POD, or do you need to be able to share a single instance across processes? Have you considered using the Singleton pattern (static initialization version, for thread-safety reasons)? You will also need mutexes to protect against concurrent writes.
On Windows, you can use COM as well.


How to future-proof thread-safe concurrent access to std::shared_ptr / std::unique_ptr

What are some recommended strategies for future-proofing present-day C++ coding of concurrent access to std::shared_ptr(-like) and std::unique_ptr(-like) data structures, as the C++ language spec evolves in this area?
Background:
Circa 2021, the available C++ language constructs for managing access to std::shared_ptr(-like) and std::unique_ptr(-like) smart pointers in concurrency-friendly ways are in flux. For example, C++20 support for std::atomic<std::shared_ptr> hasn't made it very far into compilers in the wild yet, but the C++20 spec tells us it is coming.
I'm engaged in non-trivial multi-threaded development and need to be able to pass smart pointers (both shared and unique) between threads via (hopefully lock-free) queues and use them in various thread-safe ways. I'd like to develop this code in a way that allows it to be easily upgraded to the modern language features once they become available. Ideally the upgrade could be done from a central place, as would be the case if the code were written in terms of CPP macros whose definitions could simply be changed.
Does anyone know of a good strategy (A good set of CPP macros perhaps?) for future-proofing present-day concurrency code?
[ CLARIFYING EDIT after some good comments (Thanks everyone) ]
From what I gather:
Different instances of std::shared_ptr and std::unique_ptr may be read/written from different threads without an issue (as when different instances are passed in to different threads), but the object instance (or memory) they point to may NOT be safely accessed by multiple threads at the same time, so you should use a mutex or another method to protect access to the pointed-to object in that case. [ Thanks Alex Guteniev for that clarity ]
The SAME instance of a std::shared_ptr may be read/written by multiple threads in a safe way using the std::atomic_load/std::atomic_store free functions pre-C++20, or std::atomic<std::shared_ptr> from C++20 onward (C++20 does not add an atomic specialization for std::unique_ptr). My thought is that this might be a place to use CPP macros, such as SHARED_GET, SHARED_SET, UNIQUE_GET, UNIQUE_SET, that would centralize the changes needed to go from C++17 to C++20. [ Thanks NicolBolas for the clarity on what is actually coming in C++20. As was pointed out: the link I provided in the comments below is outdated, so be careful not to consider it fact. ]
If you are passing std::unique_ptr between threads, using std::move to pass the pointed-to memory along and using queues to enforce that only a single thread has access at any given time, then you can use both the std::unique_ptrs themselves AND the pointed-to memory in the thread that receives the pointer, without any mutexes or other protection against resource contention.
Because of my confusion when asking, my original question was perhaps confusing. Now I'd rephrase the question as: I'm looking for a set of CPP macro #defines that detect C++17 vs C++20 and use that version's cleanest/correct definition for the following operations (a sketch of one possible approach follows this list):
MAKE_LOCAL_SHARED: Create/load a local std::shared_ptr instance from a common/shared instance, which the thread can then read/write without contention with the original. It should point to the same memory that the common/shared one pointed to.
BEGIN_USE_SHARED_TGT: Create and hold a std::lock_guard/mutex within the scope of which the pointed-to memory of a local std::shared_ptr instance can be safely used.
END_USE_SHARED_TGT: (probably just a closing brace?) Release the std::lock_guard/mutex when done using the pointed-to memory.
BEGIN_USE_UNIQUE_TGT, END_USE_UNIQUE_TGT: Same as above, for std::unique_ptr.
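As a rough illustration of the kind of version switch being asked for (a sketch, not an authoritative implementation): it assumes the C++20 library advertises its atomic shared_ptr specialization via the __cpp_lib_atomic_shared_ptr feature-test macro, and it uses a small wrapper type with inline functions instead of macros. The SharedSlot name and its get/set helpers are invented:

    // Sketch (not the asker's exact macros): pick the C++20 std::atomic<std::shared_ptr<T>>
    // specialization when the library reports it, else fall back to the C++17
    // std::atomic_load/std::atomic_store free functions. Names are illustrative.
    #include <memory>
    #include <atomic>

    #if defined(__cpp_lib_atomic_shared_ptr)        // assumed C++20 feature-test macro
    template <class T>
    struct SharedSlot {
        std::atomic<std::shared_ptr<T>> ptr;
        std::shared_ptr<T> get() const     { return ptr.load(); }
        void set(std::shared_ptr<T> p)     { ptr.store(std::move(p)); }
    };
    #else                                            // C++17 fallback
    template <class T>
    struct SharedSlot {
        std::shared_ptr<T> ptr;            // accessed only via the atomic free functions
        std::shared_ptr<T> get() const     { return std::atomic_load(&ptr); }
        void set(std::shared_ptr<T> p)     { std::atomic_store(&ptr, std::move(p)); }
    };
    #endif

Wrapping both variants behind the same SharedSlot interface keeps the eventual C++17-to-C++20 switch in one header; as the answer below notes, a unique_ptr handed off through a queue with std::move doesn't need atomic access at all.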
I'm engaged in non-trivial multi-threaded development and need to be able to pass smart pointers (both shared and unique) between threads via (hopefully lock-free) queues and use them in various thread-safe ways.
When using a (lock-free) queue, you don't access produced and consumed elements at the same time.
For accessing different variables, unique_ptr and shared_ptr are already safe. When two shared_ptrs point to the same object, there is a guarantee that manipulating these shared_ptrs from different threads is thread-safe; this is usually implemented with reference counting. Different unique_ptrs never point to the same object.
Just use shared_ptr and unique_ptr as usual if you only put them into a queue and don't actually access the same variable from multiple threads at the same time.
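As a concrete (if simplified) illustration of that advice, here is a sketch that hands a std::unique_ptr from a producer thread to a consumer thread through a small mutex-protected queue; a lock-free queue would be used the same way from the caller's side. Work and WorkQueue are invented names:

    // Sketch: hand std::unique_ptr between threads through a small mutex-protected
    // queue. Ownership moves with the pointer, so no further locking is needed
    // to use the payload on the receiving side.
    #include <condition_variable>
    #include <iostream>
    #include <memory>
    #include <mutex>
    #include <queue>
    #include <thread>

    struct Work { int id; };                      // illustrative payload

    class WorkQueue {
        std::queue<std::unique_ptr<Work>> items_;
        std::mutex m_;
        std::condition_variable cv_;
    public:
        void push(std::unique_ptr<Work> w) {
            { std::lock_guard<std::mutex> lk(m_); items_.push(std::move(w)); }
            cv_.notify_one();
        }
        std::unique_ptr<Work> pop() {             // blocks until an item arrives
            std::unique_lock<std::mutex> lk(m_);
            cv_.wait(lk, [this] { return !items_.empty(); });
            auto w = std::move(items_.front());
            items_.pop();
            return w;
        }
    };

    int main() {
        WorkQueue q;
        std::thread consumer([&] {
            auto w = q.pop();                     // sole owner now; no extra locking
            std::cout << "got work " << w->id << '\n';
        });
        auto w = std::make_unique<Work>();
        w->id = 1;
        q.push(std::move(w));                     // producer gives up ownership here
        consumer.join();
    }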

How to create more than one instance, each with its own copy of global variables

I have two projects:
An embedded one, written in C++, which uses a lot of static/global variables.
A second one, running on a PC and using the same source code as the embedded project.
It works very well.
But now the second project needs to run more than one instance of the embedded project. Furthermore, each instance should have its own copy of the static/global variables, and I should be able to interact with each instance within one program scope. I don't know how to do this with all those static/global variables.
Is there any simple way to solve my problem?
There are several ways you can solve this:
Spawn multiple processes (each with their own set of globals) and set up channels of communication between them and the main program.
Get rid of the global variables. The easiest way to do this would be to dump them all into a class (as non-static members) and use instances of that class to access each set of variables (see the sketch after this answer).
Either way, it's not a small problem to solve if you have a large number of globals.
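To make the second option concrete, here is a small sketch of that refactoring, with invented names (EmbeddedContext, speed, mode, step):

    // Sketch of the second option: the former globals become members of a
    // context object, and each "instance" of the embedded code gets its own copy.

    // Before:  int g_speed;  int g_mode;  void step() { g_speed += g_mode; }

    struct EmbeddedContext {
        int speed = 0;
        int mode  = 0;
    };

    void step(EmbeddedContext& ctx) {   // every former global access goes through ctx
        ctx.speed += ctx.mode;
    }

    int main() {
        EmbeddedContext a, b;           // two independent "instances" in one process
        a.mode = 1; b.mode = 2;
        step(a); step(b);               // a.speed == 1, b.speed == 2
    }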
Run two separate processes and use some form of IPC to communicate between the processes. On Windows, the available IPC mechanisms include:
Clipboard
COM
Data Copy
DDE
File Mapping
Mailslots
Pipes
RPC
Windows Sockets
See here for details of each of these. Similar mechanisms are available in other operating systems.
A perhaps simpler alternative is to run each instance in a separate thread and place the globals in thread local storage.
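A sketch of that thread-local approach, with invented variable names; each thread that runs one "instance" of the embedded code sees its own copies of the globals:

    // Sketch of the thread-local alternative: mark the globals thread_local so each
    // thread (one per embedded "instance") gets an independent copy.
    #include <iostream>
    #include <string>
    #include <thread>

    thread_local int g_speed = 0;      // formerly plain globals
    thread_local int g_mode  = 0;

    void run_instance(int mode) {
        g_mode = mode;                 // touches only this thread's copy
        g_speed += g_mode;
        std::cout << ("speed in this instance: " + std::to_string(g_speed) + "\n");
    }

    int main() {
        std::thread t1(run_instance, 1), t2(run_instance, 2);
        t1.join(); t2.join();          // prints 1 and 2, in either order
    }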
In all cases, however, you should avoid not just "a lot" but any global variables. They are generally indicative of poor design. See this article for why globals are bad, and for ways to avoid them.
As the other answers state, the best solution is to get rid of the globals, but I understand that this is not always feasible.
I ran into the exact same problem with our code base.
The solution I used was to build each instance as a separate DLL.
Then I loaded each DLL with LoadLibrary() at runtime.
In this way you can get everything to run in a single process and have multiple versions of the same globals and singletons.
And then you don't need to use any IPC but can pass data between the instances with a simple function call. It also makes the debugging easier, because you can see everything in one debugger.
NOTE: I did this on Windows, but I assume something similar is possible on Unix.
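A rough sketch of how the host process might drive two such DLL builds. The DLL names (engine_a.dll, engine_b.dll) and the exported Step function are invented:

    // Sketch: each instance is built as its own DLL exporting the same C interface,
    // so each loaded module carries its own copy of the globals and singletons.
    #include <windows.h>
    #include <cstdio>

    typedef int (*StepFn)(int);   // assumed exported signature: int Step(int input)

    int main() {
        HMODULE a = LoadLibraryA("engine_a.dll");
        HMODULE b = LoadLibraryA("engine_b.dll");
        if (!a || !b) return 1;

        StepFn stepA = reinterpret_cast<StepFn>(GetProcAddress(a, "Step"));
        StepFn stepB = reinterpret_cast<StepFn>(GetProcAddress(b, "Step"));
        if (!stepA || !stepB) return 1;

        // Two independent sets of globals, driven from one process by plain calls.
        std::printf("%d %d\n", stepA(1), stepB(1));

        FreeLibrary(a);
        FreeLibrary(b);
        return 0;
    }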

How to use an old single-threaded C++ library in a multithreaded environment

I have an old C++ library which was designed for use in single-threaded environments.
The library exposes interfaces for initialization, which changes the library's internal data structures, and for usage, which only reads data and performs calculations.
My objective is to use this library in a Windows multithreaded application, with different threads calling instances of the dll initialized with different data.
Assuming that rewriting the dll to allow multithreading would be prohibitive, is there some way to let multiple instances of a DLL exist in the same process, with separate memory spaces, or to obtain a similar result by other means?
If the DLL contains static resources, then those would be shared among all instances created.
One possible approach would be to create a single instance and restrict access to it using some kind of lock mechanism. This may reduce performance, depending on usage, but without modifying the internal structure of the DLL it may be difficult to work with multiple instances.
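A minimal sketch of that lock-around-one-instance idea, with lib_init and lib_calculate standing in for the real library's entry points:

    // Sketch of serializing access to a non-thread-safe library with one mutex.
    // lib_init/lib_calculate are stand-ins for the real library's exports.
    #include <mutex>

    extern "C" void lib_init(const char* data);   // assumed library interface
    extern "C" double lib_calculate(double x);

    std::mutex g_libMutex;          // serializes every call into the library

    double safe_calculate(double x) {
        std::lock_guard<std::mutex> lock(g_libMutex);
        return lib_calculate(x);    // only one thread inside the DLL at a time
    }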
The sharing of static resources between all threads attached to a single DLL within a process conspires against you here.
However, there is a trick to achieve this. So long as the DLLs have different file names, the system regards them as different modules, and separate instances of their code and data are created.
The way to achieve this is, for each thread, to copy the DLL to a temporary file and load it from there with LoadLibrary. You have to use explicit linking (GetProcAddress) rather than import (.lib) files, but that's really the only way.
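A sketch of that trick; the DLL name legacy.dll and the exported Step function are invented, and error handling is minimal:

    // Sketch of the copy-and-load trick: give each thread its own renamed copy of
    // the DLL so the loader treats them as distinct modules with distinct globals.
    #include <windows.h>

    typedef int (*StepFn)(int);

    HMODULE load_private_copy(const char* originalDll) {
        char tempDir[MAX_PATH], tempDll[MAX_PATH];
        GetTempPathA(MAX_PATH, tempDir);
        GetTempFileNameA(tempDir, "dll", 0, tempDll);   // unique temporary file name
        if (!CopyFileA(originalDll, tempDll, FALSE))    // overwrite the placeholder
            return nullptr;
        return LoadLibraryA(tempDll);                   // a fresh module instance
    }

    int main() {
        HMODULE m1 = load_private_copy("legacy.dll");
        HMODULE m2 = load_private_copy("legacy.dll");
        if (!m1 || !m2) return 1;

        StepFn step1 = reinterpret_cast<StepFn>(GetProcAddress(m1, "Step"));
        StepFn step2 = reinterpret_cast<StepFn>(GetProcAddress(m2, "Step"));
        // step1 and step2 now operate on completely separate static data.
        return (step1 && step2) ? 0 : 1;
    }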

Share a queue between parent and child process in C++

I know there are many ways to handle inter-process communication between two processes, but I'm still a bit confused about how to deal with it. Is it possible to share a queue (from the standard library) between two processes in an efficient way?
Thanks
I believe your confusion comes from not understanding the relationship between the memory address spaces of the parent and child process. The two address spaces are effectively unrelated. Yes, immediately after the fork() the two processes contain almost identical copies of memory, but you should think of them as copies. Any change one process makes to memory in its address space has no impact on the other process's memory.
Any "plain old data structures" (such as provided by the C++ standard library) are purely abstractions of memory, so there is no way to use them to communicate between the two processes. To send data from one process to the other, you must use one of several system calls that provide interprocess communication.
But note that shared memory is an exception to this. You can use system calls to set up a section of shared memory, and then create data structures in that shared memory. You'll still need to protect these data structures with a mutex, but the mutex will have to be shared-memory aware. With POSIX threads, you'd initialize a pthread_mutexattr_t with pthread_mutexattr_init and give it the PTHREAD_PROCESS_SHARED attribute via pthread_mutexattr_setpshared.
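A minimal sketch of such a process-shared mutex, placed in an anonymous shared mapping so that a fork()ed child can use it too; the Shared struct is illustrative and most error checking is omitted:

    // Sketch: a process-shared mutex placed in shared memory (POSIX).
    // Real code should also check every return value.
    #include <pthread.h>
    #include <sys/mman.h>
    #include <unistd.h>

    struct Shared {
        pthread_mutex_t lock;
        int             value;
    };

    int main() {
        // Anonymous shared mapping: visible to this process and its fork() children.
        auto* shared = static_cast<Shared*>(mmap(nullptr, sizeof(Shared),
            PROT_READ | PROT_WRITE, MAP_SHARED | MAP_ANONYMOUS, -1, 0));

        pthread_mutexattr_t attr;
        pthread_mutexattr_init(&attr);
        pthread_mutexattr_setpshared(&attr, PTHREAD_PROCESS_SHARED);
        pthread_mutex_init(&shared->lock, &attr);
        shared->value = 0;

        if (fork() == 0) {                       // child
            pthread_mutex_lock(&shared->lock);
            shared->value += 1;
            pthread_mutex_unlock(&shared->lock);
            _exit(0);
        }
        // The parent would wait() for the child and then read shared->value
        // under the same lock.
        return 0;
    }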
Simple answer: sharing a std::queue between two processes can be done, but it is not trivial.
You can use shared memory to hold the queue, together with some synchronization mechanism (usually a mutex). Note that not only must the std::queue object itself be constructed in the shared memory region, but so must the contents of the queue, so you will have to provide your own allocator that manages the creation of memory in the shared region.
If you can, try to look at higher level libraries that might provide already packed solutions to your process communication needs. Consider Boost.Interprocess or search in your favorite search engine for interprocess communication.
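For a rough idea of what the Boost.Interprocess route looks like, here is a sketch of a queue-like deque of ints living in a named shared-memory segment, guarded by a named mutex. All the names ("QueueSegment", "TaskQueue", "QueueLock") are invented and error handling is omitted:

    // Sketch of a shared queue built from Boost.Interprocess pieces: a deque whose
    // allocator points into the shared segment, guarded by a named mutex.
    #include <boost/interprocess/managed_shared_memory.hpp>
    #include <boost/interprocess/containers/deque.hpp>
    #include <boost/interprocess/allocators/allocator.hpp>
    #include <boost/interprocess/sync/named_mutex.hpp>
    #include <boost/interprocess/sync/scoped_lock.hpp>

    namespace bip = boost::interprocess;

    using Alloc    = bip::allocator<int, bip::managed_shared_memory::segment_manager>;
    using ShmDeque = bip::deque<int, Alloc>;

    int main() {
        bip::managed_shared_memory seg(bip::open_or_create, "QueueSegment", 65536);
        bip::named_mutex lock(bip::open_or_create, "QueueLock");

        ShmDeque* q = seg.find_or_construct<ShmDeque>("TaskQueue")(Alloc(seg.get_segment_manager()));

        {   // producer side: push under the lock
            bip::scoped_lock<bip::named_mutex> guard(lock);
            q->push_back(7);
        }
        // A consumer process opens the same names and pops under the same lock.
        return 0;
    }

Plain ints sidestep the nested-allocation issue; elements that themselves allocate (e.g. strings) would also need interprocess-aware types and allocators, as noted above.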
I don't think there are any simple ways to share structures/objects like that between two projects. If you want to implement a queue/list/array/etc between two processes, you will need to implement some kind of communication between the processes to manage the queues and to retrieve and store entries.
For example, you could implement the queue management in one process and implement some kind of IPC (shared memory, sockets, pipes, etc.) to hand off entries from one process to the other.
There may be other methods outside of the standard C++ libraries that will do this for you. For example, there are likely Boost libraries that already implement this.

C++: Is it possible to share a pointer through forked processes?

I have a count variable that should be incremented by a few processes I forked, and read/used by the mother process.
I tried to create a pointer in my main() function of the mother process and count that pointer up in the forked children. That does not work! Every child seems to have its own copy, even though the address is the same in every process.
What is the best way to do that?
Each child gets its own copy of the parent process's memory (at least as soon as it tries to modify anything). If you need to share data between processes, you need to look at shared memory or some similar IPC mechanism.
BTW, why are you making this a community wiki - you may be limiting responses by doing so.
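A minimal sketch of that shared-memory approach for the counter in the question, using an anonymous shared mapping created before the fork(). A lock-free std::atomic is used because lock-free atomics are address-free and therefore usable across processes; error checking is omitted:

    // Sketch: a counter that fork() children can increment and the parent can read.
    #include <atomic>
    #include <cstdio>
    #include <new>
    #include <sys/mman.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main() {
        void* mem = mmap(nullptr, sizeof(std::atomic<int>),
                         PROT_READ | PROT_WRITE, MAP_SHARED | MAP_ANONYMOUS, -1, 0);
        auto* counter = new (mem) std::atomic<int>(0);   // construct in shared memory

        for (int i = 0; i < 4; ++i) {
            if (fork() == 0) {            // each child bumps the shared counter
                counter->fetch_add(1);
                _exit(0);
            }
        }
        while (wait(nullptr) > 0) {}      // wait for all children to finish
        std::printf("count = %d\n", counter->load());    // prints 4
        return 0;
    }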
Two processes do not share memory by default. It is true that a forked child process initially shares the same underlying physical memory as its parent, but an attempt to write to it causes the operating system to give the writing process its own private copy of that page (copy-on-write).
Look into another form of IPC to use.
My experience is that if you want to share information between two or more processes, you almost never want to share just some void* pointer into memory. You might want to have a look at
Boost Interprocess
which can give you an idea, how to share structured data (read "classes" and "structs") between processes.
No, use IPC or threads. Only open file descriptors are shared across a fork (and since they refer to the same open file descriptions, even the file offset is shared).
You might want to check out shared memory.
A pointer is only valid within its own process; it is private to that process's address space. There are different kinds of IPC mechanisms available in every operating system. You can opt for Windows messaging, shared memory, sockets, pipes, etc.; choose one according to your requirements and the size of the data. Another mechanism is to write the data into the target process using the available virtual memory APIs and notify that process of the corresponding pointer.
One simple but limited form of IPC that would work well for a shared count is a 'shared data segment'. On Windows this is implemented using the #pragma data_seg directive.
See this article for an example.
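For reference, a minimal sketch of what such a shared section looks like in an MSVC-built DLL; the variable and export names are invented:

    // Sketch of a shared data segment in a DLL (MSVC-specific). Every process that
    // loads the DLL sees the same g_count; only fixed-size, initialized data works.
    #include <windows.h>

    #pragma data_seg(".shared")
    volatile LONG g_count = 0;        // must be initialized to land in the segment
    #pragma data_seg()
    #pragma comment(linker, "/SECTION:.shared,RWS")   // read/write/shared

    extern "C" __declspec(dllexport) LONG IncrementCount() {
        return InterlockedIncrement(&g_count);   // atomic across the processes
    }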