How to manage type-safety in POSIX shared memory - c++

I have a 'program' that runs as multiple independent processes that use POSIX semaphores and shared memory to store and share common variables with each other. The problem is that so far this has been implemented as a main program which sets the initial values of the variables in shared memory, and the other programs rely on getting the address offset right to access a given variable.
I would like to improve on this design. An idea I had was to bundle together all the methods for accessing shared memory in a shared library, and have perhaps an enum as an argument to a GetVariable method, something like this:
enum class SHMVariables : long
{
    kVariable1 = address_offset_of_variable_1,
    kVariable2 = address_offset_of_variable_2,
    ...
};

template <class T>
void GetVariable(SHMVariables var, T &)
{
    // lock semaphore, do some memory boundary checks, and return the value of the variable
}
The challenge is that the variables may have different arithmetic types (some are user-defined structs with arithmetic type members) and I'm wondering what the best approach to managing type-safety could be. Could a map do the trick?
std::map<SHMVariables, std::size_t> // the value being assigned with something like sizeof(struct foo)
What are best practices when it comes to managing variables of different type in shared memory? Is shared memory even an appropriate choice?

Related

Is there a way to distinguish what type of memory is used by an object instance?

If I have this code:
#include <assert.h>
class Foo {
public:
bool is_static();
bool is_stack();
bool is_dynamic();
};
Foo a;
int main()
{
Foo b;
Foo* c = new Foo;
assert( a.is_static() && !a.is_stack() && !a.is_dynamic());
assert(!b.is_static() && b.is_stack() && !b.is_dynamic());
assert(!c->is_static() && !c->is_stack() && c->is_dynamic());
delete c;
}
Is it possible to implement the is_stack, is_static and is_dynamic methods so that the assertions are fulfilled?
Example of use: counting the amount of stack memory used by objects of type Foo, but not counting static or dynamic memory.
This cannot be done using standard C++ facilities, which take pains to ensure that objects work the same way no matter how they are allocated.
You can do it, however, by asking the OS about your process memory map, and figuring out what address range a given object falls into. (Be sure to use uintptr_t for arithmetic while doing this.)
Scroll down to the second answer, which gives a wide array of available options depending on the operating system:
How to determine CPU and memory consumption from inside a process?
I would also recommend reading this article on Tracking Memory Allocations in C++:
http://www.almostinfinite.com/memtrack.html
Just be aware that it's a ton of work.
While the intention here is good, the approach is not the best.
Consider a few things:
- On the stack you allocate temporary variables for your methods. You don't usually have to worry about how much stack you use because the lifetime of the temporaries is short.
- Related to the stack, what you usually care about is not corrupting it, which can happen if your program uses pointers and accesses data outside the intended bounds. For this type of problem an is_stack function will not help.
- For dynamic memory allocation you usually override the new/delete operators and keep a counter to track the amount of memory used, so again, an is_dynamic function might not do the trick.
- In the case of global variables (you said static but I extended the scope a bit), which are allocated in a separate data section (neither stack nor heap), you don't usually care about them, because they are statically allocated and the linker will tell you at link time if you don't have enough space. Plus, you can check the map file if you really want to know the address ranges.
So most of your concerns are solved at compile time and, to be honest, you rarely care about them; the rest (dynamic memory allocation) is treated differently.
But if you insist on having those methods you can tell the linker to generate a map file which will give you the address ranges for all data sections and use those for your purposes.

Non-Boost STL allocator for inter-process shared memory?

Due to policy where I work, I am unable to use a version of Boost newer than 1.33.1 and unable to use a version of GCC newer than 4.1.2. Yes, it's garbage, but there is nothing I can do about it. Boost 1.33.1 does not contain the interprocess library.
That said, one of my projects requires placing an std::map (or more likely an std::unordered_map) into shared memory. It is only written/modified ONE TIME, when the process loads, by a single process (the "server"), and read by numerous other processes. I haven't done shared memory IPC before so this is fairly new territory for me. I took a look at shmget() but it would appear that I can't continually use the same shared memory key for allocation (as I assume would be needed with STL container allocators).
Are there any other NON-BOOST STL allocators that use shared memory?
EDIT: This has been done before. Dr. Dobbs had an article on how to do this exactly back in 2003, and I started to use it as a reference. However, the code listings are incomplete and links to them redirect to the main site.
EDIT EDIT: The only reason I don't just re-write Boost.Interprocess is because of the amount of code involved. I was just wondering if there was something relatively short and concise specifically for POSIX shared memory that I could re-write from scratch since data transfers between networks are also subject to a multi-day approval process...
Pointers do not work in shared memory unless you can pin the shared memory down at a fixed address (consistent across all processes). As such, you need specific classes that are either contiguous (no pointers), or store an offset (rather than a pointer) into the memory area in which the shared memory is mapped.
We are using shared memory at work in a pretty similar situation: one process computes a set of data, places it in shared memory, and then signals the other processes that they may map the memory into their own address space; the memory is never changed afterwards.
The way we go about it is having POD structures (*) (some including char xxx[N]; attributes for string storage). If you can actually limit your strings, you are golden. And as far as map goes: it's inefficient for read-only storage => a sorted array performs better (hurray for memory locality). So I would advise going about it like so:
struct Key {
    enum { Size = 318 };
    char value[Size];
};

struct Value {
    enum { Size = 412 };
    enum K { Int, Long, String };
    K kind;
    union { int i; long l; char string[Size]; } value;
};
And then simply have an array of std::pair<Key, Value> that you sort (std::sort) and over which you use std::lower_bound for searches. You'll need to write a comparison operator for key, obviously:
bool operator<(Key const& left, Key const& right) {
    return memcmp(left.value, right.value, Key::Size) < 0;
}
And I agree that the enum + union trick is less appealing (interface wise) than a boost variant... it's up to you to make the interface better.
(*) Actually, a pure POD is not necessary. It's perfectly okay to have private attributes, constructors and copy constructors for example. All that is needed is to avoid indirection (pointers).
Simple workaround: create your own "libNotBoost v1.0" from Boost 1.51. The Boost license allows this. Since it's no longer Boost, you're fine.

How to store stl objects in shared memory (C++)?

I've the following code pattern:
class A {
double a, b, c;
...
};
class B {
map<int, A> table; // Can have maximum of MAX_ROWS elements.
...
};
class C {
B entries;
queue<int> d;
queue<int> e;
...
};
Now I want to store an object of type C in a shared memory, so that different processes can append, update and read it. How can I do this? (Note: I know how to store a simple C array that has a fixed size in shared memory. Also, remember that B.table may have an arbitrary number of entries.)
Use boost::interprocess, this library exposes this functionality.
EDIT: Here are some changes you'll need to do:
The example already defines an allocator that will allocate from the shared memory block, you need to pass this to the map and the queue. This means you'll have to change your definitions:
class B
{
public:
    map<int, A, less<int>, MapShmemAllocator> table;

    // Constructor of the map needs the instance of the allocator
    B(MapShmemAllocator& alloc) : table(less<int>(), alloc)
    { }
};
For queue, this is slightly complicated, because of the fact that it's really just an adapter, so you need to pass in the real implementation class as a template parameter:
typedef queue<int, deque<int, QueueShmemAllocator> > QueueType;
Now your class C changes slightly:
class C
{
public:
    B entries;
    QueueType d, e;

    C(MapShmemAllocator& allocM, QueueShmemAllocator& allocQ)
        : entries(allocM), d(allocQ), e(allocQ)
    { }
};
Now from the segment manager, construct an instance of C with the allocators.
C *pC = segment.construct<C>("CInst")(allocM_inst, allocQ_inst);
I think that should do the trick. NOTE: You will need to provide two allocators (one for queue and one for map), not sure if you can construct two allocators from the same segment manager, but I don't see why not.
Building and using STL objects in shared memory is not that tricky (especially using the boost::interprocess wrappers). You should certainly also use syncing mechanisms (also not a problem, with boost's named_mutex).
The real challenge is to keep the STL objects in shared memory consistent. Basically, if one of the processes crashes at a bad point in time, it leaves the other processes with two big problems:

1. A locked mutex. This can be resolved using tricky PID-to-mutex mappings, robust mutexes (where available), timed mutexes, etc.
2. An STL object in an inconsistent state (e.g. a half-updated map structure during an erase()). In general, this is not recoverable: you need to destroy and re-construct the object in the shared memory region from scratch (probably killing all other processes as well). You may try to intercept all possible external signals in your app and cross your fingers, hoping everything goes well and no process ever fails at a bad moment.
Just keep this in mind when deciding to use shared memory in your system.
UPD: check shmaps (https://github.com/rayrapetyan/shmaps) project to get an idea of how things should work.
This can be tricky. For starters, you'll need a custom allocator: Boost Interprocess has one, and I'd start with it. In your exact example, this may be sufficient, but more generally, you'll need to ensure that all subtypes also use the shared memory. Thus, if you want to map from a string, that string will also need a custom allocator, which means that it has a different type than std::string, and you can't copy or assign to it from an std::string (but you can use the two-iterator constructor), e.g.:
typedef std::basic_string<char, std::char_traits<char>, ShmemAllocator> ShmemString;
std::map<ShmemString, X, std::less<ShmemString>, ShmemAllocator> shmemMap;
with accesses like:
shmemMap[ShmemString(key.begin(), key.end())] ...
And of course, any types you define which go into the map must also use shared memory for any allocations: Boost Interprocess has an offset_ptr which may help here.

Access data in shared memory C++ POSIX

I open a piece of shared memory and get a handle of it. I'm aware there are several vectors of data stored in the memory. I'd like to access those vectors of data and perform some actions on them. How can I achieve this? Is it appropriate to treat the shared memory as an object so that we can define those vectors as fields of the object and those needed actions as member functions of the object?
I've never dealt with shared memory before. To make things worse, I'm new to C++ and POSIX. Could someone please provide some guidance? Simple examples would be greatly appreciated.
int my_shmid = shmget(key, size, shmflgs);
...
void* address_of_my_shm1 = shmat(my_shmid, 0, shmflags);
Object* optr = static_cast<Object*>(address_of_my_shm1);
...or, in some other thread/process to which you arranged to pass address_of_my_shm1 by some other means:
void* address_of_my_shm2 = shmat(my_shmid, address_of_my_shm1, shmflags);
You may want to assert that address_of_my_shm1 == address_of_my_shm2. But note that I say "may" - you don't actually have to do this. Some types/structs/classes can be read equally well at different addresses.
If the object will appear in different address spaces, then pointers outside the shm in process A may not point to the same thing as in process B. In general, pointers outside the shm are bad. (Virtual functions are pointers outside the object, and outside the shm. Bad, unless you have another reason to trust them.)
Pointers inside the shm are usable, if they appear at the same address.
Relative pointers can be quite usable, but, again, so long as they point only inside the shm. Relative pointers may be relative to the base of an object, i.e. they may be offsets. Or they may be relative to the pointer itself. You can define some nice classes/templates that do these calculations, with casting going on under the hood.
Sharing of objects through shmem is simplest if the data is just POD (Plain Old Data). Nothing fancy.
Because you are in different processes that are not sharing the whole address space, you may not be guaranteed that things like virtual functions will appear at the same address in all processes using the shm shared memory segment. So probably best to avoid virtual functions. (If you try hard and/or know linkage, you may in some circumstances be able to share virtual functions. But that is one of the first things I would disable if I had to debug.)
You should only do this if you are aware of your implementation's object memory model, and if advanced (for C++) optimizations like splitting structs into discontiguous hot and cold parts are disabled. Since such optimizations are arguably not legal for C++, you are probably safe.
Obviously you are better off if you are casting to the same object type/class on all sides.
You can get away with non-virtual functions. However, note that it can be quite easy to have the same class, but different versions of the class - e.g. differing in size, e.g. adding a new field and changing the offsets of all of the other fields - so you need to be quite careful to ensure all sides are using the same definitions and declarations.

Do I need to make a type a POD to persist it with a memory-mapped file?

Pointers cannot be persisted directly to file, because they point to absolute addresses. To address this issue I wrote a relative_ptr template that holds an offset instead of an absolute address.
Based on the fact that only trivially copyable types can be safely copied bit-by-bit, I made the assumption that this type needed to be trivially copyable to be safely persisted in a memory-mapped file and retrieved later on.
This restriction turned out to be a bit problematic, because the compiler-generated copy constructor does not behave in a meaningful way. I found nothing that forbade me from defaulting the copy constructor and making it private, so I made it private to avoid accidental copies that would lead to undefined behaviour.
Later on, I found boost::interprocess::offset_ptr whose creation was driven by the same needs. However, it turns out that offset_ptr is not trivially copyable because it implements its own custom copy constructor.
Is my assumption that the smart pointer needs to be trivially copyable to be persisted safely wrong?
If there's no such restriction, I wonder if I can safely do the following as well. If not, exactly what are the requirements a type must fulfill to be usable in the scenario I described above?
struct base {
int x;
virtual void f() = 0;
virtual ~base() {} // virtual members!
};
struct derived : virtual base {
int x;
void f() { std::cout << x; }
};
using namespace boost::interprocess;
void persist() {
file_mapping file("blah");
mapped_region region(file, read_write, 128, sizeof(derived));
// create object on a memory-mapped file
derived* d = new (region.get_address()) derived();
d->x = 42;
d->f();
region.flush();
}
void retrieve() {
file_mapping file("blah");
mapped_region region(file, read_write, 128, sizeof(derived));
derived* d = static_cast<derived*>(region.get_address());
d->f();
}
int main() {
persist();
retrieve();
}
Thanks to all those that provided alternatives. It's unlikely that I will be using something else any time soon, because as I explained, I already have a working solution. And as you can see from the use of question marks above, I'm really interested in knowing why Boost can get away without a trivially copyable type, and how far can you go with it: it's quite obvious that classes with virtual members will not work, but where do you draw the line?
To avoid confusion let me restate the problem.
You want to create an object in mapped memory in such a way that after the application is closed and reopened the file can be mapped once again and object used without further deserialization.
POD is kind of a red herring for what you are trying to do. You don't need to be binary copyable (what POD means); you need to be address-independent.
Address-independence requires you to:
avoid all absolute pointers.
only use offset pointers to addresses within the mapped memory.
There are a few corollaries that follow from these rules.
You can't use virtual anything. C++ virtual functions are implemented with a hidden vtable pointer in the class instance. The vtable pointer is an absolute pointer over which you don't have any control.
You need to be very careful about the other C++ objects your address-independent objects use. Basically everything in the standard library may break if you use them. Even if they don't use new they may use virtual functions internally, or just store the address of a pointer.
You can't store references in the address-independent objects. Reference members are just syntactic sugar over absolute pointers.
Inheritance is still possible but of limited usefulness since virtual is outlawed.
Any and all constructors / destructors are fine as long as the above rules are followed.
Even Boost.Interprocess isn't a perfect fit for what you're trying to do. Boost.Interprocess also needs to manage shared access to the objects, whereas you can assume that you're the only one messing with the memory.
In the end it may be simpler / saner to just use Google Protobufs and conventional serialization.
Yes, but for reasons other than the ones that seem to concern you.
You've got virtual functions and a virtual base class. These lead to a host of pointers created behind your back by the compiler. You can't turn them into offsets or anything else.
If you want to do this style of persistence, you need to eschew 'virtual'. After that, it's all a matter of the semantics. Really, just pretend you were doing this in C.
Even POD has pitfalls if you are interested in interoperating across different systems or across time.
You might look at Google Protocol Buffers for a way to do this in a portable fashion.
Not as much an answer as a comment that grew too big:
I think it's going to depend on how much safety you're willing to trade for speed/ease of usage. In the case where you have a struct like this:
struct S { char c; double d; };
You have to consider padding and the fact that some architectures might not allow you to access a double unless it is aligned on a proper memory address. Adding accessor functions and fixing the padding tackles this and the structure is still memcpy-able, but now we're entering territory where we're not really gaining much of a benefit from using a memory mapped file.
Since it seems like you'll only be using this locally and in a fixed setup, relaxing the requirements a little seems OK, so we're back to using the above struct normally. Now does the function have to be trivially copyable? I don't necessarily think so, consider this (probably broken) class:
#include <cstddef>
#include <utility>

enum Endian { LittleEndian, BigEndian };

template<typename T, Endian e> struct PV {
    union {
        unsigned char b[sizeof(T)];
        T x;
    } val;

    template<Endian oe> PV& operator=(const PV<T, oe>& rhs) {
        val.x = rhs.val.x;
        if (e != oe) {
            for (std::size_t i = 0; i < sizeof(T) / 2; i++) {
                std::swap(val.b[sizeof(T) - 1 - i], val.b[i]);
            }
        }
        return *this;
    }
};
It's not trivially copyable and you can't just use memcpy to move it around in general, but I don't see anything immediately wrong with using a class like this in the context of a memory mapped file (especially not if the file matches the native byte order).
Update:
Where do you draw the line?
I think a decent rule of thumb is: if the equivalent C code is acceptable and C++ is just being used as a convenience, to enforce type-safety, or proper access it should be fine.
That would make boost::interprocess::offset_ptr OK since it's just a helpful wrapper around a ptrdiff_t with special semantic rules. In the same vein struct PV above would be OK as it's just meant to byte swap automatically, though like in C you have to be careful to keep track of the byte order and assume that the structure can be trivially copied. Virtual functions wouldn't be OK as the C equivalent, function pointers in the structure, wouldn't work. However something like the following (untested) code would again be OK:
struct Foo {
unsigned char obj_type;
void vfunc1(int arg0) { vtables[obj_type].vfunc1(this, arg0); }
};
That is not going to work. Your class derived is not a POD, so it depends on the compiler how it compiles your code. In other words: do not do it.
By the way, where are you releasing your objects? I see you are creating your objects in place, but you are not calling the destructor.
Absolutely not. Serialisation is a well established functionality that is used in numerous situations, and certainly does not require PODs. What it does require is that you specify a well defined serialisation binary interface (SBI).
Serialisation is needed anytime your objects leave the runtime environment, including shared memory, pipes, sockets, files, and many other persistence and communication mechanisms.
Where PODs help is where you know you are not leaving the processor architecture. If you will never be changing versions between writers of the object (serialisers) and readers (deserialisers) and you have no need for dynamically-sized data, then PODs allow easy memcpy based serialisers.
Commonly, though, you need to store things like strings. Then, you need a way to store and retrieve the dynamic information. Sometimes, 0 terminated strings are used, but that is pretty specific to strings, and doesn't work for vectors, maps, arrays, lists, etc. You will often see strings and other dynamic elements serialized as [size][element 1][element 2]… this is the Pascal array format. Additionally, when dealing with cross machine communications, the SBI must define integral formats to deal with potential endianness issues.
Now, pointers are usually implemented by IDs, not offsets. Each object that needs to be serialised can be given an incrementing number as an ID, and that can be the first field in the SBI. The reason you usually don't use offsets is because you may not be able to easily calculate future offsets without going through a sizing step or a second pass. IDs can be calculated inside the serialisation routine on the first pass.
Additional ways to serialize include text based serialisers using some syntax like XML or JSON. These are parsed using standard textual tools that are used to reconstruct the object. These keep the SBI simple at the cost of pessimising performance and bandwidth.
In the end, you typically build an architecture where you build serialisation streams that take your objects and translate them member by member to the format of your SBI. In the case of shared memory, it typically pushes the members directly on to the memory after acquiring the shared mutex.
This often looks like
void MyClass::Serialise(SerialisationStream & stream)
{
stream & member1;
stream & member2;
stream & member3;
// ...
}
where the & operator is overloaded for your different types. You may take a look at Boost.Serialization for more examples.