boost::interprocess memory allocator on anonymous segment - c++

I'm trying to use an mmap-like segment to allocate objects in STL containers; for that I'm using boost::interprocess, which provides memory mappings, allocators, and anonymous memory mapping support.
A bit like this
My problem is that the anonymous_shared_memory function here returns something that looks half like a mapped file and half like shared memory (which makes sense with mmap :) ), and although both styles work with interprocess allocators, this one seems to be missing a segment_manager, which does the actual chunk allocation.
It returns a high-level mapped_region that is already mapped into the process, but with no manager and no way that I can see to hook in a segment_manager.

A mapped_region is a low- to mid-level object, and literally represents just the memory. Managed shared memory, however, is an advanced class that combines a shared memory object and a mapped region that covers the whole shared memory object, so it is the managed memory that possesses the segment_manager.
Given that you want to use anonymous_shared_memory, first you would get the mapped_region as in the example, then you would use placement new to put a segment_manager at the beginning of it. Its constructor takes the size of the memory segment it is being constructed in; I do not know for certain whether that includes the size of the manager itself, but I suspect it does.
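A rough sketch of that approach (untested; it assumes the segment_manager instantiation that managed_shared_memory uses by default, and the segment size is arbitrary):

#include <boost/interprocess/anonymous_shared_memory.hpp>
#include <boost/interprocess/mapped_region.hpp>
#include <boost/interprocess/segment_manager.hpp>
#include <boost/interprocess/mem_algo/rbtree_best_fit.hpp>
#include <boost/interprocess/indexes/iset_index.hpp>
#include <boost/interprocess/sync/mutex_family.hpp>
#include <boost/interprocess/allocators/allocator.hpp>
#include <boost/interprocess/containers/vector.hpp>
#include <new>

namespace bip = boost::interprocess;

// The segment_manager instantiation that managed_shared_memory uses by default.
typedef bip::segment_manager<char,
                             bip::rbtree_best_fit<bip::mutex_family>,
                             bip::iset_index> segment_manager_t;

int main()
{
    // Anonymous mapping, shareable with child processes created after this point.
    bip::mapped_region region(bip::anonymous_shared_memory(65536));

    // Place a segment_manager at the start of the region; its constructor takes
    // the size of the segment it lives in (the manager itself included, I believe).
    segment_manager_t* mgr =
        new (region.get_address()) segment_manager_t(region.get_size());

    // An allocator bound to that manager can now feed Interprocess containers.
    typedef bip::allocator<int, segment_manager_t> int_alloc;
    int_alloc alloc(mgr);
    bip::vector<int, int_alloc> v(alloc);
    v.push_back(42);
}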

Related

Should classes manage dynamic memory on their own?

If a class needs to allocate memory dynamically (e.g. std::vector), is it acceptable for the class to simply allocate and deallocate the memory internally, using operator new or malloc?
The answer isn't entirely obvious to me. The lack of a system managing the memory allocation like in garbage collected languages is obviously empowering; but on the other hand, it is precisely this lack of coordination that ends up wasting memory. For instance, it would be quite trivial to make a 'fake' allocator that just passes stack memory to an object which would, under normal circumstances, require dynamic memory, but which the programmer can assert will never need more than X amount of bytes.
Perhaps you think that this issue is irrelevant in the days of large address spaces, but it feels a bit lame to fall back on the hardware; this is C++, after all.
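(For what it's worth, C++17's std::pmr machinery is essentially that 'fake' allocator: below is a minimal sketch in which a vector draws all of its storage from a fixed stack buffer and never touches the global heap; the buffer size is arbitrary.)

#include <array>
#include <cstddef>
#include <memory_resource>
#include <vector>

int main()
{
    // Fixed stack buffer; null_memory_resource makes running out an error
    // instead of silently falling back to the heap.
    std::array<std::byte, 1024> buffer;
    std::pmr::monotonic_buffer_resource pool(
        buffer.data(), buffer.size(), std::pmr::null_memory_resource());

    std::pmr::vector<int> v(&pool);
    v.reserve(100);               // one allocation, taken from the stack buffer
    for (int i = 0; i < 100; ++i)
        v.push_back(i);
}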
EDIT
I realize now how cryptic I was with the question... Let me explain it a bit better.
When I say 'wasting memory', I specifically mean the kind of memory-wasting that happens with heap fragmentation. Reducing heap fragmentation is the most compelling point of making a memory managing system in C++, since (as many comments have pointed out) destructors already handle the resource management side of things. When your allocations are essentially random (you don't know where your new memory is in relation to other allocated memory) and every class could potentially allocate, you run into the sort of problem that data oriented design tries to fix: poor data locality.
So the question is: would it make sense for there to be a class that does the memory management, object management, heap compaction, and maybe statistics tracking (for debugging purposes) to make the most efficient use of memory and data locality?
[In this view, every class or function that allocates memory dynamically has to get a reference to that class, somehow.]
Or is it better to let every class be able to allocate without necessarily making it part of the interface of that class?
If a class needs to allocate memory dynamically (e.g. std::vector), is it acceptable for the class to simply allocate and deallocate the memory internally, using operator new or malloc?
Usually, we have two kinds of classes:
managers of resources (including dynamic memory);
"business logic" classes.
Most of the time we shouldn't mix the layers of resource management and domain logic.
So, if your class is a manager of a raw resource, it allocates/deallocates, initializes/deinitializes its only resource and does nothing else. In this case, new is OK and even necessary (e.g. you can't instead use std::vector when writing your own dynamic array, otherwise you don't need to write it at all). See RAII.
If your class contains some app logic, it should not explicitly allocate dynamic memory, open sockets, etc., but should use other RAII classes for that. At this higher level, C++ gives you something that GC languages don't: RAII owners can manage files, sockets, and so on - any kind of resource, not just raw bytes of heap memory - so you don't need manual Java/C#-style try-with-resources everywhere you create an object that manages something other than raw memory; the compiler does it for you as soon as you have a RAII class for that resource.
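A tiny sketch of that layering (the class names are made up; illustrative only):

#include <cstddef>
#include <utility>

// Resource-management layer: the only place that touches new/delete.
class Buffer {
public:
    explicit Buffer(std::size_t n) : data_(new char[n]), size_(n) {}
    ~Buffer() { delete[] data_; }
    Buffer(const Buffer&) = delete;
    Buffer& operator=(const Buffer&) = delete;
    Buffer(Buffer&& o) noexcept
        : data_(std::exchange(o.data_, nullptr)), size_(std::exchange(o.size_, 0)) {}
    char*       data()       { return data_; }
    std::size_t size() const { return size_; }
private:
    char*       data_;
    std::size_t size_;
};

// Business-logic layer: no explicit allocation anywhere; it just owns a
// Buffer, and the compiler-generated destructor releases it automatically.
class Message {
public:
    explicit Message(std::size_t payload_bytes) : payload_(payload_bytes) {}
    std::size_t payload_size() const { return payload_.size(); }
private:
    Buffer payload_;
};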

Boost.Interprocess memory location

In the Boost.Interprocess documentation ("Where is this being allocated?") it is stated that Boost.Interprocess containers are placed in shared memory using two mechanisms at the same time:
Boost.Interprocess construct<>, find_or_construct<>... functions. These functions place a C++ object in the shared memory. But this places only the object, not the memory that this object may allocate dynamically.
Shared memory allocators. These allow allocating shared memory portions so that containers can dynamically allocate fragments of memory to store newly inserted elements.
What is the use case for having a boost.vector whose internal object lives in the current process, but which uses a shared memory allocator so that its elements are placed in shared memory?
If I want to share this structure with another process:
struct Shared
{
vector<string> m_names;
vector<char> m_data;
};
I guess I want the vectors to be accessible to the other process so that it can iterate over them, right?
find_or_construct and friends are for your own direct allocations.
The allocators are to be passed to library types to do their internal allocations in similar fashion. Otherwise, only the "control structure" (e.g. 16 bytes for a typical std::string) would be in the shared memory, instead of all the related data allocated by the standard library container internally.
Well, you cannot access the vector itself from the other process, but you can access the elements (in your example, the strings), e.g. via a pointer.
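To make that concrete, here is a rough sketch (untested; the segment name, object name, and sizes are made up) of how the two mechanisms combine so that both the vector object and the strings it holds live in shared memory:

#include <boost/interprocess/managed_shared_memory.hpp>
#include <boost/interprocess/containers/vector.hpp>
#include <boost/interprocess/containers/string.hpp>
#include <boost/interprocess/allocators/allocator.hpp>
#include <string>

namespace bip = boost::interprocess;

typedef bip::managed_shared_memory::segment_manager                 segment_manager_t;
typedef bip::allocator<char, segment_manager_t>                     char_alloc;
typedef bip::basic_string<char, std::char_traits<char>, char_alloc> shm_string;
typedef bip::allocator<shm_string, segment_manager_t>               string_alloc;
typedef bip::vector<shm_string, string_alloc>                       shm_string_vector;

int main()
{
    bip::managed_shared_memory segment(bip::open_or_create, "MySharedMemory", 65536);
    segment_manager_t* mgr = segment.get_segment_manager();

    // find_or_construct places the vector object itself in shared memory;
    // its allocator makes the element storage live there too.
    shm_string_vector* names =
        segment.find_or_construct<shm_string_vector>("m_names")(string_alloc(mgr));

    // Each string also needs a shared-memory allocator, otherwise its character
    // buffer would end up in this process's private heap.
    names->push_back(shm_string("hello", char_alloc(mgr)));

    // Another process would open the segment with bip::open_only and retrieve
    // the vector via segment.find<shm_string_vector>("m_names").first.
}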

C++ STL map in shared memory

I need to place an STL map in shared memory and have multiple processes access that map. Any pointers to how this is done?
I have checked this link (Map in Shared memory), but I need a simpler method.
For this to work you need to use a custom allocator that will allocate from the shared memory region, so that the map nodes are all in shared memory, and so that the pointer type of the allocator is not just a raw pointer but can refer to the shared memory region when it is mapped to different addresses in different processes.
You also need your std::map implementation to correctly use the allocator's pointer type everywhere it needs to use a pointer, and this isn't guaranteed by the standard.
The simplest way to do this currently is to use Boost.Interprocess which provides a nice API for shared memory and also provides allocators and containers that work correctly with it.
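For reference, a minimal sketch of that Boost.Interprocess route (untested; the segment name, object name, and sizes are made up):

#include <boost/interprocess/managed_shared_memory.hpp>
#include <boost/interprocess/containers/map.hpp>
#include <boost/interprocess/allocators/allocator.hpp>
#include <functional>
#include <utility>

namespace bip = boost::interprocess;

typedef std::pair<const int, float>                                          pair_t;
typedef bip::allocator<pair_t, bip::managed_shared_memory::segment_manager>  pair_alloc;
typedef bip::map<int, float, std::less<int>, pair_alloc>                     shm_map;

int main()
{
    // Writer side: create (or open) the segment and construct the map inside it.
    bip::managed_shared_memory segment(bip::open_or_create, "MapSharedMemory", 65536);
    shm_map* m = segment.find_or_construct<shm_map>("MyMap")(
        std::less<int>(), pair_alloc(segment.get_segment_manager()));
    m->insert(pair_t(1, 3.14f));

    // A reader process would do:
    //   bip::managed_shared_memory seg(bip::open_only, "MapSharedMemory");
    //   shm_map* m = seg.find<shm_map>("MyMap").first;
}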

Using std::set or std::map with shared memory

I am working on a project which has two different processes.
The first process is a cache based on a std::map or std::set which allocates all of its data in a shared memory region.
The second process is a producer/consumer which will have access to the shared memory, so whenever it needs some data it will ask the cache process, through a Unix pipe, for the starting address of the shared memory that contains the requested data.
So far I have come up with two approaches: the first is changing the allocation function for std::set to always allocate in shared memory, or, perhaps more simply, storing as the value of the map a pointer to that shared region:
map<key, pointer to shared region>
Any idea? :D
Thanks!!
In theory, you can use a custom allocator for std::set or std::map to do this. Of course, you'll have to ensure that any contents that might dynamically allocate also use the same custom allocator.
The real problem is that the mapped addresses of the shared memory might not be the same. It's often possible to work around this by using mmap and specifying the address, but the address range must be free in both processes. I've done this under Solaris, which always allocates (or allocated) static and heap at the bottom of the address space, and stack at the top, leaving a big hole in the middle, but even there, I don't think there was any guarantee, and other systems have different policies. Still, if the processes aren't too big otherwise, you may be able to find a solution empirically. (I'd recommend making the address and the size a configuration parameter.)
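A bare-bones POSIX sketch of that "same address in every process" idea (the segment name, size, and address below are hypothetical and, as suggested above, would come from configuration; the range must be free in every participating process):

#include <sys/mman.h>
#include <fcntl.h>
#include <unistd.h>
#include <cstddef>
#include <cstdio>

int main()
{
    void* const       desired = reinterpret_cast<void*>(0x600000000000);
    const std::size_t size    = 1 << 20;

    int fd = shm_open("/my_cache_region", O_CREAT | O_RDWR, 0600);
    if (fd < 0 || ftruncate(fd, size) != 0) { std::perror("shm_open/ftruncate"); return 1; }

    // Pass the desired address as a hint (no MAP_FIXED, so existing mappings are
    // never clobbered) and verify that the kernel actually honoured it.
    void* p = mmap(desired, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (p != desired) { std::fprintf(stderr, "did not get the requested address\n"); return 1; }

    // Raw pointers stored inside [p, p + size) are now valid in any process
    // that maps the segment at this same address.
    return 0;
}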
Alternatively, in theory, the allocator defines a pointer type which the container should use; you should be able to define a pointer type that works with just an offset into the shared memory. I've no experience with this, however, and I fear that it could be very tricky, since the reference type will still be a true reference (and thus a raw pointer under the hood), and you cannot change that.
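Such a pointer type already exists in Boost.Interprocess as offset_ptr, which stores the distance to the pointee rather than an absolute address; a minimal illustration outside of any allocator:

#include <boost/interprocess/offset_ptr.hpp>
#include <cassert>

// offset_ptr stores the offset from its own address to the pointee, so the
// stored value stays meaningful even if the containing block is mapped at a
// different base address in another process.
struct Node {
    int value;
    boost::interprocess::offset_ptr<Node> next;
};

int main()
{
    Node a{1, {}};
    Node b{2, {}};
    a.next = &b;                  // stores (address of b) - (address of a.next)
    assert(a.next->value == 2);   // dereferencing adds the offset back on
}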

dynamic structures in static memory?

GIVEN that you have a fixed area of memory already allocated that you would like to use, what C or C++ libraries will allow you to store a dynamic structure (e.g. a hash) in that memory?
i.e. the hash library must not contain any calls to malloc or new, but must take a parameter that tells it the location and size of the memory it is permitted to use.
(bonus if the library uses offsets rather than pointers internally, in case the shared memory is mapped to different address spaces in each process that uses it)
You can write your own custom allocators for STL containers.
Dr.Dobb's: What Are Allocators Good For?
SO: Compelling examples of custom C++ STL allocators?
It's trivial to adapt a simple linear probing hash table to use a block of memory - just set its table(s) to point at the allocated memory when you create it, and don't implement anything to allocate more memory to let the table grow.
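As a concrete illustration of that last point, here is a toy sketch (the names and sizes are made up, and it is deliberately minimal rather than production-ready):

#include <cassert>
#include <cstddef>
#include <new>

// A fixed-capacity, linear-probing hash table for int -> int. All of its state
// lives in the caller-supplied block; it never calls malloc or new[], and it
// reports failure instead of growing.
class FixedHash {
public:
    struct Slot { int key; int value; bool used; };

    FixedHash(void* mem, std::size_t bytes)
        : slots_(static_cast<Slot*>(mem)), capacity_(bytes / sizeof(Slot))
    {
        for (std::size_t i = 0; i < capacity_; ++i)
            new (&slots_[i]) Slot{0, 0, false};   // mark every slot empty
    }

    bool insert(int key, int value)
    {
        for (std::size_t i = 0; i < capacity_; ++i) {
            Slot& s = slots_[probe(key, i)];
            if (!s.used || s.key == key) {        // empty slot, or same key: overwrite
                s.key = key; s.value = value; s.used = true;
                return true;
            }
        }
        return false;                             // table full: no growth, by design
    }

    int* find(int key)
    {
        for (std::size_t i = 0; i < capacity_; ++i) {
            Slot& s = slots_[probe(key, i)];
            if (!s.used)      return nullptr;     // hit an empty slot: not present
            if (s.key == key) return &s.value;
        }
        return nullptr;
    }

private:
    std::size_t probe(int key, std::size_t i) const
    { return (static_cast<std::size_t>(key) + i) % capacity_; }

    Slot*       slots_;
    std::size_t capacity_;
};

int main()
{
    alignas(std::max_align_t) static unsigned char arena[4096];  // the pre-allocated area
    FixedHash h(arena, sizeof(arena));
    h.insert(42, 7);
    assert(*h.find(42) == 7);
    assert(h.find(99) == nullptr);
}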