How to wipe some contents of boost managed_shared_memory? - c++

The boost::interprocess::managed_shared_memory manual and most other resources I checked always shows examples where there is a parent process and a bunch of children spawned by it.
In my case, I have several processes spawned by a third-party application, and I can only control the "children". That means I cannot have a central brain to allocate and deallocate the shared memory segment; all my processes must be able to do so (therefore, I can't simply erase the data on exit).
My idea was to open_or_create a segment and, using a lock stored (find_or_construct'ed) in this area, I check a certain hash to see if the memory area was created by this same software version.
If this is not true, the memory segment must be wiped to avoid breaking code.
Ideally, I would want to keep the lock object because there could be other processes already waiting on it.
Things I thought of:
List all object names and delete everything but the lock.
This cannot be done, because the objects might be using different implementations.
Also, I couldn't find where to list the names.
Use shared_memory_object::truncate.
I could not find much about it.
Since I'm using a managed_shared_memory, I don't know how reliable it would be, because I'm not sure the lock was the first data allocated.
Refcount the processes and wipe the data when the last one exits.
Prone to problems with fatally terminated processes.
Use a separate shared memory area just for this bookkeeping.
Sounds reasonable, but overkill?
Any suggestions or insights?

This sounds like a "shared ownership" scenario.
What you'd usually think of in such a scenario would be shared pointers:
http://www.boost.org/doc/libs/1_58_0/doc/html/interprocess/interprocess_smart_ptr.html#interprocess.interprocess_smart_ptr.shared_ptr
Interprocess has specialized shared pointers (and ditto make_shared) for exactly this purpose.
Creating the shared memory realm can be done "optimistically" from each participating process (open_or_create). Note that creation needs to be synchronized. Further segment manager operations are usually already implicitly synchronized:
Whenever the same managed shared memory is accessed from different processes, operations such as creating, finding, and destroying objects are automatically synchronized. If two programs try to create objects with different names in the managed shared memory, the access is serialized accordingly. To execute multiple operations at one time without being interrupted by operations from a different process, use the member function atomic_func() (see Example 33.11).
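For illustration, here is a minimal sketch of the optimistic open_or_create approach combined with a version-hash check, roughly in the spirit of the question. The segment and mutex names, the segment size, and the kVersionHash constant are illustrative assumptions; unlike the asker's plan, the guarding lock here is a named_mutex that lives outside the segment, so it survives a wipe:

#include <boost/interprocess/managed_shared_memory.hpp>
#include <boost/interprocess/shared_memory_object.hpp>
#include <boost/interprocess/sync/named_mutex.hpp>
#include <boost/interprocess/sync/scoped_lock.hpp>

namespace bip = boost::interprocess;

static const unsigned kVersionHash = 0xA1B2C3D4; // e.g. a hash of this build's data layout

void open_or_wipe_segment()
{
    // Creation and wiping are guarded by a named mutex that lives outside
    // the segment, so it is not lost when the segment is removed.
    bip::named_mutex guard(bip::open_or_create, "MySegmentGuard");
    bip::scoped_lock<bip::named_mutex> lock(guard);

    bool stale = false;
    {
        bip::managed_shared_memory seg(bip::open_or_create, "MySegment", 65536);
        unsigned *hash = seg.find_or_construct<unsigned>("VersionHash")(kVersionHash);
        stale = (*hash != kVersionHash);
    } // unmap before removing

    if (stale) {
        // Layout written by another software version: wipe and start over.
        bip::shared_memory_object::remove("MySegment");
        bip::managed_shared_memory fresh(bip::create_only, "MySegment", 65536);
        fresh.construct<unsigned>("VersionHash")(kVersionHash);
    }
}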

Related

Boost Shared Memory validity

I'm using Boost Shared Memory to share a vector across processes.
In the client, once I open the shared memory and read a vector off it, how can I tell that the memory is not valid, or is not what I'm looking for?
Will open_only fail if the memory segment does not exist, and if so, how do I catch this failure?
Also, the shared memory segment is supposed to be removed if there are no references to it. However, in my case, even when both the client and server are shut down and nothing else is accessing the shared memory, the segment remains in the Boost Interprocess folder under ProgramData, with some data. So the next time the client starts up, it has no problem opening the segment, and thinks it is accessing correct data when in fact there is no data to be shared.
Kindly advise. Thank you.
Speaking from experience with the underlying shm API--and not as a Boost expert...
To determine validity, one technique is to figure out whether the current process is the one that is creating the shared memory (the first time). You can do this by getting the size right after creation (with fstat) and checking whether it is zero. If it is zero, this process is the creator and can initialize the contents. Also, when you call ftruncate() to set the size here, that size becomes visible to all other processes.
To ensure removal, you can call shm_unlink() to remove the shared memory file from the system. I believe Boost has a remove() API that will do that.
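A rough sketch of that creator-detection technique with the raw POSIX shm calls (not Boost) might look like this; the name "/my_shm" and the permissions are illustrative, and there is still a small race window between fstat and ftruncate that a real program would have to close (e.g. with a separate lock):

#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>
#include <cstddef>

int open_or_init_shm(std::size_t size)
{
    int fd = shm_open("/my_shm", O_RDWR | O_CREAT, 0644);
    if (fd < 0)
        return -1;

    struct stat st;
    if (fstat(fd, &st) < 0) {
        close(fd);
        return -1;
    }

    if (st.st_size == 0) {
        // Size is still zero, so this process is the creator: set the size
        // (visible to every other process) and initialize the contents.
        if (ftruncate(fd, static_cast<off_t>(size)) < 0) {
            close(fd);
            return -1;
        }
        // ... mmap and write the initial data here ...
    }
    return fd;
}

// When the segment should disappear from the system entirely:
//   shm_unlink("/my_shm");   // Boost's remove() boils down to this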

Sharing heap memory with fork()

I am working on implementing a database server in C that will handle requests from multiple clients. I am using fork() to handle connections for individual clients.
The server stores data in the heap, which consists of a root pointer to hash tables of dynamically allocated records. The records are structs that have pointers to various data-types. I would like for the processes to be able to share this data so that, when a client makes a change to the heap, the changes will be visible for the other clients.
I have learned that fork() uses COW (Copy On Write), and my understanding is that it copies the heap (and stack) memory of the parent process when the child tries to modify the data in memory.
I have found out that I can use the shm library to share memory.
Would the code below be a valid way to share heap memory (in shared_string)? If a child were to use similar code (i.e. starting from //start), would other children be able to read/write to it while the child is running and after it's dead?
key_t key;
int shmid;
key = ftok("/tmp",'R');
shmid = shmget(key, 1024, 0644 | IPC_CREAT);
//start
char * string;
string = malloc(sizeof(char) * 10);
strcpy(string, "a string");
char * shared_string;
shared_string = shmat(shmid, string, 0);
strcpy(shared_string, string);
Here are some of my thoughts/concerns regarding this:
I'm thinking about sharing the root pointer of the database. I'm not sure if that would work or if I have to mark all allocated memory as shared.
I'm not sure if the parent / other children are able to access memory allocated by a child.
I'm not sure if a child's allocated memory stays on the heap after it is killed, or if that memory is released.
First of all, fork is completely inappropriate for what you're trying to achieve. Even if you can make it work, it's a horrible hack. In general, fork only works for very simplistic programs anyway, and I would go so far as to say that fork should never be used except followed quickly by exec, but that's aside from the point here. You really should be using threads.
With that said, the only way to have memory that's shared between the parent and child after fork, and where the same pointers are valid in both, is to mmap (or shmat, but that's a lot fuglier) a file or anonymous map with MAP_SHARED prior to the fork. You cannot create new shared memory like this after fork because there's no guarantee that it will get mapped at the same address range in both.
Just don't use fork. It's not the right tool for the job.
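For completeness, a minimal sketch of the MAP_SHARED-before-fork approach described above (the size and the message text are just illustrative):

#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>
#include <cstddef>
#include <cstdio>
#include <cstring>

int main()
{
    const std::size_t len = 4096;
    void *mem = mmap(nullptr, len, PROT_READ | PROT_WRITE,
                     MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    if (mem == MAP_FAILED)
        return 1;
    char *shared = static_cast<char *>(mem);

    if (fork() == 0) {                       // child writes into the shared page...
        std::strcpy(shared, "hello from the child");
        _exit(0);
    }
    wait(nullptr);                           // ...and the parent sees the write
    std::printf("%s\n", shared);
    munmap(mem, len);
    return 0;
}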
I think you are basically looking to do what is done by Redis (and probably others).
They describe it in http://redis.io/topics/persistence (search for "copy-on-write").
Note that threads defeat the purpose, and classic shared memory (shm, mapped memory) also defeats the purpose.
The primary benefit of this method is the avoidance of locking, which can be a pain to get right.
As far as I understand it, the idea of using COW is to:
fork when you want to write, not in advance;
have the child (re)write the data to disk, then immediately exit;
have the parent keep doing its work, and detect (via SIGCHLD) when the child has exited.
If, while doing its work, the parent ends up making changes to the hash, the kernel will copy the affected memory pages for it (copy-on-write works at page granularity).
A "dirty flag" is used to track whether a new fork is needed to execute a new write.
Things to watch out for:
Make sure there is only one outstanding child.
Transactional safety: write to a temp file first, then move it over, so that you always have a complete copy; maybe keep the previous one around if the move is not atomic.
Test whether you will have issues with other resources that get duplicated (file descriptors, global destructors in C++).
You may want to take a gander at the Redis code as well.
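A stripped-down sketch of that fork-for-snapshot loop might look like the following; it is not Redis code, and the file names and the save_to() serializer are placeholders:

#include <sys/wait.h>
#include <unistd.h>
#include <csignal>
#include <cstdio>

static volatile std::sig_atomic_t child_running = 0;

static void on_child_exit(int)
{
    while (waitpid(-1, nullptr, WNOHANG) > 0) {}   // reap the snapshot child
    child_running = 0;                             // allow the next snapshot
}

static void save_to(std::FILE *f)                  // placeholder serializer
{
    std::fputs("...serialized hash tables...\n", f);
}

void maybe_snapshot(bool dirty)
{
    if (!dirty || child_running)                   // at most one outstanding child
        return;
    child_running = 1;
    if (fork() == 0) {
        // The child sees a copy-on-write snapshot of the parent's heap.
        std::FILE *f = std::fopen("db.tmp", "w");
        if (!f)
            _exit(1);
        save_to(f);
        std::fclose(f);
        std::rename("db.tmp", "db.dat");           // atomic replace: always a complete copy
        _exit(0);
    }
    // The parent returns immediately and keeps serving requests.
}

int main()
{
    std::signal(SIGCHLD, on_child_exit);
    maybe_snapshot(true);                          // pretend the data is dirty
    pause();                                       // toy example: wait for the child
    return 0;
}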
I'm thinking about sharing the root pointer of the database. I'm not sure if that would work or if I have to mark all allocated memory as shared.
Each process will have its own private memory range. Copy-on-write is a kernel-space optimization that is transparent to user space.
As others have said, SHM or mmap'd files are the only way to share memory between separate processes.
If you must use fork, shared memory seems to be the 'only' choice.
Actually, I think that in your scenario threads are more suitable.
If you don't want to be multi-threaded, here is another choice: use a single-process, single-thread model, like Redis.
With this model you don't need to worry about things like locks, and if you want to scale you just design a routing policy, e.g. routing by the hash value of the key.
As you have discovered, if you want to share memory between separate processes (from fork or otherwise), you need to use shared memory, either the SYSV shm library or mmap with MAP_SHARED. Unfortunately, these are coarse-grained tools, suitable only for dealing with a small number of large blocks, and not suitable for fine-grained memory management as you would do with malloc/free.
In order to have useful shared memory between processes, you need to build a heap on top of shm or mmap. You can do that with my small shm_malloc library, which allows you to use calls to shm_malloc and shm_free exactly as you would use malloc/free.

C++ Thread safe vector.erase

I wrote a threaded Renderer for SFML which takes pointers to drawable objects and stores them in a vector to be drawn each frame. Starting out, adding objects to the vector and removing objects from the vector would frequently cause segmentation faults (SIGSEGV). To try to combat this, I started putting objects that needed to be added or removed into a queue to be processed later (before drawing the frame). This seemed to fix it, but lately I have noticed that if I add many objects at one time (or add/remove them fast enough) I will get the same SIGSEGV.
Should I be using locks when I add/remove from the vector?
You need to understand the thread-safety guarantees the C++ standard (and implementations of C++2003 for possibly concurrent systems) gives. The standard containers are thread-safe in the following sense:
It is OK to have multiple concurrent threads reading the same container.
If there is one thread modifying a container there shall be no concurrent threads reading or writing the same container.
Different containers are independent of each other.
Many people misunderstand the thread-safety of containers to mean that these rules are imposed by the container implementation: they are not! It is your responsibility to obey these rules.
The reason these aren't, and actually can't, be imposed by the containers is that they don't have an interface suitable for this. Consider for example the following trivial piece of code:
if (!c.empty()) {
    auto value = c.back();
    // do something with the read value
}
The container can control access during the calls to empty() and back(). However, between these calls it necessarily has to release any sort of synchronization facility, i.e. by the time the thread tries to read c.back() the container may be empty again! There are essentially two ways to deal with this problem:
You need to use external locking if there is possibility that a concurrent thread may be changing the container to span the entire range of accesses which are interdependent in some form.
You change the interface of the containers to become monitors. However, the container interface isn't at all suitable to be changed in this direction because monitors essentially only support "fire and forget" style of interfaces.
Both strategies have their advantages, and the standard library containers clearly support the first style, i.e. they require external locking when used concurrently with the potential of at least one thread modifying the container. They don't require any kind of locking (neither internal nor external) if there is only ever one thread using them in the first place. This is actually the scenario they were designed for. The thread-safety guarantees given for them are in place to guarantee that there are no internal facilities used which are not thread-safe, say a per-object iterator object or a memory allocation facility shared by multiple threads without being thread-safe, etc.
To answer the original question: yes, you need to use external synchronization, e.g. in the form of mutex locks, if you modify the container in one thread and read it in another thread.
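As a concrete illustration of that external locking, here is a minimal sketch (the vector of ints and the function names are just placeholders) where one mutex guards every access and the empty()/back() pair is covered by a single lock so the check cannot go stale:

#include <mutex>
#include <vector>

std::vector<int> values;        // shared between, say, the render and game threads
std::mutex values_mutex;        // guards every access to 'values'

void add_value(int v)
{
    std::lock_guard<std::mutex> lock(values_mutex);
    values.push_back(v);
}

bool try_take_back(int &out)
{
    std::lock_guard<std::mutex> lock(values_mutex);  // one lock spans empty() and back()
    if (values.empty())
        return false;
    out = values.back();
    values.pop_back();
    return true;
}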
Should I be using locks when I add/remove from the vector?
Yes. If you're using the vector from two threads at the same time and one of them causes a reallocation, then the backing allocation may be freed and replaced behind the other thread's back. The other thread would then be reading from or writing to freed memory, or memory in use for another, unrelated allocation.

Shared memory API, where a process can attach shared memory to other process

Can anyone look into this and suggest an API?
We have APIs with which a process can create and/or attach a shared memory segment to its own address space. But I can't find an API to attach a shared memory segment to one process from another process (e.g., process A calling some API, like shmat(), to attach the shared memory to process B).
Shared memory doesn't belong to any particular process (unless you create it with a private IPC_PRIVATE key). It belongs to the system.
So, when you use shmget with a non-private key (and the IPC_CREAT flag), you will either create a shared memory block or attach to an existing one.
You need a way for both processes to use the same IPC key and this is often done by using ftok which uses a file specification and an identifier to give you an IPC key for use in the shmget call (and other IPC type calls, such as msgget or semget).
For example, in the programs pax1 and pax2, you may have a code segment like:
#include <sys/ipc.h>
#include <sys/shm.h>

int getMyShMem (void) {
    key_t mykey = ftok ("/var/pax.cfg", 1);           // only one shm block, so a single (nonzero) id
    if (mykey == (key_t)-1)                           // no go.
        return -1;
    return shmget (mykey, 1024, IPC_CREAT | 0644);    // get (or make) a 1K block with sane permissions.
}
By having both processes use the same file specification and ID, they'll get the same shared memory block.
You can use different IDs to give you distinct shared memory blocks all based on the same file (you may, for example, want one for a configuration shared memory block and another for storing shared state).
And, given that it's your configuration file the IPC key is based on, the chances of other programs using it are minuscule (I think they may be zero, but I'm not 100% sure).
You can't forcefully inject shared memory into a process from outside that process (well, you may be able to, but it would be both dangerous and require all sorts of root-level permissions). That would break the protected-process model and turn your system into something about as secure as MS-DOS :-)
Let's see: allow one process to force a shared memory segment onto another? What is the receiver going to do with it? How will it know that it has now mapped this block in, and what is expected of it?
You're thinking about the problem the wrong way: simply foisting a block of memory onto a second process is not going to allow you to do what you want. You need to notify the second process that it has now mapped this block, so it can start doing stuff with it. I suggest you take a step back and really look at your design and what you are doing. My recommended approach would be:
A connects to B via some other IPC (say socket)
A informs B that it should attach with the details (name etc.)
B then attaches - and now B is aware of it and can start doing stuff with it. (say for example once the attach completes, B confirms to A, and then they can start talking over the shared memory block).
As for wrapping shared memory in a nice library - consider boost::interprocess.
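A rough sketch of that handshake with boost::interprocess might look like this; the segment name, object name, and the out-of-band channel are all illustrative assumptions:

#include <boost/interprocess/managed_shared_memory.hpp>
#include <cstring>

namespace bip = boost::interprocess;

// Process A: create the block and then tell B about it over some other channel.
void create_and_announce()
{
    bip::managed_shared_memory seg(bip::create_only, "SharedBlock", 4096);
    char *msg = seg.construct<char>("Message")[32]();
    std::strcpy(msg, "hello B");
    // ... send the string "SharedBlock" to B over a socket, pipe, etc. ...
}

// Process B: attach only after A has told it the segment name.
void attach_and_use(const char *segment_name)
{
    bip::managed_shared_memory seg(bip::open_only, segment_name);
    auto found = seg.find<char>("Message");
    // found.first now points at A's message; B knows the block is mapped and why.
}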
You are asking to attach to the process memory of another process, right?
Just open(2) the file /proc/<pid>/mem and use it. Check /proc/<pid>/maps for the list of usable addresses in that file.

What is the most efficient implementation of a java like object monitor in C++?

In Java each object has a synchronisation monitor, so I guess the implementation is pretty condensed in terms of memory usage and hopefully fast as well.
When porting this to C++, what would be the best implementation for it? I think there must be something better than pthread_mutex_init, or is the object overhead in Java really so high?
Edit: I just checked that pthread_mutex_t on Linux i386 is 24 bytes large. That's huge if I have to reserve this space for each object.
In a sense it's worse than pthread_mutex_init, actually. Because of Java's wait/notify you kind of need a paired mutex and condition variable to implement a monitor.
In practice, when implementing a JVM you hunt down and apply every single platform-specific optimisation in the book, and then invent some new ones, to make monitors as fast as possible. If you can't do a really fiendish job of that, you definitely aren't up to optimising garbage collection ;-)
One observation is that not every object needs to have its own monitor. An object which isn't currently synchronised doesn't need one. So the JVM can create a pool of monitors, and each object could just have a pointer field, which is filled in when a thread actually wants to synchronise on the object (with a platform-specific atomic compare and swap operation, for instance). So the cost of monitor initialisation doesn't have to add to the cost of object creation. Assuming the memory is pre-cleared, object creation can be: decrement a pointer (plus some kind of bounds check, with a predicted-false branch to the code that runs gc and so on); fill in the type; call the most derived constructor. I think you can arrange for the constructor of Object to do nothing, but obviously a lot depends on the implementation.
In practice, the average Java application isn't synchronising on very many objects at any one time, so monitor pools are potentially a huge optimisation in time and memory.
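For reference, the "paired mutex and condition variable" the answer mentions looks roughly like this in portable C++11 (a plain sketch, nothing like the JVM's optimized thin locks):

#include <condition_variable>
#include <mutex>

struct Monitor {
    std::mutex              mutex;   // plays the role of the per-object lock
    std::condition_variable cond;    // backs Object.wait()/notify()
};

// Java:  synchronized (obj) { while (!ready) obj.wait(); }
void wait_until_ready(Monitor &m, bool &ready)
{
    std::unique_lock<std::mutex> lock(m.mutex);
    while (!ready)
        m.cond.wait(lock);
}

// Java:  synchronized (obj) { ready = true; obj.notifyAll(); }
void signal_ready(Monitor &m, bool &ready)
{
    std::lock_guard<std::mutex> lock(m.mutex);
    ready = true;
    m.cond.notify_all();
}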
The Sun HotSpot JVM implements thin locks using compare-and-swap. If an object is locked, then the waiting thread waits on the monitor of the thread which locked the object. This means you only need one heavy lock per thread.
I'm not sure how Java does it, but .NET doesn't keep the mutex (or analog - the structure that holds it is called "syncblk" there) directly in the object. Rather, it has a global table of syncblks, and object references its syncblk by index in that table. Furthermore, objects don't get a syncblk as soon as they're created - instead, it's created on demand on the first lock.
I assume (note, I do not know how it actually does that!) that it uses atomic compare-and-exchange to associate the object and its syncblk in a thread-safe way:
Check the hidden syncblk_index field of our object for 0. If it's not 0, lock it and proceed, otherwise...
Create a new syncblk in global table, get the index for it (global locks are acquired/released here as needed).
Compare-and-exchange to write it into object itself.
If previous value was 0 (assume that 0 is not a valid index, and is the initial value for the hidden syncblk_index field of our objects), our syncblk creation was not contested. Lock on it and proceed.
If previous value was not 0, then someone else had already created a syncblk and associated it with the object while we were creating ours, and we have the index of that syncblk now. Dispose the one we've just created, and lock on the one that we've obtained.
Thus the overhead per-object is 4 bytes (assuming 32-bit indices into syncblk table) in best case, but larger for objects which actually have been locked. If you only rarely lock on your objects, then this scheme looks like a good way to cut down on resource usage. But if you need to lock on most or all your objects eventually, storing a mutex directly within the object might be faster.
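A very rough sketch of that lazy, CAS-based association might look like the following; the fixed-size table, the index width, and the omission of table growth and slot reclamation are all simplifying assumptions, and this is not the actual .NET implementation:

#include <array>
#include <atomic>
#include <cstdint>
#include <mutex>

constexpr std::size_t kMaxSyncblks = 1024;            // fixed table size, for simplicity
std::array<std::mutex, kMaxSyncblks> g_syncblk_table; // slot 0 is never handed out
std::atomic<std::uint32_t> g_next_syncblk{1};

struct Object {
    std::atomic<std::uint32_t> syncblk_index{0};      // hidden field; 0 = "no syncblk yet"
    // ... the object's real payload ...
};

std::mutex &monitor_of(Object &obj)
{
    std::uint32_t idx = obj.syncblk_index.load(std::memory_order_acquire);
    if (idx == 0) {
        // Grab a fresh slot and try to install its index into the object.
        // (A real implementation would handle table exhaustion and reclaim
        //  the slot when losing the race; both are omitted here.)
        std::uint32_t candidate = g_next_syncblk.fetch_add(1);
        std::uint32_t expected = 0;
        if (obj.syncblk_index.compare_exchange_strong(expected, candidate))
            idx = candidate;   // we won: the object now points at our slot
        else
            idx = expected;    // someone else won while we were allocating: use theirs
    }
    return g_syncblk_table[idx];
}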
Surely you don't need such a monitor for every object!
When porting from Java to C++, it strikes me as a bad idea to just copy everything blindly. The best structure for Java is not the same as the best for C++, not least because Java has garbage collection and C++ doesn't.
Add a monitor to only those objects that really need it. If only some instances of a type need synchronization then it's not that hard to create a wrapper class that contains the mutex (and possibly condition variable) necessary for synchronization. As others have already said, an alternative is to use a pool of synchronization objects with some means of choosing one for each object, such as using a hash of the object address to index the array.
I'd use the boost thread library or the new C++0x standard thread library for portability rather than relying on platform specifics at each turn. Boost.Thread supports Linux, MacOSX, win32, Solaris, HP-UX and others. My implementation of the C++0x thread library currently only supports Windows and Linux, but other implementations will become available in due course.
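As a small illustration of the pool idea mentioned above, here is a sketch of a fixed pool of mutexes selected by hashing the object's address (the pool size and the shift are arbitrary assumptions):

#include <cstddef>
#include <cstdint>
#include <mutex>

class MutexPool {
public:
    std::mutex &for_object(const void *obj)
    {
        // Hash the address down to an index; objects that share a slot simply
        // share a mutex, which is safe, just occasionally more contended.
        std::uintptr_t p = reinterpret_cast<std::uintptr_t>(obj);
        return pool_[(p >> 4) % kSize];   // >> 4 drops the always-zero alignment bits
    }

private:
    static constexpr std::size_t kSize = 64;
    std::mutex pool_[kSize];
};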