Basically what I want to achieve is to share a dynamically allocated array of state flags among several threads so they can coordinate with each other.
Is there any library that can achieve this reliably on Windows?
I tried OpenMP, and it gave me all kinds of weird bugs and a lot of headaches. Even with omp flush the data was sometimes still not up to date, and volatile pointers didn't help either when the access frequency was high, so the program became very unstable and inconsistent.
Are there any libraries that handle a shared, frequently updated and accessed (dynamic) data array better? Can TBB handle this situation?
Threads of the same process share the same heap, so memory allocated on this heap can be shared between those threads.
All the program needs to ensure is that such "shared" memory is protected against concurrent access.
The latter can be achieved by using locks, like mutexes.
The common solution is to use mutexes. The basic idea is to wrap any access to a shared variable in a critical section, i.e. a mutex lock:
WaitForSingleObject(mutexHandle, INFINITE);
// shared data access & modification
ReleaseMutex(mutexHandle);
See the documentation for CreateMutex and WaitForSingleObject.
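For illustration, here is a minimal sketch of that pattern, assuming an unnamed mutex created with CreateMutex and a hypothetical update function (the names are made up):

    #include <windows.h>

    HANDLE mutexHandle = CreateMutex(NULL, FALSE, NULL);   // unnamed, initially unowned

    void updateShared(int* sharedFlags, int index, int value) {
        WaitForSingleObject(mutexHandle, INFINITE);   // enter the critical section
        sharedFlags[index] = value;                   // shared data access & modification
        ReleaseMutex(mutexHandle);                    // leave the critical section
    }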
If you have access to C++11, try using std::atomic<T> types, which let you share primitive types with atomic access semantics.
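As a minimal sketch (the flag values and thread bodies are just placeholders), a dynamically allocated array of std::atomic<int> can be shared between threads without any extra locking:

    #include <atomic>
    #include <cstddef>
    #include <memory>
    #include <thread>

    int main() {
        const std::size_t n = 64;
        // dynamically allocated array of state flags, visible to all threads
        std::unique_ptr<std::atomic<int>[]> flags(new std::atomic<int>[n]);
        for (std::size_t i = 0; i < n; ++i) flags[i].store(0);

        std::thread writer([&] {
            for (std::size_t i = 0; i < n; ++i)
                flags[i].store(1, std::memory_order_release);   // publish new state
        });
        std::thread reader([&] {
            for (std::size_t i = 0; i < n; ++i)
                while (flags[i].load(std::memory_order_acquire) == 0)
                    std::this_thread::yield();                   // wait for the state change
        });
        writer.join();
        reader.join();
    }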
Problem (in short):
I'm using POSIX shared memory and have so far only used POSIX semaphores, and I need to control multiple readers and multiple writers. I need help with what variables/methods I can use to control access within the limitations described below.
I've found an approach that I want to implement, but I'm unsure how to implement it with POSIX shared memory.
What I've Found
https://stackoverflow.com/a/28140784
This link has the algorithm I'd like to use, but I'm unsure how to implement it with shared memory. Do I store the class in shared memory somehow? This is where I need help, please.
The reason I'm unsure is that a lot of my research points towards keeping shared memory to primitives only, to avoid addressing problems, and says STL objects can't be used.
NOTE:
For all my multi-threading I'm using C++11 features. This shared memory will be accessed by completely separate program executables using C++11 std::thread, and any thread of any process/executable may want access. I have avoided Linux pthreads for my multi-threading and will continue to do so (except perhaps for a control variable, not actual pthreads).
Solution Parameters aimed for
Must be shareable between 2+ processes, each running multiple C++11 std::threads that may want access, i.e. multiple writers (exclusive, one at a time) while allowing multiple simultaneous readers when no writer wants access.
No Boost libraries. Ideally native C++11 or built-in Linux facilities, something that will work without the need to install additional libraries.
Not using actual pthread threads, but I could use some synchronization object from pthreads that works alongside C++11 std::thread.
Ideally it can handle a process crashing while in operation. E.g. with a POSIX semaphore, if a process crashes while it holds the semaphore, everyone is screwed. I have seen people using file locks?
Thanks in advance
keeping shared memory to primitives only to avoid addressing problems
You can use pointers in and to shared memory objects across programs, so long as the memory is mmaped to the same address. This is actually a straightforward proposition, especially on 64 bit. See this open source C library I wrote for implementation details: rszshm - resizable pointer-safe shared memory.
Using POSIX semaphore if a process crashes while it has the semaphore, everyone is screwed.
If you want to use OS mediated semaphores, the SysV semaphores have SEM_UNDO, which recovers in this case. OTOH pthread offers robust mutexes that can be embedded and shared in shared memory. This can be used to build more sophisticated mechanisms.
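As a minimal sketch, assuming the struct lives in a mapping shared by the processes (the struct and function names are made up), a robust process-shared pthread mutex lets the next locker recover after an owner crashes:

    #include <errno.h>
    #include <pthread.h>

    struct SharedState {
        pthread_mutex_t lock;
        int data;                                   // whatever the processes actually share
    };

    void initSharedLock(SharedState* s) {           // run once by the creating process
        pthread_mutexattr_t attr;
        pthread_mutexattr_init(&attr);
        pthread_mutexattr_setpshared(&attr, PTHREAD_PROCESS_SHARED);
        pthread_mutexattr_setrobust(&attr, PTHREAD_MUTEX_ROBUST);
        pthread_mutex_init(&s->lock, &attr);
        pthread_mutexattr_destroy(&attr);
    }

    void touchData(SharedState* s) {
        int rc = pthread_mutex_lock(&s->lock);
        if (rc == EOWNERDEAD) {                     // previous owner died holding the lock
            // repair s->data here if needed, then mark the mutex usable again
            pthread_mutex_consistent(&s->lock);
        }
        s->data += 1;
        pthread_mutex_unlock(&s->lock);
    }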
The SysV scheme of providing multiple semaphores in a semaphore set, where a group of actions must all succeed or the call blocks, permits building sophisticated mechanisms too. A read/write lock can be made with a set of three semaphores.
Does std::atomic play well in shared memory, or is it undefined? It seems like an easy way to add lockless basic types to shared memory; however, I could believe that it's not possible to guarantee atomic behaviour in the context of shared memory.
Why not? You just need to allocate and construct it inside the shared memory region properly.
It depends.
If the architecture you are using supports atomic operations on 64-bit types, I would expect it to work. If std::atomic is simulating atomic operations with mutexes then you will have a problem:
Shared memory is usually used to communicate between processes - and the mutex being used may only work between threads in a single process (for example the Windows CriticalSection API).
Alternatively, shared memory is quite likely to be mapped at different addresses in different processes, and the mutex may have internal pointers, which means it won't work.
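A minimal sketch of the lock-free case, assuming POSIX shared memory and a made-up name "/atomic_demo"; the is_lock_free() check is what distinguishes the safe case from the mutex-emulated one described above:

    #include <atomic>
    #include <fcntl.h>
    #include <new>
    #include <sys/mman.h>
    #include <unistd.h>

    int main() {
        int fd = shm_open("/atomic_demo", O_CREAT | O_RDWR, 0600);
        ftruncate(fd, sizeof(std::atomic<long>));
        void* p = mmap(nullptr, sizeof(std::atomic<long>),
                       PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);

        // construct the atomic in place; in practice only one process should do this
        std::atomic<long>* counter = new (p) std::atomic<long>(0);
        if (!counter->is_lock_free())
            return 1;            // falls back to a lock, which may not work across processes

        counter->fetch_add(1);   // safe from any process mapping the same object
        munmap(p, sizeof(std::atomic<long>));
        close(fd);
    }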
Problem: if I use a mutex lock in the threads, allocation slows down significantly, but I get proper allocation and therefore a proper data structure.
If I don't use a mutex lock, the allocation in the threads finishes much faster, but I get a corrupted data structure.
This is closely related to my previous post, which also had fully working code (with improper usage of the mutex lock):
c++ linked list missing nodes after allocation in multiple threads, on x64 linux; why?
I've tried three different allocators and they all seem to slow down if I use a mutex lock, and if I don't, the data structure gets corrupted. Any suggestions?
If multiple threads use a common data structure, e.g., some sort of memory pool, and there is at least one thread modifying the data structure, you need synchronization of some form. Whether the synchronization is based on atomics, mutexes, or other primitives is a separate question.
The memory allocation mechanisms provided by the standard library (operator new() and malloc() and the other members of their respective families) are thread-safe, and you don't need to do any additional synchronization. If you create your own allocation resource that is shared between multiple threads, you will have to synchronize it, even if it becomes slower as a result.
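For example, a minimal sketch of a custom fixed-size pool shared between threads (the class and its names are made up); the lock is unavoidable, which is why it is slower than the library allocator's tuned fast paths:

    #include <cstddef>
    #include <mutex>

    class FixedPool {                                // hands out BlockSize-byte blocks
        static const std::size_t BlockSize = 64;
        struct Node { Node* next; };
        Node* free_ = nullptr;
        std::mutex m_;
    public:
        void* allocate() {
            std::lock_guard<std::mutex> guard(m_);   // protects the free list
            if (free_) { Node* n = free_; free_ = n->next; return n; }
            return ::operator new(BlockSize);
        }
        void deallocate(void* p) {
            std::lock_guard<std::mutex> guard(m_);   // push the block back onto the free list
            Node* n = static_cast<Node*>(p);
            n->next = free_;
            free_ = n;
        }
    };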
I have a piece of shared memory, shared between two processes, that contains a char string and an integer.
Process A writes to it and Process B reads it (and not vice versa)
What is the most efficient and effective way to make sure that Process A doesn't happen to update (write to) it at the same time Process B is reading it? (Should I just use flags in the shared memory, semaphores, a critical section...?)
If you could point me in the right direction, I would appreciate it.
Thanks.
Windows, C++
You cannot use a Critical Section because these can only be used for synchronization between threads within the same process. For inter process synchronization you need to use a Mutex or a Semaphore. The difference between these two is that the former allows only a single thread to own a resource, while the latter can allow up to a maximum number (specified during creation) to own the resource simultaneously.
In your case a Mutex seems appropriate.
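As a minimal sketch, assuming a file-mapping-backed shared block and made-up object names Local\DemoMutex and Local\DemoShm, the writer side would look roughly like this (the reader opens the same names and takes the mutex around its reads):

    #include <windows.h>

    struct Shared { char text[128]; int value; };

    int main() {
        HANDLE mtx = CreateMutexA(NULL, FALSE, "Local\\DemoMutex");
        HANDLE map = CreateFileMappingA(INVALID_HANDLE_VALUE, NULL, PAGE_READWRITE,
                                        0, sizeof(Shared), "Local\\DemoShm");
        Shared* s = static_cast<Shared*>(
            MapViewOfFile(map, FILE_MAP_ALL_ACCESS, 0, 0, sizeof(Shared)));

        WaitForSingleObject(mtx, INFINITE);          // Process A writes under the mutex
        lstrcpyA(s->text, "hello");
        s->value = 42;
        ReleaseMutex(mtx);                           // Process B can now take the mutex and read

        UnmapViewOfFile(s);
        CloseHandle(map);
        CloseHandle(mtx);
    }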
Since you have two processes you need a cross-process synchronisation object. I think this means that you need to use a mutex.
A mutex object facilitates protection against data races and allows thread-safe synchronization of data between threads. A thread obtains ownership of a mutex object by calling one of the lock functions and relinquishes ownership by calling the corresponding unlock function.
If you are using Boost.Thread, you can use its mutexes and locking; for more to read, see the link below:
http://www.boost.org/doc/libs/1_47_0/doc/html/thread/synchronization.html#thread.synchronization.mutex_types
Since you're talking about two processes, system-wide mutexes will work, and Windows has those. However, they aren't necessarily the most efficient way.
If you can put more things in shared memory, then passing data via atomic operations on flags in that memory should be the most efficient thing to do. For instance, you might use the Interlocked functions to implement Dekker's Algorithm (you'll probably want to use something like YieldProcessor() to avoid busy waiting).
I found this:
Fast interprocess synchronization method
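This is not Dekker's algorithm itself, but a simpler single-writer/single-reader hand-off built on the Interlocked functions gives the flavour. A minimal sketch (the struct layout and names are made up), where the flag is toggled with full-barrier atomics so the data is published before the flag:

    #include <windows.h>
    #include <string.h>

    struct SharedBlock {                             // lives in the shared mapping
        volatile LONG ready;                         // 0 = writer owns it, 1 = data valid
        char text[256];
        int value;
    };

    // Process A: publish a new value once the reader has consumed the old one
    void publish(SharedBlock* s, const char* msg, int v) {
        while (InterlockedCompareExchange(&s->ready, 0, 0) != 0)
            YieldProcessor();                        // spin politely instead of hard busy waiting
        strcpy_s(s->text, sizeof(s->text), msg);
        s->value = v;
        InterlockedExchange(&s->ready, 1);           // full barrier: data visible before the flag
    }

    // Process B: consume a value if one is available
    BOOL tryConsume(SharedBlock* s, char* out, size_t cap, int* v) {
        if (InterlockedCompareExchange(&s->ready, 0, 0) == 0)
            return FALSE;
        strcpy_s(out, cap, s->text);
        *v = s->value;
        InterlockedExchange(&s->ready, 0);           // hand the buffer back to the writer
        return TRUE;
    }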
I used to believe that a pthread mutex can only be shared between two threads in the same address space.
The question/answers there seem to imply:
Suppose I have two separate processes A & B with a shared memory region M. I can put a pthread mutex in M, lock it in A, lock it in B, unlock it in A; and B will no longer block on the mutex. Is this correct? Can pthread mutexes be shared between two separate processes?
Edit: I'm using C++ on Mac OS X.
You need to tell the mutex to be process-shared when it's initialized:
http://www.opengroup.org/onlinepubs/007908775/xsh/pthread_mutexattr_setpshared.html
Note in particular, "The default value of the attribute is PTHREAD_PROCESS_PRIVATE", meaning that accessing it from different processes is undefined behaviour.
If your C/pthread library is conforming, you should be able to tell whether it supports mutexes shared across multiple processes by checking if the _POSIX_THREAD_PROCESS_SHARED feature test macro is defined to a value other than -1, or by querying the system configuration at run time using sysconf(_SC_THREAD_PROCESS_SHARED) if that feature test macro is undefined.
EDIT: As Steve pointed out, you'll need to explicitly configure the mutex for sharing across processes assuming the platform supports that feature as I described above.
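Putting the two previous points together, a minimal sketch (the function is made up; the mutex must already live in memory mapped by both processes):

    #include <pthread.h>
    #include <unistd.h>

    bool initProcessSharedMutex(pthread_mutex_t* m) {
    #if defined(_POSIX_THREAD_PROCESS_SHARED) && (_POSIX_THREAD_PROCESS_SHARED == -1)
        return false;                                   // feature not available on this platform
    #else
        if (sysconf(_SC_THREAD_PROCESS_SHARED) == -1)
            return false;                               // not supported at run time
        pthread_mutexattr_t attr;
        pthread_mutexattr_init(&attr);
        pthread_mutexattr_setpshared(&attr, PTHREAD_PROCESS_SHARED);
        int rc = pthread_mutex_init(m, &attr);          // m must point into shared memory
        pthread_mutexattr_destroy(&attr);
        return rc == 0;
    #endif
    }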
I was concerned that there might be a condition where a mutex in shared memory might fail to behave properly, so I did some digging and came up with some documents which treat the issue like a no-brainer:
https://computing.llnl.gov/tutorials/pthreads/
Further digging, however, showed that older versions of glibc suffered issues in shared memory mutexes: (This is an ancient change, but it illustrates the point.)
in linuxthreads/mutex.c:

    int __pthread_mutexattr_setpshared(...)
    {
      /* For now it is not possible to shared a conditional variable. */
      if (pshared != PTHREAD_PROCESS_PRIVATE)
        return ENOSYS;
    }
Without more detail on what implementation of pthread you're using, it's difficult to say whether you're safe or not.
My cause for concern is that many implementations (and some entire languages, like Perl, Python, and Ruby) have a global lock object that manages access to shared objects. That object would not be shared between processes, and therefore, while your mutexes would probably work most of the time, you might find two processes manipulating the mutex at the same time.
I know that this flies in the face of the definition of a mutex but it is possible:
If two threads are operating at the same time in different processes, it implies that they are on different cores. Both acquire their global lock object and go to manipulate the mutex in shared memory. If the pthread implementation forces the update of the mutex through the caches, both threads could end up updating at the same time, both thinking they hold the mutex. This is just a possible failure vector that comes to mind. There could be any number of others. What are the specifics of your situation - OS, pthreads version, etc.?