I have created two processes that access the same global shared memory. For synchronization, I have used a global semaphore.
Can we find out, without debugging (using any Windows tool), which process has acquired the semaphore?
Print a message in your program each time the semaphore is acquired. Why don't you want to, or can't you, debug?
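For example, something along these lines (a minimal sketch; hSem is assumed to be the handle to your global semaphore):

#include <windows.h>
#include <cstdio>

void acquire_and_log(HANDLE hSem)
{
    if (WaitForSingleObject(hSem, INFINITE) == WAIT_OBJECT_0) {
        printf("process %lu acquired the semaphore\n", GetCurrentProcessId());
        // ... use the shared memory ...
        ReleaseSemaphore(hSem, 1, nullptr);
    }
}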
Really, without more information about what you're trying to do, this is all that can be said.
I used boost::interprocess to create a boost::multi_index data structure in shared memory. There are many client processes that access this data structure, and I lock it while accessing it. The problem I ran into is that once a client process crashes while accessing the data structure, without releasing the lock it holds, all other client processes can no longer access the data structure. I am using boost::interprocess::named_mutex. I know that boost::interprocess::file_lock is released automatically when the process crashes, but it has too many restrictions, so I am not using it. Is there a good way to solve this problem? Thank you!
Do not place a mutex in shared memory. The boost documentation for named_mutex says:
https://www.boost.org/doc/libs/1_70_0/doc/html/boost/interprocess/named_mutex.html
A mutex with a global name, so it can be found from different processes. This mutex can't be placed in shared memory, and each process should have its own named_mutex.
The whole point of using a named mutex is that multiple processes can create their own local mutex objects using the same name and they will share an underlying mutex that they can sync on. If a given process locks the mutex and then crashes, the underlying shared mutex will be released automatically by the OS, allowing another process to lock it (depending on OS, the underlying mutex API may report that the mutex had been unlocked abnormally).
I guess you can try to access the mutex with timed_lock and, if you get a timeout, forcibly delete the mutex with remove.
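Something along these lines might work (a rough sketch only; "my_index_mutex" is a placeholder name, and removing a named mutex that other processes still reference is inherently racy):

#include <boost/interprocess/sync/named_mutex.hpp>
#include <boost/date_time/posix_time/posix_time.hpp>

using namespace boost::interprocess;

void access_shared_index()
{
    named_mutex mtx(open_or_create, "my_index_mutex");

    // Give the current owner a few seconds before declaring the lock dead.
    boost::posix_time::ptime deadline =
        boost::posix_time::microsec_clock::universal_time() +
        boost::posix_time::seconds(5);

    if (!mtx.timed_lock(deadline)) {
        // Assume the previous owner crashed while holding the lock.
        named_mutex::remove("my_index_mutex");
        return; // let the caller retry with a freshly created mutex
    }

    // ... access the multi_index container in shared memory ...

    mtx.unlock();
}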
I know how mutexes on Windows normally work. And yes, sure, I could create a test program to find out the results; I'm just wondering if anybody knows before I write this up.
I saw an IDXGIKeyedMutex in the documentation today. It has a weird calling convention where you use two methods: Acquire(Key) & Release(Key). Acquire waits to obtain the "mutex" (shared resource) associated with the key, no matter what thread it is on. Release releases the shared resource, no matter what thread it is on. It is expected that no thread's calls to Acquire result in Acquire being called more than once before a corresponding Release is called (for the same key).
In this fashion, a lock-step producer/consumer can be done, like this (a code sketch follows):
Producer: Acquire(0), write shared resource, Release(1)
Consumer: Acquire(1), read shared resource, Release(0)
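In code, the pattern I have in mind looks roughly like this (the actual methods are AcquireSync/ReleaseSync; keyedMutex is assumed to have been queried from a shared texture, and error handling is omitted):

// Producer process:
keyedMutex->AcquireSync(0, INFINITE);   // wait until key 0 is released
// ... write the shared resource ...
keyedMutex->ReleaseSync(1);             // hand it over under key 1

// Consumer process:
keyedMutex->AcquireSync(1, INFINITE);   // wait until key 1 is released
// ... read the shared resource ...
keyedMutex->ReleaseSync(0);             // hand it back under key 0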
That got me thinking: can Windows mutexes be used this way, even though it is not documented? What if I create a mutex shared between two processes, call WaitForSingleObject(m_hMutex, INFINITE) in one process, and call ReleaseMutex(m_hMutex) in the other process? I'm assuming this doesn't work? Or does it, but nobody uses it this way?
No, this cannot be done, for a simple reason: a mutex must be released by the thread that owns it, in order to preserve mutual exclusion.
I think you have misunderstood the IDXGIKeyedMutex.
The documentation of the Release method simply says:
Return Value
Returns S_OK if successful. If the device attempted to release a keyed mutex that is not valid or owned by the device, ReleaseSync returns E_FAIL.
It fails when trying to release a mutex owned by another device; note that the mutual exclusion here is between the devices that share a resource.
I always learned that shared memory is the fastest way to share data between two threads (e.g. http://www.boost.org/doc/libs/1_55_0/doc/html/interprocess.html). However, today I discovered that with boost::ref(X) it is possible to give boost a reference to X, enabling access to X from outside the thread. Therefore the following pseudocode should work:
#include <boost/thread.hpp>

MyObject X(para1, para2);         // MyObject has an operator()
boost::thread thr(boost::ref(X)); // run X() on a new thread, without copying X
X.setSomeMember(1);               // X remains accessible from this thread
This got me thinking: assuming setSomeMember is thread-safe, then, for most applications, this approach seems much easier, since most applications spawn their threads as they need them and can therefore always keep and access the object X. So why would I use shared memory or message queues at all, if I have access to the thread object directly? Is it maybe faster? Or am I missing something here?
They're just different features - you happen to highlight the similarities.
Yes, threads are more lightweight than processes.
What you lose is isolation (processes can only share what is explicitly exposed, and only given the right permissions). There is no such control for inter-thread sharing.
If one thread messes up the shared state, all threads suffer; the same goes for shared memory between processes. However, if one thread dies, the whole process dies, which doesn't happen with separate processes.
All in all, it's different. Inter-process synchronization/sharing is more heavyweight but has more features (how will you run a separate thread on a different host? :)).
Does Windows offer any kind of mutex that can be placed in a memory mapped file and used across multiple processes?
Ideally it must be completely self-contained, such that it can survive by itself in the file, even across a reboot.
Also, no resources should be leaked if I simply remove the file manually while no processes are running.
If possible the solution should also offer the accompanying 'condition' concept which should also be an object that can sit in a shared memory mapped file.
In short, I need something similar to a PTHREADS mutex with the SHARED attribute.
As far as I understand, simply using a PTHREADS mutex is not possible because the SHARED attribute is unsupported in the Windows port of PTHREADS.
To share a synchronization object, give it a name and use the same name in each process when you Create the object.
The following synchronization objects can be shared between processes that way:
Mutex
Semaphore
Event
Critical sections cannot be shared, but are faster.
Testing or waiting on those objects is done with the wait family of functions, often WaitForMultipleObjects.
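A minimal sketch of this (the mutex name below is just an example; each process runs the same code and the kernel resolves the name to the same underlying object):

#include <windows.h>

void with_shared_mutex()
{
    // CreateMutex either creates the named mutex or opens the existing one.
    HANDLE hMutex = CreateMutexW(nullptr, FALSE, L"Global\\MyFileMutex");
    if (hMutex == nullptr)
        return;

    if (WaitForSingleObject(hMutex, INFINITE) == WAIT_OBJECT_0) {
        // ... work on the shared memory-mapped file ...
        ReleaseMutex(hMutex);
    }
    CloseHandle(hMutex);
}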
Use the file as its own mutex: Use the LockFileEx function and have everybody agree to lock byte 0 of the file when they want to claim the mutex.
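A rough sketch of that idea (hFile is assumed to be the handle the file mapping was created from; error handling kept minimal):

#include <windows.h>

bool with_byte_zero_lock(HANDLE hFile)
{
    OVERLAPPED ov = {};   // Offset/OffsetHigh are 0, i.e. the lock starts at byte 0
    if (!LockFileEx(hFile, LOCKFILE_EXCLUSIVE_LOCK, 0, 1, 0, &ov))
        return false;

    // ... exclusive access to the shared data in the mapped file ...

    UnlockFileEx(hFile, 0, 1, 0, &ov);
    return true;
}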
That's not possible. The mutex object itself lives in kernel space to protect it from user code messing with its state. The handle you acquired to it is only valid for the process that acquired it. Technically you could use DuplicateHandle() and put the returned handle in the mmf, but only if you have a handle to the other process that accesses the memory section. That's fairly brittle.
This is why you can specify a name for the mutex in the CreateMutex() function. The other process gets to it by using the same name in the OpenMutex call.
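The opening side then looks roughly like this (a sketch; the name must match whatever the creating process used):

#include <windows.h>

void open_existing_mutex()
{
    // Fails if no process has created the named mutex yet.
    HANDLE hMutex = OpenMutexW(MUTEX_ALL_ACCESS, FALSE, L"Global\\MyFileMutex");
    if (hMutex == nullptr)
        return;

    if (WaitForSingleObject(hMutex, INFINITE) == WAIT_OBJECT_0) {
        // ... access the shared section ...
        ReleaseMutex(hMutex);
    }
    CloseHandle(hMutex);
}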
I have a piece of code that handles the multi-threading (with shared resources) issue, like this:
#include <windows.h>

CRITICAL_SECTION gCS;
InitializeCriticalSection(&gCS);

EnterCriticalSection(&gCS);
// Do some shared resources stuff
LeaveCriticalSection(&gCS);
On this MSDN page it is written: "The threads of a single process [my bold] can use a critical section object for mutual-exclusion synchronization."
So, my question is: what about the case where the operating system decides to distribute the threads among different processes, or even different processors?
Does EnterCriticalSection indeed not do the job? And if the answer is "critical sections are no help with multi-processing", what is the alternative?
I prefer not to use the Boost classes.
An operating system will not divide the threads of one process into different processes.
EnterCriticalSection is appropriate for programs with multiple threads, as well as systems with multiple processors.
So, my question is: what about the case where the operating system decides to distribute the threads among different processes, or even different processors?
Different processors - critical sections cover this.
Different processes - you need a different synchronization API, one that can share [kernel] objects between processes, such as mutexes and semaphores.
See sample usage in the Using Mutex Objects section.
If all your threads are started in the same program, they are part of a single process and there is nothing anyone, including the OS, can do to "separate them". They exist only as part of that process and will die with the process. You are perfectly safe using a critical section.
A process is allocated a new address space (stack & heap), whereas a thread, when created, implicitly shares the address space of the process that created it but gets its own newly allocated stack (a separate stack is assigned to each thread).
To the OS, a thread executes much the same as a process would; naturally, using threads tends to result in more cache and memory/page hits.
The OS gives time to the process, which may then use its own scheduler to divide that time between its threads; but this is not required, since threads are scheduled like processes, sit in the same process table, and can run on any core concurrently, at any time, just like a regular process.
Since threads of the same process share the same memory, they can synchronize on variables/lock objects at user level.
A process should not have access to another process's allocated memory (unless it is a thread sharing that address space), so synchronization between processes has to be done through some shared/global space or at kernel level.