Cross-Process Mutex Read/Write Locking - C++

I'm trying to implement inter-process communication in C/C++ in a Windows environment.
I am creating a shared memory page file and two processes get the handle to that file. It's like this:
Process1: Initialize shared memory area. Wait for Process2 to fill it.
Process2: Get handle to shared memory area. Put stuff in it.
I am also creating a named mutex in Process1. Process1 acquires ownership of the mutex soon after creating it (using WaitForSingleObject). Obviously there is nothing in the memory area yet, so I need to release the mutex. At that point I need to wait until the memory is filled instead of trying to acquire the mutex again.
I was thinking of conditional variables. Process2 signals the condition variable once it fills in the memory area and process1 will acquire the information immediately.
However, as per MS Documentation on Condition Variables, they are not shared across processes which is clear from their initialization as they are not named.
Furthermore, the shared memory area holds at most one element at any given moment, which means Process2 cannot refill it until Process1 has extracted the existing information.
From the given description it's clear that condition variables are the best for this purpose (or Monitors). So is there a way around this?

Condition variables can be used within a process, but not across processes.
Try a named pipe with PIPE_ACCESS_DUPLEX as the open mode, so that you have communication in both directions between the processes.
https://msdn.microsoft.com/en-us/library/windows/desktop/aa365150(v=vs.85).aspx

I have used events for this before. Use two named auto-reset events: one data-ready event and one buffer-ready event. The writer waits for buffer-ready, writes the data, and sets data-ready. The reader waits for data-ready, reads the memory, and sets buffer-ready. If done properly you should not need the mutex at all.

Related

Waiting for Memory Value to Change

I have two separate processes, a client and server process. They are linked using shared memory.
A client will begin his response by first altering a certain part of the shared memory to the input value and then flipping a bit indicating that the input is valid and that the value has not already been computed.
The server waits for a kill signal, or new data to come in. Right now the relevant server code looks like so:
while (!((*metadata) & SERVER_KILL)) {
    // while no kill signal
    bool valid_client = ((*metadata) & CLIENT_REQUEST_VALID) == CLIENT_REQUEST_VALID;
    bool not_already_finished = ((*metadata) & SERVER_RESPONSE_VALID) != SERVER_RESPONSE_VALID;
    if (valid_client && not_already_finished) {
        // place square root of input in memory, set
        // metadata to indicate the value has been found
        *int2 = sqrt(*int1);
        *metadata = *metadata | SERVER_RESPONSE_VALID;
    }
}
The problem with this is that the while loop takes up too many resources.
Most solutions to this problem are usually with a multithreaded application in which case you can use condition variables and mutexes to control the progression of the server process. Since these are single threaded applications, this solution is not applicable. Is there a lightweight solution that allows for waiting for these memory locations to change all while not completely occupying a hardware thread?
You can poll or block... You can also wait for an interrupt, but that would probably also entail some polling or blocking.
Is message-passing on the table? That would allow you to block. Maybe a socket?
You can also send a signal from one process to another; you would write a signal handler in the receiving process.
Note that when a signal handler runs, it preempts the process's thread of execution. In other words, the main thread is paused while the handler runs. So your signal handler shouldn't grab a lock if there's a chance the lock is already held, as that creates a deadlock. You can avoid this by using a re-entrant lock, or one of the special lock types that blocks signal delivery before grabbing the lock. Things that grab locks: mutex.lock (obviously), I/O, allocating memory, condition.signal (much less obvious).

Concurrency of processes for mutex

I have to write a daemon that decides the access policy for mutexes (it establishes which process gets a mutex when more than one wants the same mutex, based on whatever criteria).
For that I established some codes: L 1 231 (LOCK mtx_id process_pid).
When a process requests a mutex, it writes a code similar to the one above into a shared memory zone.
The daemon reads it. (For every mutex I have a queue of processes waiting to get it.) It puts the process pid in the queue.
If the mutex is unlocked, it pops the queue and gives out the mutex. (It writes the mutex id and the pid of the process that got it into shared memory, for other processes to read and know who holds the mutex.)
My question is: how do multiple processes request the same mutex? Creating them all up front and selecting the requesting process manually does not seem like a good option.
Any help is appreciated. Thank you.
Many OS have a container, a catalog, directory or registry, of OS objects that can be stored by name. Once stored in the container, they can be looked up by name and a reference token returned. That token can then be used to access the object.
A synchro object like an inter-process mutex would be a good candidate for storage in the container. Multiple processes could then look up the mutex by name and use it.
Such cataloged objects are often reference-counted so that they are only destroyed when the last process with a token calls for it to be closed.
BTW - see comments, your design suc.... has issues :(

How to pass data from one thread to another running thread using pthread in C++

Is there a way to pass data from one running thread to another running thread? One of the threads shows a menu and the user selects an option using cin. The other thread processes data and sends the result to a server every 'X' period of time. Since the whole program would otherwise block on the cin instruction waiting for the user to input data, I divided the program into two threads. The data input from the menu is used in the other thread.
Thanks
As far as I know, with pthreads there is no direct way of passing any arbitrary data from one thread to another.
However, threads share the same memory space; and as a result one thread can modify an object in the memory, and the other one can read it. To avoid race conditions, the access to this shared-memory object requires synchronization using a mutex.
Thread #1: when user responds: locks mutex, modifies the object and unlocks mutex.
Thread #2: every "x" period of time: locks the mutex, reads the object state, unlocks mutex and then does its processing based on the object state.
I ran into the same problem in an HTTP server: one thread accepts client sockets but hands them off to another thread for processing. My suggestion is that the waiting thread and the dealing thread share the same queue; pass a pointer to the queue to both threads. The waiting thread writes data into the queue when there is user input, and the dealing thread sleeps until the queue is not empty. E.g.:
ring_queue rq; // remember to pass the address of rq to waiting_thread & dealing_thread

// waiting thread
while (true) {
    res = getInput(); // blocks here
    rq.put(res);
}

// dealing thread
while (true) {
    while (rq.isEmpty()) {
        usleep(100);
    }
    // not empty
    doYourWorks();
}

Shared memory - need for synchronization

I've seen a project where communication between processes was made using shared memory (e.g. using ::CreateFileMapping under Windows) and every time one of the processes wanted to notify that some data is available in shared memory, a synchronization mechanism using named events notified the interested party that the content of the shared memory changed.
I am concerned that the appropriate memory fences are not present for the reading process to know that it has to invalidate its copy of the data and read it from main memory once it is "published" by the producer process.
Do you know how can this be accomplished on Windows using shared memory?
EDIT
Just wanted to add that after creating the file mapping, the processes call the MapViewOfFile() API only once, and every subsequent modification to the shared data is read through the pointer obtained by that initial call to MapViewOfFile(). Does correct synchronization require that the reading process call MapViewOfFile() again every time the data changes?
If you use a Windows Named Event for signaling changes, then everything should be OK.
Process A changes the data and calls SetEvent.
Process B waits for the event using WaitForSingleObject or similar, and sees that it is set.
Process B then reads the data. WaitForSingleObject contains all the necessary synchronization to ensure that the changes made by process A before the call to SetEvent are read by process B.
Of course, if you make any changes to the data after calling SetEvent, then these may or may not show up when process B reads the data.
If you don't want to use Events, you could use a Mutex created with CreateMutex, or you could write lock-free code using the Interlocked... functions such as InterlockedExchange and InterlockedIncrement.
However you do the synchronization, you do not need to call MapViewOfFile more than once.
What you're looking for, for shared memory on Windows, is the InterlockedExchange function. See the MSDN article here. The REALLY important part is quoted:
This function generates a full memory barrier (or fence) to ensure that memory operations are completed in order.
This works cross-process. I've worked with it before and found it 100% reliable for implementing a mutex-like construct on top of shared memory.
The way you use it is to exchange the lock location with the "set" value. If you get "clear" back, you have the lock (it was clear), but if you get "set" back, somebody else had it. You loop, sleeping between iterations, until you "get" it. Basically this:
#define LOCK_SET   1
#define LOCK_CLEAR 0

volatile LONG* lock_location = LOCK_LOCATION; // ensure this is in shared memory

if (InterlockedExchange(lock_location, LOCK_SET) == LOCK_CLEAR)
{
    return true;  // got the lock
}
else
{
    return false; // didn't get the lock
}
As above, and loop until you "get" it.
Let's call process A the data producer and process B the data consumer. So far, you have a mechanism for process A to notify process B that new data has been produced. I suggest you create a reverse notification (from B to A) that tells process A the data has been consumed. If, for performance reasons, you don't want process A to wait for the data to be consumed, you could set up a ring buffer in the shared memory.

Difference between event object and condition variable

What is the difference between event objects and condition variables?
I am asking in context of WIN32 API.
Event objects are kernel-level objects. They can be shared across process boundaries, and are supported on all Windows OS versions. They can be used as their own standalone locks to shared resources, if desired. Since they are kernel objects, the OS has limitations on the number of available events that can be allocated at a time.
Condition variables are user-mode objects. They cannot be shared across process boundaries, and are only supported on Vista/2008 and later. They do not act as their own locks, but require a separate lock to be associated with them, such as a critical section. Since they are user-mode objects, the number of available variables is limited only by available memory. When a condition variable is put to sleep, it automatically releases the specified lock object so another thread can acquire it. When the condition variable wakes up, it automatically re-acquires the specified lock object.
In terms of functionality, think of a condition variable as a logical combination of two objects working together: a keyed event and a lock object. When the condition variable is put to sleep, it resets the event, releases the lock, waits for the event to be signaled, and then re-acquires the lock. For instance, if you use a critical section as the lock object, SleepConditionVariableCS() is similar to a sequence of calls to ResetEvent(), LeaveCriticalSection(), WaitForSingleObject(), and EnterCriticalSection(). Whereas if you use an SRW lock, SleepConditionVariableSRW() is similar to a sequence of calls to ResetEvent(), ReleaseSRWLock...(), WaitForSingleObject(), and AcquireSRWLock...().
They are very similar, but event objects work across process boundaries, whereas condition variables do not. From the MSDN documentation on condition variables:
Condition variables are user-mode objects that cannot be shared across processes.
From the MSDN documentation on event objects:
Threads in other processes can open a handle to an existing event object by specifying its name in a call to the OpenEvent function.
The most significant difference is that an event object is a kernel object and can be shared across processes as long as it is alive when processes/threads try to acquire it; a condition variable, on the contrary, is a user-mode object, which is lightweight (it is only the size of a pointer and has nothing additional to release after use) and has better performance.
Typically, a condition variable is used together with a lock, since we need to keep the shared data properly synchronized. Condition variables are built on keyed events, which were improved starting in Vista.
Joe Duffy has a blog post (http://joeduffyblog.com/2006/11/28/windows-keyed-events-critical-sections-and-new-vista-synchronization-features/) that explains this in more detail.