I have read a few documents about mutexes, and the only idea I have taken away so far is that a mutex prevents threads from accessing a resource that is already being used by another thread.
I got this code snippet and executed it, and it works fine:
#include <windows.h>
#include <process.h>
#include <stdio.h>
#include <iostream>
using namespace std;

BOOL FunctionToWriteToDatabase(HANDLE hMutex)
{
    DWORD dwWaitResult;

    // Request ownership of mutex.
    dwWaitResult = WaitForSingleObject(
        hMutex,   // handle to mutex
        5000L);   // five-second time-out interval

    switch (dwWaitResult)
    {
        // The thread got mutex ownership.
        case WAIT_OBJECT_0:
            __try
            {
                // Write to the database.
            }
            __finally
            {
                // Release ownership of the mutex object.
                if (! ReleaseMutex(hMutex))
                {
                    // Deal with error.
                }
            }
            break;

        // Cannot get mutex ownership due to time-out.
        case WAIT_TIMEOUT:
            return FALSE;

        // Got ownership of the abandoned mutex object.
        case WAIT_ABANDONED:
            return FALSE;
    }
    return TRUE;
}
int main()
{
    HANDLE hMutex;

    hMutex = CreateMutex(NULL, FALSE, "MutexExample");
    if (hMutex == NULL)
    {
        printf("CreateMutex error: %lu\n", GetLastError());
    }
    else if (GetLastError() == ERROR_ALREADY_EXISTS)
    {
        printf("CreateMutex opened existing mutex\n");
    }
    else
    {
        printf("CreateMutex created new mutex\n");
    }
    return 0;
}
But what I don't understand is: where is the thread and where is the shared resource? Can anyone explain, or point me to a better article or document?
A mutex provides mutually exclusive access to a resource; in your case, a database. There aren't multiple threads in your program, but you can have multiple instances of your program running, which is what your mutex is protecting against. Effectively, it is still protecting against access from more than one thread, it's just that those threads can be in separate processes.
Your code is creating a named mutex that can be shared across multiple instances of your application. This is a form of interprocess communication. MSDN documentation on CreateMutex has additional helpful information about named mutexes:
Two or more processes can call CreateMutex to create the same named mutex. The first process actually creates the mutex, and subsequent processes with sufficient access rights simply open a handle to the existing mutex...
Multiple processes can have handles of the same mutex object, enabling use of the object for interprocess synchronization.
A mutex is only necessary here if the database you're working against doesn't inherently support multithreaded access.
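To make the missing pieces visible, here is a minimal sketch (not your code, and the names are made up) with two actual threads and a shared counter standing in for the database; the mutex is what keeps the increments from racing:
#include <windows.h>
#include <process.h>
#include <stdio.h>

// Hypothetical shared resource standing in for "the database".
static int g_sharedCounter = 0;
static HANDLE g_hMutex;

unsigned __stdcall WorkerThread(void*)
{
    for (int i = 0; i < 1000; ++i)
    {
        if (WaitForSingleObject(g_hMutex, INFINITE) == WAIT_OBJECT_0)
        {
            ++g_sharedCounter;      // the protected access to the shared resource
            ReleaseMutex(g_hMutex);
        }
    }
    return 0;
}

int main()
{
    g_hMutex = CreateMutex(NULL, FALSE, NULL);   // unnamed: only this process uses it
    HANDLE hThreads[2];
    hThreads[0] = (HANDLE)_beginthreadex(NULL, 0, WorkerThread, NULL, 0, NULL);
    hThreads[1] = (HANDLE)_beginthreadex(NULL, 0, WorkerThread, NULL, 0, NULL);
    WaitForMultipleObjects(2, hThreads, TRUE, INFINITE);
    printf("counter = %d\n", g_sharedCounter);   // always 2000 because of the mutex
    CloseHandle(hThreads[0]);
    CloseHandle(hThreads[1]);
    CloseHandle(g_hMutex);
    return 0;
}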
Maybe this will be the best source for you:
http://en.wikipedia.org/wiki/Mutual_exclusion
You can refer to this SO post for a comparison of the various thread synchronization mechanisms:
Difference between Locks, Mutex and Critical Sections
If you want information specific to mutexes, Wikipedia will give you enough details.
This link on MSDN provides an example similar to yours, with threads created in the main() function. But again the shared resource, which is supposed to be a database, is not included.
Anyway, a shared resource is anything that needs to be accessed from multiple threads: settings files, drivers, databases, ...
Mind you, the counter in that example is written while protected by the mutex, but it is read while not protected. In this case there is probably no problem, but it is a bit sloppy.
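For completeness, a sketch of what protecting the read as well would look like (hCounterMutex and g_counter are illustrative names, not taken from the MSDN sample):
// Both the write...
WaitForSingleObject(hCounterMutex, INFINITE);
++g_counter;
ReleaseMutex(hCounterMutex);

// ...and the read go through the same mutex.
WaitForSingleObject(hCounterMutex, INFINITE);
int snapshot = g_counter;
ReleaseMutex(hCounterMutex);
printf("counter = %d\n", snapshot);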
I'm doing an assignment where we create 4 threads that all share some memory. I just want to ask whether the design of my monitor looks good, since I'm hitting a deadlock/stall somewhere in my code when I try to cancel all the threads.
Previously I traced the stalling to threads being cancelled while holding the mutex, leaving another thread deadlocked waiting for the mutex to unlock. I've implemented some changes, but it still seems to stall when I pipe some data into it using
cat input.txt | ./app
However, if I read the data directly from the file using getline() then it does not stall and all threads are cancelled.
Currently, the monitor contains the 2 shared lists (created from the same pool of nodes), a mutex controlling access to these lists, and 4 condition variables, 2 per list.
//shared data
static List *sendList;
static List *receivelist;
//locks
static pthread_cond_t sendListIsFullCondVar = PTHREAD_COND_INITIALIZER;
static pthread_cond_t sendListIsEmptyCondVar = PTHREAD_COND_INITIALIZER;
static pthread_cond_t receiveListIsFullCondVar = PTHREAD_COND_INITIALIZER;
static pthread_cond_t receiveListIsEmptyCondVar = PTHREAD_COND_INITIALIZER;
static pthread_mutex_t listAccessMutex = PTHREAD_MUTEX_INITIALIZER;
The monitor interface consists of an add function and a get function for each list. Each thread is expected to either add to a list or get from a list, but not both.
More specifically, keyboardThread puts data into sendlist, sendThread gets data from sendlist, receiveThread puts data into receivelist, and printThread gets data from receivelist.
void Monitor_sendlistAdd(void *item)
{
    pthread_mutex_lock(&listAccessMutex);
    if (List_count(sendList) == MAX_LIST_SIZE)
    {
        pthread_setcancelstate(PTHREAD_CANCEL_ENABLE, NULL);
        pthread_cond_wait(&sendListIsFullCondVar, &listAccessMutex);
    }
    pthread_setcancelstate(PTHREAD_CANCEL_DISABLE, NULL);
    List_prepend(sendList, item);
    pthread_cond_signal(&sendListIsEmptyCondVar);
    pthread_mutex_unlock(&listAccessMutex);
    pthread_setcancelstate(PTHREAD_CANCEL_ENABLE, NULL);
}
void *Monitor_sendlistGet()
{
    pthread_mutex_lock(&listAccessMutex);
    if (List_count(sendList) == 0)
    {
        pthread_setcancelstate(PTHREAD_CANCEL_ENABLE, NULL);
        pthread_cond_wait(&sendListIsEmptyCondVar, &listAccessMutex);
    }
    pthread_setcancelstate(PTHREAD_CANCEL_DISABLE, NULL);
    void *item = List_trim(sendList);
    pthread_cond_signal(&sendListIsFullCondVar);
    pthread_mutex_unlock(&listAccessMutex);
    pthread_setcancelstate(PTHREAD_CANCEL_ENABLE, NULL);
    return item;
}
(I didn't include the interface for receiveList since it is identical.)
I'm changing the cancel state after the if-statement to make sure a thread is not cancelled while it holds the mutex, which would stall any other thread waiting for that mutex.
I'm also giving each thread a cleanup handler that releases the mutex, again to ensure that no thread is cancelled while leaving the mutex locked.
void *keyboardThread()
{
    pthread_cleanup_push(Monitor_releaseListAccessMutex, NULL);
    while (1)
    {
        ... some code
    }
    pthread_cleanup_pop(1);
    return NULL;
}
So yeah, I'm just completely stumped as to where else my code could be blocking. The rest of the code just makes connections to sockets to shoot data between ports, plus some mallocs. I have looked at the manual and it seems like mutex_lock is the only function in my code that can block a thread cancel.
Do not use thread cancellation if you can possibly help it. It has deep problems associated with it.
To break threads out of a pthread_cond_wait(), use pthread_cond_signal() or pthread_cond_broadcast(). Or use pthread_cond_timedwait() in the first place and just let the timeout expire.
"But, wait!" I imagine you saying, "Then my threads will just proceed as if they had been signaled normally!" And there's the rub: your threads need to be able to handle spurious returns from their waits anyway, as those can and do happen. They must check before waiting whether they need to wait at all, and then they must check again, and potentially wait again, after returning from their wait.
What you can do, then, is add a shared flag that informs your thread(s) that they should abort instead of proceeding normally, and have them check that it is not set as one of the conditions for waiting. If it is set before any iteration of the wait loop then the thread should take whatever action is appropriate, such as (releasing all its locked mutexes and) terminating.
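As a sketch of that idea, reusing the names from the question but adding a hypothetical shuttingDown flag and Monitor_requestShutdown() function (and switching the wait from if to while so spurious wakeups are handled), the get side might look like this:
// Hypothetical additions: a shuttingDown flag and a Monitor_requestShutdown()
// that wakes every waiter instead of cancelling the threads.
static bool shuttingDown = false;

void Monitor_requestShutdown(void)
{
    pthread_mutex_lock(&listAccessMutex);
    shuttingDown = true;
    pthread_cond_broadcast(&sendListIsEmptyCondVar);
    pthread_cond_broadcast(&sendListIsFullCondVar);
    pthread_cond_broadcast(&receiveListIsEmptyCondVar);
    pthread_cond_broadcast(&receiveListIsFullCondVar);
    pthread_mutex_unlock(&listAccessMutex);
}

void *Monitor_sendlistGet()
{
    pthread_mutex_lock(&listAccessMutex);
    // Re-check the predicate in a loop: spurious wakeups and shutdown both land here.
    while (List_count(sendList) == 0 && !shuttingDown)
    {
        pthread_cond_wait(&sendListIsEmptyCondVar, &listAccessMutex);
    }
    void *item = NULL;
    if (!shuttingDown)
    {
        item = List_trim(sendList);
        pthread_cond_signal(&sendListIsFullCondVar);
    }
    pthread_mutex_unlock(&listAccessMutex);
    return item;    // NULL tells the caller the monitor is shutting down
}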
You remark:
I'm also giving each thread a cleanup handler that releases the mutex
That's probably a bad idea, and it may be directly contributing to your problem. Threads must not attempt to unlock a mutex that they do not hold locked, so for your cleanup handlers to work correctly, you would need to track which mutexes each thread currently has locked, in a form that the cleanup handlers can act upon. It's conceivable that you could do that, but at best it would be messy, and probably fragile, and it might well carry its own synchronization issues. This is among the problems attending use of thread cancellation.
I am new to multithreading in C (Linux).
I am writing a multi-client server (single server) program, using threads to run the server. What I need is: while a client is waiting for a reply, the other (server) threads should not run.
while ((n = read(conn->sock, buffer, sizeof(buffer))) > 0)
{
    // HERE I NEED TO LOCK THE OTHER THREADS FROM THEIR EXECUTION

    // process
    // process
    // end of process

    // HERE I NEED TO RELEASE THE LOCK FOR THE OTHER THREADS' EXECUTION
}
}
I did not find anything specific on the net; even an example URL would be helpful.
For this you can use e.g. condition variables, where you can notify all waiting threads.
You can use condition variables from POSIX pthreads if you need pure C.
https://computing.llnl.gov/tutorials/pthreads/#ConditionVariables
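To make that concrete, here is a minimal sketch of the idea (all names such as gate_closed are made up): the serving thread closes a "gate" before waiting for the client, the other server threads check the gate before doing work, and a condition variable wakes them when the gate reopens.
#include <pthread.h>
#include <stdbool.h>

static pthread_mutex_t gate_mutex = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  gate_cond  = PTHREAD_COND_INITIALIZER;
static bool gate_closed = false;

void gate_close(void)                 /* call before waiting for the client reply */
{
    pthread_mutex_lock(&gate_mutex);
    gate_closed = true;
    pthread_mutex_unlock(&gate_mutex);
}

void gate_open(void)                  /* call once the reply has arrived */
{
    pthread_mutex_lock(&gate_mutex);
    gate_closed = false;
    pthread_cond_broadcast(&gate_cond);   /* wake all waiting server threads */
    pthread_mutex_unlock(&gate_mutex);
}

void gate_wait(void)                  /* other server threads call this before running */
{
    pthread_mutex_lock(&gate_mutex);
    while (gate_closed)
        pthread_cond_wait(&gate_cond, &gate_mutex);
    pthread_mutex_unlock(&gate_mutex);
}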
You should use Mutual Exclusion (mutex). If you're on Windows I would use EnterCriticalSection on a critical section object.
You can also use std::mutex, which has been added in C++11 to try and create a standardized technique for this sort of thing.
Basically you have the thread that needs to transmit or access something take 'ownership' of the mutex and all other threads would check before taking action to see if the mutex is already owned. If the mutex is owned the other threads would wait until it is released thus waiting their turn to take action.
It is highly advisable to use the operating system's built-in method for doing this, like my suggestion for Windows. If you don't, you will not have the same level of fairness. Most operating systems have built-in optimizations for this, while the STL objects may not.
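For illustration, a minimal C++11 sketch of the std::mutex approach (the names worker and shared_value are made up):
#include <iostream>
#include <mutex>
#include <thread>

std::mutex m;
int shared_value = 0;

void worker()
{
    for (int i = 0; i < 10000; ++i)
    {
        std::lock_guard<std::mutex> guard(m);   // takes ownership; released at end of scope
        ++shared_value;
    }
}

int main()
{
    std::thread a(worker), b(worker);
    a.join();
    b.join();
    std::cout << shared_value << '\n';          // 20000: the increments never interleave
}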
Edit:
I somehow missed the Linux tag, but AlexBG provided the link to POSIX built-in mutex usage: https://computing.llnl.gov/tutorials/pthreads/#ConditionVariables
Search for thread synchronization.
Global variables are shared by threads, so you can use a global variable and then acquire a lock using that variable.
To protect a shared resource against concurrent access, a simple (fast) mutex will do.
#include <pthread.h>

pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;

[...]

void some_func(void)
{
    [...]

    pthread_mutex_lock(&mutex);
    /* Access to shared resource here (print to stdout for example). */
    pthread_mutex_unlock(&mutex);

    [...]
}
Please note that this code lacks error checking for the sake of readability.
If I have the following code:
#include <boost/date_time.hpp>
#include <boost/thread.hpp>

boost::shared_mutex g_sharedMutex;

void reader()
{
    boost::shared_lock<boost::shared_mutex> lock(g_sharedMutex);
    boost::this_thread::sleep(boost::posix_time::seconds(10));
}

void makeReaders()
{
    while (1)
    {
        boost::thread ar(reader);
        boost::this_thread::sleep(boost::posix_time::seconds(3));
    }
}

int main()
{
    boost::thread mr(makeReaders);
    boost::this_thread::sleep(boost::posix_time::seconds(5));
    boost::unique_lock<boost::shared_mutex> lock(g_sharedMutex);
    ...
}
the unique lock will never be acquired, because there are always going to be readers. I want a unique_lock that, when it starts waiting, prevents any new read locks from gaining access to the mutex (called a write-biased or write-preferred lock, based on my wiki searching). Is there a simple way to do this with boost? Or would I need to write my own?
Note that I won't comment on the Win32 implementation because it's way more involved and I don't have the time to go through it in detail. That being said, its interface is the same as the pthread implementation, which means that the following answer should be equally valid.
The relevant pieces of the pthread implementation of boost::shared_mutex as of v1.51.0:
void lock_shared()
{
    boost::this_thread::disable_interruption do_not_disturb;
    boost::mutex::scoped_lock lk(state_change);
    while(state.exclusive || state.exclusive_waiting_blocked)
    {
        shared_cond.wait(lk);
    }
    ++state.shared_count;
}

void lock()
{
    boost::this_thread::disable_interruption do_not_disturb;
    boost::mutex::scoped_lock lk(state_change);
    while(state.shared_count || state.exclusive)
    {
        state.exclusive_waiting_blocked=true;
        exclusive_cond.wait(lk);
    }
    state.exclusive=true;
}
The while loop conditions are the most relevant part for you. For the lock_shared function (read lock), notice how the while loop will not terminate as long as there's a thread trying to acquire (state.exclusive_waiting_blocked) or already owns (state.exclusive) the lock. This essentially means that write locks have priority over read locks.
For the lock function (write lock), the while loop will not terminate as long as there's at least one thread that currently owns the read lock (state.shared_count) or another thread owns the write lock (state.exclusive). This essentially gives you the usual mutual exclusion guarantees.
As for deadlocks, well the read lock will always return as long as the write locks are guaranteed to be unlocked once they are acquired. As for the write lock, it's guaranteed to return as long as the read locks and the write locks are always guaranteed to be unlocked once acquired.
In case you're wondering, the state_change mutex is used to ensure that there's no concurrent calls to either of these functions. I'm not going to go through the unlock functions because they're a bit more involved. Feel free to look them over yourself, you have the source after all (boost/thread/pthread/shared_mutex.hpp) :)
All in all, this is pretty much a textbook implementation, and it has been extensively tested in a wide range of scenarios (libs/thread/test/test_shared_mutex.cpp and massive use across the industry). I wouldn't worry too much as long as you use it idiomatically (no recursive locking, and always lock using the RAII helpers). If you still don't trust the implementation, you could write a randomized test that simulates whatever case you're worried about and let it run overnight on hundreds of threads. That's usually a good way to tease out deadlocks.
Now why would you see that a read lock is acquired after a write lock is requested? Difficult to say without seeing the diagnostic code that you're using. Chances are that the read lock is acquired after your print statement (or whatever you're using) is completed and before state_change lock is acquired in the write thread.
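If you want to convince yourself of the write preference, a small experiment along these lines (an assumed sketch, not production code) should show reader 2 only getting in after the writer, even though it was started while the writer was still waiting:
#include <boost/date_time.hpp>
#include <boost/thread.hpp>
#include <iostream>

boost::shared_mutex mtx;

void reader(int id)
{
    boost::shared_lock<boost::shared_mutex> lk(mtx);
    std::cout << "reader " << id << " in\n";
    boost::this_thread::sleep(boost::posix_time::seconds(2));
}

void writer()
{
    std::cout << "writer waiting\n";
    boost::unique_lock<boost::shared_mutex> lk(mtx);
    std::cout << "writer in\n";
}

int main()
{
    boost::thread r1(reader, 1);                              // holds the read lock for 2s
    boost::this_thread::sleep(boost::posix_time::seconds(1));
    boost::thread w(writer);                                  // blocks behind r1
    boost::this_thread::sleep(boost::posix_time::seconds(1));
    boost::thread r2(reader, 2);                              // blocks behind the waiting writer
    r1.join(); w.join(); r2.join();
}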
I'm not clear about this, can someone confirm this for me?
I have the following synchronization issue. I have the following objects:
A. Process 1, thread 1: Read & write access to the resource.
B. Process 1, thread 2: Read access to the resource.
C. Process 2, thread 3: Read access to the resource.
And here are the access conditions:
A must be blocked while B or C are on.
B must be blocked only while A is on.
C must be blocked only while A is on.
So I thought to use 2 named mutexes for that:
hMutex2 = used to satisfy condition 2 above.
hMutex3 = used to satisfy condition 3 above.
hStopEvent = a stop event (needs to stop the thread if the app is closing).
So for A:
HANDLE hHandles[3] = {hMutex2, hMutex3, hStopEvent};
DWORD dwRes = WaitForMultipleObjects(3, hHandles, FALSE, INFINITE);
if(dwRes == WAIT_OBJECT_0 + 2)
{
    //Quit now
    return;
}
else if(dwRes == WAIT_OBJECT_0 + 0 ||
        dwRes == WAIT_OBJECT_0 + 1)
{
    //Do reading & writing here
    ...

    //Release ownership
    ReleaseMutex(hMutex2);
    ReleaseMutex(hMutex3);
}
else
{
    //Error
}
For B:
DWORD dwRes = WaitForSingleObject(hMutex2, INFINITE);
if(dwRes == WAIT_OBJECT_0)
{
    //Do reading here
    ...

    //Release ownership
    ReleaseMutex(hMutex2);
}
else
{
    //Error
}
For C:
DWORD dwRes = WaitForSingleObject(hMutex3, INFINITE);
if(dwRes == WAIT_OBJECT_0)
{
    //Do reading here
    ...

    //Release ownership
    ReleaseMutex(hMutex3);
}
else
{
    //Error
}
Can someone confirm this:
When calling WaitForMultipleObjects on both mutexes, do they both become signaled (or blocked)?
Also, do I need to release both mutexes?
The WaitForMultipleObjects call as written (FALSE for the 3rd parameter) will return when any one of the mutexes is signaled. This means that both the writer and one of the readers could obtain simultaneous access to the resource. One reader could be accessing the resource while the other reader releases its mutex. At that point, the writer would be released.
So to use both mutexes like that, you would need to wait on both of them. However, you cannot just set that third parameter to TRUE since it would mean that it would require hStopEvent to also be signaled in order to release that thread (which is obviously not desired).
One possibility might be to check which mutex was released and then have the writer wait for the other one as well before continuing. Then it would need to release both of them after finishing its task. A problem with this type of solution is that it can start getting complex in a hurry and if you add more processes that need the mutexes, you can end up with deadlock if you are not careful. Using a reader-writer type of lock would simplify the processing quite a bit.
Edit: This is not really part of the answer to the question, but depending on the processes involved, how often they will access the resource, and how long they will hold the mutex while accessing it, you could really simplify things by using one mutex and just treating it as a critical section: each process acquires it when it needs access to the resource. Of course that would not allow both reader threads/processes to have concurrent access, which may or may not be acceptable. But it is a lot easier to verify in the long run.
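A sketch of that single-mutex simplification, with a made-up mutex name; every accessor in either process, reader or writer, funnels through the same named mutex:
// Same code in both processes; "MyResourceMutex" is a made-up name.
HANDLE hMutex = CreateMutex(NULL, FALSE, TEXT("MyResourceMutex"));

DWORD dwRes = WaitForSingleObject(hMutex, INFINITE);
if (dwRes == WAIT_OBJECT_0)
{
    // Read or write the resource here; nobody else can be inside.
    ReleaseMutex(hMutex);
}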
What you are looking for is a reader-writer lock. In your algorithm there is one serious problem: starvation of process A. If B and C keep working and taking their mutexes, A might never be able to enter.
As a matter of fact, I can contradict it. WaitForMultipleObjects with the waitAll parameter set to FALSE will return if any of the objects are signaled. Here's the documentation :) Set it to TRUE and you'll have it waiting for all objects.
Your solution doesn't scale well, though: add another reading thread, and you're stuck with a third mutex...
The writers/readers problem has been solved many times before, though; why not take a look at existing implementations? It will save you a lot of debugging time, especially if you're not yet familiar with the Windows synchronization API. (Teaser: POSIX threads have a read-write lock, Boost has a shared_mutex.)
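For reference, a minimal pthreads read-write lock sketch (in-process only; for the question's cross-process case you would still need named kernel objects or shared memory):
#include <pthread.h>

pthread_rwlock_t rw = PTHREAD_RWLOCK_INITIALIZER;

void reader_work(void)
{
    pthread_rwlock_rdlock(&rw);    /* many readers may hold this at once */
    /* read the resource */
    pthread_rwlock_unlock(&rw);
}

void writer_work(void)
{
    pthread_rwlock_wrlock(&rw);    /* exclusive: waits until all readers have left */
    /* write the resource */
    pthread_rwlock_unlock(&rw);
}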
Does anyone know how to check and see if a QMutex is locked, without using the function:
bool QMutex::tryLock()
The reason I don't want to use tryLock() is because it does two things:
Check and see if the mutex is locked.
If it's not locked then lock it.
For my purposes, I am not interested in performing this second step (locking the mutex).
I just want to know if it is locked or not.
Trying to lock a mutex is by definition the only way to tell if it's locked; otherwise, when this imaginary function returned, how would you know whether the mutex was still locked? It may have become unlocked while the function was returning; or more importantly, without performing all the cache-flushing and synchronization necessary to lock it, you couldn't actually be sure whether it was locked or not.
OK, I'm guessing there is no real way to do what I'm asking without actually using tryLock().
This could be accomplished with the following code:
bool is_locked = true;

if( a_mutex.tryLock() )
{
    a_mutex.unlock();
    is_locked = false;
}

if( is_locked )
{
    ...
}
As you can see, it unlocks the QMutex, "a_mutex", if it was able to lock it.
Of course, this is not a perfect solution, as by the time it hits the 2nd if statement, the mutex's status could have changed.
Maybe a QSemaphore with one permit? The available() method may give you what you need.
QMutex is designed just for locking and unlocking functionality. Gathering statistics may be satisfied with some custom counters.
Try QSemaphore, as @Luca Carion mentioned earlier.
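A rough sketch of that QSemaphore idea (a one-permit semaphore standing in for the mutex; names are illustrative):
#include <QSemaphore>

QSemaphore sem(1);                  // one permit: behaves like a mutex

// "lock" / "unlock"
sem.acquire();
// ... critical section ...
sem.release();

// elsewhere: peek without acquiring
bool lockedNow = (sem.available() == 0);   // 0 free permits means it is currently held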
static bool isLocked(const QBasicMutex *mut) {
    auto mdata = reinterpret_cast<const QBasicAtomicPointer<QMutexData> *>(mut);
    return mdata->load();
}
This code should work on Qt 5 and doesn't mess with the mutex state.
Every QBasicMutex holds a single (atomic) pointer (called d_ptr) that is NULL if not owned, a special value if it is owned but uncontested, or a pointer to a platform-dependent structure (on Unix, this is basically a pthread mutex) if the mutex is owned and contested.
We need the reinterpret_cast because d_ptr is private.
More info can be found here: https://woboq.com/blog/internals-of-qmutex-in-qt5.html
A legitimate use case is to verify that a mutex is indeed locked, for example if it is a function precondition. I suggest using Q_ASSERT(isLocked(...)) for this purpose.
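For example (Cache, m_mutex and m_data are made-up names), the helper can back an assertion that documents a locking precondition:
void Cache::insertAlreadyLocked(const QString &key, const QByteArray &value)
{
    Q_ASSERT(isLocked(&m_mutex));   // precondition: the caller already holds m_mutex
    m_data.insert(key, value);
}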
Testing for an unlocked mutex is inherently unsafe and should not be done.