Read/Write lock using only critical section causes deadlock [closed] - c++

After going through this question with the same title and its answers, I thought I would try something that should work using only critical sections, and thus should be much faster than the existing solutions (which also use other kernel objects such as mutexes or semaphores).
Here are my Read/Write lock/unlock functions:
#include <windows.h>
typedef struct _RW_LOCK
{
    CRITICAL_SECTION readerCountLock;
    CRITICAL_SECTION writerLock;
    int readerCount;
} RW_LOCK, *PRW_LOCK;

void InitLock(PRW_LOCK rwlock)
{
    InitializeCriticalSection(&rwlock->readerCountLock);
    InitializeCriticalSection(&rwlock->writerLock);
}

void ReadLock(PRW_LOCK rwlock)
{
    EnterCriticalSection(&rwlock->readerCountLock); // In the deadlock, 1 thread waits here (see description below)
    if (++rwlock->readerCount == 1)
    {
        EnterCriticalSection(&rwlock->writerLock);  // In the deadlock, 1 thread waits here
    }
    LeaveCriticalSection(&rwlock->readerCountLock);
}

void ReadUnlock(PRW_LOCK rwlock)
{
    EnterCriticalSection(&rwlock->readerCountLock);
    if (--rwlock->readerCount == 0)
    {
        LeaveCriticalSection(&rwlock->writerLock);
    }
    LeaveCriticalSection(&rwlock->readerCountLock);
}

void WriteLock(PRW_LOCK rwlock)
{
    EnterCriticalSection(&rwlock->writerLock);      // In the deadlock, 3 threads wait here
}

void WriteUnlock(PRW_LOCK rwlock)
{
    LeaveCriticalSection(&rwlock->writerLock);
}
And here is the thread function. After calling InitLock(&g_rwLock); from main, I created FIVE threads to exercise these locks.
void thread_function()
{
    static int value = 0;
    RW_LOCK g_rwLock;
    while (1)
    {
        ReadLock(&g_rwLock);
        BOOL bIsValueOdd = value % 2;
        ReadUnlock(&g_rwLock);

        WriteLock(&g_rwLock);
        value++;
        WriteUnlock(&g_rwLock);
    }
}
Ideally this code should keep running forever without any trouble. But to my disappointment, it doesn't always. Sometimes it ends up in a deadlock. I compiled and ran this on Windows XP. To create the threads I am using a third-party thread-pool library, so I can't post all of that code here; it involves a lot of initialization routines and other plumbing.
To cut a long story short: can anyone, just by looking at the code above, point out what is wrong with this approach?
I've commented in the code above where each of the FIVE threads ends up waiting when the deadlock happens. (I found this out by attaching a debugger to the deadlocked process.)
Any input/suggestions would be really appreciated, as I've been stuck on this for quite some time now (in my greed to make the code run faster than ever).

Spotted two things so far:
You initialize the critical sections in every thread, which is not allowed (behavior is undefined)
You can't leave a critical section in a different thread from the one that entered it ("If a thread calls LeaveCriticalSection when it does not have ownership of the specified critical section object, an error occurs that may cause another thread using EnterCriticalSection to wait indefinitely.")
The latter fits the deadlock you see.
Once you have multiple readers concurrently, you don't control the order in which they call ReadUnlock, so you can't ensure that the first thread in, which is the only one allowed to call LeaveCriticalSection, is the last one out.

This way it cannot run correctly. Consider this interleaving:
Let one thread enter ReadLock(), let it execute the ++ instruction, but pause it just before it enters the writer critical section.
Another thread now calls WriteLock() and successfully enters writerLock.
So now readerCount == 1 while a writer is running at the same time. Note that the reader is stuck on EnterCriticalSection(&rwlock->writerLock) while still holding readerCountLock, so every other reader blocks behind it as well.
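For what it's worth, if the goal is simply a fast user-mode reader/writer lock on Windows, Vista and later already ship one: the slim reader/writer lock (SRWLOCK). Below is a minimal sketch of the same read/write pattern on top of it; note that this API is not available on the Windows XP target mentioned in the question, and the function names here are only illustrative.

#include <windows.h>

SRWLOCK g_srwLock = SRWLOCK_INIT;   // statically initialized, no InitLock needed
static int value = 0;

void ReaderStep()
{
    AcquireSRWLockShared(&g_srwLock);      // any number of readers may hold this at once
    BOOL bIsValueOdd = value % 2;
    (void)bIsValueOdd;
    ReleaseSRWLockShared(&g_srwLock);
}

void WriterStep()
{
    AcquireSRWLockExclusive(&g_srwLock);   // writers get exclusive access
    value++;
    ReleaseSRWLockExclusive(&g_srwLock);
}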

Related

abort() is called when I try to terminate a std::thread [closed]

Note: I'm using WinForms & C++17.
So I was working on a school project. I have this function:
bool exs::ExprSimplifier::simplify()
{
    bool completed = false;
    std::thread thread1(&ExprSimplifier::internalSimplity, this, std::ref(completed));
    while (true)
    {
        if (completed)
        {
            thread1.~thread(); // calls abort()
            return true;
        }
        if (GetAsyncKeyState(27))
        {
            thread1.~thread(); // calls abort()
            return false;
        }
    }
}
Basically what I want is to run the following function:
// at the end of this function, I set completed = true
void exs::ExprSimplifier::internalSimplity(bool& completed)
..on another thread. I also want to check, while the function is doing its thing, whether the user has pressed the Esc key, and if so terminate the thread. But that's where I'm facing issues. This:
thread1.~thread();
..is calling abort() and crashing the application. I think this is due to some scoping behaviour of std::thread, but I'm not really sure.
Questions:
What's the reason for this?
What can I do to fix this?
You can't terminate threads - end of story. In the past, they tried to make it so you could terminate threads, but they realized it's impossible to do it without crashing, so now you can't. (E.g. Windows had a TerminateThread function, because it's old. C++ doesn't have it, because C++ threads are new)
The only thing you can do is set a variable that tells the thread to stop, and then wait for it to stop.
~thread doesn't terminate threads, anyway. All it does is check that you remembered to call join or detach, and if you forgot to call one of them, it aborts, as you are seeing.
What you typically want to do is something along this general line:
#include <atomic>
#include <thread>

class doWhatever {
    std::atomic<bool> stopRequested {false};   // renamed so it doesn't clash with stop()
    std::thread t;
public:
    void run() {
        t = std::thread([this] {               // capture this so the flag is visible in the lambda
            while (!stopRequested) {
                doSomeProcessing();
            }
        });
    }
    void stop() {
        stopRequested = true;
    }
    ~doWhatever() {
        stop();                                // make sure the worker has been told to quit
        if (t.joinable())
            t.join();                          // then wait for it before destroying t
    }
    void doSomeProcessing();                   // whatever the thread actually does
};
Exactly what you're going to do in doSomeProcessing obviously varies depending on what you really want your thread to do. Some threads have a queue of incoming tasks, and check the variable before processing each incoming task. Others have one long task to do, but if they do there's typically some loop in which you can check the variable at each iteration.
At least in my opinion, for a lot of situations, the ideal is that you check whether you've been asked to shut down something like once every 100 ms. This gives a nice balance--to a person, shutting down within 100 ms of telling it to looks nearly instantaneous, but you're still checking infrequently enough that it doesn't affect execution speed enough to notice.
If you have a new enough compiler to support it, you may prefer to use std::jthread instead of std::thread. It basically includes an equivalent of the std::atomic<bool> in the thread object.
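A minimal C++20 sketch of that, using the std::stop_token that std::jthread passes to its callable (the sleep here is just a stand-in for the real work):

#include <chrono>
#include <stop_token>
#include <thread>

int main()
{
    // std::jthread hands the callable a std::stop_token and, in its destructor,
    // automatically calls request_stop() and then join().
    std::jthread worker([](std::stop_token st) {
        while (!st.stop_requested()) {
            // doSomeProcessing();  // placeholder for the real work
            std::this_thread::sleep_for(std::chrono::milliseconds(100));
        }
    });

    std::this_thread::sleep_for(std::chrono::seconds(1));
    worker.request_stop();  // ask the worker to finish; also happens implicitly on destruction
}                           // destructor joins the (already stopping) worker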

C++: Condition Variable - Is there a mistake in this youtube demo? [closed]

Youtube details
I have been browsing youtube to try and develop my understanding of C++ multithread support with mutex and condition variables.
I came across this video. Skip to time 6:30 to see what I am currently looking at. (A page of code.)
https://www.youtube.com/watch?v=eh_9zUNmTig
I believe there is a mistake in the code, but I wanted to check. It could just as well be that I don't understand something.
Question
The author states that std::unique_lock locks the mutex on creation. Meaning that there is no need to call
unique_lock<mutex> lock(m)
lock.lock(); // this is wrong, because unique_lock already locked the mutex
after creating a unique_lock object.
I assume, although I do not know for certain, that unique_lock will release the mutex lock on destruction (i.e. when it goes out of scope).
Can it also be unlocked manually by calling
lock.unlock()
? From the documentation it appears there is no such unlock function. It looks like unique_lock is therefore the same as scoped_lock? But again, I'm assuming this isn't the case and there's some other information I am missing.
Continuing... The author has a function which looks like this:
void addMoney(int money)
{
    std::lock_guard<mutex> lg(m); // lock_guard being used interchangeably with unique_lock - why?
    balance += money;             // adding to global variable
    cv.notify_one();              // error here
    // the lock_guard is still in scope
    // the mutex is still locked
    // calling notify_one() may cause the sleeping thread to wake up,
    // check if the mutex is still locked (which it might be if the
    // destructor for lg hasn't finished running)
    // and then go back to sleep
    // meaning this line of code may have no effect
    // it is undefined behaviour
}
I have annotated where I believe there is an error. I think this function causes undefined behaviour, because the lock_guard is still in scope and therefore the mutex might be locked.
Effectively it is a race condition:
If addMoney() ends before the other function begins, we are ok
If the other function withdrawMoney() checks the lock (cv.wait()) before addMoney() exits then the program breaks, and remains in a locked state
For completeness here is the other function:
void withdrawMoney(int money)
{
    std::unique_lock<mutex> ul(m); // unique_lock instead of scoped_lock? why?
    cv.wait(ul, []{ return balance != 0; });
    // some more stuff omitted
}
Summary
There are a couple of points I have raised
Most importantly the race condition
Of secondary importance, why are two different things (lock_guard and unique_lock) being used to do what appears to be the same thing (performing the same function)
That comment
// calling notify_one() may cause the sleeping thread to wake up
// check if the mutex is still locked (which it might be if the
// destructor for lg hasn't finished running)
// and then go back to sleep
is incorrect. There are two separate control mechanisms here: the condition variable and the mutex. Waking up on a notification to a condition variable means, simply, waking up. After waking up, the thread blocks waiting for the mutex. When the mutex is released by the thread that called notify_one(), the blocked thread (or perhaps some other thread, but eventually, the blocked thread) gets the mutex and continues execution. It does not go back to waiting for the condition variable.
For some more explanation on std::unique_lock vs std::lock_guard, see this question.
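To the side question above: std::unique_lock does in fact have lock() and unlock() member functions (that flexibility is what cv.wait() relies on), while std::lock_guard has neither and releases only at end of scope. A small illustrative sketch:

#include <mutex>

std::mutex m;

void with_unique_lock()
{
    std::unique_lock<std::mutex> ul(m);  // locks m on construction
    // ... work under the lock ...
    ul.unlock();                         // can be unlocked manually
    // ... work outside the lock ...
    ul.lock();                           // and re-locked later
}                                        // released again (if owned) when ul is destroyed

void with_lock_guard()
{
    std::lock_guard<std::mutex> lg(m);   // locks on construction
    // ... no lock()/unlock() members; released only when lg goes out of scope
}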
There is no undefined behavior when sending the notification while the mutex is still locked. It might cause an unnecessary thread switch, especially if the receiving thread has a higher priority, but this is just a small performance hit. It is also not necessary to hold the mutex while sending the notification, so the function may be written as:
void addMoney(int money)
{
    {
        std::lock_guard<mutex> lg(m);
        balance += money;
    }
    cv.notify_one();
}
You have to make sure that the resources for the condition are protected when changing and when checked.
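Putting the pieces together, here is a self-contained sketch of the video's example. The names balance, m and cv are carried over from the snippets above; the int type for balance and the amounts are assumptions for illustration. Starting the balance at 0 forces withdrawMoney to wait until addMoney has run.

#include <condition_variable>
#include <iostream>
#include <mutex>
#include <thread>

std::mutex m;
std::condition_variable cv;
int balance = 0;

void addMoney(int money)
{
    {
        std::lock_guard<std::mutex> lg(m);  // protect the shared balance
        balance += money;
    }
    cv.notify_one();                        // notify after releasing the lock
}

void withdrawMoney(int money)
{
    std::unique_lock<std::mutex> ul(m);
    cv.wait(ul, [] { return balance != 0; });  // predicate is re-checked after every wake-up
    balance -= money;
    std::cout << "Balance after withdrawal: " << balance << "\n";
}

int main()
{
    std::thread t1(withdrawMoney, 500);
    std::thread t2(addMoney, 500);
    t1.join();
    t2.join();
}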

How do I make a thread wait without polling?

I have a question about multithreading in C++. I have a scenario as follows:
void ThreadedRead(int32_t thread_num, BinReader reader) {
    while (!reader.endOfData) {
        thread_buckets[thread_num].clear();
        thread_buckets[thread_num] = reader.readnextbatch();
        thread_flags[thread_num] = THREAD_WAITING;
        while (thread_flags[thread_num] != THREAD_RUNNING) {
            // wait until awakened
            if (thread_flags[thread_num] != THREAD_RUNNING) {
                // go back to sleep
            }
        }
    }
    thread_flags[thread_num] = THREAD_FINISHED;
}
No section of the above code writes to or accesses memory shared between threads. Each thread is assigned a thread_num and a unique reader object that it may use to read data.
I want the main thread to be able to notify a thread that is in the THREAD_WAITING state that its state has been changed back to THREAD_RUNNING and it needs to do some work. I don't want it to keep polling its state.
I understand condition variables and mutexes can help me, but I'm not sure how to use them, because I don't want to acquire (or need) a lock. How can the main thread blanket-notify all waiting threads that they are now free to read more data?
EDIT:
Just in case anyone needs more details
1) reader reads some files
2) thread_buckets is a vector of vectors of uint16
3) thread_flags is an int vector
They have all been resized appropriately.
I realize that you wrote that you wanted to avoid condition variables and locks. On the other hand you mentioned that this was because you were not sure about how to use them. Please consider the following example to get the job done without polling:
The trick with the condition variables is that a single condition_variable object together with a single mutex object will do the management for you including the handling of the unique_lock objects in the worker threads. Since you tagged your question as C++ I assume you are talking about C++11 (or higher) multithreading (I guess that C-pthreads may work similarly). Your code could be as follows:
// compile for C++11 or higher
#include <thread>
#include <condition_variable>
#include <mutex>

// objects visible to both master and workers:
std::condition_variable cvr;
std::mutex mtx;

void ThreadedRead(int32_t thread_num, BinReader reader) {
    while (!reader.endOfData) {
        thread_buckets[thread_num].clear();
        thread_buckets[thread_num] = reader.readnextbatch();
        std::unique_lock<std::mutex> myLock(mtx);
        // This lock will be managed by the condition variable!
        thread_flags[thread_num] = THREAD_WAITING;
        while (thread_flags[thread_num] == THREAD_WAITING) {
            cvr.wait(myLock);
            // ...must be in a loop as shown because of potential spurious wake-ups
        }
    }
    thread_flags[thread_num] = THREAD_FINISHED;
}
To (re-)activate the workers from a master thread:
{   // block...
    // step 1: usually make sure that there is no worker still preparing itself at the moment
    std::unique_lock<std::mutex> someLock(mtx);
    // (in your case this would not cover workers currently busy with reader.readnextbatch(),
    //  these would not be re-started this time...)

    // step 2: set all worker threads that should work now to THREAD_RUNNING
    for (...looping over the workers' flags...) {
        if (...corresponding worker should run now...) {
            flag = THREAD_RUNNING;
        }
    }

    // step 3: signal the workers to run now
    cvr.notify_all();
}   // ...end of block, releasing someLock
Notice:
If you just want to trigger all sleeping workers you should control them with a single flag instead of a container of flags.
If you want to trigger single sleeping workers but it doesn't matter which one consider the .notify_one() member function instead of .notify_all(). Note as well that also in this case a single mutex/condition_variable pair is sufficient.
The flags should better be placed in an atomic object such as a global std::atomic<int> or maybe for finer control in a std::vector<std::atomic<int>>.
A good introduction to std::condition_variable which also inspired the suggested solution is given in: cplusplus website
It looks like there are a few issues. For one thing, you do not need the conditional inside of your loop:
while (thread_flags[thread_num] != THREAD_RUNNING);
will work by itself. As soon as that condition is false, the loop will exit.
If all you want to do is avoid checking thread_flags as fast as the CPU can spin, just put a short sleep in the loop:
while (thread_flags[thread_num] != THREAD_RUNNING) std::this_thread::sleep_for(std::chrono::milliseconds(100));
This causes the thread to give up the CPU so that it can do other things while the thread waits for its state to change, and it makes the overhead of polling close to negligible. You can experiment with the sleep duration to find a good value; 100 ms is probably on the long side.
Depending on what causes the thread state to change, you could have the thread poll that condition/value directly (still with a sleep in the loop) and not bother with states at all.
There are a lot of options here. If you look up reader threads you can probably find just what you want; having a separate reader thread is very common.

Run threads in parallel in C++ [duplicate]

This question already has answers here:
std::thread - "terminate called without an active exception", don't want to 'join' it
(3 answers)
Closed 9 years ago.
I am trying to do a Dekker's algorithm implementation for homework. I understand the concept, but I'm not able to execute two threads in parallel using C++0x.
#include <thread>
#include <iostream>
using namespace std;

class Homework2 {
public:
    void run() {
        try {
            thread c1(&Homework2::output_one, this);
            thread c2(&Homework2::output_two, this);
        } catch (int e) {
            cout << e << endl;
        }
    }
    void output_one() {
        //cout << "output one" << endl;
    }
    void output_two() {
        //cout << "output two" << endl;
    }
};

int main() {
    try {
        Homework2 p2;
        p2.run();
    } catch (int e) {
        cout << e << endl;
    }
    return 0;
}
My problem is that the threads will return this error:
terminate called without an active exception
Aborted
The only thing that has worked for me so far has been adding c1.join(); c2.join(); or .detach();
The problem is that join() will wait for the threads to finish, and detach()... well, I'm not sure what detach does, because there is no error but also no output; I guess it leaves the threads on their own...
So, all this to say: does anybody know how I can get both threads to run in parallel and not sequentially?
Any help is much appreciated!
Thanks.
P.S:
here is what I do to build:
g++ -o output/Practica2.out main.cpp -pthread -std=c++11
The only thing that has worked for me so far has been adding c1.join(); c2.join(); or .detach();...
After you have spawned the 2 threads, your main thread continues on and, based on your code, run() ends 'pretty' quick (p2.run() then return 0; are relatively close in CPU instruction 'time'). When run() returns, the local thread objects c1 and c2 are destroyed while they are still joinable, i.e. you never called join() or detach() on them, and the destructor of a joinable std::thread calls std::terminate() - which is exactly the "terminate called without an active exception" abort you are seeing.
Calling join on the spawned threads from the thread you spawned them from allows the threads to finish and be cleaned up properly (under the hood) before the thread objects are destroyed and your program exits (a good thing). Calling detach works in this scenario too, as it releases the executing thread from your thread object while keeping the thread alive. In the case of calling detach there were no errors reported because the thread objects were detached from the executing threads and so were no longer joinable when destroyed; when your program exited, the kernel (nicely) cleaned up the threads for you (or at least that's what might happen, it depends on the OS/compiler implementation, etc.), so you didn't see your threads ending 'uncleanly'.
So all this to say: Does anybody know how I can get both threads to run in parallel and not sequentially?
I think you might have some confusion about how threads work. Your threads already run in 'parallel' (so to speak); that is the nature of a thread. The code you posted does not have anything that is 'parallel' in nature (i.e. parallel computation over data), but your threads are running concurrently (at the same time, or 'parallel' to each other).
If you want your main thread to continue without putting the join in the run function, that would require a little more code than what you currently have and I don't want to assume how your code's future should look, but you could take a look at these two questions regarding the std::thread as a member of a class (and executing within such).
I hope that can help.
OK, this is a bit more complex, but I will try to explain some things about your code.
When you create the threads in the method called run, you want to print two things (imagine you uncomment the lines), but the thread objects are destroyed in the stack unwinding of the method which created them (run).
You actually need to do two things: first, create the threads and keep them alive (for example by storing them somewhere that outlives run), and second, call join on them to release all the memory and other resources they needed once they are finished.
You can store your threads in a vector, something like std::vector<std::thread*> (or simply std::vector<std::thread>) - see the sketch below.
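A sketch of that idea, here using std::vector<std::thread> (values rather than raw pointers, so ownership is clear); the wait() method name is just illustrative:

#include <thread>
#include <vector>

class Homework2 {
    std::vector<std::thread> workers;
public:
    void run() {
        workers.emplace_back(&Homework2::output_one, this);
        workers.emplace_back(&Homework2::output_two, this);
        // both threads are now running concurrently; run() returns immediately
    }
    void wait() {                        // call this when the work must be finished
        for (auto& t : workers)
            if (t.joinable())
                t.join();
    }
    void output_one() { /* ... */ }
    void output_two() { /* ... */ }
};

main() would call p2.run(), do whatever else it needs to do, and only call p2.wait() at the end.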

Mutex example / tutorial? [closed]

I was trying to understand how mutexes work. I did a lot of Googling, but it still left some doubts about how they work, because I created my own program in which locking didn't work.
One absolutely non-intuitive syntax of the mutex is pthread_mutex_lock( &mutex1 );, where it looks like the mutex is being locked, when what I really want to lock is some other variable. Does this syntax mean that locking a mutex locks a region of code until the mutex is unlocked? Then how do threads know that the region is locked? [UPDATE: Threads know that the region is locked, by Memory Fencing ]. And isn't such a phenomenon supposed to be called critical section? [UPDATE: Critical section objects are available in Windows only, where the objects are faster than mutexes and are visible only to the thread which implements it. Otherwise, critical section just refers to the area of code protected by a mutex]
What's the simplest possible mutex example program and the simplest possible explanation on the logic of how it works?
Here goes my humble attempt to explain the concept to newbies around the world: (a color coded version on my blog too)
A lot of people run to a lone phone booth (they don't have mobile phones) to talk to their loved ones. The first person to catch the door-handle of the booth, is the one who is allowed to use the phone. He has to keep holding on to the handle of the door as long as he uses the phone, otherwise someone else will catch hold of the handle, throw him out and talk to his wife :) There's no queue system as such. When the person finishes his call, comes out of the booth and leaves the door handle, the next person to get hold of the door handle will be allowed to use the phone.
A thread is : Each person
The mutex is : The door handle
The lock is : The person's hand
The resource is : The phone
Any thread which has to execute some lines of code which should not be modified by other threads at the same time (using the phone to talk to his wife), has to first acquire a lock on a mutex (clutching the door handle of the booth). Only then will a thread be able to run those lines of code (making the phone call).
Once the thread has executed that code, it should release the lock on the mutex so that another thread can acquire a lock on the mutex (other people being able to access the phone booth).
[The concept of having a mutex is a bit absurd when considering real-world exclusive access, but in the programming world I guess there was no other way to let the other threads 'see' that a thread was already executing some lines of code. There are concepts of recursive mutexes etc, but this example was only meant to show you the basic concept. Hope the example gives you a clear picture of the concept.]
With C++11 threading:
#include <iostream>
#include <thread>
#include <mutex>

std::mutex m; //you can use std::lock_guard if you want to be exception safe
int i = 0;

void makeACallFromPhoneBooth()
{
    m.lock();   //man gets a hold of the phone booth door and locks it. The other men wait outside
    //man happily talks to his wife from now....
    std::cout << i << " Hello Wife" << std::endl;
    i++;        //no other thread can access variable i until m.unlock() is called
    //...until now, with no interruption from other men
    m.unlock(); //man lets go of the door handle and unlocks the door
}

int main()
{
    //This is the main crowd of people uninterested in making a phone call
    //man1 leaves the crowd to go to the phone booth
    std::thread man1(makeACallFromPhoneBooth);
    //Although man2 appears to start second, there's a good chance he might
    //reach the phone booth before man1
    std::thread man2(makeACallFromPhoneBooth);
    //And hey, man3 also joined the race to the booth
    std::thread man3(makeACallFromPhoneBooth);
    man1.join(); //man1 finished his phone call and joins the crowd
    man2.join(); //man2 finished his phone call and joins the crowd
    man3.join(); //man3 finished his phone call and joins the crowd
    return 0;
}
Compile and run using g++ -std=c++0x -pthread -o thread thread.cpp;./thread
Instead of explicitly calling lock and unlock, you can use a scoped lock inside braces, as shown here, for the advantage it provides (it unlocks automatically even if an exception is thrown). Scoped locks have a slight performance overhead, though.
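For example, the same function written with a scoped lock might look like this (a sketch reusing the m and i globals from the program above):

#include <iostream>
#include <mutex>

void makeACallFromPhoneBooth()
{
    std::lock_guard<std::mutex> lock(m); // locks m here...
    std::cout << i << " Hello Wife" << std::endl;
    i++;
}                                        // ...and unlocks it automatically when 'lock'
                                         // goes out of scope, even if an exception is thrown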
While a mutex may be used to solve other problems, the primary reason they exist is to provide mutual exclusion and thereby solve what is known as a race condition. When two (or more) threads or processes are attempting to access the same variable concurrently, we have potential for a race condition. Consider the following code
//somewhere long ago, we have i declared as int
void my_concurrently_called_function()
{
    i++;
}
The internals of this function look so simple. It's only one statement. However, a typical pseudo-assembly language equivalent might be:
load i from memory into a register
add 1 to i
store i back into memory
Because the equivalent assembly-language instructions are all required to perform the increment operation on i, we say that incrementing i is a non-atomic operation. An atomic operation is one that can be completed on the hardware with a guarantee of not being interrupted once the instruction execution has begun. Incrementing i consists of a chain of 3 atomic instructions. In a concurrent system where several threads are calling the function, problems arise when a thread reads or writes at the wrong time. Imagine we have two threads running simultaneously and one calls the function immediately after the other. Let's also say that we have i initialized to 0. Also assume that we have plenty of registers and that the two threads are using completely different registers, so there will be no collisions. The actual timing of these events may be:
thread 1 load 0 into register from memory corresponding to i   //register is currently 0
thread 1 add 1 to the register                                 //register is now 1, but memory is still 0
thread 2 load 0 into register from memory corresponding to i
thread 2 add 1 to the register                                 //register is now 1, but memory is still 0
thread 1 write register to memory                              //memory is now 1
thread 2 write register to memory                              //memory is now 1
What's happened is that we have two threads incrementing i concurrently, our function gets called twice, but the outcome is inconsistent with that fact. It looks like the function was only called once. This is because the atomicity is "broken" at the machine level, meaning threads can interrupt each other or work together at the wrong times.
We need a mechanism to solve this. We need to impose some ordering to the instructions above. One common mechanism is to block all threads except one. Pthread mutex uses this mechanism.
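A minimal sketch of that fix for the increment example, using a pthread mutex (the mutex name here is illustrative):

#include <pthread.h>

int i = 0;
pthread_mutex_t i_mutex = PTHREAD_MUTEX_INITIALIZER;

void my_concurrently_called_function()
{
    pthread_mutex_lock(&i_mutex);   // only one thread at a time gets past this line
    i++;                            // the load/add/store sequence can no longer interleave
    pthread_mutex_unlock(&i_mutex); // allow the next waiting thread in
}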
Any thread which has to execute some lines of code which may unsafely modify values shared with other threads at the same time (using the phone to talk to his wife) should first be made to acquire a lock on a mutex. In this way, any thread that requires access to the shared data must pass through the mutex lock. Only then will a thread be able to execute that code. This section of code is called a critical section.
Once the thread has executed the critical section, it should release the lock on the mutex so that another thread can acquire a lock on the mutex.
The concept of having a mutex seems a bit odd when considering humans seeking exclusive access to real, physical objects but when programming, we must be intentional. Concurrent threads and processes don't have the social and cultural upbringing that we do, so we must force them to share data nicely.
So technically speaking, how does a mutex work? Doesn't it suffer from the same race conditions that we mentioned earlier? Isn't pthread_mutex_lock() a bit more complex than a simple increment of a variable?
Technically speaking, we need some hardware support to help us out. The hardware designers give us machine instructions that do more than one thing but are guaranteed to be atomic. A classic example of such an instruction is the test-and-set (TAS). When trying to acquire a lock on a resource, we can use TAS to check whether the value in memory is already non-zero. If it is, that is our signal that the resource is in use and we do nothing (or, more accurately, we wait by some mechanism: a pthreads mutex will put us into a special queue in the operating system and will notify us when the resource becomes available; dumber systems may require us to do a tight spin loop, testing the condition over and over). If the value in memory is 0, TAS sets the location to something other than 0 without using any other instructions, and we now own the lock. It's like combining two assembly instructions into one to give us atomicity. Thus, testing and changing the value (if changing is appropriate) cannot be interrupted once it has begun. We can build mutexes on top of such an instruction.
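As an illustration only, C++11 exposes test-and-set directly through std::atomic_flag, so a naive spinning lock can be sketched like this (a real mutex additionally parks waiting threads in the kernel instead of spinning):

#include <atomic>

std::atomic_flag lock_flag = ATOMIC_FLAG_INIT;

void spin_lock()
{
    // test_and_set() atomically sets the flag and returns its *previous* value,
    // so keep spinning until the previous value was "clear" (the lock was free).
    while (lock_flag.test_and_set(std::memory_order_acquire))
        ;   // busy-wait
}

void spin_unlock()
{
    lock_flag.clear(std::memory_order_release);
}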
Note: some sections may appear similar to an earlier answer. I accepted his invite to edit, he preferred the original way it was, so I'm keeping what I had which is infused with a little bit of his verbiage.
I stumbled upon this post recently and think that it needs an updated solution for the standard library's C++11 mutex (namely std::mutex).
I've pasted some code below (my first steps with a mutex - I learned concurrency on win32 with HANDLE, SetEvent, WaitForMultipleObjects etc).
Since it's my first attempt with std::mutex and friends, I'd love to see comments, suggestions and improvements!
#include <condition_variable>
#include <mutex>
#include <algorithm>
#include <thread>
#include <queue>
#include <chrono>
#include <iostream>

int _tmain(int argc, _TCHAR* argv[])
{
    // these vars are shared among the following threads
    std::queue<unsigned int> nNumbers;

    std::mutex mtxQueue;
    std::condition_variable cvQueue;
    bool m_bQueueLocked = false;

    std::mutex mtxQuit;
    std::condition_variable cvQuit;
    bool m_bQuit = false;

    std::thread thrQuit(
        [&]()
        {
            using namespace std;
            this_thread::sleep_for(chrono::seconds(5));
            // set event by setting the bool variable to true
            // then notifying via the condition variable
            m_bQuit = true;
            cvQuit.notify_all();
        }
    );

    std::thread thrProducer(
        [&]()
        {
            using namespace std;
            int nNum = 13;
            unique_lock<mutex> lock( mtxQuit );
            while ( ! m_bQuit )
            {
                while( cvQuit.wait_for( lock, chrono::milliseconds(75) ) == cv_status::timeout )
                {
                    nNum = nNum + 13 / 2;
                    unique_lock<mutex> qLock(mtxQueue);
                    cout << "Produced: " << nNum << "\n";
                    nNumbers.push( nNum );
                }
            }
        }
    );

    std::thread thrConsumer(
        [&]()
        {
            using namespace std;
            unique_lock<mutex> lock(mtxQuit);
            while( cvQuit.wait_for(lock, chrono::milliseconds(150)) == cv_status::timeout )
            {
                unique_lock<mutex> qLock(mtxQueue);
                if( nNumbers.size() > 0 )
                {
                    cout << "Consumed: " << nNumbers.front() << "\n";
                    nNumbers.pop();
                }
            }
        }
    );

    thrQuit.join();
    thrProducer.join();
    thrConsumer.join();
    return 0;
}
For those looking for the shortest mutex example:
#include <mutex>

int main() {
    std::mutex m;
    m.lock();
    // do thread-safe stuff
    m.unlock();
}
The function pthread_mutex_lock() either acquires the mutex for the calling thread or blocks the thread until the mutex can be acquired. The related pthread_mutex_unlock() releases the mutex.
Think of the mutex as a queue; every thread that attempts to acquire the mutex will be placed on the end of the queue. When a thread releases the mutex, the next thread in the queue comes off and is now running.
A critical section refers to a region of code where non-determinism is possible. Often this is because multiple threads are attempting to access a shared variable. The critical section is not safe until some sort of synchronization is in place. A mutex lock is one form of synchronization.
You are supposed to acquire the mutex before using the area it protects. pthread_mutex_lock() waits (blocks) until mutex1 is released if someone else has already locked it; its sibling pthread_mutex_trylock() instead returns immediately with a value indicating that the lock could not be obtained.
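A small sketch of the non-blocking variant (the function name here is illustrative):

#include <errno.h>
#include <pthread.h>

pthread_mutex_t mutex1 = PTHREAD_MUTEX_INITIALIZER;

void do_work_if_free()
{
    int rc = pthread_mutex_trylock(&mutex1);  // returns immediately, never blocks
    if (rc == 0) {
        /* ... we own mutex1: touch the protected data ... */
        pthread_mutex_unlock(&mutex1);
    } else if (rc == EBUSY) {
        /* ... someone else holds mutex1: do something else instead of waiting ... */
    }
}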
A mutex is really just a simplified semaphore. If you read about them and understand them, you understand mutexes. There are several questions regarding mutexes and semaphores on SO: Difference between binary semaphore and mutex, When should we use mutex and when should we use semaphore, and so on. The toilet example in the first link is about as good an example as one can think of. All the code does is check whether the key is available and, if it is, reserve it. Notice that you don't really reserve the toilet itself, but the key.
SEMAPHORE EXAMPLE:
#include <semaphore.h>

sem_t m;
sem_init(&m, 0, 1); // initialize semaphore to 1 (a binary semaphore used as a lock,
                    // so the first sem_wait() can proceed)

sem_wait(&m);
// critical section here
sem_post(&m);
Reference : http://pages.cs.wisc.edu/~remzi/Classes/537/Fall2008/Notes/threads-semaphores.txt