Here is the sample code:
#include <iostream>
#include <thread>
#include <mutex>
int value = 0;
void criticalSection(int changeValue)
{
std::mutex mtx;
std::unique_lock<std::mutex> uniqueLock(mtx);
value = changeValue;
std::cout << value << std::endl;
uniqueLock.unlock();
uniqueLock.lock();
++value;
std::cout << value << std::endl;
}
int main(int argc, char **argv)
{
std::thread t1(criticalSection, 1), t2(criticalSection, 2);
t1.join();
t2.join();
return 0;
}
My question is: what is the scope of mtx in the above code? Will each thread create its own mtx? Is there any difference if I declare mtx as a global variable instead of a local variable?
I just started to learn multithreading in C++. Thank you very much for your help.
There are no special scoping rules for mutexes. The scope of mtx ends at the closing } of the block it is declared in. In your code every call to criticalSection creates a new mutex instance, so the mutex cannot possibly be used to synchronize the two threads. To do so, both threads would need to use the same mutex object.
You can pass a reference to a mutex to the function:
#include <iostream>
#include <thread>
#include <mutex>
int value = 0;
void criticalSection(int changeValue,std::mutex& mtx)
{
std::unique_lock<std::mutex> uniqueLock(mtx);
//...
}
int main(int argc, char **argv)
{
std::mutex mtx;
std::thread t1(criticalSection, 1,std::ref(mtx)), t2(criticalSection, 2,std::ref(mtx));
t1.join();
t2.join();
return 0;
}
Alternatively you could use a std::atomic<int> value; and drop the mutex when value is the only shared state.
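For example, a minimal sketch of the atomic alternative, mirroring the original code (note that the output of the two threads can still interleave):
#include <atomic>
#include <iostream>
#include <thread>

std::atomic<int> value{0};   // atomic replaces the mutex when value is the only shared state

void criticalSection(int changeValue)
{
    value = changeValue;              // atomic store
    std::cout << value << std::endl;  // output from the two threads may still interleave
    ++value;                          // atomic increment, no lock needed
    std::cout << value << std::endl;
}

int main()
{
    std::thread t1(criticalSection, 1), t2(criticalSection, 2);
    t1.join();
    t2.join();
    return 0;
}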
The mutex type doesn't change C++ rules on how variables and objects work.
If you declare a non-static variable on the stack, each call to that function will create its own version of that variable, regardless of which thread makes the call. These are all separate and distinct objects.
When you lock a mutex, you are locking that object. Other objects of the same type are unrelated to that lock.
To make a mutex useful, both of the pieces of code trying to lock it must be locking the same object. How you accomplish this is ultimately up to your needs.
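One common way to do that here is a single mutex at namespace scope, which also answers the global-variable part of the question; a minimal sketch (the body of the function is simplified):
#include <iostream>
#include <mutex>
#include <thread>

int value = 0;
std::mutex valueMutex;   // one mutex object, shared by every call in every thread

void criticalSection(int changeValue)
{
    std::lock_guard<std::mutex> lock(valueMutex);  // both threads lock the same object
    value = changeValue;
    std::cout << value << std::endl;
}

int main()
{
    std::thread t1(criticalSection, 1), t2(criticalSection, 2);
    t1.join();
    t2.join();
    return 0;
}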
Related
I would like to share a list between two threads in C++. I would like to keep it very simple, without a FIFO or shared memory, so I just use a mutex and locks.
I tried it this way and it's working:
#include <string.h>
#include <mutex>
#include <iostream>
#include <thread>
#include <list>
std::list<int> myList;
std::mutex list_mutex;
void client(){
std::lock_guard<std::mutex> guard(list_mutex);
myList.push_back(4);
};
void server(){
std::lock_guard<std::mutex> guard(list_mutex);
myList.push_back(2);
};
void print(std::list<int> const &list)
{
for (auto const& i: list) {
std::cout << i << "\n";
}
};
int main(int ac, char** av)
{
std::mutex list_mutex;
std::thread t1(client);
std::thread t2(server);
t1.join();
t2.join();
print(myList);
std::cout<<"test";
return 0;
}
And it prints me this
24test
This is fine, it works. HOWEVER, I'm not sure I'm using the same lock? My supervisor wants me to have explicit Lock/Unlock in the code. At least, can I use the same mutex?
Thank you very much for helping me.
Ted's comment is important: what you are working with are threads, not processes. Processes don't share memory (besides using Shared Memory, but you wanted to avoid that). Threads share their entire memory space with each other.
You also mentioned that your supervisor wants you to use explicit lock/unlock sections. You could do this by calling:
list_mutex.lock();
// ... critical section ...
list_mutex.unlock();
But you already do this implicitly by constructing a lock_guard. The lock_guard locks when you create it and unlocks at the end of the current scope.
As noted by Ted, you need to remove the second declaration of list_mutex (inside main).
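Putting both points together, here is a minimal sketch of the corrected program with explicit lock()/unlock() calls (keeping your function and variable names; the printing is simplified):
#include <iostream>
#include <list>
#include <mutex>
#include <thread>

std::list<int> myList;
std::mutex list_mutex;   // the single shared mutex; no second declaration inside main

void client() {
    list_mutex.lock();     // explicit lock, as your supervisor asked
    myList.push_back(4);
    list_mutex.unlock();   // explicit unlock
}

void server() {
    list_mutex.lock();
    myList.push_back(2);
    list_mutex.unlock();
}

int main() {
    std::thread t1(client);
    std::thread t2(server);
    t1.join();
    t2.join();
    for (int i : myList) std::cout << i << "\n";
    return 0;
}
Keep in mind that lock_guard is still the safer option: if push_back throws, the explicit unlock() is never reached and the mutex stays locked, whereas a lock_guard unlocks during stack unwinding.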
I have a function that must not be called from more than one thread at the same time. Can you suggest some elegant assert for this?
You can use a thin RAII wrapper around std::atomic<>:
namespace {
std::atomic<int> access_counter;
struct access_checker {
access_checker() { check = ++access_counter; }
access_checker( const access_checker & ) = delete;
~access_checker() { --access_counter; }
int check;
};
}
void foobar()
{
access_checker checker;
// assert that checker.check == 1 and react accordingly
...
}
This is a simplified version for a single function, to show the idea; it can be extended to cover multiple functions if necessary.
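Filled in, the usage might look like this (a sketch; it uses the access_checker defined above, requires <cassert>, and asserting is only one possible reaction):
#include <cassert>

void foobar()
{
    access_checker checker;  // from the anonymous namespace above
    // check == 1 means no other thread is currently inside foobar
    assert(checker.check == 1 && "foobar must not be called concurrently");
    // ... rest of foobar ...
}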
Sounds like you need a mutex. Assuming you are using std::thread, you can look at the coding example at the following link, specifically for using std::mutex: http://www.cplusplus.com/reference/mutex/mutex/
// mutex example
#include <iostream> // std::cout
#include <thread> // std::thread
#include <mutex> // std::mutex
std::mutex mtx; // mutex for critical section
void print_block (int n, char c) {
// critical section (exclusive access to std::cout signaled by locking mtx):
mtx.lock();
for (int i=0; i<n; ++i) { std::cout << c; }
std::cout << '\n';
mtx.unlock();
}
int main ()
{
std::thread th1 (print_block,50,'*');
std::thread th2 (print_block,50,'$');
th1.join();
th2.join();
return 0;
}
In the above code print_block locks mtx, does what it needs to do, and then unlocks mtx. If print_block is called from two different threads, one thread will lock mtx first and the other thread will block on mtx.lock() and be forced to wait until the first thread calls mtx.unlock(). This means only one thread at a time can execute the code between mtx.lock() and mtx.unlock().
This assumes by "at the same time" you mean at the same literal time. If you only want one thread to be able to call a function I would recommend looking into std::this_thread::get_id which will get you the id of the current thread. An assert could be as simple as storing the owning thread in owning_thread_id and then calling assert(owning_thread_id == std::this_thread::get_id()).
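A possible sketch of that thread-id check (claim_ownership and owning_thread_id are illustrative names, not from any library; the owner must be recorded before any checked call):
#include <cassert>
#include <thread>

std::thread::id owning_thread_id;   // illustrative: id of the only thread allowed in

void claim_ownership()              // call once, from the designated thread, before any checked call
{
    owning_thread_id = std::this_thread::get_id();
}

void must_run_on_owner_thread()
{
    assert(owning_thread_id == std::this_thread::get_id());
    // ... actual work ...
}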
In what situations would one use the release method of std::unique_lock ?
I made the mistake of using the release method instead of the unlock method and it took a while to understand why the following code wasn't working.
#include <mutex>
#include <iostream>
#include <vector>
#include <thread>
#include <chrono>
std::mutex mtx;
void foo()
{
std::unique_lock<std::mutex> lock(mtx);
std::cout << "in critical section\n";
std::this_thread::sleep_for(std::chrono::seconds(1));
lock.release(); // the mistake: this disassociates the mutex without unlocking it, so mtx stays locked and the other threads block forever
}
int main()
{
std::vector<std::thread> threads;
for (int i = 0; i < 5; ++i)
threads.push_back(std::thread(foo));
for (std::thread& t : threads)
t.join();
}
There's a good use for it in this answer where ownership of the locked state is explicitly transferred from a function-local unique_lock to an external entity (a by-reference Lockable parameter).
This concrete example is typical of the use: To transfer ownership of the locked state from one object (or even type) to another.
.release() is useful when you want to keep the mutex locked until some other object or piece of code decides to unlock it. For example, you might be calling into a function that needs the mutex locked and will unlock it itself at a certain point in its processing, but that accepts only a std::mutex& rather than a std::unique_lock<std::mutex>&&. (Conceptually similar to the uses for smart pointer release functions.)
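A sketch of that pattern, with a hypothetical finish_and_unlock callee that expects the mutex already locked and unlocks it itself:
#include <mutex>

std::mutex mtx;

// Hypothetical callee: requires mtx to already be locked and takes responsibility for unlocking it.
void finish_and_unlock(std::mutex& m)
{
    // ... work that must happen while the mutex is held ...
    m.unlock();
}

void caller()
{
    std::unique_lock<std::mutex> lock(mtx);
    // ... set things up under the lock ...
    lock.release();          // give up ownership WITHOUT unlocking; mtx stays locked
    finish_and_unlock(mtx);  // the callee now owns the locked state and unlocks it
}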
I am having some trouble conceptualizing how unique_lock is supposed to operate across threads. I tried to make a quick example to recreate something that I would normally use a condition_variable for.
#include <mutex>
#include <thread>
using namespace std;
mutex m;
unique_lock<mutex>* mLock;
void funcA()
{
//thread 2
mLock->lock();//blocks until unlock?Access violation reading location 0x0000000000000000.
}
int _tmain(int argc, _TCHAR* argv[])
{
//thread 1
mLock = new unique_lock<mutex>(m);
mLock->release();//Allows .lock() to be taken by a different thread?
auto a = std::thread(funcA);
std::chrono::milliseconds dura(1000);//make sure thread is running
std::this_thread::sleep_for(dura);
mLock->unlock();//Unlocks thread 2's lock?
a.join();
return 0;
}
unique_lock should not be accessed from multiple threads at once. It was not designed to be thread-safe in that manner. Instead, multiple unique_locks (local variables) reference the same global mutex. Only the mutex itself is designed to be accessed by multiple threads at once. And even then, my statement excludes ~mutex().
For example, one knows that mutex::lock() can be accessed by multiple threads because its specification includes the following:
Synchronization: Prior unlock() operations on the same object shall synchronize with (4.7) this operation.
where synchronize with is a term of art defined in 4.7 [intro.multithread] (and its subclauses).
That doesn't look right at all. First, release() "disassociates the mutex without unlocking it", which is highly unlikely to be what you want at that point. It basically means that your unique_lock<mutex> no longer has a mutex, which makes it pretty useless, and is probably the reason you get the "access violation".
Edit: After some "massaging" of your code, and convincing g++ 4.6.3 to do what I wanted (hence the #define _GLIBCXX_USE_NANOSLEEP), here's a working example:
#define _GLIBCXX_USE_NANOSLEEP
#include <chrono>
#include <mutex>
#include <thread>
#include <iostream>
using namespace std;
mutex m;
void funcA()
{
cout << "FuncA Before lock" << endl;
unique_lock<mutex> mLock(m);
//thread 2
cout << "FuncA After lock" << endl;
std::chrono::milliseconds dura(500);//make sure thread is running
std::this_thread::sleep_for(dura); //this_thread::sleep_for(dura);
cout << "FuncA After sleep" << endl;
}
int main(int argc, char* argv[])
{
cout << "Main before lock" << endl;
unique_lock<mutex> mLock(m);
auto a = std::thread(funcA);
std::chrono::milliseconds dura(1000);//make sure thread is running
std::this_thread::sleep_for(dura); //this_thread::sleep_for(dura);
mLock.unlock();//Unlocks thread 2's lock?
cout << "Main After unlock" << endl;
a.join();
cout << "Main after a.join" << endl;
return 0;
}
Not sure why you need to use new to create the lock, though. Surely unique_lock<mutex> mLock(m); should do the trick (with the corresponding changes of mLock-> into mLock., of course).
A lock is just an automatic guard that operates a mutex in a safe and sane fashion.
What you really want is this code:
std::mutex m;
void f()
{
std::lock_guard<std::mutex> lock(m);
// ...
}
This effectively "synchronizes" calls to f, since every thread that enters it blocks until it manages to obtain the mutex.
A unique_lock is just a beefed-up version of the lock_guard: It can be constructed unlocked, moved around (thanks, #MikeVine) and it is itself a "lockable object", like the mutex itself, and so it can be used for example in the variadic std::lock(...) to lock multiple things at once in a deadlock-free way, and it can be managed by an std::condition_variable (thanks, #syam).
But unless you have a good reason to use a unique_lock, prefer to use a lock_guard. And once you need to upgrade to a unique_lock, you'll know why.
As a side-note, the above answers skip over the difference between immediate and deferred locking of a mutex:
#include <mutex>

std::mutex mu;

auto MyFunction() -> void
{
    std::unique_lock<std::mutex> lock(mu); // instance created and the mutex locked immediately
    // Do stuff....
}

auto MyOtherFunction() -> void
{
    std::unique_lock<std::mutex> lock(mu, std::defer_lock); // instance created, but the mutex is not locked yet
    lock.lock();   // lock the mutex
    // Do stuff....
    lock.unlock(); // unlock the mutex
}
MyFunction() shows the widely used immediate lock, whilst MyOtherFunction() shows the deferred lock.
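Deferred locking is also what makes the std::lock(...) approach mentioned above work: construct several unique_locks with std::defer_lock, then lock them all at once without risking deadlock. A minimal sketch (the mutex names and the function are illustrative):
#include <mutex>

std::mutex muA, muB;   // illustrative: two resources that must be held together

void UseBothResources()
{
    std::unique_lock<std::mutex> lockA(muA, std::defer_lock);
    std::unique_lock<std::mutex> lockB(muB, std::defer_lock);
    std::lock(lockA, lockB);  // locks both without deadlock, whatever order other threads use
    // Do stuff with both resources....
}   // both unique_locks unlock automatically here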
I am new to Windows C++ programming. Please see the code below, where I want to make the two threads synchronized: the first thread should print "Hello" and then pass control/an event to the second thread. I am not sure how to do it. As of now I am using Sleep(1000), but if I don't use Sleep it results in undefined behavior. Please help...
#include <windows.h>
#include <process.h>
#include <iostream>
void thread1(void*);
void thread2(void*);
int main(int argc, char **argv) {
_beginthread(&thread1,0,(void*)0);
_beginthread(&thread2,0,(void*)0);
Sleep(1000);
}
void thread1(void*)
{
std::cout<<"Hello "<<std::endl;
}
void thread2(void*)
{
std::cout<<"World"<<std::endl;
}
The problem is that the question you are asking really doesn't make sense. Multiple threads are designed to run at the same time, and you're trying to play a game of pass-the-buck from one thread to another to get sequential, serialised behaviour. It's like taking a really complicated tool and asking how it solves what is normally a really easy problem.
However, multithreading is a really important topic to learn so I'll try to answer what you need to the best of my ability.
Firstly, I'd recommend using the new, standard C++11 functions and libraries. For windows, you can download Visual Studio 2012 Express Edition to play about with.
With this you can use std::thread, std::mutex and a lot [but not all] of the other C++11 goodies (like std::condition_variable).
To solve your problem you really need a condition variable. This lets you signal to another thread that something is ready for them:
#include <iostream>
#include <mutex>
#include <atomic>
#include <condition_variable>
#include <thread>
static std::atomic<bool> ready;
static std::mutex lock;
static std::condition_variable cv;
// ThreadOne immediately prints Hello then 'notifies' the condition variable
void ThreadOne()
{
std::cout << "Hello ";
{
std::lock_guard<std::mutex> stackLock(lock);
ready = true; // set the flag while holding the lock so ThreadTwo cannot miss the notification
}
cv.notify_one();
}
// ThreadTwo waits for someone to 'notify' the condition variable then prints 'World'
// Note: The 'cv.wait' must be in a loop as spurious wake-ups for condition_variables are allowed;
// checking 'ready' under the lock before waiting also covers the case where ThreadOne notified first
void ThreadTwo()
{
while(true)
{
std::unique_lock<std::mutex> stackLock(lock);
if(ready) break;
cv.wait(stackLock);
}
std::cout << "World!" << std::endl;
}
// Main just kicks off two 'std::thread's. We must wait for both of those threads
// to finish before we can return from main. 'join' does this; it's the std
// equivalent of calling 'WaitForSingleObject' on the thread handle. It's necessary
// to call join because the standard says so, but the underlying reason is that
// when main returns, global destructors will start running. If your thread is still
// running at that critical time then it may access global objects which are
// destructing or have already destructed, which is *bad*
int main(int argc, char **argv)
{
std::thread t1([](){ThreadOne();});
std::thread t2([](){ThreadTwo();});
t1.join();
t2.join();
}
Here is a simplified version to handle your situation.
You are creating 2 threads to call 2 different functions.
Ideally, thread synchronization is used to serialise the same code or data between threads, but in your case that is not what you need: you are trying to serialise 2 threads which are in no way related to one another.
In any case, you can wait for each thread to finish instead of letting them run asynchronously.
#include <windows.h>
#include <process.h>
#include <iostream>
void thread1(void*);
void thread2(void*);
int main(int argc, char **argv) {
HANDLE h1 = (HANDLE)_beginthread(&thread1,0,(void*)0);
WaitForSingleObject(h1,INFINITE);
HANDLE h2 = (HANDLE)_beginthread(&thread2,0,(void*)0);
WaitForSingleObject(h2,INFINITE);
}
void thread1(void*)
{
std::cout<<"Hello "<<std::endl;
}
void thread2(void*)
{
std::cout<<"World"<<std::endl;
}
You can group both _beginthread calls in a single function and call that function in a while loop if you want to print multiple times.
void fun()
{
HANDLE h1 = (HANDLE)_beginthread(&thread1,0,(void*)0);
WaitForSingleObject(h1,INFINITE);
HANDLE h2 = (HANDLE)_beginthread(&thread2,0,(void*)0);
WaitForSingleObject(h2,INFINITE);
}