I have tried the mutex::try_lock() member in a program, which does the following:
1) It deliberately locks a mutex in a parallel thread.
2) In the main thread, it tries to lock the mutex using try_lock().
a) If the lock isn't acquired, it adds chars to a string.
b) When the lock is acquired, it prints the string.
I have tested this program on 2 online compilers:
1) On coliru (which has a thread::hardware_concurrency() of 1), the program is here:
#include <chrono>
#include <iostream>
#include <mutex>
#include <string>
#include <thread>

using namespace std;
using namespace std::chrono;

mutex m;
string s;

void lock_mutex();
void job();
void append();
void print();

int main()
{
    /// Lock the mutex for 1 nanosecond in a parallel thread.
    thread t {lock_mutex};
    job();
    t.join();
}

/// Lock the mutex for 1 nanosecond.
void lock_mutex()
{
    m.lock();
    this_thread::sleep_for(nanoseconds {1});
    m.unlock();
}

void job()
{
    cout << "starting job ..." << endl;
    int lock_attempts {};

    /// Try to lock the mutex.
    while (!m.try_lock())
    {
        ++lock_attempts;
        /// Lock not acquired.
        /// Append characters to the string.
        append();
    }

    /// Lock acquired.
    /// Unlock the mutex.
    m.unlock();
    cout << "lock attempts = " << lock_attempts << endl;

    /// Print the string.
    print();
}

/// Append characters to the string.
void append()
{
    static int count = 0;
    s.push_back('a');

    /// For every 5 characters appended,
    /// append a space.
    if (++count == 5)
    {
        count = 0;
        s.push_back(' ');
    }
}

/// Print the string.
void print()
{
    cout << s << endl;
}
Here, the program output is as expected:
starting job ...
lock attempts = 2444
aaaaa aaaaa aaaaa ...
However, here, if I remove the following statement from the program:
cout << "starting job ..." << endl;
the output shows:
lock attempts = 0
Why does this happen?
2) On the other hand, when I try this program (even locking for 1 second rather than 1 nanosecond) on ideone - here - I always get an output showing:
lock attempts = 0
This happens even if the diagnostic "starting job" is present in the program.
ideone has a thread::hardware_concurrency() of 8.
In other words, I successfully get the lock immediately. Why does this happen?
Note that this is NOT a case of try_lock() spuriously failing. In that case, though there is no existing lock on the mutex, the member returns false, indicating an unsuccessful locking attempt.
Here, the OPPOSITE appears to be happening. Though a lock (apparently) exists on the mutex, the member returns true, indicating a new lock has been successfully taken! Why?
Calling cout.operator<<(...) with std::endl calls flush(). That is a switch into the kernel and gives the lock_mutex thread plenty of time (a few nanoseconds :) ) to run and take the lock. When you do not make that call, the lock_mutex thread has not even started yet by the time try_lock() runs.
Because of the switch into the kernel you might see this even on a single-core system.
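If the goal is to actually observe failed try_lock() calls regardless of scheduling, the main thread has to know that the other thread already holds the mutex before it starts polling. Below is a minimal sketch of one way to force that; the std::atomic<bool> handshake (locked) is my own addition and not part of the original program:

#include <atomic>
#include <chrono>
#include <iostream>
#include <mutex>
#include <thread>

std::mutex m;
std::atomic<bool> locked {false};   // set by the worker once it really owns the mutex

void lock_mutex()
{
    m.lock();
    locked = true;                  // tell main the lock is held
    std::this_thread::sleep_for(std::chrono::milliseconds {1});
    m.unlock();
}

int main()
{
    std::thread t {lock_mutex};

    while (!locked)                 // wait until the worker owns the mutex
        std::this_thread::yield();

    int attempts = 0;
    while (!m.try_lock())           // now try_lock() genuinely contends
        ++attempts;
    m.unlock();

    t.join();
    std::cout << "lock attempts = " << attempts << '\n';
}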
Related
Instead of having my threads wait, doing nothing, for other threads to finish using data, I'd like them to do something else in the meantime (like checking for input, or re-rendering the previous frame in the queue, and then coming back to check whether the other thread is done with its task).
I think this code that I've written does that, and it "seems" to work in the tests I've performed, but I don't really understand exactly how std::memory_order_acquire and std::memory_order_release work, so I'd like some expert advice on whether I'm using them correctly to achieve the behaviour I want.
Also, I've never seen multithreading done this way before, which makes me a bit worried. Are there good reasons not to have a thread do other tasks instead of waiting?
/*test program
intended to test if atomic flags can be used to perform other tasks while shared
data is in use, instead of blocking
each thread enters the flag protected part of the loop 20 times before quitting
if the flag indicates that the if block is already in use, the thread is intended to
execute the code in the else block (only up to 5 times to avoid cluttering the output)
debug note: this doesn't work with std::cout because all the threads are using it at once
and it's not thread safe so it all gets garbled. at least it didn't crash
real world usage
one thread renders and draws to the screen, while the other checks for input and
provides frameData for the renderer to use. neither thread should ever block*/
#include <fstream>
#include <atomic>
#include <thread>
#include <string>
struct ThreadData {
    int numTimesToWriteToDebugIfBlockFile;
    int numTimesToWriteToDebugElseBlockFile;
};

class SharedData {
public:
    SharedData() {
        threadData = new ThreadData[10];
        for (int a = 0; a < 10; ++a) {
            threadData[a] = { 20, 5 };
        }
        flag.clear();
    }

    ~SharedData() {
        delete[] threadData;
    }

    void runThread(int threadID) {
        while (this->threadData[threadID].numTimesToWriteToDebugIfBlockFile > 0) {
            // test_and_set() returns the previous value, so the flag was free
            // (and now belongs to this thread) only when it returns false.
            if (!this->flag.test_and_set(std::memory_order_acquire)) {
                std::string fileName = "debugIfBlockOutputThread#";
                fileName += std::to_string(threadID);
                fileName += ".txt";
                std::ofstream writeFile(fileName.c_str(), std::ios::app);
                writeFile << threadID << ", running, output #" << this->threadData[threadID].numTimesToWriteToDebugIfBlockFile << std::endl;
                writeFile.close();
                writeFile.clear();
                this->threadData[threadID].numTimesToWriteToDebugIfBlockFile -= 1;
                this->flag.clear(std::memory_order_release);
            }
            else {
                if (this->threadData[threadID].numTimesToWriteToDebugElseBlockFile > 0) {
                    std::string fileName = "debugElseBlockOutputThread#";
                    fileName += std::to_string(threadID);
                    fileName += ".txt";
                    std::ofstream writeFile(fileName.c_str(), std::ios::app);
                    writeFile << threadID << ", standing by, output #" << this->threadData[threadID].numTimesToWriteToDebugElseBlockFile << std::endl;
                    writeFile.close();
                    writeFile.clear();
                    this->threadData[threadID].numTimesToWriteToDebugElseBlockFile -= 1;
                }
            }
        }
    }

private:
    ThreadData* threadData;
    std::atomic_flag flag;
};

void runThread(int threadID, SharedData* sharedData) {
    sharedData->runThread(threadID);
}

int main() {
    SharedData sharedData;
    std::thread thread[10];
    for (int a = 0; a < 10; ++a) {
        thread[a] = std::thread(runThread, a, &sharedData);
    }
    for (int a = 0; a < 10; ++a) {
        thread[a].join();
    }
    return 0;
}
The memory ordering you're using here is correct.
The acquire memory order when you test and set your flag (to take your hand-written lock) has the effect, informally speaking, of preventing any memory accesses of the following code from becoming visible before the flag is tested. That's what you want, because you want to ensure that those accesses are effectively not done if the flag was already set. Likewise, the release order on the clear at the end prevents any of the preceding accesses from becoming visible after the clear, which is also what you need so that they only happen while the lock is held.
However, it's probably simpler to just use a std::mutex. If you don't want to wait to take the lock, but instead do something else if you can't, that's what try_lock is for.
class SharedData {
    // ...
private:
    std::mutex my_lock;
};

// ...

if (my_lock.try_lock()) {
    // lock was taken, proceed with critical section
    my_lock.unlock();
} else {
    // lock not taken, do non-critical work
}
This may have a bit more overhead, but avoids the need to think about atomicity and memory ordering. It also gives you the option to easily do a blocking wait if that later becomes useful. If you've designed your program around an atomic_flag and later find a situation where you must wait to take the lock, you may find yourself stuck with either spinning while continually retrying the lock (which is wasteful of CPU cycles), or something like std::this_thread::yield(), which may wait for longer than necessary after the lock is available.
It's true this pattern is somewhat unusual. If there is always non-critical work to be done that doesn't need the lock, commonly you'd design your program to have a separate thread that just does the non-critical work continuously, and then the "critical" thread can just block as it waits for the lock.
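For illustration, here is a rough, self-contained sketch of that loop built around std::mutex::try_lock; criticalWorkLeft is just a stand-in for the shared data and is not a name from the question's program:

#include <iostream>
#include <mutex>
#include <thread>

std::mutex my_lock;
int criticalWorkLeft = 40;   // stands in for the shared data

void runThread() {
    for (;;) {
        if (my_lock.try_lock()) {
            // Lock taken: touch the shared data while we have exclusive access.
            bool done = (criticalWorkLeft == 0);
            if (!done) --criticalWorkLeft;
            my_lock.unlock();
            if (done) break;
        } else {
            // Lock not taken: do non-critical work instead of blocking,
            // e.g. poll input or re-render the previous frame.
        }
    }
}

int main() {
    std::thread a(runThread), b(runThread);
    a.join();
    b.join();
    std::cout << "critical work left: " << criticalWorkLeft << '\n';   // prints 0
}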
I have a program in which we can monitor 2 objects at the same time.
myThread = new thread (thred1, id);
vec.push_back (myThread);
In the thred1 function, I use a Boolean function to read the stored values from a different vector, and it runs in parallel like this:
element found 2 -- hj
HUMIDITY-1681692777 DISPLAYED IN RH
element found 1 -- hj
TEMPERATURE--1714636915 IN DEGREE CELSIUS
This keeps on running as that is what my program should do.
I have a case where I need to get an ID from the user and stop that particular thread, while the other should keep running till I stop it. Can someone help me with that?
void thred1 (int id)
{
    bool err = false;
    while (stopThread == false)
    {
        for (size_t i = 0; i < v.size (); i++)
        {
            if (id == v[i]->id)
            {
                cout << "element found " << v[i]->id << " -- " << v[i]->name << endl;
                v[i]->Read ();
                this_thread::sleep_for (chrono::seconds (4));
                err = true;
                break;
            }
        }
        if (!err)
        {
            cout << "element not found" << endl;
            break;
        }
    }
}
Suspension
1. Assuming you want to suspend the monitor thread only temporarily (i.e. while you make changes to the shared data), you can just use a mutex. Lock it before accessing the shared vector and unlock it when you're done, ensuring that only one thread can access the data at a time.
2. You can actively suspend the thread using OS support, such as SuspendThread and ResumeThread in the case of Windows.
Termination
1. You could use an event for each monitor thread; a name linked to the ID would work. At each iteration the monitor checks for the termination event and ends the thread if it is set.
2. Pass some variable to each thread, store them in a map with the thread handle as the key, and, similar to the previous option, just check the value on each iteration.
3. Store all threads in a map with the handle as key, terminating it directly with OS support.
Honestly, there are a ton of ways to do this; the best implementation depends on why exactly you want to stop the monitor thread. Any sort of synchronization object like a mutex should be fine if you're reading from one thread and writing from another. Otherwise, just storing all threads in a map with the internal ID as the key and the thread as the value should be fine for terminating monitor threads on demand.
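As a concrete illustration of the per-thread stop-flag idea (termination option 2), here is a minimal sketch using standard C++ threads rather than raw OS handles; the map, the flag names and the monitor body are illustrative, not taken from the question's program:

#include <atomic>
#include <chrono>
#include <iostream>
#include <map>
#include <thread>

std::map<int, std::atomic<bool>> stopFlags;   // one stop flag per monitor ID

void monitor(int id, std::atomic<bool>* stop) {
    while (!stop->load()) {
        // ... look up the object with this id and Read() it, as in the question ...
        std::this_thread::sleep_for(std::chrono::milliseconds(200));
    }
    std::cout << "monitor " << id << " stopped\n";
}

int main() {
    std::map<int, std::thread> monitors;

    // Create every flag before any thread starts, so the map is never modified concurrently.
    for (int id : {1, 2})
        stopFlags[id] = false;
    for (int id : {1, 2})
        monitors[id] = std::thread(monitor, id, &stopFlags[id]);

    std::this_thread::sleep_for(std::chrono::seconds(1));
    stopFlags[1] = true;            // stop monitor 1; monitor 2 keeps running
    monitors[1].join();

    stopFlags[2] = true;            // later, stop monitor 2 as well
    monitors[2].join();
}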
Basically, I have 2 text files, each having a bunch of lines that are all 1 character long. Each character in one file is either a letter or a zero; if the character is a zero, I need to look at the other file to see what is supposed to be there. My goal is to start two threads, each reading a separate file, and add each character to a string.
File 1:
t
0
i
s
0
0
0
t
e
0
t
File 2:
0
h
0
0
i
s
a
0
0
s
0
So the expected output of this should be 'thisisatest'.
I'm currently able to run the two threads and have each of them read their respective files, and I know I need to use a mutex lock() and unlock() to make sure only one thread is adding to the string at a time, but I'm having trouble figuring out how to implement it.
#include <cstdlib>
#include <fstream>
#include <iostream>
#include <mutex>
#include <string>
#include <thread>

using namespace std;

mutex m;
int i = 0;
string s = "";

void readFile(string fileName) {
    ifstream file;
    char a;
    file.open(fileName);
    if (!file) {
        cout << "Failed to open file." << endl;
        exit(1);
    }
    while (file >> a) {
        if (a == '0') {
            // zero: the real character is in the other file
        } else {
            s += a;
        }
    }
}

int main() {
    thread p1(readFile, "Person1");
    thread p2(readFile, "Person2");
    p1.join();
    p2.join();
    cout << s << endl;
    return 0;
}
I have tried placing the m.lock() just inside the while() loop and having the m.unlock() nested in the if() statement, but it did not work. Currently my code will just output file1 with no zeros and file2 with no zeros concatenated (not in any particular order since there's no way to predict which thread completes first).
I want the program to look at the text file, check the character on the current line, and if it's a letter, concatenate it to the string s; if it's a zero, pause this thread and let the other thread check its line.
You need to ensure the two threads run in sync, taking turns reading one line at a time. When a 0 is read, skip the turn, otherwise print the value.
For that you can use:
A variable shared between the worker threads, to keep track of turns;
A condition variable to notify threads of turn change;
A mutex to make the condition variable work.
Here's a working example demonstrating the turn-taking approach:
#include <iostream>
#include <condition_variable>
#include <mutex>
#include <thread>

int main() {
    std::mutex mtx;
    std::condition_variable cond;
    int turn = 0;

    auto task = [&](int myturn, int turns) {
        std::unique_lock<std::mutex> lock(mtx);
        while (turn < 9) {
            cond.wait(lock, [&] { return turn % turns == myturn; });
            std::cout << "Task " << myturn << std::endl;
            turn++;
            cond.notify_all();
        }
    };

    std::thread p1(task, 0, 2);
    std::thread p2(task, 1, 2);
    p1.join();
    p2.join();
    std::cout << "Done" << std::endl;
}
Output:
Task 0
Task 1
Task 0
Task 1
Task 0
Task 1
Task 0
Task 1
Task 0
Task 1
Done
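Applied to the question's two files, the same turn-taking pattern might look roughly like this. This is a sketch that assumes both files have exactly the same number of lines and reuses the file names Person1 and Person2 from the question:

#include <condition_variable>
#include <fstream>
#include <iostream>
#include <mutex>
#include <string>
#include <thread>

int main() {
    std::mutex mtx;
    std::condition_variable cond;
    int turn = 0;                     // whose turn it is for the current line
    std::string s;

    auto task = [&](std::string fileName, int myturn) {
        std::ifstream file(fileName);
        char a;
        std::unique_lock<std::mutex> lock(mtx);
        while (file >> a) {
            cond.wait(lock, [&] { return turn % 2 == myturn; });
            if (a != '0')
                s += a;               // this file holds the real character for this position
            turn++;
            cond.notify_all();
        }
    };

    std::thread p1(task, "Person1", 0);
    std::thread p2(task, "Person2", 1);
    p1.join();
    p2.join();
    std::cout << s << std::endl;      // expected: thisisatest
}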
Consider that the index position in the string where each letter must go is predetermined and easily calculated from the data.
The thread which reads the second file:
0
h
0
0
i
s
knows that it is not responsible for the characters at str[0], str[2] and str[3], but is responsible for str[1], str[4] and str[5].
If we add a mutex and a condition variable, the algorithm is straightforward.
index = 0
while reading a line from the file succeeds: {
    if the line isn't "0": {
        lock(mutex)
        while length(str) < index: {
            wait(condition, mutex)
        }
        assert(length(str) == index)
        add line[0] to end of str
        unlock(mutex)
        broadcast(condition)
    }
    index++
}
Basically, for each character that the thread needs to write, it knows the index. It waits for the string to get that long first, which the other thread(s) will do. Whenever a thread adds a character, it broadcasts the condition variable, to wake up another thread which wants to put a character at the new index.
The assert check should never go off, unless the data is bad (tells two or more threads to place a character at the same index). Also, if all threads hit a 0 line at the same index, of course, this will deadlock; every thread will be waiting for another thread to put a character at that index.
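Here is a sketch of that algorithm in C++ (my own translation of the pseudocode above, reusing the file names from the question; other arrangements are possible):

#include <cassert>
#include <condition_variable>
#include <fstream>
#include <iostream>
#include <mutex>
#include <string>
#include <thread>

std::mutex m;
std::condition_variable cond;
std::string str;

void readFile(std::string fileName) {
    std::ifstream file(fileName);
    std::string line;
    for (std::size_t index = 0; std::getline(file, line); ++index) {
        if (!line.empty() && line != "0") {
            std::unique_lock<std::mutex> lock(m);
            // Wait until every earlier position has been filled in by some thread.
            cond.wait(lock, [&] { return str.size() >= index; });
            assert(str.size() == index);
            str += line[0];
            lock.unlock();
            cond.notify_all();
        }
    }
}

int main() {
    std::thread p1(readFile, "Person1");
    std::thread p2(readFile, "Person2");
    p1.join();
    p2.join();
    std::cout << str << std::endl;   // expected: thisisatest
}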
Another solution is possible using a synchronization object called a barrier. This problem is perfect for barriers, because what we have is a group of threads working through some tuples of data in parallel. For each tuple, exactly one thread must take action.
The algorithm is something like this:
// initialization:
init(barrier, 2)  // number of threads

// each thread:
while able to read line from file: {
    if line is not "0":
        append line[0] to str
    wait(barrier)
}
What wait(barrier) does is delay execution until 2 threads call it (because we initialized it to 2). When this happens, all threads are released. Then the barrier resets itself for the next wait, whereupon it will wait for 2 threads again.
Thus, the execution is serialized: the threads execute the loop body in lock step as they march through the file. That thread which reads a character instead of 0 adds it to the string. The other threads don't touch the string; they proceed straight to the barrier wait, so there is no data race.
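If C++20 is available, std::barrier maps onto this almost directly. A sketch, again assuming both files have exactly the same number of lines and reusing the file names from the question:

#include <barrier>
#include <fstream>
#include <iostream>
#include <string>
#include <thread>

std::barrier sync_point(2);   // two worker threads per round
std::string str;

void readFile(std::string fileName) {
    std::ifstream file(fileName);
    std::string line;
    while (std::getline(file, line)) {
        if (!line.empty() && line != "0")
            str += line[0];               // at most one thread appends per round
        sync_point.arrive_and_wait();     // wait here until the other thread finishes the round
    }
}

int main() {
    std::thread p1(readFile, "Person1");
    std::thread p2(readFile, "Person2");
    p1.join();
    p2.join();
    std::cout << str << std::endl;        // expected: thisisatest
}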
Using cout in multiple threads might result in interleaved output.
So I tried to protect cout with a mutex.
The following code starts 10 background threads with std::async. When a thread starts, it prints "Started thread ...".
The main thread iterates over the futures of the background threads in the order in which they were created and prints out "Done thread ..." when the corresponding thread finished.
The output is synchronized correctly, but after some threads have started and some have finished (see output below), a deadlock occurs. All remaining background threads and the main thread are waiting for the mutex.
What is the reason for the deadlock?
When the print function is left or one iteration of the for loop ends, the lock_guard should unlock the mutex, so that one of the waiting threads would be able to proceed.
Why are all the threads left starving?
Code
#include <future>
#include <iostream>
#include <mutex>
#include <thread>
#include <vector>

using namespace std;

std::mutex mtx; // mutex for critical section

int print_start(int i) {
    lock_guard<mutex> g(mtx);
    cout << "Started thread" << i << "(" << this_thread::get_id() << ") " << endl;
    return i;
}

int main() {
    vector<future<int>> futures;
    for (int i = 0; i < 10; ++i) {
        futures.push_back(async(print_start, i));
    }

    // retrieve and print the value stored in the future
    for (auto &f : futures) {
        lock_guard<mutex> g(mtx);
        cout << "Done thread" << f.get() << "(" << this_thread::get_id() << ")" << endl;
    }
    cin.get();
    return 0;
}
Output
Started thread0(352)
Started thread1(14944)
Started thread2(6404)
Started thread3(16884)
Done thread0(16024)
Done thread1(16024)
Done thread2(16024)
Done thread3(16024)
Your problem lies in the use of future::get:
Returns the value stored in the shared state (or throws its exception)
when the shared state is ready.
If the shared state is not yet ready (i.e., the provider has not yet
set its value or exception), the function blocks the calling thread
and waits until it is ready.
http://www.cplusplus.com/reference/future/future/get/
So if the thread behind the future didn't get to run yet, the function blocks until that thread finishes. However, you take ownership of the mutex before calling future::get, so whichever thread you're waiting for will not be able to acquire the mutex for itself.
This should fix your deadlock problem:
int value = f.get();
lock_guard<mutex> g(mtx);
cout << "Done thread" << value << "(" << this_thread::get_id() << ")" << endl;
You lock the mutex and then wait for one of the futures, which in turn requires a lock on the mutex itself. Simple rule: Don't wait with locked mutexes.
BTW: Locking output streams is not very effective, because it can easily be circumvented by code you don't even control. Rather than using those globals, give a stream to code that needs to output something (dependency injection) and then collect the data from that stream in a threadsafe way. Or use a logging library, because that's probably what you wanted to do anyway.
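One way to apply that advice here, as a sketch rather than a drop-in replacement (I change the task's return type from int to std::string, which is not part of the original code): each task builds its output in a private std::ostringstream and hands the text back through the future, so only the main thread ever writes to cout and no mutex is needed at all:

#include <future>
#include <iostream>
#include <sstream>
#include <string>
#include <thread>
#include <vector>

std::string print_start(int i) {
    std::ostringstream out;          // private to this task, so no locking is needed
    out << "Started thread" << i << " (" << std::this_thread::get_id() << ")\n";
    return out.str();
}

int main() {
    std::vector<std::future<std::string>> futures;
    for (int i = 0; i < 10; ++i)
        futures.push_back(std::async(print_start, i));

    for (auto &f : futures)
        std::cout << f.get();        // only the main thread touches cout
}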
It is good that the reason was spotted from the source. However, quite often such an error is not so easy to locate, and the reason may differ as well. Fortunately, in the case of a deadlock you can use a debugger to investigate it.
I compiled and ran your example, then attached to it with gdb (gcc 4.9.2/Linux). Here is the backtrace (noisy implementation details skipped):
#0 __lll_lock_wait ()
...
#5 0x0000000000403140 in std::lock_guard<std::mutex>::lock_guard (
this=0x7ffe74903320, __m=...) at /usr/include/c++/4.9/mutex:377
#6 0x0000000000402147 in print_start (i=0) at so_deadlock.cc:9
...
#23 0x0000000000409e69 in ....::_M_complete_async() (this=0xdd4020)
at /usr/include/c++/4.9/future:1498
#24 0x0000000000402af2 in std::__future_base::_State_baseV2::wait (
this=0xdd4020) at /usr/include/c++/4.9/future:321
#25 0x0000000000404713 in std::__basic_future<int>::_M_get_result (
this=0xdd47e0) at /usr/include/c++/4.9/future:621
#26 0x0000000000403c48 in std::future<int>::get (this=0xdd47e0)
at /usr/include/c++/4.9/future:700
#27 0x000000000040229b in main () at so_deadlock.cc:24
This is just what is explained in the other answers - the code in the locked section (so_deadlock.cc:24) calls future::get(), which in turn (by forcing the result) tries to acquire the lock again.
It might not be that simple in other cases - there are usually several threads - but it's all there.
In my program there's a part of the code that waits to be woken up by another part of the code:
Here's the part that goes to sleep:
void flush2device(int task_id) {
    if (pthread_mutex_lock(&id2cvLock) != SUCCESS) {
        cerr << "system error - exiting!!!\n";
        exit(1);
    }

    map<int, pthread_cond_t*>::iterator it;
    it = id2cv.find(task_id);
    if (it == id2cv.end()) {
        if (pthread_mutex_unlock(&id2cvLock) != SUCCESS) {
            cerr << "system error\n UNLOCKING MUTEX flush2device\n";
            exit(1);
        }
        return;
    }

    cout << "Waiting for CV signal" << endl;
    if (pthread_cond_wait(it->second, &id2cvLock) != SUCCESS) {
        cerr << "system error\n COND_WAIT flush2device - exiting!!!\n";
        exit(1);
    }
    cout << "should be right after " << task_id << " signal" << endl;

    if (pthread_mutex_unlock(&id2cvLock) != SUCCESS) {
        cerr << "system error\n UNLOCKING MUTEX flush2device -exiting!!!\n";
        exit(1);
    }
}
In another part of code, there's the waking up part (signaling):
// id2cv is a map<int, pthread_cond_t*> variable - the value is a pointer to the cv
// on which we call the broadcast method.
if (pthread_mutex_lock(&id2cvLock) != SUCCESS) {
    cerr << "system error\n";
    exit(1);
}

id2cv.erase(nextBuf->_taskID);
cout << "In Thread b4 signal, i'm tID " << nextBuf->_taskID << endl;
if (pthread_cond_broadcast(nextBuf->cv) != 0) {
    cerr << "system error SIGNAL_CV doThreads\n";
    exit(1);
}
cout << "In doThread, after erasing id2cv " << endl;

if (pthread_mutex_unlock(&id2cvLock) != SUCCESS) {
    cerr << "system error\n";
    exit(1);
}
Most of the runs work just fine, but once in a while the program simply stops "reacting" - the first method (above) never gets past the cond_wait part - it seems like nobody ever sends it the signal in time (or for some other reason) - while the other method (of which the last piece of code is a part) keeps running.
Where do I go wrong in the logic of mutexes and signaling? I've already checked that the pthread_cond_t variable is still "alive" before calling the cond_wait and cond_broadcast methods, and nothing in that area seems to be at fault.
Despite its name, pthread_cond_wait is an unconditional wait for a condition. You must not call pthread_cond_wait unless you have confirmed that there is something to wait for, and the thing it's waiting for must be protected by the associated mutex.
Condition variables are stateless and it is the application's responsibility to store the state of the thing being waited for, called a 'predicate'.
The canonical pattern is:
pthread_mutex_lock(&mutex);
while (!ready_for_me_to_do_something)
    pthread_cond_wait(&condvar, &mutex);
do_stuff();
ready_for_me_to_do_something=false; // this may or may not be appropriate
pthread_mutex_unlock(&mutex);
and:
pthread_mutex_lock(&mutex);
ready_for_me_to_do_something=true;
pthread_cond_broadcast(&condvar);
pthread_mutex_unlock(&mutex);
Notice how this code maintains the state in the ready_for_me_to_do_something variable and the waiting thread waits in a loop until that variable is true. Notice how the mutex protects that shared variable, and it protects the condition variable (because that is also shared between the threads).
This is not the only correct way to use a condition variable, but it is very easy to run into trouble with any other use. Your code calls pthread_cond_wait even if there is no reason to wait. If you wait for your sister to get home with the car before you use it, and she has already returned, you will be waiting a long time.
Your use of pthread_cond_wait() is not correct. If a condition variable is signalled while no processes are waiting, the signal has no effect. It's not saved for the next time a process waits. This means that correct use of pthread_cond_wait() looks like:
pthread_mutex_lock(&mutex);
/* ... */
while (!should_wake_up)
    pthread_cond_wait(&cond, &mutex);
The should_wake_up condition might just be a simple test of a flag variable, or it might be something like a more complicated test for a buffer being empty or full, or something similar. The mutex must be locked to protect against concurrent modifications that might change the result of should_wake_up.
It is not clear what that test should be in your program - you might need to add a specific flag variable.
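For example, here is a minimal, self-contained sketch of that pattern with pthreads; the done flag and the mutex/condition variable names are hypothetical and not taken from the original program:

#include <iostream>
#include <pthread.h>
#include <unistd.h>

pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t  cv   = PTHREAD_COND_INITIALIZER;
bool done = false;                       // the predicate the waiter actually checks

void* waiter(void*) {
    pthread_mutex_lock(&lock);
    while (!done)                        // re-check after every wakeup; also correct
        pthread_cond_wait(&cv, &lock);   // if the broadcast happened before we got here
    std::cout << "woken up, work is done\n";
    pthread_mutex_unlock(&lock);
    return nullptr;
}

int main() {
    pthread_t t;
    pthread_create(&t, nullptr, waiter, nullptr);

    sleep(1);                            // pretend to do some work

    pthread_mutex_lock(&lock);
    done = true;                         // record the state change under the mutex
    pthread_cond_broadcast(&cv);         // then wake the waiter
    pthread_mutex_unlock(&lock);

    pthread_join(t, nullptr);
}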
I don't think there's enough code in the "waking up" part, but my initial guess is that the pthread_cond_wait hasn't been entered at the time pthread_cond_broadcast is issued.
Another possibility is that pthread_cond_wait is in the middle of a spurious wakeup and misses the signal completely.
I'm pretty sure that most uses of condition variables also have an external predicate that must be checked after every wakeup to see if there is work to be done.