I'm writing C++20 software that uses pthreads. The simplified example below shows how I have a shared resource, shared_resource, an int variable that is written by several threads, several times. To access the variable I use a mutex and a condition variable, in the typical way.
The variable num_readers is used as follows:
greater than 0: multiple readers are accessing the shared variable
0: neither writers nor readers are accessing the resource
-1: a writer is writing a new value to the resource; no other readers or writers may access it until the writer releases the resource
The simplified version below has no readers, to focus on the problem. Since num_readers = num_readers - 1; can be executed only after a writer releases the resource by setting it back to 0 and signaling the other writers, I expect values of 0 or -1, but never -2!
The problem is that when I run the following code I randomly get -2, so I guess some interleaving problem is occurring:
WAT>? num_readers -2
Process finished with exit code 1
#include <iostream>
#include <pthread.h>
#include <cstdlib>
#include <thread>
#include <random>
void* writer(void* parameters);
pthread_mutex_t mutex{PTHREAD_MUTEX_DEFAULT};
pthread_cond_t cond_writer = PTHREAD_COND_INITIALIZER;
int num_readers{0};
int shared_resource{0};
int main() {
const int WRITERS{500};
pthread_t writers[WRITERS];
for(unsigned int i=0; i < WRITERS; i++) {
pthread_create(&writers[i], NULL, writer, NULL);
}
for(auto &writer_thread : writers) {
pthread_join(writer_thread, NULL);
std::cout << "[main] writer returned\n";
}
std::cout << "[main] exiting..." << std::endl;
return 0;
}
void* writer(void* parameters) {
for (int i=0; i<5; i++) {
pthread_mutex_lock(&mutex);
while(num_readers != 0) {
if (num_readers < -1) {
std::cout << "WAT>? num_readers " << std::to_string(num_readers) << "\n";
exit(1);
}
pthread_cond_wait(&cond_writer, &mutex);
}
num_readers = num_readers - 1;
pthread_mutex_unlock(&mutex);
std::uniform_int_distribution<int> dist(1, 1000);
std::random_device rd;
int new_value = dist(rd);
shared_resource = new_value;
pthread_mutex_lock(&mutex);
num_readers = 0;
pthread_mutex_unlock(&mutex);
pthread_cond_signal(&cond_writer);
}
return 0;
}
So: why isn't this code thread safe?
Some issues stand out in your code:
You modify the number of readers in the writer function. Only the reader function should do that.
Same thing for the signaling of the condition variable. That should only be signaled from the reader function.
Incrementing and decrementing the number of readers is usually done with a semaphore: an atomic int and an associated condition variable.
Here is the algorithm:
int reader()
{
// indicate that a read is in progress.
//
// a. lock()/
// b. increment number of readers.
// c. unlock() as soon as possible, so other readers can also start reading.
//
// note that any write in progress will stop the thread here.
pthread_mutex_lock(&mutex);
++num_readers;
pthread_mutex_unlock(&mutex);
// read protected data
int result = shared_resource;
// decrement the readers count.
//
// note that the calls to lock()/unlock() are not necessary if
// num_readers is atomic (i.e. std::atomic<int>)
pthread_mutex_lock(&mutex);
if (--num_readers == 0)
pthread_cond_signal(&cond_writer); // last reader sets the cond_var
pthread_mutex_unlock(&mutex);
return result;
}
void writer(int value)
{
// lock
pthread_mutex_lock(&mutex);
// wait for no readers, the mutex is released while waiting for
// the last read to complete. Note that access to num_readers is
// done while the mutex is owned.
while (num_readers != 0)
pthread_cond_wait(&cond_writer, &mutex);
// modify protected data.
shared_resource = value;
// unlock.
pthread_mutex_unlock(&mutex);
}
I'm trying to create an asynchronous I/O file reader in C++ under Linux. The example I have uses two buffers. The first read blocks. Then, on each pass of the main loop, I launch the I/O asynchronously and call process(), which runs the simulated processing of the current block. When processing is done, we wait on the condition variable. The idea is that the asynchronous handler should notify the condition variable.
Unfortunately the notify seems to happen before the wait, and it seems this is not how the condition variable's wait() function works. How should I rewrite the code so that the loop waits until the asynchronous I/O has completed?
#include <aio.h>
#include <fcntl.h>
#include <signal.h>
#include <unistd.h>
#include <condition_variable>
#include <cstring>
#include <iostream>
#include <thread>
using namespace std;
using namespace std::chrono_literals;
constexpr uint32_t blockSize = 512;
mutex readMutex;
condition_variable cv;
int fh;
int bytesRead;
void process(char* buf, uint32_t bytesRead) {
cout << "processing..." << endl;
usleep(100000);
}
void aio_completion_handler(sigval_t sigval) {
struct aiocb* req = (struct aiocb*)sigval.sival_ptr;
// check whether asynch operation is complete
if (aio_error(req) == 0) {
int ret = aio_return(req);
bytesRead = req->aio_nbytes;
cout << "ret == " << ret << endl;
cout << (char*)req->aio_buf << endl;
}
{
unique_lock<mutex> readLock(readMutex);
cv.notify_one();
}
}
void thready() {
char* buf1 = new char[blockSize];
char* buf2 = new char[blockSize];
aiocb cb;
char* processbuf = buf1;
char* readbuf = buf2;
fh = open("smallfile.dat", O_RDONLY);
if (fh < 0) {
throw std::runtime_error("cannot open file!");
}
memset(&cb, 0, sizeof(aiocb));
cb.aio_fildes = fh;
cb.aio_nbytes = blockSize;
cb.aio_offset = 0;
// Fill in callback information
/*
Using SIGEV_THREAD to request a thread callback function as a notification
method
*/
cb.aio_sigevent.sigev_notify_attributes = nullptr;
cb.aio_sigevent.sigev_notify = SIGEV_THREAD;
cb.aio_sigevent.sigev_notify_function = aio_completion_handler;
/*
The context to be transmitted is loaded into the handler (in this case, a
reference to the aiocb request itself). In this handler, we simply refer to
the arrived sigval pointer and use the AIO function to verify that the request
has been completed.
*/
cb.aio_sigevent.sigev_value.sival_ptr = &cb;
int currentBytesRead = read(fh, buf1, blockSize); // read the 1st block
while (true) {
cb.aio_buf = readbuf;
aio_read(&cb); // each next block is read asynchronously
process(processbuf, currentBytesRead); // process while waiting
{
unique_lock<mutex> readLock(readMutex);
cv.wait(readLock);
}
currentBytesRead = bytesRead; // make local copy of global modified by the asynch code
if (currentBytesRead < blockSize) {
break; // last time, get out
}
cout << "back from wait" << endl;
swap(processbuf, readbuf); // switch to other buffer for next time
currentBytesRead = bytesRead; // create local copy
}
delete[] buf1;
delete[] buf2;
}
int main() {
try {
thready();
} catch (std::exception& e) {
cerr << e.what() << '\n';
}
return 0;
}
A condition variable should generally be used for
waiting until it is possible that the predicate (for example a shared variable) has changed, and
notifying waiting threads that the predicate may have changed, so that waiting threads should check the predicate again.
However, you seem to be attempting to use the state of the condition variable itself as the predicate. This is not how condition variables are supposed to be used and may lead to race conditions such as those described in your question. Another reason to always check the predicate is that spurious wakeups are possible with condition variables.
In your case, it would probably be appropriate to create a shared variable
bool operation_completed = false;
and use that variable as the predicate for the condition variable. Access to that variable should always be controlled by the mutex.
You can then change the lines
{
unique_lock<mutex> readLock(readMutex);
cv.notify_one();
}
to
{
unique_lock<mutex> readLock(readMutex);
operation_completed = true;
cv.notify_one();
}
and change the lines
{
unique_lock<mutex> readLock(readMutex);
cv.wait(readLock);
}
to:
{
unique_lock<mutex> readLock(readMutex);
while ( !operation_completed )
cv.wait(readLock);
}
Instead of
while ( !operation_completed )
cv.wait(readLock);
you can also write
cv.wait( readLock, []{ return operation_completed; } );
which is equivalent. See the documentation of std::condition_variable::wait for further information.
Of course, operation_completed should also be set back to false when appropriate, while the mutex is locked.
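For illustration, a minimal sketch (my assumption about where the reset fits, not a drop-in patch) is to clear the flag right after the wait, while the mutex is still held, so it is ready for the next aio_read's completion:
{
    unique_lock<mutex> readLock(readMutex);
    cv.wait(readLock, []{ return operation_completed; });
    operation_completed = false; // reset under the lock, ready for the next completion
}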
I am trying to implement the Producer-Consumer problem from operating systems using a semaphore and pthreads, but my output is totally different from what I expected. Here is my code:
#include<iostream>
#include<pthread.h>
#include<fstream>
#include<unistd.h>
#include<queue>
// define queue size
#define QUEUE_SIZE 5
// declare and initialize semaphore and read/write counter
static int semaphore = 1;
static int counter = 0;
// Queue for saving characters
static std::queue<char> charQueue;
// indicator for end of file
static bool endOfFile = false;
// save arrays
char consumerArray1[100];
char consumerArray2[100];
// function to wait for semaphore
void wait()
{
while(semaphore<=0);
semaphore--;
}
// function to signal the wait function
void signal()
{
semaphore++;
}
void *Producer(void *ptr)
{
int i=0;
std::ifstream input("string.txt");
char temp;
while(input>>temp)
{
wait();
charQueue.push(temp);
//std::cout<<"Producer:\nCounter: "<<counter<<" Semaphore: "<<semaphore<<std::endl;
counter++;
std::cout<<"Procuder Index: "<<i<<std::endl;
i++;
signal();
sleep(2);
}
endOfFile = true;
pthread_exit(NULL);
}
void *Consumer1(void *ptr)
{
std::cout<<"Entered consumer 1:"<<std::endl;
int i = 0;
while(counter<=0);
while(!endOfFile)
{
while(counter<=0);
wait();
//std::cout<<"Consumer1:\nCounter: "<<counter<<" Semaphore: "<<semaphore<<std::endl;
consumerArray1[i] = charQueue.front();
charQueue.pop();
i++;
counter--;
std::cout<<"Consumer1 index:"<<i<<" char: "<<consumerArray1[i]<<std::endl;
signal();
sleep(2);
}
consumerArray1[i] = '\0';
pthread_exit(NULL);
}
void *Consumer2(void *ptr)
{
std::cout<<"Entered consumer 2:"<<std::endl;
int i = 0;
while(counter<=0);
while(!endOfFile)
{
while(counter<=0);
wait();
//std::cout<<"Consumer2:\nCounter: "<<counter<<" Semaphore: "<<semaphore<<std::endl;
consumerArray2[i] = charQueue.front();
charQueue.pop();
i++;
counter--;
std::cout<<"Consumer2 index: "<<i<<" char: "<<consumerArray2[i]<<std::endl;
signal();
sleep(4);
}
consumerArray2[i] = '\0';
pthread_exit(NULL);
}
int main()
{
pthread_t thread[3];
pthread_create(&thread[0],NULL,Producer,NULL);
int rc = pthread_create(&thread[1],NULL,Consumer1,NULL);
if(rc)
{
std::cout<<"Thread not created"<<std::endl;
}
pthread_create(&thread[2],NULL,Consumer2,NULL);
pthread_join(thread[0],NULL);pthread_join(thread[1],NULL);pthread_join(thread[2],NULL);
std::cout<<"First array: "<<consumerArray1<<std::endl;
std::cout<<"Second array: "<<consumerArray2<<std::endl;
pthread_exit(NULL);
}
The problem is that my code, in some runs, freezes (probably in an infinite loop) after the entire file has been read. Both consumer functions also read the same characters even though I pop each one after reading it, and the part that prints the array element just read prints a blank. Why are these problems happening? I am new to threads (new to coding with them; I know the theoretical concepts), so please help me with this problem.
The pthreads standard prohibits accessing an object in one thread while another thread is, or might be, modifying it. Your wait and signal functions violate this rule by modifying semaphore (in signal) while a thread calling wait might be accessing it. You do this with counter as well.
If what you were doing in signal and wait were legal, you wouldn't need signal and wait. You could just access the queue directly the same way you access semaphore directly. If the queue needs protection (as I hope you know it does) then semaphore needs protection too and for exactly the same reason.
The compiler is permitted to optimize this code:
while(semaphore<=0);
To this code:
if (semaphore<=0) { while (1); }
Why? Because it knows that no other thread can possibly modify semaphore while this thread could be accessing it since that is prohibited by the standard. Therefore, there is no reason to read more than once.
You need to use actual semaphores and/or locks.
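As an illustration (a sketch under the assumption that a single counting semaphore should guard charQueue, not a review of the rest of the code), the wait/signal pair could be replaced with a POSIX semaphore from <semaphore.h>, which blocks properly instead of spinning:
#include <semaphore.h>

static sem_t queue_sem; // guards charQueue; initial value 1 makes it act like a mutex

// call once in main() before creating the threads
void init_sync()
{
    sem_init(&queue_sem, 0, 1); // second argument 0: shared between threads, not processes
}

void wait()
{
    sem_wait(&queue_sem); // blocks until the count is positive, then decrements atomically
}

void signal()
{
    sem_post(&queue_sem); // increments atomically and wakes one blocked waiter
}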
So I want the program to output 1\n2\n1\n2\n1\n2\n but it seems to get stuck somewhere. But when I debug it and set a breakpoint at cv1.notify_one() right after declaring t2, it executes??
#include <iostream>
#include <mutex>
#include <thread>
#include <condition_variable>
using namespace std;
mutex cout_lock;
condition_variable cv1, cv2;
mutex mtx1;
unique_lock<std::mutex> lck1(mtx1);
mutex mtx2;
unique_lock<std::mutex> lck2(mtx2);
const int COUNT = 3;
int main(int argc, char** argv)
{
thread t1([&](){
for(int i = 0; i < COUNT; ++i)
{
cv1.wait(lck1);
cout << "1" << endl;
cv2.notify_one();
}
});
thread t2([&](){
for(int i = 0; i < COUNT; ++i)
{
cv2.wait(lck2);
cout << "2" << endl;
cv1.notify_one();
}
});
cv1.notify_one();
t1.join();
t2.join();
return 0;
}
There are several flaws:
You want to guard your output. Therefore you need just one mutex, so only one thread can do its work at a time.
You are potentially missing notifications to your condition variables.
Your global unique_locks acquire the locks of the mutexes in their constructors, so you are holding the locks the whole time and no thread can make progress. This locking is done in the main thread, but t1 and t2 unlock the mutexes through the condition_variable. This is undefined behaviour (the thread that owns a mutex must be the one to unlock it).
This is a recipe to use the condition variable approach correctly:
Have a condition you are interested in. In this case, some kind of variable to remember whose turn it is.
Guard this variable by a (ONE!) mutex
Use a (ONE!) condition_variable in conjunction with the mutex of point 2 and the condition of point 1.
This ensures:
At any time only one thread can look at and/or change the condition you have.
If a thread reaches the point in the code where it may wait on the condition variable, it first checks the condition. Maybe the thread does not even need to go to sleep, since the condition it wants to wait for is already true. To check it, the thread has to acquire the mutex, examine the condition and decide what to do. While doing so, it owns the lock, and the condition can't change because this thread holds it. So you can't miss a notification.
This leads to the following code:
#include <iostream>
#include <mutex>
#include <thread>
#include <condition_variable>
using namespace std;
int main(int argc, char** argv)
{
condition_variable cv;
mutex mtx;
bool runt1 = true;
bool runt2 = false;
constexpr int COUNT = 3;
thread t1([&]()
{
for(int i = 0; i < COUNT; ++i)
{
unique_lock<std::mutex> lck(mtx);
cv.wait(lck, [&](){ return runt1; });
cout << "1" << endl;
runt1 = false;
runt2 = true;
lck.unlock();
cv.notify_one();
}
});
thread t2([&]()
{
for(int i = 0; i < COUNT; ++i)
{
unique_lock<std::mutex> lck(mtx);
cv.wait(lck, [&](){ return runt2; });
cout << "2" << endl;
runt1 = true;
runt2 = false;
lck.unlock();
cv.notify_one();
}
});
t1.join();
t2.join();
return 0;
}
I think you have a data race between your threads starting and the call to cv1.notify_one(); in main().
Consider the case when cv1.notify_one() call happens before thread 1 has started and called cv1.wait(). After that no one calls cv1.notify anymore and your cv-s are just waiting. This is called Lost Wake-up.
You need a mechanism in main to wait until both threads have started, and only then call cv1.notify_one().
Below is an example using int and a mutex.
#include "pch.h"
#include <iostream>
#include <mutex>
#include <thread>
#include <condition_variable>
using namespace std;
condition_variable cv1, cv2;
mutex m;
const int COUNT = 3;
enum Turn
{
T1,
T2
};
int main(int argc, char** argv)
{
mutex thread_start_mutex;
int num_started_threads = 0;
Turn turn = T1;
thread t1([&]() {
{
// increase the number of started threads
unique_lock<std::mutex> lck(thread_start_mutex);
++num_started_threads;
}
for (int i = 0; i < COUNT; ++i)
{
// locked cout, unlock before calling notify
{
unique_lock<std::mutex> lck1(m);
// wait till main thread calls notify
cv1.wait(lck1, [&] { return turn == T1;});
cout << "1 a really long string" << endl;
turn = T2; // next it's T2's turn
}
cv2.notify_one();
}
});
thread t2([&]() {
{
// increase the number of started threads
unique_lock<std::mutex> lck(thread_start_mutex);
++num_started_threads;
}
for (int i = 0; i < COUNT; ++i)
{
// locked cout, unlock before calling notify
{
unique_lock<std::mutex> lck2(m);
cv2.wait(lck2, [&] {return turn == T2;});
cout << "2 some other stuff to test" << endl;
turn = T1;
}
cv1.notify_one();
}
});
unique_lock<std::mutex> lck(thread_start_mutex);
// wait until both threads have started
cv1.wait(lck, [&] { return num_started_threads == 2; });
lck.unlock();
cv1.notify_one();
t1.join();
t2.join();
return 0;
}
Also, it's unclear why you have two mutexes that are locked outside of main. I usually think of a mutex as something that protects a resource that should not be accessed concurrently. It seems the idea was to protect the cout calls, for which you should use one mutex that each thread locks, does the cout, unlocks, and then notifies the other.
Edit
My original answer had the exact same issue between one thread's notify_one() call and the other thread's wait().
If the notify was called before the other thread was waiting, that thread never got woken up.
To address this I added an enum "Turn" which indicates whose turn it is, and each wait condition now checks whether it is that thread's turn.
If it is, the thread does not wait and just prints, so even if a notify was missed it still does its task. If it is not its turn, it blocks until the other thread sets the turn variable and calls notify.
NOTE: This demonstrates good practice: it's usually much better to have a predicate when using cv.wait(). This makes the intent clear and avoids both lost wake-ups and spurious wake-ups.
NOTE 2: This solution might be overly complicated, and in general condition variables and mutexes are unlikely to be the best approach for this problem.
The other answer is right conceptually but still has another race condition. I ran the code and it would still deadlock.
The issue is that t1 is created, but it does not reach cv1.wait(lck1) until after cv1.notify_one() has executed, so the two threads sit waiting forever. You demonstrate this when you put your breakpoint on that line, giving the thread time to catch up. The issue also persists when one thread finishes an iteration and calls notify_one before the other has had time to call wait(). This can be seen, and also "fixed" (used loosely), by adding some usleep(100) calls from unistd.h.
See below:
#include <iostream>
#include <mutex>
#include <thread>
#include <condition_variable>
#include <unistd.h>
using namespace std;
mutex cout_lock;
condition_variable cv1, cv2;
mutex mtx1;
unique_lock<std::mutex> lck1(mtx1);
mutex mtx2;
unique_lock<std::mutex> lck2(mtx2);
const int COUNT = 3;
int main(int argc, char** argv)
{
thread t1([&](){
for(int i = 0; i < COUNT; ++i)
{
cv1.wait(lck1);
cout << "1\n";
usleep(100);
cv2.notify_one();
}
});
thread t2([&](){
for(int i = 0; i < COUNT; ++i)
{
cv2.wait(lck2);
cout << "2\n";
usleep(100);
cv1.notify_one();
}
});
usleep(1000);
cv1.notify_one();
t1.join();
t2.join();
return 0;
}
EDIT: A better approach would be to check for waiting threads, which is not built into the mutexes you use. The proper way might be to create your own mutex wrapper class and include that functionality in the class, but for simplicity's sake I just made a waiting variable.
See below:
#include <iostream>
#include <mutex>
#include <thread>
#include <condition_variable>
#include <unistd.h>
using namespace std;
mutex cout_lock;
condition_variable cv1, cv2, cv3;
mutex mtx1;
unique_lock<std::mutex> lck1(mtx1);
mutex mtx2;
unique_lock<std::mutex> lck2(mtx2);
int waiting = 0;
const int COUNT = 3;
int main(int argc, char** argv)
{
thread t1([&](){
for(int i = 0; i < COUNT; ++i)
{
waiting++;
cv1.wait(lck1);
cout << "1\n";
waiting--;
if(!waiting)
usleep(100);
cv2.notify_one();
}
});
thread t2([&](){
for(int i = 0; i < COUNT; ++i)
{
waiting++;
cv2.wait(lck2);
cout << "2\n";
waiting--;
if(!waiting)
usleep(100);
cv1.notify_one();
}
});
if(!waiting)
usleep(100);
cv1.notify_one();
t1.join();
t2.join();
return 0;
}
I need feedback on my code for the following problem statement; am I on the right path?
Problem statement:
a. Implement a semaphore class that has a private int and three public methods: init, wait and signal. The wait and signal methods should behave as expected from a semaphore and must use Peterson's N process algorithm in their implementation.
b. Write a program that creates 5 threads that concurrently update the value of a shared integer and use an object of semaphore class created in part a) to ensure the correctness of the concurrent updates.
Here is my working program:
#include <iostream>
#include <pthread.h>
using namespace std;
pthread_mutex_t mid; //muted id
int shared=0; //global shared variable
class semaphore {
int counter;
public:
semaphore(){
}
void init(){
counter=1; //initialise counter 1 to get first thread access
}
void wait(){
pthread_mutex_lock(&mid); //lock the mutex here
while(1){
if(counter>0){ //check for counter value
counter--; //decrement counter
break; //break the loop
}
}
pthread_mutex_unlock(&mid); //unlock mutex here
}
void signal(){
pthread_mutex_lock(&mid); //lock the mutex here
counter++; //increment counter
pthread_mutex_unlock(&mid); //unlock mutex here
}
};
semaphore sm;
void* fun(void* id)
{
sm.wait(); //call semaphore wait
shared++; //increment shared variable
cout<<"Inside thread "<<shared<<endl;
sm.signal(); //call signal to semaphore
}
int main() {
pthread_t id[5]; //thread ids for 5 threads
sm.init();
int i;
for(i=0;i<5;i++) //create 5 threads
pthread_create(&id[i],NULL,fun,NULL);
for(i=0;i<5;i++)
pthread_join(id[i],NULL); //join 5 threads to complete their task
cout<<"Outside thread "<<shared<<endl;//final value of shared variable
return 0;
}
You need to release the mutex while spinning in the wait loop.
The test happens to work because the threads very likely run their functions start to finish before there is any context switch, so each one finishes before the next even starts and there is no contention over the semaphore. If there were, they'd get stuck, with one waiter spinning while holding the mutex, preventing anyone else from accessing the counter and hence from releasing the spinner.
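To illustrate the point, here is a minimal sketch of a corrected wait() (reusing the poster's global mid mutex and the counter member) that releases the mutex on every pass of the spin, so that signal() can get in; the sched_yield() call from <sched.h> is just an optional courtesy while spinning:
void wait()
{
    pthread_mutex_lock(&mid);
    while (counter <= 0) {            // check the counter only while holding the mutex
        pthread_mutex_unlock(&mid);   // release so signal() can lock and increment
        sched_yield();                // let other threads run before retrying
        pthread_mutex_lock(&mid);
    }
    counter--;                        // still holding the mutex here
    pthread_mutex_unlock(&mid);
}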
Here's an example that works (though it may still have an initialization race that causes it to sporadically not launch correctly). It looks more complicated, mainly because it uses the gcc built-in atomic operations. These are needed whenever you have more than a single core, since each core has its own cache. Declaring the counters 'volatile' only helps with compiler optimization - for what is effectively SMP, cache consistency requires cross-processor cache invalidation, which means special processor instructions need to be used. You can try replacing them with e.g. counter++ and counter-- (and same for 'shared') - and observe how on a multi-core CPU it won't work. (For more details on the gcc atomic ops, see https://gcc.gnu.org/onlinedocs/gcc-4.8.2/gcc/_005f_005fatomic-Builtins.html)
#include <stdio.h>
#include <pthread.h>
#include <unistd.h>
#include <stdint.h>
class semaphore {
pthread_mutex_t lock;
int32_t counter;
public:
semaphore() {
init();
}
void init() {
counter = 1; //initialise counter 1 to get first access
}
void spinwait() {
while (true) {
// Spin, waiting until we see a positive counter
while (__atomic_load_n(&counter, __ATOMIC_SEQ_CST) <= 0)
;
pthread_mutex_lock(&lock);
if (__atomic_load_n(&counter, __ATOMIC_SEQ_CST) <= 0) {
// Someone else stole the count from under us or it was
// a fluke - keep trying
pthread_mutex_unlock(&lock);
continue;
}
// It's ours
__atomic_fetch_add(&counter, -1, __ATOMIC_SEQ_CST);
pthread_mutex_unlock(&lock);
return;
}
}
void signal() {
pthread_mutex_lock(&lock); //lock the mutex here
__atomic_fetch_add(&counter, 1, __ATOMIC_SEQ_CST);
pthread_mutex_unlock(&lock); //unlock mutex here
}
};
enum {
NUM_TEST_THREADS = 5,
NUM_BANGS = 1000
};
// Making semaphore sm volatile would be complicated, because the
// pthread_mutex library calls don't expect volatile arguments.
int shared = 0; // Global shared variable
semaphore sm; // Semaphore protecting shared variable
volatile int num_workers = 0; // So we can wait until we have N threads
void* fun(void* id)
{
usleep(100000); // 0.1s. Encourage context switch.
const int worker = (intptr_t)id + 1;
printf("Worker %d ready\n", worker);
// Spin, waiting for all workers to be in a runnable state. These printouts
// could be out of order.
++num_workers;
while (num_workers < NUM_TEST_THREADS)
;
// Go!
// Bang on the semaphore. Odd workers increment, even decrement.
if (worker & 1) {
for (int n = 0; n < NUM_BANGS; ++n) {
sm.spinwait();
__atomic_fetch_add(&shared, 1, __ATOMIC_SEQ_CST);
sm.signal();
}
} else {
for (int n = 0; n < NUM_BANGS; ++n) {
sm.spinwait();
__atomic_fetch_add(&shared, -1, __ATOMIC_SEQ_CST);
sm.signal();
}
}
printf("Worker %d done\n", worker);
return NULL;
}
int main() {
pthread_t id[NUM_TEST_THREADS]; //thread ids
// create test worker threads
for(int i = 0; i < NUM_TEST_THREADS; i++)
pthread_create(&id[i], NULL, fun, (void*)((intptr_t)(i)));
// join threads to complete their task
for(int i = 0; i < NUM_TEST_THREADS; i++)
pthread_join(id[i], NULL);
//final value of shared variable. For an odd number of
// workers this is the loop count, NUM_BANGS
printf("Test done. Final value: %d\n", shared);
const int expected = (NUM_TEST_THREADS & 1) ? NUM_BANGS : 0;
if (shared == expected) {
puts("PASS");
} else {
printf("Value expected was: %d\nFAIL\n", expected);
}
return 0;
}
I am using boost::thread, and I meet some problems.
The thing is, is there any way I can join a thread before the last join finishes?
for example,
int id=1;
void temp()
{
int theardID = id++;
for(int i=0;i<3;i++)
{
cout<<theardID << " : "<<i<<endl;
boost::this_thread::sleep(boost::posix_time::millisec(100));
}
}
int main(void)
{
boost::thread thrd1(temp);
thrd1.join();
boost::thread thrd2(temp);
boost::thread thrd3(temp);
thrd2.join();
thrd3.join();
return 0;
}
In this simple example, the order of output may be:
1:0
1:1
1:2
2:0
3:0
3:1
2:1
2:2
3:2
From the example above, we can see that thrd2 and thrd3 start to run only after thrd1 finishes.
Is there any way to let thrd2 and thrd3 run before thrd1 finishes?
You can use Boost.Thread's condition variables to synchronize on a condition more complex than what join can provide. Here's an example based on yours:
#include <iostream>
#include <boost/thread.hpp>
#include <boost/thread/locks.hpp>
#include <boost/thread/mutex.hpp>
#include <boost/thread/condition_variable.hpp>
boost::mutex mutex;
boost::condition_variable cond;
// These three variables protected by mutex
bool finishedFlag = false;
int finishedID = 0;
int finishedCount = 0;
int id=1;
void temp()
{
int threadID = id++;
for(int i=0;i<3;i++)
{
std::cout << threadID << " : " << i << std::endl;
boost::this_thread::sleep(boost::posix_time::millisec(100));
}
{
boost::lock_guard<boost::mutex> lock(mutex);
finishedFlag = true;
finishedID = threadID;
++finishedCount;
}
cond.notify_one();
}
int main(void)
{
boost::thread thrd1(temp);
boost::this_thread::sleep(boost::posix_time::millisec(300));
boost::thread thrd2(temp);
boost::thread thrd3(temp);
boost::unique_lock<boost::mutex> lock(mutex);
while (finishedCount < 3)
{
while (finishedFlag != true)
{
// mutex is released while we wait for cond to be signalled.
cond.wait(lock);
// mutex is reacquired as soon as we finish waiting.
}
finishedFlag = false;
if (finishedID == 1)
{
// Do something special about thrd1 finishing
std::cout << "thrd1 finished" << std::endl;
}
};
// All 3 threads finished at this point.
return 0;
}
The join function means "stop this thread until that thread finishes." It's a simple tool for a simple purpose: ensuring that, past this point in the code, thread X is finished.
What you want to do isn't a join operation at all. What you want is some kind of synchronization primitive to communicate and synchronize behavior between threads. Boost.Thread has a number of alternatives for synchronization, from conditions to mutexes.
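For completeness, a sketch of the simplest restructuring of the original main(): launch all three threads first and join them only at the end, so none of them has to wait for another to finish before it starts running.
int main(void)
{
    boost::thread thrd1(temp);
    boost::thread thrd2(temp);
    boost::thread thrd3(temp);
    // all three run concurrently; each join just waits here until that thread is done
    thrd1.join();
    thrd2.join();
    thrd3.join();
    return 0;
}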