simple custom made mutex failing - c++

Can you spot the error in the code? The ticket count ends up going below 0, causing long stalls.
struct SContext {
    volatile unsigned long* mutex;
    volatile long* ticket;
    volatile bool* done;
};

static unsigned int MyThreadFunc(SContext* ctxt) {
    // -- keep going until we signal for the thread to close
    while(*ctxt->done == false) {
        while(*ctxt->ticket) { // while we have tickets waiting
            unsigned int lockedaquired = 0;
            do {
                if(*ctxt->mutex == 0) { // only try if someone doesn't have the mutex locked
                    // -- if the compare and swap doesn't work then the function returns
                    // -- the value it expects
                    lockedaquired = InterlockedCompareExchange(ctxt->mutex, 1, 0);
                }
            } while(lockedaquired != 0); // loop while we didn't acquire the lock

            // -- enter critical section
            // -- grab a ticket
            if(*ctxt->ticket > 0);
                (*ctxt->ticket)--;
            // -- exit critical section
            *ctxt->mutex = 0; // release lock
        }
    }
    return 0;
}
The calling function, which waits for the threads to finish:
for(unsigned int loops = 0; loops < eLoopCount; ++loops) {
    *ctxt.ticket = eNumThreads; // let the threads start!
    // -- wait for threads to finish
    while(*ctxt.ticket != 0)
        ;
}
done = true;
EDIT:
The answer to this question is simple, and unfortunately, after spending the time trimming the example down to a simplified version to post, I found the answer immediately after posting the question. Sigh..
I initialize lockedaquired to 0. Then, as an optimization to avoid taking up bus bandwidth, I don't do the CAS if the mutex is already taken.
Unfortunately, in the case where the lock is taken, that while loop lets the second thread straight through!
Sorry for the extra question. I thought I didn't understand the Windows low-level synchronization primitives, but really I just had a simple mistake.

I see another race in your code: one thread can cause *ctxt.ticket to hit 0, allowing the parent loop to go back and re-set *ctxt.ticket = eNumThreads without holding *ctxt.mutex. Some other thread may well still hold the mutex at that point (in fact, it probably does) and be operating on *ctxt.ticket. For your simplified example this only prevents "batches" from being cleanly separated, but if you had more complex initialization (more complex than a single word write, that is) at the top of the loops loop, you could see strange behavior.

I posted a bug that I thought was a legitimate multithreading problem, but really it was just bad logic. I solved it as soon as I posted. Here are the problem lines and the answer:
unsigned int lockedaquired = 0;
I initialized lockedaquired to 0, and then I added an if statement to skip the expensive CAS when the mutex was already taken. This optimization caused the loop to fall through into the critical section without actually holding the lock. Changing the code to
unsigned int lockedaquired = 1;
fixes the problem. There is another hidden problem in the code that I found as well (I really shouldn't code late at night anymore). Did anyone notice the semicolon after the if statement in the critical section? Sigh...
if(*ctxt->ticket > 0);
    (*ctxt->ticket)--;
That should be
if(*ctxt->ticket > 0)
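With both fixes applied (lockedaquired starts at 1 so only a successful CAS lets us proceed, and the stray semicolon is gone), the worker loop would look roughly like this. This is just a sketch assembled from the code above, not a tested drop-in replacement:

static unsigned int MyThreadFunc(SContext* ctxt) {
    while(*ctxt->done == false) {
        while(*ctxt->ticket) {
            unsigned int lockedaquired = 1;   // assume "not acquired" until the CAS says otherwise
            do {
                if(*ctxt->mutex == 0) {       // only attempt the CAS when the mutex looks free
                    lockedaquired = InterlockedCompareExchange(ctxt->mutex, 1, 0);
                }
            } while(lockedaquired != 0);      // 0 means the CAS saw 0 and swapped in our 1

            // -- critical section
            if(*ctxt->ticket > 0)             // no semicolon here
                (*ctxt->ticket)--;
            // -- end critical section
            *ctxt->mutex = 0;                 // release the lock
        }
    }
    return 0;
}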
Also, Ben Jackson pointed out that a thread will probably still be inside the critical section when we reset the ticket to eNumThreads. While this is perfectly fine in this sample code, if you were to apply it to a problem where you needed to do more operations it might not be safe, because the threads aren't running in lockstep, so keep that in mind if you apply this to your own code.
A final note: if anyone does decide to use this code for their own implementation of a mutex, please remember that your main driver thread is spinning idle. If you are doing a large operation in the critical section that takes a good deal of time and your ticket count is high, consider yielding your thread to let other software make use of the CPU while it's waiting. Also, think twice about a spin lock like this one if the critical section is large; a blocking lock is usually the better fit there.
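For example, the driver's busy-wait could give up its timeslice while it waits instead of burning a core. A minimal sketch using the Win32 SwitchToThread call (std::this_thread::yield would be the portable equivalent):

for(unsigned int loops = 0; loops < eLoopCount; ++loops) {
    *ctxt.ticket = eNumThreads;   // let the threads start!
    // -- wait for threads to finish, yielding instead of spinning hot
    while(*ctxt.ticket != 0)
        SwitchToThread();
}
done = true;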
Thank you

Related

Synchronize n Threads with only using Semaphore and/or mutex in C++

We're studying for our test next week, and have been given an exercise from our teacher, and we just don't see the solution:
How to synchronize n threads, so that all n threads wait at a specific location and only continue with their "work" together when all n threads have reached that location?
We're allowed to use mutex and semaphore constructs. The solution should be easy, but we just can't find the answer.
Here's a big hint. You need two counting semaphores, each able to count up to N (both start at 0). You can solve this with an extra thread. The key is that you can call down() on a semaphore multiple times: e.g., if you call down() on a semaphore 8 times, you need all 8 up()s before you can continue.
// an additional thread (not one of the N)
void trigger(Semaphore* workersCollect, Semaphore* workersRelease, int n)
{
    while(true)
    {
        for (int i = 0; i < n; ++i)
            workersCollect->down();
        for (int i = 0; i < n; ++i)
            workersRelease->up();
    }
}

// Prototype for the "checkpoint" function (exercise for the reader)
void await(Semaphore* workersCollect, Semaphore* workersRelease);
You can also solve it without the extra thread, by using more complicated state checking.
This design has a drawback. If a worker finishes its work extremely quickly, it can grab more than one task (while another thread ends up not running at all). This is fine if you have a threadpool kind of design, but bad if, say, each thread is supposed to work on its own distinct section of a dataset.
To fix that, you need a semaphore per thread. Something akin to
Semaphore workerRelease[N];
but being careful to avoid false sharing. (You don't want more than 1 semaphore on a cache line.)
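For completeness, one way to fill in the "exercise for the reader" for the basic two-semaphore design is shown below. This is only a sketch, assuming the same Semaphore class with up()/down() used above:

// Each of the N workers calls this at the checkpoint.
void await(Semaphore* workersCollect, Semaphore* workersRelease)
{
    workersCollect->up();    // tell the trigger thread "I have arrived"
    workersRelease->down();  // block until the trigger has released all N workers
}

For the per-thread variant, each worker would instead wait on its own workersRelease[i], and the array elements should be padded (e.g., with alignas(64)) so that no two semaphores share a cache line.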

Atomic thread counter

I'm experimenting with the C++11 atomic primitives to implement an atomic "thread counter" of sorts. Basically, I have a single critical section of code. Within this code block, any thread is free to READ from memory. However, sometimes, I want to do a reset or clear operation, which resets all shared memory to a default initialized value.
This seems like a great opportunity to use a read-write lock. C++11 doesn't include read-write mutexes out of the box, but maybe something simpler will do. I thought this problem would be a great opportunity to become more familiar with C++11 atomic primitives.
So I thought through this problem for a while, and it seems to me that all I have to do is :
Whenever a thread enters the critical section, increment an atomic counter variable.
Whenever a thread leaves the critical section, decrement the atomic counter variable.
If a thread wishes to reset all variables to default values, it must atomically wait for the counter to be 0, then atomically set it to some special "clearing flag" value, perform the clear, then reset the counter to 0.
Of course, threads wishing to increment and decrement the counter must also check for the clearing flag.
So, the algorithm I just described can be implemented with three functions. The first function, increment_thread_counter(), must ALWAYS be called before entering the critical section. The second function, decrement_thread_counter(), must ALWAYS be called right before leaving the critical section. Finally, the function clear() can be called from outside the critical section, but only takes effect once the thread counter == 0.
This is what I came up with:
Given:
A thread counter variable, std::atomic<std::size_t> thread_counter
A constant clearing_flag set to std::numeric_limits<std::size_t>::max()
...
void increment_thread_counter()
{
    std::size_t expected = 0;
    while (!std::atomic_compare_exchange_strong(&thread_counter, &expected, 1))
    {
        if (expected != clearing_flag)
        {
            thread_counter.fetch_add(1);
            break;
        }
        expected = 0;
    }
}

void decrement_thread_counter()
{
    thread_counter.fetch_sub(1);
}

void clear()
{
    std::size_t expected = 0;
    while (!thread_counter.compare_exchange_strong(expected, clearing_flag)) expected = 0;

    /* PERFORM WRITES WHICH WRITE TO ALL SHARED VARIABLES */

    thread_counter.store(0);
}
As far as I can reason, this should be thread-safe. Note that the decrement_thread_counter function shouldn't require ANY synchronization logic, because it is a given that increment() is always called before decrement(). So, when we get to decrement(), thread_counter can never equal 0 or clearing_flag.
Regardless, since THREADING IS HARD™, and I'm not an expert at lockless algorithms, I'm not entirely sure this algorithm is race-condition free.
Question: Is this code thread safe? Are any race conditions possible here?
You have a race condition; bad things happen if another thread changes the counter between increment_thread_counter()'s test for clearing_flag and the fetch_add.
I think this classic CAS loop should work better:
void increment_thread_counter()
{
    std::size_t expected = 0;
    std::size_t updated;
    do {
        if (expected == clearing_flag) {  // don't want to succeed while clearing,
            expected = 0;                 // take a chance that clearing completes before CMPEXC
        }
        updated = expected + 1;
        // if (updated == clearing_flag) TOO MANY READERS!
    } while (!std::atomic_compare_exchange_weak(&thread_counter, &expected, updated));
}
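To show how the three functions fit together, here is a possible usage pattern sketched as an RAII guard. The guard type and the read_shared_data() call are illustrative names, not part of the original code:

struct reader_guard
{
    reader_guard()  { increment_thread_counter(); }   // enter the read section
    ~reader_guard() { decrement_thread_counter(); }   // always leave it, even on exceptions
};

void reader_thread()
{
    reader_guard guard;   // counter > 0 from here on, so clear() will wait for us
    read_shared_data();   // any number of readers may run here concurrently
}

void resetting_thread()
{
    clear();              // spins until the counter reaches 0, then blocks new readers
}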

Why does my lock-free message queue segfault :(?

As a purely mental exercise I'm trying to get this to work without locks or mutexes. The idea is that when the consumer thread is reading/executing messages it atomically swaps which std::vector the producer thread uses for writes. Is this possible? I've tried playing around with thread fences to no avail. There's a race condition here somewhere because it occasionally seg faults. I imagine it's somewhere in the enqueue function. Any ideas?
// should execute functions on the original thread
class message_queue {
public:
    using fn = std::function<void()>;
    using queue = std::vector<fn>;

    message_queue() : write_index(0) {
    }

    // should only be called from consumer thread
    void run () {
        // atomically gets the current pending queue and switches it with the other one
        // for example if we're writing to queues[0], we grab a reference to queues[0]
        // and tell the producer to write to queues[1]
        queue& active = queues[write_index.fetch_xor(1)];

        // skip if we don't have any messages
        if (active.size() == 0) return;

        // run all messages/callbacks
        for (auto fn : active) {
            fn();
        }

        // clear the active queue so it can be re-used
        active.clear();

        // swap active and pending threads
        write_index.fetch_xor(1);
    }

    void enqueue (fn value) {
        // loads the current pending queue and append some work
        queues[write_index.load()].push_back(value);
    }

private:
    queue queues[2];
    std::atomic<bool> is_empty; // unused for now
    std::atomic<int> write_index;
};
int main(int argc, const char * argv[])
{
    message_queue queue{};

    // flag to stop the message loop
    // doesn't actually need to be atomic because it's only read/wrote on the main thread
    std::atomic<bool> done(false);

    std::thread worker([&queue, &done] {
        int count = 100;
        // send 100 messages
        while (--count) {
            queue.enqueue([count] {
                // should be executed in the main thread
                std::cout << count << "\n";
            });
        }
        // finally tell the main thread we're done
        queue.enqueue([&] {
            std::cout << "done!\n";
            done = true;
        });
    });

    // run messages until the done flag is set
    while(!done) queue.run();
    worker.join();
}
If I understand your code correctly, there are data races, e.g.:

// producer
int r0 = write_index.load();         // r0 == 0
// consumer
int r1 = write_index.fetch_xor(1);   // r1 == 0
queue& active = queues[r1];
active.size();
// producer
queues[r0].push_back(...);
Now both threads access the same queue at the same time. That's a data race, and that means undefined behaviour.
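One way to make the hand-off well-defined is to give up the "no mutexes" constraint and guard both the swap and the push_back with a std::mutex. This is only a baseline sketch to compare against (swap_mutex is a new member, and the two-queue scheme collapses to a single pending queue), not a lock-free fix:

// inside message_queue, with an added member: std::mutex swap_mutex;
void run() {
    queue local;
    {
        std::lock_guard<std::mutex> lock(swap_mutex);
        local.swap(queues[0]);          // grab all pending work in one step
    }
    for (auto& f : local) f();          // run callbacks outside the lock
}

void enqueue(fn value) {
    std::lock_guard<std::mutex> lock(swap_mutex);
    queues[0].push_back(std::move(value));
}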
Your lock-free queue fails to work because you did not start with at least a semi-formal proof of correctness and then turn that proof into an algorithm, with the proof as the primary text, comments connecting the proof to the code, and all of it interconnected with the code.
Unless you are copy/pasting someone else's implementation whose author did do that, any attempt to write a lock-free algorithm will fail. If you are copy-pasting someone else's implementation, please provide it.
Lock free algorithms are not robust unless you have such a proof that they are correct, because the kind of errors that make them fail are subtle, and extreme care must be taken. Simply "rolling" a lock free algorithm, even if it fails to result in apparent problems during testing, is a recipe for unreliable code.
One way to get around writing a formal proof in this kind of situation is to track down someone who has written proven correct pseudo code or the like. Sketch out the pseudo code, together with the proof of correctness, in comments. Then fill in the code in the holes.
In general, proving an "almost correct" lock free algorithm is flawed is harder than writing a solid proof that a lock free algorithm is correct if implemented in a particular way, then implementing it. Now, if your algorithm is so flawed that it is easy to find the flaws, then you aren't showing a basic understanding of the problem domain.
In short, by posting "why is my algorithm wrong", you are approaching how to write lock free algorithms incorrectly. "Where is the flaw in my proof?", "I proved this pseudo-code correct here, and then I implemented it, why do my tests show deadlocks?" are good lock-free questions. "Here is a bunch of code with comments that merely describe what the next line of code does, and no comments describing why I do the next line of code, or how that line of code maintains my lock-free invariants" is not a good lock-free question.
Step back. Find some proven-correct algorithms. Learn how the proofs work. Implement some proven-correct algorithms via monkey-see monkey-do. Look at the footnotes to note the issues their proof overlooked (like ABA issues). After you have a bunch of those under your belt, try a variation, and do the proof, and check the proof, and do the implementation, and check the implementation.

C++ - Threads without coordinating mechanism like mutex_Lock

I attended an interview two days back. The interviewer was good at C++, but not at multithreading. He asked me to write code for two threads, where one thread prints 1,3,5,... and the other prints 2,4,6,..., but the combined output should be 1,2,3,4,5,6,... So I gave the code below (pseudocode):
mutex_Lock LOCK;
int last = 2;
int last_Value = 0;

void function_Thread_1()
{
    while(1)
    {
        mutex_Lock(&LOCK);
        if(last == 2)
        {
            cout << ++last_Value << endl;
            last = 1;
        }
        mutex_Unlock(&LOCK);
    }
}

void function_Thread_2()
{
    while(1)
    {
        mutex_Lock(&LOCK);
        if(last == 1)
        {
            cout << ++last_Value << endl;
            last = 2;
        }
        mutex_Unlock(&LOCK);
    }
}
After this, he said, "These threads will work correctly even without those locks. Those locks will reduce the efficiency." My point was that without the lock there will be situations where one thread checks (last == 1 or 2) at the same time the other thread tries to change the value to 2 or 1. So my conclusion was that it will work without the lock, but that is not a correct/standard way. Now I want to know who is correct, and on what basis?
Without the lock, running the two functions concurrently is undefined behaviour because there's a data race on the accesses to last and last_Value. Moreover (though not causing UB), the printing would be unpredictable.
With the lock, the program becomes essentially single-threaded, and is probably slower than the naive single-threaded code. But that's just in the nature of the problem (i.e. to produce a serialized sequence of events).
I think the interviewer might have thought about using atomic variables.
Each instantiation and full specialization of the std::atomic template defines an atomic type. Objects of atomic types are the only C++ objects that are free from data races; that is, if one thread writes to an atomic object while another thread reads from it, the behavior is well-defined.
In addition, accesses to atomic objects may establish inter-thread synchronization and order non-atomic memory accesses as specified by std::memory_order.
[Source]
By this I mean the only thing you should change is to remove the locks and change the last variable to std::atomic<int> last = 2; instead of int last = 2;
This should make it safe to access the last variable concurrently.
Out of curiosity I have edited your code a bit, and ran it on my Windows machine:
#include <iostream>
#include <atomic>
#include <thread>
#include <Windows.h>

std::atomic<int> last = 2;
std::atomic<int> last_Value = 0;
std::atomic<bool> running = true;

void function_Thread_1()
{
    while(running)
    {
        if(last == 2)
        {
            last_Value = last_Value + 1;
            std::cout << last_Value << std::endl;
            last = 1;
        }
    }
}

void function_Thread_2()
{
    while(running)
    {
        if(last == 1)
        {
            last_Value = last_Value + 1;
            std::cout << last_Value << std::endl;
            last = 2;
        }
    }
}

int main()
{
    std::thread a(function_Thread_1);
    std::thread b(function_Thread_2);

    while(last_Value != 6){} // we want to print 1 to 6
    running = false;         // inform threads we are about to stop

    a.join();
    b.join();                // join

    while(!GetAsyncKeyState('Q')){} // wait for 'Q' press
    return 0;
}
and the output is always:
1
2
3
4
5
6
Ideone refuses to run this code (compilation errors)..
Edit: But here is a working linux version :) (thanks to soon)
The interviewer doesn't know what he is talking about. Without the locks you get races on both last and last_Value. The compiler could, for example, reorder the assignment to last before the print and increment of last_Value, which could lead to the other thread executing on stale data. Furthermore you could get interleaved output, meaning things like two numbers not being separated by a linebreak.
Another thing that could go wrong is that the compiler might decide not to reload last and (less importantly) last_Value on each iteration, since they can't (legally) change between those iterations anyway (data races are illegal by the C++11 standard and aren't acknowledged in previous standards). This means the code suggested by the interviewer actually has a good chance of producing infinite loops that do absolutely nothing.
While it is possible to make that code correct without mutexes, that absolutely needs atomic operations with appropriate ordering constraints (release semantics on the assignment to last and acquire semantics on the load of last inside the if statement).
Of course your solution does lower efficiency by effectively serializing the whole execution. However, since the runtime is almost completely spent inside the stream output operation, which is almost certainly synchronized internally by locks, your solution doesn't lower the efficiency any more than it already is. Waiting on the lock in your code might actually be faster than busy-waiting for it, depending on the available resources (the non-locking version using atomics would absolutely tank when executed on a single-core machine).
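To illustrate those ordering constraints, a mutex-free variant of the two thread functions might look like the sketch below. It keeps the endless loops from the question; the acquire/release pair on last is what orders the accesses to last_Value between the two threads:

std::atomic<int> last{2};
int last_Value = 0;   // plain int is fine here: it is only touched between the acquire and release on last

void function_Thread_1()
{
    while(true)
    {
        if(last.load(std::memory_order_acquire) == 2)
        {
            std::cout << ++last_Value << std::endl;
            last.store(1, std::memory_order_release);  // publish last_Value to the other thread
        }
    }
}

void function_Thread_2()
{
    while(true)
    {
        if(last.load(std::memory_order_acquire) == 1)
        {
            std::cout << ++last_Value << std::endl;
            last.store(2, std::memory_order_release);
        }
    }
}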

when to use mutex

Here is the thing: there is a float array float bucket[5] and 2 threads, say thread1 and thread2.
Thread1 is in charge of tanking up the bucket, assigning each element in bucket a random number. When the bucket is tanked up, thread2 will access bucket and read its elements.
Here is how I do the job:
#include <pthread.h>
#include <cstdlib>
#include <iostream>
using namespace std;

float bucket[5];
pthread_mutex_t mu = PTHREAD_MUTEX_INITIALIZER;
pthread_t thread1, thread2;

void* thread_1_proc(void*); //thread1's startup routine, tank up the bucket
void* thread_2_proc(void*); //thread2's startup routine, read the bucket

int main()
{
    pthread_create(&thread1, NULL, thread_1_proc, NULL);
    pthread_create(&thread2, NULL, thread_2_proc, NULL);
    pthread_join(thread1, NULL);
    pthread_join(thread2, NULL);
}
Below is my implementation for thread_x_proc:
void* thread_1_proc(void*)
{
    while(1) { //make it work forever
        pthread_mutex_lock(&mu); //lock the mutex, right?
        cout << "tanking\n";
        for(int i=0; i<5; i++)
            bucket[i] = rand(); //actually, rand() returns int, doesn't matter
        pthread_mutex_unlock(&mu); //bucket tanked, unlock the mutex, right?
        //sleep(1); /* this line is commented */
    }
}

void* thread_2_proc(void*)
{
    while(1) {
        pthread_mutex_lock(&mu);
        cout << "reading\n";
        for(int i=0; i<5; i++)
            cout << bucket[i] << " "; //read each element in the bucket
        pthread_mutex_unlock(&mu); //reading done, unlock the mutex, right?
        //sleep(1); /* this line is commented */
    }
}
Question
Is my implementation right? Because the output is not what I expected.
...
reading
5.09434e+08 6.58441e+08 1.2288e+08 8.16198e+07 4.66482e+07 7.08736e+08 1.33455e+09
reading
5.09434e+08 6.58441e+08 1.2288e+08 8.16198e+07 4.66482e+07 7.08736e+08 1.33455e+09
reading
5.09434e+08 6.58441e+08 1.2288e+08 8.16198e+07 4.66482e+07 7.08736e+08 1.33455e+09
reading
tanking
tanking
tanking
tanking
...
But if I uncomment the sleep(1); in each thread_x_proc function, the output is right, tanking and reading follow each other, like this:
...
tanking
reading
1.80429e+09 8.46931e+08 1.68169e+09 1.71464e+09 1.95775e+09 4.24238e+08 7.19885e+08
tanking
reading
1.64976e+09 5.96517e+08 1.18964e+09 1.0252e+09 1.35049e+09 7.83369e+08 1.10252e+09
tanking
reading
2.0449e+09 1.96751e+09 1.36518e+09 1.54038e+09 3.04089e+08 1.30346e+09 3.50052e+07
...
Why? Should I use sleep() when using a mutex?
Your code is technically correct, but it does not make a lot of sense, and it does not do what you assume.
What your code does is, it updates a section of data atomically, and reads from that section, atomically. However, you don't know in which order this happens, nor how often the data is written to before being read (or if at all!).
What you probably wanted is to generate exactly one new sequence of numbers in one thread each time, and read exactly one new sequence each time in the other thread. For this, you would either have to use an additional semaphore or, better, a single-producer single-consumer queue.
In general the answer to "when should I use a mutex" is "never, if you can help it". Threads should send messages, not share state. This makes a mutex most of the time unnecessary, and offers parallelism (which is the main incentive for using threads in the first place).
The mutex makes your threads run lockstep, so you could as well just run in a single thread.
There is no implied order in which threads will get to run. This means you should not expect any order. What's more, it is possible for one thread to run over and over without letting the other run at all. This is implementation-specific and should be assumed to be random.
The case you presented is much better suited to a semaphore, which is "posted" with each element added.
However if it has always to be like:
write 5 elements
read 5 elements
you should have two mutexes:
one that blocks producer until the consumer finished
one that blocks consumer until the producer finished
So the code should look something like that:
Producer:
while(true){
    lock( &write_mutex )
    [insert data]
    unlock( &read_mutex )
}
Consumer:
while(true){
    lock( &read_mutex )
    [read data]
    unlock( &write_mutex )
}
Initially write_mutex should be unlocked and read_mutex locked.
As I said your code seems to be a better case for semaphores or maybe condition variables.
Mutexes are not meant for cases such as this (which doesn't mean you can't use them, it just means there are more handy tools to solve that problem).
You have no right to assume that just because you want your threads to run in a particular order, the implementation will figure out what you want and actually run them in that order.
Why shouldn't thread2 run before thread1? And why shouldn't each thread complete its loop several times before the other thread gets a chance to run up to the line where it acquires the mutex?
If you want execution to switch between two threads in a predictable way, then you need to use a semaphore, condition variable, or other mechanism for messaging between the two threads. sleep appears to result in the order you want on this occasion, but even with the sleep you haven't done enough to guarantee that they will alternate. And I have no idea why the sleep makes a difference to which thread gets to run first -- is that consistent across several runs?
If you have two functions that should execute sequentially, i.e. F1 should finish before F2 starts, then you shouldn't be using two threads. Run F2 on the same thread as F1, after F1 returns.
Without threads, you won't need the mutex either.
The sleep isn't really the issue here.
The sleep only gives the 'other' thread a chance to grab the mutex (it is already waiting on the lock, so it will probably get it), but there is no way to be sure the first thread won't simply re-lock the mutex again before the other thread gets access.
A mutex is for protecting data so that two threads don't:
a) write simultaneously
b) have one writing while another is reading
It is not for making threads run in a certain order (if you want that functionality, ditch the threaded approach, or use a flag to signal that the 'tank' is full, for example).
By now it should be clear from the other answers what the mistakes in the original code are. So, let's try to improve it:
/* A flag that indicates whose turn it is. */
char tanked = 0;

void* thread_1_proc(void*)
{
    while(1) { //make it work forever
        pthread_mutex_lock(&mu); //lock the mutex
        if(!tanked) { // is it my turn?
            cout << "tanking\n";
            for(int i=0; i<5; i++)
                bucket[i] = rand(); //actually, rand() returns int, doesn't matter
            tanked = 1;
        }
        pthread_mutex_unlock(&mu); // unlock the mutex
    }
}

void* thread_2_proc(void*)
{
    while(1) {
        pthread_mutex_lock(&mu);
        if(tanked) { // is it my turn?
            cout << "reading\n";
            for(int i=0; i<5; i++)
                cout << bucket[i] << " "; //read each element in the bucket
            tanked = 0;
        }
        pthread_mutex_unlock(&mu); // unlock the mutex
    }
}
The code above should work as expected. However, as others have pointed out, the result would be better accomplished with one of these two other options:
Sequentially. Since the producer and the consumer must alternate, you don't need two threads. One loop that tanks and then reads would be enough. This solution would also avoid the busy waiting that happens in the code above.
Using semaphores. This would be the solution if the producer were able to run several times in a row, accumulating elements in a bucket (not the case in the original code, though); a sketch of this approach follows after the link below.
http://en.wikipedia.org/wiki/Producer-consumer_problem#Using_semaphores
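A minimal sketch of the semaphore approach using POSIX semaphores (sem_init/sem_wait/sem_post); the names empty and full are mine, and error handling is omitted. With both counts capped at 1 this degenerates into the strict tank/read alternation the question asks for:

#include <semaphore.h>

sem_t empty; // how many times the producer may fill the bucket (starts at 1)
sem_t full;  // how many filled buckets are waiting to be read (starts at 0)

// in main(), before creating the threads:
//   sem_init(&empty, 0, 1);
//   sem_init(&full,  0, 0);

void* thread_1_proc(void*)
{
    while(1) {
        sem_wait(&empty);              // wait until the consumer has drained the bucket
        for(int i=0; i<5; i++)
            bucket[i] = rand();        // tank up
        sem_post(&full);               // signal that a full bucket is ready
    }
}

void* thread_2_proc(void*)
{
    while(1) {
        sem_wait(&full);               // wait for a full bucket
        for(int i=0; i<5; i++)
            cout << bucket[i] << " ";  // read it
        sem_post(&empty);              // let the producer refill
    }
}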