Could you please help me understand which std::memory_order should be used in std::atomic_flag::test_and_set to do some work only once by a set of threads, and why? The work should be done by whichever thread gets to it first, and all other threads should just check as quickly as possible that someone is already doing the work and continue working on other tasks.
In my tests of the example below, any memory order works, but I think that is just a coincidence. I suspect that Release-Acquire ordering is what I need but, in my case, only one memory_order can be used in both threads (it is not the case that one thread can use memory_order_release and the other memory_order_acquire, since I do not know which thread will get to the work first).
#include <atomic>
#include <iostream>
#include <thread>
std::atomic_flag done = ATOMIC_FLAG_INIT;
const std::memory_order order = std::memory_order_seq_cst;
//const std::memory_order order = std::memory_order_acquire;
//const std::memory_order order = std::memory_order_relaxed;
void do_some_work_that_needs_to_be_done_only_once(void)
{ std::cout<<"Hello, my friend\n"; }
void run(void)
{
    if(not done.test_and_set(order))
        do_some_work_that_needs_to_be_done_only_once();
}
int main(void)
{
    std::thread a(run);
    std::thread b(run);
    a.join();
    b.join();
    // expected result:
    // * only one thread said hello
    // * all threads spent as little time as possible to check if any
    //   other thread said hello yet
    return 0;
}
Thank you very much for your help!
Following up on some things in the comments:
As has been discussed, there is a well-defined modification order M for done on any given run of the program. Every thread does one store to done, which means one entry in M. And by the nature of atomic read-modify-writes, the value returned by each thread's test_and_set is the value that immediately precedes its own store in the order M. That's promised in C++20 atomics.order p10, which is the critical clause for understanding atomic RMW in the C++ memory model.
Now there are a finite number of threads, each corresponding to one entry in M, which is a total order. Necessarily there is one such entry that precedes all the others. Call it m1. The test_and_set whose store is entry m1 in M must return the preceding value in M. That can only be the value 0 which initialized done. So the thread corresponding to m1 will see test_and_set return 0. Every other thread will see it return 1, because each of their modifications m2, ..., mN follows (in M) another modification, which must have been a test_and_set storing the value 1.
We may not be bothering to observe all of the total order M, but this program does determine which of its entries is first on this particular run. It's the unique one whose test_and_set returns 0. A thread that sees its test_and_set return 1 won't know whether it came 2nd or 8th or 96th in that order, but it does know that it wasn't first, and that's all that matters here.
Another way to think about it: suppose it were possible for two threads (tA, tB) both to load the value 0. Well, each one makes an entry in the modification order; call them mA and mB. M is a total order so one has to go before the other. And bearing in mind the all-important [atomics.order p10], you will quickly find there is no legal way for you to fill out the rest of M.
All of this is promised by the standard without any reference to memory ordering, so it works even with std::memory_order_relaxed. The only effect of relaxed memory ordering is that we can't say much about how our load/store will become visible with respect to operations on other variables. That's irrelevant to the program at hand; it doesn't even have any other variables.
In the actual implementation, this means that an atomic RMW really has to exclusively own the variable for the duration of the operation. We must ensure that no other thread does a store to that variable, nor the load half of a read-modify-write, during that period. In a MESI-like coherent cache, this is done by temporarily locking the cache line in the E state; if the system makes it possible for us to lose that lock (like an LL/SC architecture), abort and start again.
As to your comment about "a thread reading false from its own cache/buffer": the implementation mustn't allow that in an atomic RMW, not even with relaxed ordering. When you do an atomic RMW, you must read it while you hold the lock, and use that value in the RMW operation. You can't use some old value that happens to be in a buffer somewhere. Likewise, you have to complete the write while you still hold the lock; you can't stash it in a buffer and let it complete later.
relaxed is fine if you just need to determine the winner of the race to set the flag¹, so one thread can start on the work and later threads can just continue on.
If the run-once work produces data that other threads need to be able to read, you'll need a release store after that, to let potential readers know that the work is finished, not just started. If it was instead just something like printing or writing to a file, and other threads don't care when that finishes, then yeah, you have no ordering requirements between threads beyond the modification order of done, which exists even with relaxed. An atomic RMW like test_and_set lets you determine which thread's modification was first.
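For instance, a minimal sketch of that publish pattern (the variable names are mine, not from the question): the winning thread fills in some data, then publishes completion with a release store, and readers check it with an acquire load.

#include <atomic>
#include <iostream>

int result;                                  // data produced by the run-once work
std::atomic<bool> work_done{false};

void producer()                              // the thread that won the race
{
    result = 42;                             // the actual work
    work_done.store(true, std::memory_order_release);  // publish "finished"
}

void consumer()
{
    // acquire pairs with the release store: if we see true, the write
    // to result is guaranteed to be visible too
    if (work_done.load(std::memory_order_acquire))
        std::cout << result << '\n';
}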
BTW, you should check read-only before even trying to test-and-set, unless run() is only called very infrequently, like once per thread startup. For something like a static int foo = non_constant; local variable, compilers use a guard variable that's loaded (with an acquire load) to see if init is already complete. If it's not, they branch to code that uses an atomic RMW to modify the guard variable, with one thread winning and the rest effectively waiting on a mutex for that thread to finish the init.
You might want something like that if you have data that all threads should read. Or just use a static int foo = something_to_run_once(), or some type other than int, if you actually have some data to init.
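A sketch of that check-before-RMW idea using C++20's atomic_flag::test(), reusing the question's flag and run-once function (assumes a C++20 compiler):

#include <atomic>

void do_some_work_that_needs_to_be_done_only_once();  // from the question

std::atomic_flag done = ATOMIC_FLAG_INIT;

void run()
{
    // C++20: cheap read-only check first; most calls take this early-out
    // without needing exclusive ownership of the cache line
    if (done.test(std::memory_order_relaxed))
        return;
    // only contend for the RMW if the flag still looked clear
    if (!done.test_and_set(std::memory_order_relaxed))
        do_some_work_that_needs_to_be_done_only_once();
}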
Or perhaps use C++11 std::call_once to solve this problem for you.
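A minimal sketch with std::call_once, again reusing the question's function:

#include <mutex>

void do_some_work_that_needs_to_be_done_only_once();  // from the question

std::once_flag once;

void run()
{
    // runs the callable exactly once; threads that arrive while it's
    // running block until it has finished (not just started)
    std::call_once(once, do_some_work_that_needs_to_be_done_only_once);
}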
On normal systems, atomic_flag has no advantage over an atomic<bool>. done.exchange(true) on a bool is equivalent to test_and_set of a flag. But atomic<bool> is more flexible in terms of the operations it supports, like a plain read that isn't part of an RMW test-and-set.
C++20 does add a test() method for atomic_flag. ISO C++ guarantees that atomic_flag is lock-free, but in practice so is std::atomic<bool> on all real-world systems.
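The equivalent of the question's run() with atomic<bool>, as a sketch:

#include <atomic>

void do_some_work_that_needs_to_be_done_only_once();  // from the question

std::atomic<bool> done{false};

void run()
{
    // exchange is an atomic RMW, so exactly one thread sees false
    if (!done.exchange(true, std::memory_order_relaxed))
        do_some_work_that_needs_to_be_done_only_once();
}

// the plain read that atomic_flag lacks before C++20
bool peek() { return done.load(std::memory_order_relaxed); }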
Footnote 1: why relaxed guarantees a single winner
The memory_order parameter only governs ordering wrt. operations on other variables by the same thread.
Does calling test_and_set in one thread somehow force synchronization of the flag with values written by other threads?
It's not a pure write, it's an atomic read-modify-write, so the result of the one that went first is guaranteed to be visible to the one that happens to be second. That's the whole point of test-and-set as a primitive building block for mutual exclusion.
If two TAS operations could both load the original value (false), and then both store true, they would not be atomic. They'd have overlapped with each other.
Two atomic RMWs on the same atomic object must happen in some order, the modification order of that object. (Because they're not read-only: an RMW includes a modification, but it also includes a read, so you can see what the value was immediately before the new value; that read is tied to the modification order, unlike a plain read.)
Every atomic object separately has a modification order that all threads can agree on; this is guaranteed by ISO C++. (With orders weaker than seq_cst, ordering between objects can be different from source order, and it's not even guaranteed that all threads agree which store happened first: the IRIW problem.)
Being an atomic RMW guarantees that exactly one test_and_set will return false in thread A or B. Same for fetch_add with multiple threads incrementing a counter: the increments have to happen in some order (i.e. serialized with each other), and whatever that order is becomes the modification-order of that atomic object.
Atomic RMWs have to work this way to not lose counts. i.e. to actually be atomic.
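A small demonstration of that guarantee (my example, not from the question): even with relaxed ordering, RMW atomicity means no increments are lost.

#include <atomic>
#include <iostream>
#include <thread>

std::atomic<int> counter{0};

void work()
{
    for (int i = 0; i < 100000; ++i)
        counter.fetch_add(1, std::memory_order_relaxed);  // atomic RMW
}

int main()
{
    std::thread t1(work), t2(work);
    t1.join();
    t2.join();
    std::cout << counter << '\n';  // always 200000: no increment is lost
}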
I'm trying to implement a gather function that waits for N processes to continue.
struct sembuf operations[2];
operations[0].sem_num = 0;
operations[0].sem_op = -1; // wait() or P(): decrement by 1
operations[0].sem_flg = 0;
operations[1].sem_num = 0;
operations[1].sem_op = 0;  // wait until it becomes 0
operations[1].sem_flg = 0;
semop(this->id, operations, 2);
Initially, the value of the semaphore is N.
The problem is that it freezes even when all processes have executed the semop function. I think it is related to the fact that the operations are executed atomically (but I don't know exactly what it means). But I don't understand why it doesn't work.
Does the code subtract 1 from the semaphore and then block the process if it's not the last or is the code supposed to act in a different way?
It's hard to see what the code does without the whole function and algorithm.
By the looks of it, you apply 2 actions in a single atomic operation: subtract 1 from the semaphore and wait for it to become 0.
There could be several issues if all processes freeze: the semaphore is not shared between all processes, you got the number of processes wrong when initializing the semaphore, or one process leaves the barrier, later increases the semaphore, and returns to the barrier.
I suggest debugging to check that all processes actually reach the barrier, and maybe even printing each time you do any action on the semaphore (preferably to the same console).
As for what an atomic action is: it is a single operation, or sequence of operations, that is guaranteed not to be interrupted while being executed. This means no other process/thread will interfere with the action.
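One plausible explanation for the freeze (my reading, not stated in the answer above): since semop applies the whole operation array atomically, the decrement and the wait-for-zero can only complete together when the semaphore value is exactly 1, so with N > 1 every process blocks before it ever decrements. A sketch of the barrier with the two steps issued as separate calls (helper name hypothetical, error handling omitted):

#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/sem.h>

// One-shot barrier for N processes: 'id' is an existing System V
// semaphore set whose semaphore 0 was initialized to N.
void barrier_wait(int id)
{
    struct sembuf op;
    op.sem_num = 0;
    op.sem_flg = 0;

    op.sem_op = -1;     // arrive: the N-th arrival brings the value to 0
    semop(id, &op, 1);

    op.sem_op = 0;      // block until the value reaches 0
    semop(id, &op, 1);
}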
I have a question regarding threads. It is known that, basically, when we lock a mutex, the thread keeps executing that part of the code uninterrupted by other threads until it reaches the mutex unlock. (At least that's what they say in the book.) So my question is whether it is actually possible to have several scoped WriteLocks which do not interfere with each other. For example something like this:
If I have a buffer with N elements, with no new elements coming in but with high-frequency updates (like changing the value of the Kth element), is it possible to set a different lock on each element, so that threads would only stall and wait if 2 or more of them are actually trying to update the same element?
To answer your question about N mutexes: yes, that is indeed possible. What resources are protected by a mutex depends entirely on you as the user of that mutex.
This leads to the first (statement) part of your question. A mutex by itself does not guarantee that a thread will work uninterrupted. All it guarantees is MUTual EXclusion - if thread B attempts to lock a mutex which thread A has locked, thread B will block (execute no code) until thread A unlocks the mutex.
This means mutexes can be used to guarantee that a thread executes a block of code uninterrupted; but this works only if all threads follow the same mutex-locking protocol around that block of code. Which means it is your responsibility to assign semantics (or meaning) to each individual mutex, and correctly adhere to those semantics in your code.
If you decide for the semantics to be "I have an array a of N data elements and an array m of N mutexes, and accessing a[i] can only be done when m[i] is locked," then that's how it will work.
The need to consistently stick to the same protocol is why you should generally encapsulate the mutex and the code/data protected by it in a class in some way or another, so that outside code doesn't need to know the details of the protocol. It just knows "call this member function, and the synchronisation will happen automagically." This "automagic" will be the class correctly implementing the protocol.
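A minimal sketch of such an encapsulation under the "m[i] guards a[i]" semantics described above (the class and member names are mine):

#include <array>
#include <mutex>

// An array of N values, each guarded by its own mutex; the locking
// protocol is hidden inside the class so callers can't get it wrong.
template <typename T, std::size_t N>
class PerElementLocked
{
    std::array<T, N> data_{};
    mutable std::array<std::mutex, N> locks_;  // locks_[i] guards data_[i]

public:
    void set(std::size_t i, const T& value)
    {
        std::lock_guard<std::mutex> guard(locks_[i]);
        data_[i] = value;
    }

    T get(std::size_t i) const
    {
        std::lock_guard<std::mutex> guard(locks_[i]);
        return data_[i];
    }
};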
A crucial consideration when deciding between a mutex per array and a mutex per element is whether there are operations - like tracking the number of "in-use" array elements, the "active" element, or moving a pointer-to-array to a larger buffer - that can only be done safely by one thread while all the others are blocked.
A lesser but sometimes important consideration is the amount of extra memory more mutexes use.
If you genuinely need to do this kind of update as quickly as possible in a highly contended multi-threaded program, you may also want to learn about lock-free atomic types and their compare-and-swap / exchange operations, but I'd recommend against considering that unless profiling shows the existing locking to be significant in your overall program performance.
A mutex does not stop other threads from running completely, it only stops other threads from locking the same mutex. I.e. while one thread is keeping the mutex locked, the operating system continues to do context switches letting other threads run also, but if any other thread is trying to lock the same mutex its execution will be halted until the mutex is unlocked.
So yes, you can indeed have several different mutexes and lock/unlock them independently. Just beware of deadlocks: if one thread can lock more than one mutex at a time, you can run into a situation where thread 1 has locked mutex A and is trying to lock mutex B, but blocks because thread 2 already has mutex B locked and is trying to lock mutex A.
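If a thread ever does need two of those mutexes at once, C++17's std::scoped_lock (or std::lock in C++11) acquires them with a deadlock-avoidance algorithm; a minimal sketch:

#include <mutex>

std::mutex a, b;

void update_both()
{
    // locks both mutexes without deadlocking, regardless of the order
    // in which another thread tries to take them
    std::scoped_lock guard(a, b);
    // ... touch data guarded by both ...
}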
It's not completely clear what your use case is:
the threads get a buffer assigned on which they have to work
the threads have some results and request a specific buffer to update
For the first variant you need some assignment logic that assigns a buffer to a thread.
This logic has to be executed in an atomic way, so the best option is to use a mutex to protect the assignment logic.
For the other variant it may be best to have a vector of mutexes, one for each buffer element.
In both cases the buffer itself does not need protection, because it (or better, each field of it) is only accessed from one thread at a time.
You may also want to read up on semaphores. These contain a counter and allow managing resources that are limited in number but amount to more than one. Mutexes are a special case of semaphores with n = 1.
You can have a mutex per entry; a C++11 mutex can easily be turned into an adaptive spinlock, so you can achieve a good CPU/latency tradeoff.
Or, if you need very low latency and have enough CPUs, you can use an atomic "busy" flag per entry and spin in a tight compare-exchange loop on contention.
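A hypothetical per-entry spinlock along those lines (a sketch, only sensible when critical sections are tiny and spare CPUs are available):

#include <atomic>

struct SpinLock
{
    std::atomic<bool> busy{false};

    void lock()
    {
        bool expected = false;
        // tight compare-exchange loop: claim the flag by flipping false -> true
        while (!busy.compare_exchange_weak(expected, true,
                                           std::memory_order_acquire))
        {
            expected = false;  // on failure, expected was overwritten with true
        }
    }

    void unlock() { busy.store(false, std::memory_order_release); }
};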
From experience, though, the best performance and scalability are achieved when concurrent writes are serialized via a command queue (or a queue of smaller immutable buffers to be concatenated at destination) and a single thread processing the queue.
I have a cross-platform C++ program where I'm using the Boost libraries to create an asynchronous timer.
I have a global variable:
bool receivedInput = false;
One thread waits for and processes input
string argStr;
while (1)
{
    getline(cin, argStr);
    processArguments(argStr);
    receivedInput = true;
}
The other thread runs a timer where a callback gets called every 10 seconds. In that callback, I check to see if I've received a message
if (receivedInput)
{
    // set up timer to fire again in 10 seconds
    receivedInput = false;
}
else
    exit(1);
So is this safe? For the read in thread 2, I think it wouldn't matter since the condition will evaluate to either true or false. But I'm unsure what would happen if both threads try to set receivedInput at the same time. I also made my timer 3x longer than the period I expect to receive input so I'm not worried about a race condition.
Edit:
To solve this I used boost::unique_lock when I set receivedInput and boost::shared_lock when I read receivedInput. I used an example from here
This is fundamentally unsafe. After thread 1 has written true to receivedInput, it isn't guaranteed that thread 2 will see the new value. For example, the compiler may optimize your code, making certain assumptions about the value of receivedInput at the time it is used as the if condition, or caching it in a register, so you are not guaranteed that main memory will actually be read when the if condition is evaluated. Also, both compiler and CPU may reorder reads and writes for optimization; for example, true may be written to receivedInput before getline() and processArguments().
Moreover, relying on timing for synchronization is a very bad idea since often you have no guarantees as to the amount of CPU time each thread will get in a given time interval or whether it will be scheduled in a given time interval at all.
A common mistake is to think that making receivedInput volatile may help here. In fact, volatile guarantees that values are actually read/written to the main memory (instead of for example being cached in a register) and that reads and writes of the variable are ordered with respect to each other. However, it does not guarantee that the reads and writes of the volatile variable are ordered with respect to other instructions.
You need memory barriers or a proper synchronization mechanism for this to work as you expect.
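For example, one standard-C++ (C++11 and later) way to make the flag itself safe, as a sketch (the helper names are mine):

#include <atomic>

std::atomic<bool> receivedInput{false};   // replaces the plain bool

void on_input_processed()                 // input thread
{
    receivedInput.store(true);            // seq_cst by default; always visible
}

bool check_and_clear()                    // timer callback
{
    // read and reset in one atomic RMW, so a store that lands between
    // the timer's read and its reset cannot be lost
    return receivedInput.exchange(false);
}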
You would have to check your threading standard. Assuming we're talking about POSIX threads, this is explicitly undefined behavior -- an object may not be accessed by one thread while another thread is or might be modifying it. Anything can happen.
If your threads use the value of receivedInput to control independent code blocks, but not to synchronize with each other, there is one simple solution:
add "volatile" before receivedInput, so the compiler will not do the optimization preventing the threads share the value of receivedInput.
Let's say I have this loop:
static int a;
for (static int i=0; i<10; i++)
{
    a++;
    ///// point A
}
Two threads enter this loop.
I'm not sure about something: what happens if thread 1 gets to POINT A and stays there, while thread 2 goes through the loop 10 times, but after the 10th iteration it has incremented i to 10 and has not yet checked whether i is less than 10?
Then thread 1 leaves point A and is supposed to increment i and enter the loop again.
What value will thread 1 increment (which i will it see)? Will it be 10 or 0?
Is it possible that thread 1 will increment i to 1, and then thread 2 will go through the loop again 9 times (and then maybe 8, 7, etc.)?
Thanks
You have to realize that an increment operation is effectively:
read the value
add 1
write the value back
You have to ask yourself, what happens if two of these happen in two independent threads at the same time:
static int a = 0;
thread 1 reads a (0)
adds 1 (value is 1)
thread 2 reads a (0)
adds 1 (value is 1)
thread 1 writes (1)
thread 2 writes (1)
For two simultaneous increments, you can see that it is possible that one of them gets lost because both threads read the pre-incremented value.
The example you gave is complicated by the static loop index, which I didn't notice at first.
Since this is C++ code, static variables are visible to all threads, so there is only one loop-counting variable for all threads. The sane thing to do would be to use a normal automatic variable, because each thread would then have its own copy and no locking would be required.
That means that while you will sometimes lose increments, you may also gain iterations, because the loop itself may lose count and iterate extra times. All in all, a great example of what not to do.
If i is shared between multiple threads, all bets are off. It's possible for any thread to increment i at essentially any point during another thread's execution (including halfway through that thread's increment operation). There is no meaningful way to reason about the contents of i in the above code. Don't do that. Either give each thread its own copy of i, or make the increment and comparison with 10 a single atomic operation.
It's not really a delicate issue because you would never allow this in real code if the synchronization was going to be an issue.
I'm just going to use i++ in your loop:
for (static int i=0; i<10; i++)
{
}
Because it mimics a. (Note, static here is very strange)
Consider if Thread A is suspended just as it reaches i++. Thread B gets i all the way to 9, goes into i++ and makes it 10. If it got to move on, the loop would exit. Ah, but now Thread A is resumed! So it continues where it left off: increment i! So i becomes 11, and your loop is borked.
Any time threads share data, it needs to be protected. You could also make i++ and i < 10 happen atomically (never be interrupted), if your platform supports it.
You should use mutual exclusion to solve this problem.
And that is why, on multi-threaded environment, we are suppose to use locks.
In your case, you should write:
bool test_increment(int& i)
{
    lock();
    ++i;
    bool result = i < 10;
    unlock();
    return result;
}
static int a;
for(static int i = -1 ; test_increment(i) ; )
{
    ++a;
    // Point A
}
Now the problem disappears. Note that lock() and unlock() are supposed to lock and unlock a mutex common to all threads trying to access i!
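A concrete version of that sketch with a real C++11 mutex (the mutex name is mine):

#include <mutex>

std::mutex i_mutex;  // shared by every thread that touches i

bool test_increment(int& i)
{
    std::lock_guard<std::mutex> guard(i_mutex);  // lock(); unlock at scope exit
    ++i;
    return i < 10;
}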
Yes, it's possible that either thread can do the majority of the work in that loop. But as Dynite explained, this would (and should) never show up in real code. If synchronization is an issue, you should provide mutual exclusion (a Boost, pthread, or Windows Threads mutex) to prevent race conditions such as this.
Why would you use a static loop counter?
This smells like homework, and a bad one at that.
Without synchronization, each thread may effectively work with its own cached copy of i, so the behavior can be anything at all. That's part of why it's such a problem.
When you use a mutex or critical section the threads will generally sync up, but even that is not absolutely guaranteed if the variable is not volatile.
And someone will no doubt point out "volatile has no use in multithreading!" but people say lots of stupid things. You don't have to have volatile but it is helpful for some things.
If your "int" is not the atomic machine word size (think 64 bit address + data emulating a 32-bit VM) you will "word-tear". In that case your "int" is 32 bits, but the machine addresses 64 atomically. Now you have to read all 64, increment half, and write them all back.
This is a much larger issue; bone up on processor instruction sets, and grep gcc for how it implements "volatile" everywhere if you really want the gory details.
Add "volatile" and see how the machine code changes. If you aren't looking down at the chip registers, please just use boost libraries and be done with it.
If you need to increment a value from multiple threads at the same time, look up "atomic operations". For Linux, look up "gcc atomic operations". There is hardware support on most platforms to atomically increment, add, compare-and-swap, and more. Locking would be overkill for this: an atomic increment is orders of magnitude faster than lock/inc/unlock. If you have to change a lot of fields at the same time you may need a lock, although you can change up to 128 bits' worth of fields at a time with most atomic ops.
volatile is not the same as an atomic operation. Volatile helps the compiler know when it's a bad idea to use a cached copy of a variable. Among its uses, volatile is important when you have multiple threads changing data that you would like to read the most up-to-date version of without locking. Volatile will still not fix your a++ problem, though: two threads can read the value of "a" at the same time, both increment the same value, and the last one to write "a" wins, so you lose an increment. Volatile will also slow down optimized code by not letting the compiler hold values in registers and whatnot.
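For instance, using the legacy GCC builtins this answer refers to (modern code would use std::atomic or the __atomic builtins instead), a sketch:

// compile with g++; __sync_fetch_and_add is a GCC/Clang builtin
int counter = 0;

void increment_safely()
{
    __sync_fetch_and_add(&counter, 1);  // hardware-backed atomic increment
}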