I try to use atomic variables instead of mutexes whenever possible, because atomic variables work in user mode while mutexes work in kernel mode, so atomics are much faster than mutexes.
But I have a question about atomic variables.
Does an atomic variable acquire and release only the cache line it sits in?
If so, to release all cache lines, should I use a mutex?
All operations on the atomic object itself are atomic: there is a single ordering for all modifications across all threads (for that specific atomic object). The acquire/release stuff is therefore about the visibility of modifying other memory relative to some modification of an atomic variable (though memory_order_relaxed also affects how operations on that atomic object relate to other operations on that atomic object).
For example, if one thread sets some data to a value and then sets an atomic variable to some agreed-upon value, and another thread sees the atomic variable with that value, then proper use of acquire/release on the atomic write and read allows that other thread to also safely access the data set by the first thread.
So those flags apply to potentially all memory operations in the process.
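A minimal sketch of that publication pattern (the names data, ready, producer, and consumer are illustrative, not from the original):

#include <atomic>
#include <cassert>
#include <thread>

int data = 0;                    // ordinary, non-atomic memory
std::atomic<bool> ready{false};  // the agreed-upon atomic variable

void producer()
{
    data = 42;                                     // set the other data first
    ready.store(true, std::memory_order_release);  // then publish it
}

void consumer()
{
    while (!ready.load(std::memory_order_acquire))  // acquire pairs with the release
        ;                                           // spin until published
    assert(data == 42);  // guaranteed: the acquire load saw the release store
}

int main()
{
    std::thread t1(producer), t2(consumer);
    t1.join();
    t2.join();
}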
Also, you shouldn't really think of it in terms of "releasing cache lines" or whatever. The C++ memory model should be read in terms of itself: what data is visible and accessible to which threads due to what sequence of operations. Let the implementations work out the details.
If I lock a std::mutex, will I always get a memory fence? I am unsure whether locking merely implies a fence or actually enforces one.
Update:
Found this reference following up on RMF's comments.
Multithreaded programming and memory visibility
As I understand this is covered in:
1.10 Multi-threaded executions and data races
Para 5:
The library defines a number of atomic operations (Clause 29) and operations on mutexes (Clause 30)
that are specially identified as synchronization operations. These operations play a special role in making assignments in one thread visible to another. A synchronization operation on one or more memory locations is either a consume operation, an acquire operation, a release operation, or both an acquire and release operation. A synchronization operation without an associated memory location is a fence and can be either an acquire fence, a release fence, or both an acquire and release fence. In addition, there are relaxed atomic operations, which are not synchronization operations, and atomic read-modify-write operations, which have special characteristics. [Note: For example, a call that acquires a mutex will perform an acquire operation on the locations comprising the mutex. Correspondingly, a call that releases the same mutex will perform a release operation on those same locations. Informally, performing a release operation on A forces prior side effects on other memory locations to become visible to other threads that later perform a consume or an acquire operation on A. “Relaxed” atomic operations are not synchronization operations even though, like synchronization operations, they cannot contribute to data races. —end note]
Unlocking a mutex synchronizes with locking the mutex. I don't know what options the compiler has for the implementation, but you get the same effect as a fence.
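A small sketch of what that gives you in practice (a writer and a reader sharing one mutex; the names are mine, not from the question):

#include <mutex>

std::mutex m;
int shared_value = 0;  // protected by m

void writer()
{
    std::lock_guard<std::mutex> lk(m);  // lock: acquire operation on m
    shared_value = 123;
}  // unlock on scope exit: release operation on m

void reader()
{
    std::lock_guard<std::mutex> lk(m);  // this lock synchronizes with the unlock above
    int v = shared_value;  // everything the writer did before unlocking is visible here
    (void)v;
}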
A mutex operation (lock or unlock) on a particular mutex M is only useful for any purpose related to synchronization or memory visibility if M is shared by different threads and they perform these operations. A mutex defined locally and only used by one thread does not provide any meaningful synchronization.
[Note: The optimizations I describe here are probably not done by many compilers, which might view these mutex and atomic synchronization operations as "black boxes" that cannot be optimized (or even that should not be optimized in order to preserve the predictability of code generation and some particular patterns, which is a bogus argument). I would not be surprised if no compiler did the optimization even in the simpler case, but there is no doubt that the transformations are legal.]
A compiler can easily determine that some variables are never used by multiple threads (or by any asynchronous execution), notably an automatic variable whose address is never taken (and to which no reference is made). Such objects are called here "thread private". (All automatic variables that are candidates for register allocation are thread private.)
For a thread-private mutex, no meaningful code needs to be generated for lock/unlock operations: no atomic compare-and-exchange, no fencing, and often no state needs to be kept at all, except in the case of a "safe mutex" where the behavior of recursive locking is well defined and must fail (to make the sequence m.lock(); bool locked = m.try_lock(); work, you need to keep at least a boolean of state).
This is also true for any thread-private atomic object: only the naked non-atomic type is needed, and normal operations can be performed (so a fetch_add of 1 becomes a regular post-increment).
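For illustration, a hypothetical function where the transformation would apply (a sketch; whether any given compiler actually performs it is, per the note above, another matter):

#include <atomic>

int count_evens(const int *v, int n)
{
    // Thread private: the atomic is automatic and its address never escapes,
    // so no other thread (or signal handler) can observe it.
    std::atomic<int> count{0};
    for (int i = 0; i < n; ++i)
        if (v[i] % 2 == 0)
            count.fetch_add(1);  // may legally be compiled as a plain increment
    return count.load();
}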
The reasons why these transformations are legal:
the obvious observation that if an object is accessed by only one thread or parallel execution (it isn't even accessed by an asynchronous signal handler), then the use of non-atomic operations in the generated assembly is not detectable by any means;
the less obvious remark that no ordering or memory visibility is implied by any use of a thread-private synchronization object.
All synchronization objects are specified as tools for inter-thread communication: they can guarantee that side effects in one thread are visible in another thread; they cause a well defined order of operations to exist not just in one thread (the sequential order of execution of the operations of one thread) but across multiple threads.
A common example is the publication of information via an atomic pointer type:
The shared data is:
atomic<T*> shared{nullptr}; // starts null
The publishing thread does:
T *p = new T;
*p = load_info();
shared.store(p, memory_order_release);
The consuming thread can check whether the data is available by loading the atomic object value, as a consumer:
T *p = shared.load(memory_order_acquire);
if (p) use(*p); // use() stands for whatever consumes the published data
(There is no defined way to wait for availability here, it's a simple example to illustrate publication and consumption of the published value.)
The publishing thread only needs to set the atomic variable after it has finished the initialization of all fields; the memory order is release to communicate the fact that the memory manipulations are finished.
The other threads only need an acquire memory order to "connect" with the release operation, if there was one. If the value is still null, the thread has learned nothing about the world and the acquire is meaningless; it can't act on it. (By the time the thread checks the pointer and sees a null, the shared variable might already have changed. It doesn't matter, as the designer considered that not having a value in that thread is manageable; otherwise the operations would have been performed in sequence.)
All atomic operations are intended to be potentially lock-free, that is, to finish in a short finite time whatever other threads are doing, even if those threads are stuck. That means you can't depend on another thread having finished a job.
At the other end of the spectrum of thread-communication primitives, mutexes don't hold a value that can be used to carry information between threads (*), but they ensure that one thread can enter a lock-ops-unlock sequence only after another thread has finished its own lock-ops-unlock sequence.
(*) not even a boolean value, as using a mutex as a general boolean signal (a binary semaphore) between threads is specifically prohibited
A mutex is always used in connection with a set of shared variables: the protected variables or objects V. These V are used to carry information between threads, and the mutex makes access to that information well ordered (serialized) between threads. In technical terms, every mutex lock operation on M except the first pairs with the previous unlock operation on M:
a lock of M is an acquire operation on M
an unlock of M is a release operation on M
The semantics of locking/unlocking are defined for a single M, so let's stop repeating "on M"; we have threads A and B. The lock by B is an acquire that pairs with the unlock by A. Both operations together form an inter-thread synchronization.
[What about a thread that happens to lock M often and will often re-lock M without any other thread acting on M in the meantime? Nothing interesting: the acquire is still paired with a release, but A = B, so nothing is accomplished. The unlock was sequenced in the same thread of execution, so it's pointless in that particular case, but in general a thread can't tell that it's pointless. It isn't even special-cased by the language semantics.]
The synchronization that occurs is between the set of threads T acting on a mutex: no other thread is guaranteed to see any memory operation performed by those T. Note that in practice, on most real computers, once a memory modification hits the cache, all CPUs will see it if they check that same address, by the power of cache coherency. But C/C++ threads(#) are specified neither in terms of a globally consistent cache nor in terms of the ordering visible on a CPU, because the compiler itself can assume that non-atomic objects are not mutated in arbitrary ways by the program without synchronization (the CPU can assume no such thing, as it has no concept of atomic vs. non-atomic memory locations). That means a guarantee available on the CPU/memory system you are targeting is not in general available in the C/C++ high-level model. You absolutely cannot use normal C/C++ code as a high-level assembly; only by dousing your code with volatile (almost everywhere) can you even vaguely approach high-level assembly (and even then not quite).
(#) "C/C++ thread/semantics" not "C/C++ programming language thread semantics": C and C++ are based on the same specification for synchronization primitives, that doesn't mean that there is a C/C++ language)
Since the effect of mutex operations on M is only to serialize access to some data by the threads that use M, it's clear that other threads don't see any effect. In technical terms, the synchronizes-with relation is between threads using the same synchronization objects (mutexes in this context, the same atomic objects in the context of atomic use).
Even when the compiler emits memory fences in assembly language, it doesn't have to assume that an unlock operation makes the changes performed before the unlock visible to threads outside the set T.
That allows the decomposition of sets of threads for program analysis: if a program runs two sets of threads U and V in parallel, and U and V are created such that they can't access any common synchronization object (though they can access common non-atomic objects), then you can analyze the interactions of U and of V separately from the point of view of thread semantics, as U and V cannot exchange information in well defined inter-thread ways (they can still exchange information via the system, for example via disk files, sockets, or system-specific shared memory).
(That observation might allow the compiler to optimize some threads without doing a whole-program analysis, even if some common mutable objects are "pulled in" via a third-party class that has static members.)
Another way to explain that is to say that the semantics of these primitives do not leak: only the threads that participate get a defined result.
Note that this is only true at the specification level for acquire and release operations, not for sequentially consistent operations (which are the default for operations on an atomic object when you don't specify a memory order): all sequentially consistent actions (operations on atomic objects or fences) occur in a well defined global order. That, however, doesn't mean anything for independent threads that have no atomic objects in common.
An order of operations is unlike the order of elements in a container, where you can actually navigate the container, or like saying that files are presented sorted by name. Only objects are observable; the orders of operations are not. Saying that there is a well defined order only means that values never appear to have provably changed backward (relative to some abstract order).
If you have two unrelated ordered sets, say the integers with the usual order and the words with lexicographical order, you can define the sum of these sets as having an order compatible with both. You might put the numbers before, after, or alternated with the words. You are free to do what you want, because the elements of the sum of two sets have no relation with each other when they don't come from the same set.
You could say that there is a global order of all mutex operations; it's just not useful, like defining an order on the sum of two unrelated sets.
I am new to multithreaded programming, and confused about how a mutex works. In the Boost.Thread manual, it states:
Mutexes guarantee that only one thread can lock a given mutex. If a code section is surrounded by a mutex locking and unlocking, it's guaranteed that only a thread at a time executes that section of code. When that thread unlocks the mutex, other threads can enter to that code region:
My understanding is that a mutex is used to protect a section of code from being executed by multiple threads at the same time, NOT to protect the memory address of a variable. It's hard for me to grasp the concept; what happens if I have 2 different functions trying to write to the same memory address?
Is there something like this in Boost library:
lock the memory address of a variable, e.g., double x: lock(x), so that other threads with a different function cannot write to x.
do something with x, e.g., x = x + rand();
unlock(x)
Thanks.
The mutex itself only ensures that only one thread of execution can lock the mutex at any given time. It's up to you to ensure that modification of the associated variable happens only while the mutex is locked.
C++ does give you a way to do that a little more easily than in something like C. In C, it's pretty much up to you to write the code correctly, ensuring that anywhere you modify the variable, you first lock the mutex (and, of course, unlock it when you're done).
In C++, it's pretty easy to encapsulate it all into a class with some operator overloading:
#include <mutex>

class protected_int {
    int value;      // this is the value we're going to share between threads
    std::mutex m;
public:
    operator int() { return value; } // we'll assume no lock needed to read
    protected_int &operator=(int new_value) {
        std::lock_guard<std::mutex> guard(m); // locks here, unlocks on return
        value = new_value;
        return *this;
    }
};
Obviously I'm simplifying that a lot (to the point that it's probably useless as it stands), but hopefully you get the idea, which is that most of the code just treats the protected_int object as if it were a normal variable.
When you do that, however, the mutex is automatically locked every time you assign a value to it, and unlocked immediately thereafter. Of course, that's pretty much the simplest possible case -- in many cases, you need to do something like lock the mutex, modify two (or more) variables in unison, then unlock. Regardless of the complexity, however, the idea remains that you centralize all the code that does the modification in one place, so you don't have to worry about locking the mutex in the rest of the code. Where you do have two or more variables together like that, you generally will have to lock the mutex to read, not just to write -- otherwise you can easily get an incorrect value where one of the variables has been modified but the other hasn't.
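To illustrate that multi-variable case, here is a hypothetical variant of the class above where an invariant ties two values together, so reads must lock too (a sketch, not part of the original answer):

#include <mutex>
#include <utility>

class protected_pair {
    int a = 0, b = 0;      // invariant: a == b at all times
    mutable std::mutex m;  // mutable so const readers can lock it
public:
    void set(int v)
    {
        std::lock_guard<std::mutex> lk(m);
        a = v;
        b = v;  // both updated under one lock, so the invariant holds
    }
    std::pair<int, int> get() const
    {
        std::lock_guard<std::mutex> lk(m);  // reading without this lock could
        return {a, b};                      // observe a updated but b not yet
    }
};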
No, there is nothing in Boost (or elsewhere) that will lock memory like that.
You have to protect the code that accesses the memory you want protected.
what happens if I have 2 different functions trying to write to the same memory address.
Assuming you mean 2 functions executing in different threads, both functions should lock the same mutex, so only one of the threads can write to the variable at a given time.
Any other code that accesses (either reads or writes) the same variable will also have to lock the same mutex; failure to do so will result in nondeterministic behavior.
It is possible to do non-blocking atomic operations on certain types using Boost.Atomic. These operations avoid locks and are generally much faster than a mutex. For example, to add something atomically you can do:
#include <boost/atomic.hpp>

boost::atomic<int> n(10);
n.fetch_add(5, boost::memory_order_acq_rel);
This code atomically adds 5 to n.
In order to protect a memory address shared by multiple threads in two different functions, both functions have to use the same mutex; otherwise you will run into a scenario where threads in either function can indiscriminately access the same "protected" memory region.
So boost::mutex works just fine for the scenario you describe, but you just have to make sure that for a given resource you're protecting, all paths to that resource lock the exact same instance of the boost::mutex object.
I think the detail you're missing is that a "code section" is an arbitrary section of code. It can be two functions, half a function, a single line, or whatever.
So the portions of your 2 different functions that hold the same mutex when they access the shared data are "a code section surrounded by a mutex locking and unlocking", and therefore "it's guaranteed that only a thread at a time executes that section of code".
Also, this is explaining one property of mutexes. It is not claiming this is the only property they have.
Your understanding is correct with respect to mutexes. They protect the section of code between the locking and unlocking.
As for what happens when two threads write to the same location of memory: the writes are serialized. One thread writes its value, then the other writes its own. The problem with this is that you don't know which thread will write first (or last), so the code is not deterministic.
Finally, to protect the variable itself, you will find a closely related concept in atomic variables. Atomic variables are variables that are protected by either the compiler or the hardware and can be modified atomically; that is, the three phases you mention (read, modify, write) happen as a single indivisible step. Take a look at Boost atomic_count.
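As a concrete illustration of such an atomic read-modify-write, the questioner's lock(x); x = x + rand(); unlock(x) idea can be expressed as a compare-and-swap loop (a sketch using C++11 std::atomic rather than Boost, with names from the question):

#include <atomic>
#include <cstdlib>

std::atomic<double> x{0.0};

void add_rand()
{
    double old_val = x.load();
    // Retry until no other thread has modified x between our load and our store;
    // on failure, compare_exchange_weak reloads old_val with the current value.
    while (!x.compare_exchange_weak(old_val, old_val + std::rand()))
        ;
}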
I have a parent and a worker thread that share a bool flag and a std::vector. The parent only reads (i.e., reads the bool or calls my_vector.empty()); the worker only writes.
My questions:
Do I need to mutex protect the bool flag?
Can I say that all bool read/writes are inherently atomic operations? If you say Yes or No, where did you get your information from?
I recently heard about GCC Atomic-builtin. Can I use these to make my flag read/writes atomic without having to use mutexes? What is the difference? I understand Atomic builtins boil down to machine code, but even mutexes boil down to CPU's memory barrier instructions right? Why do people call mutexes an "OS-level" construct?
Do I need to mutex protect my std::vector? Recall that the worker thread populates this vector, whereas the parent only calls empty() on it (i.e., only reads it)
I do not believe mutex protection is necessary for either the bool or the vector. I rationalize as follows: "OK, if I read the shared memory just before it was updated, that's still fine; I will get the updated value the next time around. More importantly, I do not see why the writer should be blocked while the reader is reading, because after all, the reader is only reading!"
If someone can point me in the right direction, that would be just great. I am on GCC 4.3, and Intel x86 32-bit.
Thanks a lot!
Do I need to mutex protect the bool flag?
Not necessarily; an atomic instruction would do. By "atomic instruction" I mean a compiler intrinsic function that a) prevents compiler reordering/optimization, b) results in an atomic read/write, and c) issues an appropriate memory fence to ensure visibility between CPUs (not necessary on current x86 CPUs, which employ the MESI cache coherency protocol). Similar to the gcc atomic builtins.
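For instance, with the modern __atomic builtins (available from GCC 4.7; on the asker's GCC 4.3 the older __sync family would be needed instead), the flag accesses could look like this (a sketch):

bool flag = false;

void writer()
{
    __atomic_store_n(&flag, true, __ATOMIC_RELEASE);  // atomic store with release semantics
}

bool reader()
{
    return __atomic_load_n(&flag, __ATOMIC_ACQUIRE);  // atomic load with acquire semantics
}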
Can I say that all bool read/writes are inherently atomic operations? If you say Yes or No, where did you get your information from?
Depends on the CPU. For Intel CPUs - yes. See Intel® 64 and IA-32 Architectures Software Developer's Manuals.
I recently heard about GCC Atomic-builtin. Can I use these to make my flag read/writes atomic without having to use mutexes? What is the difference? I understand Atomic builtins boil down to machine code, but even mutexes boil down to CPU's memory barrier instructions right? Why do people call mutexes an "OS-level" construct?
The difference between atomics and mutexes is that the latter can put the waiting thread to sleep until the mutex is released. With atomics you can only busy-spin.
Do I need to mutex protect my std::vector? Recall that the worker thread populates this vector, whereas the parent only calls empty() on it (i.e., only reads it)
You do.
I do not believe mutex protection is necessary for either the bool or the vector. I rationalize as follows: "OK, if I read the shared memory just before it was updated, that's still fine; I will get the updated value the next time around. More importantly, I do not see why the writer should be blocked while the reader is reading, because after all, the reader is only reading!"
Depending on the implementation, vector.empty() may involve reading the buffer's begin/end pointers and subtracting or comparing them, so there is a chance that, without a mutex, you read a new version of one pointer and an old version of the other. Surprising behaviour may ensue.
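So a safe version of the parent/worker arrangement locks around both the write and the empty() check (a minimal sketch, assuming one mutex shared by both threads):

#include <mutex>
#include <vector>

std::mutex m;
std::vector<char> my_vector;

// worker thread: the only writer
void produce(char c)
{
    std::lock_guard<std::mutex> lk(m);
    my_vector.push_back(c);  // may reallocate and update internal pointers
}

// parent thread: the only reader
bool has_data()
{
    std::lock_guard<std::mutex> lk(m);  // both internal pointers are read
    return !my_vector.empty();          // consistently, under the lock
}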
From the C++11 standard's point of view, you have to protect the bool with a mutex or, alternatively, use std::atomic<bool>. Even when you are sure that your bool is read and written atomically anyway, there is still the chance that the compiler optimizes away accesses to it, because it does not know about other threads that could potentially access it.
If for some reason you absolutely need the last bit of performance of your platform, consider reading the "Intel 64 and IA-32 Architectures Software Developer's Manual", which will tell you how things work under the hood on your architecture. But of course, this will make your program unportable.
Answers:
You will need to protect the bool (or any other variable for that matter) that has the possibility of being operated on by two or more threads at the same time. You can either do this with a mutex or by operating on the bool atomically.
Bool reads and bool writes may be atomic operations, but two sequential operations are certainly not (e.g., a read and then a write). More on this later.
Atomic builtins provide a solution to the problem above: the ability to read and write a variable in a step that cannot be interrupted by another thread. This makes the operation atomic.
If you are using the bool flag as your 'mutex' (that is, only the thread that sets the bool flag to true has permission to modify the vector) then you're OK. The mutual exclusion is managed by the boolean, and as long as you're modifying the bool using atomic operations you should be all set.
To answer this, let me use an example:
bool flag(false);
std::vector<char> my_vector;

while (true)
{
    if (flag == false) // check to see if the mutex is owned
    {
        flag = true; // obtain ownership of the flag (the mutex)
        // manipulate the vector
        flag = false; // release ownership of the flag
    }
}
In the above code, in a multithreaded environment, it is possible for a thread to be preempted between the if statement (the read) and the assignment (the write), which means it is possible for two (or more) threads running this kind of code to both "own" the mutex (and the rights to the vector) at the same time. This is why atomic operations are crucial: they ensure that, in the above scenario, the flag can only be set by one thread at a time, thereby ensuring the vector is only manipulated by one thread at a time.
Note that setting the flag back to false need not be an atomic operation, because this thread is the only one with the right to modify it.
A rough (read: untested) solution may look something like:
bool flag(false);
std::vector<char> my_vector;

while (true)
{
    // check to see if the mutex is owned and obtain ownership if possible
    if (__sync_bool_compare_and_swap(&flag, false, true))
    {
        // manipulate the vector
        flag = false; // release ownership of the flag
    }
}
The documentation for the atomic builtin reads:
The “bool” version returns true if the comparison is successful and newval was written.
This means the operation will check whether flag is false and, if so, set it to true. If the value was false, true is returned; otherwise false. All of this happens in one atomic step, so it is guaranteed not to be preempted by another thread.
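For comparison, the same spin-lock idea written with C++11 atomics instead of the GCC builtin (a sketch; the explicit memory orders are an assumption of mine, not part of the answer):

#include <atomic>
#include <vector>

std::atomic<bool> flag{false};
std::vector<char> my_vector;

void worker()
{
    while (true)
    {
        bool expected = false;
        // Atomically: if flag == false, set it to true and return true.
        if (flag.compare_exchange_strong(expected, true,
                                         std::memory_order_acquire))
        {
            // manipulate the vector
            flag.store(false, std::memory_order_release);  // release ownership
        }
    }
}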
I don't have the expertise to answer your entire question but your last bullet is incorrect in cases in which reads are non-atomic by default.
A context switch can happen anywhere: the reader can get context-switched partway through a read, the writer can get switched in and do the full write, and then the reader would finish its read. The reader would see neither the first value nor the second, but potentially some wildly inaccurate intermediate value.
Assume there are two threads running Thread1() and Thread2() respectively. Thread 1 just sets a global flag to tell thread 2 to quit, and thread 2 periodically checks whether it should quit.
volatile bool is_terminate = false;

void Thread1()
{
    is_terminate = true;
}

void Thread2()
{
    while (!is_terminate) {
        // ...
    }
}
I want to ask whether the above code is safe, assuming that access to is_terminate is atomic. I already know many materials state that volatile cannot ensure thread safety in general. But in a situation where only one atomic variable is shared, do we really need to protect the shared variable with a lock?
It is probably sort of thread-safe.
Thread safety tends to depend on context. Updating a bool is always thread safe, if you never read from it.
And if you do read from it, then the answer depends on when you read from it, and what that read signifies.
On some CPUs, but not all, writes to an object of type bool will be atomic. x86 CPUs will generally make it atomic, but others might not. If the update isn't atomic, then adding volatile won't help you.
But the next problem is reordering. The compiler (and CPU) will carry out reads/writes to volatile variables in the order specified, without any reordering. So that's good.
But it makes no guarantee about reordering one volatile memory access relative to all the non-volatile ones. So a common example is that you define some kind of flag to protect access to a resource, you make the flag volatile, and then the compiler moves the resource access up so it happens before you check the flag. It's allowed to do that, because it's not reordering the internal ordering of two volatile accesses, but merely a volatile and a non-volatile one.
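A tiny sketch of that hazard (the names are illustrative):

int resource = 0;             // non-volatile shared data
volatile bool ready = false;  // the volatile flag

int consumer()
{
    while (!ready)
        ;                  // volatile reads happen in order, but...
    return resource;       // ...this non-volatile load may legally be hoisted
                           // above the loop, reading the resource too early
}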
Honestly, the question I'd ask is why not just do it properly?
It is possible that volatile will work in this situation, but why not save yourself the trouble, and make it clearer that it's correct? Slap a memory barrier around it instead.
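Doing it properly would presumably look something like this with C++11 atomics (a sketch of one option; the answer itself only says to use a barrier):

#include <atomic>

std::atomic<bool> is_terminate{false};  // instead of volatile bool

void Thread1()
{
    is_terminate.store(true);  // sequentially consistent by default
}

void Thread2()
{
    while (!is_terminate.load()) {
        // ...
    }
}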
It is not thread safe.
If the threads run, for example, on CPUs with separate caches, there are no language rules saying that those caches must be synchronized when a volatile variable is written. The other thread may not see the change for a very long time, if ever.
To answer in another way:
If volatile is enough to be thread safe, why is C++0x adding an entire chapter with atomic operations?
http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2006/n2047.html
First, volatile is used to disable compiler optimizations in C/C++; see this for understanding volatile.
Whether the access is atomic comes down to the alignment and size of is_terminate: if it is no larger than the machine's native word size and properly aligned, then reads and writes of it are atomic.
In your context, with or without volatile, thread 2 may read the old value for a while after thread 1 has modified it, but thread 2 will read the new value eventually.
If eventual visibility is OK for you, then your code is thread safe.
It's safe because one thread is only reading and one is only writing.
The threads aren't really sharing that flag: one is reading, one is writing. You can't have a race, because the writing thread will never write a bad result and the reading thread will never read a bad result. Simple.
No, it is not. It could be thread safe if the value access were atomic, but in C++ you can't assume that variable access is thread safe unless you use some compiler-specific constructs or synchronization primitives.
It is still not safe; you should use synchronization to access is_terminate. Access to the bool is not guaranteed to be an atomic operation.
I believe this code is safe as long as only one of the threads ever writes the bool (and you have already mentioned that value access is atomic).
The big problem with assuming that the volatile keyword imposes any kind of thread safety is that the C and C++ standards have no concept of threads in the abstract machine they describe.
The guarantees that the standard imposes on the volatile keyword are only valid within a thread, not between multiple threads.
This leaves implementers at full liberty to do whatever they please when it comes to threads. If they chose to implement the volatile keyword to be safe across threads, then you're lucky. More often than not, that's not the case though.
This code doesn't seem to be thread safe, and the reason is easy to explain.
The problem lies in this line:
is_terminate = true;
Even if access to is_terminate is atomic, the statement as a whole is not atomic: it involves more than one operation, such as loading is_terminate and then updating it.
The gotcha is that is_terminate may be loaded but not yet updated when execution switches to another thread.
Thread 2 then expects the value to be true but won't see it.
Make sure is_terminate = true; is atomic, so lock it.
Hope it helps.