Boost, mutex concept - C++

I am new to multi-threaded programming and confused about how a mutex works. The Boost.Thread manual states:
Mutexes guarantee that only one thread can lock a given mutex. If a code section is surrounded by a mutex locking and unlocking, it's guaranteed that only a thread at a time executes that section of code. When that thread unlocks the mutex, other threads can enter to that code region:
My understanding is that a mutex is used to protect a section of code from being executed by multiple threads at the same time, NOT to protect the memory address of a variable. It's hard for me to grasp the concept: what happens if I have two different functions trying to write to the same memory address?
Is there something like this in Boost library:
lock the memory address of a variable, e.g., double x: lock(x), so that other threads running a different function cannot write to x;
do something with x, e.g., x = x + rand();
unlock(x)
Thanks.

The mutex itself only ensures that only one thread of execution can lock the mutex at any given time. It's up to you to ensure that modification of the associated variable happens only while the mutex is locked.
C++ does give you a way to do that a little more easily than in something like C. In C, it's pretty much up to you to write the code correctly, ensuring that anywhere you modify the variable, you first lock the mutex (and, of course, unlock it when you're done).
In C++, it's pretty easy to encapsulate it all into a class with some operator overloading:
#include <mutex>

class protected_int {
    int value;     // this is the value we're going to share between threads
    std::mutex m;  // guards all writes to value
public:
    operator int() { return value; }  // we'll assume no lock is needed to read
    protected_int &operator=(int new_value) {
        std::lock_guard<std::mutex> guard(m);  // locks m; unlocks at scope exit
        value = new_value;
        return *this;
    }
};
Obviously I'm simplifying that a lot (to the point that it's probably useless as it stands), but hopefully you get the idea, which is that most of the code just treats the protected_int object as if it were a normal variable.
When you do that, however, the mutex is automatically locked every time you assign a value to the object and unlocked immediately afterward. Of course, that's pretty much the simplest possible case. In many situations you need to do something like lock the mutex, modify two (or more) variables in unison, then unlock. Regardless of the complexity, the idea remains the same: centralize all the code that modifies the data in one place, so you don't have to worry about locking the mutex in the rest of the code. Where you do have two or more variables tied together like that, you generally have to lock the mutex to read, not just to write; otherwise you can easily read an inconsistent state in which one of the variables has been modified but the other hasn't.
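Here's a minimal sketch of that multi-variable case (the class name and members are invented for illustration): one mutex guards a pair of coordinates, and readers lock it too, so they never observe a half-updated point.
#include <mutex>
#include <utility>

class protected_point {
    double x = 0.0, y = 0.0;
    mutable std::mutex m;  // mutable so read() can lock in a const member
public:
    void move_to(double new_x, double new_y) {
        std::lock_guard<std::mutex> guard(m);  // both members change in unison
        x = new_x;
        y = new_y;
    }
    std::pair<double, double> read() const {
        std::lock_guard<std::mutex> guard(m);  // lock to read as well
        return {x, y};  // never a mix of old x and new y
    }
};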

No, there is nothing in Boost (or elsewhere) that will lock memory like that.
You have to protect the code that accesses the memory you want protected.
what happens if I have 2 different functions trying to write to the same memory address.
Assuming you mean 2 functions executing in different threads, both functions should lock the same mutex, so only one of the threads can write to the variable at a given time.
Any other code that accesses (either reads or writes) the same variable will also have to lock the same mutex; failing to do so results in a data race and therefore undefined behavior.
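A minimal sketch of that arrangement (the function names are invented for illustration), using Boost.Thread's scoped_lock:
#include <boost/thread/mutex.hpp>
#include <cstdlib>

double x = 0.0;        // the shared variable
boost::mutex x_mutex;  // the one mutex every access to x must lock

void writer_a() {
    boost::mutex::scoped_lock lock(x_mutex);
    x = x + std::rand();  // safe: no other locked section runs concurrently
}

void writer_b() {
    boost::mutex::scoped_lock lock(x_mutex);  // the SAME mutex instance
    x = x * 2.0;
}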

It is possible to perform non-blocking atomic operations on certain types using Boost.Atomic. These operations are generally much faster than taking a mutex. For example, to add something atomically you can do:
#include <boost/atomic.hpp>

boost::atomic<int> n(10);
n.fetch_add(5, boost::memory_order_acq_rel);
This code atomically adds 5 to n.
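fetch_add is provided for integral types; for the double in the question, the usual idiom is a compare-and-swap loop (a sketch, assuming Boost.Atomic's compare_exchange_weak):
#include <boost/atomic.hpp>
#include <cstdlib>

boost::atomic<double> x(0.0);

void add_random() {
    double old_val = x.load(boost::memory_order_relaxed);
    double new_val;
    do {
        new_val = old_val + std::rand();  // compute from the snapshot
        // on failure, old_val is reloaded and we retry
    } while (!x.compare_exchange_weak(old_val, new_val,
                                      boost::memory_order_acq_rel,
                                      boost::memory_order_relaxed));
}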

In order to protect a memory address shared by multiple threads in two different functions, both functions have to use the same mutex ... otherwise you will run into a scenario where threads in either function can indiscriminately access the same "protected" memory region.
So boost::mutex works just fine for the scenario you describe, but you just have to make sure that for a given resource you're protecting, all paths to that resource lock the exact same instance of the boost::mutex object.
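The classic mistake is to declare the mutex locally, which gives every call its own lock and protects nothing (a deliberately wrong sketch for illustration):
#include <boost/thread/mutex.hpp>

double x = 0.0;

void broken_writer() {
    boost::mutex m;                     // WRONG: a fresh mutex per call,
    boost::mutex::scoped_lock lock(m);  // so no two threads ever contend
    x += 1.0;                           // effectively unprotected
}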

I think the detail you're missing is that a "code section" is an arbitrary section of code. It can be two functions, half a function, a single line, or whatever.
So the portions of your two different functions that hold the same mutex while they access the shared data are "a code section surrounded by a mutex locking and unlocking", and therefore "it's guaranteed that only a thread at a time executes that section of code".
Also, this is explaining one property of mutexes. It is not claiming this is the only property they have.

Your understanding is correct with respect to mutexes. They protect the section of code between the locking and unlocking.
As for what happens when two threads write to the same memory location: the writes are serialized. One thread writes its value, then the other overwrites it. The problem is that you don't know which thread writes first (or last), so the code is not deterministic.
Finally, to protect a variable itself, the nearest concept is atomic variables. Atomic variables are protected by either the compiler or the hardware and can be modified atomically: the three phases you mention (read, modify, write) happen as one indivisible step. Take a look at Boost's atomic_count.

Related

Does a mutex lock itself, or the memory positions in question?

Let's say we've got a global variable, and a global non-member function.
int GlobalVariable = 0;
void GlobalFunction();
and we have
std::mutex MutexObject;
then inside one of the threads, we have this block of code:
{
    std::lock_guard<std::mutex> lock(MutexObject);
    GlobalVariable++;
    GlobalFunction();
}
now, inside another thread running in parallel, what happens if we do something like this:
{
    //std::lock_guard<std::mutex> lock(MutexObject);
    GlobalVariable++;
    GlobalFunction();
}
So the question is: does a mutex only prevent itself from being acquired while it is owned by another thread, without caring what the critical section actually accesses? Or does the compiler (or, at run time, the OS) actually mark the memory locations accessed in the critical section as blocked for now by MutexObject?
My guess is the former, but I need to hear from an experienced programmer; thanks for taking the time to read my question.
It’s the former. There’s no relationship between the mutex and the objects you’re protecting with the mutex. (In general, it's not possible for the compiler to deduce exactly which objects a given block of code will modify.) The magic behind the mutex comes entirely from the temporal ordering guarantees it makes: that everything the thread does before releasing the mutex is visible to the next thread after it’s grabbed the mutex. But the two threads both need to actively use the mutex for that to happen.
A system which actually cares about what memory locations a thread has accessed and modified, and builds safely on top of that, is “transactional memory”. It’s not widely used.
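Concretely, the second snippet above must take the same lock; a minimal sketch of the corrected pattern:
#include <mutex>

int GlobalVariable = 0;
void GlobalFunction();
std::mutex MutexObject;

void either_thread() {
    // Every thread that touches GlobalVariable must lock the SAME mutex;
    // the mutex itself knows nothing about which memory it "protects".
    std::lock_guard<std::mutex> lock(MutexObject);
    GlobalVariable++;
    GlobalFunction();
}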

Multithreading - synchronised value vs mutexes?

When writing multithreaded code, I often need to read and write shared memory. To prevent data races, the go-to solution would be to use something like lock_guard. However, recently I came across the concept of "synchronized values", which are usually implemented along the lines of:
#include <mutex>

template <typename T>
class SynchronizedValue {
    T value;
    std::mutex lock;
public:
    // Helper functions that take the lock around every read/write of value.
    void SetValueTo(const T& v) { std::lock_guard<std::mutex> g(lock); value = v; }
    T GetValue() { std::lock_guard<std::mutex> g(lock); return value; }
};
This SynchronizedValue class has a method SetValueTo which locks the mutex, writes to the value, and unlocks the mutex, making sure you can write to the value safely without any data races.
This makes writing multithreaded code so much easier! However, are there any drawbacks / performance overhead of using these synchronised values in contrast to mutexes / lock_guard?
are there any drawbacks / performance overhead of using these SynchronisedValues...?
Before you ask whether there is any drawback, you first ought to ask whether there is any benefit. The standard C++ library already defines std::atomic<T>. You didn't say what /* public helper functions...*/ you had in mind, but if they're just getters and setters for value, then what does your SynchronizedValue<T> class offer that you don't already get from std::atomic<T>?
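For comparison, the std::atomic<T> equivalents of such getters and setters (a sketch; std::atomic<T> requires T to be trivially copyable):
#include <atomic>

std::atomic<int> value{0};

void SetValueTo(int v) { value.store(v); }  // same effect, no mutex needed
int GetValue() { return value.load(); }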
There's an important reason why "atomic" variables don't eliminate the need for mutexes, by the way. Mutexes aren't just about ensuring "visibility" of memory updates: the most important way to think about mutexes is that they can protect relationships between data in a program.
For example, imagine a program that has multiple containers for some class of object, that needs to move objects from container to container, and in which it is important for some thread to occasionally count all of the objects and be guaranteed an accurate count.
The program can use a mutex to make that possible. It just has to obey two simple rules: (1) no thread may remove an object from any container unless it has the mutex locked, and (2) no thread may release the mutex until every object is in a container. If all of the threads obey those two rules, then the thread that counts the objects is guaranteed to find all of them if it locks the mutex before it starts counting.
The thing is, you can't guarantee that just by making all of the variables atomic, because atomic doesn't protect any relationship between the variable in question and any other variable. At most, it only protects relationships between the value of the variable before and after some "atomic" operation such as an atomic increment.
When there's more than one variable participating in the relationship, then you must have a mutex (or something equivalent to a mutex.)
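A minimal sketch of that counting invariant (the container names are invented for illustration):
#include <cstddef>
#include <mutex>
#include <vector>

std::vector<int> box_a, box_b;  // two containers holding the objects
std::mutex boxes_mutex;         // guards the "every object is in a box" invariant

void move_one_from_a_to_b() {
    std::lock_guard<std::mutex> guard(boxes_mutex);  // rule 1: lock before removing
    if (!box_a.empty()) {
        int obj = box_a.back();
        box_a.pop_back();
        box_b.push_back(obj);  // rule 2: object is back in a box before we unlock
    }
}

std::size_t count_all() {
    std::lock_guard<std::mutex> guard(boxes_mutex);  // nothing is "in flight" now
    return box_a.size() + box_b.size();
}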
If you look under the hood at what is actually happening in each case you just find different ways of saying and doing the same thing.

How to make an object in C++ volatile?

struct MyClass {
    ~MyClass() {
        // Asynchronously invoke deletion (erase) of entries from my_map;
        // different entries are deleted in different threads.
        // Need to spin, as 'this' object is shared among threads and
        // destruction of the object would result in seg faults.
        while (my_map.size() > 0);  // This spins forever due to compiler optimization.
    }
    unordered_map<key, value> my_map;
};
I have the above class, in which elements of the unordered map are deleted asynchronously in the destructor, and I must spin/sleep because the object is shared among other threads. I cannot declare my_map as volatile as it results in compilation errors. What else can I do here? How do I tell the compiler that my_map.size() will reach 0 at some point in time? Please do not tell me why/how this design is bad; I cannot change the design, for reasons I cannot explain without writing thousands of lines of code here.
Edit: my_map is protected using a version of spinlock, so threads do grab the spinlock before erasing the entries. The while(my_map.size() > 0); was the only "naive" spin in the code. I converted it to grab the spinlock and then check the size (in a loop), and it worked. Though using a condition_variable would be the right way of doing it, we use an asynchronous programming model (like SEDA) that forbids any sleeping/yielding calls.
volatile is not the solution to this problem. volatile has exactly three uses: 1. accessing memory-mapped devices in a driver, 2. communicating with signal handlers, 3. setjmp usage.
Read the following, over and over until it sinks in. volatile is useless in multithreading.
A naive spin lock like that has three problems:
The compiler is permitted to cache the result of my_map.size(), which is exactly the "spin forever" behavior you're seeing.
In the classic case, you have the risk of a race condition: thread A may check the lock variable and find the resource accessible, but then get pre-empted before setting the lock variable. Along comes thread B, which also finds the lock variable showing the resource as accessible, so it locks it and starts to access the resource. Then thread A wakes back up, locks the variable again, and also accesses the resource.
There is a data write-ordering problem. If a protected variable is written and a lock variable is then changed, you have no guarantee that a different thread won't observe the lock variable's new value while still seeing the protected variable's old value. Both the compiler and out-of-order execution on the CPU are permitted to do this.
volatile only solves the first of these problems; it does nothing to address the other two. One caveat: by default, MSVC on x86/x64 adds memory-ordering guarantees to volatile accesses, even though the standard doesn't require it. That happens to solve the third problem, but it still doesn't fix the second one.
The only solutions to all three of these problems involve using correct synchronization primitives: std::atomic<> if you really must spin, or preferably std::mutex, perhaps with std::condition_variable, for a lock that puts the thread to sleep until something interesting happens.
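For the destructor in the question, a sketch of the std::atomic variant (the counter name is invented; it assumes each worker decrements it after erasing its entry):
#include <atomic>
#include <cstddef>

struct MyClass {
    std::atomic<std::size_t> pending{0};  // hypothetical count of entries left to erase

    ~MyClass() {
        // Workers do pending.fetch_sub(1, std::memory_order_release) after each
        // erase; the acquire load pairs with those releases, so the compiler
        // cannot cache the value and the erases are visible before we return.
        while (pending.load(std::memory_order_acquire) > 0) {
            // busy-wait; a condition_variable would sleep here instead
        }
    }
};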

C++ constructor memory synchronization

Assume that I have code like:
void InitializeComplexClass(ComplexClass* c);

class Foo {
public:
    Foo() {
        i = 0;
        InitializeComplexClass(&c);
    }
private:
    ComplexClass c;
    int i;
};
If I now do something like Foo f; and hand a pointer to f over to another thread, what guarantees do I have that any stores done by InitializeComplexClass() will be visible to the CPU executing the other thread that accesses f? What about the store writing zero into i? Would I have to add a mutex to the class, take a writer lock on it in the constructor, and take corresponding reader locks in any methods that access the members?
Update: Assume I hand a pointer over to a bunch of other threads once the constructor has returned. I'm not assuming that the code is running on x86, but could be instead running on something like PowerPC, which has a lot of freedom to do memory reordering. I'm essentially interested in what sorts of memory barriers the compiler has to inject into the code when the constructor returns.
In order for the other thread to know about your new object, you have to hand the object over / signal the other thread somehow, and signaling a thread means writing to memory. Both x86 and x64 perform all memory writes in order; the CPU does not reorder these operations with respect to each other. This is called "Total Store Ordering" (TSO): the CPU's write queue works first-in, first-out.
Given that you create an object first and then pass it on to another thread, these writes to memory also occur in order, and the other thread always sees them in the same order. By the time the other thread learns about the new object, the contents of the object are guaranteed to be visible to that thread (they would have been even earlier, had the thread somehow known where to look).
In conclusion, you do not have to synchronise anything this time. Handing over the object after it has been initialised is all the synchronisation you need.
Update: On non-TSO architectures you do not have this TSO guarantee, so you need to synchronise. Use the MemoryBarrier() macro (or any interlocked operation), or some synchronisation API. Signalling the other thread through such an API also synchronises; otherwise it would not be a synchronisation API.
x86 and x64 CPUs may reorder writes past reads, but that is not relevant here. Just for better understanding: writes can be ordered after reads because writes to memory go through a write queue, and flushing that queue may take some time. On the other hand, the read cache is always consistent with the latest updates from other processors (which have gone through their own write queues).
This topic has been made unbelievably confusing for so many, but in the end there are only a few things an x86/x64 programmer has to worry about:
- First, the existence of the write queue (and one should not be worried about the read cache at all).
- Second, concurrent writes and reads of the same variable in different threads when the variable is not of atomic width, which can cause data tearing; this is where you need synchronisation mechanisms.
- Finally, concurrent updates to the same variable from multiple threads, for which we have interlocked operations, or again synchronisation mechanisms.
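On a non-TSO architecture such as the PowerPC the question mentions, the portable way to hand the object over is a release/acquire pair; a sketch, with an invented atomic pointer as the handoff channel:
#include <atomic>

struct Foo;  // as defined in the question
Foo* make_foo();

std::atomic<Foo*> handoff{nullptr};  // hypothetical channel to the other thread

void producer() {
    Foo* f = make_foo();  // constructor runs: i = 0, InitializeComplexClass(&c)
    // release: everything the constructor stored happens-before this store
    handoff.store(f, std::memory_order_release);
}

void consumer() {
    Foo* f;
    // acquire: once we see the pointer, we also see all writes made before it
    while ((f = handoff.load(std::memory_order_acquire)) == nullptr) { /* spin */ }
    // *f is now safe to use
}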
If you do:
Foo f;
// HERE: InitializeComplexClass() and "i" member init are guaranteed to be completed
passToOtherThread(&f);
/* From this point, you cannot guarantee the state/members
of 'f' since another thread can modify it */
If you're passing an instance pointer to another thread, you need to implement guards so that both threads can interact with the same instance. If you ONLY plan to use the instance on the other thread, you do not need guards. However, do not pass a pointer to a stack object as in your example; pass a new instance like this:
passToOtherThread(new Foo());
And make sure to delete it when you are done with it.
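A sketch of an alternative that avoids the manual delete (it assumes passToOtherThread can accept a shared_ptr):
#include <memory>

struct Foo { /* as defined in the question */ };

void passToOtherThread(std::shared_ptr<Foo> f);  // hypothetical signature

void launch() {
    auto f = std::make_shared<Foo>();
    passToOtherThread(f);  // ownership is shared; Foo is deleted
                           // when the last owner lets go
}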

Shared Variables in C++11

So I took an OS class last semester and we had a concurrency/threading project. It was an airport sim that landed planes and had them take off in the direction the wind was blowing. We had to do it in Java. Now that finals are over and I'm bored, I'm trying to do it in C++11. In Java I used a synchronized variable for the wind (0-360) in main and passed it to the 3 threads I was using. My question is: can you do that in C++11? It's a basic reader/writer: one thread writes/updates the wind, the other two (takeoff/land) read it.
I got it working by having a global wind variable in my "threads.cpp" implementation file. But is there a way to pass a variable to as many threads as I want and have all of them keep up with it? Or is it actually better to just use the global variable and not pass anything? (Why/why not?) I was looking at std::ref() but that didn't work.
EDIT: I'm already using mutex and lock_guard. I'm just trying to figure out how to pass and keep a variable up to date in all threads. Right now it only updates in the write thread.
You can use a std::mutex with std::lock_guard to synchronize access to the shared data. Or if the shared data fits in an integer, you can use std::atomic<int> without locking.
If you want to avoid global variables, simply pass the address of the shared state to the thread functions when you launch them. For example:
#include <atomic>
#include <thread>

void thread_entry1(std::atomic<int>* val) { /* read/write *val */ }
void thread_entry2(std::atomic<int>* val) { /* read/write *val */ }

int main() {
    std::atomic<int> shared_value{0};
    std::thread t1(thread_entry1, &shared_value), t2(thread_entry2, &shared_value);
    t1.join(); t2.join();
}
Using std::mutex with std::lock_guard mimics what a Java synchronized variable does (only in Java this happens implicitly, without you knowing; in C++ you do it explicitly).
However, since you have one producer (there is just one wind direction) and otherwise only consumers, it suffices to write to, e.g., a std::atomic<int> variable with relaxed ordering, and to read from that variable in each consumer, again with relaxed ordering. Unless you require that the global view of all airplanes be consistent (but then you would have to run a lockstep simulation, which makes threading pointless), there is no need for further synchronization. You only have to make sure that any value an airplane reads at any time is eventually correct and that no garbled intermediate value can be observed. In other words, you need an atomic update.
Relaxed memory ordering is sufficient too, since if all you read is one value, you do not need any happens-before guarantees.
An atomic update (or rather, atomic write) is at least an order of magnitude, if not more, faster. Atomic reads and writes with relaxed ordering are indeed plain normal reads and writes on many (most) mainstream architectures.
The variable need not be global; you can just as well keep it in the scope of the main thread's simulation loop and pass a reference (or pointer) to the threads.
Alternatively, you might create the wind object on the heap and hold it in a std::shared_ptr. Pass this pointer to all interested threads and use a std::mutex with std::lock_guard to change it.
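A sketch of the relaxed-atomic wind variable described above (the thread bodies and loop bounds are invented for illustration):
#include <atomic>
#include <cstdlib>
#include <thread>

std::atomic<int> wind{0};  // wind direction, 0-360

void wind_updater() {      // the single producer
    for (int tick = 0; tick < 1000; ++tick)
        wind.store(std::rand() % 360, std::memory_order_relaxed);
}

void takeoff() {           // one of the consumers
    for (int tick = 0; tick < 1000; ++tick) {
        int w = wind.load(std::memory_order_relaxed);  // never torn, eventually current
        (void)w;  // direct the departing plane accordingly
    }
}

int main() {
    std::thread writer(wind_updater), reader(takeoff);
    writer.join();
    reader.join();
}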