Suppose I have a struct with some members, and several threads that read from it. As long as they only read, I don't need to do anything. But if I ever want to write to a member, do I have to use atomics and force everyone to explicitly load the atomic everywhere, just because of one occasional write?
Make the variables private and access them through getters and setters, guarding each access with a std::scoped_lock:
in cpp:
#include <mutex>

class Example
{
public:
    int getInteger() const
    {
        std::scoped_lock<std::mutex> aLock(aMutex);
        return m_integer;
    }
    void setInteger(int iInteger)
    {
        std::scoped_lock<std::mutex> aLock(aMutex);
        m_integer = iInteger;
    }
private:
    mutable std::mutex aMutex; // mutable, so the const getter can lock it
    int m_integer = 0;
};
If you have multiple readers and writers, you need synchronization. In your case, because the reads and writes are not ordered with respect to each other, you must synchronize them. Imagine a read and a write happening at the same time: the reader thread may load the old value into a register (or some other location) just as the writer stores the new one, so the reader ends up working with a stale or inconsistent value.
The most common synchronization primitive is the mutex.
example:
#include <mutex>

std::mutex mutex;

mutex.lock();
// critical section: read or write the shared data here
mutex.unlock();
Another useful primitive is the reader/writer lock.
With this method, many threads can hold the read lock at the same time and read the value concurrently, but when one thread takes the write lock, no other thread can read or write until the writer releases it.
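Since C++17 the standard library ships this primitive as std::shared_mutex; a minimal sketch (all names here are illustrative):

```cpp
#include <mutex>
#include <shared_mutex>

std::shared_mutex rw_mutex;
int shared_value = 0;

// Any number of threads may hold the shared (read) lock at once.
int read_value() {
    std::shared_lock<std::shared_mutex> lock(rw_mutex);
    return shared_value;
}

// Only one thread may hold the exclusive (write) lock, and no reader
// may hold the shared lock while it does.
void write_value(int v) {
    std::unique_lock<std::shared_mutex> lock(rw_mutex);
    shared_value = v;
}
```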
Another option is a spin lock. With this method, a thread runs a busy loop until it gains access to the variable. Because the thread never blocks, a spin lock avoids the slow switch into kernel mode that a contended mutex can incur, so it can be faster; but beware, you should hold a spin lock only for very short periods of time.
There are many other synchronization methods, but I recommend you use a mutex or atomic variables.
I have a situation where I have multiple threads accessing the clipboard and performing an action with some sleeps. Basically I don't want to copy the wrong thing to the clipboard and perform something in an incorrect fashion.
Currently, I have an idea of using a mutex to lock a std::string variable and store the required data in it, then to the clipboard. After I finish the action, I unlock it. Then the other thread accesses the variable, then clipboard, and action.
As I have never used a mutex, my question is - will it work by just doing the above?
I'm using the following library to manage the clipboard: https://github.com/dacap/clip
From a quick look at the library, you may not need to use a mutex at all. Take, for example, the set_text function:
bool set_text(const std::string& value) {
  lock l;
  if (l.locked()) {
    l.clear();
    return l.set_data(text_format(), value.c_str(), value.size());
  }
  else
    return false;
}
You can see that a lock is created (which, at the construction of lock l, locks the resource based on the OS implementation), which should provide you some thread safety.
Currently, I have an idea of using a mutex to lock a std::string variable and store the required data in it, then to the clipboard.
From the way you explain this, it is clear you do not understand how a mutex works. A mutex does not lock a variable, whatever its type; the mutex itself is what gets locked, and its defining property is that only one thread can hold the lock at a time. Using that property, you can organize your program so that only one thread at a time accesses some critical data, and that makes the access thread safe. It is not that you link a mutex to an unrelated variable and the mutex somehow locks that variable for you. Once you understand the difference, you will see that yes, you can make access to your std::string variable thread safe, but that requires every thread that accesses it to do so under the mutex lock; it is not enough for one thread to lock the mutex and expect all the others to obey that lock magically.
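As a sketch of what that means in practice (the names here are illustrative, not from the clip library): every thread that touches the string, reader or writer, must lock the same mutex:

```cpp
#include <mutex>
#include <string>

std::mutex clip_mutex;   // the mutex itself is what gets locked
std::string clip_text;   // only touch this while clip_mutex is held

void set_clip_text(const std::string& s) {
    std::lock_guard<std::mutex> lock(clip_mutex); // writer locks the mutex
    clip_text = s;
}

std::string get_clip_text() {
    std::lock_guard<std::mutex> lock(clip_mutex); // readers must lock it too
    return clip_text;
}
```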
I don't know if this is good practice or not, but I am doing work on a real-time stream of input data and using pthreads in lockstep order so that only one thread at a time performs each stage. This is my program flow for each thread:
void *my_thread(void *arg) {
    pthread_mutex_lock(&read_mutex);
    /*
        read data from a stream such as stdin into global buffer
    */
    pthread_mutex_lock(&operation_mutex);
    pthread_mutex_unlock(&read_mutex);
    /*
        perform some work on the data you read
    */
    pthread_mutex_lock(&output_mutex);
    pthread_mutex_unlock(&operation_mutex);
    /*
        Write the data to output such as stdout
    */
    pthread_mutex_unlock(&output_mutex);
    return NULL;
}
I know there is pthread conditional lock, but is my approach a good idea or a bad idea? I tested this on various size streams and I am trying to think of corner cases to make this deadlock, produce race condition, or both. I know mutexes don't guarantee thread order execution but I need help to think of scenarios that will break this.
UPDATE:
I stepped away from this, but recently had some time to rethink it. I rewrote the code using C++ threads and mutexes. I am trying to use condition variables, but have had no luck so far. This is my approach to the problem:
void my_thread_v2() {
    // Let only 1 thread read in at a time
    std::unique_lock<std::mutex> stdin_lock(stdin_mutex);
    stdin_cond.wait(stdin_lock);
    /*
        Read from stdin stream
    */
    // Unlock the stdin mutex
    stdin_lock.unlock();
    stdin_cond.notify_one();
    // Lock step
    std::unique_lock<std::mutex> operation_lock(operation_mutex);
    operation_cond.wait(operation_lock);
    /*
        Perform work on the data that you read in
    */
    operation_lock.unlock();
    operation_cond.notify_one();
    std::unique_lock<std::mutex> stdout_lock(stdout_mutex);
    stdout_cond.wait(stdout_lock);
    /*
        Write the data out to stdout
    */
    // Unlock the stdout mutex
    stdout_lock.unlock();
    stdout_cond.notify_one();
}
I know the issue with this code is that there is no way to signal the first condition variable. I definitely am not understanding the proper use of condition variables. I have looked at various examples on cppreference, but I can't seem to get away from the thought that the initial approach may be the only way of doing what I want to do, which is to lock-step the threads. Can someone shed some light on this?
UPDATE 2:
So I implemented a simple Monitor class that utilizes C++ condition_variable and unique_lock:
class ThreadMonitor {
public:
    ThreadMonitor() : is_occupied(false) {}

    void Wait() {
        std::unique_lock<std::mutex> lock(mx);
        while (is_occupied) {
            cond.wait(lock);
        }
        is_occupied = true;
    }

    void Notify() {
        std::unique_lock<std::mutex> lock(mx);
        is_occupied = false;
        cond.notify_one();
    }

private:
    bool is_occupied;
    std::condition_variable cond;
    std::mutex mx;
};
This is my initial approach, assuming I have three ThreadMonitors called stdin_mon, operation_mon, and stdout_mon:
void my_thread_v3() {
    // Let only 1 thread read in at a time
    stdin_mon.Wait();
    /*
        Read from stdin stream
    */
    stdin_mon.Notify();
    operation_mon.Wait();
    /*
        Perform work on the data that you read in
    */
    operation_mon.Notify();
    stdout_mon.Wait();
    /*
        Write the data out to stdout
    */
    // Unlock the stdout
    stdout_mon.Notify();
}
The issue with this was that the data was still being corrupted, so I had to change back to the original logic of lock-stepping the threads:
void my_thread_v4() {
    // Let only 1 thread read in at a time
    stdin_mon.Wait();
    /*
        Read from stdin stream
    */
    operation_mon.Wait();
    stdin_mon.Notify();
    /*
        Perform work on the data that you read in
    */
    stdout_mon.Wait();
    operation_mon.Notify();
    /*
        Write the data out to stdout
    */
    // Unlock the stdout
    stdout_mon.Notify();
}
I am beginning to suspect that if thread order matters, this is the only way to handle it. I am also questioning what the benefit is of using a Monitor that wraps a condition_variable over just using a mutex.
The problem with your approach is that you still can modify the data while another thread is reading it:
Thread A acquires read, then operation, releases read again, and starts writing some data, but is interrupted.
Now thread B runs, acquires read, and can read the partially modified, possibly inconsistent data!
I assume you want to allow multiple threads reading the same data without blocking, but as soon as writing, the data shall be protected. Finally, while outputting data, we are just reading the modified data again and thus can do this concurrently again, but need to prevent simultaneous write.
Instead of having multiple mutex instances, you can do this better with a read/write mutex:
Any function only reading the data acquires the read lock.
Any function intending to write acquires write lock right from the start (be aware that first acquiring read, then write lock without releasing the read lock in between can result in dead-lock; if you release read lock in between, though, your data handling needs to be robust against data being modified by another thread in between as well!).
Reducing write lock to shared without releasing in between is safe, so we can do so now before outputting. If data must not be modified in between writing data and outputting it, we even need to do this without entirely releasing the lock.
The last point is problematic, as it is supported neither by the C++ standard's thread support library nor by the pthreads library.
For C++, boost provides a solution; if you don't want to or cannot (C!) use boost, a simple but possibly not most efficient approach would be protecting acquisition of the write lock with another mutex:
acquire standard (non-rw) mutex protecting the read write mutex
acquire RW mutex for writing
release protecting mutex
read data, write modified data
acquire protecting mutex
release RW mutex
re-acquire RW mutex for reading; it does not matter if another thread acquired for reading as well, we only need to protect against locking for write here
release protecting mutex
output
release RW mutex (no need to protect)...
Non-modifying functions can just acquire the read lock without any further protection, there aren't any conflicts with...
In C++, you'd prefer using the thread support library and additionally gain platform-independent code for free; in C, you would use a standard pthread mutex for protecting acquisition of the write lock, just as described above, and the RW variants from pthreads for the read/write lock.
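The recipe above can be sketched with C++17's std::shared_mutex. Note one deliberate deviation, flagged here as an assumption of this sketch: the protecting mutex is held for the whole write rather than released after the write lock is acquired, so that no writer can ever block on the RW lock while holding the gate (which could otherwise deadlock against the downgrade step). All names are illustrative:

```cpp
#include <mutex>
#include <shared_mutex>

std::mutex write_gate;   // serializes writers and protects the downgrade
std::shared_mutex rw;    // the read/write lock itself
int data = 0;

void modify_and_output() {
    std::unique_lock<std::mutex> gate(write_gate); // only one writer past here
    rw.lock();             // exclusive: no readers, no other writers

    ++data;                // read data, write modified data

    // Downgrade: because we still hold write_gate, no other writer can
    // grab rw between unlock() and lock_shared(); readers slipping into
    // the gap are harmless since they only read.
    rw.unlock();
    rw.lock_shared();
    gate.unlock();         // writers may queue up again now

    // ... output data while holding the shared lock ...
    rw.unlock_shared();
}

// Pure readers need no extra protection.
int read_data() {
    std::shared_lock<std::shared_mutex> lock(rw);
    return data;
}
```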
I'm wondering what the best practice is for locking and unlocking mutexes for variables within an object that is shared between threads.
This is what I have been doing, and it seems to work just fine so far; I'm just wondering whether it is excessive or not:
class sharedobject
{
private:
    bool m_Var1;
    pthread_mutex_t var1mutex;
public:
    sharedobject()
        : m_Var1(false)
    {
        // PTHREAD_MUTEX_INITIALIZER is only valid for static initialization;
        // in a constructor, use pthread_mutex_init instead
        pthread_mutex_init(&var1mutex, NULL);
    }
    bool GetVar1()
    {
        pthread_mutex_lock(&var1mutex);
        bool temp = m_Var1;
        pthread_mutex_unlock(&var1mutex);
        return temp;
    }
    void SetVar1(bool status)
    {
        pthread_mutex_lock(&var1mutex);
        m_Var1 = status;
        pthread_mutex_unlock(&var1mutex);
    }
};
This isn't my actual code, but it shows how I am using mutexes: one for every variable in an object that is shared between threads. The reason I don't have a single mutex for the entire object is that one thread might take seconds to complete an operation on part of the object, while another thread checks the object's status, and yet another thread gets data from it.
My question is: is it good practice to create a mutex for every variable within an object that is accessed by multiple threads, and to lock and unlock that variable whenever it is read or written?
I use trylock for the variables whose status I'm checking (so I don't create extra threads while the variable is being processed, and don't make the program wait for a lock).
I haven't had much experience working with threading. I would like to make the program thread safe, but it also needs to perform as well as possible.
If the members you're protecting are read-write, and may be accessed at any time by more than one thread, then what you're doing is not excessive; it's necessary.
If you can prove that a member will not change (is immutable) then there is no need to protect it with a mutex.
Many people prefer multi-threaded solutions where each thread has an immutable copy of data rather than those in which many threads access the same copy. This eliminates the need for memory barriers and very often improves execution times and code safety.
Your mileage may vary.
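A minimal sketch of the copy-per-thread idea (all names are illustrative): each thread works on its own private copy, so no locking or memory barriers are needed:

```cpp
#include <numeric>
#include <thread>
#include <vector>

// The data is passed by value, so each call operates on its own copy
// and can never observe another thread's modifications.
long sum_of(std::vector<int> data) {
    return std::accumulate(data.begin(), data.end(), 0L);
}

int run_example() {
    std::vector<int> input{1, 2, 3, 4};
    long a = 0, b = 0;
    // Each lambda captures its own copy of input.
    std::thread t1([input, &a] { a = sum_of(input); });
    std::thread t2([input, &b] { b = sum_of(input); });
    t1.join();
    t2.join();
    return (a == 10 && b == 10) ? 0 : 1;
}
```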
Consider the following situation:
I have a QThread that constantly modifies a variable (let's call it counter) and a QTimer that periodically reads counter. I know that I have to synchronize variables that might be modified by multiple threads at the same time but - do I need synchronization in this case as well, when there is only one thread reading and one thread writing a variable?
The scenario you describe isn't safe, you will still need synchronization. There are several classes in Qt that can help you with that, via a locking or a lock-free mechanism.
Take a peek at QMutex, QReadWriteLock, QSemaphore, QWaitCondition, QFuture, QFutureWatcher, QAtomicInt and QAtomicPointer. Plus you have std::atomic<T> in C++11.
Yes, you always need synchronisation — if for no other reason than that the standard says that your program has undefined behaviour if there is a data race.
You can either synchronize with a mutex that guards the counter variable, which I suppose is the "traditional" way, or you can use an std::atomic<int> variable for your counter, which you can access without creating a data race.
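A minimal sketch of the std::atomic<int> option (names are illustrative): the writer increments atomically, and the reader's plain load is race-free:

```cpp
#include <atomic>
#include <thread>

std::atomic<int> counter{0};   // safe to read and write from any thread

void writer_thread() {
    for (int i = 0; i < 100000; ++i)
        counter.fetch_add(1, std::memory_order_relaxed); // atomic increment
}

int read_counter() {
    // A plain atomic load: no data race, always a consistent value.
    return counter.load();
}
```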
Protect your counter variable with a QReadWriteLock. When you're editing the counter variable in your thread(s), have them lock it with a QWriteLocker, which will lock out any other attempts to write or read. When your main thread checks the value of counter, lock it with a QReadLocker, which will block only if a write lock is currently active.
If I have the following code:
#include <boost/date_time.hpp>
#include <boost/thread.hpp>
boost::shared_mutex g_sharedMutex;

void reader()
{
    boost::shared_lock<boost::shared_mutex> lock(g_sharedMutex);
    boost::this_thread::sleep(boost::posix_time::seconds(10));
}

void makeReaders()
{
    while (1)
    {
        boost::thread ar(reader);
        boost::this_thread::sleep(boost::posix_time::seconds(3));
    }
}

boost::thread mr(makeReaders);
boost::this_thread::sleep(boost::posix_time::seconds(5));
boost::unique_lock<boost::shared_mutex> lock(g_sharedMutex);
...
the unique lock will never be acquired, because there are always going to be readers. I want a unique_lock that, when it starts waiting, prevents any new read locks from gaining access to the mutex (called a write-biased or write-preferred lock, based on my wiki searching). Is there a simple way to do this with boost? Or would I need to write my own?
Note that I won't comment on the win32 implementation because it's far more involved and I don't have the time to go through it in detail. That being said, its interface is the same as the pthread implementation's, which means the following answer should be equally valid.
The relevant pieces of the pthread implementation of boost::shared_mutex as of v1.51.0:
void lock_shared()
{
    boost::this_thread::disable_interruption do_not_disturb;
    boost::mutex::scoped_lock lk(state_change);
    while(state.exclusive || state.exclusive_waiting_blocked)
    {
        shared_cond.wait(lk);
    }
    ++state.shared_count;
}

void lock()
{
    boost::this_thread::disable_interruption do_not_disturb;
    boost::mutex::scoped_lock lk(state_change);
    while(state.shared_count || state.exclusive)
    {
        state.exclusive_waiting_blocked=true;
        exclusive_cond.wait(lk);
    }
    state.exclusive=true;
}
The while loop conditions are the most relevant part for you. For the lock_shared function (read lock), notice how the while loop will not terminate as long as there's a thread trying to acquire (state.exclusive_waiting_blocked) or already owns (state.exclusive) the lock. This essentially means that write locks have priority over read locks.
For the lock function (write lock), the while loop will not terminate as long as there's at least one thread that currently owns the read lock (state.shared_count) or another thread owns the write lock (state.exclusive). This essentially gives you the usual mutual exclusion guarantees.
As for deadlocks: the read lock will always return as long as every write lock is eventually released after being acquired, and the write lock is guaranteed to return as long as both read locks and write locks are always eventually released once acquired.
In case you're wondering, the state_change mutex is used to ensure that there are no concurrent calls to either of these functions. I'm not going to go through the unlock functions because they're a bit more involved. Feel free to look them over yourself; you have the source after all (boost/thread/pthread/shared_mutex.hpp) :)
All in all, this is pretty much a textbook implementation, and it has been extensively tested in a wide range of scenarios (libs/thread/test/test_shared_mutex.cpp and massive use across the industry). I wouldn't worry too much as long as you use it idiomatically (no recursive locking, and always lock using the RAII helpers). If you still don't trust the implementation, you could write a randomized test that simulates whatever case you're worried about and let it run overnight on hundreds of threads. That's usually a good way to tease out deadlocks.
Now, why would you see a read lock acquired after a write lock was requested? It's difficult to say without seeing the diagnostic code you're using. Chances are that the read lock is acquired after your print statement (or whatever you're using) completes and before the state_change lock is acquired in the write thread.