This is the code where I insert values into an unordered map and also query those values at regular intervals.
#include <mutex>
#include <string>
#include <unordered_map>

class MemoryMap
{
private:
    std::unordered_map<std::string, std::string> maps_;
    std::mutex mutex_;
public:
    void AddMap(std::string key, std::string value);
    std::string GetMap(std::string key);
    void PrintMemoryMap(std::string key);
};
void MemoryMap::AddMap(std::string key, std::string value)
{
    std::unique_lock<std::mutex> lock(mutex_);
    maps_[key] = value;
}

std::string MemoryMap::GetMap(std::string key)
{
    std::unique_lock<std::mutex> lock(mutex_);
    auto it = maps_.find(key);
    if (it == maps_.end())
        return "";
    return it->second;
}
I will be using this object from two different threads, and while an insertion is happening through AddMap I want GetMap to wait for the insertion to finish. GetMap will also be called concurrently.
Is my current code sufficient to address this?
It is sufficient. The mutex guarantees that at most one thread can call AddMap or GetMap at any given time.
However, your code may not be optimal if you want concurrent reads. In C++, unordered_map is a container, and containers have the thread-safety guarantees described here: https://en.cppreference.com/w/cpp/container#Thread_safety Two threads can safely perform a lookup at the same time because it is a const operation, provided no thread is modifying the container.
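If concurrent reads are the goal, a reader/writer lock lets multiple GetMap calls run in parallel while AddMap stays exclusive. A minimal sketch, assuming C++17 (std::shared_mutex) is available:

#include <mutex>
#include <shared_mutex>
#include <string>
#include <unordered_map>

class MemoryMap
{
private:
    std::unordered_map<std::string, std::string> maps_;
    mutable std::shared_mutex mutex_; // mutable so the const reader can lock it
public:
    void AddMap(const std::string& key, const std::string& value)
    {
        std::unique_lock<std::shared_mutex> lock(mutex_); // exclusive: blocks readers and writers
        maps_[key] = value;
    }
    std::string GetMap(const std::string& key) const
    {
        std::shared_lock<std::shared_mutex> lock(mutex_); // shared: readers proceed concurrently
        auto it = maps_.find(key);
        return it == maps_.end() ? "" : it->second;
    }
};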
Related
I have a struct containing two elements.
struct MyStruct {
    int first_element_;
    std::string second_element_;
};
The struct is shared between threads and therefore requires locking. My use case requires locking access to the whole struct instead of just a specific member, so for example:
// start of Thread 1's routine
<Thread 1 "locks" struct>
<Thread 1 gets first_element_>
<Thread 1 sets second_element_>
<Thread 2 wants to access struct -> blocks>
<Thread 1 sets first_element_>
// end of Thread 1's routine
<Thread 1 releases lock>
<Thread 2 reads/sets ....>
What's the most elegant way of doing that?
EDIT: To clarify, basically this question is about how to enforce any thread using this struct to lock a mutex (stored wherever) at the start of its routine and unlock the mutex at the end of it.
EDIT2: My current (ugly) solution is to have a mutex inside MyStruct and lock that mutex at the start of each thread's routine which uses MyStruct. However, if one thread "forgets" to lock that mutex, I run into synchronization problems.
You can have a class instead of the struct and implement getters and setters for first_element_ and second_element_. Besides those class members, you will also need a member of type std::mutex.
Eventually, your class could look like this:
class Foo {
public:
    // ...
    int get_first() const noexcept {
        std::lock_guard<std::mutex> guard(my_mutex_);
        return first_element_;
    }

    std::string get_second() const noexcept {
        std::lock_guard<std::mutex> guard(my_mutex_);
        return second_element_;
    }

private:
    int first_element_;
    std::string second_element_;
    mutable std::mutex my_mutex_; // mutable so the const getters can lock it
};
Please note that the getters return copies of the data members. If you want to return references instead (like std::string const& get_second() const noexcept), then you need to be careful: the code that receives the reference to the second element holds no lock guard, so there may be a race condition in that case.
In any case, the way to go is to use std::lock_guard and std::mutex around any code that can be reached by more than one thread.
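For completeness, a matching setter under the same lock might look like this (a sketch; the name set_second is mine, not from the answer above):

// Hypothetical setter for Foo: same locking discipline as the getters.
void set_second(std::string value) {
    std::lock_guard<std::mutex> guard(my_mutex_);
    second_element_ = std::move(value);
}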
You could implement something like this that combines the lock with the data:
class Wrapper
{
public:
    Wrapper(MyStruct& value, std::mutex& mutex)
        : value(value), lock(mutex) {}

    MyStruct& value;

private:
    std::unique_lock<std::mutex> lock;
};

class Container
{
public:
    Wrapper get()
    {
        return Wrapper(value, mutex);
    }

private:
    MyStruct value;
    std::mutex mutex;
};
The mutex is locked when you call get and unlocked automatically when Wrapper goes out of scope.
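Usage might then look like the following sketch; the mutex stays locked exactly as long as the Wrapper is alive:

Container container;

void thread_routine()
{
    Wrapper w = container.get(); // mutex locked here
    w.value.first_element_ = 42;
    w.value.second_element_ = "hello";
} // w goes out of scope: mutex unlocked

A thread can only reach the MyStruct through get(), so it cannot "forget" to lock the mutex.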
I am dealing with a multi-threading project in C++ and I have a doubt about std::mutex.
Let's assume that I have a stack.
#include <exception>
#include <memory>
#include <mutex>
#include <stack>

struct empty_stack : std::exception
{
    const char* what() const throw();
};
template <typename T>
class threadsafe_stack
{
private:
    std::stack<T> data;
    mutable std::mutex m;

public:
    threadsafe_stack() {}

    threadsafe_stack(const threadsafe_stack& other)
    {
        std::lock_guard<std::mutex> lock(other.m);
        data = other.data;
    }

    threadsafe_stack& operator=(const threadsafe_stack&) = delete;

    void push(T new_value)
    {
        std::lock_guard<std::mutex> lock(m);
        data.push(new_value);
    }

    std::shared_ptr<T> pop()
    {
        std::lock_guard<std::mutex> lock(m);
        if (data.empty()) throw empty_stack();
        std::shared_ptr<T> const res(std::make_shared<T>(data.top()));
        data.pop();
        return res;
    }

    void pop(T& value)
    {
        std::lock_guard<std::mutex> lock(m);
        if (data.empty()) throw empty_stack();
        value = data.top();
        data.pop();
    }

    bool empty() const
    {
        std::lock_guard<std::mutex> lock(m);
        return data.empty();
    }
};
Someone said that using this stack avoids race conditions. However, I think the problem here is that the mutex (i.e., mutual exclusion) only protects each individual member function, not combinations of them. For example, threads can still interleave calls to push and pop, so together those functions still have a race condition.
For example:
threadsafe_stack<T> st; // global variable, for simplicity

void fun1(threadsafe_stack<T>& st)
{
    std::lock_guard<std::mutex> lock(m); // m: some extra shared mutex
    st.push(t);
    t = st.pop();
    // ...
}

void fun2(threadsafe_stack<T>& st)
{
    std::lock_guard<std::mutex> lock(m);
    T t, t2;
    t = st.pop();
    // Do big things
    st.push(t2);
    // ...
}
If threads running fun1 and fun2 use the same stack (a global variable, for simplicity), can that be a race condition?
The only solution I can think of is some kind of atomic transaction: instead of calling push(), pop(), and empty() directly, I call them via a single function that takes a "function selector" and uses only one mutex.
For example:
#define PUSH 0
#define POP 1
#define EMPTY 2

void changeStack(int kindOfFunction, T* input, bool* isEmpty)
{
    std::lock_guard<std::mutex> lock(m);
    switch (kindOfFunction) {
    case PUSH:
        push(*input);
        break;
    case POP:
        *input = *pop(); // pop() returns a shared_ptr<T>
        break;
    case EMPTY:
        *isEmpty = empty();
        break;
    }
}
Is my solution good? Or am I just overthinking this, and is the first solution my friend showed me good enough? Are there other solutions that avoid the "atomic transaction" approach I suggest?
A given mutex is a single lock and can be held by a single thread at any one time.
If a thread (T1) is holding the lock on a given object in push() another thread (T2) cannot acquire it in pop() and will be blocked until T1 releases it. At that point of release T2 (or another thread also blocked by the same mutex) will be unblocked and allowed to proceed.
You do not need to do all the locking and unlocking in one member.
The point where you may still be introducing a race condition is constructs like this if they appear in consumer code:
if (!stack.empty()) {
    auto item = stack.pop(); // Guaranteed?
}
If another thread T2 enters pop() after thread T1 enters empty() (above) and gets blocked waiting on the mutex then the pop() in T1 may fail because T2 'got there first'. Any number of actions might take place between the end of empty() and the start of pop() in that snippet unless other synchronization is handling it.
In this case you should imagine T1 & T2 literally racing to pop() though of course they may be racing to different members and still invalidate each other...
If you want to build code like that you usually have to then add further atomic member functions like try_pop() which returns (say) an empty std::shared_ptr<> if the stack is empty.
I hope this sentence isn't confusing:
Locking the object mutex inside member functions avoids race conditions within each individual member-function call, but not across sequences of such calls.
The best way to solve that is by adding 'composite' functions that are doing the job of more than one 'logical' operation. That tends to go against good class design in which you design a logical set of minimal operations and the consuming code combines them.
The alternative is to allow the consuming code access to the mutex, for example by exposing void lock() const; and void unlock() const; members. That is usually not preferred because (a) it becomes very easy for consumer code to create deadlocks and (b) you either use a recursive lock (with its overhead) or double up member functions again:
std::shared_ptr<T> pop();           // Self-locking version...
std::shared_ptr<T> pop_prelocked(); // Caller must hold the object mutex or the program is invalidated.
Whether you expose them as public or protected or not that would make try_pop() look something like this:
std::shared_ptr<T> try_pop() {
    std::lock_guard<std::mutex> guard(m);
    if (empty_prelocked()) {
        return std::shared_ptr<T>();
    }
    return pop_prelocked();
}
Adding a mutex and acquiring it at the start of each member is only the start of the story...
Footnote: hopefully that explains mutual exclusion (MUTual EXclusion, hence "mutex"). There's a whole other topic around memory barriers lurking below the surface here, but if you use mutexes in this way you can treat that as an implementation detail for now...
You misunderstand something: you don't need that changeStack function.
If you forget about lock_guard for a moment, here's what the code looks like (with lock_guard the code does the same thing; lock_guard is just a convenience that makes the unlock automatic):
void push() {
    m.lock();
    // do the push
    m.unlock();
}

void pop() {
    m.lock();
    // do the pop
    m.unlock();
}
When push is called, the mutex is locked. Now imagine that pop is called on another thread. pop tries to lock the mutex, but it cannot, because push has already locked it. So it has to wait for push to unlock the mutex; once push unlocks it, pop can lock it.
So, in short, it is the std::mutex that does the mutual exclusion, not the lock_guard.
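For comparison, here is push written with the guard, as a sketch against the threadsafe_stack members above:

void push(T new_value)
{
    std::lock_guard<std::mutex> lock(m); // m is locked here
    data.push(new_value);
} // lock is destroyed here: m is unlocked even if data.push threw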
EDIT: I moved this question to codereview https://codereview.stackexchange.com/questions/105742/thread-safe-holder
I have implemented a thread safe holder to safely pass data between threads.
The user can set the value many times, but only the first SetIfEmpty call stores it; after that, the user may read the value many times.
#include <atomic>
#include <cassert>
#include <mutex>
#include <new>
#include <utility>

template <typename T>
class ThreadSafeHolder {
public:
    ThreadSafeHolder() : is_value_set_(false) {
    }

    void SetIfEmpty(const T& value) {
        std::lock_guard<std::mutex> lock(mutex_);
        // memory_order_relaxed is enough because stores to
        // `is_value_set_` happen only in the `SetIfEmpty` methods,
        // which are protected by the mutex.
        if (!is_value_set_.load(std::memory_order_relaxed)) {
            new (GetPtr()) T(value);
            is_value_set_.store(true, std::memory_order_release);
        }
    }

    void SetIfEmpty(T&& value) {
        std::lock_guard<std::mutex> lock(mutex_);
        if (!is_value_set_.load(std::memory_order_relaxed)) {
            new (GetPtr()) T(std::move(value));
            is_value_set_.store(true, std::memory_order_release);
        }
    }

    //! This method may be called safely only if a previous
    //! `IsEmpty()` call returned `false`.
    const T& Get() const {
        assert(!IsEmpty());
        return *GetPtr();
    }

    bool IsEmpty() const {
        // memory_order_acquire load to synchronize with the
        // memory_order_release store in the `SetIfEmpty` methods.
        return !is_value_set_.load(std::memory_order_acquire);
    }

    ~ThreadSafeHolder() {
        if (!IsEmpty()) {
            GetPtr()->~T();
        }
    }

private:
    T* GetPtr() {
        return reinterpret_cast<T*>(value_place_holder_);
    }

    const T* GetPtr() const {
        return reinterpret_cast<const T*>(value_place_holder_);
    }

    // Reserved place for user data.
    char value_place_holder_[sizeof(T)];
    // Mutex protecting write access to the placeholder.
    std::mutex mutex_;
    // Indicates whether the value has been set.
    std::atomic<bool> is_value_set_;
};
Questions
Is the code correct in general?
Is access to the is_value_set_ member properly synchronized?
Could access to the is_value_set_ member be even more relaxed?
Application
I wanted such a holder to pass active exceptions from worker threads to the main thread.
Main thread:
ThreadSafeHolder<std::exception_ptr> exceptionPtrHolder;

// Run many workers.
// Join workers.

if (!exceptionPtrHolder.IsEmpty()) {
    std::rethrow_exception(exceptionPtrHolder.Get());
}
Worker thread:
try {
    while (exceptionPtrHolder.IsEmpty()) {
        // Do hard work...
    }
} catch (...) {
    exceptionPtrHolder.SetIfEmpty(std::current_exception());
}
Note about std::promise
std::promise is not suitable here (despite the fact that std::promise::set_value is thread safe) because
An exception is thrown if there is no shared state or the shared state already stores a value or exception.
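In other words, a second set is an error rather than a no-op. A minimal standalone demonstration:

#include <future>
#include <iostream>

int main()
{
    std::promise<int> p;
    p.set_value(1);     // first set: stores the value
    try {
        p.set_value(2); // second set: the shared state already stores a value
    } catch (const std::future_error& e) {
        // e.code() == std::future_errc::promise_already_satisfied
        std::cout << e.code().message() << '\n';
    }
}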
No, this code is not correct: T::~T() may be called multiple times. Probably, you should use shared_ptr.
What do you mean by an active exception? Does the worker thread continue execution after the exception is thrown, and how?
I mean:
if an exception was handled, then there is no reason to forward it to another thread; it has already been handled.
Otherwise, the worker thread should be unwound with the exception forwarded and, probably, restarted by the main thread, and std::promise does not seem too bad for that purpose.
So, how would it be possible to set another exception from the worker thread, and what for?
I wrote a Link class for passing data between two nodes in a network. I've implemented it with two deques (one for data going from node 0 to node 1, and the other for data going from node 1 to node 0). I'm trying to multithread the application, but I'm getting threadlocks. I'm trying to prevent reading from and writing to the same deque at the same time. In reading more about how I originally implemented this, I think I'm using the condition variables incorrectly (and maybe shouldn't be using the boolean variables?). Should I have two mutexes, one for each deque? Please help if you can. Thanks!
class Link {
public:
    // other stuff...
    void push_back(int sourceNodeID, Data newData);
    void get(int destinationNodeID, std::vector<Data>& linkData);

private:
    // other stuff...
    std::vector<int> nodeIDs_;
    // qVector_ has two deques, one for Data from node 0 to node 1 and
    // one for Data from node 1 to node 0
    std::vector<std::deque<Data> > qVector_;
    void initialize(int nodeID0, int nodeID1);
    boost::mutex mutex_;
    std::vector<boost::shared_ptr<boost::condition_variable> > readingCV_;
    std::vector<boost::shared_ptr<boost::condition_variable> > writingCV_;
    std::vector<bool> writingData_;
    std::vector<bool> readingData_;
};
The push_back function:
void Link::push_back(int sourceNodeID, Data newData)
{
    int idx;
    if (sourceNodeID == nodeIDs_[0]) idx = 1;
    else
    {
        if (sourceNodeID == nodeIDs_[1]) idx = 0;
        else throw runtime_error("Link::push_back: Invalid node ID");
    }

    boost::unique_lock<boost::mutex> lock(mutex_);
    // pause to avoid multithreading collisions
    while (readingData_[idx]) readingCV_[idx]->wait(lock);
    writingData_[idx] = true;
    qVector_[idx].push_back(newData);
    writingData_[idx] = false;
    writingCV_[idx]->notify_all();
}
The get function:
void Link::get(int destinationNodeID, std::vector<Data>& linkData)
{
    int idx;
    if (destinationNodeID == nodeIDs_[0]) idx = 0;
    else
    {
        if (destinationNodeID == nodeIDs_[1]) idx = 1;
        else throw runtime_error("Link::get: Invalid node ID");
    }

    boost::unique_lock<boost::mutex> lock(mutex_);
    // pause to avoid multithreading collisions
    while (writingData_[idx]) writingCV_[idx]->wait(lock);
    readingData_[idx] = true;
    std::copy(qVector_[idx].begin(), qVector_[idx].end(), back_inserter(linkData));
    qVector_[idx].erase(qVector_[idx].begin(), qVector_[idx].end());
    readingData_[idx] = false;
    readingCV_[idx]->notify_all();
}
and here's initialize (in case it's helpful)
void Link::initialize(int nodeID0, int nodeID1)
{
    readingData_ = std::vector<bool>(2, false);
    writingData_ = std::vector<bool>(2, false);
    for (int i = 0; i < 2; ++i)
    {
        readingCV_.push_back(make_shared<boost::condition_variable>());
        writingCV_.push_back(make_shared<boost::condition_variable>());
    }
    nodeIDs_.reserve(2);
    nodeIDs_.push_back(nodeID0);
    nodeIDs_.push_back(nodeID1);
    qVector_.reserve(2);
    qVector_.push_back(std::deque<Data>());
    qVector_.push_back(std::deque<Data>());
}
I'm trying to multithread the application, but I'm getting threadlocks.
What is a "threadlock"? It's difficult to see what your code is trying to accomplish. Consider, first, your push_back() code, whose synchronized portion looks like this:
boost::unique_lock<boost::mutex> lock(mutex_);
while (readingData_[idx]) readingCV_[idx]->wait(lock);
writingData_[idx] = true;
qVector_[idx].push_back(newData);
writingData_[idx] = false;
writingCV_[idx]->notify_all();
Your writingData[idx] boolean starts off as false, and becomes true only momentarily while a thread has the mutex locked. By the time the mutex is released, it is false again. So for any other thread that has to wait to acquire the mutex, writingData[idx] will never be true.
But in your get() code, you have
boost::unique_lock<boost::mutex> lock(mutex_);
// pause to avoid multithreading collisions
while (writingData_[idx]) writingCV_[idx]->wait(lock);
By the time a thread gets the lock on the mutex, writingData[idx] is back to false and so the while loop (and wait on the CV) is never entered.
An exactly symmetric analysis applies to the readingData[idx] boolean, which also is always false outside the mutex lock.
So your condition variables are never waited on. You need to completely rethink your design.
Start with one mutex per queue (the deque is overkill for simply passing data), and for each queue associate a condition variable with the queue being non-empty. The get() method will thus wait until the queue is non-empty, which will be signalled in the push_back() method. Something like this (untested code):
#include <queue>
#include <boost/thread.hpp>

template <typename Data>
class BasicQueue
{
public:
    void push( Data const& data )
    {
        boost::unique_lock<boost::mutex> lock( mutex_ );
        queue_.push( data );
        not_empty_.notify_all();
    }

    void get( Data& data )
    {
        boost::unique_lock<boost::mutex> lock( mutex_ );
        while ( queue_.empty() )
            not_empty_.wait( lock ); // this releases the mutex
        // mutex is reacquired here, with the queue non-empty
        data = queue_.front();
        queue_.pop();
    }

private:
    std::queue<Data> queue_;
    boost::mutex mutex_;
    boost::condition_variable not_empty_;
};
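Usage from two threads might look like this sketch (instantiating with int for illustration):

BasicQueue<int> queue;

void producer()
{
    for (int i = 0; i < 10; ++i)
        queue.push(i); // wakes the consumer if it is waiting
}

void consumer()
{
    for (int i = 0; i < 10; ++i)
    {
        int value = 0;
        queue.get(value); // blocks until the queue is non-empty
    }
}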
Yes, you need two mutexes. Your deadlocks are almost certainly a result of contention on the single mutex: if you break into your running program with a debugger, you will see where the threads are hanging. I also don't see why you would need the bools. (EDIT: it may be possible to come up with a design that uses a single mutex, but it's simpler and safer to stick with one mutex per shared data structure.)
A rule of thumb would be to have one mutex per shared data structure you are trying to protect. That mutex guards the data structure against concurrent access and provides thread safety. In your case one mutex per deque. E.g.:
class SafeQueue
{
private:
    std::deque<Data> q_;
    boost::mutex m_;
    boost::condition_variable v_;

public:
    void push_back(Data newData)
    {
        boost::lock_guard<boost::mutex> lock(m_);
        q_.push_back(newData);
        // notify etc.
    }
    // ...
};
In terms of notification via condition variables see here:
Using condition variable in a producer-consumer situation
So there would also be one condition_variable per object, which the producer notifies and the consumer waits on. Now you can create two of these queues to communicate in both directions. Keep in mind that with only two threads you can still deadlock if both threads are blocked (waiting for data) and both queues are empty.
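A sketch of the bidirectional setup (the names are mine; SafeQueue is the class above, completed with notification and a blocking read):

// One queue per direction; each has its own mutex and condition variable.
struct TwoWayLink
{
    SafeQueue toNode1; // written by node 0, read by node 1
    SafeQueue toNode0; // written by node 1, read by node 0
};

Each node pushes to one queue and reads from the other, so the two directions never contend with each other.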
So, having this simple class:
class mySafeData
{
public:
    mySafeData() : myData(0), stateCounter(0)
    {
    }

    void Set(int i)
    {
        boost::mutex::scoped_lock lock(myMutex);
        myData = i;             // set the data
        ++stateCounter;         // some int to track state changes
        myCondvar.notify_all(); // notify all readers
    }

    void Get(int& i)
    {
        boost::mutex::scoped_lock lock(myMutex);
        // copy the current state
        int cState = stateCounter;
        // wait for a notification and a change of state
        while (stateCounter == cState)
            myCondvar.wait(lock);
        i = myData;
    }

private:
    int myData;
    int stateCounter;
    boost::mutex myMutex;
    boost::condition_variable myCondvar;
};
and an array of threads in infinite loops, each calling one of these functions:
Get()
Set()
Get()
Get()
Get()
will they always call the functions in the same order, and only once per cycle? (By cycle I mean: will all the Boost threads run in the same order each time, so that each thread would Get() only once after each Set()?)
No. You can never make any assumptions about the order in which the threads will be served. This is not related to Boost; it is one of the basics of multithreaded programming.
The threads should acquire the lock in the same order that they reach the scoped_lock constructor (I think). But there's no guarantee that they will reach that point in any fixed order!
So in general: don't rely on it.
No, the mutex only prevents two threads from accessing the variable at the same time. It does not affect the thread scheduling order or execution time, which can for all intents and purposes be assumed to be random.
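You can see this for yourself by running a snippet like the following a few times (a sketch using std::thread; Boost threads behave the same way). The order of the output lines will typically differ from run to run:

#include <iostream>
#include <thread>
#include <vector>

int main()
{
    std::vector<std::thread> threads;
    for (int i = 0; i < 4; ++i)
        threads.emplace_back([i] {
            // Which thread prints first is up to the scheduler;
            // no ordering is promised.
            std::cout << "thread " << i << " ran\n";
        });
    for (auto& t : threads) t.join();
}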