Make my stack accessible from other threads in C++

I have a list of threads like this:
Kitchen::Kitchen(double multiplier, size_t cooks, size_t restock) :
    _multiplier(multiplier), _ncooks(cooks), _restock(restock)
{
    Cook *cook;
    std::stack<APizza *> orders;
    this->_ingredients = new Stock();
    this->_ordersNow = 0;
    this->_socket = initControlSocket();
    for (size_t i = 0; i != _ncooks; i++) {
        cook = new Cook(this, this->_multiplier);
        this->_cooks.push_back(std::thread(&Cook::Run, cook));
    }
    dprintf(this->_socket, "%d\r\n", KITCHEN_OPENED);
}
I want to make the std::stack<APizza *> orders; accessible and usable by all the threads in this->_cooks.

Note that your threads may need a way to synchronize access to the stack, using a mutex for example, so you need to pass a reference to a mutex as well.
Have Cook::Run accept these arguments by reference:
void Run(std::mutex &, std::stack<APizza *> &);
Then pass a reference to them when you create the thread:
this->_cooks.push_back(std::thread(
    &Cook::Run, cook,
    std::ref(orders_mutex),
    std::ref(orders)
));
As mentioned by others in the comment section, the stack will be destroyed when control leaves the Kitchen constructor. To prevent this, you could make the stack a data member of Kitchen.
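For example, a minimal sketch (the _orders and _ordersMutex member names are made up for illustration, and _cooks is assumed to be a std::vector<std::thread>):
// Kitchen.hpp (sketch) -- hypothetical _orders/_ordersMutex members
class Kitchen {
    // ...existing members...
private:
    std::stack<APizza *>     _orders;      // shared by all cooks
    std::mutex               _ordersMutex; // guards _orders
    std::vector<std::thread> _cooks;
};

// In the constructor, pass references to the members instead of to locals:
this->_cooks.push_back(std::thread(
    &Cook::Run, cook,
    std::ref(this->_ordersMutex),
    std::ref(this->_orders)
));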
Alternatively, since you need a mutex on this stack to synchronize concurrent access, you could create a wrapper class that holds the stack and the mutex, and guards access to the stack using the mutex.
#include <mutex>
#include <optional>
#include <stack>
#include <utility>

template <typename T>
class threadsafe_stack {
public:
    void push(T value) {
        std::unique_lock<std::mutex> lock{mutex};
        stack.push(std::move(value));
    }

    std::optional<T> pop() {
        std::unique_lock<std::mutex> lock{mutex};
        if (!stack.empty()) {
            T val{std::move(stack.top())};
            stack.pop();
            return val;
        }
        return {};
    }

private:
    std::stack<T> stack;
    std::mutex mutex;
};
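With the wrapper, Kitchen could hold a threadsafe_stack<APizza *> member (call it _orders; the name is made up) and Cook::Run would only need a reference to it. A rough sketch:
// Sketch -- assumes Kitchen has a threadsafe_stack<APizza *> _orders member.
void Cook::Run(threadsafe_stack<APizza *> &orders)
{
    for (;;) {
        std::optional<APizza *> pizza = orders.pop();
        if (!pizza)
            continue;          // stack empty right now; a real cook would wait instead
        // ... cook *pizza ...
    }
}

// In Kitchen's constructor:
// this->_cooks.push_back(std::thread(&Cook::Run, cook, std::ref(this->_orders)));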


std::unique_ptr with RAII for mutex?

We have a lot of legacy C++98 code that we are slowly upgrading to C++11, and we have a RAII implementation for a custom Mutex class:
class RaiiMutex
{
public:
    RaiiMutex() = delete;
    RaiiMutex(const RaiiMutex&) = delete;
    RaiiMutex& operator= (const RaiiMutex&) = delete;
    RaiiMutex(Mutex& mutex) : mMutex(mutex)
    {
        mMutex.Lock();
    }
    ~RaiiMutex()
    {
        mMutex.Unlock();
    }
private:
    Mutex& mMutex;
};
Is it ok to make an std::unique_ptr of this object? We would still benefit from automatically calling the destructor when the object dies (thus unlocking) and would also gain the ability of unlocking before non-critical operations.
Example legacy code:
RaiiMutex raiiMutex(mutex);
if (!condition)
{
    loggingfunction();
    return false;
}
After:
auto raiiMutex = std::unique_ptr<RaiiMutex>(new RaiiMutex(mutex));
if (!condition)
{
    raiiMutex = nullptr;
    loggingfunction(); // log without locking the mutex
    return false;
}
It would also remove the otherwise unnecessary braces:
Example legacy code:
Data data;
{
    RaiiMutex raiiMutex(mutex);
    data = mQueue.front();
    mQueue.pop_front();
}
data.foo();
After:
auto raiiMutex = std::unique_ptr<RaiiMutex>(new RaiiMutex(mutex));
Data data = mQueue.front();
mQueue.pop_front();
raiiMutex = nullptr;
data.foo();
Does it make sense?
Edit:
Cannot use unique_lock due to custom Mutex class:
class Mutex
{
public:
    Mutex();
    virtual ~Mutex();
    void Unlock(bool yield = false);
    void Lock();
    bool TryLock();
    bool TimedLock(uint64 pWaitIntervalUs);
private:
    sem_t mMutex;
};
Add Mutex::lock(), Mutex::unlock() and Mutex::try_lock() methods to Mutex. They just forward to the Lock etc methods.
Then use std::unique_lock<Mutex>.
If you cannot modify Mutex, wrap it.
struct SaneMutex : Mutex {
    void lock() { Lock(); }
    void unlock() { Unlock(); }
    bool try_lock() { return TryLock(); }
    using Mutex::Mutex;
};
A SaneMutex replaces a Mutex everywhere you can.
Where you can't, include an adapter:
struct MutexRef {
    void lock() { m.Lock(); }
    void unlock() { m.Unlock(); }
    bool try_lock() { return m.TryLock(); }
    MutexRef( Mutex& m_in ) : m(m_in) {}
private:
    Mutex& m;
};
These match the C++ standard Lockable requirements. If you want TimedLockable, you have to write a bit of glue code (see the sketch further below).
auto l = std::unique_lock<MutexRef>( mref );
or
auto l = std::unique_lock<SaneMutex>( m );
You now have std::lock, std::unique_lock and std::scoped_lock support.
And your code is one step closer to using std::mutex.
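If you do want the TimedLockable glue mentioned above, a rough, untested sketch could forward try_lock_for to TimedLock (this assumes pWaitIntervalUs really is microseconds and that uint64 is the typedef from your Mutex header):
#include <chrono>

struct TimedSaneMutex : SaneMutex {
    template <class Rep, class Period>
    bool try_lock_for(const std::chrono::duration<Rep, Period>& d) {
        auto us = std::chrono::duration_cast<std::chrono::microseconds>(d);
        return TimedLock(static_cast<uint64>(us.count()));
    }
    using SaneMutex::SaneMutex;
};

// auto l = std::unique_lock<TimedSaneMutex>(m, std::chrono::milliseconds(5));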
As for your unique_ptr solution, I wouldn't add the overhead of a memory allocation every time you casually lock a mutex.
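For comparison, here is a sketch of the first example rewritten with std::unique_lock and the SaneMutex adapter from above -- the same early unlock, with no allocation (loggingfunction and condition are the names from the question):
#include <mutex>

bool example(SaneMutex& mutex, bool condition)
{
    std::unique_lock<SaneMutex> raiiLock(mutex);
    if (!condition)
    {
        raiiLock.unlock();   // unlock early, no heap allocation involved
        loggingfunction();   // log without holding the mutex
        return false;
    }
    // ... rest of the critical section ...
    return true;
}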

Valgrind shows Leak_DefinitelyLost even after releasing the pointer

In some file called Tasks.h, I have the following function:
void source_thread_func(BlockingQueue<Task> &bq, int num_ints)
{
    std::cout<<"On source thread func"<<std::endl; // Debug
    for (int i = 1; i <= num_ints; i++)
    {
        //Valgrind does not like this
        std::unique_ptr<Task> task(new Task(i, i == num_ints));
        std::cout<<"Pushing value = "<<i<<std::endl; // Debug
        bq.push(task);
        Task* tp = task.release();
        assert (task.get() == nullptr);
        delete tp;
    }
}
and the relevant push function in the BlockingQueue is
void push(std::unique_ptr<T>& item)
{
    std::unique_lock<std::mutex> mlock(mutex_);
    queue_.push(std::move(item));
    mlock.unlock();
    cond_.notify_one();
}
But this still causes a leak when checking with Valgrind. Could you tell me where the leak is? I am attaching a screenshot of the Valgrind result. What more can I do to delete this pointer?
Edit: Task doesn't contain a copy constructor (I've deleted it).
Further Edit: full example:
//Tasks.h
namespace threadsx
{
    class Task
    {
    public:
        Task(int val, bool sentinel = false)
        {
            m_val = val;
            Sent = sentinel;
        }
        int m_val;
        int Sent;
        //disable copying
        Task (const Task&) = delete;
    };

    void source_thread_func(BlockingQueue<Task> &bq, int num_ints)
    {
        std::cout<<"On source thread func"<<std::endl; // Debug
        for (int i = 1; i <= num_ints; i++)
        {
            std::unique_ptr<Task> task(new Task(i, i == num_ints));
            std::cout<<"Pushing value = "<<i<<std::endl; // Debug
            bq.push(task);
            Task* tp = task.release();
            assert (task.get() == nullptr);
            delete tp;
        }
    }
}
+++++++++++++++++++++++++++++++
///BlockingQueue.h
namespace threadsx
{
    // -- Custom Blocking Q
    template <typename T>
    class BlockingQueue
    {
    private:
        std::queue<std::unique_ptr<T>> queue_;
        std::mutex mutex_;
        std::condition_variable cond_;

        void push(std::unique_ptr<T>& item)
        {
            std::unique_lock<std::mutex> mlock(mutex_);
            queue_.push(std::move(item));
            mlock.unlock();
            cond_.notify_one();
        }

        BlockingQueue() = default;
        BlockingQueue(const BlockingQueue&) = delete;            // disable copying
        BlockingQueue& operator=(const BlockingQueue&) = delete; // disable assignment
    };
}
+++++++++++++++++++++++++++++++
//main.cpp
int main(int argc, char **argv)
{
    int num_ints = 30;
    int threshold = 5;
    threadsx::BlockingQueue<threadsx::Task> q;
    std::vector<int> t;
    std::thread source_thread(threadsx::source_thread_func, std::ref(q), num_ints);
    if (source_thread.joinable())
        source_thread.join();
    return 0;
}
The program that you show does not delete the Task that was allocated. push moves the ownership away from task, so tp is always null.
The ownership of the resource is transferred into queue_, and how that pointer is leaked (assuming valgrind is correct) is not shown in the example program.
A few quality issues:
As pointed out in the comments, it is usually a bad design to pass unique pointers by non-const reference. Pass by value when you intend to transfer ownership.
I've deleted the copy constructor on Task. Would passing by value still work?
Whether Task is copyable is irrelevant to whether a unique pointer can be passed by value. Unique pointer is movable regardless of the type of the pointed object, and therefore can be passed by value.
Don't release from a unique pointer just in order to delete the memory. Simply let the unique pointer go out of scope - its destructor takes care of deletion.
You are not allowed to delete the raw task, since the ownership is no longer yours.
void source_thread_func(BlockingQueue<Task>& bq, int num_ints)
{
    std::cout<<"On source thread func"<<std::endl; // Debug
    for (int i = 1; i <= num_ints; i++)
    {
        std::unique_ptr<Task> task = std::make_unique<Task>(i, i == num_ints);
        bq.push(std::move(task));
    }
}
Blocking Queue:
#include <memory>
#include <mutex>
#include <condition_variable>
#include <deque>

template <typename T>
class BlockingQueue {
public:
    void push(std::unique_ptr<T>&& item)
    {
        std::unique_lock<std::mutex> mlock(mutex_);
        queue_.push_back(std::move(item));
        cond_.notify_one();
    }

    std::unique_ptr<T> pop()
    {
        std::unique_lock<std::mutex> mlock(mutex_);
        if (queue_.empty()) {
            cond_.wait(mlock, [this] { return !queue_.empty(); });
        }
        std::unique_ptr<T> ret = std::move(queue_.front());
        queue_.pop_front();
        return ret;
    }

private:
    std::deque<std::unique_ptr<T>> queue_;
    std::mutex mutex_;
    std::condition_variable cond_;
};
If you want to spare yourself the headache of std::move, use shared_ptr instead.
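For completeness, here is a sketch of a consumer that drains the corrected queue; the loop body is made up for illustration, while Task and the corrected source_thread_func/BlockingQueue are the ones shown above:
// Sketch only -- not part of the original program.
int main()
{
    BlockingQueue<Task> q;
    std::thread producer(source_thread_func, std::ref(q), 30);

    for (;;) {
        std::unique_ptr<Task> task = q.pop();  // blocks until a Task is available
        bool last = task->Sent != 0;           // the sentinel marks the final Task
        // ... use task->m_val ...
        if (last)
            break;                             // the unique_ptr frees the Task here
    }
    producer.join();
}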

How to instantiate an "empty" object from a class that provides only copy-constructor?

I implemented a thread-safe templatized queue:
template<class T> class queue {
private:
    boost::mutex mutex;
    boost::condition_variable emptyCondition;
    boost::condition_variable fullCondition;
    boost::scoped_ptr< std::queue<T> > std_queue;
    ...
public:
    ...
    T pop() {
        T r; // [*]
        {
            boost::mutex::scoped_lock popLock(mutex);
            while (queueIsEmpty())
                emptyCondition.wait(popLock);
            r = std_queue->front();
            std_queue->pop();
        }
        fullCondition.notify_one();
        return r;
    }
...
I cannot instantiate the object the way I do (where marked with [*]) because T lacks a constructor without formal parameters (a default constructor).
So: is there a way, maybe using a pointer to T and the copy constructor (which I know is implemented for each T), to avoid many template specializations?
Edit 1
I also thought of this possible solution.
T pop() {
    boost::mutex::scoped_lock popLock(mutex);
    while (queueIsEmpty())
        emptyCondition.wait(popLock);
    T r(std_queue->front());
    std_queue->pop();
    // update overall number of pops
    popNo++;
    popLock.unlock();
    fullCondition.notify_one();
    return r;
}
Would it work?
An option for this scenario is to use boost::optional:
T pop() {
    boost::optional<T> r;
    {
        boost::mutex::scoped_lock popLock(mutex);
        while (queueIsEmpty())
            emptyCondition.wait(popLock);
        r = std_queue->front();
        std_queue->pop();
    }
    fullCondition.notify_one();
    return *r; // r is guaranteed to be engaged at this point
}
boost::optional takes care at runtime of tracking whether its contained T has been constructed, and so whether it needs to be destroyed. (Note that here you don't actually need the full functionality of boost::mutex::scoped_lock; you can use boost::lock_guard.)
The alternative is to notice that the scoped_lock can be unlocked early (note: unlock(), not release(), which would leave the mutex locked):
T pop() {
    boost::mutex::scoped_lock popLock(mutex);
    while (queueIsEmpty())
        emptyCondition.wait(popLock);
    T r = std_queue->front();
    std_queue->pop();
    popLock.unlock();
    fullCondition.notify_one();
    return r;
}
The disadvantage here is that it is less clear what the scope of popLock is, and a code change could result in unsafe code or deadlock.
If you manually unlock, you can get rid of the braces, which removes the need to pre-create the T:
T pop() {
    boost::mutex::scoped_lock popLock(mutex);
    while (queueIsEmpty())
        emptyCondition.wait(popLock);
    T r = std_queue->front();
    std_queue->pop();
    popLock.unlock();
    fullCondition.notify_one();
    return r;
}

How to safely destroy an object, that is frequently accessed by two different threads?

I ran into the problem that an object (instance) that is frequently accessed by two different threads has to be freed. It does not really matter to me which of the two threads destroys the instance, though I would prefer the one that also creates it, although I think this does not matter at all.
So consider the scenario where the thread that is supposed to destroy the object detects that it should be deleted, and while it is calling the destructor, the other thread is accessing a member (function) of that object; some sort of runtime error will probably occur.
I did some research on this topic, but all I could find were people saying: "Why delete an object that still needs to exist?" In my case, though, it should stop being needed the moment the code one thread is executing decides to destroy it.
I would appreciate an answer, such as a hint to a good book or article covering this topic, but feel free to write how you would solve this problem.
You would need double indirection to manage concurrent access to the data:
class Object {
public:
    class Data {
    public:
        int get_value() const;
        void set_value(int);
    };

    class Guard {
    public:
        Guard(Mutex& mutex, Data* data)
        : m_mutex(mutex), m_data(data)
        {
            if( ! m_data) throw std::runtime_error("No Data");
            m_mutex.lock();
        }
        ~Guard()
        {
            m_mutex.unlock();
        }
        Data& data() { return *m_data; }
    private:
        Mutex& m_mutex;
        Data* m_data;
    };

    class ConstGuard {
    public:
        ConstGuard(Mutex& mutex, const Data* data)
        : m_mutex(mutex), m_data(data)
        {
            if( ! m_data) throw std::runtime_error("No Data");
            m_mutex.lock();
        }
        ~ConstGuard()
        {
            m_mutex.unlock();
        }
        const Data& data() const { return *m_data; }
    private:
        Mutex& m_mutex;
        const Data* m_data;
    };

private:
    friend std::shared_ptr<Object> make_object(const Data&);
    Object(const Data& data)
    : m_data(new Data(data))
    {}

public:
    ~Object()
    {
        delete m_data;
    }

    /// Self destruction.
    void dispose() {
        Guard guard(mutex, m_data);
        delete m_data;
        m_data = 0;
    }

    // For multiple operations use a Guard.
    Guard get() { return Guard(mutex, m_data); }
    ConstGuard get() const { return ConstGuard(mutex, m_data); }

    // For single operations you may provide convenience functions.
    int get_value() const { return ConstGuard(mutex, m_data).data().get_value(); }
    void set_value(int value) { Guard(mutex, m_data).data().set_value(value); }

private:
    mutable Mutex mutex;
    Data* m_data;
};

std::shared_ptr<Object> make_object(const Object::Data& data) {
    return std::shared_ptr<Object>(new Object(data));
}
(Note: The above code is just a sketch, I have not compiled it)
That was the long story. The short one is [20.7.2.5] shared_ptr atomic access:
Concurrent access to a shared_ptr object from multiple threads does not
introduce a data race if the access is done exclusively via the functions
in this section and the instance is passed as their first argument.
template<class T> shared_ptr<T> atomic_load(const shared_ptr<T>* p);
template<class T> void atomic_store(shared_ptr<T>* p, shared_ptr<T> r);
...
But I am not familiar with these functions.
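An uncompiled sketch of what using those free functions might look like here, reusing the Object/make_object sketch above (note these functions are deprecated in C++20 in favour of std::atomic<std::shared_ptr<T>>):
std::shared_ptr<Object> g_obj;   // shared slot; touch it only via the atomic_* functions

// Creating/destroying thread:
std::atomic_store(&g_obj, make_object(Object::Data()));   // publish
// ...later, when the object should go away...
std::atomic_store(&g_obj, std::shared_ptr<Object>());     // retire; the last user destroys it

// Other thread, on each use:
if (std::shared_ptr<Object> obj = std::atomic_load(&g_obj)) {
    obj->set_value(42);   // obj keeps the Object alive for the duration of this block
}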
The thread which should not destroy the object should hold it using a std::weak_ptr, except for the times when it is actively using it. During these times, the weak_ptr can be upgraded to a std::shared_ptr.
The thread which should destroy it can now hold on to its shared_ptr for as long as it thinks appropriate, and discard that shared_ptr afterwards. If the other thread only has a weak_ptr (and is not actively using the object), the object will go away. If the other thread also has a shared_ptr, it holds on to the object until the operation finishes.
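A minimal sketch of that arrangement, again reusing the Object sketch from the first answer (the thread bodies are illustrative):
// Creating/destroying thread:
std::shared_ptr<Object> owner = make_object(Object::Data());
std::weak_ptr<Object> observer = owner;   // hand this to the other thread

// Other thread, each time it needs the object:
if (std::shared_ptr<Object> obj = observer.lock()) {
    obj->set_value(42);   // the object stays alive at least until obj goes out of scope
} else {
    // the owner has already discarded it; stop using the object
}

// Creating/destroying thread, when the object should go:
owner.reset();   // destroyed now, or when the last temporary shared_ptr dies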

Thread Safety Of a single variable

I understand the concept of thread safety. I am looking for advice to simplify thread safety when trying to protect a single variable.
Say I have a variable:
double aPass;
and I want to protect this variable, so I create a mutex:
pthread_mutex_t aPass_lock;
Now there are two good ways I can think of doing this, but they both have annoying disadvantages. The first is to make a thread-safe class to hold the variable:
class aPass {
public:
    aPass() {
        pthread_mutex_init(&aPass_lock, NULL);
        aPass_ = 0;
    }
    void get(double & setMe) {
        pthread_mutex_lock(&aPass_lock);
        setMe = aPass_;
        pthread_mutex_unlock(&aPass_lock);
    }
    void set(const double setThis) {
        pthread_mutex_lock(&aPass_lock);
        aPass_ = setThis;
        pthread_mutex_unlock(&aPass_lock);
    }
private:
    double aPass_;
    pthread_mutex_t aPass_lock;
};
Now this will keep aPass totally safe, nothing can touch it by mistake, YAY! However, look at all that mess, and imagine the confusion when accessing it. Gross.
The other way is to have them both accessible and to make sure you lock the mutex before you use aPass.
pthread_mutex_lock(&aPass_lock);
do something with aPass
pthread_mutex_unlock(&aPass_lock);
But what if someone new comes onto the project, or what if you forget one time to lock it? I don't like debugging threading problems; they are hard.
Is there a good way (using pthreads, because I have to use QNX, which has little Boost support) to lock single variables without needing a big class, and that is safer than just creating a mutex lock to go with each of them?
std::atomic<double> aPass;
provided you have C++11.
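A minimal sketch of what that looks like in use (the writer/reader split is purely illustrative):
#include <atomic>
#include <iostream>
#include <thread>

std::atomic<double> aPass{0.0};

int main() {
    std::thread writer([] { aPass.store(3.14); });                 // or simply: aPass = 3.14;
    std::thread reader([] { double v = aPass.load(); (void)v; });  // or: double v = aPass;
    writer.join();
    reader.join();
    std::cout << aPass.load() << std::endl;
}
Note that compound updates such as aPass += 1.0 are only supported for atomic floating-point types from C++20 onwards; before that you have to loop on compare_exchange_weak.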
To elaborate on my solution, it would be something like this.
template <typename ThreadSafeDataType>
class ThreadSafeData{
    //....
private:
    ThreadSafeDataType data;
    mutex mut;
};
class apass:public ThreadSafeData<int>
Additionally, to make it unique, it might be best to make all operators and members static. For this to work you need to use CRTP,
i.e.
template <typename ThreadSafeDataType,class DerivedDataClass>
class ThreadSafeData{
//....
};
class apass:public ThreadSafeData<int,apass>
You can easily make your own class that locks the mutex on construction, and unlocks it on destruction. This way, no matter what happens the mutex will be freed on return, even if an exception is thrown, or any path is taken.
class MutexGuard
{
    MutexType & m_Mutex;
public:
    inline MutexGuard(MutexType & mutex)
        : m_Mutex(mutex)
    {
        m_Mutex.lock();
    }
    inline ~MutexGuard()
    {
        m_Mutex.unlock();
    }
};

class TestClass
{
    MutexType m_Mutex;
    double m_SharedVar;
public:
    TestClass()
        : m_SharedVar(4.0)
    { }
    void Function1()
    {
        MutexGuard scopedLock(m_Mutex); //lock the mutex
        m_SharedVar += 2345;
        //mutex automatically unlocked
    }
    void Function2()
    {
        MutexGuard scopedLock(m_Mutex); //lock the mutex
        m_SharedVar *= 234;
        throw std::runtime_error("Mutex automatically unlocked");
    }
};
Access to the variable m_SharedVar is mutually exclusive between Function1() and Function2(), and the mutex will always be unlocked on return.
Boost has built-in types to accomplish this: boost::mutex::scoped_lock, boost::lock_guard.
You can create a class which acts as a generic wrapper around your variable, synchronising access to it.
Add operator overloading for assignment and you are done.
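A minimal sketch of such a wrapper, using std::mutex for brevity (assumes C++11; with pthreads the same shape works with a pthread_mutex_t member). The name Guarded is made up:
#include <mutex>

template <typename T>
class Guarded {
public:
    explicit Guarded(T initial = T()) : value_(initial) {}

    Guarded& operator=(const T& v) {   // assignment locks internally
        std::lock_guard<std::mutex> lock(mutex_);
        value_ = v;
        return *this;
    }

    operator T() const {               // reads lock internally
        std::lock_guard<std::mutex> lock(mutex_);
        return value_;
    }

private:
    mutable std::mutex mutex_;
    T value_;
};

// Guarded<double> aPass; aPass = 0.5; double x = aPass;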
Consider using the RAII idiom; the code below is just the idea, it's not tested:
template<typename T, typename U>
struct APassHelper : boost::noncopyable
{
    APassHelper(T& v, U& mutex) : v_(v), mutex_(mutex) {
        pthread_mutex_lock(&mutex_);
    }
    ~APassHelper() {
        pthread_mutex_unlock(&mutex_);
    }
    void UpdateAPass(T t) {
        v_ = t;
    }
private:
    T& v_;
    U& mutex_;
};

double aPass;
pthread_mutex_t aPass_lock;
APassHelper<double, pthread_mutex_t> temp(aPass, aPass_lock);
temp.UpdateAPass(10);
You can modify your aPass class by using operators instead of get/set:
class aPass {
public:
    aPass() {
        pthread_mutex_init(&aPass_lock, NULL);
        aPass_ = 0;
    }
    operator double () const {
        double setMe;
        pthread_mutex_lock(&aPass_lock);
        setMe = aPass_;
        pthread_mutex_unlock(&aPass_lock);
        return setMe;
    }
    aPass& operator = (double setThis) {
        pthread_mutex_lock(&aPass_lock);
        aPass_ = setThis;
        pthread_mutex_unlock(&aPass_lock);
        return *this;
    }
private:
    double aPass_;
    mutable pthread_mutex_t aPass_lock;
};
Usage:
aPass a;
a = 0.5;
double b = a;
This could of course be templated to support other types. Note however that a mutex is overkill in this case. Generally, memory barriers are enough when protecting loads and stores of small data-types. If possible you should use C++11 std::atomic<double>.