In a file called Tasks.h, I have the following function:
void source_thread_func(BlockingQueue<Task> &bq, int num_ints)
{
    std::cout << "On source thread func" << std::endl; // Debug
    for (int i = 1; i <= num_ints; i++)
    {
        //Valgrind does not like this
        std::unique_ptr<Task> task(new Task(i, i == num_ints));
        std::cout << "Pushing value = " << i << std::endl; // Debug
        bq.push(task);
        Task* tp = task.release();
        assert(task.get() == nullptr);
        delete tp;
    }
}
and the relevant push function in the BlockingQueue is
void push(std::unique_ptr<T>& item)
{
    std::unique_lock<std::mutex> mlock(mutex_);
    queue_.push(std::move(item));
    mlock.unlock();
    cond_.notify_one();
}
But this still causes a leak when checking with Valgrind. Could you tell me where the leak is? I am attaching a screenshot of the Valgrind result. How else can I delete this pointer?
Edit: Task doesn't contain a copy constructor (I've deleted it).
Further edit: full example
//Tasks.h
namespace threadsx
{
    class Task
    {
    public:
        Task(int val, bool sentinel = false)
        {
            m_val = val;
            Sent = sentinel;
        }
        int m_val;
        int Sent;
        //disable copying
        Task(const Task&) = delete;
    };

    void source_thread_func(BlockingQueue<Task> &bq, int num_ints)
    {
        std::cout << "On source thread func" << std::endl; // Debug
        for (int i = 1; i <= num_ints; i++)
        {
            std::unique_ptr<Task> task(new Task(i, i == num_ints));
            std::cout << "Pushing value = " << i << std::endl; // Debug
            bq.push(task);
            Task* tp = task.release();
            assert(task.get() == nullptr);
            delete tp;
        }
    }
}
+++++++++++++++++++++++++++++++
///BlockingQueue.h
namespace threadsx
{
    // -- Custom Blocking Q
    template <typename T>
    class BlockingQueue
    {
    private:
        std::queue<std::unique_ptr<T>> queue_;
        std::mutex mutex_;
        std::condition_variable cond_;
    public:
        void push(std::unique_ptr<T>& item)
        {
            std::unique_lock<std::mutex> mlock(mutex_);
            queue_.push(std::move(item));
            mlock.unlock();
            cond_.notify_one();
        }
        BlockingQueue() = default;
        BlockingQueue(const BlockingQueue&) = delete;            // disable copying
        BlockingQueue& operator=(const BlockingQueue&) = delete; // disable assignment
    };
}
+++++++++++++++++++++++++++++++
//main.cpp
int main(int argc, char **argv)
{
    int num_ints = 30;
    int threshold = 5;
    threadsx::BlockingQueue<threadsx::Task> q;
    std::vector<int> t;
    std::thread source_thread(threadsx::source_thread_func, std::ref(q), num_ints);
    if (source_thread.joinable())
        source_thread.join();
    return 0;
}
The program that you show does not delete the Task that was allocated: push moves ownership away from task, so tp is always null and delete tp does nothing.
Ownership of the resource is transferred into queue_, and how that pointer is then leaked (assuming Valgrind is correct) is not shown in the example program.
A few quality issues:
As pointed out in the comments, it is usually a bad design to pass unique pointers by non-const reference. Pass by value when you intend to transfer ownership.
I've deleted the copy constructor on Task. Would passing by value still work?
Whether Task is copyable is irrelevant to whether a unique pointer can be passed by value. A unique pointer is movable regardless of the pointed-to type, and can therefore be passed by value.
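To make that concrete, a by-value version of push could look like this (a sketch only, not the poster's code; the member names mirror the question's queue):
// Sketch: the unique_ptr itself is passed by value, which transfers ownership
// into the function. Task does not need to be copyable; only the pointer moves.
void push(std::unique_ptr<T> item)
{
    std::unique_lock<std::mutex> mlock(mutex_);
    queue_.push(std::move(item));   // move the pointer into the container
    mlock.unlock();
    cond_.notify_one();
}
// Caller side: the call site must move, making the ownership transfer explicit.
// bq.push(std::move(task));        // task holds nullptr afterwards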
Don't release from a unique pointer just in order to delete the memory. Simply let the unique pointer go out of scope - its destructor takes care of deletion.
You are not allowed to delete the raw task, since the ownership is no longer yours.
void source_thread_func(BlockingQueue<Task>& bq, int num_ints)
{
    std::cout << "On source thread func" << std::endl; // Debug
    for (int i = 1; i <= num_ints; i++)
    {
        std::unique_ptr<Task> task = std::make_unique<Task>(i, i == num_ints);
        bq.push(std::move(task));
    }
}
Blocking Queue:
#include <memory>
#include <mutex>
#include <condition_variable>
#include <deque>

template <typename T>
class BlockingQueue {
public:
    void push(std::unique_ptr<T>&& item)
    {
        std::unique_lock<std::mutex> mlock(mutex_);
        queue_.push_back(std::move(item));
        cond_.notify_one();
    }

    std::unique_ptr<T> pop()
    {
        std::unique_lock<std::mutex> mlock(mutex_);
        if (queue_.empty()) {
            cond_.wait(mlock, [this] { return !queue_.empty(); });
        }
        std::unique_ptr<T> ret = std::unique_ptr<T>(queue_.front().release());
        queue_.pop_front();
        return ret;
    }

private:
    std::deque<std::unique_ptr<T>> queue_;
    std::mutex mutex_;
    std::condition_variable cond_;
};
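For completeness, a consumer using this queue might look roughly like this (a sketch; Task, its m_val and Sent members, and <iostream> are assumed from the question):
// Sketch of a consumer: pop() blocks until an item is available, and the
// returned unique_ptr deletes the Task automatically at the end of each loop.
void sink_thread_func(BlockingQueue<Task>& bq)
{
    while (true)
    {
        std::unique_ptr<Task> task = bq.pop();
        std::cout << "Popped value = " << task->m_val << std::endl;
        if (task->Sent)
            break;              // sentinel task: stop consuming
    }
}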
If you want to spare yourself the headache of std::move, use shared_ptr instead
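For example, a shared_ptr-based variant might look like this (just a sketch, not part of the original code):
#include <condition_variable>
#include <deque>
#include <memory>
#include <mutex>

// Sketch: with shared_ptr the element can simply be copied into the queue,
// so the call site does not need std::move (at the cost of reference counting).
template <typename T>
class SharedBlockingQueue {
public:
    void push(std::shared_ptr<T> item)
    {
        std::unique_lock<std::mutex> mlock(mutex_);
        queue_.push_back(item);          // copy bumps the reference count
        cond_.notify_one();
    }

    std::shared_ptr<T> pop()
    {
        std::unique_lock<std::mutex> mlock(mutex_);
        cond_.wait(mlock, [this] { return !queue_.empty(); });
        std::shared_ptr<T> ret = queue_.front();
        queue_.pop_front();
        return ret;
    }

private:
    std::deque<std::shared_ptr<T>> queue_;
    std::mutex mutex_;
    std::condition_variable cond_;
};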
Related
Problem
I believe the following code should lead to runtime issues, but it doesn't. I'm trying to update the underlying object pointed to by the shared_ptr in one thread, and access it in another thread.
#include <atomic>
#include <iostream>
#include <memory>
#include <string>
#include <thread>
#include <vector>
using namespace std;

struct Bar {
    Bar(string tmp) {
        var = tmp;
    }
    string var;
};

struct Foo {
    vector<Bar> vec;
};

std::shared_ptr<Foo> p1, p2;
std::atomic<bool> cv1, cv2;

void fn1() {
    for (int i = 0; i < p1->vec.size(); i++) {
        cv2 = false;
        cv1.wait(true);
        std::cout << p1->vec.size() << " is the new size\n";
        std::cout << p1->vec[i].var.data() << "\n";
    }
}

void fn2() {
    cv2.wait(true);
    p2->vec = vector<Bar>();
    cv1 = false;
}

int main()
{
    p1 = make_shared<Foo>();
    p1->vec = vector<Bar>(2, Bar("hello"));
    p2 = p1;
    cv1 = true;
    cv2 = true;
    thread t1(fn1);
    thread t2(fn2);
    t2.join();
    t1.join();
}
Description
Weirdly enough, the output is as follows: it prints the new size as 0 (empty), but it is still able to access the first element of the previous vector.
0 is the new size
hello
Is my understanding correct that the above code is not thread safe? Am I missing something?
OR
According to the docs
All member functions (including copy constructor and copy assignment) can be called by multiple threads on different instances of shared_ptr without additional synchronization even if these instances are copies and share ownership of the same object.
Since I'm using the -> and * member functions, does that mean the code is thread safe? This part is confusing, as I'm performing a read and a write simultaneously without synchronization.
As for the shared_ptr:
In general, you can call all member functions of DIFFERENT instances of shared_ptr from multiple threads without synchronization. However, calling these functions from multiple threads on the SAME shared_ptr instance may lead to a race condition. The thread safety guarantee for shared_ptr applies only to the internals of the shared_ptr, as explained above, NOT to the underlying object.
That said, consider the following code and read the comments. You can also play with it here: https://godbolt.org/z/8hvcW19q9
#include <memory>
#include <mutex>
#include <string>
#include <thread>

std::mutex widget_mutex;

class Widget
{
    std::string value;
public:
    void set_value(const std::string& str) { value = str; }
};

//This is not safe, you're calling member functions of the same instance, taken by ref
void mt_reset_not_safe(std::shared_ptr<Widget>& w)
{
    w.reset(new Widget());
}

//This is safe, you have a separate instance of shared_ptr
void mt_reset_safe(std::shared_ptr<Widget> w)
{
    w.reset(new Widget());
}

//This is not safe, the underlying object is not protected from race conditions
void mt_set_value_not_safe(std::shared_ptr<Widget> w)
{
    w->set_value("Test value, test value");
}

//This is safe, we use a mutex to safely update the underlying object
void mt_set_value_safe(std::shared_ptr<Widget> w)
{
    auto lock = std::scoped_lock{widget_mutex};
    w->set_value("Test value, test value");
}

template<class Callable, class... Args>
void run(Callable callable, Args&&... args)
{
    auto th1 = std::thread(callable, std::forward<Args>(args)...);
    auto th2 = std::thread(callable, std::forward<Args>(args)...);
    th1.join();
    th2.join();
}

void run_not_safe_reset()
{
    auto widget = std::make_shared<Widget>();
    run(mt_reset_not_safe, std::ref(widget));
}

void run_safe_reset()
{
    auto widget = std::make_shared<Widget>();
    run(mt_reset_safe, widget);
}

void run_mt_set_value_not_safe()
{
    auto widget = std::make_shared<Widget>();
    run(mt_set_value_not_safe, widget);
}

void run_mt_set_value_safe()
{
    auto widget = std::make_shared<Widget>();
    run(mt_set_value_safe, widget);
}

int main()
{
    //Uncomment to see the results
    // run_not_safe_reset();
    // run_safe_reset();
    // run_mt_set_value_not_safe();
    // run_mt_set_value_safe();
}
In the example below, suppose I have a multithreaded queue, i.e. one that supports writes and reads from different threads (e.g. using a mutex). Both 1. the pointer this and 2. the shared pointer m_mulque are passed to the readers and writers. My question is: is dereferencing the pointers this and m_mulque thread safe? In other words, is the following code thread safe, or should I worry about undefined behaviour if I run it?
#include <memory>
#include <mutex>
#include <queue>
#include <thread>

struct multithreaded_queue
{
    void push(const std::size_t i)
    {
        std::lock_guard lock(m_mutex);
        m_queue.push(i);
    };
    void try_pop()
    {
        std::lock_guard lock(m_mutex);
        m_queue.pop();
    };
private:
    std::queue<std::size_t> m_queue;
    mutable std::mutex m_mutex;
};

class example
{
public:
    example()
    {
        m_mulque = std::make_shared<multithreaded_queue>();
    };
    void run()
    {
        auto writer = [this]()
        {
            for (std::size_t i = 0; i < 1000; i++)
            {
                m_mulque->push(i);
            };
        };
        auto reader = [this]()
        {
            for (std::size_t i = 0; i < 1000; i++)
            {
                m_mulque->try_pop();
            };
        };
        std::thread writer1(writer);
        std::thread reader1(reader);
        std::thread reader2(reader);
        writer1.join();
        reader1.join();
        reader2.join();
    };
private:
    std::shared_ptr<multithreaded_queue> m_mulque;
};

int main(int argc, char* argv[])
{
    example ex;
    ex.run(); //Is this thread safe to call?
};
I'm declaring a pointer to a thread in my class.
class A {
    std::thread* m_pThread;
    bool StartThread();
    UINT DisableThread();
};
Here is how I call a function using a thread.
bool A::StartThread()
{
    bool mThreadSuccess = false;
    {
        try {
            m_pThread = new std::thread(&A::DisableThread, this);
            mThreadSuccess = true;
        }
        catch (...) {
            m_pDisable = false;
        }
        if (m_pThread)
        {
            m_pThread = nullptr;
        }
    }
    return mThreadSuccess;
}
Here is the function called by the spawned thread.
UINT A::DisableThread()
{
    //print something here.
    return 0;
}
If I call this StartThread() function 10 times, will it have a memory leak?
for (i = 0; i < 10; i++) {
    bool sResult = StartThread();
    if (sResult) {
        m_pAcceptStarted = true;
    }
}
What is the correct way of freeing
m_pThread= new std::thread(&A::DisableThread, this);
The correct way to free a non-array object created using allocating new is to use delete.
Avoid bare owning pointers and avoid unnecessary dynamic allocation. The example doesn't demonstrate any need for dynamic storage, and ideally you should use a std::thread member instead of a pointer.
If I call this StartThread() function 10 times. Will it have a memory leak?
Even a single call will result in a memory leak. The leak happens when you throw away the pointer value here:
m_pThread= nullptr;
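For reference, a minimal fix under the original design would be something like this (a sketch; whether to join() or detach() depends on the rest of the program):
// Sketch: the thread object must be joined (or detached) and then deleted
// before the pointer is discarded, otherwise the allocation is leaked.
if (m_pThread)
{
    if (m_pThread->joinable())
        m_pThread->join();
    delete m_pThread;        // frees the object created with new
    m_pThread = nullptr;
}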
Could you add your better solution?
Here's one:
auto future = std::async(std::launch::async, &A::DisableThread, this);
// do something while the other task executes in another thread
do_something();
// wait for the thread to finish and get the value returned by A::DisableThread
return future.get();
I'd personally prefer using a thread pool in a real project, but this example should give you an idea of how you could handle threads without new/delete.
#include <iostream>
#include <thread>
#include <vector>

class A
{
public:
    template<typename Fn>
    void CallAsync(Fn fn)
    {
        // put thread in vector
        m_threads.emplace_back(std::thread(fn));
    }

    ~A()
    {
        for (auto& thread : m_threads)
        {
            thread.join();
        }
    }

    void someHandler()
    {
        std::cout << "*";
    };

private:
    std::vector<std::thread> m_threads;
};

int main()
{
    A a;
    for (int i = 0; i < 10; ++i)
    {
        a.CallAsync([&a] { a.someHandler(); });
    }
}
I have written a multi-threaded app in Qt/C++11 on Windows.
The idea is to hold and recycle strings from a pool, using smart pointers.
Here is stringpool.cpp:
#include "stringpool.h"
QMutex StringPool::m_mutex;
int StringPool::m_counter;
std::stack<StringPool::pointer_type<QString>> StringPool::m_pool;
StringPool::pointer_type<QString> StringPool::getString()
{
QMutexLocker lock(&m_mutex);
if (m_pool.empty())
{
add();
}
auto inst = std::move(m_pool.top());
m_pool.pop();
return inst;
}
void StringPool::add(bool useLock, QString * ptr)
{
if(useLock)
m_mutex.lock();
if (ptr == nullptr)
{
ptr = new QString();
ptr->append(QString("pomo_hacs_%1").arg(++m_counter));
}
StringPool::pointer_type<QString> inst(ptr, [this](QString * ptr) { add(true, ptr); });
m_pool.push(std::move(inst));
if(useLock)
m_mutex.unlock();
}
And here is stringpool.h:
#pragma once
#include <QMutex>
#include <QString>
#include <functional>
#include <memory>
#include <stack>

class StringPool
{
public:
    template <typename T> using pointer_type = std::unique_ptr<T, std::function<void(T*)>>;
    //
    StringPool() = default;
    pointer_type<QString> getString();
private:
    void add(bool useLock = false, QString * ptr = nullptr);
    //
    static QMutex m_mutex;
    static int m_counter;
    static std::stack<pointer_type<QString>> m_pool;
};
And here is the test app:
#include <QtCore>
#include "stringpool.h"

static StringPool Pool;

class Tester : public QThread
{
public:
    void run() override
    {
        for (int i = 0; i < 20; i++)
        {
            {
                auto str = Pool.getString();
                fprintf(stderr, "Thread %p : %s \n", QThread::currentThreadId(), str->toUtf8().data());
                msleep(rand() % 500);
            }
        }
        fprintf(stderr, "Thread %p : FINITA! \n", QThread::currentThreadId());
    }
};

#define MAX_TASKS_NBR 3

int main(int argc, char *argv[])
{
    QCoreApplication app(argc, argv);
    Tester tester[MAX_TASKS_NBR];
    for (auto i = 0; i < MAX_TASKS_NBR; i++)
        tester[i].start();
    for (auto i = 0; i < MAX_TASKS_NBR; i++)
        tester[i].wait();
    //
    return 0;
}
It compiles OK, and it runs and produces the following result:
Well, the idea is that the app runs (apparently) OK.
But immediately after it finishes, I get this error:
Does anyone have an idea how I can fix this?
The reason for this error has to do with the smart pointer and not the multithreading.
You define pointer_type as an alias for unique_ptr with a custom deleter
template <typename T> using pointer_type = std::unique_ptr<T, std::function<void(T*)>>;
You create strings with custom deleters
void StringPool::add(bool useLock, QString * ptr)
{
    if (ptr == nullptr)
    {
        ptr = new QString();
        ptr->append(QString("pomo_hacs_%1").arg(++m_counter));
    }
    StringPool::pointer_type<QString> inst(ptr, [this](QString * ptr) { add(true, ptr); }); // here
    m_pool.push(std::move(inst));
}
At the end of the program, m_pool goes out of scope and runs its destructor.
Consider the path of execution: m_pool will try to destroy all its members. For each member, it runs the custom deleter. The custom deleter calls add, and add pushes the pointer back onto the stack.
Logically this is an infinite loop, but in practice it is more likely to produce undefined behavior by breaking the consistency of the data structure (the stack shouldn't have new members pushed into it while it is being destroyed). An exception might occur due to call-stack overflow, or literal stack overflow (heh) when there is not enough memory to add to the stack data structure. Since the exception escapes a destructor unhandled, it ends the program immediately. But it could just as likely be a segfault caused by pushing while destructing.
Fixes:
I already didn't like your add function.
StringPool::pointer_type<QString> StringPool::getString()
{
    QMutexLocker lock(&m_mutex);
    if (m_pool.empty())
    {
        auto ptr = new QString(QString("pomo_hacs_%1").arg(++m_counter));
        return pointer_type<QString>(ptr, [this](QString* ptr) { reclaim(ptr); });
    }
    auto inst = std::move(m_pool.top());
    m_pool.pop();
    return inst;
}

void StringPool::reclaim(QString* ptr)
{
    QMutexLocker lock(&m_mutex);
    if (m_teardown)
        delete ptr;
    else
        m_pool.emplace(ptr, [this](QString* ptr) { reclaim(ptr); });
}

StringPool::~StringPool()
{
    QMutexLocker lock(&m_mutex);
    m_teardown = true;
}
StringPool used to be effectively a static class, but with this fix it must now be a singleton (instance) class.
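For completeness, the matching header could look roughly like this (a sketch only; reclaim, m_teardown and the destructor are the names used in the fix above, the data members are assumed to become non-static, and the static definitions in stringpool.cpp would be dropped accordingly):
#pragma once
#include <QMutex>
#include <QString>
#include <functional>
#include <memory>
#include <stack>

class StringPool
{
public:
    template <typename T>
    using pointer_type = std::unique_ptr<T, std::function<void(T*)>>;

    StringPool() = default;
    ~StringPool();                       // sets m_teardown under the mutex
    pointer_type<QString> getString();

private:
    void reclaim(QString* ptr);          // replaces the old add()

    QMutex m_mutex;
    int m_counter = 0;
    bool m_teardown = false;
    std::stack<pointer_type<QString>> m_pool;
};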
It might be tempting to pull m_teardown out of the critical section, but it is shared data, so doing so would open the door to race conditions. As a premature optimization, you could make m_teardown an std::atomic<bool> and perform a read check before entering the critical section (the critical section can be skipped once the flag reads true), but this requires that 1) you check the value again inside the critical section, and 2) you only ever change it from false to true, exactly once.
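A sketch of that idea, assuming m_teardown is changed to std::atomic<bool> in the header:
// Sketch of the suggested optimization. The unlocked pre-check only skips the
// mutex once teardown has been observed; otherwise the flag is re-checked
// under the lock, as required above.
void StringPool::reclaim(QString* ptr)
{
    if (m_teardown.load())                 // cheap pre-check, no lock taken
    {
        delete ptr;                        // pool is shutting down: just free the string
        return;
    }
    QMutexLocker lock(&m_mutex);
    if (m_teardown)                        // re-check inside the critical section
        delete ptr;
    else
        m_pool.emplace(ptr, [this](QString* p) { reclaim(p); });
}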
I am curious whether I have been relying on undefined behavior when clearing the data_received vector in client.cpp after passing it by reference. I have never had problems with invalid data, but I can see how this might be a lurking issue. The vector is passed by reference all the way into the final queue, while another thread dequeues at its own rate, but only after queue_event.notify_all() fires.
If this is an issue, I believe a solution could be to move the clear() to just after the blocking client->receive call. Thoughts?
blocking_queue.h
template <typename T>
class BlockingQueue {
    ...
    std::queue<T> queue;
    ...
};
blocking_queue.cpp
template <class T>
void BlockingQueue<T>::enqueue(T const &item)
{
    std::unique_lock<std::mutex> lk(queue_lock);
    queue.push(item);
    lk.unlock();
    queue_event.notify_all();
}

template <class T>
T BlockingQueue<T>::dequeue()
{
    std::unique_lock<std::mutex> lk(queue_lock);
    if (queue_event.wait_for(lk, std::chrono::milliseconds(dequeue_timeout)) == std::cv_status::no_timeout)
    {
        T rval = queue.front();
        queue.pop();
        return rval;
    }
    else
    {
        throw std::runtime_error("dequeue timeout");
    }
}
client.cpp
void Client::read_from_server()
{
    std::vector<uint8_t> data_received;
    while (run)
    {
        if (client->is_connected())
        {
            uint8_t buf[MAX_SERVER_BUFFER_SIZE];
            int returned;
            memset(buf, 0, MAX_SERVER_BUFFER_SIZE);
            returned = client->receive(client->get_socket_descriptor(), buf, MAX_SERVER_BUFFER_SIZE);
            // should probably move data_received.clear() to here!!
            if (returned > 0)
            {
                for (int i = 0; i < returned; i++)
                {
                    data_received.push_back(buf[i]);
                }
                if (incoming_queue)
                {
                    incoming_queue->enqueue(data_received);
                }
                data_received.clear();
            }
            else
            {
                client->set_connected(false);
            }
        }
    }
}
I don't see any potential UB due to data_received.clear(), because the std::queue<T> queue; will hold a copy of the passed vector when incoming_queue->enqueue(data_received); is called.
If access to the queue is well synchronized, which seems to be the case, then the code should be safe.
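A minimal standalone sketch of why the copy makes this safe (not the poster's code):
#include <cassert>
#include <cstdint>
#include <queue>
#include <vector>

int main()
{
    std::queue<std::vector<uint8_t>> q;
    std::vector<uint8_t> data_received{1, 2, 3};

    q.push(data_received);   // push(T const&) copies the vector into the queue
    data_received.clear();   // clearing the original does not affect the copy

    assert(q.front().size() == 3);
    return 0;
}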