I need to share a boost::deadline_timer between two threads. The Boost documentation says "The shared instances are not threadsafe". Here is some example code:
class ClassA : public boost::enable_shared_from_this<ClassA>
{
    boost::asio::io_service m_io_service;
    boost::asio::deadline_timer* m_timer;
public:
    ClassA()
    {
        m_timer = new boost::asio::deadline_timer(m_io_service);
    }
    void destroy()
    {
        m_timer->cancel();
        delete m_timer;
        m_timer = NULL;
    }
    void thread_method()
    {
        m_timer->expires_from_now(...);
        m_timer->async_wait(...);
    }
    void run()
    {
        boost::thread t(&ClassA::thread_method, shared_from_this());
    }
};
My question is "To synchronize timer access between destroy() and thread_method(), can I use boost::atomic ?
Header:
boost::atomic<boost::asio::deadline_timer*> m_timer;
Constructor:
m_timer = new boost::asio::deadline_timer(m_io_service);
Is it thread-safe?
Thank you.
No, that won't help.
The atomic only makes stores/loads of the pointer indivisible. When you dereference it, you're just accessing the deadline_timer directly, unsynchronized.
So you can either:
use traditional thread synchronization around all accesses to the deadline timer (e.g. a mutex; see the sketch below), or
use an Asio strand to create a 'logical' thread of execution, and take care to only access the deadline timer from that strand.
The strand approach is potentially more efficient, but it requires you to think about the flow of execution more carefully so you don't accidentally create a data race.
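A minimal sketch of the mutex option, applied to the class from the question (the mutex member, the fixed expiry and the no-op handler are my additions; any completion handler that touches the timer would need to take the same lock):
#include <boost/asio.hpp>
#include <boost/thread.hpp>
#include <boost/enable_shared_from_this.hpp>
class ClassA : public boost::enable_shared_from_this<ClassA>
{
    boost::asio::io_service& m_io_service;
    boost::asio::deadline_timer m_timer;
    boost::mutex m_mutex;
public:
    explicit ClassA(boost::asio::io_service& io)
        : m_io_service(io), m_timer(io) {}
    void destroy()
    {
        boost::lock_guard<boost::mutex> lock(m_mutex);
        m_timer.cancel();   // pending waits complete with operation_aborted
    }
    void thread_method()
    {
        boost::lock_guard<boost::mutex> lock(m_mutex);
        m_timer.expires_from_now(boost::posix_time::seconds(1));
        m_timer.async_wait([](const boost::system::error_code&) { /* ... */ });
    }
};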
Related
I need to work with several objects, where each operation may take a lot of time.
The processing could not be placed in a GUI (main) thread, where I start it.
I need to make all communication with these objects asynchronous, something similar to std::async with std::future, or QtConcurrent::run() with QFuture in my main framework (Qt 5), but neither lets me select the thread. I always need to work with a given object (objects == devices) in a single, specific additional thread,
because:
I need to make a universal solution and don't want to make each class thread-safe
For example, even if I made a thread-safe wrapper for QSerialPort, a serial port in Qt cannot be accessed from more than one thread:
Note: The serial port is always opened with exclusive access (that is, no other process or thread can access an already opened serial port).
Usually communication with a device consists of transmitting a command and receiving an answer. I want to process each answer exactly where the request was sent, and I don't want to use purely event-driven logic.
So, my question.
How can the function be implemented?
MyFuture<T> fut = myAsyncStart(func, &specificLiveThread);
It is essential that the same live thread can be passed in many times.
Let me answer without referring to the Qt library, since I don't know its threading API.
In the C++11 standard library there is no straightforward way to reuse a created thread. A thread executes a single function and can only be joined or detached. However, you can implement this with the producer-consumer pattern: the consumer thread executes tasks (represented as std::function objects, for instance) which are placed in a queue by the producer thread. So, if I understand correctly, you need a single-threaded thread pool.
I can recommend my C++14 implementation of thread pools as task queues. It isn't widely used (yet!), but it is covered by unit tests and has been checked with a thread sanitizer multiple times. The documentation is sparse, but feel free to ask anything in the GitHub issues!
Library repository: https://github.com/Ravirael/concurrentpp
And your use case:
#include <task_queues.hpp>
#include <future>
int main() {
// The single threaded task queue object - creates one additional thread.
concurrent::n_threaded_fifo_task_queue queue(1);
// Add tasks to queue, task is executed in created thread.
std::future<int> future_result = queue.push_with_result([] { return 4; });
// Blocks until task is completed.
int result = future_result.get();
// Executes task on the same thread as before.
std::future<int> second_future_result = queue.push_with_result([] { return 4; });
}
If you want to follow the Active Object approach, here is an example using templates:
The WorkPackage and its interface are just for storing functions with different return types in a vector (see the ActiveObject::async member function later):
class IWorkPackage {
public:
virtual void execute() = 0;
virtual ~IWorkPackage() {
}
};
template <typename R>
class WorkPackage : public IWorkPackage{
private:
std::packaged_task<R()> task;
public:
WorkPackage(std::packaged_task<R()> t) : task(std::move(t)) {
}
void execute() final {
task();
}
std::future<R> get_future() {
return task.get_future();
}
};
Here's the ActiveObject class, which takes your device type as a template parameter. It has a vector to store the method requests for the device and a thread to execute those methods one after another. Finally, the async member function is used to request a method call on the device:
template <typename Device>
class ActiveObject {
private:
Device servant;
std::thread worker;
std::vector<std::unique_ptr<IWorkPackage>> work_queue;
std::atomic<bool> done;
std::mutex queue_mutex;
std::condition_variable cv;
void worker_thread() {
while(done.load() == false) {
std::unique_ptr<IWorkPackage> wp;
{
std::unique_lock<std::mutex> lck {queue_mutex};
cv.wait(lck, [this] {return !work_queue.empty() || done.load() == true;});
if(done.load() == true) continue;
wp = std::move(work_queue.back());
work_queue.pop_back();
}
if(wp) wp->execute();
}
}
public:
ActiveObject(): done(false) {
worker = std::thread {&ActiveObject::worker_thread, this};
}
~ActiveObject() {
{
std::unique_lock<std::mutex> lck{queue_mutex};
done.store(true);
}
cv.notify_one();
worker.join();
}
template<typename R, typename ...Args, typename ...Params>
std::future<R> async(R (Device::*function)(Params...), Args... args) {
std::unique_ptr<WorkPackage<R>> wp {new WorkPackage<R> {std::packaged_task<R()> { std::bind(function, &servant, args...) }}};
std::future<R> fut = wp->get_future();
{
std::unique_lock<std::mutex> lck{queue_mutex};
work_queue.push_back(std::move(wp));
}
cv.notify_one();
return fut;
}
// In case you want to call some functions directly on the device
Device* operator->() {
return &servant;
}
};
You can use it as follows:
ActiveObject<QSerialPort> ao_serial_port;
// direct call:
ao_serial_port->setReadBufferSize(size);
//async call:
std::future<void> buf_future = ao_serial_port.async(&QSerialPort::setReadBufferSize, size);
std::future<QSerialPort::Parity> parity_future = ao_serial_port.async(&QSerialPort::parity);
// Maybe do some other work here
buf_future.get(); // wait until calculations are ready
QSerialPort::Parity p = parity_future.get(); // blocks if result not ready yet, i.e. if method has not finished execution yet
EDIT to answer the question in the comments: The AO is mainly a concurrency pattern for multiple readers/writers. As always, its use depends on the situation. The pattern is commonly used in distributed systems/network applications, for example when multiple clients request a service from a server. The clients benefit from the AO pattern because they are not blocked while waiting for the server to answer.
One reason why this pattern is not used so often in fields other than network applications might be the thread overhead: creating a thread for every active object results in a lot of threads, and thus thread contention if the number of CPUs is low and many active objects are used at once.
I can only guess why people think it is a strange issue: as you already found out, it does require some additional programming. Maybe that's the reason, but I'm not sure.
But I think the pattern is also very useful for other reasons and use cases, such as your example, where the main thread (and also other background threads) requires a service from singleton-like objects, for example devices or hardware interfaces, which are only available in small numbers, are slow in their computations, and are accessed concurrently, without the callers being blocked waiting for a result.
It's Qt. Its signal-slot mechanism is thread-aware. On your secondary (non-GUI) thread, create a QObject-derived class with an execute slot. Signals connected to this slot will marshal the event to that thread.
Note that this QObject can't be a child of a GUI object, since children need to live in their parent's thread, and this object explicitly does not live in the GUI thread.
You can hand the result back using the usual std::promise / std::future machinery.
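As a rough sketch of that worker-object idea (the class name, the slot name, and the use of QMetaObject::invokeMethod instead of an explicit signal connection are my own choices; the class needs the Q_OBJECT macro and moc):
#include <QObject>
#include <QThread>
#include <QMetaObject>
class DeviceWorker : public QObject
{
    Q_OBJECT
public slots:
    // Runs in whatever thread this object lives in, because the queued
    // invocation below is delivered through that thread's event loop.
    void execute()
    {
        // ... talk to the device here ...
    }
};
// Setup, e.g. from the GUI thread:
//   QThread thread;
//   DeviceWorker worker;              // no parent, so it can be moved
//   worker.moveToThread(&thread);
//   thread.start();
//   QMetaObject::invokeMethod(&worker, "execute", Qt::QueuedConnection);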
In my multi-threaded programs I often use an approach like shown below to synchronize access to data:
class MyAsyncClass
{
public: // public thread safe interface of MyAsyncClass
void start()
{
// add work to io_service
_ioServiceWork.reset(new boost::asio::io_service::work(_ioService));
// start io service
_ioServiceThread = boost::shared_ptr<boost::thread>(new boost::thread(boost::bind(&boost::asio::io_service::run, &_ioService)));
}
void stop()
{
_ioService.post(boost::bind(&MyAsyncClass::stop_internal, this));
// QUESTION:
// how do I wait for stop_internal to finish here?
// remove work
_ioServiceWork.reset();
// wait for the io_service to return from run()
if (_ioServiceThread && _ioServiceThread->joinable())
_ioServiceThread->join();
// allow subsequent calls to run()
_ioService.reset();
// delete thread
_ioServiceThread.reset();
}
void doSomething()
{
_ioService.post(boost::bind(&MyAsyncClass::doSomething_internal, this));
}
private: // internal handlers
void stop_internal()
{
_myMember = 0;
}
void doSomething_internal()
{
_myMember++;
}
private: // private variables
// io service and its thread
boost::asio::io_service _ioService;
boost::shared_ptr<boost::thread> _ioServiceThread;
// work object to prevent io service from running out of work
std::unique_ptr<boost::asio::io_service::work> _ioServiceWork;
// some member that should be modified only from _ioServiceThread
int _myMember;
};
The public interface of this class is thread-safe in the sense that its public methods can be called from any thread, and boost::asio::io_service takes care that access to the private members of this class is synchronized. Therefore the public doSomething() does nothing but post the actual work to the io_service.
The start() and stop() methods of MyAsyncClass obviously start and stop processing in MyAsyncClass. I want to be able to call MyAsyncClass::stop() from any thread and it should not return before the uninitialization of MyAsyncClass has finished.
Since in this particular case I need to modify one of my private members (that needs synchronized access) when stopping, I introduced a stop_internal() method which I post to the io_service from stop().
Now the question is: How can I wait for the execution of stop_internal() to finish inside stop()? Note that I cannot call stop_internal() directly because it would run in the wrong thread.
Edit:
It would be nice to have a solution that also works if MyAsyncClass::stop() is called from the _ioServiceThread, so that MyAsyncClass can also stop itself.
I just found a very nice solution myself:
Instead of removing work (resetting _ioServiceWork) in stop(), I do it at the end of stop_internal(). This means that _ioServiceThread->join() blocks until stop_internal() has finished - exactly what I want.
The nice thing about this solution is that it doesn't need any mutex or condition variable or stuff like this.
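For completeness, a sketch of the rearranged shutdown based on that description (member names as in the code above; the exact bodies are my reading of it):
void stop()
{
    _ioService.post(boost::bind(&MyAsyncClass::stop_internal, this));
    // run() returns only once the work object is gone and the queue is empty,
    // so join() cannot complete before stop_internal() has finished.
    if (_ioServiceThread && _ioServiceThread->joinable())
        _ioServiceThread->join();
    _ioService.reset();
    _ioServiceThread.reset();
}
void stop_internal()
{
    _myMember = 0;
    _ioServiceWork.reset();   // removing work here, not in stop()
}
Note that stop() written this way still must not be called from _ioServiceThread itself, since joining your own thread deadlocks.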
I learned from the following links that subclassing QThread is not the correct way of using it; the proper way is to subclass QObject and then move the QObject instance to the respective thread using the moveToThread() function. I followed these links:
link 1 and link 2. But my question is: how will I then be able to use the protected static functions msleep() and usleep()? Or should I use QTimer to make a thread wait for some time?
No need for timers. For waiting, Qt provides QWaitCondition. You can implement something like this:
#include <QWaitCondition>
#include <QMutex>
void threadSleep(unsigned long ms)
{
QMutex mutex;
mutex.lock();
QWaitCondition waitCond;
waitCond.wait(&mutex, ms);
mutex.unlock();
}
This is a normal function. You can of course implement it as a member function too, if you want (it can be a static member in that case).
One solution would be to create a timer:
class Worker: public QObject
{
Q_OBJECT
///...
private slots:
void doWork()
{
//...
QTimer::singleShot(delay, this, SLOT(continueDoingWork()));
}
void continueDoingWork()
{
}
};
Sometimes you only need to run an operation in a different thread, and all these event loops and threads are overhead. Then you can use the QtConcurrent framework:
class Worker
{
public:
void doWork()
{
//...
}
} worker;
//...
QtConcurrent::run(&worker, &Worker::doWork);
Then, I usually use mutexes to simulate the sleep operations:
QMutex m;
m.lock();
m.tryLock(delay);
The canonical answer is "use signals and slots."
For example, if you want a QObject in a thread to "wake itself up" after a certain period of time, consider QTimer::singleShot(), passing in the slot as the third argument. This can be called from the slot in question, resulting in periodic execution.
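A tiny sketch of that self-rescheduling single-shot timer (class and slot names are mine):
#include <QObject>
#include <QTimer>
class Poller : public QObject
{
    Q_OBJECT
public slots:
    void tick()
    {
        // ... do one chunk of work ...
        QTimer::singleShot(1000, this, SLOT(tick()));   // run again in 1 s
    }
};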
You can't make a different thread sleep without the cooperation of that thread; that's the reason the sleep member functions of QThread are protected. If you want another thread to sleep, you need to use a condition variable or a timer inside it.
If you want to sleep the current thread with usleep(), the simplest way is to subclass QThread - it's perfectly fine as long as you don't need QThreadPool, a thread-local event loop or similar.
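A minimal sketch of that subclassing option (class name and loop are mine):
#include <QThread>
class SleepyThread : public QThread
{
protected:
    void run()
    {
        for (int i = 0; i < 10; ++i)
        {
            // ... do some work ...
            msleep(500);   // protected static member, accessible from the subclass
        }
    }
};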
I'm using a QThread and inside its run method I have a timer invoking a function that performs some heavy actions that take some time. Usually more than the interval that triggers the timer (but not always).
What I need is to protect this method so it can be invoked only if it has completed its previous job.
Here is the code:
NotificationThread::NotificationThread(QObject *parent)
: QThread(parent),
bWorking(false),
m_timerInterval(0)
{
}
NotificationThread::~NotificationThread()
{
;
}
void NotificationThread::fire()
{
if (!bWorking)
{
m_mutex.lock(); // <-- This is not protecting the GetUpdateTime method from being invoked over and over.
bWorking = true;
int size = groupsMarkedForUpdate.size();
if (MyApp::getInstance()->GetUpdateTime(batchVectorResult))
{
bWorking = false;
emit UpdateNotifications();
}
m_mutex.unlock();
}
}
void NotificationThread::run()
{
m_NotificationTimer = new QTimer();
connect(m_NotificationTimer,
SIGNAL(timeout()),
this,
SLOT(fire()),
Qt::DirectConnection);
int interval = val.toInt();
m_NotificationTimer->setInterval(3000);
m_NotificationTimer->start();
QThread::exec();
}
// This method is invoked from the main class
void NotificationThread::Execute(const QStringList batchReqList)
{
m_batchReqList = batchReqList;
start();
}
You could always have the thread that needs to run the method connect to an onDone signal that alerts all subscribers that it is complete. Then you should not run into the problems associated with double-checked locking and memory reordering. Maintain the run state in each thread.
I'm assuming you want to protect your thread from calls from another thread. Am I right? If yes, then..
This is what QMutex is for. QMutex gives you an interface to "lock" a resource until it is "unlocked", thus serializing access to it. You can choose to keep it locked until the work is done. But use it at your own risk: QMutex presents its own problems when used incorrectly. Refer to the documentation for more information on this.
But there are many more ways to solve your problem. For example, @Beached suggests a simpler way: your instance of QThread would emit a signal when it's done. Or better yet, keep a bool isDone inside your thread which is true if it's done and false if it's not. If it's true, then it's safe to call the method. But make sure you do not manipulate isDone outside the thread that owns it; I suggest you only manipulate isDone inside your QThread.
Here's the class documentation: link
LOL, I seriously misinterpreted your question. Sorry. It seems you've already done my second suggestion with bWorking.
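For illustration, here is one way the mutex/flag idea could be applied to the fire() slot from the question, assuming fire() really can be entered concurrently; it simply skips a tick while the previous one is still running (my sketch, not the asker's code):
void NotificationThread::fire()
{
    // If the previous invocation is still running, skip this tick entirely.
    if (!m_mutex.tryLock())
        return;
    if (MyApp::getInstance()->GetUpdateTime(batchVectorResult))
        emit UpdateNotifications();
    m_mutex.unlock();   // always released, even if GetUpdateTime() returned false
}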
I'm in a scenario where I have to terminate a thread while it is running, in response to a user action in the GUI. I'm using Qt 4.5.2 on Windows. One way to do that is the following:
class MyThread : public QThread
{
QMutex mutex;
bool stop;
public:
MyThread() : stop(false) {}
void requestStop()
{
QMutexLocker locker(&mutex);
stop = true;
}
void run()
{
while(counter1--)
{
QMutexLocker locker(&mutex);
if (stop) return;
while(counter2--)
{
}
}
}
};
Please note that the above code is minimal. The run function can take up to 20 seconds to finish, so I want to avoid locking and unlocking the mutex in the loop. Is there any way faster than this method?
Thanks in advance.
It doesn't directly answer your need, but can't you scope your mutex much tighter?
while(counter1--) {
{
QMutexLocker locker(&mutex);
if (stop) return;
} // End locking scope : we won't read it anymore until next time
while(counter2--)
...
Firstly it doesn't look like you need a mutex around your entire inner loop, just around the if (stop) expression as the others say, but I may be missing some of your app context to definitively say that. Maybe you need requestStop() to block until the thread exits.
If the reduced mutex scope is adequate for you, then you don't need a mutex at all if you declare your stop variable as "volatile". The "volatile" keyword causes (at least under VC++) a read/write memory barrier to be placed around accesses to stop, which means your requestStop() call is guaranteed to be communicated to your thread and not cached away. The following code should work just fine on multicore processors.
class MyThread : public QThread
{
volatile bool stop;
public:
MyThread() : stop(false) {}
void requestStop()
{
stop = true;
}
void run()
{
while(counter1--)
{
if (stop) return;
while(counter2--)
{
}
}
}
};
The main problem in your code is that you are holding the lock for much longer than you actually need. You should unlock it after you check the stop variable. That should make it much faster (depending on what is done in the inner loop). A lock-free alternative is to use QAtomicInt.
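A sketch of that QAtomicInt variant, using the Qt 5 style loadAcquire()/storeRelease() accessors (Qt 4's QAtomicInt exposes a slightly different interface, so treat this as illustrative; the counters stand in for the ones in the question):
#include <QThread>
#include <QAtomicInt>
class MyThread : public QThread
{
    QAtomicInt stop;
    int counter1, counter2;   // stand-ins for the question's counters
public:
    MyThread() : stop(0), counter1(0), counter2(0) {}
    void requestStop()
    {
        stop.storeRelease(1);
    }
    void run()
    {
        while (counter1--)
        {
            if (stop.loadAcquire()) return;   // cheap check, no mutex needed
            while (counter2--)
            {
            }
        }
    }
};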
You could use a critical section instead of a mutex. They have a bit less overhead.
Otherwise you have to use this approach. If you want the worker thread to terminate within some interval t seconds, then it needs to check for a termination event at least once every t seconds.
Why not use an event that can be checked periodically, and let the underlying platform worry about whether a mutex is needed to handle the event? (I assume that Qt has event objects - I'm not all that familiar with it.) If you use an event object, the platform will scope any critical section needed to handle that event to as short a time period as necessary.
Also, since there's likely not going to be much contention for that mutex (the only time would be when something wants to kill the thread), grabbing and releasing the mutex will likely have little performance impact. In a loop that's taking 20 seconds to run, I'd be surprised if the impact were anything that could even be measured. But maybe I'm wrong - try measuring it by timing the thread with and without the mutex being taken. See if it's something you really need to concern yourself with.
Qt doesn't seem to have the kind of event object I'm talking about (one along the lines of Win32's event objects), but a QSemaphore can be used just as easily:
class MyThread : public QThread
{
QSemaphore stopFlag;
public:
MyThread() : stopFlag(1) {}
void requestStop()
{
stopFlag.tryAcquire(); // decrement the flag (if it hasn't been already)
}
void run()
{
while(counter1--)
{
if (!stopFlag.available()) return;
while(counter2--)
{
}
}
}
};