Any way to detect if a QObject belongs to a "dead" QThread? - c++

The story:
I make use of the QtConcurrent API for every "long" operation in my application.
It works pretty well, but I face some problems with QObject creation.
Consider this piece of code, which uses a thread to create a "Foo" object:
QFuture<Foo*> future = QtConcurrent::run([=]()
{
    Data* data = /* long operation to acquire the data */;
    Foo* result = new Foo(data);
    return result;
});
It works well, but if the "Foo" class is derived from QObject, the "result" instance belongs to the QThread that created it.
So to use signals/slots properly with the "result" instance, one should do something like this:
QFuture<Foo*> future = QtConcurrent::run([=]()
{
    Data* data = /* long operation to acquire the data */;
    Foo* result = new Foo(data);
    // Move "result" to the main application thread
    result->moveToThread(qApp->thread());
    return result;
});
Now, everything works as expected, and I think this is the normal behaviour and the nominal solution.
The problem:
I have a lot of code like this, which sometimes creates objects that can themselves create more objects. Most of them are created properly, with a "moveToThread" call.
But sometimes, I miss one "moveToThread" call.
And then a lot of things appear to stop working (because that object's slots are "broken"), without any Qt warning.
I sometimes spend a lot of time figuring out why something doesn't work, before realizing it's only because slots are no longer called on a particular object instance.
The question:
Is there any way to help prevent/detect/debug this kind of situation?
For example:
having a warning logged every time a QThread is deleted while objects that belong to it are still alive?
having a warning logged every time a signal is emitted to an object whose QThread has been deleted?
having a warning logged every time a signal is emitted to an object (in another thread) and not processed before a timeout?
Thanks

It is possible to track an object's movement among threads. Just before an object is moved to a new thread, it is sent a ThreadChange event. You can filter that event and have your code take note of when an object leaves a thread. But it's too early at that point to know whether the object is going anywhere. To detect that, you need to post a metacall (see this question) to the object's queue, to be executed as soon as the object's event processing resumes in the new thread. You'd also attach to QThread::finished to get a chance to look through your object list and check whether any of them live on the thread that's about to die.
But all this is fairly involved: each thread will need its own tracker/filter object, as event filters must live in the object's thread. We're probably talking of more than 200 lines of code to do it right, handling all corner cases.
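For a rough idea of what just the last piece might look like, here is a minimal sketch of the QThread::finished check only (the ThreadGuard class and registerObject method are hypothetical names, not Qt API; a real solution would also need the ThreadChange/metacall machinery described above):

#include <QObject>
#include <QPointer>
#include <QThread>
#include <QVector>
#include <QtDebug>

// Hypothetical helper: warns when a thread finishes while registered
// objects still have their affinity set to it.
class ThreadGuard : public QObject {
    QVector<QPointer<QObject>> m_tracked;
public:
    explicit ThreadGuard(QThread *thread, QObject *parent = nullptr) : QObject(parent) {
        connect(thread, &QThread::finished, this, [this, thread] {
            for (const auto &obj : m_tracked)
                if (obj && obj->thread() == thread)
                    qWarning() << obj << "still lives on a thread that is finishing";
        });
    }
    void registerObject(QObject *obj) { m_tracked.append(QPointer<QObject>(obj)); }
};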
Instead, you can leverage RAII and hold your objects using handles that manage thread affinity as a resource (because it is one!):
// https://github.com/KubaO/stackoverflown/tree/master/questions/thread-track-38611886
#include <QtConcurrent>
template <typename T>
class MainResult {
    Q_DISABLE_COPY(MainResult)
    T * m_obj;
public:
    template <typename... Args>
    MainResult(Args&&... args) : m_obj{ new T(std::forward<Args>(args)...) } {}
    MainResult(T * obj) : m_obj{obj} {}
    T* operator->() const { return m_obj; }
    operator T*() const { return m_obj; }
    T* operator()() const { return m_obj; }
    ~MainResult() { m_obj->moveToThread(qApp->thread()); }
};
struct Foo : QObject { Foo(int) {} };
You can return a MainResult by value, but the return type of the functor must be explicitly given:
QFuture<Foo*> test1() {
    return QtConcurrent::run([=]()->Foo*{ // explicit return type
        MainResult<Foo> obj{1};
        obj->setObjectName("Hello");
        return obj; // return by value
    });
}
Alternatively, you can return the result of calling MainResult; it is itself a functor, to save a bit of typing, but this might be considered a hack, and perhaps you should convert operator()() to a method with a short name.
QFuture<Foo*> test2() {
    return QtConcurrent::run([=](){ // deduced return type
        MainResult<Foo> obj{1};
        obj->setObjectName("Hello");
        return obj(); // return by call
    });
}
While it's preferable to construct the object along with the handle, it's also possible to pass an instance pointer to the handle's constructor:
MainResult<Foo> obj{ new Foo{1} };

Related

Using member shared_ptr from a member callback function running in different thread (ROS topic subscription)

I am not completely sure how to best title this question since I am not completely sure what the nature of the problem actually is (I guess "how fix segfault" is not a good title).
The situation is, I have written this code:
template <typename T>
class LatchedSubscriber {
private:
    ros::Subscriber sub;
    std::shared_ptr<T> last_received_msg;
    std::shared_ptr<std::mutex> mutex;
    int test;

    void callback(T msg) {
        std::shared_ptr<std::mutex> thread_local_mutex = mutex;
        std::shared_ptr<T> thread_local_msg = last_received_msg;
        if (!thread_local_mutex) {
            ROS_INFO("Mutex pointer is null in callback");
        }
        if (!thread_local_msg) {
            ROS_INFO("lrm: pointer is null in callback");
        }
        ROS_INFO("Test is %d", test);
        std::lock_guard<std::mutex> guard(*thread_local_mutex);
        *thread_local_msg = msg;
    }

public:
    LatchedSubscriber() {
        last_received_msg = std::make_shared<T>();
        mutex = std::make_shared<std::mutex>();
        test = 42;
        if (!mutex) {
            ROS_INFO("Mutex pointer is null in constructor");
        }
        else {
            ROS_INFO("Mutex pointer is not null in constructor");
        }
    }

    void start(ros::NodeHandle &nh, const std::string &topic) {
        sub = nh.subscribe(topic, 1000, &LatchedSubscriber<T>::callback, this);
    }

    T get_last_msg() {
        std::lock_guard<std::mutex> guard(*mutex);
        return *last_received_msg;
    }
};
Essentially what it is doing is subscribing to a topic (channel), meaning that a callback function is called each time a message arrives. The job of this class is to store the last received message so the user of the class can always access it.
In the constructor I allocate a shared_ptr for the message and one for a mutex to synchronize access to the message. The reason for using heap memory here is so the LatchedSubscriber can be copied and the same latched message can still be read. (The Subscriber already implements this kind of behavior: copying it does nothing special, except that the callback stops being called once the last instance goes out of scope.)
The problem is basically that the code segfaults. I am pretty sure the reason for this is that my shared pointers become null in the callback function, despite not being null in the constructor.
The ROS_INFO calls print:
Mutex pointer is not null in constructor
Mutex pointer is null in callback
lrm: pointer is null in callback
Test is 42
I don't understand how this can happen. I guess I have either misunderstood something about shared pointers, ros topic subscriptions, or both.
Things I have done:
At first I had the subscribe call happening in the constructor. I think giving the this pointer to another thread before the constructor has returned can be bad, so I moved this into a start function which is called after the object has been constructed.
There are many aspects to the thread safety of shared_ptrs it seems. At first I used mutex and last_received_msg directly in the callback. Now I have copied them into local variables hoping this would help. But it doesn't seem to make a difference.
I have added an integer member variable (test). From the callback I can read the value I assigned to it in the constructor. Just a sanity check to make sure that the callback is actually called on an instance created by my constructor.
I think I have figured out the problem.
When subscribing I am passing the this pointer to the subscribe function along with the callback. If the LatchedSubscriber is ever copied and the original deleted, that this pointer becomes invalid, but the sub still exists so the callback keeps being called.
I didn't think this happened anywhere in my code, but the LatchedSubscriber was stored as a member inside an object which was owned by a unique pointer. It looks like make_unique might be doing some copying internally? In any case, it is wrong to use the this pointer for the callback.
I ended up doing the following instead:
void start(ros::NodeHandle &nh, const std::string &topic) {
    auto l_mutex = mutex;
    auto l_last_received_msg = last_received_msg;
    boost::function<void(const T)> callback =
        [l_mutex, l_last_received_msg](const T msg) {
            std::lock_guard<std::mutex> guard(*l_mutex);
            *l_last_received_msg = msg;
        };
    sub = nh.subscribe<T>(topic, 1000, callback);
}
This way copies of the two smart pointers are used with the callback instead.
Assigning the closure to a variable of type boost::function<void(const T)> seems to be necessary, probably due to the way the subscribe function's overloads are declared.
This appears to have fixed the issue. I might also move the subscription into the constructor again and get rid of the start method.

Lifetime issues of std::promise in an async API

I'm wondering how to develop an asynchronous API using promises and futures.
The application uses a single data stream for both unsolicited periodic data and request/reply communication.
For the request/reply case, blocking until the reply is received is not an option, and I don't want to litter the code with callbacks, so I'd like to write some kind of SendMessage that accepts the id of the expected reply and completes only upon reception. It's up to the caller to read the reply.
A candidate API could be:
std::future<void> sendMessage(Message msg, id expected)
{
    // Write the message
    auto promise = make_shared<std::promise<void>>();
    // Memorize the promise somewhere accessible to the receiving thread
    return promise->get_future();
}
The worker thread, upon reception of a message, should be able to query a data structure to know whether someone is waiting for it and "release" the future.
Given that promises are not re-usable, what I'm trying to understand is what kind of data structure I should use to manage the "in flight" promises.
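For illustration, one possible shape for such a registry is a map from reply id to promise, guarded by a mutex (a minimal sketch; the Id alias and the PendingReplies name are assumptions, not part of the question's code):

#include <future>
#include <mutex>
#include <unordered_map>

using Id = int; // assumption: whatever identifies an expected reply

class PendingReplies {
    std::mutex m_;
    std::unordered_map<Id, std::promise<void>> pending_;
public:
    // Called by sendMessage(): register interest and hand back the future.
    std::future<void> expect(Id id) {
        std::lock_guard<std::mutex> lock(m_);
        return pending_[id].get_future();
    }
    // Called by the receiving thread when a reply with `id` arrives.
    void fulfil(Id id) {
        std::lock_guard<std::mutex> lock(m_);
        auto it = pending_.find(id);
        if (it != pending_.end()) {
            it->second.set_value();
            pending_.erase(it); // promises are single-use, so drop it
        }
    }
};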
Setting the state of a shared flag can enable the worker to know whether the other side, say the boss, is still expecting the result.
The shared flag, along with the promise and the future, can be enclosed in a class (template), say Request. The boss sets the flag by destroying his copy of the request, and the worker queries whether the boss still expects the request to be completed by calling a member function on his own copy of the request.
Simultaneous reads/writes of the flag should probably be synchronized.
The boss may not access the promise, and the worker may not access the future.
There should be at most two copies of the request, because the flag is set on destruction of a request object. To achieve this, we can declare the corresponding member functions as deleted or private, and hand out exactly two copies of the request on construction.
Here follows a simple implementation of Request:
#include <atomic>
#include <future>
#include <memory>

// Placeholder tag types; replace them with whatever identifies the worker
// and the boss in your implementation (see the note below the code).
struct WorkerType {};
struct BossType {};

template <class T>
class Request {
public:
    struct Detail {
        std::atomic<bool> is_canceled_{false};
        std::promise<T> promise_;
        std::future<T> future_ = promise_.get_future();
    };
    static auto NewRequest() {
        std::unique_ptr<Request> copy1{new Request()};
        std::unique_ptr<Request> copy2{new Request(*copy1)};
        return std::make_pair(std::move(copy1), std::move(copy2));
    }
    Request(Request &&) = delete;
    ~Request() {
        detail_->is_canceled_.store(true);
    }
    Request &operator=(const Request &) = delete;
    Request &operator=(Request &&) = delete;
    // simple api
    std::promise<T> &Promise(const WorkerType &) {
        return detail_->promise_;
    }
    std::future<T> &Future(const BossType &) {
        return detail_->future_;
    }
    // return value:
    //   true if available, false otherwise
    bool CheckAvailable() {
        return detail_->is_canceled_.load() == false;
    }
private:
    Request() : detail_(new Detail{}) {}
    Request(const Request &) = default;
    std::shared_ptr<Detail> detail_;
};
template <class T>
auto SendMessage() {
    auto result = Request<T>::NewRequest();
    // TODO : send result.second (the other copy) to the worker
    return std::move(result.first);
}
A new request is constructed by the factory function NewRequest; the return value is a std::pair containing two std::unique_ptrs, each holding one copy of the newly created request.
The worker can now use the member function CheckAvailable() to check whether the request has been canceled.
And the shared state is managed properly (I believe) by the std::shared_ptr.
Note on std::promise<T> &Promise(const WorkerType &): the const reference parameter (which should be replaced with a proper type according to your implementation) is there to prevent the boss from calling this function by accident, while the worker can easily provide a proper argument for calling it. The same goes for std::future<T> &Future(const BossType &).
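A rough usage sketch, assuming the placeholder tag types above (an illustration only, not part of the answer's code; error handling omitted):

// Worker side: receives copy2 of the request.
void WorkerSide(std::unique_ptr<Request<int>> req) {
    if (req->CheckAvailable())                     // is the boss still waiting?
        req->Promise(WorkerType{}).set_value(42);  // publish the reply
}   // destroying req here also sets the flag, which is harmless at this point

// Boss side: keeps copy1 of the request.
void BossSide() {
    auto req = SendMessage<int>();                 // the other copy went to the worker
    int reply = req->Future(BossType{}).get();     // blocks until the worker replies
}   // destroying req early instead would signal cancellation to the worker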

Is this inter-thread object sharing strategy sound?

I'm trying to come up with a fast way of solving the following problem:
I have a thread which produces data, and several threads which consume it. I don't need to queue the produced data, because it is produced much more slowly than it is consumed (and even if this occasionally failed to be the case, it wouldn't be a problem if a data point were skipped). So, basically, I have an object that encapsulates the "most recent state", which only the producer thread is allowed to update.
My strategy is as follows (please let me know if I'm completely off my rocker):
I've created three classes for this example: Thing (the actual state object), SharedObject<Thing> (an object that can be local to each thread, and gives that thread access to the underlying Thing), and SharedObjectManager<Thing>, which wraps up a shared_ptr along with a mutex.
The instance of the SharedObjectManager (SOM) is a global variable.
When the producer starts, it instantiates a Thing and tells the global SOM about it. It then makes a copy and does all of its updating work on that copy. When it is ready to commit its changes to the Thing, it passes the new Thing to the global SOM, which locks its mutex, updates the shared pointer it keeps, and then releases the lock.
Meanwhile, the consumer threads all instantiate SharedObject<Thing>. These objects each keep a pointer to the global SOM, as well as a cached copy of the shared_ptr kept by the SOM... Each keeps this cache until update() is explicitly called.
I believe this is getting hard to follow, so here's some code:
#include <mutex>
#include <iostream>
#include <memory>

class Thing
{
private:
    int _some_member = 10;
public:
    int some_member() const { return _some_member; }
    void some_member(int val) { _some_member = val; }
};

// one global instance
template<typename T>
class SharedObjectManager
{
private:
    std::shared_ptr<T> objPtr;
    std::mutex objLock;
public:
    std::shared_ptr<T> get_sptr()
    {
        std::lock_guard<std::mutex> lck(objLock);
        return objPtr;
    }
    void commit_new_object(std::shared_ptr<T> new_object)
    {
        std::lock_guard<std::mutex> lck(objLock);
        objPtr = new_object;
    }
};

// one instance per consumer thread.
template<typename T>
class SharedObject
{
private:
    SharedObjectManager<T> * som;
    std::shared_ptr<T> cache;
public:
    SharedObject(SharedObjectManager<T> * backend) : som(backend)
    { update(); }
    void update()
    {
        cache = som->get_sptr();
    }
    T & operator*()
    {
        return *cache;
    }
    T * operator->()
    {
        return cache.get();
    }
};

// no actual threads in this test, just a quick sanity check.
SharedObjectManager<Thing> glbSOM;

int main(void)
{
    glbSOM.commit_new_object(std::make_shared<Thing>());
    SharedObject<Thing> myobj(&glbSOM);
    std::cout << myobj->some_member() << std::endl;
    // prints "10".
}
The idea for use by the producer thread is:
// initialization - on startup
auto firstStateObj = std::make_shared<Thing>();
glbSOM.commit_new_object(firstStateObj);

// main loop
while (1)
{
    // invoke the copy constructor to copy the current live Thing object
    auto nextState = std::make_shared<Thing>(*(glbSOM.get_sptr()));
    // do stuff to nextState, gradually filling out its new value
    // based on incoming data from other sources, etc.
    ...
    // commit the changes to the shared memory location
    glbSOM.commit_new_object(nextState);
}
The use by consumers would be:
SharedObject<Thing> thing(&glbSOM);
while (1)
{
    // think about the data contained in thing, and act accordingly...
    doStuffWith(thing->some_member());
    // re-cache the thing
    thing.update();
}
Thanks!
That is way overengineered. Instead, I'd suggest the following:
Create a pointer Thing* theThing together with a protecting mutex. Either a global one, or shared by some other means. Initialize it to nullptr.
In your producer: use two local objects of Thing type - Thing thingOne and Thing thingTwo (remember, thingOne is no better than thingTwo, but one of them is called thingOne for a reason; but this is a thing thing. Watch out for cats.). Start by populating thingOne. When done, lock the mutex, copy thingOne's address to theThing, unlock the mutex. Start populating thingTwo. When done, see above. Repeat until killed.
In every listener: (make sure the pointer is not nullptr). Lock the mutex. Make a copy of the object pointed to by theThing. Unlock the mutex. Work with your copy. Burn after reading. Repeat until killed.
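A minimal sketch of that two-buffer scheme (the loop functions and variable names are assumptions; Thing and doStuffWith are reused from the question):

#include <mutex>

void doStuffWith(int); // assumed to exist, as in the question

Thing      *theThing = nullptr;   // shared pointer to the latest published state
std::mutex  theThingMutex;

void producerLoop() {
    Thing thingOne, thingTwo;
    bool useFirst = true;
    while (true) {
        Thing &current = useFirst ? thingOne : thingTwo;
        // ... populate `current` (the unpublished buffer) with fresh data ...
        {
            std::lock_guard<std::mutex> lock(theThingMutex);
            theThing = &current;          // publish
        }
        useFirst = !useFirst;             // next update goes into the other buffer
    }
}

void consumerLoop() {
    while (true) {
        Thing copy;
        {
            std::lock_guard<std::mutex> lock(theThingMutex);
            if (!theThing) continue;      // nothing published yet
            copy = *theThing;             // copy under the lock
        }
        doStuffWith(copy.some_member());  // work on the private copy
    }
}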

Communication between 2 threads C++ UNIX

I need your help with wxWidgets. I have 2 threads (1 wxTimer and 1 wxThread), and I need to communicate between these 2 threads. I have a class that contains methods to read/write variables in this class (shared memory through this object).
My problem is: I instantiate this class with "new" in one thread, but I don't know whether that is also necessary in the second thread. If I instantiate it there too, the addresses of the variables are different, and since I need to communicate, I need the same values in the variables :/
I know I need a wxSemaphore to prevent errors when both threads access the data at the same time.
Thank you for your help!
EDIT: My code
So, I need to tie this into my code. Thank you all ;)
Here is the declaration of my wxTimer in my class EvtFramePrincipal (IHM):
In the .h:
EvtFramePrincipal( wxWindow* parent );
#include <wx/timer.h>
wxTimer m_timer;
In the .cpp - EvtFramePrincipal constructor:
EvtFramePrincipal::EvtFramePrincipal( wxWindow* parent )
    : FramePrincipal( parent ), m_timer(this)
{
    Connect(wxID_ANY, wxEVT_TIMER, wxTimerEventHandler(EvtFramePrincipal::OnTimer), NULL, this);
    m_timer.Start(250);
}
So the OnTimer method is called every 250 ms thanks to this line.
My second thread is started from EvtFramePrincipal (IHM):
In EvtFramePrincipal's .h:
#include "../Client.h"
Client *ClientIdle;
In EvtFramePrincipal's .cpp:
ClientIdle= new Client();
ClientIdle->Run();
In Client's .h (the thread):
class Client : public wxThread
{
public:
    Client();
    virtual void *Entry();
    virtual void OnExit();
};
In Client's .cpp (the thread):
Client::Client() : wxThread()
{
}
So far, no problem here; the thread is OK?
Now I need this class, which is used as a messenger between my 2 threads:
#ifndef PARTAGE_H
#define PARTAGE_H

#include "wx/string.h"
#include <iostream>

using std::cout;
using std::endl;

class Partage
{
public:
    Partage();
    virtual ~Partage();

    bool Return_Capteur_Aval()
    { return Etat_Capteur_Aval; }
    bool Return_Capteur_Amont()
    { return Etat_Capteur_Amont; }
    bool Return_Etat_Barriere()
    { return Etat_Barriere; }
    bool Return_Ouverture()
    { return Demande_Ouverture; }
    bool Return_Fermeture()
    { return Demande_Fermeture; }
    bool Return_Appel()
    { return Appel_Gardien; }

    void Set_Ouverture(bool Etat)
    { Demande_Ouverture=Etat; }
    void Set_Fermeture(bool Etat)
    { Demande_Fermeture=Etat; }
    void Set_Capteur_Aval(bool Etat)
    { Etat_Capteur_Aval=Etat; }
    void Set_Capteur_Amont(bool Etat)
    { Etat_Capteur_Amont=Etat; }
    void Set_Barriere(bool Etat)
    { Etat_Barriere=Etat; }
    void Set_Appel(bool Etat)
    { Appel_Gardien=Etat; }
    void Set_Code(wxString valeur_code)
    { Code=valeur_code; }
    void Set_Badge(wxString numero_badge)
    { Badge=numero_badge; }
    void Set_Message(wxString message)
    {
        Message_Affiche=wxT("");
        Message_Affiche=message;
    }

    wxString Get_Message()
    { return Message_Affiche; }
    wxString Get_Code()
    { return Code; }
    wxString Get_Badge()
    { return Badge; }

protected:
private:
    bool Etat_Capteur_Aval;
    bool Etat_Capteur_Amont;
    bool Etat_Barriere;
    bool Demande_Ouverture;
    bool Demande_Fermeture;
    bool Appel_Gardien;
    wxString Code;
    wxString Badge;
    wxString Message_Affiche;
};

#endif // PARTAGE_H
So in my EvtFramePrincipal (wxTimer) I create an instance of this class with new. But in the other thread (wxThread), what do I need to do to communicate?
Sorry if this is hard to follow :/
The main thread should create the shared variable first. After that, you can create both threads and pass each of them a pointer to the shared variable.
That way, both of them know how to interact with the shared variable. You need to use a mutex or wxSemaphore in the methods of the shared variable.
You can use a singleton to get access to a central object.
Alternatively, create the central object before creating the threads and pass a reference to the central object to the threads.
Use a mutex in the central object to prevent simultaneous access.
Creating one central object on each thread is not an option.
EDIT 1: Adding more details and examples
Let's start with some assumptions. The OP indicated that
I have 2 threads (1 wxTimer and 1 wxThread)
To tell the truth, I know very little of the wxWidgets framework, but there's always the documentation. So I can see that:
wxTimer provides a timer that will execute the wxTimer::Notify() method when the timer expires. The documentation doesn't say anything about thread execution (although there's a note "A timer can only be used from the main thread", which I'm not sure how to interpret). I would guess that the Notify method is executed in some event-loop or timer-loop thread or threads.
wxThread provides a model for thread execution that runs the wxThread::Entry() method. Running a wxThread object will actually create a thread that runs the Entry method.
So your problem is that you need the same object to be accessible in both the wxTimer::Notify() and wxThread::Entry() methods.
This object:
It's not one variable but a lot of them stored in one class
e.g.
struct SharedData {
    // NOTE: This is very simplistic.
    // Since the information here will be modified/read by
    // multiple threads, it should be protected by one or more
    // mutexes, so a class with getters/setters will probably be
    // better suited, so that access with mutexes can be enforced
    // within the class.
    SharedData() : var2(0) { }
    std::string var1;
    int var2;
};
of which you have an instance somewhere:
std::shared_ptr<SharedData> myData = std::make_shared<SharedData>();
or perhaps in pointer form, or perhaps as a local variable or object attribute.
Option 1: a shared reference
You're not really using wxTimer or wxThread, but classes that inherit from them (at least wxThread::Entry() is pure virtual). In the case of wxTimer you could change the owner to a different wxEvtHandler that will receive the event, but you still need to provide an implementation.
So you can have
class MyTimer : public wxTimer {
public:
    void Notify() {
        // Your code goes here,
        // but it can access data through the local reference
    }
    void setData(const std::shared_ptr<SharedData> &data) {
        mLocalReference = data;
    }
private:
    std::shared_ptr<SharedData> mLocalReference;
};
That will need to be set:
MyTimer timer;
timer.setData(myData);
timer.StartOnce(10000); // wake me up in 10 secs.
Similarly for the Thread
class MyThread : public wxThread {
public:
    ExitCode Entry() {
        // Your code goes here,
        // but it can access data through the local reference
        return 0;
    }
    void setData(const std::shared_ptr<SharedData> &data) {
        mLocalReference = data;
    }
private:
    std::shared_ptr<SharedData> mLocalReference;
};
That will need to be set:
MyThread *thread = new MyThread();
thread->setData(myData);
thread->Run(); // the thread starts running.
Option 2: Using a singleton.
Sometimes you cannot modify MyThread or MyTimer... or it is too difficult to route the reference to myData to the thread or timer instances... or you're just too lazy or too busy to bother (beware of your technical debt!!!)
We can tweak the SharedData into:
struct SharedData {
    std::string var1;
    int var2;

    static SharedData *instance() {
        // NOTE that some mutex is needed here
        // to prevent the case where the first initialization
        // is executed simultaneously from different threads,
        // allocating two objects and leaking one of them.
        if (!sInstance) {
            sInstance = new SharedData();
        }
        return sInstance;
    }
private:
    SharedData() : var2(0) { }    // Note we've made the constructor private
    static SharedData *sInstance; // defined (and initialized to nullptr) in a .cpp file
};
This object (because it only allows the creation of a single instance) can be accessed from either MyTimer::Notify() or MyThread::Entry() with:
SharedData::instance()->var1;
Interlude: why Singletons are evil
(or why the easy solution might bite you in the future).
What is so bad about singletons?
Why Singletons are Evil
Singletons Are Evil
My main reasons are:
There's one and only one instance... You might think that you only need one now, but who knows what the future will hold; you've taken an easy solution to a coding problem that has far-reaching architectural consequences and that might be difficult to revert.
It does not allow dependency injection (because the concrete class is used at the point where the object is accessed).
Still, I don't think it is something to avoid completely. It has its uses, it can solve your problem, and it might save your day.
Option 3. Some middle ground.
You could still organize your data around a central repository with methods to access different instances (or different implementations) of the data.
This central repository can be a singleton (if it really is central, common and unique), but it is not the shared data itself; rather, it is what is used to retrieve the shared data, e.g. identified by some ID (an ID that might be easier to share between the threads using option 1).
Something like:
CentralRepository::instance()->getDataById(sharedId)->var1;
EDIT 2: Comments after OP posted (more) code ;)
It seems that your EvtFramePrincipal object will execute the timer callback and will also contain the ClientIdle pointer to a Client object (the thread)... I'd do the following:
Make the Client class contain a Partage attribute (a pointer or a smart pointer).
Make the EvtFramePrincipal contain a Partage attribute (a pointer or a smart pointer). I guess this will have the lifecycle of the whole application, so the Partage object can share that lifecycle too.
Add mutex locking to all the setter and getter methods of the Partage class, since it can be accessed from multiple threads.
After the Client object is instantiated, set the reference to the Partage object that the EvtFramePrincipal contains.
Client can access Partage because we've set its reference when it was created; when the Entry method runs in its thread, it will be able to access it.
EvtFramePrincipal can access the Partage (because it is one of its attributes), so the event handler for the timer event will be able to access it as well.
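A rough sketch of that wiring, assuming the Partage class above grows an internal wxMutex that every setter/getter locks (the member and method names used here are assumptions, and the Client class is adapted from the question's version):

// Client owns a pointer to the shared Partage instance.
class Client : public wxThread
{
public:
    Client() : wxThread(), m_partage(NULL) {}
    void SetPartage(Partage *p) { m_partage = p; }   // called right after construction
    virtual ExitCode Entry()
    {
        // Safe as long as every Partage setter/getter locks its internal mutex
        m_partage->Set_Appel(true);
        return 0;
    }
private:
    Partage *m_partage;
};

// In the EvtFramePrincipal constructor:
//   m_partage  = new Partage();          // shared state, lives as long as the frame
//   ClientIdle = new Client();
//   ClientIdle->SetPartage(m_partage);   // both threads now see the same object
//   ClientIdle->Run();
// OnTimer() then reads/writes through m_partage as well.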

Async constructor in C++11

Sometimes I need to create objects whose constructors take very long time to execute.
This leads to responsiveness problems in UI applications.
So I was wondering if it could be sensible to write a constructor designed to be called asynchronously, by passing a callback to it which will alert me when the object is available.
Below is a sample code:
#include <chrono>
#include <condition_variable>
#include <functional>
#include <future>
#include <iostream>
#include <mutex>
#include <queue>
#include <thread>

class C
{
public:
    // Standard ctor
    C()
    {
        init();
    }

    // Designed for async ctor
    C(std::function<void(void)> callback)
    {
        init();
        callback();
    }

private:
    void init() // Should be replaced by a delegating constructor (not yet supported by my compiler)
    {
        std::chrono::seconds s(2);
        std::this_thread::sleep_for(s);
        std::cout << "Object created" << std::endl;
    }
};
int main(int argc, char* argv[])
{
    auto msgQueue = std::queue<char>();
    std::mutex m;
    std::condition_variable cv;
    auto notified = false;

    // Some parallel task
    auto f = []()
    {
        return 42;
    };

    // Callback to be called when the ctor ends
    auto callback = [&m, &cv, &notified, &msgQueue]()
    {
        std::cout << "The object you were waiting for is now available" << std::endl;
        // Notify that the ctor has ended
        std::unique_lock<std::mutex> _(m);
        msgQueue.push('x');
        notified = true;
        cv.notify_one();
    };

    // Start first task
    auto ans = std::async(std::launch::async, f);

    // Start second task (ctor)
    std::async(std::launch::async, [&callback](){ auto c = C(callback); });

    std::cout << "The answer is " << ans.get() << std::endl;

    // Mimic typical UI message queue
    auto done = false;
    while(!done)
    {
        std::unique_lock<std::mutex> lock(m);
        while(!notified)
        {
            cv.wait(lock);
        }
        while(!msgQueue.empty())
        {
            auto msg = msgQueue.front();
            msgQueue.pop();
            if(msg == 'x')
            {
                done = true;
            }
        }
    }

    std::cout << "Press a key to exit..." << std::endl;
    getchar();
    return 0;
}
Do you see any drawback in this design? Or do you know if there is a better approach?
EDIT
Following the hints in JoergB's answer, I tried to write a factory which bears the responsibility of creating an object in a sync or async way:
template <typename T, typename... Args>
class FutureFactory
{
public:
    typedef std::unique_ptr<T> pT;
    typedef std::future<pT> future_pT;
    typedef std::function<void(pT)> callback_pT;

public:
    static pT create_sync(Args... params)
    {
        return pT(new T(params...));
    }

    static future_pT create_async_byFuture(Args... params)
    {
        return std::async(std::launch::async, &FutureFactory<T, Args...>::create_sync, params...);
    }

    static void create_async_byCallback(callback_pT cb, Args... params)
    {
        std::async(std::launch::async, &FutureFactory<T, Args...>::manage_async_byCallback, cb, params...);
    }

private:
    FutureFactory() {}

    static void manage_async_byCallback(callback_pT cb, Args... params)
    {
        auto ptr = FutureFactory<T, Args...>::create_sync(params...);
        cb(std::move(ptr));
    }
};
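For example, the factory can be used like this (a quick sketch; Widget is just a stand-in for any class with a slow constructor):

struct Widget { Widget(int, double) { /* slow work */ } };

void demo()
{
    // Synchronous creation
    auto now = FutureFactory<Widget, int, double>::create_sync(1, 2.0);

    // Asynchronous creation, result retrieved through a future
    auto later = FutureFactory<Widget, int, double>::create_async_byFuture(1, 2.0);
    // ... do other work ...
    std::unique_ptr<Widget> ready = later.get();
}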
Your design seems very intrusive. I don't see a reason why the class would have to be aware of the callback.
Something like:
future<unique_ptr<C>> constructedObject = async(launchopt, [&callback]() {
    unique_ptr<C> obj(new C());
    callback();
    return obj;
});
or simply
future<unique_ptr<C>> constructedObject = async(launchopt, [&cv]() {
    unique_ptr<C> ptr(new C());
    cv.notify_all(); // or _one();
    return ptr;
});
or just (without a future but a callback taking an argument):
async(launchopt, [&callback]() {
    unique_ptr<C> ptr(new C());
    callback(std::move(ptr));
});
should do just as well, shouldn't it? These also make sure that the callback is only ever called when a complete object is constructed (when deriving from C).
It shouldn't be too much effort to make any of these into a generic async_construct template.
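One possible shape for such a template (a minimal sketch; the name async_construct is not standard, and the arguments are captured by value for simplicity):

#include <future>
#include <memory>

template <typename T, typename... Args>
std::future<std::unique_ptr<T>> async_construct(Args... args)
{
    return std::async(std::launch::async, [args...]() {
        return std::unique_ptr<T>(new T(args...));
    });
}

// Usage: auto futureC = async_construct<C>(callback); ... auto c = futureC.get();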
Encapsulate your problem. Don't think about asynchronous constructors, just asynchronous methods which encapsulate your object creation.
It looks like you should be using std::future rather than constructing a message queue. std::future is a template class that holds a value and can retrieve the value with blocking, with a timeout, or by polling:
std::future<int> fut = std::move(ans);
fut.wait();
auto result = fut.get();
I will suggest a hack using a thread and a signal handler.
1) Spawn a thread to do the task of the constructor. Let's call it the child thread. This thread will initialise the values in your class.
2) After the construction has completed, the child thread uses the kill system call to send a signal to the parent thread (hint: SIGUSR1). The main thread, on receiving the ASYNCHRONOUS handler call, will know that the required object has been created.
Of course, you can use fields like an object id to differentiate between multiple objects under creation.
My advice...
Think carefully about why you need to do such a long operation in a constructor.
I often find it is better to split the creation of an object into three parts:
a) allocation
b) construction
c) initialization
For small objects it makes sense to do all three in one "new" operation. However, for heavyweight objects you really want to separate the stages. Figure out how many resources you need and allocate them. Construct the object in that memory into a valid, but empty, state.
Then... do your long load operation into the already valid, but empty object.
I think I got this pattern a long time ago from reading a book (Scott Meyers perhaps?), but I highly recommend it; it solves all sorts of problems. For example, if your object is a graphic object, you figure out how much memory it needs. If that fails, show the user an error as soon as possible. If not, mark the object as not ready yet. Then you can show it on screen, the user can also manipulate it, etc.
Initialize the object with an asynchronous file load, when it completes, set a flag in the object that says "loaded". When your update function sees it is loaded, it can draw the graphic.
It also REALLY helps with problems like construction order, where object A needs object B. You suddenly find you need to make A before B, oh no!! Simple: make an empty B and pass it as a reference; as long as A is clever enough to know that B is empty, and to wait until it is not before using it, all is well.
And, not forgetting, you can do the opposite on destruction:
Mark your object as empty first, so nothing new uses it (de-initialisation)
Free the resources (destruction)
Then free the memory (deallocation)
The same benefits apply.
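As a sketch of that three-phase idea (the GraphicObject class and its members are made up purely for illustration):

#include <atomic>
#include <future>
#include <string>
#include <vector>

class GraphicObject
{
    std::vector<unsigned char> pixels_;   // (a) allocation: buffer reserved up front
    std::atomic<bool> loaded_{false};     // (b) construction: valid but empty state
    std::future<void> load_;
public:
    explicit GraphicObject(std::size_t bytes) : pixels_(bytes) {}

    // (c) initialization: fill the already-valid object asynchronously
    void beginLoad(std::string path)
    {
        load_ = std::async(std::launch::async, [this, path]() {
            // ... read `path` into pixels_ ...
            loaded_.store(true);          // flip the "loaded" flag when done
        });
    }

    bool ready() const { return loaded_.load(); }  // draw only when ready
};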
Having partially initialized objects could lead to bugs or unnecessarily complicated code, since you would have to check whether they're initialized or not.
I'd recommend using separate threads for UI and processing, and then use message queues for communicating between threads. Leave the UI thread for just handling the UI, which will then be more responsive all the time.
Place a message requesting creation of the object into the queue that the worker thread waits on, and then after the object has been created, the worker can put a message into UI queue indicating that the object is now ready.
Here's yet another pattern for consideration. It takes advantage of the fact that calling wait() on a future<> does not invalidate it. So, as long as you never call get(), you're safe. This pattern's trade-off is that you incur the onerous overhead of calling wait() whenever a member function gets called.
#include <chrono>
#include <future>
#include <iostream>
#include <thread>
using namespace std;

class C
{
    future<void> ready_;
public:
    C()
    {
        ready_ = async([this]
        {
            this_thread::sleep_for(chrono::seconds(3));
            cout << "I'm ready now." << endl;
        });
    }

    // Every member function must start with ready_.wait(), even the destructor.
    ~C() { ready_.wait(); }

    void foo()
    {
        ready_.wait();
        cout << __FUNCTION__ << endl;
    }
};
int main()
{
    C c;
    c.foo();
    return 0;
}