Simplest way to make std::thread exception safe - c++

The std::thread class is inherently exception-unsafe, since its destructor calls std::terminate if the thread is still joinable.
std::thread t( function );
// do some work
// (might throw!)
t.join();
You could, of course, put everything in between construction and join() in a try-catch block, but this can get tedious and error-prone if you know you want to join or detach no matter what happens.
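For example, the manual version looks something like this (a sketch of the try/catch approach just described):
std::thread t( function );
try {
    // do some work
    // (might throw!)
}
catch (...) {
    t.join();   // make sure the thread is joined before the exception propagates
    throw;
}
t.join();
Repeating this around every thread quickly becomes noise, hence the wrappers below.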
So I was wondering how one would go about writing the simplest possible wrapper around it, one that would also support other hypothetical types of threads, for instance boost::thread or something completely different, as long as it has joinable(), join() and detach() methods. Here's how far I've got:
// handles threads safely
// Acts the same as the underlying thread type, except during destruction.
// If joinable, will call join (and block!) during destruction.
// Keep in mind that any exception handling will get delayed because of that;
// it needs to wait for the thread to finish its work first.
template <class UNDERLYING_THREAD = std::thread>
class scoped_thread : public UNDERLYING_THREAD
{
public:
    typedef UNDERLYING_THREAD thread_type;

    using thread_type::thread_type;

    scoped_thread()
        : thread_type() {}

    scoped_thread( scoped_thread && other )
        : thread_type( std::move( other ) ) {}

    scoped_thread & operator = ( scoped_thread && other )
    {
        thread_type & ref = *this;
        ref = std::move( other );
        return *this;
    }

    ~scoped_thread()
    {
        if( thread_type::joinable() )
            thread_type::join();
    }
};
// handles autonomous threads safely
// Acts the same as the underlying thread type, except during destruction.
// If joinable, will call detach during destruction.
// Make sure it doesn't use any scoped resources since the thread can remain
// running after they go out of scope!
template <class UNDERLYING_THREAD = std::thread>
class free_thread
{
    // same as scoped_thread, except the destructor calls detach();
};
This seems to work, but I'm wondering if there is a way to avoid manually defining the constructors and the move assignment operator. Probably the biggest issue I noticed is that compilation will fail if you supply a class with a deleted move constructor as a template argument.
Do you have any suggestions about how to possibly avoid this? Or are there other, bigger issues with this approach?

If you want proper exception handling with asynchronous tasks, maybe you should use std::future rather than std::thread. Instead of using join(), you'd use get() on the future, and if the asynchronous task threw an exception, get() will rethrow that same exception.
A simple example:
#include <future>
#include <iostream>
#include <stdexcept>

int my_future_task(int my_arg) {
    throw std::runtime_error("BAD STUFF!");
    return my_arg;
}

int main(int argc, char* argv[]) {
    auto my_future = std::async(my_future_task, 42);
    try {
        my_future.get();
    }
    catch (std::exception &e) {
        std::cout << "Caught exception: " << e.what() << std::endl;
    }
    return 0;
}
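If you want to manage the std::thread yourself instead of relying on std::async, std::packaged_task gives the same exception propagation through a future. A minimal sketch mirroring the example above (the task name here is illustrative):
#include <future>
#include <iostream>
#include <stdexcept>
#include <thread>

int my_task(int my_arg) {
    throw std::runtime_error("BAD STUFF!");
    return my_arg;
}

int main() {
    std::packaged_task<int(int)> task(my_task);
    std::future<int> fut = task.get_future();
    std::thread t(std::move(task), 42);
    try {
        fut.get(); // rethrows the exception thrown inside the thread
    }
    catch (const std::exception &e) {
        std::cout << "Caught exception: " << e.what() << std::endl;
    }
    t.join(); // the thread itself still has to be joined
    return 0;
}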
See also:
std::future::get
std::future_error
Exception propagation and std::future

Related

C++20 stopping a detached std::jthread using an std::stop_token

In C++20 std::jthread was introduced as a safer version of std::thread; as far as I understand, std::jthread cleans up after itself (requests a stop and joins) when it is destroyed.
Also, the concept of cooperative cancellation was introduced: an std::jthread manages an std::stop_source that controls the stop state of the underlying thread, and this std::stop_source exposes an std::stop_token that outsiders can use to read the state of the thread sanely.
What I have is something like this.
class foo {
    std::stop_token stok;
    std::stop_source ssource;
public:
    void start_foo() {
        // ...
        auto calculation = [this](std::stop_token inner_tok) {
            // ... (*this is used here)
            while (!inner_tok.stop_requested()) {
                // stuff
            }
        };
        auto thread = std::jthread(calculation);
        stok = thread.get_stop_token();
        ssource = thread.get_stop_source();
        thread.detach(); // ??
    }
    void stop_foo() {
        if (ssource.stop_possible()) {
            ssource.request_stop();
        }
    }
    ~foo() {
        stop_foo();
    }
};
Note foo is managed by a std::shared_ptr, and there is no public constructor.
Somewhere along the line, another thread can call foo::stop_foo() on a possibly detached thread.
Is what I am doing safe?
Also, when detaching a thread, the C++ handle is no longer associated with the running thread, and the OS manages it, but does the thread keep receiving stop notifications from the std::stop_source?
Is there a better way to achieve what I need? In MSVC, this doesn't seem to raise any exceptions or halt program execution, and I've done a lot of testing to verify this.
So, is this solution portable?
What you wrote is potentially unsafe if the thread accesses this after the foo has been destroyed. It's also a bit convoluted. A simpler approach would just be to stick the jthread in the structure...
class foo {
    std::jthread jthr;
public:
    void start_foo() {
        // ...
        jthr = std::jthread([this](std::stop_token inner_tok) {
            // ... (*this is used here)
            while (!inner_tok.stop_requested()) {
                // stuff
            }
        });
    }
    void stop_foo() {
        jthr.request_stop();
    }
    ~foo() {
        stop_foo();
        // jthr.detach(); // this is a bad idea
    }
};
To match the semantics of your code, you would uncomment the jthr.detach() in the destructor, but this is actually a bad idea since then you could end up destroying foo while the thread is still accessing it. The code I wrote above is safe, but obviously whichever thread drops the last reference to the foo will have to wait for the jthread to exit. If that's really intolerable, then maybe you want to change the API to stick a shared_ptr in the thread itself, so that the thread can destroy foo if it is still running after the last external reference is dropped.
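A rough sketch of that last idea: foo keeps only a stop_source, and the detached thread owns a shared_ptr that keeps the object alive. This uses enable_shared_from_this and a plain std::thread rather than a member jthread, and the names are illustrative, not a drop-in for the code above:
#include <memory>
#include <stop_token>
#include <thread>

class foo : public std::enable_shared_from_this<foo> {
    std::stop_source ssource;
public:
    void start_foo() {
        auto self = shared_from_this();          // the thread co-owns *this
        std::thread([self, tok = ssource.get_token()] {
            while (!tok.stop_requested()) {
                // ... work on *self; foo cannot be destroyed while this runs
            }
        }).detach();
    }
    void stop_foo() { ssource.request_stop(); }
    ~foo() { stop_foo(); }
};
Here the last external shared_ptr can be dropped without waiting; the object is only destroyed once the worker thread releases its own copy (possibly on the worker thread itself).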

How can I pass an object's member variable (field) as a reference to a thread safely?

Let's say I start a new thread from a class method and pass "this" as a parameter to the lambda of the new thread. If the object is destroyed before the thread uses something from "this", then it's probably undefined behavior.
As a simple example:
#include <thread>
#include <iostream>

class Foo
{
public:
    Foo() : m_bar{123} {}

    void test_1()
    {
        std::thread thd = std::thread{[this]()
        {
            std::cout << m_bar << std::endl;
        }};
        thd.detach();
    }

    void test_2()
    {
        test_2(m_bar);
    }

    void test_2(int & bar)
    {
        std::thread thd = std::thread{[this, &bar]()
        {
            std::cout << bar << std::endl;
        }};
        thd.detach();
    }

private:
    int m_bar;
};

int main()
{
    // 1)
    std::thread thd_outer = std::thread{[]()
    {
        Foo foo;
        foo.test_1();
    }};
    thd_outer.detach();

    // 2)
    {
        Foo foo;
        foo.test_1();
    }

    std::cin.get();
}
The outcomes
(For the original project, I have to use VS19, so the exception messages are originally coming from that IDE.)
Starting from thd_outer, test_1 and test_2 are either throwing an exception (Exception thrown: read access violation.) or printing 0 (instead of 123).
Without thd_outer they seem correct.
I've tried the same code with GCC under Linux, and they always print 123.
Which one is the correct behavior? I think it is UB, and in that case all are "correct". If it's not undefined, then why are they different?
I would expect 123 or garbage always because either the object is still valid (123) or was valid but destroyed and a) the memory is not reused yet (123) or reused (garbage). An exception is reasonable but what exactly is throwing it (VS only)?
I've come up with a possible solution to the problem:
#include <vector> // needed in addition to the includes above

class Foo2
{
public:
    Foo2() : m_bar{123} {}

    ~Foo2()
    {
        for (std::thread & thd : threads)
        {
            try
            {
                thd.join();
            }
            catch (const std::system_error & e)
            {
                // handling
            }
        }
    }

    void test_1()
    {
        std::thread thd = std::thread{[this]()
        {
            std::cout << m_bar << std::endl;
        }};
        threads.push_back(std::move(thd));
    }

private:
    int m_bar;
    std::vector<std::thread> threads;
};
Is it a safe solution, without undefined behaviors? Seems like it's working. Is there a better and/or more "standardized" way?
Forget about member variables or classes. The question then is: how do I make sure that a thread does not use a reference to an object that has been destroyed? Two approaches exist that both effectively ensure the thread ends before the object is destroyed, plus a third one that's more complicated.
Extend the object lifetime to that of the thread. The easiest way is to use dynamic allocation of the object. In addition, to avoid memory leaks, use smart pointers like std::shared_ptr.
Limit the thread runtime to that of the object. Before destroying the object, simply join the thread.
Tell the thread to let go of the object before destroying it. I'll only sketch this, because it's the most complicated way, but if you somehow tell the thread that it must not use the object any more, you can then destroy the object without adverse side effects.
That said, one piece of advice: you are sharing an object between (at least) two threads. Accessing it requires synchronization, which is a complex topic in and of itself.
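To illustrate approach 1 concretely (a minimal sketch; Foo stands in for any class whose members the thread touches):
auto obj = std::make_shared<Foo>();
std::thread thd([obj]() {
    // the captured shared_ptr keeps *obj alive for as long as this thread
    // runs, even if every other reference has already been dropped
    // ... use *obj here ...
});
thd.detach();
Approach 2 is essentially what your Foo2 already does: keep the std::thread objects around and join them in the destructor, so the object outlives every thread that uses it.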

How to start an empty thread using c++ [duplicate]

I'm getting into C++11 threads and have run into a problem.
I want to declare a thread variable as global and start it later.
However all the examples I've seen seem to start the thread immediately for example
thread t(doSomething);
What I want is
thread t;
and start the thread later.
What I've tried is
if(!isThreadRunning)
{
thread t(readTable);
}
but now t has block scope. So I want to declare t and then start the thread later so that t is accessible to other functions.
Thanks for any help.
std::thread's default constructor instantiates a std::thread without starting or representing any actual thread.
std::thread t;
The assignment operator moves the state of a thread object, and sets the assigned-from thread object to its default-initialized state:
t = std::thread(/* new thread code goes here */);
This first constructs a temporary thread object representing a new thread, transfers the new thread representation into the existing thread object that has a default state, and sets the temporary thread object's state to the default state that does not represent any running thread. Then the temporary thread object is destroyed, doing nothing.
Here's an example:
#include <iostream>
#include <thread>

void thread_func(const int i) {
    std::cout << "hello from thread: " << i << std::endl;
}

int main() {
    std::thread t;
    std::cout << "t exists" << std::endl;

    t = std::thread{ thread_func, 7 };
    t.join();

    std::cout << "done!" << std::endl;
}
As antred says in his answer, you can use a condition variable to make the thread wait at the beginning of its routine.
Scott Meyers in his book “Effective Modern C++” (in “Item 39: Consider void futures for one-shot event communication”) proposes using a void future instead of lower-level entities (a boolean flag, a condition variable and a mutex). So the problem can be solved like this:
auto thread_starter = std::promise<void>();
auto thread = std::thread([starter_future = thread_starter.get_future()]() mutable {
    starter_future.wait(); //wait before starting actual work
    …; //do actual work
});
…; //you can do something, thread is like “paused” here
thread_starter.set_value(); //“start” the thread (break its initial waiting)
Scott Meyers also warns about exceptions in the second … (marked by the you can do something, thread is like “paused” here comment). If thread_starter.set_value() is never called for some reason (for example, due to an exception thrown in the second …), the thread will wait forever, and any attempt to join it would result in deadlock.
As both ways (condvar-based and future-based) have hidden pitfalls, and the first way (condvar-based) needs some boilerplate code, I propose writing a wrapper class around std::thread. Its interface should be similar to that of std::thread (except that its instances should be assignable from other instances of the same class, not from std::thread), but contain an additional void start() method.
Future-based thread-wrapper
class initially_suspended_thread {
    std::promise<bool> starter;
    std::thread impl;
public:
    template<class F, class ...Args>
    explicit initially_suspended_thread(F &&f, Args &&...args):
        starter(),
        impl([
            starter_future = starter.get_future(),
            routine = std::bind(std::forward<F>(f), std::forward<Args>(args)...)
        ]() mutable {if (starter_future.get()) routine();})
    {}

    void start() {starter.set_value(true);}

    ~initially_suspended_thread() {
        try {starter.set_value(false);}
        catch (const std::future_error &exc) {
            if (exc.code() != std::future_errc::promise_already_satisfied) throw;
            return; //already “started”, no need to do anything
        }
        impl.join(); //auto-join not-yet-“started” threads
    }

    …; //other methods, trivial
};
Condvar-based thread-wrapper
class initially_suspended_thread {
    std::mutex state_mutex;
    enum {INITIAL, STARTED, ABORTED} state;
    std::condition_variable state_condvar;
    std::thread impl;
public:
    template<class F, class ...Args>
    explicit initially_suspended_thread(F &&f, Args &&...args):
        state_mutex(), state(INITIAL), state_condvar(),
        impl([
            &state_mutex = state_mutex, &state = state, &state_condvar = state_condvar,
            routine = std::bind(std::forward<F>(f), std::forward<Args>(args)...)
        ]() mutable {
            {
                std::unique_lock state_mutex_lock(state_mutex);
                state_condvar.wait(
                    state_mutex_lock,
                    [&state]() {return state != INITIAL;}
                );
            }
            if (state == STARTED) routine();
        })
    {}

    void start() {
        {
            std::lock_guard state_mutex_lock(state_mutex);
            state = STARTED;
        }
        state_condvar.notify_one();
    }

    ~initially_suspended_thread() {
        {
            std::lock_guard state_mutex_lock(state_mutex);
            if (state == STARTED) return; //already “started”, no need to do anything
            state = ABORTED;
        }
        impl.join(); //auto-join not-yet-“started” threads
    }

    …; //other methods, trivial
};
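Usage would be the same for either wrapper; a short sketch (do_work is a hypothetical function, and join() would be among the omitted "trivial" methods):
void do_work(int id);

int main() {
    initially_suspended_thread t(do_work, 7); // the thread exists but is waiting
    // ... do other setup while the thread is effectively "paused" ...
    t.start();                                // now do_work(7) actually runs
    t.join();                                 // one of the omitted "trivial" methods,
                                              // forwarding to impl.join()
}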
There is no "standard" of creating a thread "suspended" which I assume is what you wanted to do with the C++ thread library. Because it is not supported on every platform that has threads, it is not there in the C++ API.
You might want to create a class with all the data it is required but not actually run your thread function. This is not the same as creating the thread but may be what you want. If so, create that, then later bind the object and its operator() or start() function or whatever to the thread.
You might want the thread id for your thread. That means you do actually need to start the thread function. However it can start by waiting on a condition variable. You then signal or broadcast to that condition variable later when you want it to continue running. Of course you can have the function check a condition after it resumes in case you might have decided to close it and not run it after all (in which case it will just return instantly).
You might want a std::thread object with no function. You can do that and attach it to a function later to run that function in a new thread.
I would give the thread a condition variable and a boolean called startRunning (initially set to false). Effectively you would start the thread immediately upon creation, but the first thing it would do is suspend itself (using the condition_variable) and then only begin processing its actual task when the condition_variable is signaled from outside (and the startRunning flag set to true).
EDIT: PSEUDO CODE:
// in your worker thread
{
    std::unique_lock<std::mutex> l( theMutex );
    while ( ! startRunning )
    {
        cond_var.wait( l );
    }
}
// now start processing task

// in your main thread (after creating the worker thread)
{
    std::lock_guard<std::mutex> l( theMutex );
    startRunning = true;
    cond_var.notify_one();
}
EDIT #2: In the above code, the variables theMutex, startRunning and cond_var must be accessible by both threads. Whether you achieve that by making them globals or by encapsulating them in a struct / class instance is up to you.
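For the encapsulated variant, something along these lines would do (a minimal sketch; the struct and method names are made up):
#include <condition_variable>
#include <mutex>

struct StartGate {
    std::mutex theMutex;
    std::condition_variable cond_var;
    bool startRunning = false;

    void waitForStart() {                 // called by the worker thread, first thing
        std::unique_lock<std::mutex> l(theMutex);
        cond_var.wait(l, [this] { return startRunning; });
    }

    void allowStart() {                   // called by the main thread
        {
            std::lock_guard<std::mutex> l(theMutex);
            startRunning = true;
        }
        cond_var.notify_one();
    }
};
The worker thread calls waitForStart() as its first action, and the main thread calls allowStart() whenever it wants the work to actually begin.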
The m_grabber member, as first declared in the class, runs nothing. In launch_grabber we assign it a new thread object built from a lambda, and the thread running the lambda then executes within the context of the source class.
class source {
    ...
    std::thread m_grabber;
    std::atomic<bool> m_active; // atomic: written by the grabber thread, read/written by others
    ...
};

bool source::launch_grabber() {
    // start grabber
    m_grabber = std::thread{
        [&] () {
            m_active = true;
            while (true)
            {
                if (!m_active)
                    break;
                // TODO: something in new thread
            }
        }
    };
    m_grabber.detach();
    return true;
}
You could use the singleton pattern, or I would rather say antipattern.
Inside the singleton you would have a std::thread object encapsulated. Upon first access to the singleton your thread will be created and started.
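A minimal sketch of what that could look like (a Meyers singleton; the class name and the worker body are illustrative):
#include <thread>

class BackgroundWorker {
public:
    static BackgroundWorker & instance() {
        static BackgroundWorker w;   // constructed, and thread started, on first access
        return w;
    }
private:
    BackgroundWorker() : t_([] { /* readTable(); or whatever the thread should do */ }) {}
    ~BackgroundWorker() { if (t_.joinable()) t_.join(); }

    std::thread t_;
};
The thread only starts the first time instance() is called, which is what makes this fit the "declare now, start later" requirement; the usual caveats about singletons (hidden global state, shutdown order) apply.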

Best way to handle multi-thread cleanup

I have a server-type application, and I have an issue with making sure threads aren't destroyed before they complete. The code below pretty much represents my server; the cleanup is required to prevent a build-up of dead threads in the list.
using namespace std;
class A {
public:
    void doSomethingThreaded(function<void()> cleanupFunction, function<bool()> getStopFlag) {
        somethingThread = thread([cleanupFunction, getStopFlag, this]() {
            doSomething(getStopFlag);
            cleanupFunction();
        });
    }
private:
    void doSomething(function<bool()> getStopFlag);
    thread somethingThread;
    ...
};

class B {
public:
    void runServer();

    void stop() {
        stopFlag = true;
        waitForListToBeEmpty();
    }
private:
    void waitForListToBeEmpty() { ... };

    void handleAccept(...) {
        shared_ptr<A> newClient(new A());
        {
            unique_lock<mutex> lock(listMutex);
            clientData.push_back(newClient);
        }
        newClient->doSomethingThreaded(bind(&B::cleanup, this, newClient), [this]() {
            return stopFlag.load();
        });
    }

    void cleanup(shared_ptr<A> data) {
        unique_lock<mutex> lock(listMutex);
        clientData.remove(data);
    }

    list<shared_ptr<A>> clientData;
    mutex listMutex;
    atomic<bool> stopFlag;
};
The issue seems to be that the destructors run in the wrong order, i.e. the shared_ptr is destroyed when the thread's function completes, meaning the 'A' object is deleted before the thread completes, causing havoc when the thread's destructor is called.
i.e.
Call cleanup function
All references to this (i.e. an A object) removed, so call destructor (including this thread's destructor)
Call this thread's destructor again -- OH NOES!
I've looked at alternatives, such as maintaining a 'to be removed' list which is periodically used to clean the primary list by another thread, or using a time-delayed deleter function for the shared pointers, but both of these seem a bit clunky and could have race conditions.
Anyone know of a good way to do this? I can't see an easy way of refactoring it to work ok.
Are the threads joinable or detached? I don't see any detach, which means that destructing the thread object without having joined it is a fatal error. You might try simply detaching it, although this can make a clean shutdown somewhat complex. (Of course, for a lot of servers, there should never be a shutdown anyway.) Otherwise: what I've done in the past is to create a reaper thread; a thread which does nothing but join any outstanding threads, to clean up after them.
I might add that this is a good example of a case where shared_ptr is not appropriate. You want full control over when the delete occurs; if you detach, you can do it in the clean up function (but quite frankly, just using delete this; at the end of the lambda in A::doSomethingThreaded seems more readable); otherwise, you do it after you've joined, in the reaper thread.
EDIT:
For the reaper thread, something like the following should work:
class ReaperQueue
{
    std::deque<A*> myQueue;
    std::mutex myMutex;
    std::condition_variable myCond;

    A* getOne()
    {
        std::unique_lock<std::mutex> lock( myMutex );
        myCond.wait( lock, [&]{ return !myQueue.empty(); } );
        A* results = myQueue.front();
        myQueue.pop_front();
        return results;
    }
public:
    void readyToReap( A* finished_thread )
    {
        std::unique_lock<std::mutex> lock( myMutex );
        myQueue.push_back( finished_thread );
        myCond.notify_all();
    }

    void reaperThread()
    {
        for ( ; ; )
        {
            A* mine = getOne();
            mine->somethingThread.join();
            delete mine;
        }
    }
};
(Warning: I've not tested this, and I've tried to use the C++11 functionality. I've only actually implemented it, in the past, using pthreads, so there could be some errors. The basic principles should hold, however.)
To use, create an instance, then start a thread calling reaperThread on it. In the cleanup of each thread, call readyToReap.
To support a clean shutdown, you may want to use two queues: you insert each thread into the first, as it is created, and then move it from the first to the second (which would correspond to myQueue, above) in readyToReap. To shut down, you then wait until both queues are empty (not starting any new threads in this interval, of course).
The issue is that, since you manage A via shared pointers, the this pointer captured by the thread lambda really needs to be a shared pointer rather than a raw pointer to prevent it from becoming dangling. The problem is that there's no easy way to create a shared_ptr that shares ownership with the existing ones when all you have is the raw pointer.
One way to get around this is to use shared_from_this:
class A : public enable_shared_from_this<A> {
public:
    void doSomethingThreaded(function<void()> cleanupFunction, function<bool()> getStopFlag) {
        somethingThread = thread([cleanupFunction, getStopFlag, this]() {
            shared_ptr<A> temp = shared_from_this();
            doSomething(getStopFlag);
            cleanupFunction();
        });
This creates an extra shared_ptr to the A object that keeps it alive until the thread finishes.
Note that you still have the problem with join/detach that James Kanze identified -- Every thread must have either join or detach called on it exactly once before it is destroyed. You can fulfill that requirement by adding a detach call to the thread lambda if you never care about the thread exit value.
You also have potential for problems if doSomethingThreaded is called multiple times on a single A object...
For those who are interested, I took a bit of both answers given (i.e. James' detach suggestion, and Chris' suggestion about shared_ptrs).
My resultant code looks like this and seems neater and doesn't cause a crash on shutdown or client disconnect:
using namespace std;
class A {
public:
    void doSomething(function<bool()> getStopFlag) {
        ...
    }
private:
    ...
};

class B {
public:
    void runServer();

    void stop() {
        stopFlag = true;
        waitForListToBeEmpty();
    }
private:
    void waitForListToBeEmpty() { ... };

    void handleAccept(...) {
        shared_ptr<A> newClient(new A());
        {
            unique_lock<mutex> lock(listMutex);
            clientData.push_back(newClient);
        }
        thread clientThread([this, newClient]() {
            // Capture the shared_ptr until thread over and done with.
            newClient->doSomething([this]() {
                return stopFlag.load();
            });
            cleanup(newClient);
        });
        // Detach to remove the need to store these threads until their completion.
        clientThread.detach();
    }

    void cleanup(shared_ptr<A> data) {
        unique_lock<mutex> lock(listMutex);
        clientData.remove(data);
    }

    list<shared_ptr<A>> clientData; // Can remove this if you don't
                                    // need to connect with your clients.
                                    // However, you'd need to make sure this
                                    // didn't get deallocated before all clients
                                    // finished as they reference the boolean stopFlag
                                    // OR make it a shared_ptr to an atomic boolean
    mutex listMutex;
    atomic<bool> stopFlag;
};

Async constructor in C++11

Sometimes I need to create objects whose constructors take a very long time to execute.
This leads to responsiveness problems in UI applications.
So I was wondering if it could be sensible to write a constructor designed to be called asynchronously, by passing a callback to it which will alert me when the object is available.
Below is a sample code:
#include <chrono>
#include <condition_variable>
#include <functional>
#include <future>
#include <iostream>
#include <mutex>
#include <queue>
#include <thread>

class C
{
public:
    // Standard ctor
    C()
    {
        init();
    }

    // Designed for async ctor
    C(std::function<void(void)> callback)
    {
        init();
        callback();
    }

private:
    void init() // Should be replaced by a delegating constructor (not yet supported by my compiler)
    {
        std::chrono::seconds s(2);
        std::this_thread::sleep_for(s);
        std::cout << "Object created" << std::endl;
    }
};
int main(int argc, char* argv[])
{
    auto msgQueue = std::queue<char>();
    std::mutex m;
    std::condition_variable cv;
    auto notified = false;

    // Some parallel task
    auto f = []()
    {
        return 42;
    };

    // Callback to be called when the ctor ends
    auto callback = [&m, &cv, &notified, &msgQueue]()
    {
        std::cout << "The object you were waiting for is now available" << std::endl;
        // Notify that the ctor has ended
        std::unique_lock<std::mutex> _(m);
        msgQueue.push('x');
        notified = true;
        cv.notify_one();
    };

    // Start first task
    auto ans = std::async(std::launch::async, f);

    // Start second task (ctor)
    std::async(std::launch::async, [&callback](){ auto c = C(callback); });

    std::cout << "The answer is " << ans.get() << std::endl;

    // Mimic typical UI message queue
    auto done = false;
    while (!done)
    {
        std::unique_lock<std::mutex> lock(m);
        while (!notified)
        {
            cv.wait(lock);
        }
        while (!msgQueue.empty())
        {
            auto msg = msgQueue.front();
            msgQueue.pop();
            if (msg == 'x')
            {
                done = true;
            }
        }
    }

    std::cout << "Press a key to exit..." << std::endl;
    getchar();
    return 0;
}
Do you see any drawback in this design? Or do you know if there is a better approach?
EDIT
Following the hints of JoergB's answer, I tried to write a factory which will bear the responsibility to create an object in a sync or async way:
template <typename T, typename... Args>
class FutureFactory
{
public:
    typedef std::unique_ptr<T> pT;
    typedef std::future<pT> future_pT;
    typedef std::function<void(pT)> callback_pT;

public:
    static pT create_sync(Args... params)
    {
        return pT(new T(params...));
    }

    static future_pT create_async_byFuture(Args... params)
    {
        return std::async(std::launch::async, &FutureFactory<T, Args...>::create_sync, params...);
    }

    static void create_async_byCallback(callback_pT cb, Args... params)
    {
        std::async(std::launch::async, &FutureFactory<T, Args...>::manage_async_byCallback, cb, params...);
    }

private:
    FutureFactory() {}

    static void manage_async_byCallback(callback_pT cb, Args... params)
    {
        auto ptr = FutureFactory<T, Args...>::create_sync(params...);
        cb(std::move(ptr));
    }
};
Your design seems very intrusive. I don't see a reason why the class would have to be aware of the callback.
Something like:
future<unique_ptr<C>> constructedObject = async(launchopt, [&callback]() {
    unique_ptr<C> obj(new C());
    callback();
    return obj;
});
or simply
future<unique_ptr<C>> constructedObject = async(launchopt, [&cv]() {
    unique_ptr<C> ptr(new C());
    cv.notify_all(); // or _one();
    return ptr;
});
or just (without a future but a callback taking an argument):
async(launchopt, [&callback]() {
    unique_ptr<C> ptr(new C());
    callback(std::move(ptr));
});
should do just as well, shouldn't it? These also make sure that the callback is only ever called when a complete object is constructed (when deriving from C).
It shouldn't be too much effort to make any of these into a generic async_construct template.
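A sketch of what such a generic async_construct helper might look like in C++11 (the function names here are made up, not part of the answer above):
#include <future>
#include <memory>

template <typename T, typename... Args>
std::unique_ptr<T> construct_sync(Args... args)
{
    return std::unique_ptr<T>(new T(args...));
}

template <typename T, typename... Args>
std::future<std::unique_ptr<T>> async_construct(Args... args)
{
    return std::async(std::launch::async, &construct_sync<T, Args...>, args...);
}

// Usage:
//   auto futureC = async_construct<C>();
//   ...                       // keep the UI responsive in the meantime
//   auto c = futureC.get();   // blocks only if construction hasn't finished yet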
Encapsulate your problem. Don't think about asynchronous constructors, just asynchronous methods which encapsulate your object creation.
It looks like you should be using std::future rather than constructing a message queue. std::future is a template class that holds a value and can retrieve the value by blocking, with a timeout, or by polling:
std::future<int> fut = std::move(ans);
fut.wait();
auto result = fut.get();
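For the timeout/polling case, wait_for can be used instead of the blocking wait(), for example (a small sketch):
if (fut.wait_for(std::chrono::milliseconds(100)) == std::future_status::ready)
{
    auto result = fut.get(); // the value is available, get() won't block
}
else
{
    // not ready yet: keep the UI responsive and poll again later
}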
I will suggest a hack using a thread and a signal handler.
1) Spawn a thread to do the task of the constructor. Let's call it the child thread. This thread will initialise the values in your class.
2) After the constructor is completed, the child thread uses the kill system call to send a signal to the parent thread. (Hint: SIGUSR1). The main thread, on receiving the ASYNCHRONOUS handler call, will know that the required object has been created.
Of course, you can use fields like an object id to differentiate between multiple objects in creation.
My advice...
Think carefully about why you need to do such a long operation in a constructor.
I often find it is better to split the creation of an object into three parts
a) allocation
b) construction
c) initialization
For small objects it makes sense to do all three in one "new" operation. However, for heavyweight objects you really want to separate the stages. Figure out how much resource you need and allocate it. Construct the object in the memory into a valid, but empty state.
Then... do your long load operation into the already valid, but empty object.
I think I got this pattern a long time ago from reading a book (Scott Meyers perhaps?) but I highly recommend it; it solves all sorts of problems. For example, if your object is a graphic object, you figure out how much memory it needs. If it fails, show the user an error as soon as possible. If not, mark the object as not ready yet. Then you can show it on screen, the user can also manipulate it, etc.
Initialize the object with an asynchronous file load, when it completes, set a flag in the object that says "loaded". When your update function sees it is loaded, it can draw the graphic.
It also REALLY helps with problems like construction order, where object A needs object B. You suddenly find you need to make A before B, oh no!! Simple, make an empty B, and pass it as a reference; as long as A is clever enough to know that B is empty, and to wait until it is not before using it, all is well.
And... Not forgetting.. You can do the opposite on destruction.
Mark your object as empty first, so nothing new uses it (de-initialisation)
Free the resources, (destruction)
Then free the memory (deallocation)
The same benefits apply.
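A rough sketch of the three stages for the graphic-object example described above (all names are illustrative):
#include <atomic>
#include <cstddef>
#include <string>
#include <thread>
#include <vector>

class Graphic
{
public:
    explicit Graphic(std::size_t bytes)          // construct: valid but empty
        : m_pixels(bytes)                        // allocation happens up front
        , m_loaded(false)
    {}

    void loadAsync(const std::string & path)     // initialization, done later
    {
        m_loader = std::thread([this, path]()
        {
            // ... long file load into m_pixels ...
            m_loaded = true;                     // mark the object as ready
        });
    }

    bool loaded() const { return m_loaded; }     // the update/draw code checks this

    ~Graphic()
    {
        if (m_loader.joinable())
            m_loader.join();                     // de-initialize before deallocation
    }

private:
    std::vector<unsigned char> m_pixels;
    std::atomic<bool> m_loaded;
    std::thread m_loader;
};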
Having partially initialized objects could lead to bugs or unnecessarily complicated code, since you would have to check whether they're initialized or not.
I'd recommend using separate threads for UI and processing, and then use message queues for communicating between threads. Leave the UI thread for just handling the UI, which will then be more responsive all the time.
Place a message requesting creation of the object into the queue that the worker thread waits on, and then after the object has been created, the worker can put a message into UI queue indicating that the object is now ready.
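A minimal sketch of that setup, with a thread-safe queue of tasks that the worker thread drains (everything here is illustrative, not a complete framework):
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>

class WorkQueue
{
public:
    void post(std::function<void()> task)            // called by the UI thread
    {
        {
            std::lock_guard<std::mutex> lock(m_mutex);
            m_tasks.push(std::move(task));
        }
        m_cv.notify_one();
    }

    void run()                                        // the worker thread's loop
    {
        for (;;)
        {
            std::function<void()> task;
            {
                std::unique_lock<std::mutex> lock(m_mutex);
                m_cv.wait(lock, [this] { return !m_tasks.empty(); });
                task = std::move(m_tasks.front());
                m_tasks.pop();
            }
            task(); // e.g. create the object, then post a "ready" message
                    // back to the UI thread's own queue
        }
    }

private:
    std::queue<std::function<void()>> m_tasks;
    std::mutex m_mutex;
    std::condition_variable m_cv;
};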
Here's yet another pattern for consideration. It takes advantage of the fact that calling wait() on a future<> does not invalidate it. So, as long you never call get(), you're safe. This pattern's trade-off is that you incur the onerous overhead of calling wait() whenever a member function gets called.
#include <chrono>
#include <future>
#include <iostream>
#include <thread>

using namespace std;

class C
{
    future<void> ready_;
public:
    C()
    {
        ready_ = async([this]
        {
            this_thread::sleep_for(chrono::seconds(3));
            cout << "I'm ready now." << endl;
        });
    }

    // Every member function must start with ready_.wait(), even the destructor.
    ~C() { ready_.wait(); }

    void foo()
    {
        ready_.wait();
        cout << __FUNCTION__ << endl;
    }
};

int main()
{
    C c;
    c.foo();
    return 0;
}