There is one thing about std::thread that I don't understand:
why does the constructor of std::thread take the function to run by rvalue?
I usually want to run a functor with some members in another thread. Like this:
struct Functor
{
    void operator()( /* some args */ )
    {
        /* some code */
    }
    /* some members */
};

void run_thread()
{
    Functor f( /* some data */ );
    std::thread thread(f, /* some args */ );
    /* do something and wait for thread to finish */
}
With the current implementation of std::thread I must make sure my object implements move semantics. I don't get why I cannot pass it by reference.
An extra question: what does it mean to refer to a function by rvalue? A lambda expression?
In your run_thread method, f is an automatic variable. That means at the bottom of the scope f will be destroyed. You claim that you will "wait for the thread to finish", but the compiler/runtime system does not know that! It has to assume that f will be deleted, possibly before the thread that is supposed to call its method has a chance to start.
By copying (or moving) f, the runtime system gains control of the lifetime of its copy of f and can avoid some really nasty, hard-to-debug problems.
std::reference_wrapper exposes the wrapped object's operator(). If you are willing to do the manual lifetime maintenance, std::thread t(std::ref(f)); will run f by reference.
Of course, in your code this induces undefined behaviour if you do not manage lifetimes properly.
Finally, note that a raw std::thread is a poor "client code" tool. std::async is a touch better, but really you want a task queue with packaged_tasks, futures and condition variables. C++11 added enough threading support to write a decent threading system, but it provides primitives, not good "client code" tools.
In a toy program it may be enough.
This is a code listing from a C++ book.
void edit_document(std::string const& filename)
{
    open_document_and_display_gui(filename);
    while (!done_editing())
    {
        user_command cmd = get_user_input();
        if (cmd.type == open_new_document)
        {
            std::string const new_name = get_filename_from_user();
            std::thread t(edit_document, new_name);
            t.detach();
        }
        else
        {
            process_user_input(cmd);
        }
    }
}
As you can see, the edit_document function can itself launch another thread. But the thread entry function takes filename by const reference. Is that wrong in this case? Consider the case where the new thread gets blocked in some way, new_name is destroyed, and a garbage value is read. Is that possible here?
There's nothing wrong with having a thread function that takes its argument by reference. The constructor for std::thread doesn't forward its arguments by reference; it copies the arguments that are passed to it. So, internally, when you create a thread with
std::thread t(edit_document, new_name);
it, in effect, generates code that spins up a thread which does this:
std::string first_arg(new_name);
edit_document(first_arg);
and first_arg lives until after edit_document returns. (Don't take that code literally -- the actual implementation is much more subtle. The constructor for std::thread won't return until first_arg has been constructed, so there's no risk that new_name will go away before the copy has been made.)
You have to go out of your way to pass an actual reference to the thread function. That's what std::reference_wrapper does:
std::thread t(edit_document, std::cref(new_name));
If you do that, of course, you have to be sure that the lifetime of new_name will be longer than the thread. That's not common, for obvious reasons.
The Problem
When creating schedulers, the last copy or move of a function object is the last place the function object is ever referenced (by a worker thread). If you use a std::function to store functions in the scheduler, then std::promise, std::packaged_task, and similar move-only types don't work, as they cannot be stored in a std::function, which requires copyable callables.
Similarly, using std::packaged_task in the scheduler imposes unnecessary overhead, as many tasks do not need the std::future returned by packaged_task at all.
The common, not-great solution is to use a std::shared_ptr<std::promise> or a std::shared_ptr<std::packaged_task>, which works but imposes quite a lot of overhead.
The solution
A make_owner, similar to make_unique, with one key difference: a move OR copy simply transfers control of destruction of the object. It is basically identical to std::unique_ptr, except that it is copyable (it basically always moves, even on a copy). Gross...
This means that moving a std::function doesn't require copies of a std::shared_ptr, with its reference counting, so there is significantly less overhead. A single atomic pointer to the object would be needed, and a move OR copy would transfer control. The major difference is that a copy also transfers control; this might be a bit of a no-no in terms of strict language rules, but I don't see another way around it.
This solution is bad because:
It ignores copy semantics.
It casts away const (in copy constructor and operator =)
Grrr
It isn't as nice of a solution as I'd like so if anybody knows another way to avoid using a shared pointer or only using packaged_tasks in a scheduler I'd love to hear it because I'm stumped...
I am pretty unsatisfied with this solution.... Any ideas?
I am able to re-implement std::function with move semantics, but this seems like a massive pain in the arse, and it has its own problems regarding object lifetime (but those already exist when using std::function with reference captures).
Some examples of the problem:
EDIT
Note in the target application I cannot do std::thread a (std::move(a)) as the scheduler threads are always running, at most they are put in a sleep state, never joined, never stopped. A fixed number of threads are in the thread pool, I cannot create threads for each task.
auto proms = std::make_unique<std::promise<int>>();
auto future = proms->get_future();
std::thread runner(std::move(std::function( [prom = std::move(proms)]() mutable noexcept
{
prom->set_value(80085);
})));
std::cout << future.get() << std::endl;
std::cin.get();
And an example with a packaged_task
auto pack = std::packaged_task<int(void)>
( []
{
return 1;
});
auto future = pack.get_future();
std::thread runner(std::move(std::function( [pack = std::move(pack)]() mutable noexcept
{
pack();
})));
std::cout << future.get() << std::endl;
std::cin.get();
EDIT
I need to do this from the context of a scheduler, I won't be able to move to the thread.
Please note that the above is a minimal reproducible example; std::async is not adequate for my application.
The main question is: why do you want to wrap a lambda in a std::function before passing it to the std::thread constructor?
It is perfectly fine to do this:
std::thread runner([prom = std::move(proms)]() mutable noexcept
{
prom->set_value(80085);
});
You can find the explanation of why std::function does not allow you to store a move-only lambda here.
If you were going to pass std::function with wrapped lambda to some function, instead of:
void foo(std::function<void()> f)
{
std::thread runner(std::move(f));
/* ... */
}
foo(std::function<void()>([](){}));
You can do this:
void foo(std::thread runner)
{
/* ... */
}
foo(std::thread([](){}));
Update: It can be done in an old-fashioned way.
std::thread runner([prom_deleter = proms.get_deleter(), prom = proms.release()]() mutable noexcept
{
prom->set_value(80085);
// if `proms` deleter is of a `default_deleter` type
// the next line can be simplified to `delete prom;`
prom_deleter(prom);
});
Imagine the following code:
void async(connection *, std::function<void(void)>);
void work()
{
auto o = std::make_shared<O>();
async(&o->member, [] { do_something_else(); } );
}
async will, for example, start a thread using the member of o, which was passed as a pointer. But written like this, o goes out of scope right after async() has been called, so it will be deleted, and so will member.
How to solve this correctly and nicely(!) ?
Apparently one solution is to add o to the capture list. Captures are guaranteed not to be optimized out, even if unused.
async(&o->member, [o] { do_something_else(); } );
However, recent compilers (clang-5.0) include the -Wunused-lambda-capture in the -Wextra collection. And this case produces the unused-lambda-capture warning.
I added (void) o; inside the lambda, which silences this warning.
async(&o->member, [o] {
(void) o;
do_something_else();
});
Is there are more elegant way to solve this problem of scope?
(The origin of this problem is derived from using write_async of boost::asio)
Boost.Asio seems to suggest using enable_shared_from_this to keep whatever owns the "connection" alive while there are operations pending that use it. For example:
class task : public std::enable_shared_from_this<task> {
public:
static std::shared_ptr<task> make() {
return std::shared_ptr<task>(new task());
}
void schedule() {
async(&conn, [t = shared_from_this()]() { t->run(); });
}
private:
task() = default;
void run() {
// whatever
}
connection conn;
};
Then to use task:
auto t = task::make();
t->schedule();
This seems like a good idea, as it encapsulates all the logic for scheduling and executing a task within the task itself.
I suggest that your async function is not optimally designed. If async invokes the function at some arbitrary point in the future, and it requires that the connection be alive at that time, then I see two possibilities. You could make whatever owns the logic that underlies async also own the connection. For example:
class task_manager {
void async(connection*, std::function<void ()> f);
connection* get_connection(size_t index);
};
This way, the connection will always be alive when async is called.
Alternatively, you could have async take a unique_ptr<connection> or shared_ptr<connection>:
void async(std::shared_ptr<connection>, std::function<void ()> f);
This is better than capturing the owner of connection in the closure, which may have unforeseen side-effects (including that async may expect the connection to stay alive after the function object has been invoked and destroyed).
Not a great answer, but...
It doesn't seem like there's necessarily a "better"/"cleaner" solution, although I'd suggest a more "self descriptive" solution might be to create a functor for the thread operation which explicitly binds the member function and the shared_ptr instance inside it. Using a dummy lambda capture doesn't necessarily capture the intent, and someone might come along later and "optimize" it to a bad end. Admittedly, though, the syntax for binding a functor with a shared_ptr is somewhat more complex.
My 2c, anyway (and I've done similar to my suggestion, for reference).
A solution I've used in a project of mine is to derive the class from enable_shared_from_this and let it leak during the asynchronous call through a data member that stores a copy of the shared pointer.
See Resource class for further details and in particular member methods leak and reset.
Once cleaned up it looks like the following minimal example:
#include<memory>
struct S: std::enable_shared_from_this<S> {
void leak() {
ref = this->shared_from_this();
}
void reset() {
ref.reset();
}
private:
std::shared_ptr<S> ref;
};
int main() {
auto ptr = std::make_shared<S>();
ptr->leak();
// do whatever you want and notify who
// is in charge to reset ptr through
// ptr->reset();
}
The main risk is that if you never reset the internal pointer, you'll have an actual leak. In my case it was easy to deal with: the underlying library requires a resource to be explicitly closed before it is discarded, and I reset the pointer when it's closed. Until then, living resources can be retrieved through a proper function (the walk member function of the Loop class, again a mapping to something offered by the underlying library) and one can still close them at any time, so leaks are completely avoided.
In your case you must find your own way to avoid the problem somehow, and that could be an issue, but it mostly depends on the actual code, so I cannot say.
A possible drawback is that in this case you are forced to create your objects on the dynamic storage through a shared pointer; otherwise the whole thing would break down and not work.
I'm a bit confused about the purpose of std::call_once. To be clear, I understand exactly what std::call_once does, and how to use it. It's usually used to atomically initialize some state, and make sure that only one thread initializes the state. I've also seen online many attempts to create a thread-safe singleton with std::call_once.
As demonstrated here, suppose you write a thread safe singleton, as such:
CSingleton& CSingleton::GetInstance()
{
std::call_once(m_onceFlag, [] {
m_instance.reset(new CSingleton);
});
return *m_instance.get();
}
Okay, I get the idea. But I thought that the only thing std::call_once really guarantees is that the passed function will only be executed once. But does it also guarantee that if there is a race to call the function between multiple threads, and one thread wins, the other threads will block until the winning thread returns from the call?
Because if so, I see no difference between call_once and a plain synchronization mutex, like:
CSingleton& CSingleton::GetInstance()
{
std::unique_lock<std::mutex> lock(m_mutex);
if (!m_instance)
{
m_instance.reset(new CSingleton);
}
lock.unlock();
return *m_instance;
}
So, if std::call_once indeed forces other threads to block, then what benefits does std::call_once offer over a regular mutex? Thinking about it some more, std::call_once would certainly have to force the other threads to block, or whatever computation was accomplished in the user-provided function wouldn't be synchronized. So again, what does std::call_once offer above an ordinary mutex?
One thing that call_once does for you is handle exceptions. That is, if the first thread into it throws an exception inside of the functor (and propagates it out), call_once will not consider the call satisfied. A subsequent invocation is allowed to enter the functor again in an effort to complete it without an exception.
In your example, the exceptional case is also handled properly. However it is easy to imagine a more complicated functor where the exceptional case would not be properly handled.
All this being said, I note that call_once is redundant with function-local-statics. E.g.:
CSingleton& CSingleton::GetInstance()
{
static std::unique_ptr<CSingleton> m_instance(new CSingleton);
return *m_instance;
}
Or more simply:
CSingleton& CSingleton::GetInstance()
{
static CSingleton m_instance;
return m_instance;
}
The above is equivalent to your example with call_once, and imho, simpler. Oh, except the order of destruction is very subtly different between this and your example. In both cases m_instance is destroyed in reverse order of construction. But the order of construction is different: in yours, m_instance is constructed relative to other objects with file-local scope in the same translation unit; with function-local statics, m_instance is constructed the first time GetInstance is executed.
That difference may or may not be important to your application. Generally I prefer the function-local-static solution as it is "lazy". I.e. if the application never calls GetInstance() then m_instance is never constructed. And there is no period during application launch when a lot of statics are trying to be constructed at once. You pay for the construction only when actually used.
A slight variation on the standard C++ solution is to use a lambda inside the usual one:
// header.h
namespace dbj_once {
struct singleton final {};
inline singleton & instance()
{
static singleton single_instance = []() -> singleton {
// this is called only once
// do some more complex initialization
// here
return {};
}();
return single_instance;
};
} // dbj_once
Please observe
an anonymous namespace gives internal linkage to the variables inside it, so each translation unit would get its own instance; do not put this inside one. This is header code.
worth repeating: this is safe in the presence of multiple threads (MT) and is supported as such by all major compilers
the lambda inside is guaranteed to be called only once
this pattern is also safe to use in header-only situations
If you read this you'll see that std::call_once makes no guarantee about data races; it's simply a utility function for performing an action once (which will work across threads). You shouldn't presume that it has anything close to the effect of a mutex.
as an example:
#include <iostream>
#include <mutex>
#include <thread>
void operation_that_takes_time(); // some long-running work, defined elsewhere
static std::once_flag flag;
void f(){
operation_that_takes_time();
std::call_once(flag, [](){std::cout << "f() was called\n";});
}
void g(){
operation_that_takes_time();
std::call_once(flag, [](){std::cout << "g() was called\n";});
}
int main(int argc, char *argv[]){
std::thread t1(f);
std::thread t2(g);
t1.join();
t2.join();
}
could print both f() was called and g() was called. This is because in the body of std::call_once it will check whether flag was set, then set it if not, then call the appropriate function. But while it is checking, or before it sets flag, another thread may call call_once with the same flag and run a function at the same time. You should still protect calls to call_once with a mutex if you know another thread may have a data race.
EDIT
I found a link to the proposal for the std::call_once function and thread library, which states that concurrency is guaranteed to result in only one call of the function, so it should work like a mutex.
More specifically:
If multiple calls to call_once with the same flag are executing concurrently in separate threads, then only one thread shall call func, and no thread shall proceed until the call to func has completed.
So to answer your question: yes, other threads will be blocked until the calling thread returns from the specified functor.
I'm a novice in C++11 threading, trying to run a member function of a class in concurrent threads.
In the answer to my earlier question I received the suggestion:
std::thread t1(&SomeClass::threadFunction, *this, arg1, arg2);
I implemented the above suggestion. It removed the compile error I was having but introduced a runtime error. In another question I received the suggestion to remove all copy mechanisms. Actually, I don't want to copy the data, because the code is for finite element analysis and requires a lot of memory.
Is there any way I can do this?
The header is similar to the following.
class SomeClass {
    vector<int*> someVariable;
public:
    ~SomeClass();
    void threadedMethod(bool, bool); // Inside this method the
                                     // member vector 'someVariable' is used.
    void someMethod();               // In this function threadedMethod is
                                     // used twice to make 2 different threads
};
The someMethod implementation is,
void SomeClass::someMethod() {
thread t1(&SomeClass::threadedMethod, *this, arg1, arg2);
thread t2(&SomeClass::threadedMethod, *this, arg1, arg2);
t2.join();
t1.join();
}
The destructor is similar to the following,
SomeClass::~SomeClass() {
int count = someVariable.size();
for(int i=0; i < count; i++) {
delete someVariable[i];
}
}
threadedMethod accesses the member vector. The operations are data-parallel; as a result, no thread will write to the same memory block, and the memory read and the memory written are different. Therefore I think I don't need any kind of locks.
As you can see, I am using *this, and that is causing a lot of copying. I really need to avoid it. Can anyone kindly suggest another way that will let me avoid the copying?
If you need more explanation please let me know. If within my ability I'll try to elaborate as much as possible.
I am using an Intel Mac with OS X 10.8.3. I'm coding on Xcode 4.6.1. The compiler is Apple LLVM 4.2 (default compiler).
Arguments are passed by value to the constructor of std::thread. Therefore, this statement:
std::thread t1(&SomeClass::threadFunction, *this, arg1, arg2);
// ^^^^^
triggers a copy of *this, which is not what you want. However, std::thread can also accept a pointer to the object on which the member function shall be invoked, exactly like std::bind.
Therefore, by passing this (instead of *this) as an argument, it is the pointer -- instead of the pointed-to object -- that is going to be passed by value and eventually copied. This will trigger no copy construction of SomeClass, as you desire.
Thus, you should rewrite the above statement as follows:
std::thread t1(&SomeClass::threadFunction, this, arg1, arg2);
// ^^^^