Can C++11 tell if std::thread is active? - c++

To my surprise, a C++11 std::thread object that has finished executing but has not yet been joined is still considered an active thread of execution. This is illustrated in the following code example (built on Xubuntu 13.03 with g++ 4.7.3). Does anyone know if the C++11 standard provides a means to detect whether a std::thread object is still actively running code?
#include <thread>
#include <chrono>
#include <iostream>
#include <pthread.h>
#include <functional>

int main() {
    auto lambdaThread = std::thread([](){ std::cout << "Executing lambda thread" << std::endl; });
    std::this_thread::sleep_for(std::chrono::milliseconds(250));
    if (lambdaThread.joinable()) {
        std::cout << "Lambda thread has exited but is still joinable" << std::endl;
        lambdaThread.join();
    }
    return 0;
}

No, I don't think this is possible. I would also reconsider your design and whether such a check is really necessary; maybe you are looking for something like the interruptible threads from Boost.
However, you can use std::async - which I would do anyway - and then rely on the features std::future provides.
Namely, you can call std::future::wait_for with something like std::chrono::seconds(0). This gives you a non-blocking check and lets you compare the std::future_status returned by wait_for.
auto f = std::async(foo);
...
auto status = f.wait_for(std::chrono::seconds(0));
if (status == std::future_status::timeout) {
    // still computing
}
else if (status == std::future_status::ready) {
    // finished computing
}
else {
    // status == std::future_status::deferred
}

For what definition of "actively running code"? Not that I know of; I'm not sure what state the thread is left in after its function has finished but the object is still joinable. In most cases I can think of you'd actually want fine-grained control anyway, like a flag set by the code running in that thread.
For a platform-specific solution on Windows, you could use GetThreadTimes.
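As a rough, portable illustration of the "flag set by the code running in that thread" idea from the comment above, here is a minimal sketch using std::atomic<bool>; the names workerDone and worker are made up for illustration:
#include <atomic>
#include <chrono>
#include <iostream>
#include <thread>

int main() {
    std::atomic<bool> workerDone(false);              // set by the worker as its last action
    std::thread worker([&workerDone]() {
        std::cout << "Executing lambda thread" << std::endl;
        workerDone = true;
    });
    std::this_thread::sleep_for(std::chrono::milliseconds(250));
    // joinable() only says the object still owns a thread; the flag tells us
    // whether the thread's code has actually finished.
    if (workerDone)
        std::cout << "Worker has finished running its code" << std::endl;
    worker.join();
    return 0;
}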

Related

How to implement C++ Task Scheduler

I know the following code is perhaps not a task scheduler; however, I'm trying to get your valuable comments to help me understand scheduling.
I am trying to understand and come up with a bare-minimum piece of code that could be called a TaskScheduler.
I have the following code but am not sure if it qualifies as scheduling.
Could someone provide comments and a code skeleton, reference, or links?
Thanks!!
#include <iostream>
#include <thread>
#include <future>
#include <queue>
#include <mutex>
#include <condition_variable>

using namespace std;

int factorial_loc(int val) {
    int res = 1;
    while (val > 0) {
        res *= val;
        val--;
    }
    return res;
}

queue<packaged_task<int()>> q;
mutex mtx;
condition_variable cond;

void thread_1() {
    unique_lock<mutex> ul(mtx);
    cond.wait(ul, []() {
        return !q.empty();
    });
    auto f = std::move(q.front());
    q.pop();
    f();
}

void run_packaged_task()
{
    packaged_task<int(int)> t(factorial_loc);
    packaged_task<int()> t2(std::bind(factorial_loc, 4));
    future<int> f = t2.get_future();
    thread t1(thread_1);
    {
        unique_lock<mutex> ul(mtx);
        q.push(std::move(t2));
    }
    cond.notify_one();
    cout << "\n Res: " << f.get();
    t1.join();
}
I have the following code but am not sure if it suffices as scheduling.
Why not just do this?
void run_packaged_task()
{
    cout << "\n Res: " << factorial_loc(4);
}
Maybe you think that my version of run_packaged_task() is not a scheduler. Well, OK. I don't think so either. But, as far as the caller can tell, my version does exactly the same thing your version does:
It computes the factorial of 4,
It writes the result to cout,
And then, only when that's done, it returns.
Your code contains some of the pieces of a scheduler: a thread, a queue, a data type that represents a task. But you don't use any of those pieces to do anything that looks like scheduling.
IMO, you need to think about what "scheduler" means. What do you expect a scheduler to do?
Should a scheduler execute each task as soon as possible? Or, if not, then when? How does the caller say when? How does the scheduler defer execution of the task until such time?
Should the caller have to wait until the task is completed? Should the caller have an option to wait?
I don't know exactly what you mean by "scheduler," but if my guess is correct, then it would have something in common with a thread pool. Maybe you could get some traction if you start by searching for examples of how to implement a simplistic thread pool, and then think about how you could "improve" it to make a "scheduler."
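To make the comparison concrete, here is a minimal, hedged sketch of such a simplistic thread pool (not the poster's code; the names ThreadPool and enqueue are invented for illustration). Each worker blocks on a condition variable and pops tasks from a shared queue, which is the scheduling-like part the original code never exercises:
#include <condition_variable>
#include <cstddef>
#include <functional>
#include <iostream>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

class ThreadPool {
public:
    explicit ThreadPool(std::size_t n) {
        for (std::size_t i = 0; i < n; ++i)
            workers_.emplace_back([this] { worker_loop(); });
    }
    ~ThreadPool() {
        {
            std::lock_guard<std::mutex> lk(mtx_);
            done_ = true;
        }
        cond_.notify_all();
        for (auto& w : workers_) w.join();
    }
    void enqueue(std::function<void()> task) {
        {
            std::lock_guard<std::mutex> lk(mtx_);
            tasks_.push(std::move(task));
        }
        cond_.notify_one();
    }
private:
    void worker_loop() {
        for (;;) {
            std::function<void()> task;
            {
                std::unique_lock<std::mutex> lk(mtx_);
                cond_.wait(lk, [this] { return done_ || !tasks_.empty(); });
                if (done_ && tasks_.empty()) return;     // drain the queue, then stop
                task = std::move(tasks_.front());
                tasks_.pop();
            }
            task();                                      // run outside the lock
        }
    }
    std::vector<std::thread> workers_;
    std::queue<std::function<void()>> tasks_;
    std::mutex mtx_;
    std::condition_variable cond_;
    bool done_ = false;
};

int main() {
    ThreadPool pool(2);
    pool.enqueue([] { std::cout << "task 1\n"; });
    pool.enqueue([] { std::cout << "task 2\n"; });
}   // the destructor drains the queue and joins the workers
A "scheduler" could grow out of this by, for example, tagging each task with an earliest-start time and having the workers wait until that time before running it.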

How to give the user some assigned time to answer?

Something like a stopwatch: give the person who is using my program about 30 seconds to answer, and if no answer is given, the program should exit.
Basically, the response shouldn't take more than the time given; otherwise the program will exit.
I found the answer by Axalo interesting, but fatally flawed by unfortunate minutiae of std::async and std::future. So I'm presenting an alternative that eschews std::async but otherwise follows Axalo's basic design.
When I run Axalo's answer on my platform (which is conforming in the pertinent details), if the client never answers, getInputWithin never returns or exits. The program just hangs. And if the client answers well within the timeout, getInputWithin returns with the correct answer, but doesn't do so until the timeout period has expired.
The reason for this problem is subtle. It is well described in Herb Sutter's excellent paper N3630. The destructor of a std::future can block if the future was returned by std::async(), and it blocks until the associated task is done. This feature was intentionally put into async/future, and in the eyes of some, makes future completely useless.
Axalo's r1 and r2 are such std::futures, whose destructors are supposed to block until the associated tasks are done. This is why that solution hangs if the client never answers.
Below is an alternative answer which is built from thread, mutex, and condition_variable. It is otherwise very similar to Axalo's answer, but does not suffer from (what some consider) the design flaws of std::async.
#include <chrono>
#include <condition_variable>
#include <iostream>
#include <memory>
#include <mutex>
#include <stdexcept>
#include <string>
#include <thread>
#include <tuple>

std::string
getInputWithin(std::chrono::seconds timeout)
{
    auto sp = std::make_shared<std::tuple<std::mutex, std::condition_variable,
                                          std::string, bool>>();
    std::thread([sp]() mutable
    {
        std::getline(std::cin, std::get<2>(*sp));
        std::lock_guard<std::mutex> lk(std::get<0>(*sp));
        std::get<3>(*sp) = true;
        std::get<1>(*sp).notify_one();
        sp.reset();
    }).detach();
    std::unique_lock<std::mutex> lk(std::get<0>(*sp));
    if (!std::get<1>(*sp).wait_for(lk, timeout, [&]() {return std::get<3>(*sp);}))
        throw std::runtime_error("time out");
    return std::get<2>(*sp);
}

int main()
{
    std::cout << "please answer within 10 seconds...\n";
    std::string answer = getInputWithin(std::chrono::seconds(10));
    std::cout << answer << '\n';
}
Notes:
The timing stays within the chrono type system always. Prefer the type std::chrono::seconds to a scalar with a suggestive name (int timeoutInSeconds vs std::chrono::seconds timeout).
We need to launch a std::thread to handle the read from std::cin, as Axalo demonstrated. However we are going to need a std::mutex and std::condition_variable for communication instead of using the convenience of std::future. Both the main thread and this auxiliary thread need to share ownership of these communication objects, and we don't know which will die first. If the client never responds, the auxiliary thread may live forever, creating an effective memory leak, which is another problem not solved herein. But at any rate, the easiest way to share ownership is to store the communication objects with a copied std::shared_ptr. Last one out turns out the lights.
Launch a std::thread that waits for std::cin and signals the main thread if it gets it. The signaling must be done with the mutex locked. Note that this thread can be (indeed must be) detached. The thread can not touch any memory that it does not own (because of the shared_ptr owning all referenced memory). If main exits while the auxiliary thread is running, the OS will bring the thread down gracefully with no UB.
The main thread then locks the mutex and does a wait_for on the condition_variable using the specified timeout, and a predicate that is checking for the bool in the tuple to turn to true. This wait_for will either return early with that bool set to true, or it will return with it set to false after timeout seconds. If they race (timeout and client answer at the same time) it is ok, either there will be a string there or not, and the bool in the tuple answers that question. While the main thread is executing the wait_for, the mutex is unlocked so the auxiliary thread can use it.
If the main thread returns from the wait and the bool in the tuple has not been set to true, then an exception is thrown. If this exception is not caught, std::terminate() will be called (a small usage sketch of catching it follows these notes). Otherwise, the string in the tuple will have the client's response.
This approach is susceptible to a client creating many responses to which it never answers, and thus effectively growing memory leaks held by shared_ptrs which never get destructed. Solving that problem is not something I know how to do in portable C++.
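As a usage sketch of the note about the uncaught exception (assuming the getInputWithin defined earlier in this answer), the caller can catch the timeout instead of letting std::terminate() run:
#include <chrono>
#include <iostream>
#include <stdexcept>
#include <string>

std::string getInputWithin(std::chrono::seconds timeout);   // defined above

int main()
{
    std::cout << "please answer within 10 seconds...\n";
    try
    {
        std::string answer = getInputWithin(std::chrono::seconds(10));
        std::cout << answer << '\n';
    }
    catch (const std::runtime_error& e)
    {
        std::cout << "no answer: " << e.what() << '\n';   // prints "no answer: time out"
    }
}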
In C++14, a slight modification can be done with getInputWithin which reduces the error of choosing the wrong member of the tuple. Since our tuple is composed of all different types, we can index it by type instead of by position:
std::string
getInputWithin(std::chrono::seconds timeout)
{
    auto sp = std::make_shared<std::tuple<std::mutex, std::condition_variable,
                                          std::string, bool>>();
    std::thread([sp]() mutable
    {
        std::getline(std::cin, std::get<std::string>(*sp));            // here
        std::lock_guard<std::mutex> lk(std::get<std::mutex>(*sp));     // here
        std::get<bool>(*sp) = true;                                    // here
        std::get<std::condition_variable>(*sp).notify_one();           // here
        sp.reset();
    }).detach();
    std::unique_lock<std::mutex> lk(std::get<std::mutex>(*sp));        // here
    if (!std::get<std::condition_variable>(*sp).wait_for(lk, timeout,
            [&]() {return std::get<bool>(*sp);}))                      // here
        throw std::runtime_error("time out");
    return std::get<std::string>(*sp);                                 // here
}
That is, the lines marked // here have been changed with std::get<type>(*sp) as opposed to std::get<index>(*sp).
Update
In a fit of paranoia inspired by the good comment from TemplateRex below, I've added a call to sp.reset() as the last thing the aux thread does. This forces the main thread to be the one to destruct the tuple, eliminating the possibility that the aux thread could stall before destructing its local copy of sp, and let main blow through the atexit chain, and then have the aux thread wake up and run the tuple destructor.
There may be other reasons that exist to make the call to sp.reset() unnecessary. But by adding this preventative medicine, we don't have to worry about it.
If you don't want to use exit and kill the process, you could do it this way:
std::string getInputWithin(int timeoutInSeconds, bool *noInput = nullptr)
{
    std::string answer;
    bool exceeded = false;
    bool gotInput = false;
    auto r1 = std::async([&answer, &gotInput]()
    {
        std::getline(std::cin, answer);
        gotInput = true;
    });
    auto r2 = std::async([&timeoutInSeconds, &exceeded]()
    {
        std::this_thread::sleep_for(std::chrono::seconds(timeoutInSeconds));
        exceeded = true;
    });
    while (!gotInput && !exceeded)
    {
        std::this_thread::sleep_for(std::chrono::milliseconds(1));
    }
    if (gotInput)
    {
        if (noInput != nullptr) *noInput = false;
        return answer;
    }
    if (noInput != nullptr) *noInput = true;
    return "";
}

int main()
{
    std::cout << "please answer within 10 seconds...\n";
    bool noInput;
    std::string answer = getInputWithin(10, &noInput);
    return 0;
}
The nice thing about this is that you can now handle the missing input by using a default value or simply giving the user a second chance, etc...
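For instance, a small usage sketch (assuming the getInputWithin above) that falls back to a default answer instead of exiting:
#include <iostream>
#include <string>

std::string getInputWithin(int timeoutInSeconds, bool *noInput);   // defined above

int main()
{
    std::cout << "please answer within 10 seconds...\n";
    bool noInput;
    std::string answer = getInputWithin(10, &noInput);
    if (noInput)
        answer = "default answer";   // fall back instead of exiting
    std::cout << "using: " << answer << '\n';
    return 0;
}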

C++ program unexpectedly blocks / throws

I'm learning about mutexes in C++ and have a problem with the following code (taken from N. Josuttis' "The C++ Standard Library").
I don't understand why it blocks / throws unless I add this_thread::sleep_for in the main thread (then it doesn't block and all three calls are carried out).
The compiler is cl.exe used from the command line.
#include <future>
#include <mutex>
#include <iostream>
#include <string>
#include <thread>
#include <chrono>

std::mutex printMutex;

void print(const std::string& s)
{
    std::lock_guard<std::mutex> lg(printMutex);
    for (char c : s)
    {
        std::cout.put(c);
    }
    std::cout << std::endl;
}

int main()
{
    auto f1 = std::async(std::launch::async, print, "Hello from thread 1");
    auto f2 = std::async(std::launch::async, print, "Hello from thread 2");
    // std::this_thread::sleep_for(std::chrono::seconds(1));
    print(std::string("Hello from main"));
}
I think what you are seeing is an issue with the conformance of the MSVC implementation of async (in combination with future). I believe it is not conformant. I am able to reproduce it with VS2013, but unable to reproduce the issue with gcc.
The crash is because the main thread exits (and starts to clean up) before the other two threads complete.
Hence a simple delay (the sleep_for) or .get() or .wait() on the two futures should fix it for you. So the modified main could look like:
int main()
{
    auto f1 = std::async(std::launch::async, print, "Hello from thread 1");
    auto f2 = std::async(std::launch::async, print, "Hello from thread 2");
    print(std::string("Hello from main"));
    f1.get();
    f2.get();
}
Favour the explicit wait or get over the timed "sleep".
Notes on the conformance
There was a proposal from Herb Sutter to change the wait or block on the shared state of the future returned from async. This may be the reason for the behaviour in MSVC; it could be seen as having implemented the proposal. I'm not sure what the final result of the proposal was, or how much of it made it into C++14. At least w.r.t. the blocking of the future returned from async, it looks like the MSVC behaviour did not make it into the specification.
It is interesting to note that the wording in §30.6.8/5 changed:
From C++11:
a call to a waiting function on an asynchronous return object that shares the shared state created by this async call shall block until the associated thread has completed, as if joined
To C++14:
a call to a waiting function on an asynchronous return object that shares the shared state created by this async call shall block until the associated thread has completed, as if joined, or else time out
I'm not sure how the "time out" would be specified; I would imagine it is implementation-defined.
std::async returns a future. Its destructor blocks if get or wait has not been called:
it may block if all of the following are true: the shared state was created by a call to std::async, the shared state is not yet ready, and this was the last reference to the shared state.
See std::futures from std::async aren't special! for a detailed treatment of the subject.
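A minimal sketch of that blocking destructor (behaviour of a conforming implementation): the temporary future returned by std::async is destroyed at the end of the full expression, so its destructor waits for the task, and "done" is only printed after "task finished":
#include <chrono>
#include <future>
#include <iostream>
#include <thread>

int main()
{
    // The returned future is a temporary; its destructor runs at the end of
    // this statement and blocks until the asynchronous task has completed.
    std::async(std::launch::async, []
    {
        std::this_thread::sleep_for(std::chrono::seconds(1));
        std::cout << "task finished\n";
    });
    std::cout << "done\n";   // always printed after "task finished"
}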
Add these 2 lines at the end of main:
f1.wait();
f2.wait();
This will make sure the threads finish before main exits.

std::thread c++. More threads same data

I'm using Visual Studio 2012 and C++11. I don't understand why this does not work:
void client_loop(bool &run)
{
    while (run);
}

int main()
{
    bool running = true;
    std::thread t(&client_loop, std::ref(running));
    running = false;
    t.join();
}
In this case, the loop in thread t never finishes, even though I explicitly set running to false. run and running refer to the same location. I tried making running a single global variable, but nothing happens. I tried passing a pointer as well, but nothing.
The threads use the same heap. I really don't understand. Can anyone help me?
Your program has Undefined Behavior, because it introduces a data race on the running variable (one thread writes it, another thread reads it).
You should use a mutex to synchronize access, or make running an atomic<bool>:
#include <iostream>
#include <thread>
#include <atomic>

void client_loop(std::atomic<bool> const& run)
{
    while (run.load());
}

int main()
{
    std::atomic<bool> running(true);
    std::thread t(&client_loop, std::ref(running));
    running = false;
    t.join();
    std::cout << "Arrived";
}
See a working live example.
The const probably doesn't affect the compiler's view of the code. In a single-threaded application, the value won't change (and this particular program is meaningless). In a multi-threaded application, since it's an atomic type, the compiler can't optimize out the load, so in fact there's no real issue here. It's really more a matter of style; since main modifies the value, and client_loop looks for that modification, it doesn't seem right to me to say that the value is const.
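For completeness, a hedged sketch of the mutex-based alternative mentioned in the answer above (protecting a plain bool with a std::mutex instead of using std::atomic<bool>):
#include <iostream>
#include <mutex>
#include <thread>

std::mutex run_mtx;
bool running = true;                 // always read and written under run_mtx

bool still_running()
{
    std::lock_guard<std::mutex> lk(run_mtx);
    return running;
}

void client_loop()
{
    while (still_running());         // busy-wait, as in the original example
}

int main()
{
    std::thread t(client_loop);
    {
        std::lock_guard<std::mutex> lk(run_mtx);
        running = false;
    }
    t.join();
    std::cout << "Arrived";
}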

Get return code from std::thread? [duplicate]

Possible Duplicate:
C++: Simple return value from std::thread?
Is there any way to get the return code from a std::thread? I have a function which returns an integer, and I want to be able to get the return value from the function when the thread is done executing.
No, that's not what std::thread is for.
Instead, use async to get a future:
#include <future>
int myfun(double, char, bool);
auto f = std::async(myfun, arg1, arg2, arg3); // f is a std::future<int>
// ...
int res = f.get();
You can use the wait_for member function of f (with zero timeout) to see if the result is ready.
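For example, instead of calling get() unconditionally, a non-blocking readiness check (the same idea as in the first answer on this page) might look like this:
if (f.wait_for(std::chrono::seconds(0)) == std::future_status::ready)
{
    int res = f.get();   // the result is available; get() will not block
}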
As others have suggested, the facilities in <future> can be used for this. However, I object to the answer
No, you can't do this with std::thread
Here is one way to do what you want with std::thread. It is by no means the only way:
#include <thread>
#include <iostream>

int func(int x)
{
    return x + 1;
}

int main()
{
    int i;
    std::thread t([&] { i = func(2); });
    t.join();
    std::cout << i << '\n';
}
This will portably output:
3
Kerrek SB's answer is correct, but I suggested adding another example (which he suggested should be its own answer, so here it is).
I recently discovered that, at least in VC11, std::async will not release all of the thread's resources until the end of the application, making it possible to get memory-leak false positives (if you are monitoring them using, for example, Visual Leak Detector).
What I mean is that for most basic applications it is not worth reading the rest of this answer; but if, like me, you need to check for memory leaks and can't afford false positives (like static data not released at the end of the main function), then this might help.
std::async is not guaranteed to run in a separate thread by default; it is only guaranteed if you use std::launch::async as the first parameter. Otherwise the implementation decides what to do, which is why the VC11 implementation uses the Microsoft Concurrency Runtime task manager to run the provided function as a task pushed into a task pool, meaning threads are maintained and managed transparently. There are ways to explicitly terminate the task manager, but that's too platform-specific, making async a poor choice when you want to 1) be sure to launch a thread, 2) get a result later, and 3) be sure the thread is fully released when you get the result.
The alternative that does exactly that is to use std::packaged_task and std::thread in combination with std::future. The way it is done is very similar to using std::async, just a bit more verbose (which means you can generalize it in a custom template function if you want).
#include <functional>
#include <future>   // std::packaged_task and std::future live in <future>
#include <thread>

int myfun(double, char, bool);

// Bind the arguments so the task's call signature is int().
std::packaged_task<int()> task(std::bind(myfun, arg1, arg2, arg3));
auto f = task.get_future(); // f is a std::future<int>
First we create a task, basically an object containing both the function and the std::promise that will be associated with the future. std::packaged_task works mostly like an augmented version of std::function:
Now we need to launch the thread explicitly:
std::thread thread(std::move(task));
thread.detach();
The move is necessary because std::packaged_task is not copyable. Detaching the thread is only necessary if you only want to synchronize using the future – otherwise you will need to join the thread explicitly. If you do neither, then when the thread's destructor is called, it will just call std::terminate().
// ...
int res = f.get(); // Synchronization and retrieval.
Here's an example using packaged_task:
#include <future>
#include <iostream>
#include <thread>

void task_waiter(std::future<int>&& f) {
    std::future<int> ft = std::move(f);
    int result = ft.get();
    std::cout << result << '\n';
}

int the_task() {
    return 17;
}

int main() {
    std::packaged_task<int()> task(the_task);
    std::thread thr(task_waiter, task.get_future());
    task();
    thr.join();
    return 0;
}