Do I need to implement blocking when using boost::asio? - c++

My question is: if I run io_service::run() on multiple threads, do I need to implement blocking in these asynchronous functions?
example:
// Globals implied by the question (their declarations were not shown):
boost::asio::io_service io_service;
boost::asio::deadline_timer timer(io_service, boost::posix_time::milliseconds(500));

int i = 0;
int j = 0;

void test_timer(boost::system::error_code ec)
{
    // Do I need to lock here?
    if (i++ == 10)
    {
        j = i * 10;
    }
    timer.expires_at(timer.expires_at() + boost::posix_time::milliseconds(500));
    timer.async_wait(&test_timer);
}

void threadMain()
{
    io_service.run();
}

int main()
{
    boost::thread_group workers;

    timer.async_wait(&test_timer);

    for (int i = 0; i < 5; i++) {
        workers.create_thread(&threadMain);
    }

    io_service.run();
    workers.join_all();
    return 0;
}

The definition of async is that it is non-blocking.
If you mean to ask "do I have to synchronize access to shared objects from different threads" - that question is unrelated and the answer depends on the thread-safety documented for the object you are sharing.
For Asio, basically (rough summary) you need to synchronize concurrent access (concurrent as in: from multiple threads) to all types except boost::asio::io_context¹,².
Your Sample
Your sample uses multiple threads running the io service, meaning handlers run on any of those threads. This means that effectively you're sharing the globals and indeed they need protection.
However, because your application logic (the async call chain) dictates that only one operation is ever pending, and the next async operation on the shared timer object is always scheduled from within that chain, the accesses effectively all happen on a single logical thread (an implicit strand; see Why do I need strand per connection when using boost::asio?).
The simplest thing that would work:
Logical Strand
Live On Coliru
#include <boost/asio.hpp>
#include <boost/thread.hpp>
#include <iostream>

boost::asio::io_service io_service;
boost::asio::deadline_timer timer { io_service };

struct state_t {
    int i = 0;
    int j = 0;
} state;

void test_timer(boost::system::error_code ec)
{
    if (ec != boost::asio::error::operation_aborted) {
        {
            if (state.i++ == 10) {
                state.j = state.i * 10;
                if (state.j > 100)
                    return; // stop after 5 seconds
            }
        }
        timer.expires_at(timer.expires_at() + boost::posix_time::milliseconds(50));
        timer.async_wait(&test_timer);
    }
}

int main()
{
    boost::thread_group workers;

    timer.expires_from_now(boost::posix_time::milliseconds(50));
    timer.async_wait(&test_timer);

    for (int i = 0; i < 5; i++) {
        workers.create_thread([] { io_service.run(); });
    }

    workers.join_all();

    std::cout << "i = " << state.i << std::endl;
    std::cout << "j = " << state.j << std::endl;
}
Note I removed the io_service::run() from the main thread as it is redundant with the join() (unless you really wanted 6 threads running the handlers, not 5).
Prints
i = 11
j = 110
Caveat
There's a pitfall lurking here. Say you didn't want to bail out at a fixed count, as I did, but wanted to stop the chain from the outside; you'd be tempted to do:
timer.cancel();
from main. That's not legal, because the deadline_timer object is not thread safe. You'd need to either
use a global atomic_bool to signal the request for termination (a sketch follows below), or
post the timer.cancel() on the same strand as the timer's async chain. However, there is only an implicit strand here, so you can't do that without changing the code to use an explicit strand.
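For the first option, a minimal sketch (the flag name is illustrative, and it assumes the single-timer program from above):

#include <atomic>

std::atomic<bool> stop_requested{false}; // set from main (or any thread) to stop

void test_timer(boost::system::error_code ec)
{
    if (ec == boost::asio::error::operation_aborted) return;
    if (stop_requested) return; // end the chain; run() returns once idle

    // ... update the shared state as before ...

    timer.expires_at(timer.expires_at() + boost::posix_time::milliseconds(50));
    timer.async_wait(&test_timer);
}

// elsewhere, e.g. in main:
// stop_requested = true; // takes effect at the next expiry, never touches the timer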
More Timers
Let's complicate things by having two timers, with their own implicit strands. This means access to the timer instances still need not be synchronized, but access to i and j does need to be.
Note In this demo I use synchronized_value<> for elegance. You can write similar logic manually using mutex and lock_guard.
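For illustration, roughly what that manual version would look like (a sketch only; the full demo below keeps using synchronized_value):

#include <mutex>

struct guarded_state {
    std::mutex mtx;
    int i = 0;
    int j = 0;
} guarded;

void bump() {
    std::lock_guard<std::mutex> lock(guarded.mtx); // released at end of scope
    if (guarded.i++ == 10)
        guarded.j = guarded.i * 10;
}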
Live On Coliru
#include <boost/asio.hpp>
#include <boost/thread.hpp>
#include <boost/thread/synchronized_value.hpp>
#include <iostream>

boost::asio::io_service io_service;

struct state {
    int i = 0;
    int j = 0;
};

boost::synchronized_value<state> shared_state;

struct TimerChain {
    boost::asio::deadline_timer _timer;

    TimerChain() : _timer{io_service} {
        _timer.expires_from_now(boost::posix_time::milliseconds(50));
        resume();
    }

    void resume() {
        _timer.async_wait(boost::bind(&TimerChain::test_timer, this, _1));
    };

    void test_timer(boost::system::error_code ec)
    {
        if (ec != boost::asio::error::operation_aborted) {
            {
                auto state = shared_state.synchronize();
                if (state->i++ == 10) {
                    state->j = state->i * 10;
                }
                if (state->j > 100) return; // stop after some iterations
            }
            _timer.expires_at(_timer.expires_at() + boost::posix_time::milliseconds(50));
            resume();
        }
    }
};

int main()
{
    boost::thread_group workers;

    TimerChain timer1;
    TimerChain timer2;

    for (int i = 0; i < 5; i++) {
        workers.create_thread([] { io_service.run(); });
    }

    workers.join_all();

    auto state = shared_state.synchronize();
    std::cout << "i = " << state->i << std::endl;
    std::cout << "j = " << state->j << std::endl;
}
Prints
i = 12
j = 110
Adding The Explicit Strands
Now it's pretty straight-forward to add them:
struct TimerChain {
    boost::asio::io_service::strand _strand;
    boost::asio::deadline_timer _timer;

    TimerChain() : _strand{io_service}, _timer{io_service} {
        _timer.expires_from_now(boost::posix_time::milliseconds(50));
        resume();
    }

    void resume() {
        _timer.async_wait(_strand.wrap(boost::bind(&TimerChain::test_timer, this, _1)));
    };

    void stop() { // thread safe
        _strand.post([this] { _timer.cancel(); });
    }

    // ...
Live On Coliru
#include <boost/asio.hpp>
#include <boost/thread.hpp>
#include <boost/thread/synchronized_value.hpp>
#include <iostream>

boost::asio::io_service io_service;

struct state {
    int i = 0;
    int j = 0;
};

boost::synchronized_value<state> shared_state;

struct TimerChain {
    boost::asio::io_service::strand _strand;
    boost::asio::deadline_timer _timer;

    TimerChain() : _strand{io_service}, _timer{io_service} {
        _timer.expires_from_now(boost::posix_time::milliseconds(50));
        resume();
    }

    void resume() {
        _timer.async_wait(_strand.wrap(boost::bind(&TimerChain::test_timer, this, _1)));
    };

    void stop() { // thread safe
        _strand.post([this] { _timer.cancel(); });
    }

    void test_timer(boost::system::error_code ec)
    {
        if (ec != boost::asio::error::operation_aborted) {
            {
                auto state = shared_state.synchronize();
                if (state->i++ == 10) {
                    state->j = state->i * 10;
                }
            }
            // continue indefinitely
            _timer.expires_at(_timer.expires_at() + boost::posix_time::milliseconds(50));
            resume();
        }
    }
};

int main()
{
    boost::thread_group workers;

    TimerChain timer1;
    TimerChain timer2;

    for (int i = 0; i < 5; i++) {
        workers.create_thread([] { io_service.run(); });
    }

    boost::this_thread::sleep_for(boost::chrono::seconds(10));
    timer1.stop();
    timer2.stop();

    workers.join_all();

    auto state = shared_state.synchronize();
    std::cout << "i = " << state->i << std::endl;
    std::cout << "j = " << state->j << std::endl;
}
Prints
i = 400
j = 110
¹ (or using the legacy name boost::asio::io_service)
² lifetime mutations are not considered member operations in this respect (you have to manually synchronize construction/destruction of shared objects even for thread-safe objects)

Related

thread pooling in c++ - how to end the program

I've implemented thread pooling following the answer of Kerrek SB in this question.
I've implemented an MPMC queue for the functions and a vector of threads for the workers.
Everything worked perfectly, except that I don't know how to terminate the program: if I just call thread.join() at the end, the threads are still waiting for more tasks, so they never join and the main thread never continues.
Any idea how to end the program correctly?
For completeness, this is my code:
function_pool.h
#pragma once
#include <queue>
#include <functional>
#include <mutex>
#include <condition_variable>

class Function_pool
{
private:
    std::queue<std::function<void()>> m_function_queue;
    std::mutex m_lock;
    std::condition_variable m_data_condition;

public:
    Function_pool();
    ~Function_pool();
    void push(std::function<void()> func);
    std::function<void()> pop();
};
function_pool.cpp
#include "function_pool.h"

Function_pool::Function_pool() : m_function_queue(), m_lock(), m_data_condition()
{
}

Function_pool::~Function_pool()
{
}

void Function_pool::push(std::function<void()> func)
{
    std::unique_lock<std::mutex> lock(m_lock);
    m_function_queue.push(func);
    // When we send the notification, the consumer will immediately try to
    // get the lock, so unlock asap.
    lock.unlock();
    m_data_condition.notify_one();
}

std::function<void()> Function_pool::pop()
{
    std::unique_lock<std::mutex> lock(m_lock);
    m_data_condition.wait(lock, [this]() { return !m_function_queue.empty(); });
    auto func = m_function_queue.front();
    m_function_queue.pop();
    return func;
    // Lock will be released when it goes out of scope.
}
main.cpp
#include "function_pool.h"
#include <string>
#include <iostream>
#include <mutex>
#include <functional>
#include <thread>
#include <vector>

Function_pool func_pool;

void example_function()
{
    std::cout << "bla" << std::endl;
}

void infinite_loop_func()
{
    while (true)
    {
        std::function<void()> func = func_pool.pop();
        func();
    }
}

int main()
{
    std::cout << "starting operation" << std::endl;
    int num_threads = std::thread::hardware_concurrency();
    std::cout << "number of threads = " << num_threads << std::endl;
    std::vector<std::thread> thread_pool;
    for (int i = 0; i < num_threads; i++)
    {
        thread_pool.push_back(std::thread(infinite_loop_func));
    }

    // here we should send our functions
    func_pool.push(example_function);

    for (unsigned int i = 0; i < thread_pool.size(); i++)
    {
        thread_pool.at(i).join();
    }

    int i;
    std::cin >> i;
}
Your problem is located in infinite_loop_func, which is an infinite loop and consequently never terminates. I've read the previous answer, which suggests throwing an exception; however, I don't like that, since exceptions should not be used for regular control flow.
The best way to solve this is to explicitly deal with the stop condition. For example:
std::atomic<bool> acceptsFunctions;
Adding this to the function pool gives you explicit state and lets you assert that no new functions are being added when you destruct.
std::optional<std::function<void()>> Function_pool::pop()
Returning an empty optional (or an empty std::function, in C++14 and before) allows you to deal with an empty queue. You have to handle that case anyway, since a condition_variable can have spurious wakeups.
With this, m_data_condition.notify_all() can be used to wake all threads.
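A sketch of what that pop() could look like, assuming the members shown in the question plus the acceptsFunctions flag (C++17 for std::optional; untested):

std::optional<std::function<void()>> Function_pool::pop()
{
    std::unique_lock<std::mutex> lock(m_lock);
    m_data_condition.wait(lock, [this]() {
        return !m_function_queue.empty() || !acceptsFunctions;
    });
    if (m_function_queue.empty())
        return std::nullopt; // woken for shutdown with nothing left to run
    auto func = m_function_queue.front();
    m_function_queue.pop();
    return func;
}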
Finally, we have to fix the infinite loop so that it handles shutdown and at the same time still executes all functions remaining in the queue:
while (func_pool.acceptsFunctions || func_pool.containsFunctions())
{
    auto f = func_pool.pop();
    if (!f)
    {
        // Woken without work (shutdown or spurious wakeup); back off briefly.
        std::this_thread::sleep_for(1s);
        continue;
    }
    auto& function = *f;
    function();
}
I'll leave it up to you to implement containsFunctions() and to clean up the code (infinite_loop_func as a member function?). Note that with a counter you could even deal with background tasks being spawned.
You can always use a specific exception type to signal to infinite_loop_func that it should return...
class quit_worker_exception: public std::exception {};
Then change infinite_loop_func to...
void infinite_loop_func()
{
    while (true) {
        std::function<void()> func = func_pool.pop();
        try {
            func();
        }
        catch (quit_worker_exception& ex) {
            return;
        }
    }
}
With the above changes you could then use (in main)...
/*
 * Enqueue `thread_pool.size()' function objects whose sole job is
 * to throw an instance of `quit_worker_exception' when invoked.
 */
for (int i = 0; i < thread_pool.size(); i++)
    func_pool.push([](){ throw quit_worker_exception(); });

/*
 * Now just wait for each worker to terminate having received its
 * quit_worker_exception.
 */
for (int i = 0; i < thread_pool.size(); i++)
    thread_pool.at(i).join();
Each instance of infinite_loop_func will dequeue one function object which, when called, throws a quit_worker_exception causing it to return.
Following [JVApen](https://stackoverflow.com/posts/51382714/revisions)'s suggestion, I'm copying my code here in case anyone wants a working version:
function_pool.h
#pragma once
#include <queue>
#include <functional>
#include <mutex>
#include <condition_variable>
#include <atomic>
#include <cassert>

class Function_pool
{
private:
    std::queue<std::function<void()>> m_function_queue;
    std::mutex m_lock;
    std::condition_variable m_data_condition;
    std::atomic<bool> m_accept_functions;

public:
    Function_pool();
    ~Function_pool();
    void push(std::function<void()> func);
    void done();
    void infinite_loop_func();
};
function_pool.cpp
#include "function_pool.h"

Function_pool::Function_pool() : m_function_queue(), m_lock(), m_data_condition(), m_accept_functions(true)
{
}

Function_pool::~Function_pool()
{
}

void Function_pool::push(std::function<void()> func)
{
    std::unique_lock<std::mutex> lock(m_lock);
    m_function_queue.push(func);
    // When we send the notification, the consumer will immediately try to get the lock, so unlock asap.
    lock.unlock();
    m_data_condition.notify_one();
}

void Function_pool::done()
{
    std::unique_lock<std::mutex> lock(m_lock);
    m_accept_functions = false;
    lock.unlock();
    // When we send the notification, the consumer will immediately try to get the lock, so unlock asap.
    // Notify all waiting threads.
    m_data_condition.notify_all();
}

void Function_pool::infinite_loop_func()
{
    std::function<void()> func;
    while (true)
    {
        {
            std::unique_lock<std::mutex> lock(m_lock);
            m_data_condition.wait(lock, [this]() { return !m_function_queue.empty() || !m_accept_functions; });
            if (!m_accept_functions && m_function_queue.empty())
            {
                // Lock will be released automatically.
                // Finish the thread loop and let it join in the main thread.
                return;
            }
            func = m_function_queue.front();
            m_function_queue.pop();
            // The lock is released at the end of this scope.
        }
        func();
    }
}
main.cpp
#include "function_pool.h"
#include <string>
#include <iostream>
#include <mutex>
#include <functional>
#include <thread>
#include <vector>

Function_pool func_pool;

class quit_worker_exception : public std::exception {};

void example_function()
{
    std::cout << "bla" << std::endl;
}

int main()
{
    std::cout << "starting operation" << std::endl;
    int num_threads = std::thread::hardware_concurrency();
    std::cout << "number of threads = " << num_threads << std::endl;
    std::vector<std::thread> thread_pool;
    for (int i = 0; i < num_threads; i++)
    {
        thread_pool.push_back(std::thread(&Function_pool::infinite_loop_func, &func_pool));
    }

    // here we should send our functions
    for (int i = 0; i < 50; i++)
    {
        func_pool.push(example_function);
    }
    func_pool.done();

    for (unsigned int i = 0; i < thread_pool.size(); i++)
    {
        thread_pool.at(i).join();
    }
}

In C++11, is it wise (or even safe) to use std::unique_lock<std::mutex> as a class member? If so, are there any guidelines?

Is it wise (or even safe) to use std::unique_lock as a class member? If so, are there any guidelines?
My thinking in using std::unique_lock was to ensure that the mutex is unlocked in the case of an exception being thrown.
The following code gives an example of how I'm currently using the unique_lock. I would like to know if I'm going in the wrong direction or not before the project grows too much.
#include <iostream>
#include <string>
#include <thread>
#include <mutex>
#include <unistd.h>

class WorkerClass {
private:
    std::thread workerThread;
    bool workerThreadRunning;
    int workerThreadInterval;
    int sharedResource;
    std::mutex mutex;
    std::unique_lock<std::mutex> workerMutex;

public:
    WorkerClass() {
        workerThreadRunning = false;
        workerThreadInterval = 2;
        sharedResource = 0;
        workerMutex = std::unique_lock<std::mutex>(mutex);
        unlockMutex();
    }

    ~WorkerClass() {
        stopWork();
    }

    void startWork() {
        workerThreadRunning = true;
        workerThread = std::thread(&WorkerClass::workerThreadMethod, this);
    }

    void stopWork() {
        lockMutex();
        if (workerThreadRunning) {
            workerThreadRunning = false;
            unlockMutex();
            workerThread.join();
        } else {
            unlockMutex();
        }
    }

    void lockMutex() {
        try {
            workerMutex.lock();
        } catch (std::system_error &error) {
            std::cout << "Already locked" << std::endl;
        }
    }

    void unlockMutex() {
        try {
            workerMutex.unlock();
        } catch (std::system_error &error) {
            std::cout << "Already unlocked" << std::endl;
        }
    }

    int getSharedResource() {
        int result;
        lockMutex();
        result = sharedResource;
        unlockMutex();
        return result;
    }

    void workerThreadMethod() {
        bool isRunning = true;
        while (isRunning) {
            lockMutex();
            sharedResource++;
            std::cout << "WorkerThread: sharedResource = "
                      << sharedResource << std::endl;
            isRunning = workerThreadRunning;
            unlockMutex();
            sleep(workerThreadInterval);
        }
    }
};

int main(int argc, char *argv[]) {
    int sharedResource;
    WorkerClass *worker = new WorkerClass();

    std::cout << "ThisThread: Starting work..." << std::endl;
    worker->startWork();

    for (int i = 0; i < 10; i++) {
        sleep(1);
        sharedResource = worker->getSharedResource();
        std::cout << "ThisThread: sharedResource = "
                  << sharedResource << std::endl;
    }

    worker->stopWork();
    std::cout << "Done..." << std::endl;
    return 0;
}
This is actually quite bad: storing a std::unique_lock or std::lock_guard as a member variable misses the point of scoped locking, and of locking in general.
The idea is to have a lock shared between threads, where each thread only temporarily locks the shared resource the lock protects. The scoped wrapper object makes that safe against early returns and exceptions.
First think about your shared resource. In the context of a "Worker" I'd imagine some task queue; that task queue is associated with a lock, and each worker locks that lock with a scoped wrapper to enqueue or dequeue a task. There is no real reason to keep the lock held for as long as an instance of a worker thread is alive; it should lock it only when it needs to.
It is not a good idea to do that, for a number of reasons. The first you're already "handling" with the try-catch block: two threads attempting to lock through the same std::unique_lock object results in an exception, because the object already reports that it owns the mutex. If you want non-blocking lock attempts you should use try_lock instead.
The second reason is that when the std::unique_lock is stack-allocated in the scope where the lock is needed, it unlocks the mutex for you when it is destructed. That makes the code exception safe; in your current code, if workerThread.join() throws, the lock remains acquired.
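For reference, a minimal sketch of the pattern both paragraphs above point toward: keep the std::mutex as the member and construct a scoped lock inside each function that touches the shared data (the class and member names here are illustrative):

#include <mutex>

class Worker {
    std::mutex mtx_;
    int sharedResource_ = 0;

public:
    int getSharedResource() {
        std::lock_guard<std::mutex> lock(mtx_); // unlocked on return or exception
        return sharedResource_;
    }

    void increment() {
        std::lock_guard<std::mutex> lock(mtx_);
        ++sharedResource_;
    }
};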

deadline_timer strange behavior

I have a test class like this. What I want to do is keep the three timers in this object running. But after I instantiate an object, some timers keep repeating while others stop firing after about 3 minutes. Can anyone explain this to me?
class EventProcessor
{
private:
    boost::asio::deadline_timer* m_Timer0;
    boost::asio::deadline_timer* m_Timer1;
    boost::asio::deadline_timer* m_Timer2;
    boost::asio::io_service io0;
    boost::asio::io_service io1;
    boost::asio::io_service io2;
    int TimerInterval[3];
    boost::asio::deadline_timer* Timers[3];

public:
    EventProcessor(int p_0, int p_1, int p_2)
    {
        TimerInterval[0] = p_0;
        TimerInterval[1] = p_1;
        TimerInterval[2] = p_2;
        m_Timer0 = new boost::asio::deadline_timer(io0, boost::posix_time::seconds(TimerInterval[0]));
        Timers[0] = m_Timer0;
        m_Timer1 = new boost::asio::deadline_timer(io1, boost::posix_time::seconds(TimerInterval[1]));
        Timers[1] = m_Timer1;
        m_Timer2 = new boost::asio::deadline_timer(io2, boost::posix_time::seconds(TimerInterval[2]));
        Timers[2] = m_Timer2;
        m_Timer0->async_wait(boost::bind(&EventProcessor::HandleExpire, this, boost::asio::placeholders::error, 0));
        m_Timer1->async_wait(boost::bind(&EventProcessor::HandleExpire, this, boost::asio::placeholders::error, 1));
        m_Timer2->async_wait(boost::bind(&EventProcessor::HandleExpire, this, boost::asio::placeholders::error, 2));
        StartWithNewThread(0);
        StartWithNewThread(1);
        StartWithNewThread(2);
    }

private:
    void HandleExpire(const boost::system::error_code& p_ec, int p_TimerIndex)
    {
        if (p_ec == boost::asio::error::operation_aborted)
        {
            std::cout << "Timer" << p_TimerIndex << " canceled" << std::endl;
            return;
        }
        std::cout << "Timer" << p_TimerIndex << " expired" << std::endl;
        //Reset(p_OriginalTimer, TimerInterval[p_TimerIndex], p_TimerIndex);
        boost::thread Thread(boost::bind(&EventProcessor::Reset, this, p_TimerIndex, TimerInterval[p_TimerIndex]));
    }

    void Start(int p_Index)
    {
        boost::asio::io_service& UnderlyingIO = Timers[p_Index]->get_io_service();
        UnderlyingIO.reset();
        UnderlyingIO.run();
        UnderlyingIO.stop();
        return;
    }

    void StartWithNewThread(int p_Index)
    {
        boost::thread Thread(boost::bind(&EventProcessor::Start, this, p_Index));
        std::cout << Thread.get_id() << "<->" << "Timer" << p_Index << std::endl;
        return;
    }

public:
    void Reset(int p_Index, int p_Seconds)
    {
        Timers[p_Index]->cancel();
        Timers[p_Index]->expires_from_now(boost::posix_time::time_duration(0, 0, p_Seconds, 0));
        TimerInterval[p_Index] = p_Seconds;
        Timers[p_Index]->async_wait(boost::bind(&EventProcessor::HandleExpire, this, boost::asio::placeholders::error, p_Index));
        boost::asio::io_service& UnderlyingIO = Timers[p_Index]->get_io_service();
        UnderlyingIO.reset();
        UnderlyingIO.run();
        UnderlyingIO.stop();
        return;
    }
};
So this is how you should do it:
#include <boost/asio.hpp>
#include <boost/thread.hpp>
#include <boost/atomic.hpp>
#include <iostream>
#include <memory>
#include <cstdio>

class EventProcessor
{
private:
    std::unique_ptr<boost::asio::deadline_timer> m_Timers[3];
    boost::asio::io_service service;
    boost::atomic<int> TimerInterval[3];

public:
    EventProcessor(int time0, int time1, int time2)
    {
        TimerInterval[0] = time0;
        TimerInterval[1] = time1;
        TimerInterval[2] = time2;
        for (int i = 0; i < 3; i++)
        {
            m_Timers[i].reset(new boost::asio::deadline_timer(service));
        }
    }

    ~EventProcessor()
    {
        service.stop();
        for (int i = 0; i < 3; i++)
        {
            m_Timers[i]->cancel();
        }
    }

    void Run()
    {
        for (int i = 0; i < 3; i++)
        {
            m_Timers[i]->expires_from_now(boost::posix_time::seconds(TimerInterval[i]));
            m_Timers[i]->async_wait(boost::bind(&EventProcessor::HandleExpire, this, i, _1));
        }
        service.run();
    }

    void RunAsync()
    {
        boost::thread(boost::bind(&EventProcessor::Run, this));
    }

    void Reset(int i, int seconds)
    {
        TimerInterval[i] = seconds;
        m_Timers[i]->expires_from_now(boost::posix_time::seconds(TimerInterval[i]));
        m_Timers[i]->async_wait(boost::bind(&EventProcessor::HandleExpire, this, i, _1));
    }

private:
    void HandleExpire(int p_TimerIndex, const boost::system::error_code& error)
    {
        if (error == boost::asio::error::operation_aborted)
        {
            std::cout << "Timer" << p_TimerIndex << " canceled" << std::endl;
            return;
        }
        std::cout << "Timer" << p_TimerIndex << " expired" << std::endl;
        //Reset(p_OriginalTimer, TimerInterval[p_TimerIndex], p_TimerIndex);
        m_Timers[p_TimerIndex]->expires_from_now(
            boost::posix_time::seconds(TimerInterval[p_TimerIndex]));
        m_Timers[p_TimerIndex]->async_wait(boost::bind(&EventProcessor::HandleExpire,
                                                       this,
                                                       p_TimerIndex,
                                                       _1));
    }
};

int main()
{
    EventProcessor ev(1, 2, 3);
    ev.RunAsync();
    getchar();
    ev.Reset(2, 4);
    getchar();
}
Granted I don't have any of the fancy checkers to see if you are currently running or not (which you totally need if you want this to be safe to use).
You can think of boost::asio::io_service as a context in which async calls can be made. It creates a FIFO queue of messages to process, and processes them where and when you tell it to. The most common way to process these messages is boost::asio::io_service::run, which will process messages until there is nothing left to be done. "nothing left to be done" is a flexible definition: it doesn't necessarily mean there is a message to process, just that there is stuff to be done. Things like a deadline timer make sure that there is "something to be done" as long as an async_wait is going on until the handler is called. You can manually enforce that there is something to be done by creating a boost::asio::io_service::work instance. This makes it so that there is "something left to be done" for the lifetime of the work object.
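As an illustration of the work object mentioned above, a minimal sketch (not the poster's code; names are illustrative):

#include <boost/asio.hpp>
#include <boost/thread.hpp>

int main()
{
    boost::asio::io_service io;
    boost::asio::io_service::work keep_alive(io); // "something left to be done"

    boost::thread runner([&io] { io.run(); });    // run() now blocks until stopped

    // ... post handlers / start timers against io here ...

    io.stop();      // or destroy the work object to let run() finish naturally
    runner.join();
}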
The deadline timer class takes care of all the async calls for you, so you don't have to spawn all those threads. The io_service performs synchronization, which is necessary to prevent annoying control issues.
So to the problem with your code:
With all those threads controlling the io_service, it is hard to tell what is actually going wrong, so I have to guess at what could be happening. I'd put my money on this: somewhere along the line you call cancel on a timer before it times out, which will stop your loop. I solve this in my code by doing all the control (calling async_wait) in one synchronous thread (the io_service::run call) and only calling cancel when I want the code to stop.

How to check if a std::thread is still running?

How can I check if a std::thread is still running (in a platform independent way)?
It lacks a timed_join() method and joinable() is not meant for that.
I thought of locking a mutex with a std::lock_guard in the thread and using the try_lock() method of the mutex to determine if it is still locked (the thread is running), but it seems unnecessarily complex to me.
Do you know a more elegant method?
Update: To be clear: I want to check if the thread cleanly exited or not. A 'hanging' thread is considered running for this purpose.
If you are willing to make use of C++11 std::async and std::future for running your tasks, then you can utilize the wait_for function of std::future to check if the thread is still running in a neat way like this:
#include <future>
#include <thread>
#include <chrono>
#include <iostream>
int main() {
using namespace std::chrono_literals;
/* Run some task on new thread. The launch policy std::launch::async
makes sure that the task is run asynchronously on a new thread. */
auto future = std::async(std::launch::async, [] {
std::this_thread::sleep_for(3s);
return 8;
});
// Use wait_for() with zero milliseconds to check thread status.
auto status = future.wait_for(0ms);
// Print status.
if (status == std::future_status::ready) {
std::cout << "Thread finished" << std::endl;
} else {
std::cout << "Thread still running" << std::endl;
}
auto result = future.get(); // Get result.
}
If you must use std::thread then you can use std::promise to get a future object:
#include <future>
#include <thread>
#include <chrono>
#include <iostream>
int main() {
using namespace std::chrono_literals;
// Create a promise and get its future.
std::promise<bool> p;
auto future = p.get_future();
// Run some task on a new thread.
std::thread t([&p] {
std::this_thread::sleep_for(3s);
p.set_value(true); // Is done atomically.
});
// Get thread status using wait_for as before.
auto status = future.wait_for(0ms);
// Print status.
if (status == std::future_status::ready) {
std::cout << "Thread finished" << std::endl;
} else {
std::cout << "Thread still running" << std::endl;
}
t.join(); // Join thread.
}
Both of these examples will output:
Thread still running
This is of course because the thread status is checked before the task is finished.
But then again, it might be simpler to just do it like others have already mentioned:
#include <thread>
#include <atomic>
#include <chrono>
#include <iostream>
int main() {
using namespace std::chrono_literals;
std::atomic<bool> done(false); // Use an atomic flag.
/* Run some task on a new thread.
Make sure to set the done flag to true when finished. */
std::thread t([&done] {
std::this_thread::sleep_for(3s);
done = true;
});
// Print status.
if (done) {
std::cout << "Thread finished" << std::endl;
} else {
std::cout << "Thread still running" << std::endl;
}
t.join(); // Join thread.
}
Edit:
There's also the std::packaged_task for use with std::thread for a cleaner solution than using std::promise:
#include <future>
#include <thread>
#include <chrono>
#include <iostream>
int main() {
using namespace std::chrono_literals;
// Create a packaged_task using some task and get its future.
std::packaged_task<void()> task([] {
std::this_thread::sleep_for(3s);
});
auto future = task.get_future();
// Run task on new thread.
std::thread t(std::move(task));
// Get thread status using wait_for as before.
auto status = future.wait_for(0ms);
// Print status.
if (status == std::future_status::ready) {
// ...
}
t.join(); // Join thread.
}
An easy solution is to have a boolean variable that the thread sets to true at regular intervals, and that the thread wanting to know the status checks and sets back to false. If the variable stays false for too long, the thread is no longer considered active.
A more thread-safe way is to have a counter that is increased by the child thread; the main thread compares the counter to a stored value and, if it is unchanged after too long a time, considers the child thread inactive (sketched below).
Note, however, that there is no way in C++11 to actually kill or remove a thread that has hung.
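A sketch of that counter idea (names and timings are illustrative):

#include <atomic>
#include <chrono>
#include <thread>

std::atomic<unsigned> heartbeat{0};

void child()
{
    for (int step = 0; step < 100; ++step) {
        ++heartbeat;                                                // signal progress
        std::this_thread::sleep_for(std::chrono::milliseconds(10)); // the "work"
    }
}

int main()
{
    std::thread t(child);

    unsigned last_seen = heartbeat.load();
    std::this_thread::sleep_for(std::chrono::milliseconds(200));    // grace period
    bool considered_active = (heartbeat.load() != last_seen);       // unchanged => presumed hung or done
    (void)considered_active;

    t.join();
}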
Edit How to check if a thread has cleanly exited or not: Basically the same technique as described in the first paragraph; Have a boolean variable initialized to false. The last thing the child thread does is set it to true. The main thread can then check that variable, and if true do a join on the child thread without much (if any) blocking.
Edit2 If the thread exits due to an exception, then have two thread "main" functions: The first one have a try-catch inside which it calls the second "real" main thread function. This first main function sets the "have_exited" variable. Something like this:
std::atomic<bool> thread_done{false};

void *thread_function(void *arg)
{
    void *res = nullptr;
    try
    {
        res = real_thread_function(arg);
    }
    catch (...)
    {
    }
    thread_done = true;
    return res;
}
You can use this simple mechanism to detect that a thread has finished, without blocking in the join method:
std::thread thread([&thread]() {
    sleep(3);
    thread.detach();
});

while (thread.joinable())
    sleep(1);
You can always check if the thread's id is different from a default-constructed std::thread::id(). A running thread always has a genuine associated id.
Try to avoid too much fancy stuff :)
Create a mutex that the running thread and the calling thread both have access to. When the running thread starts it locks the mutex, and when it ends it unlocks the mutex. To check if the thread is still running, the calling thread calls mutex.try_lock(). The return value of that is the status of the thread. (Just make sure to unlock the mutex if the try_lock worked)
One small problem with this, mutex.try_lock() will return false between the time the thread is created, and when it locks the mutex, but this can be avoided using a slightly more complex method.
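A sketch of that idea, including the caveat from the last sentence (the short sleep is only a crude way of letting the worker grab the mutex first; everything here is illustrative):

#include <mutex>
#include <thread>
#include <chrono>

std::mutex running_mtx;

void worker()
{
    std::lock_guard<std::mutex> hold(running_mtx);        // held for the worker's lifetime
    std::this_thread::sleep_for(std::chrono::seconds(2)); // the actual work
}

int main()
{
    std::thread t(worker);
    std::this_thread::sleep_for(std::chrono::milliseconds(100)); // crude: give the worker time to lock

    if (running_mtx.try_lock()) {
        running_mtx.unlock(); // we got it, so the worker is not (or no longer) running
    } else {
        // still locked: the worker is still running
    }

    t.join();
}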
Surely have a mutex-wrapped variable initialised to false, that the thread sets to true as the last thing it does before exiting. Is that atomic enough for your needs?
I checked both approaches, on a 6-core Core i7 laptop:
Using thread + atomic: takes 9738 milliseconds
Using future + async: takes 7746 milliseconds
Without threads: 56000 milliseconds
My code creates 4000 threads, but no more than 12 run at any one time.
Here is the code:
#include <iostream>
#include <thread>
#include <future>
#include <chrono>
#include <mutex> // std::mutex
#include <atomic>
#include <chrono>
#pragma warning(disable:4996)
#pragma warning(disable:6031)
#pragma warning(disable:6387)//strout
#pragma warning(disable:26451)
using namespace std;
const bool FLAG_IMPRIME = false;
const int MAX_THREADS = 12;
mutex mtx; // mutex for critical section
atomic <bool> th_end[MAX_THREADS];
atomic <int> tareas_acabadas;
typedef std::chrono::high_resolution_clock t_clock; // Windows only
std::chrono::time_point<t_clock> start_time, stop_time; char null_char;
void timer(const char* title = 0, int data_size = 1) { stop_time = t_clock::now(); double us = (double)chrono::duration_cast<chrono::microseconds>(stop_time - start_time).count(); if (title) printf("%s time = %7lgms = %7lg MOPs\n", title, (double)us * 1e-3, (double)data_size / us); start_time = t_clock::now(); }
class c_trim
{
char line[200];
thread th[MAX_THREADS];
double th_result[MAX_THREADS];
int th_index;
double milliseconds_commanded;
void hilo(int hindex,int milliseconds, double& milliseconds2)
{
sprintf(line, "%i:%ia ",hindex, milliseconds); imprime(line);
this_thread::sleep_for(std::chrono::milliseconds(milliseconds));
milliseconds2 = milliseconds * 1000;
sprintf(line, "%i:%ib ", hindex, milliseconds); imprime(line);
tareas_acabadas++; th_end[hindex] = true;
}
int wait_first();
void imprime(char* str) { if (FLAG_IMPRIME) { mtx.lock(); cout << str; mtx.unlock(); } }
public:
void lanzatareas();
vector <future<void>> futures;
int wait_first_future();
void lanzatareas_future();//usa future
};
int main()
{
c_trim trim;
timer();
trim.lanzatareas();
cout << endl;
timer("4000 tareas using THREAD+ATOMIC:", 4000);
trim.lanzatareas_future();
cout << endl;
timer("4000 tareas using FUTURE:", 4000);
cout << endl << "Tareas acabadas:" << tareas_acabadas << endl;
cout << "=== END ===\n"; (void)getchar();
}
void c_trim::lanzatareas()
{
th_index = 0;
tareas_acabadas = 0;
milliseconds_commanded = 0;
double *timeout=new double[MAX_THREADS];
int i;
for (i = 0; i < MAX_THREADS; i++)
{
th_end[i] = true;
th_result[i] = timeout[i] = -1;
}
for (i = 0; i < 4000; i++)
{
int milliseconds = 5 + (i % 10) * 2;
{
int j = wait_first();
if (th[j].joinable())
{
th[j].join();
th_result[j] = timeout[j];
}
milliseconds_commanded += milliseconds;
th_end[j] = false;
th[j] = thread(&c_trim::hilo, this, j, milliseconds, std::ref(timeout[j]));
}
}
for (int j = 0; j < MAX_THREADS; j++)
if (th[j].joinable())
{
th[j].join();
th_result[j] = timeout[j];
}
delete[] timeout;
cout <<endl<< "Milliseconds commanded to wait=" << milliseconds_commanded << endl;
}
void c_trim::lanzatareas_future()
{
futures.clear();
futures.resize(MAX_THREADS);
tareas_acabadas = 0;
milliseconds_commanded = 0;
double* timeout = new double[MAX_THREADS];
int i;
for (i = 0; i < MAX_THREADS; i++)
{
th_result[i] = timeout[i] = -1;
}
for (i = 0; i < 4000; i++)
{
int milliseconds = 5 + (i % 10) * 2;
{
int j;
if (i < MAX_THREADS) j = i;
else
{
j = wait_first_future();
futures[j].get();
th_result[j] = timeout[j];
}
milliseconds_commanded += milliseconds;
futures[j] = std::async(std::launch::async, &c_trim::hilo, this, j, milliseconds, std::ref(timeout[j]));
}
}
//Last MAX_THREADS:
for (int j = 0; j < MAX_THREADS; j++)
{
futures[j].get();
th_result[j] = timeout[j];
}
delete[] timeout;
cout << endl << "Milliseconds commanded to wait=" << milliseconds_commanded << endl;
}
int c_trim::wait_first()
{
int i;
while (1)
for (i = 0; i < MAX_THREADS; i++)
{
if (th_end[i] == true)
{
return i;
}
}
}
// Waits for some future to finish and returns its index
int c_trim::wait_first_future()
{
int i;
std::future_status status;
while (1)
for (i = 0; i < MAX_THREADS; i++)
{
status = futures[i].wait_for(0ms);
if (status == std::future_status::ready)
return i;
}
}
I also ran into this problem very recently. I tried the C++20 std::jthread, using the shared stop state to check whether the thread is over, but inside the thread the std::stop_token argument is read-only and doesn't signal to the outside when the thread finishes.
So I created a simple class (nes::uthread) extending std::thread with a flag that indicates when it has finished. Example:
#include <atomic>
#include <chrono>
#include <iostream>
#include <memory>
#include <thread>
namespace nes {
class uthread final
{
std::unique_ptr<std::atomic<bool>> m_finished;
std::thread m_thr;
public:
uthread()
: m_finished { std::make_unique<std::atomic<bool>>(true) }
{}
template <class Function, class... Args>
uthread(Function&& f, Args&&... args)
: m_finished { std::make_unique<std::atomic<bool>>(false) }
, m_thr {
[](std::atomic<bool>& finished, Function&& ff, Args&&... aargs) {
try {
std::forward<Function>(ff)(std::forward<Args>(aargs)...);
finished = true;
} catch (...) {
finished = true;
throw;
}
},
std::ref(*m_finished), std::forward<Function>(f),
std::forward<Args>(args)...
}
{}
uthread(const uthread&) = delete;
uthread(uthread&&) = default;
uthread& operator=(const uthread&) = delete;
uthread& operator=(uthread&&) = default;
[[nodiscard]] std::thread::id get_id() const noexcept {
return m_thr.get_id(); }
[[nodiscard]] bool joinable() const noexcept { return m_thr.joinable(); }
void join() { m_thr.join(); }
[[nodiscard]] const std::atomic<bool>& finished() const noexcept {
return *m_finished; }
};
}
int main()
{
using namespace std;
using namespace std::chrono;
using namespace std::chrono_literals;
using namespace nes;
{
cout << "std::thread join() termination\n";
atomic<bool> finished = false;
thread t { [&finished] {
this_thread::sleep_for(2s);
finished = true;
cout << "thread ended\n";
}};
for (int i = 0; i < 5; i++) {
cout << t.get_id() << ".join() " << t.joinable()
<< " finished: " << finished << '\n';
this_thread::sleep_for(1s);
}
t.join();
}
cout << '\n';
{
cout << "std::jthread join() termination\n";
jthread t {[](stop_token st) {
this_thread::sleep_for(2s);
cout << "thread ended. stop possible: " << st.stop_possible() << '\n';
}};
auto st = t.get_stop_source();
for (int i = 0; i < 5; i++) {
cout << t.get_id() << ".join() " << t.joinable()
<< " finished: " << !st.stop_possible() << '\n';
this_thread::sleep_for(1s);
}
}
cout << '\n';
{
cout << "nes::uthread join() termination\n";
uthread t {[] {
this_thread::sleep_for(2s);
cout << "thread ended\n";
}};
for (int i = 0; i < 5; i++) {
cout << t.get_id() << ".join() " << t.joinable()
<< " finished: " << t.finished() << '\n';
this_thread::sleep_for(1s);
}
t.join();
}
}
Possible prints:
std::thread join() termination
2.join() 1 finished: 0
2.join() 1 finished: 0
thread ended
2.join() 1 finished: 1
2.join() 1 finished: 1
2.join() 1 finished: 1
std::jthread join() termination
3.join() 1 finished: 0
3.join() 1 finished: 0
thread ended. stop possible: 1
3.join() 1 finished: 0
3.join() 1 finished: 0
3.join() 1 finished: 0
nes::uthread join() termination
4.join() 1 finished: 0
4.join() 1 finished: 0
thread ended
4.join() 1 finished: 1
4.join() 1 finished: 1
4.join() 1 finished: 1
You can use std::jthread in nes::uthread so you don't need to join.

Boost Threads and Timers, C++

I have this code for a custom class 'sau_timer':
sau_timer::sau_timer(int secs, timerparam f, vector<string> params)
    : strnd(io), t(io, boost::posix_time::seconds(secs))
{
    assert(secs > 0);
    this->f = f;
    this->params = params;
    t.async_wait(strnd.wrap(boost::bind(&sau_timer::exec, this, _1)));
    boost::thread thrd(boost::bind(&boost::asio::io_service::run, &io));
    io.run();
}

void sau_timer::exec(const boost::system::error_code&) {
    (f)(params);
}
I want it so that when I make a sau_timer object, the timer will start, but allow program execution to continue. For example, this is main():
int main(int argc, char* argv[])
{
    vector<string> args(1);
    args[0] = "Hello!";
    sau_timer timer_test(3, sau_prompt, args);
    args[0] = "First!";
    sau_prompt(args);
    timer_test.thrd.join();
    return 0;
}
My intention here is that timer_test is made, starting a timer that waits three seconds before calling sau_prompt("Hello!"), but that sau_prompt("First!") will be called first. At the moment, Hello is shown in the prompt before First, indicating that the timer is halting the entire program for three seconds before allowing it to continue. I want the timer to run in the background.
What am I doing wrong? The code compiles...
Thank you.
You're calling "io.run()" in sau_timer's constructor - that essentially tells the asio reactor to process any/all pending async events, and it blocks until they are done.
You should set up the events first and then call run (or post further work) from a thread that is allowed to block, which is how it's normally done. Check out the examples in the asio documentation.
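In other words, something along these lines for the constructor from the question (a sketch only, assuming thrd is the boost::thread member that main() joins):

sau_timer::sau_timer(int secs, timerparam f, vector<string> params)
    : strnd(io), t(io, boost::posix_time::seconds(secs))
{
    assert(secs > 0);
    this->f = f;
    this->params = params;
    t.async_wait(strnd.wrap(boost::bind(&sau_timer::exec, this, _1)));

    // Run the io_service on the worker thread only; calling io.run() here as well
    // blocks the constructor until the timer fires.
    thrd = boost::thread(boost::bind(&boost::asio::io_service::run, &io));
}

A fuller, standalone example with many timers and a thread pool follows.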
#include <iostream>
#include <asio.hpp>
#include <boost/shared_ptr.hpp>
#include <boost/enable_shared_from_this.hpp>
#include <boost/lexical_cast.hpp>
#include <boost/date_time.hpp>
#include <boost/thread.hpp>
class event_timer
{
public:
event_timer(asio::io_service& io_service,
const std::size_t& tid,
const std::size_t& interval = 5)
: io_service_(io_service),
timer_(io_service),
tid_(tid),
interval_(interval),
tick_count_(0),
total_diff_(0)
{
}
void start()
{
timer_.cancel();
initiate_timer();
}
void stop()
{
timer_.cancel();
}
void set_interval(const std::size_t& milliseconds)
{
interval_ = milliseconds;
}
private:
inline void initiate_timer()
{
if (interval_)
{
timer_.expires_from_now(boost::posix_time::milliseconds(interval_));
timer_.async_wait(
boost::bind(&event_timer::handle_timer_event,this,
asio::placeholders::error));
before_ = boost::posix_time::microsec_clock::universal_time();
}
}
inline void handle_timer_event(const asio::error_code& error)
{
if (!error && interval_)
{
after_ = boost::posix_time::microsec_clock::universal_time();
boost::posix_time::time_duration duration = after_ - before_;
total_diff_ += std::abs(interval_ - duration.total_milliseconds());
++tick_count_;
if (tick_count_ < 200)
initiate_timer();
else
std::cout << "Timer["<< tid_ <<"]\tTick["<< tick_count_ <<"] Average Diff: " << total_diff_ / (1.0 * tick_count_) << std::endl;
}
}
asio::io_service& io_service_;
std::size_t tid_;
std::size_t interval_;
std::size_t tick_count_;
asio::deadline_timer timer_;
boost::posix_time::ptime before_;
boost::posix_time::ptime after_;
std::size_t total_diff_;
};
int main()
{
std::cout << "Timer Test" << std::endl;
asio::io_service io_service;
try
{
const std::size_t et_cnt = 1000;
std::vector<event_timer*> et_lst;
for(unsigned int i = 0; i < et_cnt; ++i)
{
et_lst.push_back(new event_timer(io_service,i,10));
}
for(unsigned int i = 0; i < et_cnt;)
{
et_lst[i++]->start();
}
std::size_t thread_pool_size = 100;
//Create a pool of threads to run all of the io_services.
std::vector<boost::shared_ptr<boost::thread> > threads;
for (std::size_t i = 0; i < thread_pool_size; ++i)
{
boost::shared_ptr<boost::thread> thread(new boost::thread(boost::bind(&asio::io_service::run, &io_service)));
threads.push_back(thread);
}
// Wait for all threads in the pool to exit.
for (std::size_t i = 0; i < threads.size(); ++i)
threads[i]->join();
for(unsigned int i = 0; i < et_cnt; delete et_lst[i++]);
}
catch(std::exception& e)
{
std::cout << "Exception: " << e.what() << std::endl;
return 1;
}
return 0;
}
With the boost namespace and Boost version 1.69, three modifications need to be made to the example above:
change #include <asio.hpp> into #include <boost/asio.hpp>
add: using namespace boost; using namespace boost::asio;
change inline void handle_timer_event(const asio::error_code& error) into void handle_timer_event(const boost::system::error_code& error)