I have an object that once created executes many tasks in the background, but should block until /all/ posted tasks are finished. I.e.:
struct run_many {
    boost::asio::io_service m_io_service;
    boost::thread_group m_threads;
    boost::asio::signal_set m_signals;

    void evaluate(std::string work, int i) { /*...*/ }

    void run_tasks(int tasks, std::string work)
    {
        {
            boost::asio::io_service::work w(m_io_service);
            for (int i = 0; i < tasks; i++)
                m_io_service.post(boost::bind(&run_many::evaluate, this, work, i));
        }
        //m_io_service.run();  // blocks forever
        m_io_service.stop();   // seems to cut off queued jobs
        m_threads.join_all();  // works only after m_io_service.stop()
    }

    run_many(int n_workers)
        : m_signals(m_io_service)
    {
        for (int i = 0; i < n_workers; i++)
            m_threads.create_thread(boost::bind(&boost::asio::io_service::run, &m_io_service));
    }
};
So I am stuck... it seems that I can either wait forever or cut off the queue after the currently running job in each thread. There must be something I'm missing in the docs?
According to the documentation, this idea should work (pseudocode):
...
// put a few handlers into io_service
...
// don't forget to destroy all boost::asio::io_service::work objects
while (!m_io_service.stopped())
{
    m_io_service.run();
}
// when the code reaches this line, m_io_service will be in the stopped state
// and all handlers will have been executed
// this code can be called without any fear of deadlocking
m_threads.join_all();
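For example, here is a minimal sketch of that idea applied to the class from the question (the names mirror the question, but this is not the original poster's code): the work object is created before the worker threads, so their run() calls don't return prematurely, and run_tasks() destroys it and joins once everything has been posted.
#include <string>
#include <boost/asio.hpp>
#include <boost/bind.hpp>
#include <boost/scoped_ptr.hpp>
#include <boost/thread.hpp>

struct run_many {
    boost::asio::io_service m_io_service;
    boost::scoped_ptr<boost::asio::io_service::work> m_work; // keeps run() from returning early
    boost::thread_group m_threads;

    explicit run_many(int n_workers)
        : m_work(new boost::asio::io_service::work(m_io_service))
    {
        for (int i = 0; i < n_workers; ++i)
            m_threads.create_thread(
                boost::bind(&boost::asio::io_service::run, &m_io_service));
    }

    void evaluate(std::string work, int i) { /* ... */ }

    void run_tasks(int tasks, std::string work)
    {
        for (int i = 0; i < tasks; ++i)
            m_io_service.post(boost::bind(&run_many::evaluate, this, work, i));
        m_work.reset();       // no more work objects: run() may return once the queue drains
        m_threads.join_all(); // blocks until every posted task has been executed
    }
};
Note that in this sketch run_tasks() can only be called once, since the worker threads exit after join_all().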
Related
I have a problem where two threads are created like this, one after another.
new boost::thread( &SERVER::start_receive, this);
new boost::thread( &SERVER::run_io_service, this);
Where the first thread calls this function.
void start_receive()
{
udp_socket.async_receive(....);
}
and the second thread calls,
void run_io_service()
{
io_service.run();
}
and sometimes the io_service thread ends up finishing before the start_receive() thread, and then the server cannot receive packets.
I thought about putting a sleep between starting the two threads to give start_receive() time to complete, and that works, but I wondered if there was a more surefire way to make this happen?
When you call io_service.run(), the thread will block, dispatching posted handlers until either:
There are no io_service::work objects associated with the io_service, or
io_service.stop() is called.
If either of these happens, the io_service enters the stopped state and will refuse to dispatch any more handlers in future until its reset() method is called.
Every time you initiate an asynchronous operation on an io object associated with the io_service, an io_service::work object is embedded in the asynchronous handler.
For this reason, point (1) above cannot happen until the asynchronous handler has run.
This code therefore guarantees that the async process completes and that the asserts pass:
asio::io_service ios; // ios is not in stopped state
assert(!ios.stopped());
auto obj = some_io_object(ios);
bool completed = false;
obj.async_something(..., [&](auto const& ec) { completed = true; });
// nothing will happen yet. There is now 1 work object associated with ios
assert(!completed);
auto ran = ios.run();
assert(completed);
assert(ran == 1); // only 1 async op waiting for completion.
assert(ios.stopped()); // io_service is exhausted and no work remaining
ios.reset();
assert(!ios.stopped()); // io_service is ready to run again
If you want to keep the io_service running, create a work object:
boost::asio::io_service svc;
auto work = std::make_shared<boost::asio::io_service::work>(svc);
svc.run(); // this will block as long as the work object is valid.
The nice thing about this approach is that the work object above will keep the svc object "running", but not block any other operations on it.
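For example, when shutting down you can release it (a sketch using the names from the snippet above):
work.reset(); // destroy the work object
// svc.run() will now return once the remaining posted handlers have finished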
In a project we're creating multiple state machines in a wrapper class. Each wrapper runs in its own thread. When the job is done, the wrapper class destructor is called, and in there we would like to stop the thread.
Though if we use thread.join(), we get a deadlock (since the thread tries to join itself). We could somehow signal another thread, but that seems a bit messy.
Is there any way to properly terminate the thread in which a class is running, upon object destruction?
thread.join() does not stop a thread. It waits for the thread to finish and then returns. In order to stop a thread you have to have some way of telling the thread to stop, and the thread has to check to see whether it's time to stop. One way to do that is with an atomic bool:
#include <atomic>
#include <thread>

class my_thread {
public:
    my_thread() : done(false) { }
    ~my_thread() { done = true; if (thr.joinable()) thr.join(); }
    void run() { std::thread th(&my_thread::do_it, this); std::swap(th, thr); }
private:
    void do_it() { while (!done) { /* ... */ } }
    std::thread thr;
    std::atomic<bool> done;
};
That's off the top of my head; not compiled, not tested.
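For completeness, a hypothetical usage sketch (not part of the original answer; it assumes do_it() does some periodic work in its loop):
#include <chrono>

int main() {
    my_thread t;
    t.run();                                                      // start the worker loop
    std::this_thread::sleep_for(std::chrono::milliseconds(100));  // let it run briefly
    return 0; // ~my_thread() sets done = true and joins the worker
}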
I have a loop in C++11 like this:
while (true)
{
    std::thread t0(some_work, 0);
    ...
    std::thread tn(some_work, n);

    t0.join();
    ...
    tn.join();
}
but creating new threads in every iteration isn't good, of course. In C it's easy to use messages to tell threads to wait for another iteration, but I want to do it with C++11 tools. I looked at condition_variable, but I don't think it's the solution. What can I do?
Use Boost to get through!
Note: I do not have C++11, so the code I will show you is for C++98 with the Boost libraries. Most Boost stuff ends up in std::tr1 and subsequently in later versions of the standard, so most of this is probably transferable without Boost.
It sounds like you have multiple threads that you are constantly, but not consistently, assigning work to do. The work doesn't always need performing (otherwise your thread could do it in its own loop) or perhaps the thread doesn't have the information to perform it. If this is the case, consider boost::asio::io_service.
With this, you will need to create a thread that is always running, so you'll probably want to put your threads in a class (although you don't need to).
class WorkerThread
{
public:
    WorkerThread()
        : io_service(), runThread(true),
          thread(&WorkerThread::HandleWorkThread, this) // started last, after the members it uses
    {
    }

    ~WorkerThread()
    {
        // Inform the thread not to run anymore:
        runThread = false;
        // Wait for the thread to finish:
        thread.join();
    }

    void AssignWork(boost::function<void()> workFunc) { io_service.post(workFunc); }

private:
    void HandleWorkThread()
    {
        while (runThread)
        {
            // handle work:
            io_service.run();
            // prepare for more work:
            io_service.reset();
        }
    }

    boost::asio::io_service io_service;
    bool runThread; // NB: this should be atomic
    boost::thread thread; // declared last so it is constructed after io_service and runThread
};
Now you can have the following:
void CalculateThings(int, int);
void CalculateThingsComplex(int, int, double);

// Create two threads. The threads will continue to run and wait for work to do
WorkerThread thread1, thread2;

while (true)
{
    thread1.AssignWork(boost::bind(&CalculateThings, 20, 30));
    thread2.AssignWork(boost::bind(&CalculateThingsComplex, 2, 5, 3.14));
}
You can continue to assign as much work as necessary. Once the WorkerThreads go out of scope, they will stop running and close nicely.
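As an aside, here is a variant of the same wrapper (a sketch, not part of the original answer) that avoids spinning between run() and reset() when idle: it holds an io_service::work object so the thread parks inside run(), and releases it on destruction.
#include <boost/asio.hpp>
#include <boost/bind.hpp>
#include <boost/function.hpp>
#include <boost/scoped_ptr.hpp>
#include <boost/thread.hpp>

class WorkerThread
{
public:
    WorkerThread()
        : io_service(),
          work(new boost::asio::io_service::work(io_service)), // keeps run() blocked while idle
          thread(boost::bind(&boost::asio::io_service::run, &io_service))
    {
    }

    ~WorkerThread()
    {
        work.reset();  // let run() return once any queued work has finished
        thread.join();
    }

    void AssignWork(boost::function<void()> workFunc) { io_service.post(workFunc); }

private:
    boost::asio::io_service io_service;
    boost::scoped_ptr<boost::asio::io_service::work> work;
    boost::thread thread; // declared last so it starts after io_service and work exist
};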
boost::asio::io_service ioService;
boost::thread_group threadpool;
boost::barrier barrier(5);
boost::asio::io_service::work work(ioService);
threadpool.create_thread(boost::bind(&boost::asio::io_service::run, &ioService)); //Thread 1
threadpool.create_thread(boost::bind(&boost::asio::io_service::run, &ioService)); //Thread 2
threadpool.create_thread(boost::bind(&boost::asio::io_service::run, &ioService)); //Thread 3
threadpool.create_thread(boost::bind(&boost::asio::io_service::run, &ioService)); //Thread 4
while (true)
{
    {
        boost::lock_guard<boost::recursive_mutex> lock(mutex_);
        map::iterator it = m_map.begin();
        while (it != m_map.end())
        {
            ioService.post(boost::bind(&ProcessFun, this, it));
            ++it;
        }
        ioService.run(); // <-- main thread is stuck here..
    }
}
I want to be able to know that all of the tasks assigned to the thread pool have been completed, and only then assign new tasks to the thread pool.
As long as the threads are processing the tasks, I don't want to release the lock.
Is there any way I can make sure all of the assigned tasks are done, and only then proceed?
The simplest way is to just call ioService.run(). According to the Boost.Asio documentation:
The io_service object's run() function will not exit while work is underway. It does exit when there is no unfinished work remaining.
By the way, it is difficult to determine this without seeing much more of your program, but it appears that you are attempting to defeat the primary purpose of asio. You are serializing batches of tasks. If it is somehow important that all tasks in batch #1 be completely processed before any task in batch #2 begins, then this may make sense, but it is an odd usage.
Also be careful: if any of the handlers for batch #1 tasks try to add new tasks, they can deadlock attempting to acquire the lock on the mutex.
So, my final solution was to create a small semaphore of my own that has a mutex and condition variable inside,
which I found here:
C++0x has no semaphores? How to synchronize threads?
I pass this semaphore as a pointer to the threads, and reset it each iteration.
I had to modify the semaphore code a bit to enable reset functionality, and because my threads sometimes finish the work before the main thread falls asleep, I had to modify the condition inside a bit.
class semaphore
{
private:
    boost::mutex mutex_;
    boost::condition_variable condition_;
    unsigned long count_;
public:
    semaphore()
        : count_()
    {}

    // set the number of tasks that must call notify() before wait() returns
    void reset(int x)
    {
        boost::mutex::scoped_lock lock(mutex_);
        count_ = x;
    }

    // called by a worker when it finishes its task
    void notify()
    {
        boost::mutex::scoped_lock lock(mutex_);
        --count_;
        if (count_ < 1)
            condition_.notify_one();
    }

    // called by the main thread; blocks until every posted task has notified
    void wait()
    {
        boost::mutex::scoped_lock lock(mutex_);
        while (count_ > 0)
            condition_.wait(lock);
    }
};
....
....
semaphore* m_semaphore = new semaphore();

while (true)
{
    {
        boost::lock_guard<boost::recursive_mutex> lock(mutex_);
        map::iterator it = m_map.begin();
        if (it != m_map.end())
            m_semaphore->reset(m_map.size());
        while (it != m_map.end())
        {
            ioService.post(boost::bind(&ProcessFun, this, it, m_semaphore));
            ++it;
        }
        m_semaphore->wait();
    }
}
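For completeness, a sketch of what each posted task would then look like (the body is hypothetical; only the signature is implied by the post() call above): the task does its work and calls notify(), so wait() returns once every task of the current batch has finished.
// member of the same class that owns m_map (the class name is not shown in the question)
void ProcessFun(map::iterator it, semaphore* sem)
{
    // ... process the element referred to by it ...
    sem->notify(); // one more task of the current batch is done
}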
The idea is to be able to replace multithreaded code with boost::asio and a thread pool, on a consumer/producer problem. Currently, each consumer thread waits on a boost::condition_variable - when a producer adds something to the queue, it calls notify_one/notify_all to notify all the consumers. Now what happens when you (potentially) have 1k+ consumers? Threads won't scale!
I decided to use boost::asio, but then I ran into the fact that it doesn't have condition variables. And then async_condition_variable was born:
class async_condition_variable
{
private:
    boost::asio::io_service& service_;
    typedef boost::function<void ()> async_handler;
    std::queue<async_handler> waiters_;
public:
    async_condition_variable(boost::asio::io_service& service) : service_(service)
    {
    }

    void async_wait(async_handler handler)
    {
        waiters_.push(handler);
    }

    void notify_one()
    {
        service_.post(waiters_.front());
        waiters_.pop();
    }

    void notify_all()
    {
        while (!waiters_.empty()) {
            notify_one();
        }
    }
};
Basically, each consumer would call async_condition_variable::async_wait(...). Then, a producer would eventually call async_condition_variable::notify_one() or async_condition_variable::notify_all(). Each consumer's handler would be called, and would either act on the condition or call async_condition_variable::async_wait(...) again. Is this feasible, or am I being crazy here? What kind of locking (mutexes) should be performed, given the fact that this would be run on a thread pool?
P.S.: Yes, this is more a RFC (Request for Comments) than a question :).
Have a list of things that need to be done when an event occurs. Have a function to add something to that list and a function to remove something from that list. Then, when the event occurs, have a pool of threads work on the list of jobs that now need to be done. You don't need threads specifically waiting for the event.
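A minimal sketch of that idea (illustrative names and locking, not taken from the question): the producer keeps a list of pending jobs and, when the event occurs, posts each one to the pool's io_service instead of waking dedicated waiting threads.
#include <vector>
#include <boost/asio.hpp>
#include <boost/function.hpp>
#include <boost/thread.hpp>

std::vector<boost::function<void()> > job_list; // the "list of things to do"
boost::mutex job_mutex;

void add_job(const boost::function<void()>& job)
{
    boost::lock_guard<boost::mutex> lock(job_mutex);
    job_list.push_back(job);
}

void on_event(boost::asio::io_service& pool)
{
    boost::lock_guard<boost::mutex> lock(job_mutex);
    for (std::size_t i = 0; i < job_list.size(); ++i)
        pool.post(job_list[i]); // whichever pooled thread is free runs it
    job_list.clear();
}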
Boost::asio can be kind of hard to wrap your head around. At least, I have a difficult time doing it.
You don't need to have the threads wait on anything. They do that on their own when they don't have any work to do. The examples that looked like what you want to do had work posted to the io_service for each item.
The following code was inspired by this link. It actually opened my eyes to how you can use it to do a lot of things.
I'm sure this isn't perfect, but I think it gives the general idea. I hope this helps.
Code
#include <iostream>
#include <boost/thread.hpp>
#include <boost/bind.hpp>
#include <boost/asio.hpp>

// Placeholder work types; the real definitions would go here.
class WorkObject1;
class WorkObject2;

class ServerProcessor
{
protected:
    void handleWork1(WorkObject1* work)
    {
        //The code to do task 1 goes in here
    }

    void handleWork2(WorkObject2* work)
    {
        //The code to do task 2 goes in here
    }

    boost::thread_group worker_threads_;
    boost::asio::io_service io_service_;
    //This is used to keep io_service from running out of work and exiting too soon.
    boost::shared_ptr<boost::asio::io_service::work> work_;

public:
    void start(int numberOfThreads)
    {
        boost::shared_ptr<boost::asio::io_service::work> myWork(new boost::asio::io_service::work(io_service_));
        work_ = myWork;
        for (int x = 0; x < numberOfThreads; ++x)
            worker_threads_.create_thread(boost::bind(&ServerProcessor::threadAction, this));
    }

    void doWork1(WorkObject1* work)
    {
        io_service_.post(boost::bind(&ServerProcessor::handleWork1, this, work));
    }

    void doWork2(WorkObject2* work)
    {
        io_service_.post(boost::bind(&ServerProcessor::handleWork2, this, work));
    }

    void threadAction()
    {
        io_service_.run();
    }

    void stop()
    {
        work_.reset();
        io_service_.stop();
        worker_threads_.join_all();
    }
};
int main()
{
    ServerProcessor s;
    std::string input;

    std::cout << "Press f to stop" << std::endl;
    s.start(8);
    std::cin >> input;
    s.stop();

    return 0;
}
How about using boost::signals2?
It is a thread-safe spinoff of boost::signals that lets your clients subscribe a callback to a signal to be emitted.
Then, when the signal is emitted asynchronously in an io_service-dispatched job, all the registered callbacks will be executed (on the same thread that emitted the signal).
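A minimal sketch of how that could look (illustrative names, not from the question): a consumer connects a slot, and a producer emits the signal from a job dispatched by the io_service.
#include <iostream>
#include <boost/asio.hpp>
#include <boost/bind.hpp>
#include <boost/signals2.hpp>

boost::signals2::signal<void (int)> item_ready; // thread-safe signal

void consumer_slot(int item)
{
    std::cout << "consumed " << item << std::endl; // a subscribed callback
}

void produce(int item)
{
    item_ready(item); // emitting runs all connected slots on this thread
}

int main()
{
    boost::asio::io_service io_service;

    item_ready.connect(&consumer_slot);         // a client subscribes a callback
    io_service.post(boost::bind(&produce, 7));  // the signal is emitted in a dispatched job

    io_service.run(); // returns once the posted job (and therefore the slots) have run
    return 0;
}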