How to check whether tasks in io_service are completed? - c++

I have a question about boost::asio::io_service.
I have a set of tasks that I can run concurrently. After running all of them, I need to run another set of tasks concurrently. However, the first set has to be completed before the second set starts running. This means I need to make sure that all the jobs submitted to the io_service are completed before scheduling the second set.
I could implement it by keeping some kind of counter and adding a busy loop, but that does not look very efficient. So I wanted to check whether someone has a better idea. The following is dummy code that I was using to experiment.
Thank you in advance!
#include <cstdio>
#include <iostream>
#include <unistd.h>

#include <boost/asio/io_service.hpp>
#include <boost/bind.hpp>
#include <boost/thread/thread.hpp>

const size_t numTasks = 100000;

void print_counter(const size_t id)
{
    if (id + 1 == numTasks) {
        printf("sleeping for %ld\n", id);
        sleep(15);
    }
    printf("%ld\n", id);
}

int main(int argc, char** argv)
{
    using namespace std;
    using namespace boost;

    asio::io_service io_service;
    asio::io_service::work work(io_service);

    const size_t numWorker = 4;
    boost::thread_group workers;
    for (size_t i = 0; i < numWorker; ++i) {
        workers.create_thread(boost::bind(&asio::io_service::run, &io_service));
    }

    for (size_t i = 0; i < numTasks; ++i) {
        io_service.post(boost::bind(print_counter, i));
    }

    // TODO: wait until all the tasks above are done

    for (size_t i = 0; i < numTasks; ++i) {
        io_service.post(boost::bind(print_counter, i));
    }

    // TODO: wait until all the tasks above are done
    // ...

    // Finally stop the service
    io_service.stop();
    workers.join_all();

    return 0;
}

Your main problem is that all sets of your tasks are processed by the same instance of io_service. The io_service::run function returns when there are no more tasks to be processed. The destructor of io_service::work informs the io_service object that run may return once there are no pending tasks left in the queue. So you can post all tasks from the first set, then destroy the work object and wait until io_service::run returns, then create a new work object, post the tasks from the next set, destroy the work again, and so on. To do this, just write a helper class which may look something like the one below:
class TasksWaiter
{
public:
    TasksWaiter(int numOfThreads)
    {
        work = std::make_unique<boost::asio::io_service::work>(io_service);
        for (size_t i = 0; i < numOfThreads; ++i) {
            workers.create_thread(
                boost::bind(&boost::asio::io_service::run, &io_service));
        }
    }

    ~TasksWaiter() {
        work.reset();
        workers.join_all();
    }

    template<class F>
    void post(F f) {
        io_service.post(f);
    }

    boost::thread_group workers;
    boost::asio::io_service io_service;
    std::unique_ptr<boost::asio::io_service::work> work;
};
int main()
{
    {
        TasksWaiter w1{4};
        for (int i = 0; i < numTasks; ++i)
            w1.post(boost::bind(print_counter, i));
        // work in w1 is destroyed, then io_service::run ends
        // when there are no tasks to be performed
    }

    printf("wait here");

    {
        TasksWaiter w1{4};
        for (int i = 0; i < numTasks; ++i)
            w1.post(boost::bind(print_counter, i));
    }
}
A few remarks:
in the constructor a pool of threads is created
in the destructor the work object is destroyed, so io_service::run returns once there are no pending tasks left
the functionality of the destructor can be wrapped into a member function, e.g. wait, so you don't have to rely on a {} scope to wait for your tasks (a sketch of such a member follows below).
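For illustration, a minimal sketch of such a wait() member added to the TasksWaiter class above; the name and the guard are my additions, not part of the answer. After wait() returns the worker threads have exited, so a fresh TasksWaiter is needed for the next batch unless the class is extended to restart its threads.
// Sketch only: drains the queue and joins the workers exactly once.
void wait() {
    if (!work)
        return;            // already waited; nothing to do
    work.reset();          // let io_service::run return once the queue is empty
    workers.join_all();    // block until every worker thread has exited
}

// Usage, without an artificial {} scope:
//   TasksWaiter w1{4};
//   for (int i = 0; i < numTasks; ++i)
//       w1.post(boost::bind(print_counter, i));
//   w1.wait();   // first set is guaranteed to be finished here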

From the io_service::run documentation:
The run() function blocks until all work has finished and there are no more handlers to be dispatched, or until the io_context has been stopped.
Also, from the io_context::work constructor documentation:
The constructor is used to inform the io_context that some work has begun. This ensures that the io_context object's run() function will not exit while the work is underway.
[Emphasis mine]
In short, if the run() function has returned and you never called stop(), then all of the posted work has been finished.
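To make that concrete, here is a minimal sketch of batch-wise waiting built only on the quoted guarantee. It assumes print_counter and numTasks from the question; run_batch is just an illustrative name, and no work guard is used so that run() returns as soon as a batch is drained.
#include <cstddef>
#include <boost/asio/io_service.hpp>
#include <boost/bind.hpp>
#include <boost/thread/thread.hpp>

void run_batch(boost::asio::io_service& io_service, std::size_t batchSize)
{
    // Post the whole batch before any thread calls run(), so run() cannot
    // return early for lack of work.
    for (std::size_t i = 0; i < batchSize; ++i)
        io_service.post(boost::bind(print_counter, i));

    boost::thread_group workers;
    for (std::size_t t = 0; t < 4; ++t)
        workers.create_thread(boost::bind(&boost::asio::io_service::run, &io_service));
    workers.join_all();   // every run() has returned, so the whole batch is done

    io_service.reset();   // clear the stopped state before posting the next batch
}

// Usage:
//   boost::asio::io_service io;
//   run_batch(io, numTasks);   // first set
//   run_batch(io, numTasks);   // second set, starts only after the first completed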

Related

Using std::thread with std::mutex

I am trying out mutex locking with independent threads. The requirement is: I have many threads which will run independently and access/update a common resource. To ensure that the resource is updated by only one task at a time, I used a mutex. However, this is not working.
I have pasted code below, a representation of what I am trying to do:
#include <iostream>
#include <map>
#include <string>
#include <chrono>
#include <thread>
#include <mutex>
#include <unistd.h>

std::mutex mt;
static int iMem = 0;
int maxITr = 1000;

void renum()
{
    // Ensure that only 1 task will update the variable
    mt.lock();
    int tmpMem = iMem;
    usleep(100); // Make the system sleep/induce delay
    iMem = tmpMem + 1;
    mt.unlock();
    printf("iMem = %d\n", iMem);
}

int main()
{
    for (int i = 0; i < maxITr; i++) {
        std::thread mth(renum);
        mth.detach(); // Run each task in an independent thread
    }
    return 0;
}
but this is terminating with the below error:
terminate called after throwing an instance of 'std::system_error'
what(): Resource temporarily unavailable
I want to know whether the usage of <thread>.detach() above is correct. If I use .join() it works, but I want each thread to run independently and not wait for it to finish.
I also want to know what is the best way to achieve the above logic.
Try this:
#include <vector>   // needed for std::vector

int main()
{
    std::vector<std::thread> mths;
    mths.reserve(maxITr);
    for (int i = 0; i < maxITr; i++) {
        mths.emplace_back(renum);
    }
    for (auto& mth : mths) {
        mth.join();
    }
}
This way, you retain control of the threads (by not calling detach()), and you can join them all at the end, so you know they have completed their tasks.
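The "Resource temporarily unavailable" error above is the operating system refusing to create yet another thread, so if maxITr grows much larger, even the joining version may hit that limit. A hedged alternative sketch, not part of the answer above, is to start only a bounded number of worker threads and let each one perform a share of the iterations; renum and maxITr are taken from the question.
#include <algorithm>
#include <thread>
#include <vector>

int main()
{
    // One worker per hardware thread (at least one).
    const unsigned numWorkers = std::max(1u, std::thread::hardware_concurrency());
    std::vector<std::thread> workers;
    workers.reserve(numWorkers);
    for (unsigned w = 0; w < numWorkers; ++w) {
        // Each worker handles roughly maxITr / numWorkers calls to renum().
        workers.emplace_back([w, numWorkers] {
            for (int i = w; i < maxITr; i += numWorkers)
                renum();
        });
    }
    for (auto& t : workers)
        t.join();
}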

boost asio and condition variables -- strange output

Suppose I have the following code:
#include <boost/asio/io_service.hpp>
#include <boost/thread.hpp>
#include <condition_variable>
#include <iostream>
#include <mutex>

const int THREAD_POOL_SIZE = 2;

std::condition_variable g_cv;
std::mutex g_cv_mutex;
bool g_answer_ready;

void foo()
{
    std::cout << "foo \n";
    std::unique_lock<std::mutex> lock(g_cv_mutex);
    g_answer_ready = true;
    g_cv.notify_all();
}

int main()
{
    boost::asio::io_service io_service;

    for (int i = 0; i < 10; ++i)
    {
        std::auto_ptr<boost::asio::io_service::work> work(
            new boost::asio::io_service::work(io_service));
        boost::thread_group threads;
        for (int i = 0; i < THREAD_POOL_SIZE; ++i)
        {
            threads.create_thread(boost::bind(&boost::asio::io_service::run, &io_service));
        }

        std::unique_lock<std::mutex> lock(g_cv_mutex);
        io_service.post(foo);
        g_answer_ready = false;
        g_cv.wait_for(lock, std::chrono::milliseconds(2000));
        if (!g_answer_ready)
        {
            std::cout << "timed_out \n";
        }

        io_service.stop();
        threads.join_all();
    }
}
The output differs between runs of the program. For example,
foo
timed_out
foo
foo
However, if I move boost::asio::io_service object construction inside the loop, it works as expected:
foo
foo
foo
foo
foo
foo
foo
foo
foo
foo
Why? What am I doing wrong? How can I fix it?
boost 1.54, MSVC-11.0
If I understand you correctly, you need to fix the last lines of your loop (see the comments for details):
// io_service.stop();
// threads.join_all();
work.reset(); // <- signal to process all pending jobs and quit from io_service::run function
threads.join_all(); // <- wait for all threads
io_service.reset(); // <- now `io_service` can accept new tasks
So, there were two issues in the original code:
io_service.stop() will cancel posted but not yet processed jobs (usually this is not what a programmer wants),
io_service.reset() is required to change the state of io_service from "stopped" to "ready to accept new jobs".
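Putting both fixes together, the loop body might look like the sketch below. It is based on the question's code; std::auto_ptr is kept only because the question uses it, and the extra scope around the lock avoids holding g_cv_mutex while joining, since a still-pending foo would otherwise block on it.
for (int i = 0; i < 10; ++i)
{
    std::auto_ptr<boost::asio::io_service::work> work(
        new boost::asio::io_service::work(io_service));
    boost::thread_group threads;
    for (int t = 0; t < THREAD_POOL_SIZE; ++t)
        threads.create_thread(boost::bind(&boost::asio::io_service::run, &io_service));

    {
        std::unique_lock<std::mutex> lock(g_cv_mutex);
        io_service.post(foo);
        g_answer_ready = false;
        g_cv.wait_for(lock, std::chrono::milliseconds(2000));
        if (!g_answer_ready)
            std::cout << "timed_out \n";
    } // release the mutex before joining

    work.reset();        // let io_service::run finish all pending handlers and return
    threads.join_all();  // wait for all worker threads
    io_service.reset();  // now io_service can accept new tasks
}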

Using Boost threads and io_service to create a threadpool

I have looked around Stack Overflow and there have been a few really good answers on this (my code is actually based on this answer here), but for some reason I am getting weird behavior: thread_func should be called ls1 times, but it only runs between 0 and 2 times before the threads exit. It seems like ioService.stop() is cutting off queued jobs before they are completed, but from what I understand this should not happen. Here is the relevant code snippet:
boost::asio::io_service ioService;
boost::asio::io_service::work work(ioService);

boost::thread_group threadpool;
for (unsigned t = 0; t < num_threads; t++)
{
    threadpool.create_thread(boost::bind(&boost::asio::io_service::run, &ioService));
}

// Iterate over the dimensions of the matrices
for (unsigned i = 0; i < ls1; i++)
{
    ioService.post(boost::bind(&thread_func, i, rs1, rs2, ls2, arr, left_arr, &result));
}

ioService.stop();
threadpool.join_all();
Any help would be greatly appreciated, thanks!
io_service::stop() causes all invocations of run() or run_one() to return as soon as possible. It does not remove any outstanding handlers that are already queued into the io_service. When io_service::stop() is invoked, the threads in threadpool will return as soon as possible, causing each thread of execution to complete.
As io_service::post() will return immediately after requesting that the io_service invoke the handler, it is non-deterministic as to how many posted handlers will be invoked by threads in threadpool before the io_service is stopped.
If you wish for thread_func to be invoked ls1 times, then one simple alternative is to reorganize the code so that work is added to the io_service before the threadpool services it, and then the application lets the io_service run to completion.
boost::asio::io_service ioService;

// Add work to ioService.
for (unsigned i = 0; i < ls1; i++)
{
    ioService.post(boost::bind(
        &thread_func, i, rs1, rs2, ls2, arr, left_arr, &result));
}

// Now that the ioService has work, use a pool of threads to service it.
boost::thread_group threadpool;
for (unsigned t = 0; t < num_threads; t++)
{
    threadpool.create_thread(boost::bind(
        &boost::asio::io_service::run, &ioService));
}

// Once all work has been completed (thread_func invoked ls1 times), the
// threads in the threadpool will be completed and can be joined.
threadpool.join_all();
If you're expecting thread_func to be called ls1 times, then you should wait until it has actually been called ls1 times before stopping the io_service. As written, stop() may be called before any of the threads has even had a chance to be scheduled.
There are many ways to wait for that condition. For example you could use a condition variable:
#include <boost/asio.hpp>
#include <boost/thread.hpp>
#include <iostream>

unsigned num_threads = 10, ls1 = 11;
int result = 0;
boost::mutex m;
boost::condition_variable cv;

void thread_func(unsigned, int* result) {
    /* do stuff */
    {
        boost::lock_guard<boost::mutex> lk(m);
        ++*result;
    }
    cv.notify_one();
}

int main()
{
    boost::asio::io_service ioService;
    boost::asio::io_service::work work(ioService);

    boost::thread_group threadpool;
    for (unsigned t = 0; t < num_threads; t++)
        threadpool.create_thread(boost::bind(&boost::asio::io_service::run,
                                             &ioService));

    for (unsigned i = 0; i < ls1; i++)
        ioService.post(boost::bind(&thread_func, i, &result));

    {
        boost::unique_lock<boost::mutex> lk(m);
        cv.wait(lk, [] { return result == ls1; });
    }

    ioService.stop();
    threadpool.join_all();

    std::cout << "result = " << result << '\n';
}

Thread pool implementation using pthreads

I am trying to understand the implementation below of a thread pool using pthreads. When I comment out the for loop in main, the program gets stuck; after adding logs it seems that it is getting stuck in the join call in the ThreadPool destructor.
I am unable to understand why this is happening. Is there a deadlock scenario here?
This may be naive, but can someone help me understand why this is happening and how to correct it?
Thanks a lot !!!
#include <stdio.h>
#include <queue>
#include <unistd.h>
#include <pthread.h>
#include <malloc.h>
#include <stdlib.h>
// Base class for Tasks
// run() should be overloaded and expensive calculations done there
// showTask() is for debugging and can be deleted if not used
class Task {
public:
Task() {}
virtual ~Task() {}
virtual void run()=0;
virtual void showTask()=0;
};
// Wrapper around std::queue with some mutex protection
class WorkQueue {
public:
WorkQueue() {
// Initialize the mutex protecting the queue
pthread_mutex_init(&qmtx,0);
// wcond is a condition variable that's signaled
// when new work arrives
pthread_cond_init(&wcond, 0);
}
~WorkQueue() {
// Cleanup pthreads
pthread_mutex_destroy(&qmtx);
pthread_cond_destroy(&wcond);
}
// Retrieves the next task from the queue
Task *nextTask() {
// The return value
Task *nt = 0;
// Lock the queue mutex
pthread_mutex_lock(&qmtx);
// Check if there's work
if (finished && tasks.size() == 0) {
// If not return null (0)
nt = 0;
} else {
// Not finished, but there are no tasks, so wait for
// wcond to be signalled
if (tasks.size()==0) {
pthread_cond_wait(&wcond, &qmtx);
}
// get the next task
nt = tasks.front();
if(nt){
tasks.pop();
}
// For debugging
if (nt) nt->showTask();
}
// Unlock the mutex and return
pthread_mutex_unlock(&qmtx);
return nt;
}
// Add a task
void addTask(Task *nt) {
// Only add the task if the queue isn't marked finished
if (!finished) {
// Lock the queue
pthread_mutex_lock(&qmtx);
// Add the task
tasks.push(nt);
// signal there's new work
pthread_cond_signal(&wcond);
// Unlock the mutex
pthread_mutex_unlock(&qmtx);
}
}
// Mark the queue finished
void finish() {
pthread_mutex_lock(&qmtx);
finished = true;
// Signal the condition variable in case any threads are waiting
pthread_cond_signal(&wcond);
pthread_mutex_unlock(&qmtx);
}
// Check if there's work
bool hasWork() {
//printf("task queue size is %d\n",tasks.size());
return (tasks.size()>0);
}
private:
std::queue<Task*> tasks;
bool finished;
pthread_mutex_t qmtx;
pthread_cond_t wcond;
};
// Function that retrieves a task from a queue, runs it and deletes it
void *getWork(void* param) {
Task *mw = 0;
WorkQueue *wq = (WorkQueue*)param;
while (mw = wq->nextTask()) {
mw->run();
delete mw;
}
pthread_exit(NULL);
}
class ThreadPool {
public:
// Allocate a thread pool and set them to work trying to get tasks
ThreadPool(int n) : _numThreads(n) {
int rc;
printf("Creating a thread pool with %d threads\n", n);
threads = new pthread_t[n];
for (int i=0; i< n; ++i) {
rc = pthread_create(&(threads[i]), 0, getWork, &workQueue);
if (rc){
printf("ERROR; return code from pthread_create() is %d\n", rc);
exit(-1);
}
}
}
// Wait for the threads to finish, then delete them
~ThreadPool() {
workQueue.finish();
//waitForCompletion();
for (int i=0; i<_numThreads; ++i) {
pthread_join(threads[i], 0);
}
delete [] threads;
}
// Add a task
void addTask(Task *nt) {
workQueue.addTask(nt);
}
// Tell the tasks to finish and return
void finish() {
workQueue.finish();
}
// Checks if there is work to do
bool hasWork() {
return workQueue.hasWork();
}
private:
pthread_t * threads;
int _numThreads;
WorkQueue workQueue;
};
// stdout is a shared resource, so protected it with a mutex
static pthread_mutex_t console_mutex = PTHREAD_MUTEX_INITIALIZER;
// Debugging function
void showTask(int n) {
pthread_mutex_lock(&console_mutex);
pthread_mutex_unlock(&console_mutex);
}
// Task to compute fibonacci numbers
// It's more efficient to use an iterative algorithm, but
// the recursive algorithm takes longer and is more interesting
// than sleeping for X seconds to show parrallelism
class FibTask : public Task {
public:
FibTask(int n) : Task(), _n(n) {}
~FibTask() {
// Debug prints
pthread_mutex_lock(&console_mutex);
printf("tid(%d) - fibd(%d) being deleted\n", pthread_self(), _n);
pthread_mutex_unlock(&console_mutex);
}
virtual void run() {
// Note: it's important that this isn't contained in the console mutex lock
long long val = innerFib(_n);
// Show results
pthread_mutex_lock(&console_mutex);
printf("Fibd %d = %lld\n",_n, val);
pthread_mutex_unlock(&console_mutex);
// The following won't work in parrallel:
// pthread_mutex_lock(&console_mutex);
// printf("Fibd %d = %lld\n",_n, innerFib(_n));
// pthread_mutex_unlock(&console_mutex);
}
virtual void showTask() {
// More debug printing
pthread_mutex_lock(&console_mutex);
printf("thread %d computing fibonacci %d\n", pthread_self(), _n);
pthread_mutex_unlock(&console_mutex);
}
private:
// Slow computation of fibonacci sequence
// To make things interesting, and perhaps imporove load balancing, these
// inner computations could be added to the task queue
// Ideally set a lower limit on when that's done
// (i.e. don't create a task for fib(2)) because thread overhead makes it
// not worth it
long long innerFib(long long n) {
if (n<=1) { return 1; }
return innerFib(n-1) + innerFib(n-2);
}
long long _n;
};
int main(int argc, char *argv[])
{
// Create a thread pool
ThreadPool *tp = new ThreadPool(10);
// Create work for it
/*for (int i=0;i<100; ++i) {
int rv = rand() % 40 + 1;
showTask(rv);
tp->addTask(new FibTask(rv));
}*/
delete tp;
printf("\n\n\n\n\nDone with all work!\n");
}
The design is more or less OK-ish, but implementation-wise it contains several things that are a bit overcomplicated and may introduce instabilities. I guess your program deadlocks when you comment out the for loop because you use pthread_cond_signal in your WorkQueue::finish() method where you should use pthread_cond_broadcast.
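A minimal sketch of that change to the question's WorkQueue (only finish() is shown; note that the waiters in nextTask() should also re-check the queue and the finished flag in a loop after waking up):
// Mark the queue finished and wake *all* waiting workers, not just one
void finish() {
    pthread_mutex_lock(&qmtx);
    finished = true;
    pthread_cond_broadcast(&wcond);  // every blocked nextTask() call wakes up
    pthread_mutex_unlock(&qmtx);
}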
Note: I usually implement threadpool termination by placing NUM_THREADS NULL items into the work queue, and I set a finished flag only so that addTask() has something to check: after finish() I usually don't allow adding new tasks, so I return false from addTask() or sometimes assert.
Another note: it's best to encapsulate threads into classes; that has several benefits and makes porting to multiple platforms easier.
There may be other bugs too, as I haven't executed your program, just read through your code.
EDIT: Here is a reworked version. I made some modifications to your code, but I don't guarantee that it works. Fingers crossed... :-)
#include <stdio.h>
#include <queue>
#include <unistd.h>
#include <pthread.h>
#include <malloc.h>
#include <stdlib.h>
#include <assert.h>
// Reusable thread class
class Thread
{
public:
Thread()
{
state = EState_None;
handle = 0;
}
virtual ~Thread()
{
assert(state != EState_Started);
}
void start()
{
assert(state == EState_None);
// in case of thread create error I usually FatalExit...
if (pthread_create(&handle, NULL, threadProc, this))
abort();
state = EState_Started;
}
void join()
{
// A started thread must be joined exactly once!
// This requirement could be eliminated with an alternative implementation but it isn't needed.
assert(state == EState_Started);
pthread_join(handle, NULL);
state = EState_Joined;
}
protected:
virtual void run() = 0;
private:
static void* threadProc(void* param)
{
Thread* thread = reinterpret_cast<Thread*>(param);
thread->run();
return NULL;
}
private:
enum EState
{
EState_None,
EState_Started,
EState_Joined
};
EState state;
pthread_t handle;
};
// Base task for Tasks
// run() should be overloaded and expensive calculations done there
// showTask() is for debugging and can be deleted if not used
class Task {
public:
Task() {}
virtual ~Task() {}
virtual void run()=0;
virtual void showTask()=0;
};
// Wrapper around std::queue with some mutex protection
class WorkQueue
{
public:
WorkQueue() {
pthread_mutex_init(&qmtx,0);
// wcond is a condition variable that's signaled
// when new work arrives
pthread_cond_init(&wcond, 0);
}
~WorkQueue() {
// Cleanup pthreads
pthread_mutex_destroy(&qmtx);
pthread_cond_destroy(&wcond);
}
// Retrieves the next task from the queue
Task *nextTask() {
// The return value
Task *nt = 0;
// Lock the queue mutex
pthread_mutex_lock(&qmtx);
while (tasks.empty())
pthread_cond_wait(&wcond, &qmtx);
nt = tasks.front();
tasks.pop();
// Unlock the mutex and return
pthread_mutex_unlock(&qmtx);
return nt;
}
// Add a task
void addTask(Task *nt) {
// Lock the queue
pthread_mutex_lock(&qmtx);
// Add the task
tasks.push(nt);
// signal there's new work
pthread_cond_signal(&wcond);
// Unlock the mutex
pthread_mutex_unlock(&qmtx);
}
private:
std::queue<Task*> tasks;
pthread_mutex_t qmtx;
pthread_cond_t wcond;
};
// Thanks to the reusable thread class implementing threads is
// simple and free of pthread api usage.
class PoolWorkerThread : public Thread
{
public:
PoolWorkerThread(WorkQueue& _work_queue) : work_queue(_work_queue) {}
protected:
virtual void run()
{
while (Task* task = work_queue.nextTask())
task->run();
}
private:
WorkQueue& work_queue;
};
class ThreadPool {
public:
// Allocate a thread pool and set them to work trying to get tasks
ThreadPool(int n) {
printf("Creating a thread pool with %d threads\n", n);
for (int i=0; i<n; ++i)
{
threads.push_back(new PoolWorkerThread(workQueue));
threads.back()->start();
}
}
// Wait for the threads to finish, then delete them
~ThreadPool() {
finish();
}
// Add a task
void addTask(Task *nt) {
workQueue.addTask(nt);
}
// Asking the threads to finish, waiting for the task
// queue to be consumed and then returning.
void finish() {
for (size_t i=0,e=threads.size(); i<e; ++i)
workQueue.addTask(NULL);
for (size_t i=0,e=threads.size(); i<e; ++i)
{
threads[i]->join();
delete threads[i];
}
threads.clear();
}
private:
std::vector<PoolWorkerThread*> threads;
WorkQueue workQueue;
};
// stdout is a shared resource, so protected it with a mutex
static pthread_mutex_t console_mutex = PTHREAD_MUTEX_INITIALIZER;
// Debugging function
void showTask(int n) {
pthread_mutex_lock(&console_mutex);
pthread_mutex_unlock(&console_mutex);
}
// Task to compute fibonacci numbers
// It's more efficient to use an iterative algorithm, but
// the recursive algorithm takes longer and is more interesting
// than sleeping for X seconds to show parrallelism
class FibTask : public Task {
public:
FibTask(int n) : Task(), _n(n) {}
~FibTask() {
// Debug prints
pthread_mutex_lock(&console_mutex);
printf("tid(%d) - fibd(%d) being deleted\n", (int)pthread_self(), (int)_n);
pthread_mutex_unlock(&console_mutex);
}
virtual void run() {
// Note: it's important that this isn't contained in the console mutex lock
long long val = innerFib(_n);
// Show results
pthread_mutex_lock(&console_mutex);
printf("Fibd %d = %lld\n",(int)_n, val);
pthread_mutex_unlock(&console_mutex);
// The following won't work in parrallel:
// pthread_mutex_lock(&console_mutex);
// printf("Fibd %d = %lld\n",_n, innerFib(_n));
// pthread_mutex_unlock(&console_mutex);
// this thread pool implementation doesn't delete
// the tasks so we perform the cleanup here
delete this;
}
virtual void showTask() {
// More debug printing
pthread_mutex_lock(&console_mutex);
printf("thread %d computing fibonacci %d\n", (int)pthread_self(), (int)_n);
pthread_mutex_unlock(&console_mutex);
}
private:
// Slow computation of fibonacci sequence
// To make things interesting, and perhaps imporove load balancing, these
// inner computations could be added to the task queue
// Ideally set a lower limit on when that's done
// (i.e. don't create a task for fib(2)) because thread overhead makes it
// not worth it
long long innerFib(long long n) {
if (n<=1) { return 1; }
return innerFib(n-1) + innerFib(n-2);
}
long long _n;
};
int main(int argc, char *argv[])
{
// Create a thread pool
ThreadPool *tp = new ThreadPool(10);
// Create work for it
for (int i=0;i<100; ++i) {
int rv = rand() % 40 + 1;
showTask(rv);
tp->addTask(new FibTask(rv));
}
delete tp;
printf("\n\n\n\n\nDone with all work!\n");
}
I think you have a race condition there...
When you remove the for loop, the pool is destroyed as soon as it is created, so there is no time for the threads to start waiting on the queue. Try putting a sleep there and you'll see (a sketch of that experiment follows).
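A sketch of that experiment against the question's main(); the one-second value is arbitrary, it just has to be long enough for the workers to reach the condition wait before the destructor runs.
// Create a thread pool
ThreadPool *tp = new ThreadPool(10);
// no tasks are added at all
sleep(1);    // give every worker time to block inside nextTask()
delete tp;   // finish() and the joins now run against already-waiting workers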
I implemented a threadpool library, which is widely used among all our services, so here is some advice:
You are using C++, so there's no need to use pthreads directly; just use boost, or std::thread if available
Don't signal, push empty tasks instead (pushing a task requires a signal anyway, of course)
Use boost::function or std::function instead of a base class (a sketch of such a queue follows this list)
Cope with spurious wake-ups (your code doesn't seem to handle them)
pthread_cond_signal wakes up only one thread; you must use pthread_cond_broadcast if you want to notify them all. That said, I'd recommend, again, sticking to boost's condition variables (@pasztorpisti got it right here, he's got my upvote)
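To illustrate the second and third points (empty tasks as the stop signal, std::function instead of a Task base class, and a predicate loop for spurious wake-ups), here is a hedged sketch of what such a queue could look like; the names are mine and not taken from any code above.
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>

class FunctionQueue {
public:
    using Task = std::function<void()>;   // an empty Task is the stop signal

    void push(Task task) {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            tasks_.push(std::move(task));
        }
        cond_.notify_one();
    }

    // Blocks until a task is available; an empty task tells the caller to exit.
    Task pop() {
        std::unique_lock<std::mutex> lock(mutex_);
        cond_.wait(lock, [this] { return !tasks_.empty(); });  // predicate loop handles spurious wake-ups
        Task task = std::move(tasks_.front());
        tasks_.pop();
        return task;
    }

private:
    std::queue<Task> tasks_;
    std::mutex mutex_;
    std::condition_variable cond_;
};

// Worker loop:   while (auto task = queue.pop()) task();
// Shutdown: the pool pushes one empty Task per worker thread.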

Is boost::io_service::post thread safe?

Is it thread safe to post new handlers from within a handler?
I.e. can threads that have called io_service::run() post new handlers to the same io_service?
Thanks
It is safe to post handlers from within a handler for a single instance of an io_service according to the documentation.
Thread Safety
Distinct objects: Safe.
Shared objects: Safe, with the exception that calling reset() while there are unfinished run(), run_one(), poll() or poll_one() calls results in undefined behaviour.
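For illustration, a minimal sketch of a handler posting a follow-up handler to the same io_service it is running on; chain is just an example name, not part of the question.
#include <boost/asio/io_service.hpp>
#include <boost/bind.hpp>
#include <iostream>

void chain(boost::asio::io_service* io, int remaining)
{
    std::cout << "step " << remaining << '\n';
    if (remaining > 0)
        io->post(boost::bind(chain, io, remaining - 1));  // posting from inside a handler
}

int main()
{
    boost::asio::io_service io;
    io.post(boost::bind(chain, &io, 3));
    io.run();   // returns only after the whole chain has been executed
}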
I think it's not, because the following code didn't return 3000000, and I didn't see a mutex synchronizing the internal queue of io_service, nor a lock-free queue.
#include <boost/asio/io_service.hpp>
#include <boost/bind.hpp>
#include <boost/thread.hpp>
#include <boost/thread/detail/thread_group.hpp>
#include <atomic>
#include <iostream>
#include <memory>

void postInc(boost::asio::io_service *service, std::atomic_int *counter) {
    for (int i = 0; i < 100000; i++)
        service->post([counter] { (*counter)++; });
}

int main(int argc, char **argv)
{
    std::atomic_int counter(0);
    {
        boost::asio::io_service service;
        boost::asio::io_service::work working(service);

        boost::thread_group workers;
        for (size_t i = 0; i < 10; ++i)
            workers.create_thread(boost::bind(&boost::asio::io_service::run, &service));

        boost::thread_group producers;
        for (int it = 0; it < 30; it++)
        {
            producers.add_thread(new boost::thread(boost::bind(postInc, &service, &counter)));
        }
        producers.join_all();
        std::cout << "producers ended" << std::endl;

        service.stop();
        workers.join_all();
    }
    std::cout << counter.load();
    char c; std::cin >> c;
    return 0;
}