How to run multiple threads created in a loop simultaneously using boost.thread? - c++

I'm learning the basics of boost.thread. So far, I can create each thread manually, one by one, and they run at the same time. However, when I create them in a loop, they run sequentially, not concurrently anymore.
#include <iostream>
#include <boost/thread.hpp>

void workerFunc()
{
    boost::posix_time::seconds workTime(3);
    std::cout << "Worker: Running" << '\n';
    boost::this_thread::sleep(workTime);
    std::cout << "Worker: Finished" << '\n';
}

int main()
{
    std::cout << "main: startup" << '\n';
    boost::thread workerThread(workerFunc);
    std::cout << "main: waiting for thread" << '\n';
    // these are ok
    boost::thread t(workerFunc), t2(workerFunc), t3(workerFunc), t4(workerFunc);
    t.join();
    t2.join();
    t3.join();
    t4.join();
    // these are not
    for (int i = 0; i < 2; ++i)
    {
        boost::thread z(workerFunc);
        z.join();
    }
    std::cout << "main:done" << '\n';
    return 0;
}

for (int i = 0; i < 2; ++i)
{
    boost::thread z(workerFunc);
    z.join();
}
You are starting your thread and then immediately waiting for it to complete!
EDIT
One of several alternative hacks besides thread groups: start all the threads first, then join them afterwards (a thread_group sketch follows the code below).
std::vector<boost::thread *> z;

for (int i = 0; i < 2; ++i)
    z.push_back(new boost::thread(workerFunc));

for (int i = 0; i < 2; ++i)
{
    z[i]->join();
    delete z[i];
}
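For comparison, the thread_group approach mentioned above avoids the manual new/delete entirely. A minimal sketch, reusing workerFunc from the question:

#include <boost/thread.hpp>

// Sketch only: boost::thread_group owns the threads it creates.
boost::thread_group group;

for (int i = 0; i < 2; ++i)
    group.create_thread(workerFunc); // start each worker; do not join yet

group.join_all(); // wait for all workers at once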

OK, I found the answer through someone else's question (and learned about their problem as well):
How to make boost::thread_group execute a fixed number of parallel threads

Use shared_ptr
#include <iostream>
#include <memory>
#include <vector>
#include <boost/thread.hpp>

void workerFunc()
{
    boost::posix_time::seconds workTime(3);
    std::cout << "Worker: Running" << '\n';
    boost::this_thread::sleep(workTime);
    std::cout << "Worker: Finished" << '\n';
}

int main()
{
    std::cout << "main: startup" << '\n';
    std::vector<std::shared_ptr<boost::thread>> z;
    for (int i = 0; i < 2; ++i) {
        z.push_back(std::make_shared<boost::thread>(workerFunc));
    }
    for (auto t : z) {
        t->join();
    }
    std::cout << "main:done" << '\n';
    return 0;
}
Execute it
# g++ e.cpp -lboost_thread && ./a.out
main: startup
Worker: Running
Worker: Running
Worker: Finished
Worker: Finished
main:done
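Side note: since boost::thread is movable (like std::thread), the shared_ptr indirection isn't strictly required; a plain vector of threads also works. A minimal sketch under that assumption, again reusing workerFunc:

#include <vector>
#include <boost/thread.hpp>

std::vector<boost::thread> threads;

// Launch all workers first...
for (int i = 0; i < 2; ++i)
    threads.emplace_back(workerFunc);

// ...then join, so they overlap instead of running one after another.
for (auto& t : threads)
    t.join();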

Related

C++ Condition variable to signal end of detached thread execution stalls

I'm working on some code where a detached thread is spawned, does some work, and then should wait for a signal from main() before sending another signal back to main() indicating that the thread has quit.
I'm fairly new to condition variables; however, I have worked with some multithreaded code before (mostly mutexes).
This is what I tried to implement, but it doesn't behave the way I expected. (Likely I misunderstood something.)
The idea is to pass a struct containing two flags to each detached thread. The first flag indicates that main() says "it is ok to exit and drop off the end of the thread function". The second flag is set by the thread itself and signals to main() that the thread has indeed exited. (It's just to confirm that the signal from main() is received ok and to send something back.)
#include <cstdlib> // std::atoi
#include <iostream>
#include <thread>
#include <vector>
#include <random>
#include <future>
#include <condition_variable>
#include <mutex>

struct ThreadStruct
{
    int id;
    std::condition_variable cv;
    std::mutex m;
    int ok_to_exit;
    int exit_confirm;
};

void Pause()
{
    std::cout << "Press enter to continue" << std::endl;
    std::cin.get();
}

void detachedThread(ThreadStruct* threadData)
{
    std::cout << "START: Detached Thread " << threadData->id << std::endl;
    // Performs some arbitrary amount of work.
    for(int i = 0; i < 100000; ++i);
    std::cout << "FINISH: Detached thread " << threadData->id << std::endl;
    std::unique_lock<std::mutex> lock(threadData->m);
    std::cout << "WAIT: Detached thread " << threadData->id << std::endl;
    threadData->cv.wait(lock, [threadData]{ return threadData->ok_to_exit == 1; });
    std::cout << "EXIT: Detached thread " << threadData->id << std::endl;
    threadData->exit_confirm = 1;
}

int main(int argc, char** argv)
{
    int totalThreadCount = 1;
    ThreadStruct* perThreadData = new ThreadStruct[totalThreadCount];
    std::cout << "Main thread starting " << totalThreadCount << " thread(s)" << std::endl;
    for(int i = totalThreadCount - 1; i >= 0; --i)
    {
        perThreadData[i].id = i;
        perThreadData[i].ok_to_exit = 0;
        perThreadData[i].exit_confirm = 0;
        std::thread t(detachedThread, &perThreadData[i]);
        t.detach();
    }
    for(int i{0}; i < totalThreadCount; ++i)
    {
        ThreadStruct *threadData = &perThreadData[i];
        std::cout << "Waiting for lock - main() thread" << std::endl;
        std::unique_lock<std::mutex> lock(perThreadData[i].m);
        std::cout << "Lock obtained - main() thread" << std::endl;
        perThreadData[i].cv.wait(lock);
        threadData->ok_to_exit = 1;
        // added after comment from Sergey
        threadData->cv.notify_all();
        std::cout << "Done - main() thread" << std::endl;
    }
    for(int i{0}; i < totalThreadCount; ++i)
    {
        std::size_t thread_index = i;
        ThreadStruct& threadData = perThreadData[thread_index];
        std::unique_lock<std::mutex> lock(threadData.m);
        std::cout << "i=" << i << std::endl;
        int &exit_confirm = threadData.exit_confirm;
        threadData.cv.wait(lock, [exit_confirm]{ return exit_confirm == 1; });
        std::cout << "i=" << i << " finished!" << std::endl;
    }
    Pause();
    return 0;
}
This runs to the line:
WAIT: Detached thread 0
but the detached thread never quits. What have I done wrong?
Edit: Further experimentation - is this helpful?
I thought it might be helpful to simplify things by removing a step. In the example below, main() does not signal to the detached thread; it just waits for a signal from the detached thread.
But again, this code hangs after printing DROP... This means the detached thread exits fine, but main() doesn't know about it.
#include <cstdlib> // std::atoi
#include <iostream>
#include <thread>
#include <vector>
#include <random>
#include <future>
#include <condition_variable>
#include <mutex>

struct ThreadStruct
{
    int id;
    std::condition_variable cv;
    std::mutex m;
    int ok_to_exit;
    int exit_confirm;
};

void Pause()
{
    std::cout << "Press enter to continue" << std::endl;
    std::cin.get();
}

void detachedThread(ThreadStruct* threadData)
{
    std::cout << "START: Detached Thread " << threadData->id << std::endl;
    // Performs some arbitrary amount of work.
    for(int i = 0; i < 100000; ++i);
    std::cout << "FINISH: Detached thread " << threadData->id << std::endl;
    std::unique_lock<std::mutex> lock(threadData->m);
    std::cout << "EXIT: Detached thread " << threadData->id << std::endl;
    threadData->exit_confirm = 1;
    threadData->cv.notify_all();
    std::cout << "DROP" << std::endl;
}

int main(int argc, char** argv)
{
    int totalThreadCount = 1;
    ThreadStruct* perThreadData = new ThreadStruct[totalThreadCount];
    std::cout << "Main thread starting " << totalThreadCount << " thread(s)" << std::endl;
    for(int i = totalThreadCount - 1; i >= 0; --i)
    {
        perThreadData[i].id = i;
        perThreadData[i].ok_to_exit = 0;
        perThreadData[i].exit_confirm = 0;
        std::thread t(detachedThread, &perThreadData[i]);
        t.detach();
    }
    for(int i{0}; i < totalThreadCount; ++i)
    {
        std::size_t thread_index = i;
        ThreadStruct& threadData = perThreadData[thread_index];
        std::cout << "Waiting for mutex" << std::endl;
        std::unique_lock<std::mutex> lock(threadData.m);
        std::cout << "i=" << i << std::endl;
        int &exit_confirm = threadData.exit_confirm;
        threadData.cv.wait(lock, [exit_confirm]{ return exit_confirm == 1; });
        std::cout << "i=" << i << " finished!" << std::endl;
    }
    Pause();
    return 0;
}
Your lambda captures by value, so it will never see the changes made to exit_confirm.
Capture by-reference instead:
int& exit_confirm = threadData.exit_confirm;
threadData.cv.wait(lock, [&exit_confirm] { return exit_confirm == 1; });
//                        ^
//                        | capture by-reference
You also need to delete[] what you new[], so do
delete[] perThreadData;
when you're done with the structs.
I also noticed a heap use-after-free, but that magically went away when I made some simplifications to the code. I didn't investigate it further.
Some suggestions:
Move the code that deals with ThreadStruct member variables and locks into the ThreadStruct class itself. That usually makes it simpler to read and maintain.
Remove unused variables and headers.
Don't use new[]/delete[]. For this example, you could use a std::vector<ThreadStruct> instead.
Don't detach() at all - I haven't done anything about that below, but I suggest using join() (on attached threads) to do the final synchronization. That's what it's there for. A minimal join() sketch follows the refactored main below.
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <thread>
#include <vector>

struct ThreadStruct {
    int id;

    // move this function into the ThreadStruct class
    void detachedThread() {
        std::cout << "START: Detached Thread " << id << std::endl;
        // Performs some arbitrary amount of work (optimized away here)
        std::cout << "FINISH: Detached thread " << id << std::endl;
        std::lock_guard<std::mutex> lock(m);
        std::cout << "EXIT: Detached thread " << id << std::endl;
        exit_confirm = 1;
        cv.notify_all();
        std::cout << "DROP" << std::endl;
    }

    // add support functions instead of doing these things in your normal code
    void wait_for_exit_confirm() {
        std::unique_lock<std::mutex> lock(m);
        cv.wait(lock, [this] { return exit_confirm == 1; });
    }

    void spawn_detached() {
        std::thread(&ThreadStruct::detachedThread, this).detach();
    }

private:
    std::condition_variable cv;
    std::mutex m;
    int exit_confirm = 0; // initialize
};
With the above, main becomes a little cleaner:
int main() {
    int totalThreadCount = 1;
    std::vector<ThreadStruct> perThreadData(totalThreadCount);
    std::cout << "Main thread starting " << perThreadData.size() << " thread(s)\n";

    int i = 0;
    for(auto& threadData : perThreadData) {
        threadData.id = i++;
        threadData.spawn_detached();
    }

    for(auto& threadData : perThreadData) {
        std::cout << "Waiting for mutex" << std::endl;
        std::cout << "i=" << threadData.id << std::endl;
        threadData.wait_for_exit_confirm();
        std::cout << "i=" << threadData.id << " finished!" << std::endl;
    }

    std::cout << "Press enter to continue" << std::endl;
    std::cin.get();
}
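As promised in the last suggestion, here is a minimal sketch (not the answer's code) of the variant that keeps the thread joinable instead of detaching it; the member names (worker, spawn, join) are illustrative:

#include <iostream>
#include <thread>
#include <vector>

struct ThreadStruct {
    int id = 0;
    std::thread worker; // keep the handle instead of detaching

    void spawn() {
        worker = std::thread([this] {
            std::cout << "START: Thread " << id << std::endl;
            std::cout << "FINISH: Thread " << id << std::endl;
        });
    }

    void join() {
        if (worker.joinable())
            worker.join(); // final synchronization; no condition variable or flag needed
    }
};

int main() {
    std::vector<ThreadStruct> perThreadData(1);
    int i = 0;
    for (auto& t : perThreadData) { t.id = i++; t.spawn(); }
    for (auto& t : perThreadData) { t.join(); }
}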
For future interest: I fixed the original MWE posted in the question. There were two issues:
not capturing the local variable in the lambda by reference (see the other answer)
one too many wait() calls
#include <cstdlib> // std::atoi
#include <iostream>
#include <thread>
#include <vector>
#include <random>
#include <future>
#include <condition_variable>
#include <mutex>

struct ThreadStruct
{
    int id;
    std::condition_variable cv;
    std::mutex m;
    int ok_to_exit;
    int exit_confirm;
};

void Pause()
{
    std::cout << "Press enter to continue" << std::endl;
    std::cin.get();
}

void detachedThread(ThreadStruct* threadData)
{
    std::cout << "START: Detached Thread " << threadData->id << std::endl;
    // Performs some arbitrary amount of work.
    for (int i = 0; i < 100000; ++i);
    std::cout << "FINISH: Detached thread " << threadData->id << std::endl;
    std::unique_lock<std::mutex> lock(threadData->m);
    std::cout << "WAIT: Detached thread " << threadData->id << std::endl;
    threadData->cv.wait(lock, [&threadData]{ return threadData->ok_to_exit == 1; });
    std::cout << "EXIT: Detached thread " << threadData->id << std::endl;
    threadData->exit_confirm = 1;
    threadData->cv.notify_all();
    std::cout << "DROP" << std::endl;
}

int main(int argc, char** argv)
{
    int totalThreadCount = 1;
    ThreadStruct* perThreadData = new ThreadStruct[totalThreadCount];
    std::cout << "Main thread starting " << totalThreadCount << " thread(s)" << std::endl;
    for (int i = totalThreadCount - 1; i >= 0; --i)
    {
        perThreadData[i].id = i;
        perThreadData[i].ok_to_exit = 0;
        perThreadData[i].exit_confirm = 0;
        std::thread t(detachedThread, &perThreadData[i]);
        t.detach();
    }
    for (int i{0}; i < totalThreadCount; ++i)
    {
        ThreadStruct *threadData = &perThreadData[i];
        std::cout << "Waiting for lock - main() thread" << std::endl;
        std::unique_lock<std::mutex> lock(perThreadData[i].m);
        std::cout << "Lock obtained - main() thread" << std::endl;
        //perThreadData[i].cv.wait(lock, [&threadData]{ return threadData->ok_to_exit == 1; });
        std::cout << "Wait complete" << std::endl;
        threadData->ok_to_exit = 1;
        threadData->cv.notify_all();
        std::cout << "Done - main() thread" << std::endl;
    }
    for (int i{ 0 }; i < totalThreadCount; ++i)
    {
        std::size_t thread_index = i;
        ThreadStruct& threadData = perThreadData[thread_index];
        std::cout << "Waiting for mutex" << std::endl;
        std::unique_lock<std::mutex> lock(threadData.m);
        std::cout << "i=" << i << std::endl;
        int& exit_confirm = threadData.exit_confirm;
        threadData.cv.wait(lock, [&exit_confirm] { return exit_confirm == 1; });
        std::cout << "i=" << i << " finished!" << std::endl;
    }
    Pause();
    return 0;
}

unpredictable C++ sleep/wait behavior

I must be doing something stupid because I am getting the weirdest behavior from this simple sleep code. Originally I was using std::this_thread::sleep_for and got the same results but assumed it must have been some thread strangeness. However, I am getting the same seemingly out-of-order waiting with the code below. Same results with clang++ or g++. I am on Debian and compiling at the command line.
Expected behavior:
Shutting down in 3... [wait one second] 2... [wait one second] 1... [wait one second; program exit]
Actual behavior:
[3 second long wait] Shutting down in 3... 2... 1... [program exit]
#include <chrono>
#include <iostream>

void Sleep(int i) {
    auto start = std::chrono::high_resolution_clock::now();
    auto now = std::chrono::high_resolution_clock::now();
    while (std::chrono::duration_cast<std::chrono::seconds>(now - start).count() < i)
        now = std::chrono::high_resolution_clock::now();
}

void ShutdownCountdown(int i) {
    if (i <= 0) return;
    std::cout << "Shutting down in ";
    for (; i != 0; --i) {
        std::cout << i << "... ";
        Sleep(1);
    }
    std::cout << std::endl << std::endl;
}

int main(int argc, char *argv[]) {
    ShutdownCountdown(3);
    return 0;
}
Output to std::cout is buffered and, when writing to a terminal, normally isn't flushed until a newline is printed (or the buffer fills up). Since you don't output an end-of-line character, you need to flush explicitly to get the output printed:
std::cout << i << "... " << std::flush;
Unrelated, but note also that the CPU gets a bit hot when you run your program. To save energy, consider changing the busy loop back to a real sleep:
for (; i != 0; --i) {
    std::cout << i << "... " << std::flush;
    std::this_thread::sleep_for(1s);
}
The nifty "1s" syntax is possible with using namespace std::chrono_literals; at the beginning of the program.
#include <chrono>
#include <iostream>

void Sleep(int i) {
    auto start = std::chrono::high_resolution_clock::now();
    auto now = std::chrono::high_resolution_clock::now();
    while (std::chrono::duration_cast<std::chrono::seconds>(now - start).count() < i)
        now = std::chrono::high_resolution_clock::now();
}

void ShutdownCountdown(int i) {
    if (i <= 0) return;
    std::cout << "Shutting down in " << std::flush;
    for (; i != 0; --i) {
        std::cout << i << "... " << std::flush;
        Sleep(1);
    }
    std::cout << std::endl << std::endl;
}

int main(int argc, char *argv[]) {
    ShutdownCountdown(3);
    return 0;
}
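For completeness, a minimal sketch combining both suggestions from the first answer (explicit flushes plus a real sleep instead of the busy loop); it assumes C++14 or later for the chrono literal:

#include <chrono>
#include <iostream>
#include <thread>

void ShutdownCountdown(int i) {
    using namespace std::chrono_literals;
    if (i <= 0) return;
    std::cout << "Shutting down in " << std::flush;
    for (; i != 0; --i) {
        std::cout << i << "... " << std::flush; // flush so each number appears immediately
        std::this_thread::sleep_for(1s);        // real sleep instead of spinning
    }
    std::cout << std::endl << std::endl;
}

int main() {
    ShutdownCountdown(3);
}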

Single producer / multiple consumer deadlock

The following code results in a deadlock. The problem is that I cannot figure out how to unlock the consumers waiting on the condition variable. Each consumer should loop and consume from the stack when a certain condition is met. I've tried exiting when the stack is empty, but of course it doesn't work.
Stack.h
#include <stack>
#include <mutex>
#include <condition_variable>

class Stack {
private:
    std::stack<int> stack;
    std::mutex mutex;
    std::condition_variable is_empty;
    bool done;
public:
    Stack();
    void push(int);
    void pop();
    void print();
    bool isDone() const;
    ~Stack();
};
Stack.cpp
#include <iostream>
#include <sstream>
#include <thread>
#include "Stack.h"

void Stack::push(int x) {
    std::lock_guard lock(mutex);
    std::stringstream msg1;
    msg1 << "producer " << std::this_thread::get_id() << " pushing " << x << std::endl;
    std::cout << msg1.str();
    stack.push(x);
    std::stringstream msg;
    msg << "producer " << std::this_thread::get_id() << ": " << x << " pushed" << std::endl;
    std::cout << msg.str();
    is_empty.notify_all();
}

void Stack::pop() {
    std::unique_lock lock(mutex);
    std::stringstream msg;
    msg << "consumer " << std::this_thread::get_id() << " waiting to consume" << std::endl;
    std::cout << msg.str();
    is_empty.wait(lock, [this] { return !stack.empty(); });
    if (!stack.empty()) {
        stack.pop();
        std::stringstream msg1;
        msg1 << "consumer " << std::this_thread::get_id() << " popped" << std::endl;
        std::cout << msg1.str();
    } else {
        done = true;
        is_empty.notify_all();
    }
}

void Stack::print() {
    std::lock_guard lock(mutex);
    for (int i = 0; i < stack.size(); i++) {
        std::cout << "\t" << stack.top() << std::endl;
    }
}

Stack::~Stack() {
}

bool Stack::isDone() const {
    return done;
}

Stack::Stack() : done(false) {}
main.cpp
#include <thread>
#include <vector>
#include <iostream>
#include "Stack.h"

int main() {
    Stack stack;
    std::vector<std::thread> producer;
    std::vector<std::thread> consumer;
    for (int i = 0; i < 10; i++) {
        consumer.emplace_back([&stack]{
            while (!stack.isDone()) {
                stack.pop();
            }
        });
    }
    for (int i = 0; i < 1; i++) {
        producer.emplace_back([&stack]{
            for (int j = 0; j < 5; ++j) {
                stack.push(random());
            }
        });
    }
    for (int k = 0; k < producer.size(); k++) {
        producer[k].join();
        std::cout << producer[k].get_id() << " joined" << std::endl;
        stack.print();
    }
    for (int j = 0; j < consumer.size(); j++) {
        consumer[j].join();
        std::cout << consumer[j].get_id() << " joined" << std::endl;
        stack.print();
    }
    return 0;
}
Your code is not deadlocked in the strict sense; your threads are waiting for more input because you haven't configured the value of done properly.
There is no way the else branch can ever be reached here:
is_empty.wait(lock, [this] { return !stack.empty(); });
if (!stack.empty()) {
    stack.pop();
    std::stringstream msg1;
    msg1 << "consumer " << std::this_thread::get_id() << " popped" << std::endl;
    std::cout << msg1.str();
} else {
    done = true;
    is_empty.notify_all();
}
Looking at the code, it seems that what you want is for the consumers to wake up and empty the stack once the producer stops producing. But this is not the way to implement it: you should set done = true from the producer side, after it has pushed its 5 elements.
Also, as answered by madducci, you need to change the location of notify_all().
This is something which worked for me:
is_empty.wait(lock, [&] { return stack.size() > 0 || done; });
if (!stack.empty()) {
    int val = stack.top();
    stack.pop();
    std::stringstream msg1;
    msg1 << "consumer " << std::this_thread::get_id() << " popped " << val << std::endl;
    std::cout << msg1.str();
}
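To illustrate the "set done from the producer" suggestion, here is a rough sketch; setDone() is a hypothetical helper that would also need to be declared in Stack.h:

void Stack::setDone() {
    std::lock_guard lock(mutex);
    done = true;
    is_empty.notify_all(); // wake consumers still blocked on an empty stack
}

// main.cpp: the producer signals completion after its last push
producer.emplace_back([&stack]{
    for (int j = 0; j < 5; ++j) {
        stack.push(random());
    }
    stack.setDone();
});

Combined with the wait predicate shown above (stack.size() > 0 || done), the consumers then drain the stack and exit instead of blocking forever.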
Looks like you have a logic error in your pop() function: you never call notify_all() when you pop an element from the stack.
The correct way would be something like this:
void Stack::pop() {
    std::unique_lock lock(mutex);
    std::stringstream msg;
    msg << "consumer " << std::this_thread::get_id() << " waiting to consume" << std::endl;
    std::cout << msg.str();
    is_empty.wait(lock, [this] { return !stack.empty(); });
    if (!stack.empty()) {
        stack.pop();
        std::stringstream msg1;
        msg1 << "consumer " << std::this_thread::get_id() << " popped" << std::endl;
        std::cout << msg1.str();
    } else {
        done = true;
    }
    is_empty.notify_all();
}
You also invoke pop() before push() in your main().

How to avoid firing already destroyed boost::asio::deadline_timer

I'm using multiple boost::asio::deadline_timers on one io_service object. std::shared_ptrs to the boost::asio::deadline_timers are stored, keyed by index, in the container std::map<int, std::shared_ptr<debug_tim>> timers.
In a timer handler, I erase another boost::asio::deadline_timer. However, it seems that the erased timer still often fires with a success error code.
Is there any way to avoid that? I expect the handler corresponding to an erased boost::asio::deadline_timer to always fire with Operation canceled.
Am I missing something?
Here is the code that reproduces the behavior
https://wandbox.org/permlink/G0qzYcqauxdqw4i7
#include <iostream>
#include <memory>
#include <map>
#include <boost/asio.hpp>

// deadline_timer with index ctor/dtor print
struct debug_tim : boost::asio::deadline_timer {
    debug_tim(boost::asio::io_service& ios, int i) : boost::asio::deadline_timer(ios), i(i) {
        std::cout << "debug_tim() " << i << std::endl;
    }
    ~debug_tim() {
        std::cout << "~debug_tim() " << i << std::endl;
    }
    int i;
};

int main() {
    boost::asio::io_service ios;
    std::map<int, std::shared_ptr<debug_tim>> timers;
    {
        for (int i = 0; i != 5; ++i) {
            auto tim = std::make_shared<debug_tim>(ios, i);
            std::cout << "set timer " << i << std::endl;
            tim->expires_from_now(boost::posix_time::seconds(1));
            timers.emplace(i, tim);
            tim->async_wait([&timers, i](auto ec){
                std::cout << "timer fired " << i << " : " << ec.message() << std::endl;
                auto it = timers.find(i);
                if (it == timers.end()) {
                    std::cout << " already destructed." << std::endl;
                }
                else {
                    int other_idx = i + 1; // erase other timer (e.g. i + 1)
                    timers.erase(other_idx);
                    std::cout << " erased " << other_idx << std::endl;
                }
            });
        }
    }
    ios.run();
}
I also tried calling boost::asio::deadline_timer::cancel() before erasing the timer, but I got a similar result. Here is the cancel version:
https://wandbox.org/permlink/uM0yMFufkyn9ipdG
#include <iostream>
#include <memory>
#include <map>
#include <boost/asio.hpp>

// deadline_timer with index ctor/dtor print
struct debug_tim : boost::asio::deadline_timer {
    debug_tim(boost::asio::io_service& ios, int i) : boost::asio::deadline_timer(ios), i(i) {
        std::cout << "debug_tim() " << i << std::endl;
    }
    ~debug_tim() {
        std::cout << "~debug_tim() " << i << std::endl;
    }
    int i;
};

int main() {
    boost::asio::io_service ios;
    std::map<int, std::shared_ptr<debug_tim>> timers;
    {
        for (int i = 0; i != 5; ++i) {
            auto tim = std::make_shared<debug_tim>(ios, i);
            std::cout << "set timer " << i << std::endl;
            tim->expires_from_now(boost::posix_time::seconds(1));
            timers.emplace(i, tim);
            tim->async_wait([&timers, i](auto ec){
                std::cout << "timer fired " << i << " : " << ec.message() << std::endl;
                auto it = timers.find(i);
                if (it == timers.end()) {
                    std::cout << " already destructed." << std::endl;
                }
                else {
                    int other_idx = i + 1; // erase other timer (e.g. i + 1)
                    auto other_it = timers.find(other_idx);
                    if (other_it != timers.end()) {
                        other_it->second->cancel();
                        timers.erase(other_it);
                    }
                    std::cout << " erased " << other_idx << std::endl;
                }
            });
        }
    }
    ios.run();
}
Edit
Felix, thank you for the answer. I understand the boost::asio::deadline_timer::cancel() behavior now. I always need to take care of the lifetime of the boost::asio::deadline_timer. In the actual code of my project, the boost::asio::deadline_timer is a member variable of another object, such as a session object, and the timer handler accesses that object. That's dangerous.
I thought about how to write safe code and came up with using std::weak_ptr to check the object's lifetime.
Here is the updated code:
#include <iostream>
#include <memory>
#include <map>
#include <boost/asio.hpp>

// deadline_timer with index ctor/dtor print
struct debug_tim : boost::asio::deadline_timer {
    debug_tim(boost::asio::io_service& ios, int i) : boost::asio::deadline_timer(ios), i(i) {
        std::cout << "debug_tim() " << i << std::endl;
    }
    ~debug_tim() {
        std::cout << "~debug_tim() " << i << std::endl;
    }
    int i;
};

int main() {
    boost::asio::io_service ios;
    std::map<int, std::shared_ptr<debug_tim>> timers;
    {
        for (int i = 0; i != 5; ++i) {
            auto tim = std::make_shared<debug_tim>(ios, i);
            std::cout << "set timer " << i << std::endl;
            tim->expires_from_now(boost::posix_time::seconds(1));
            timers.emplace(i, tim);
            // Capture tim as the weak_ptr wp
            tim->async_wait([&timers, i, wp = std::weak_ptr<debug_tim>(tim)](auto ec){
                std::cout << "timer fired " << i << " : " << ec.message() << std::endl;
                // Check the lifetime of wp
                if (!wp.lock()) std::cout << " timer freed." << std::endl; // return here on actual code
                auto it = timers.find(i);
                if (it == timers.end()) {
                    std::cout << " already destructed." << std::endl;
                }
                else {
                    int other_idx = i + 1; // erase other timer (e.g. i + 1)
                    timers.erase(other_idx);
                    std::cout << " erased " << other_idx << std::endl;
                }
            });
        }
    }
    ios.run();
}
Is this a good way to avoid accessing a deleted object that owns the boost::asio::deadline_timer?
Edit
My weak_ptr solution works well.
See
How to avoid firing already destroyed boost::asio::deadline_timer
According to the reference of deadline_timer::cancel:
If the timer has already expired when cancel() is called, then the handlers for asynchronous wait operations will:
have already been invoked; or
have been queued for invocation in the near future.
These handlers can no longer be cancelled, and therefore are passed an error code that indicates the successful completion of the wait operation.
From this we know that calling cancel() cannot cancel a handler that has already been queued for invocation.
Also, deadline_timer doesn't seem to declare its own destructor (there is no destructor in the member list of deadline_timer).
In your code snippet, all the timers expire at almost the same time, so it's quite probable that by the time one completion handler is called, the others have already been queued.
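To make the lifetime concern from the question's edit concrete: below is a minimal sketch (not from the question) of a hypothetical session object whose timer handler captures only a std::weak_ptr and bails out if the session has already been destroyed. The class and member names are illustrative.

#include <iostream>
#include <memory>
#include <boost/asio.hpp>

struct session : std::enable_shared_from_this<session> {
    explicit session(boost::asio::io_service& ios) : timer(ios) {}

    void start() {
        timer.expires_from_now(boost::posix_time::seconds(1));
        // Capture only a weak_ptr so the handler can detect that the session is gone.
        timer.async_wait([wp = std::weak_ptr<session>(shared_from_this())](auto ec) {
            auto self = wp.lock();
            if (!self) {
                std::cout << "session already destroyed, ignoring timer" << std::endl;
                return;
            }
            std::cout << "timer fired: " << ec.message() << std::endl;
            // safe to touch self-> members here
        });
    }

    boost::asio::deadline_timer timer;
};

int main() {
    boost::asio::io_service ios;
    auto s = std::make_shared<session>(ios);
    s->start();
    s.reset();   // destroy the session (and its timer) before the handler runs
    ios.run();   // the handler still runs, but detects the dead session and returns
}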

boost::asio::ioservice threadpool is running all code on the same thread ID

I am using boost::asio::io_service to create a thread pool with 100 threads.
In a while loop I want to post 5 jobs that do this work:
void dowork(int i) {
    std::cout << "hello" << std::endl;
    cout << " thread ID :" << boost::this_thread::get_id();
}
Then I call work.reset().
Despite the thread pool having 100 threads, the 5 posted jobs are not spread across them; when I print the thread ID, it is the same for all 5 jobs.
So it's not executing in parallel... why is that?
int main() {
    int ch;
    int i;
    boost::asio::io_service ioservice;
    boost::thread_group threadpool;
    auto_ptr<boost::asio::io_service::work> work(
        new boost::asio::io_service::work(ioservice)
    );
    for(i = 0; i < 100; i++) {
        threadpool.create_thread(
            boost::bind(&boost::asio::io_service::run, &ioservice)
        );
    }
    ch = 0;
    while(ch <= 5) {
        ch++;
        cout << "in main" << boost::this_thread::get_id() << endl;
        for(i = 0; i < 5; i++) {
            ioservice.post(boost::bind(dowork, 10));
        }
        std::cout << "size=" << threadpool.size() << std::endl;
        work.reset();
        ioservice.reset();
        ioservice.run();
    }
}
If you look closely, the first batch of tasks is executed on all threads of the "pool". See it Live On Coliru
However, at the end of the loop, you reset the work. This causes all the threads to exit [¹]. It's no surprise that your main thread will be the sole handling thread for subsequent tasks posted.
I'd suggest not making the main thread invoke run() there at all. Also, using 100 threads is a (big) anti-pattern. It's unusual for so many workers to make any sense.
#include <iostream>
#include <boost/asio.hpp>
#include <boost/thread.hpp>

boost::atomic_int tid_gen(0);

void dowork(int i) {
    thread_local int tid = ++tid_gen;
    std::cout << "hello";
    std::cout << " thread ID :" << tid << "\n";
}

int main() {
    boost::asio::io_service ioservice;
    boost::thread_group group;
    boost::optional<boost::asio::io_service::work> work(boost::asio::io_service::work{ioservice});

    for(size_t i = 0; i < boost::thread::hardware_concurrency(); i++) {
        group.create_thread(
            boost::bind(&boost::asio::io_service::run, &ioservice)
        );
    }

    std::cout << "in main thread size=" << group.size() << std::endl;

    for(int i = 0; i <= 5; ++i) {
        for(int i = 0; i < 5; i++) {
            ioservice.post(boost::bind(dowork, 10));
        }
        boost::this_thread::sleep_for(boost::chrono::milliseconds(600));
        std::cout << "waking up\n";
    }

    work.reset();     // allow threads to exit
    group.join_all(); // await proper thread shutdown
}
See it Live On Coliru
[¹] (In fact, if you didn't reset the work, the threads would still be running at program exit, leading to undefined behaviour.)