I was wondering if anyone has good design suggestions for a job queue that notifies a processJob() function when the number of pending tasks is greater than zero. I'm using Boost and C++ and am just trying to get a general idea of such a design. Thanks.
I would run processJob() in a separate thread, which uses a "condition variable" to gate whether it's running; whenever you add something to the queue, notify that c.v.
The loop logic is something like:
while (!terminate)
{
    boost::unique_lock<boost::mutex> lock(mymutex);
    while (Q.empty())           // wait while there is nothing to process
        jobCV.wait(lock);
    pItem = Q.front();
    Q.pop();
    lock.unlock();              // don't hold the lock while processing
    pItem->process();
}
Remember that adding items to the queue also needs to lock the same mutex. Also, you'll need to test the terminate flag before that wait(); and the code that sets terminate must also call notify() on the c.v.
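A minimal sketch of the producer and shutdown side under the same assumptions (the JobPtr alias and the function names here are just illustrative):
void addJob(JobPtr pItem)
{
    boost::lock_guard<boost::mutex> lock(mymutex);
    Q.push(pItem);
    jobCV.notify_one();            // wake the worker if it is waiting
}
void requestStop()
{
    boost::lock_guard<boost::mutex> lock(mymutex);
    terminate = true;              // the wait loop must also test this flag
    jobCV.notify_all();            // wake the worker so it can observe terminate
}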
If you already use the Boost library, it is convenient to just use boost::asio. An io_service object can manage a queue of jobs, since it guarantees that callbacks are invoked in the order they are posted, and there is no hassle with locks if you run the io_service in only one thread. Some sample code:
#include <boost/asio.hpp>
#include <boost/bind.hpp>
#include <boost/thread.hpp>
#include <boost/shared_ptr.hpp>
#include <iostream>
#include <cstdlib>
int job_number = 0;
struct Job
{
virtual void run() { std::cout << "job " << ++job_number << " done" << '\n'; }
virtual ~Job() {}
};
class Processor
{
private:
boost::asio::io_service ioserv_;
boost::asio::io_service::work work_; // keeps run() from returning while the queue is empty
boost::thread thread_;
public:
Processor() : ioserv_(), work_(ioserv_) {
}
void run() {
ioserv_.reset();
thread_ = boost::thread (boost::bind(&boost::asio::io_service::run, &ioserv_));
}
void stop() {
ioserv_.stop();
}
~Processor() {
stop();
if (thread_.get_id() != boost::thread::id())
thread_.join();
}
void processJob(boost::shared_ptr<Job> j)
{
j->run();
}
void addJob(boost::shared_ptr<Job> j)
{
ioserv_.post(boost::bind(&Processor::processJob, this, j));
}
};
int main()
{
Processor psr;
psr.run();
for (int i=0; i<10; ++i)
psr.addJob(boost::shared_ptr<Job>(new Job));
boost::this_thread::sleep(boost::posix_time::seconds(1)); // give the worker thread time to drain the queue
return 0;
}
I have one main thread that sends an async job to a task queue on another thread. The main thread can also trigger a destroy action at any time, which can crash the program inside the async task. A very much simplified piece of code looks like this:
#include <iostream>
#include <mutex>
#include <thread>
#include <unistd.h>
class Bomb {
public:
    int trigger;
    std::mutex my_mutex;
};
void f1(Bomb *b) {
    std::lock_guard<std::mutex> lock(b->my_mutex); // won't work! b may already have been destructed!
    sleep(1);
    std::cout << "wake up.." << b->trigger << "..." << std::endl;
}
int main()
{
    Bomb *b = new Bomb();
    b->trigger = 1;
    std::thread t1(f1, b);
    sleep(1);
    // locking here won't work either
    delete b; // in the actual case this is triggered by outside users
    t1.join();
    return 0;
}
The lock in f1 won't work, since the destructor may run first and reading the destroyed mutex will crash. Putting a lock in the destructor, or before the delete, won't work either, for the same reason.
So is there a better way to handle this situation? Do I have to move the mutex to global scope and lock it inside the destructor to solve the issue?
In code, my comment (tie the asynchronous task's lifetime to the object that owns the mutex, so the future's destructor waits for the task before the mutex is destroyed) looks like this:
#include <future>
#include <mutex>
#include <iostream>
#include <chrono>
#include <thread>
// do not use : using namespace std;
class Bomb
{
public:
void f1()
{
m_future = std::async(std::launch::async,[this]
{
async_f1();
});
}
private:
void async_f1()
{
using namespace std::chrono_literals;
std::lock_guard<std::mutex> lock{ m_mtx };
std::cout << "wake up..\n";
std::this_thread::sleep_for(1s);
std::cout << "thread done.\n";
}
std::future<void> m_future;
std::mutex m_mtx;
};
int main()
{
{
std::cout << "Creating bomb\n";
Bomb b; // no need for an unnecessary new
b.f1();
}
std::cout << "Bomb destructed\n";
return 0;
}
I wrote this sample program to mimic what I'm trying to do in a larger program.
I have some data that comes from the user and is passed into a thread for some processing. I am using mutexes around the data and atomic flags to signal when there is data.
Using the lambda expression, is a pointer to *this sent to the thread? I seem to be getting the behavior I expect from the cout statement.
Are the mutexes used properly around the data?
Is putting the atomics and mutexes as a private member of the class a good move?
FooClass.h
#pragma once
#include <atomic>
#include <thread>
#include <vector>
#include <mutex>
class Foo
{
public:
Foo();
~Foo();
void StartThread();
void StopThread();
void SendData();
private:
std::atomic<bool> dataFlag;
std::atomic<bool> runBar;
void bar();
std::thread t1;
std::vector<int> data;
std::mutex mx;
};
FooClass.cpp
#include "FooClass.h"
#include <thread>
#include <string>
#include <iostream>
Foo::Foo()
{
dataFlag = false;
}
Foo::~Foo()
{
StopThread();
}
void Foo::StartThread()
{
runBar = true;
t1 = std::thread([=] {bar(); });
return;
}
void Foo::StopThread()
{
runBar = false;
if(t1.joinable())
t1.join();
return;
}
void Foo::SendData()
{
mx.lock();
for (int i = 0; i < 5; ++i) {
data.push_back(i);
}
mx.unlock();
dataFlag = true;
}
void Foo::bar()
{
while (runBar)
{
if(dataFlag)
{
mx.lock();
for(auto it = data.begin(); it < data.end(); ++it)
{
std::cout << *it << '\n';
}
mx.unlock();
dataFlag = false;
}
}
}
main.cpp
#include "FooClass.h"
#include <iostream>
#include <string>
int main()
{
Foo foo1;
std::cout << "Type anything to end thread" << std::endl;
foo1.StartThread();
foo1.SendData();
// type something to end threads
char a;
std::cin >> a;
foo1.StopThread();
return 0;
}
You ensure that the thread is joined using RAII techniques? Check.
All data access/modification is either protected through atomics or mutexs? Check.
Mutex locking uses std::lock_guard? Nope. Using std::lock_guard wraps your lock() and unlock() calls with RAII. This ensures that even if an exception occurs while the lock is held, the lock is released.
Is putting the atomics and mutexes as a private member of the class a good move?
It's neither good nor bad, but in this scenario, where Foo is a wrapper for a std::thread that does work and controls the synchronization, it makes sense.
Using the lambda expression, is a pointer to *this sent to the thread?
Yes, you can also do t1 = std::thread([this]{bar();}); to make it more explicit.
As it stands, with your dataFlag assignments outside the locks, you may encounter problems. Suppose you call SendData twice, and bar processes the first batch but is halted just before setting dataFlag = false; the second call then adds its data and sets the flag to true, only for bar to immediately set it back to false. You now have data that has been "sent", but bar doesn't think there is anything to process.
There may be other tricky situations, but this was just one example; moving the flag assignment inside the lock clears up that problem.
For example, your SendData should look like:
void Foo::SendData()
{
std::lock_guard<std::mutex> guard(mx);
for (int i = 0; i < 5; ++i) {
data.push_back(i);
}
dataFlag = true;
}
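The processing loop in bar() can be tightened up the same way; a sketch using std::lock_guard, with the flag reset inside the lock (the data.clear() line is an optional addition, not in your original):
void Foo::bar()
{
    while (runBar)
    {
        if (dataFlag)
        {
            std::lock_guard<std::mutex> guard(mx); // released automatically, even on exceptions
            for (auto it = data.begin(); it != data.end(); ++it)
                std::cout << *it << '\n';
            data.clear();        // optional: drop items that have already been printed
            dataFlag = false;    // reset inside the lock, mirroring SendData
        }
    }
}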
Hello,
I am quite new to C++, but I have 6 years of Java experience, 2 years of C experience and some knowledge of concurrency basics. I am trying to create a thread pool to handle tasks; it is below, with the associated test main.
It seems like the error is generated from
void ThreadPool::ThreadHandler::enqueueTask(void (*task)(void)) {
std::lock_guard<std::mutex> lock(queueMutex);
as reported by my debugger. But doing traditional cout debugging, I found out that it sometimes works without segfaulting, and that removing
threads.emplace(handler->getSize(), handler);
from ThreadPool::enqueueTask() improves stability greatly.
Overall I think it is related to my bad use of the condition_variable (called idler).
Compiler: MinGW-w64 in CLion
.cpp
#include <iostream>
#include "ThreadPool.h"
ThreadPool::ThreadHandler::ThreadHandler(ThreadPool *parent) : parent(parent) {
thread = std::thread([&]{
while (this->parent->alive){
if (getSize()){
std::lock_guard<std::mutex> lock(queueMutex);
(*(queue.front()))();
queue.pop_front();
} else {
std::unique_lock<std::mutex> lock(idlerMutex);
idler.wait(lock);
}
}
});
}
void ThreadPool::ThreadHandler::enqueueTask(void (*task)(void)) {
std::lock_guard<std::mutex> lock(queueMutex);
queue.push_back(task);
idler.notify_all();
}
size_t ThreadPool::ThreadHandler::getSize() {
std::lock_guard<std::mutex> lock(queueMutex);
return queue.size();
}
void ThreadPool::enqueueTask(void (*task)(void)) {
std::lock_guard<std::mutex> lock(threadsMutex);
std::multimap<int, ThreadHandler*>::iterator iter = threads.begin();
threads.erase(iter);
ThreadHandler *handler = iter->second;
handler->enqueueTask(task);
threads.emplace(handler->getSize(), handler);
}
ThreadPool::ThreadPool(size_t size) {
for (size_t i = 0; i < size; ++i) {
std::lock_guard<std::mutex> lock(threadsMutex);
ThreadHandler *handler = new ThreadHandler(this);
threads.emplace(handler->getSize(), handler);
}
}
ThreadPool::~ThreadPool() {
std::lock_guard<std::mutex> lock(threadsMutex);
auto it = threads.begin(), end = threads.end();
for (; it != end; ++it) {
delete it->second;
}
}
.h
#ifndef WLIB_THREADPOOL_H
#define WLIB_THREADPOOL_H
#include <mutex>
#include <thread>
#include <list>
#include <map>
#include <condition_variable>
class ThreadPool {
private:
class ThreadHandler {
std::condition_variable idler;
std::mutex idlerMutex;
std::mutex queueMutex;
std::thread thread;
std::list<void (*)(void)> queue;
ThreadPool *parent;
public:
ThreadHandler(ThreadPool *parent);
void enqueueTask(void (*task)(void));
size_t getSize();
};
std::multimap<int, ThreadHandler*> threads;
std::mutex threadsMutex;
public:
bool alive = true;
ThreadPool(size_t size);
~ThreadPool();
virtual void enqueueTask(void (*task)(void));
};
#endif //WLIB_THREADPOOL_H
main:
#include <iostream>
#include "ThreadPool.h"
ThreadPool pool(3);
void fn() {
std::cout << std::this_thread::get_id() << '\n';
pool.enqueueTask(fn);
};
int main() {
std::cout << "Hello, World!" << std::endl;
pool.enqueueTask(fn);
return 0;
}
Your main() function invokes enqueueTask().
Immediately afterwards, your main() returns.
This gets the gears in motion for winding down your process. This involves invoking the destructors of all global objects.
ThreadPool's destructor then proceeds to delete all of the dynamically-allocated ThreadHandler objects.
While the threads are still running. Hilarity ensues.
You need to implement the process for an orderly shutdown of all threads.
This means setting alive to false, kicking all of the threads in the shins, and then joining all threads, before letting nature take its course and finally destroying everything.
P.S. -- you also need to fix how alive is being checked, and make access to alive thread-safe, protected by a mutex. The problem is that the thread could be holding a lock on one of two different mutexes, which makes this process somewhat complicated. Some redesign is in order here.
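A sketch of what that orderly shutdown might look like, assuming ThreadHandler grows a stop() member (names here are illustrative, and the alive flag still needs the thread-safety rework described above):
void ThreadPool::ThreadHandler::stop() {
    idler.notify_all();            // kick the thread out of idler.wait()
                                   // (the wait should also use a predicate so a notify cannot be missed)
    if (thread.joinable())
        thread.join();             // wait for the loop to observe alive == false and exit
}
ThreadPool::~ThreadPool() {
    alive = false;                 // make this thread-safe (atomic or mutex-protected)
    std::lock_guard<std::mutex> lock(threadsMutex);
    for (auto &entry : threads) {
        entry.second->stop();      // kick and join before destroying
        delete entry.second;
    }
}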
Can anybody explain to me why this program does not terminate (see the comments)?
#include <boost/asio/io_service.hpp>
#include <boost/asio.hpp>
#include <memory>
#include <cstdio>
#include <iostream>
#include <future>
class Service {
public:
~Service() {
std::cout << "Destroying...\n";
io_service.post([this]() {
std::cout << "clean and stop\n"; // does not get called
// do some cleanup
// ...
io_service.stop();
std::cout << "Bye!\n";
});
std::cout << "...destroyed\n"; // last printed line, blocks
}
void operator()() {
io_service.run();
std::cout << "run completed\n";
}
private:
boost::asio::io_service io_service;
boost::asio::io_service::work work{io_service};
};
struct Test {
void start() {
f = std::async(std::launch::async, [this]() { service(); std::cout << "exiting thread\n";});
}
std::future<void> f;
Service service;
};
int main(int argc, char* argv[]) {
{
Test test;
test.start();
std::string exit;
std::cin >> exit;
}
std::cout << "exiting program\n"; // never printed
}
The real issue is that destruction of io_service is (obviously) not thread-safe.
Just reset the work and join the thread. Optionally, set a flag so your IO operations know shutdown is in progress.
Your Test and Service classes are trying to share responsibility for the IO service; that doesn't work. Here's a much simplified version, merging the classes and dropping the unused future.
Live On Coliru
The trick was to make the work object optional<>:
#include <boost/asio.hpp>
#include <boost/optional.hpp>
#include <iostream>
#include <thread>
struct Service {
~Service() {
std::cout << "clean and stop\n";
io_service.post([this]() {
work.reset(); // let io_service run out of work
});
if (worker.joinable())
worker.join();
}
void start() {
assert(!worker.joinable());
worker = std::thread([this] { io_service.run(); std::cout << "exiting thread\n";});
}
private:
boost::asio::io_service io_service;
std::thread worker;
boost::optional<boost::asio::io_service::work> work{io_service};
};
int main() {
{
Service test;
test.start();
std::cin.ignore(1024, '\n');
std::cout << "Start shutdown\n";
}
std::cout << "exiting program\n"; // never printed
}
Prints
Start shutdown
clean and stop
exiting thread
exiting program
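If you also want in-flight handlers to notice the shutdown (the optional flag mentioned above), one possible sketch adds a hypothetical shutting_down_ member (requires <atomic>):
std::atomic<bool> shutting_down_{false};   // extra member, checked by long-running handlers
~Service() {
    shutting_down_ = true;                 // handlers can test this and bail out early
    std::cout << "clean and stop\n";
    io_service.post([this]() { work.reset(); });   // let io_service run out of work
    if (worker.joinable())
        worker.join();
}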
See here: boost::asio hangs in resolver service destructor after throwing out of io_service::run()
I think the trick here is to destroy the worker (the work member) before calling io_service.stop(). I.e. in this case the work could be a unique_ptr, and you could call reset() on it explicitly before stopping the service.
EDIT: The above helped me some time ago in my case, where io_service::stop() didn't stop and was waiting for some dispatched events which never happened.
However, I reproduced the problem you have on my machine, and it seems to be a race condition inside the io_service: a race between io_service::post() and the io_service destruction code (shutdown_service). In particular, if shutdown_service() is triggered before the post() notification wakes up the other thread, the shutdown_service() code removes the operation from the queue (and "destroys" it instead of calling it), so the lambda is never called.
For now it seems to me that you'd need to call io_service.stop() directly in the destructor, not postpone it via post(), as that apparently does not work here because of the race.
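In other words, something along these lines (a sketch based on the original Service class; the lifetime/ordering caveats from the other answers still apply):
~Service() {
    std::cout << "Destroying...\n";
    // do some cleanup
    // ...
    io_service.stop();               // stop directly instead of posting the stop through the service
    std::cout << "...destroyed\n";
}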
I was able to fix the problem by rewriting your code like so:
class Service {
public:
~Service() {
std::cout << "Destroying...\n";
work.reset();
std::cout << "...destroyed\n"; // last printed line, blocks
}
void operator()() {
io_service.run();
std::cout << "run completed\n";
}
private:
boost::asio::io_service io_service;
std::unique_ptr<boost::asio::io_service::work> work = std::make_unique<boost::asio::io_service::work>(io_service);
};
However, this is largely a bandaid solution.
The problem lies in your design ethos; specifically, in choosing not to tie the lifetime of the executing thread directly to the io_service object:
struct Test {
void start() {
f = std::async(std::launch::async, [this]() { service(); std::cout << "exiting thread\n";});
}
std::future<void> f; //Constructed First, deleted last
Service service; //Constructed second, deleted first
};
In this particular scenario, the thread is going to keep trying to execute io_service.run() past the lifetime of the io_service object itself. If more than the basic work object were executing on the service, you would very quickly run into undefined behavior from calling member functions of destroyed objects.
You could reverse the order of the member objects in Test:
struct Test {
void start() {
f = std::async(std::launch::async, [this]() { service(); std::cout << "exiting thread\n";});
}
Service service;
std::future<void> f;
};
But it still represents a significant design flaw.
The way that I usually implement anything which uses io_service is to tie its lifetime to the threads that are actually going to be executing on it.
class Service {
public:
Service(size_t num_of_threads = 1) :
work(std::make_unique<boost::asio::io_service::work>(io_service))
{
for (size_t thread_index = 0; thread_index < num_of_threads; thread_index++) {
threads.emplace_back([this] {io_service.run(); });
}
}
~Service() {
work.reset();
for (std::thread & thread : threads)
thread.join();
}
private:
boost::asio::io_service io_service;
std::unique_ptr<boost::asio::io_service::work> work;
std::vector<std::thread> threads;
};
Now, if you have any infinite loops active on any of these threads, you'll still need to make sure you properly clean those up, but at least the code specific to the operation of this io_service is cleaned up correctly.
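For those long-running handlers, one common pattern (sketched here; the stop_ flag and tick() are assumptions, not part of the class above) is an atomic flag that each handler checks before re-posting itself:
std::atomic<bool> stop_{false};              // hypothetical extra member of Service
void tick() {
    if (stop_) return;                       // break the self-posting loop during shutdown
    // ... one unit of work ...
    io_service.post([this] { tick(); });     // schedule the next iteration
}
The destructor would then set stop_ = true before resetting work and joining the threads.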
Why, in this simple class, is print() invoked if I call io.run() directly, but not invoked if I hand the run() off to another thread?
#include <iostream>
#include <boost/thread.hpp>
#include <boost/asio.hpp>
using namespace std;
class test
{
public:
test()
{
io.post(boost::bind(&test::print, this));
//io.run();
t = boost::thread(boost::bind(&boost::asio::io_service::run, &io));
}
void print()
{
cout << "test..." << endl;
}
private:
boost::thread t;
boost::asio::io_service io;
};
int main()
{
test();
return 0;
}
The thread object is being destroyed before allowing the io_service to completely run. The thread destructor documentation states:
[...] the programmer must ensure that the destructor is never executed while the thread is still joinable.
If BOOST_THREAD_PROVIDES_THREAD_DESTRUCTOR_CALLS_TERMINATE_IF_JOINABLE is defined, the program would abort as the thread destructor would call std::terminate().
If the io_service should run to completion, then consider joining the thread within Test's destructor. Here is a complete example that demonstrates synchronizing on the thread's completion:
#include <iostream>
#include <boost/asio.hpp>
#include <boost/thread.hpp>
class test
{
public:
test()
{
io.post(boost::bind(&test::print, this));
t = boost::thread(boost::bind(&boost::asio::io_service::run, &io));
}
~test()
{
if (t.joinable())
t.join();
}
void print()
{
std::cout << "test..." << std::endl;
}
private:
boost::thread t;
boost::asio::io_service io;
};
int main()
{
test();
return 0;
}
Output:
test...
io_service::run() will process all outstanding tasks and return when there is nothing left to do. If you don't call it, nothing happens. But if you do something like this:
boost::asio::io_service::work work(io);
then the thread calling run() will keep running and processing posted handlers until you stop the io_service (or destroy the work object) one way or another.
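A minimal, self-contained sketch of that pattern (not from the original post): keep a work object alive so run() blocks, and destroy it when you want run() to return:
#include <boost/asio.hpp>
#include <boost/thread.hpp>
#include <iostream>
#include <memory>
int main()
{
    boost::asio::io_service io;
    std::unique_ptr<boost::asio::io_service::work> work(new boost::asio::io_service::work(io));
    boost::thread t([&io] { io.run(); });        // run() blocks as long as the work object exists
    io.post([] { std::cout << "handler\n"; });   // posting is safe from any thread
    work.reset();                                // no more work: run() returns after pending handlers
    t.join();
    return 0;
}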