C++ condition variable notification not working as expected

I'm trying to launch a new thread as soon as the work in the previous worker_thread has started, regardless of whether that work has finished yet. I've replaced the actual start and end of the work with time delays. My code is:
#include <iostream>
#include <string>
#include <mutex>
#include <condition_variable>
#include <future>
#include <atomic>
#include <chrono>
#include <thread>
std::mutex m;
std::condition_variable cv;
bool started = false;
void worker_thread()
{
    std::unique_lock<std::mutex> lk(m);
    static std::atomic<int> count(1);
    std::this_thread::sleep_for(std::chrono::milliseconds{(count % 5) * 100});
    std::cerr << "Start Worker thread: " << count << "\n";
    started = true;
    lk.unlock();
    cv.notify_one();
    std::this_thread::sleep_for(std::chrono::milliseconds{3000});
    std::cerr << "Exit Worker thread: " << count << "\n";
    ++count;
}

int main()
{
    while (1) {
        std::async(std::launch::async, worker_thread);
        std::unique_lock<std::mutex> lk(m);
        cv.wait(lk, []{ return started; });
        started = false;
    }
}
The output looks like this:
Start Worker thread: 1
Exit Worker thread: 1
Start Worker thread: 2
Exit Worker thread: 2
Start Worker thread: 3
Exit Worker thread: 3
Start Worker thread: 4
Exit Worker thread: 4
Start Worker thread: 5
Exit Worker thread: 5
which isn't the behavior I wanted. What I wanted was something like (not exactly) this:
Start Worker thread: 1
Start Worker thread: 2
Start Worker thread: 3
Start Worker thread: 4
Exit Worker thread: 1
Exit Worker thread: 3
Exit Worker thread: 4
Exit Worker thread: 2
Start Worker thread: 5
Exit Worker thread: 5
Currently the next thread is only started once the work in the previous thread has finished. But I want to start the next thread as soon as work has started in the previous thread, without waiting for its end, only for its start.

std::async returns a std::future holding the result of the function execution. In your case, it is a temporary object that is immediately destroyed. The documentation for the std::future destructor says:
these actions will not block for the shared state to become ready, except that it may block if all of the following are true:
✔ the shared state was created by a call to std::async
✔ the shared state is not yet ready
✔ this was the last reference to the shared state
All of those are true here, so the destruction of that future blocks until the worker function finishes executing.
You can create a detached thread to avoid this problem:
std::thread(worker_thread).detach();
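For example, keeping the rest of the code from the question unchanged, the main loop could become something like this (a minimal sketch; note the detached threads are never joined, so a real program would need some way to wait for them before exiting):
int main()
{
    while (true) {
        std::thread(worker_thread).detach();   // don't block on the worker's completion
        std::unique_lock<std::mutex> lk(m);
        cv.wait(lk, []{ return started; });    // only wait until the worker reports it has started
        started = false;
    }
}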

Related

Accurate timer based processing implementation using condition variable

I need a thread to perform processing every second, accurately. If the worker thread is busy on some operation that takes more than one second, I want it to miss the 1s expiry notification and perform the processing in the next cycle.
I am trying to implement this using two threads. One thread is a worker thread, another thread sleeps for one second and notifies the worker thread via condition variable.
Code is shown below
Worker thread
while (!threadExit) {
    std::unique_lock<std::mutex> lock(mutex);
    // Block until a signal is received
    condVar.wait(lock, [this]() { return (threadExit || performProc); });
    if (threadExit) {
        break;
    }
    // Perform the processing
    ..............
}
Timer thread
while (!threadExit)
{
    {
        std::unique_lock<std::mutex> lock(mutex);
        performProc = false;
    }
    std::this_thread::sleep_for(std::chrono::milliseconds(1000));
    if (threadExit) {
        break;
    }
    {
        std::unique_lock<std::mutex> lock(mutex);
        performProc = true;
    }
    condVar.notify_one();
}
Please note that the variable threadExit is set by the main thread under the mutex lock, and the worker thread is then notified. The timer thread can see this flag when it wakes up (which should be fine for my implementation).
Do you think performProc may be set to false again before the worker thread sees it as true? If so, can you please shed some light on how to tackle this problem? Thanks!
Unless threadExit is atomic, the code exhibits undefined behavior (a race condition). All accesses to threadExit must be protected by the mutex, including the reads in while(!threadExit) and if(threadExit).
But there's no need to do any of this. You can run everything in the same thread if you use sleep_until (and a steady clock) instead of sleep_for.
#include <chrono>
#include <iostream>
#include <thread>
using namespace std::literals;
void do_work() {
    // Note: streaming a time_point to std::cout requires C++20.
    std::cout << "Work # " << std::chrono::system_clock::now() << std::endl;
}

int main() {
    while (true) {
        auto t = std::chrono::ceil<std::chrono::seconds>(std::chrono::steady_clock::now() + 600ms);
        std::this_thread::sleep_until(t);
        do_work();
    }
}
Output:
Work # 2022-03-04 09:56:51.0148904
Work # 2022-03-04 09:56:52.0134687
Work # 2022-03-04 09:56:53.0198704
Work # 2022-03-04 09:56:54.0010437
Work # 2022-03-04 09:56:55.0148975
. . .

C++ thread that starts several threads

I am trying to write a program that has to run two tasks periodically.
That is, for example, run task 1 every 10 seconds, and run task 2 every 20 seconds.
What I am thinking is to create two threads, each one with a timer. Thread 1 launches a new thread with task 1 every 10 seconds, and thread 2 launches a new thread with task 2 every 20 seconds.
My doubt is: how do I launch a new task 1 if the previous task 1 hasn't finished?
while (true)
{
    thread t1(task1);
    this_thread::sleep_for(std::chrono::seconds(10));
    t1.join();
}
I was trying this, but this way it will only launch a new task 1 when the previous one finishes.
EDIT:
Basically I want to implement a task scheduler.
Run task1 every X seconds.
Run task2 every Y seconds.
I was thinking of something like this:
thread t1(timer1);
thread t2(timer2);

void timer1()
{
    while (true)
    {
        thread t(task1);
        t.detach();
        sleep(X);
    }
}
the same for timer2 and task2
Perhaps you could create a periodic_task handler that is responsible for scheduling one task every t seconds. Then you can launch a periodic_task with a specific function and time duration from anywhere in your program.
Below I've sketched something out. One valid choice is to detach the thread and let it run forever. Another is to include cancellation to allow the parent thread to cancel/join. I've included functionality to allow the latter (though you could still just detach/forget).
#include <chrono>
#include <condition_variable>
#include <functional>
#include <iostream>
#include <mutex>
#include <string>
#include <thread>

class periodic_task
{
    std::chrono::seconds d_;
    std::function<void()> task_;
    std::mutex mut_;
    std::condition_variable cv_;
    bool cancel_{false};

public:
    periodic_task(std::function<void()> task, std::chrono::seconds s)
        : d_{s}
        , task_(std::move(task))
    {}

    void
    operator()()
    {
        std::unique_lock<std::mutex> lk{mut_};
        auto until = std::chrono::steady_clock::now();
        while (true)
        {
            while (!cancel_ && std::chrono::steady_clock::now() < until)
                cv_.wait_until(lk, until);
            if (cancel_)
                return;
            lk.unlock();
            task_();
            lk.lock();
            until += d_;
        }
    }

    void cancel()
    {
        std::unique_lock<std::mutex> lk{mut_};
        cancel_ = true;
        cv_.notify_one();
    }
};

void
short_task()
{
    std::cerr << "short\n";
}

void
long_task(int i, const std::string& message)
{
    std::cerr << "long " << message << ' ' << i << '\n';
}

int
main()
{
    using namespace std::chrono_literals;
    periodic_task task_short{short_task, 7s};
    periodic_task task_long{[](){ long_task(5, "Hi"); }, 13s};
    std::thread t1{std::ref(task_short)};
    std::this_thread::sleep_for(200ms);
    std::thread t2{std::ref(task_long)};
    std::this_thread::sleep_for(1min);
    task_short.cancel();
    task_long.cancel();
    t1.join();
    t2.join();
}
You want to avoid using thread::join(); it, by definition, waits for the thread to finish. Instead, use thread::detach before sleeping, so the loop doesn't need to wait.
I'd suggest reading up on it: http://www.cplusplus.com/reference/thread/thread/detach/
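Applied to the loop from the question, that would look roughly like this (a sketch; the detached thread runs on its own, so task1 must not use anything that may be destroyed before it finishes):
while (true)
{
    thread t(task1);
    t.detach();   // don't wait for task1 to finish
    this_thread::sleep_for(std::chrono::seconds(10));
}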

C++ / boost : how to signal async task completion?

I have a thread pool which executes tasks asynchronously. But I need to wait for a certain task to complete before proceeding (running the task in current thread is not allowed, the task must be run by a worker thread).
What's the easiest way to achieve this using C++11 or Boost?
pool.enqueue([]() {
    std::this_thread::sleep_for(2s); // task 1
    // notify task 1 completion???
    std::this_thread::sleep_for(2s); // task 2
});
// wait until task 1 is complete???
If you have a thread pool, either the pool should handle the dependencies or you should chain the continuation task from the first task directly.
Otherwise, the pool can deadlock: imagine, for example, a pool with one thread; it would block indefinitely. The same can occur with many threads, given enough inter-task dependencies.
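A sketch of the "chain the continuation directly" idea (assuming the pool's enqueue accepts an arbitrary callable, <future> is included, and std::chrono_literals are in scope, as in the question's snippet): task 2 stays inside the same enqueued job, and a std::promise/std::future pair signals the caller when task 1 is done.
std::promise<void> task1_done;
std::future<void> task1_future = task1_done.get_future();

pool.enqueue([&task1_done]() {
    std::this_thread::sleep_for(2s); // task 1
    task1_done.set_value();          // signal task 1 completion
    std::this_thread::sleep_for(2s); // task 2, chained directly after task 1
});

task1_future.wait(); // wait until task 1 is complete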
Use std::condition_variable:
std::mutex m;
bool task1_done = false;
std::condition_variable cond_var;

pool.enqueue([&m, &cond_var, &task1_done]() {
    std::this_thread::sleep_for(2s); // task 1
    // notify task 1 completion (set the flag under the mutex to avoid a data race)
    {
        std::lock_guard<std::mutex> lock(m);
        task1_done = true;
    }
    cond_var.notify_one();
    std::this_thread::sleep_for(2s); // task 2
});

// wait until task 1 is complete
std::unique_lock<std::mutex> lock(m);
while (!task1_done) {
    cond_var.wait(lock);
}
You can use a mutex together with wait_for/wait_until if you also want a timeout on the wait.
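That answer gives no code; a minimal sketch of what a timed wait could look like (the five-second timeout is illustrative, not from the original post):
std::mutex m;
std::condition_variable cond_var;
bool task1_done = false;

// in the waiting thread:
std::unique_lock<std::mutex> lock(m);
if (!cond_var.wait_for(lock, std::chrono::seconds(5), []{ return task1_done; })) {
    // timed out: task 1 still had not completed after five seconds
}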
Going to answer my own question.
I ended up using a future:
std::packaged_task<int()> task1([]() {
    std::this_thread::sleep_for(2s); // task 1
    return 1;
});
std::future<int> task1result = task1.get_future();

std::thread thread1([&]() {
    task1();
    std::this_thread::sleep_for(2s); // task 2
});

int rc1 = task1result.get();
printf("task1 complete: %d\n", rc1);

thread1.join();
printf("thread complete\n");
And no, there is no chance for a deadlock since there is no cyclic dependency between the threads (the waiting thread is not part of the pool).

std::condition_variable does not properly wake up after std::condition_variable::notify_all() from another thread

This code is a simplification of real project code. The main thread creates a worker thread and waits on a std::condition_variable for the worker thread to have really started. In the code below the std::condition_variable wakes up only after current_thread_state becomes ThreadState::Stopping, which is the second notification from the worker thread; that is, the main thread did not wake up after the first notification, when current_thread_state became ThreadState::Started. The result was a deadlock. Why does this happen? Why does std::condition_variable not wake up after the first thread_event.notify_all()?
int main()
{
    std::thread thread_var;
    struct ThreadState {
        enum Type { Stopped, Started, Stopping };
    };
    ThreadState::Type current_thread_state = ThreadState::Stopped;
    std::mutex thread_mutex;
    std::condition_variable thread_event;

    while (true) {
        {
            std::unique_lock<std::mutex> lck(thread_mutex);
            thread_var = std::move(std::thread([&]() {
                {
                    std::unique_lock<std::mutex> lck(thread_mutex);
                    std::cout << "ThreadFunction() - step 1\n";
                    current_thread_state = ThreadState::Started;
                }
                thread_event.notify_all();
                // This code needs to disable output to console (simulate some work).
                std::cout.setstate(std::ios::failbit);
                std::cout << "ThreadFunction() - step 1 -> step 2\n";
                std::cout.clear();
                {
                    std::unique_lock<std::mutex> lck(thread_mutex);
                    std::cout << "ThreadFunction() - step 2\n";
                    current_thread_state = ThreadState::Stopping;
                }
                thread_event.notify_all();
            }));
            while (current_thread_state != ThreadState::Started) {
                thread_event.wait(lck);
            }
        }
        if (thread_var.joinable()) {
            thread_var.join();
            current_thread_state = ThreadState::Stopped;
        }
    }
    return 0;
}
Once you call the notify_all method, your main thread and your worker thread (after doing its work) both try to get a lock on the thread_mutex mutex. If your workload is insignificant, as in your example, the worker thread is likely to get the lock before the main thread and to have already moved the state on to ThreadState::Stopping before the main thread ever reads ThreadState::Started. This results in a deadlock.
Try adding a significant workload, e.g.
std::this_thread::sleep_for( std::chrono::seconds( 1 ) );
to the worker thread. Deadlocks are far less likely now. Of course, this is not a fix for your problem; it is just to illustrate it.
You have two threads racing: one writes values of current_thread_state twice, another reads the value of current_thread_state once.
It is indeterminate whether the sequence of events is write-write-read or the write-read-write you expect; both are valid executions of your application.
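One way to avoid the missed wakeup while keeping the structure of the question's code (a sketch): wait on a predicate that is satisfied by either of the worker's writes, instead of only by ThreadState::Started.
// The worker may already have moved on to Stopping by the time main checks,
// so wait for "no longer Stopped" rather than exactly "Started":
thread_event.wait(lck, [&]{ return current_thread_state != ThreadState::Stopped; });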

Setting limit on post queue size with Boost Asio?

I'm using boost::asio::io_service as a basic thread pool. Some threads get added to io_service, the main thread starts posting handlers, the worker threads start running the handlers, and everything finishes. So far, so good; I get a nice speedup over single-threaded code.
However, the main thread has millions of things to post, and it just keeps on posting them, much faster than the worker threads can handle them. I don't hit RAM limits, but it's still kind of silly to be enqueuing so many things. What I'd like to do is have a fixed size for the handler queue, and have post() block if the queue is full.
I don't see any options for this in the Boost ASIO docs. Is this possible?
I'm using a semaphore to limit the handler queue size. The following code illustrates this solution:
void Schedule(boost::function<void()> function)
{
    semaphore.wait();    // blocks while the queue is at its size limit
    io_service.post(boost::bind(&TaskWrapper, function));
}

void TaskWrapper(boost::function<void()> &function)
{
    function();
    semaphore.post();    // frees a slot once the handler has run
}
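Boost.Asio itself does not provide the semaphore used above; before C++20's std::counting_semaphore you can build one from a mutex and a condition variable, roughly like this (the class name and interface are illustrative, and <mutex> and <condition_variable> are assumed to be included):
class counting_semaphore
{
    std::mutex m_;
    std::condition_variable cv_;
    std::size_t count_;
public:
    explicit counting_semaphore(std::size_t initial) : count_(initial) {}

    void wait()   // acquire: block until a slot is available
    {
        std::unique_lock<std::mutex> lk(m_);
        cv_.wait(lk, [this]{ return count_ > 0; });
        --count_;
    }

    void post()   // release: free a slot and wake one waiter
    {
        {
            std::lock_guard<std::mutex> lk(m_);
            ++count_;
        }
        cv_.notify_one();
    }
};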
You can wrap your lambda in another lambda which would take care of counting the "in-progress" tasks, and then wait before posting if there are too many in-progress tasks.
Example:
#include <atomic>
#include <chrono>
#include <condition_variable>
#include <future>
#include <iostream>
#include <mutex>
#include <string>
#include <thread>
#include <vector>
#include <boost/asio.hpp>
class ThreadPool {
    using asio_worker = std::unique_ptr<boost::asio::io_service::work>;
    boost::asio::io_service service;
    asio_worker service_worker;
    std::vector<std::thread> grp;
    std::atomic<int> inProgress{0};
    std::mutex mtx;
    std::condition_variable busy;

public:
    ThreadPool(int threads) : service(), service_worker(new asio_worker::element_type(service)) {
        for (int i = 0; i < threads; ++i) {
            grp.emplace_back([this] { service.run(); });
        }
    }

    template<typename F>
    void enqueue(F && f) {
        std::unique_lock<std::mutex> lock(mtx);
        // limit queue depth = number of threads
        while (inProgress >= static_cast<int>(grp.size())) {
            busy.wait(lock);
        }
        inProgress++;
        service.post([this, f = std::forward<F>(f)] {
            try {
                f();
            }
            catch (...) {
                inProgress--;
                busy.notify_one();
                throw;
            }
            inProgress--;
            busy.notify_one();
        });
    }

    ~ThreadPool() {
        service_worker.reset();
        for (auto& t : grp)
            if (t.joinable())
                t.join();
        service.stop();
    }
};
int main() {
    std::unique_ptr<ThreadPool> pool(new ThreadPool(4));
    for (int i = 1; i <= 20; ++i) {
        pool->enqueue([i] {
            std::string s("Hello from task ");
            s += std::to_string(i) + "\n";
            std::cout << s;
            std::this_thread::sleep_for(std::chrono::seconds(1));
        });
    }
    std::cout << "All tasks queued.\n";
    pool.reset(); // wait for all tasks to complete
    std::cout << "Done.\n";
}
Output:
Hello from task 3
Hello from task 4
Hello from task 2
Hello from task 1
Hello from task 5
Hello from task 7
Hello from task 6
Hello from task 8
Hello from task 9
Hello from task 10
Hello from task 11
Hello from task 12
Hello from task 13
Hello from task 14
Hello from task 15
Hello from task 16
Hello from task 17
Hello from task 18
All tasks queued.
Hello from task 19
Hello from task 20
Done.
You could use a strand object to serialize the events and put a delay in your main. Is your program dropping out after all the work is posted? If so, you can use the work object, which will give you more control over when your io_service stops.
You could also have main check the state of the threads and wait until one becomes free, or something like that.
//links
http://www.boost.org/doc/libs/1_40_0/doc/html/boost_asio/reference/io_service__strand.html
http://www.boost.org/doc/libs/1_40_0/doc/html/boost_asio/reference/io_service.html
//example from the second link
boost::asio::io_service io_service;
boost::asio::io_service::work work(io_service);
hope this helps.
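A slightly fuller sketch of the work-object idea (assumptions: a single worker thread calling run(), and the work object being destroyed when you want run() to be allowed to return):
boost::asio::io_service io_service;
auto work = std::make_unique<boost::asio::io_service::work>(io_service);

std::thread worker([&]{ io_service.run(); }); // run() keeps going while 'work' is alive

// ... post handlers to io_service here ...

work.reset();   // destroy the work object so run() returns once the queue drains
worker.join();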
Maybe try lowering the priority of the main thread so that, once the worker threads get busy, they starve the main thread and the system self-limits.