C++11 multi-threaded producer/consumer program hangs

I am new to C++11 and its threading features. In the following program, the main thread starts 9 worker threads, pushes data into a queue, and then waits for the threads to terminate. I see that the worker threads don't get woken up and the program just hangs.
#include <iostream>
#include <thread>
#include <mutex>
#include <condition_variable>
#include <queue>
#include <vector>
#include <chrono>
#include <future>
#include <atomic>
using namespace std::chrono_literals;
std::mutex _rmtx;
std::mutex _wmtx;
std::queue<unsigned long long> dataq;
std::condition_variable _rcv;
std::condition_variable _wcv;
std::atomic_bool termthd;
void thdfunc(const int& num)
{
std::cout << "starting thread#" << num << std::endl;
std::unique_lock<std::mutex> rul(_rmtx);
while (true) {
while(!_rcv.wait_until(rul, std::chrono::steady_clock::now() + 10ms, [] {return !dataq.empty() || termthd.load(); }));
if (termthd.load()) {
std::terminate();
}
std::cout<<"thd#" << num << " : " << dataq.front() <<std::endl;
dataq.pop();
_wcv.notify_one();
}
}
int main()
{
std::vector<std::thread*> thdvec;
std::unique_lock<std::mutex> wul(_rmtx);
unsigned long long data = 0ULL;
termthd.store(false);
for (int i = 0; i < 9; i++) {
thdvec.push_back(new std::thread(thdfunc, i));
}
for ( data = 0ULL; data < 2ULL; data++) {
_wcv.wait_until(wul, std::chrono::steady_clock::now() + 10ms, [&] {return data > 1000000ULL; });
dataq.push(std::ref(data));
_rcv.notify_one();
}
termthd.store(true);
_rcv.notify_all();
//std::this_thread::yield();
for (int i = 0; i < 9; i++) {
thdvec[i]->join();
}
}
I am unable to figure out the problem. How can I make sure the threads get woken up, process the requests, and terminate normally?

This std::unique_lock<std::mutex> wul(_rmtx); locks the _rmtx mutex until the end of main's scope. That is certainly an issue, because the worker threads trying to acquire _rmtx will block:
int main()
{
std::vector<std::thread*> thdvec;
std::unique_lock<std::mutex> wul(_rmtx); // <- locking mutex until end of main.
// other threads trying to lock _rmtx will block
unsigned long long data = 0ULL;
// ... rest of the code ...
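To make the fix concrete, here is a minimal sketch (my addition, not part of the answer above) of a producer loop that acquires _rmtx only while it actually touches the shared queue, using the globals from the question. Note that even with the locking fixed, the workers end by calling std::terminate(), which aborts the whole program, so they would also need a normal return path for the join() calls to succeed.
for (unsigned long long data = 0ULL; data < 2ULL; data++) {
    {
        std::lock_guard<std::mutex> lg(_rmtx); // hold _rmtx only while touching dataq
        dataq.push(data);
    }
    _rcv.notify_one(); // wake one worker; the lock is already released
}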

Related

Condition_variable C++

Here is a simple example of using std::condition_variable:
#include <iostream> // std::cout
#include <thread> // std::thread
#include <mutex> // std::mutex, std::unique_lock
#include <condition_variable> // std::condition_variable
std::mutex mtx;
std::condition_variable cv;
int global_status = 0;
void print_id(int id)
{
std::unique_lock<std::mutex> lck(mtx);
while (global_status == 0)
{
cv.wait(lck);
}
std::cout << "thread " << id << '\n';
}
int main()
{
std::thread threads[10];
for (int i = 0; i < 10; ++i)
{
threads[i] = std::thread(print_id, i);
}
std::cout << "Start" << std::endl;
std::this_thread::sleep_for(std::chrono::seconds(5));
{
std::unique_lock<std::mutex> lck(mtx);
global_status = 1;
cv.notify_all();
}
for (auto& th : threads) th.join();
return 0;
}
I still can't figure out why I need to lock the mutex around global_status when I change its value.
I change global_status from only one thread - why do I need to lock the mutex there? Or is it not necessary?
I change global_status from only one thread - why do I need to lock the mutex there?
You need the mutex because you read the value in different threads.
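As a side note (my addition, not part of the answer): the while loop around cv.wait can also be written with the predicate overload, which performs exactly that re-check under the lock for you:
void print_id(int id)
{
    std::unique_lock<std::mutex> lck(mtx);
    // Equivalent to the while loop above: waits until global_status != 0,
    // re-checking the predicate under the lock on every wakeup.
    cv.wait(lck, [] { return global_status != 0; });
    std::cout << "thread " << id << '\n';
}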

C++Mutex and conditional Variable Unlocking/Synchronisation

I'm wanting to have several threads all waiting on a conditional variable (CV) and when the main thread updates a variable they all execute. However, I need the main thread to wait until all these have completed before moving on. The other threads don't end and simply go back around and wait again, so I can't use thread.join() for example.
I've got the first half working, I can trigger the threads, but the main just hangs and doesn't continue. Below is my current code
#include <iostream> // std::cout
#include <thread> // std::thread
#include <mutex> // std::mutex, std::unique_lock
#include <condition_variable> // std::condition_variable
#include <Windows.h>
#define N 3
std::mutex mtx;
std::condition_variable cv;
bool ready = false;
bool finished[N];
void print_id(int id) {
while (1) {
std::unique_lock<std::mutex> lck(mtx); //Try and Lock the Mutex
while (finished[id]) cv.wait(lck); //Wait until finished is false
// ...
std::cout << "thread " << id << '\n';
finished[id] = true; //Set finished to be true. When true, program should continue
}
}
int main()
{
std::thread threads[N];
// spawn 10 threads:
for (int i = 0; i < N; ++i) {
threads[i] = std::thread(print_id, i); //Create n threads
finished[i] = true; //Set default finished to be true
}
std::cout << "N threads ready to race...\n";
for (int i = 0; i < 5; i++) {
std::unique_lock<std::mutex> lck(mtx); //Lock mutex
for (int i = 0; i < N; i++) {
finished[i] = false; //Set finished to false, this will break the CV in each thread
}
cv.notify_all(); //Notify all threads
cv.wait(lck, [] {return finished[0] == true; }); //Wait until all threads have finished (but not ended)
std::cout << "finished, Sleeping for 2s\n";
Sleep(2000);
}
return 0;
}
Thank you.
Edit: I am aware I am currently only checking the status of finished[0] and not each one. This is done just for simplicity at the moment and would eventually need to cover all of them. I will write a function to manage this later.
You have cv.wait(lck, [] {return finished[0] == true; }); in the main thread, but it is never notified.
You need to notify it, and it is better to use a separate condition_variable for that, not the same one used to notify the worker threads.
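A minimal sketch of that approach (my addition, not code from the original answer): cv_done and stop are hypothetical names added to the question's code, and the workers also get a shutdown path so that join() works:
#include <iostream>
#include <thread>
#include <mutex>
#include <condition_variable>
#define N 3
std::mutex mtx;
std::condition_variable cv;       // main -> workers: "new round" / "stop"
std::condition_variable cv_done;  // workers -> main: "my part is done"
bool finished[N];
bool stop = false;

void print_id(int id) {
    while (true) {
        std::unique_lock<std::mutex> lck(mtx);
        cv.wait(lck, [id] { return !finished[id] || stop; }); // wait for work or shutdown
        if (stop) return;
        std::cout << "thread " << id << '\n';
        finished[id] = true;
        cv_done.notify_one();                                 // wake main so it re-checks
    }
}

int main() {
    std::thread threads[N];
    for (int i = 0; i < N; ++i) {
        finished[i] = true;                       // no work yet
        threads[i] = std::thread(print_id, i);
    }
    for (int round = 0; round < 5; ++round) {
        std::unique_lock<std::mutex> lck(mtx);
        for (int i = 0; i < N; i++) finished[i] = false;  // release the workers
        cv.notify_all();
        cv_done.wait(lck, [] {                            // wait until every worker is done
            for (int i = 0; i < N; i++)
                if (!finished[i]) return false;
            return true;
        });
        std::cout << "round " << round << " finished\n";
    }
    {
        std::lock_guard<std::mutex> lg(mtx);
        stop = true;                              // tell the workers to exit
    }
    cv.notify_all();
    for (auto& th : threads) th.join();
    return 0;
}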

How to do a non-blocking join on a std::thread

I want to keep my code clean and do things right: every std::thread needs to be joined or detached, but how can I wait (in the main thread) for another thread to finish without blocking the main thread's execution?
void do_computation()
{
// Calculate 1000 digits of Pi.
}
int main()
{
std::thread td1(&do_computation);
while (running)
{
// Check if thread td1 finish and if yes print a message
// Here are some stuff of the main to do...
// Print to UI, update timer etc..
}
// If the thread has not finished yet here, just kill it.
}
The answer is semaphores. You can use a binary semaphore to synchronize your threads.
You could use System V semaphores or pthread mutexes, but they are somewhat legacy in C++. Using Tsuneo Yoshioka's answer, we can implement a semaphore the C++ way, though.
#include <mutex>
#include <condition_variable>
class Semaphore {
public:
Semaphore (int count_ = 0)
: count(count_) {}
inline void notify()
{
std::unique_lock<std::mutex> lock(mtx);
count++;
cv.notify_one();
}
inline void wait()
{
std::unique_lock<std::mutex> lock(mtx);
while(count == 0){
cv.wait(lock);
}
count--;
}
private:
std::mutex mtx;
std::condition_variable cv;
int count;
};
Your implementation may make use of the Semaphore class, like so.
Semaphore semaphore(0);
void do_computation()
{
//calculate 1000 digits of Pi.
semaphore.notify(); // signal completion to the main thread
}
int main()
{
std::thread td1(&do_computation);
semaphore.wait(); // block until do_computation has signalled
td1.join();
}
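As an aside (my addition, not part of the original answer): if C++20 is available, std::binary_semaphore from <semaphore> already provides this, so the hand-rolled class is not needed:
#include <semaphore>
#include <thread>

std::binary_semaphore sem(0); // starts at 0, so acquire() blocks until release()

void do_computation()
{
    // Calculate 1000 digits of Pi.
    sem.release(); // signal completion
}

int main()
{
    std::thread td1(&do_computation);
    sem.acquire(); // block until do_computation signals
    td1.join();
}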
You can also use std::promise and std::future.
#include <vector>
#include <thread>
#include <future>
#include <numeric>
#include <iostream>
#include <chrono>
void accumulate(std::vector<int>::iterator first,
std::vector<int>::iterator last,
std::promise<int> accumulate_promise)
{
int sum = std::accumulate(first, last, 0);
accumulate_promise.set_value(sum); // Notify future
}
void do_work(std::promise<void> barrier)
{
std::this_thread::sleep_for(std::chrono::seconds(1));
barrier.set_value();
}
int main()
{
// Demonstrate using promise<int> to transmit a result between threads.
std::vector<int> numbers = { 1, 2, 3, 4, 5, 6 };
std::promise<int> accumulate_promise;
std::future<int> accumulate_future = accumulate_promise.get_future();
std::thread work_thread(accumulate, numbers.begin(), numbers.end(),
std::move(accumulate_promise));
accumulate_future.wait(); // wait for result
std::cout << "result=" << accumulate_future.get() << '\n';
work_thread.join(); // wait for thread completion
// Demonstrate using promise<void> to signal state between threads.
std::promise<void> barrier;
std::future<void> barrier_future = barrier.get_future();
std::thread new_work_thread(do_work, std::move(barrier));
barrier_future.wait();
new_work_thread.join();
}
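Since the original question asked how to check for completion without blocking the main loop, another option (my addition, not part of the answer above) is to poll a std::future with a zero timeout. A minimal sketch, reusing the question's do_computation:
#include <chrono>
#include <future>
#include <iostream>

void do_computation()
{
    // Calculate 1000 digits of Pi.
}

int main()
{
    std::future<void> done = std::async(std::launch::async, do_computation);
    bool running = true;
    while (running)
    {
        // Non-blocking check: the status is ready once do_computation has returned.
        if (done.wait_for(std::chrono::seconds(0)) == std::future_status::ready)
        {
            std::cout << "computation finished\n";
            running = false;
        }
        // ... do other main-thread work here (UI, timers, etc.) ...
    }
}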

thread pooling in c++ - how to end the program

I've implemented thread pooling following Kerrek SB's answer in this question.
I've implemented an MPMC queue for the functions and a vector of threads for the workers.
Everything works perfectly, except that I don't know how to terminate the program: at the end, if I just call thread.join, the thread is still waiting for more tasks, so it never joins and the main thread cannot continue.
Any idea how to end the program correctly?
For completeness, this is my code:
function_pool.h
#pragma once
#include <queue>
#include <functional>
#include <mutex>
#include <condition_variable>
class Function_pool
{
private:
std::queue<std::function<void()>> m_function_queue;
std::mutex m_lock;
std::condition_variable m_data_condition;
public:
Function_pool();
~Function_pool();
void push(std::function<void()> func);
std::function<void()> pop();
};
function_pool.cpp
#include "function_pool.h"
Function_pool::Function_pool() : m_function_queue(), m_lock(), m_data_condition()
{
}
Function_pool::~Function_pool()
{
}
void Function_pool::push(std::function<void()> func)
{
std::unique_lock<std::mutex> lock(m_lock);
m_function_queue.push(func);
// when we send the notification immediately, the consumer will try to get the lock, so unlock asap
lock.unlock();
m_data_condition.notify_one();
}
std::function<void()> Function_pool::pop()
{
std::unique_lock<std::mutex> lock(m_lock);
m_data_condition.wait(lock, [this]() {return !m_function_queue.empty(); });
auto func = m_function_queue.front();
m_function_queue.pop();
return func;
// Lock will be released
}
main.cpp
#include "function_pool.h"
#include <string>
#include <iostream>
#include <mutex>
#include <functional>
#include <thread>
#include <vector>
Function_pool func_pool;
void example_function()
{
std::cout << "bla" << std::endl;
}
void infinite_loop_func()
{
while (true)
{
std::function<void()> func = func_pool.pop();
func();
}
}
int main()
{
std::cout << "stating operation" << std::endl;
int num_threads = std::thread::hardware_concurrency();
std::cout << "number of threads = " << num_threads << std::endl;
std::vector<std::thread> thread_pool;
for (int i = 0; i < num_threads; i++)
{
thread_pool.push_back(std::thread(infinite_loop_func));
}
//here we should send our functions
func_pool.push(example_function);
for (int i = 0; i < thread_pool.size(); i++)
{
thread_pool.at(i).join();
}
int i;
std::cin >> i;
}
Your problem is located in infinite_loop_func, which is an infinite loop and as a result never terminates. I've read the other answer, which suggests throwing an exception; however, I don't like that approach, since exceptions should not be used for regular control flow.
The best way to solve this is to explicitly deal with the stop condition. For example:
std::atomic<bool> acceptsFunctions;
Adding this to the function pool gives you explicit state and lets you assert that no new functions are being added while you destruct.
std::optional<std::function<void()>> Function_pool::pop()
Returning an empty optional (or an empty std::function in C++14 and before) allows you to deal with an empty queue. You have to handle that case anyway, as a condition_variable can have spurious wakeups.
With this, m_data_condition.notify_all() can be used to wake all threads.
Finally, we have to fix the infinite loop so that it honours the stop condition while still executing all functions left in the queue:
while (func_pool.acceptsFunctions || func_pool.containsFunctions())
{
auto f = func_pool.pop();
if (!f)
{
func_pool.m_data_condition.wait_for(1s);
continue;
}
auto &function = *f;
function();
}
I'll leave it up to you to implement containsFunctions() and to clean up the code (infinite_loop_func as a member function?). Note that with a counter, you could even deal with background tasks being spawned.
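For completeness, a minimal sketch of what those two pieces could look like (my addition; the answer leaves them as an exercise), assuming the member names from the question's Function_pool plus the acceptsFunctions flag above, and <optional> from C++17:
bool Function_pool::containsFunctions()
{
    std::unique_lock<std::mutex> lock(m_lock);
    return !m_function_queue.empty();
}

std::optional<std::function<void()>> Function_pool::pop()
{
    std::unique_lock<std::mutex> lock(m_lock);
    if (m_function_queue.empty())
        return std::nullopt;              // empty queue: let the caller decide whether to wait
    auto func = m_function_queue.front();
    m_function_queue.pop();
    return func;
}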
You can always use a specific exception type to signal to infinite_loop_func that it should return...
class quit_worker_exception: public std::exception {};
Then change infinite_loop_func to...
void infinite_loop_func ()
{
while (true) {
std::function<void()> func = func_pool.pop();
try {
func();
}
catch (quit_worker_exception &ex) {
return;
}
}
}
With the above changes you could then use (in main)...
/*
* Enqueue `thread_pool.size()' function objects whose sole job is
* to throw an instance of `quit_worker_exception' when invoked.
*/
for (int i = 0; i < thread_pool.size(); i++)
func_pool.push([](){ throw quit_worker_exception(); });
/*
* Now just wait for each worker to terminate having received its
* quit_worker_exception.
*/
for (int i = 0; i < thread_pool.size(); i++)
thread_pool.at(i).join();
Each instance of infinite_loop_func will dequeue one function object which, when called, throws a quit_worker_exception causing it to return.
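A variation on the same idea that avoids exceptions (my addition, not part of either answer) is to push an empty std::function as a "stop" sentinel and have the worker check for it:
void infinite_loop_func()
{
    while (true) {
        std::function<void()> func = func_pool.pop();
        if (!func)       // empty function object used as a stop sentinel
            return;
        func();
    }
}
// in main(): one sentinel per worker, pushed after all real tasks
for (size_t i = 0; i < thread_pool.size(); i++)
    func_pool.push(std::function<void()>()); // default-constructed = empty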
Following [JVApen](https://stackoverflow.com/posts/51382714/revisions)'s suggestion, I'm copying my code here in case anyone wants a working version:
function_pool.h
#pragma once
#include <queue>
#include <functional>
#include <mutex>
#include <condition_variable>
#include <atomic>
#include <cassert>
class Function_pool
{
private:
std::queue<std::function<void()>> m_function_queue;
std::mutex m_lock;
std::condition_variable m_data_condition;
std::atomic<bool> m_accept_functions;
public:
Function_pool();
~Function_pool();
void push(std::function<void()> func);
void done();
void infinite_loop_func();
};
function_pool.cpp
#include "function_pool.h"
Function_pool::Function_pool() : m_function_queue(), m_lock(), m_data_condition(), m_accept_functions(true)
{
}
Function_pool::~Function_pool()
{
}
void Function_pool::push(std::function<void()> func)
{
std::unique_lock<std::mutex> lock(m_lock);
m_function_queue.push(func);
// when we send the notification immediately, the consumer will try to get the lock, so unlock asap
lock.unlock();
m_data_condition.notify_one();
}
void Function_pool::done()
{
std::unique_lock<std::mutex> lock(m_lock);
m_accept_functions = false;
lock.unlock();
// when we send the notification immediately, the consumer will try to get the lock , so unlock asap
m_data_condition.notify_all();
//notify all waiting threads.
}
void Function_pool::infinite_loop_func()
{
std::function<void()> func;
while (true)
{
{
std::unique_lock<std::mutex> lock(m_lock);
m_data_condition.wait(lock, [this]() {return !m_function_queue.empty() || !m_accept_functions; });
if (!m_accept_functions && m_function_queue.empty())
{
//lock will be release automatically.
//finish the thread loop and let it join in the main thread.
return;
}
func = m_function_queue.front();
m_function_queue.pop();
//release the lock
}
func();
}
}
main.cpp
#include "function_pool.h"
#include <string>
#include <iostream>
#include <mutex>
#include <functional>
#include <thread>
#include <vector>
Function_pool func_pool;
class quit_worker_exception : public std::exception {};
void example_function()
{
std::cout << "bla" << std::endl;
}
int main()
{
std::cout << "stating operation" << std::endl;
int num_threads = std::thread::hardware_concurrency();
std::cout << "number of threads = " << num_threads << std::endl;
std::vector<std::thread> thread_pool;
for (int i = 0; i < num_threads; i++)
{
thread_pool.push_back(std::thread(&Function_pool::infinite_loop_func, &func_pool));
}
//here we should send our functions
for (int i = 0; i < 50; i++)
{
func_pool.push(example_function);
}
func_pool.done();
for (unsigned int i = 0; i < thread_pool.size(); i++)
{
thread_pool.at(i).join();
}
}
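Since Function_pool::push accepts any std::function<void()>, results can also be pulled out of the pool. A usage sketch (my addition, not part of the original post) using std::packaged_task wrapped in a std::shared_ptr so that the move-only task fits into the copyable std::function:
// Hypothetical usage, to be placed in main() before func_pool.done();
// requires #include <future> and #include <memory>.
auto task = std::make_shared<std::packaged_task<int()>>([]() { return 6 * 7; });
std::future<int> result = task->get_future();
func_pool.push([task]() { (*task)(); }); // the pool only ever sees a void() callable
std::cout << "answer = " << result.get() << std::endl;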

C++ - Multi-threading - Communication between threads

#include <iostream>
#include <thread>
#include <condition_variable>
#include <queue>
#include <cstdlib>
#include <chrono>
#include <ctime>
#include <random>
using namespace std;
//counts every number that is added to the queue
static long long producer_count = 0;
//counts every number that is taken out of the queue
static long long consumer_count = 0;
void generateNumbers(queue<int> & numbers, condition_variable & cv, mutex & m, bool & workdone){
while(!workdone) {
unique_lock<std::mutex> lk(m);
int rndNum = rand() % 100;
numbers.push(rndNum);
producer_count++;
cv.notify_one();
}
}
void work(queue<int> & numbers, condition_variable & cv, mutex & m, bool & workdone) {
while(!workdone) {
unique_lock<std::mutex> lk(m);
cv.wait(lk);
cout << numbers.front() << endl;
numbers.pop();
consumer_count++;
}
}
int main() {
condition_variable cv;
mutex m;
bool workdone = false;
queue<int> numbers;
//start threads
thread producer(generateNumbers, ref(numbers), ref(cv), ref(m), ref(workdone));
thread consumer(work, ref(numbers), ref(cv), ref(m), ref(workdone));
//wait for 3 seconds, then join the threads
this_thread::sleep_for(std::chrono::seconds(3));
workdone = true;
producer.join();
consumer.join();
//output the counters
cout << producer_count << endl;
cout << consumer_count << endl;
return 0;
}
Hello Everyone,
I tried to implement the Producer-Consumer-Pattern with C++.
The producer thread generates random integers, adds them to a queue and then notifies the consumer thread that a new number was added.
The consumer thread waits for the notification and then prints the first element of the queue to the console and deletes it.
I incremented a counter for every number that is added to the queue and another counter for every number that is taken out of the queue.
I expected the two counters to hold the same value after the program finishes; however, the difference is huge.
The counter for numbers added to the queue is always in the millions (3871876 in my last test), while the counter for numbers taken out of the queue is always below 100k (89993 in my last test).
Can someone explain to me why there is such a huge difference?
Do I have to add another condition variable so that the producer threads waits for the consumer thread as well?
Thanks!
No need for a second std::condition_variable, just reuse the one you have. As mentioned by others, you should consider using std::atomic<bool> instead of a plain bool. But I must admit that g++ with -O3 does not optimize it away here.
#include <iostream>
#include <thread>
#include <condition_variable>
#include <queue>
#include <cstdlib>
#include <chrono>
#include <ctime>
#include <random>
#include <atomic>
//counts every number that is added to the queue
static long long producer_count = 0;
//counts every number that is taken out of the queue
static long long consumer_count = 0;
void generateNumbers(std::queue<int> & numbers, std::condition_variable & cv, std::mutex & m, std::atomic<bool> & workdone)
{
while(!workdone.load())
{
std::unique_lock<std::mutex> lk(m);
int rndNum = rand() % 100;
numbers.push(rndNum);
producer_count++;
cv.notify_one(); // Notify worker
cv.wait(lk); // Wait for worker to complete
}
}
void work(std::queue<int> & numbers, std::condition_variable & cv, std::mutex & m, std::atomic<bool> & workdone)
{
while(!workdone.load())
{
std::unique_lock<std::mutex> lk(m);
cv.notify_one(); // Notify generator (placed here to avoid waiting for the lock)
cv.wait(lk); // Wait for the generator to complete
std::cout << numbers.front() << std::endl;
numbers.pop();
consumer_count++;
}
}
int main() {
std::condition_variable cv;
std::mutex m;
std::atomic<bool> workdone(false);
std::queue<int> numbers;
//start threads
std::thread producer(generateNumbers, std::ref(numbers), std::ref(cv), std::ref(m), std::ref(workdone));
std::thread consumer(work, std::ref(numbers), std::ref(cv), std::ref(m), std::ref(workdone));
//wait for 3 seconds, then join the threads
std::this_thread::sleep_for(std::chrono::seconds(3));
workdone = true;
cv.notify_all(); // To prevent dead-lock
producer.join();
consumer.join();
//output the counters
std::cout << producer_count << std::endl;
std::cout << consumer_count << std::endl;
return 0;
}
EDIT:
To avoid the sporadic off-by-one error you could use this:
#include <iostream>
#include <thread>
#include <condition_variable>
#include <queue>
#include <cstdlib>
#include <chrono>
#include <ctime>
#include <random>
#include <atomic>
//counts every number that is added to the queue
static long long producer_count = 0;
//counts every number that is taken out of the queue
static long long consumer_count = 0;
void generateNumbers(std::queue<int> & numbers, std::condition_variable & cv, std::mutex & m, std::atomic<bool> & workdone)
{
while(!workdone.load())
{
std::unique_lock<std::mutex> lk(m);
int rndNum = rand() % 100;
numbers.push(rndNum);
producer_count++;
cv.notify_one(); // Notify worker
cv.wait(lk); // Wait for worker to complete
}
}
void work(std::queue<int> & numbers, std::condition_variable & cv, std::mutex & m, std::atomic<bool> & workdone)
{
while(!workdone.load() or !numbers.empty())
{
std::unique_lock<std::mutex> lk(m);
cv.notify_one(); // Notify generator (placed here to avoid waiting for the lock)
if (numbers.empty())
cv.wait(lk); // Wait for the generator to complete
if (numbers.empty())
continue;
std::cout << numbers.front() << std::endl;
numbers.pop();
consumer_count++;
}
}
int main() {
std::condition_variable cv;
std::mutex m;
std::atomic<bool> workdone(false);
std::queue<int> numbers;
//start threads
std::thread producer(generateNumbers, std::ref(numbers), std::ref(cv), std::ref(m), std::ref(workdone));
std::thread consumer(work, std::ref(numbers), std::ref(cv), std::ref(m), std::ref(workdone));
//wait for 3 seconds, then join the threads
std::this_thread::sleep_for(std::chrono::seconds(1));
workdone = true;
cv.notify_all(); // To prevent dead-lock
producer.join();
consumer.join();
//output the counters
std::cout << producer_count << std::endl;
std::cout << consumer_count << std::endl;
return 0;
}
Note that the code in the question may not work properly: the workdone variable is defined as a regular bool, and it is perfectly legitimate for the compiler to assume it never changes inside the loop and optimize the check away.
If your knee-jerk reaction is to just add volatile... nope, that won't work either.
You need to properly synchronize access to the workdone variable, since both worker threads read it while another thread (the main thread) writes it.
An alternative solution would be to use something like an event instead of a simple variable.
But on to the explanation of your problem:
both loops have the same ending condition (!workdone), but their bodies take very different amounts of time, so there is currently nothing guaranteeing that the producer and consumer run a similar number of iterations over the same period.
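Concretely, the consumer in the question calls cv.wait(lk) without a predicate, so any notification sent while it is not actually inside the wait is lost, and a spurious wakeup with an empty queue makes numbers.front() undefined behaviour. A minimal sketch of a consumer that works with the question's original producer without losing work (my addition, assuming workdone is made a std::atomic<bool> as recommended above):
void work(std::queue<int>& numbers, std::condition_variable& cv, std::mutex& m, std::atomic<bool>& workdone)
{
    while (true)
    {
        std::unique_lock<std::mutex> lk(m);
        // The predicate is re-checked under the lock, so no pushed item is ever missed.
        cv.wait(lk, [&] { return !numbers.empty() || workdone.load(); });
        if (numbers.empty())
            return;                        // workdone was set and the queue is drained
        std::cout << numbers.front() << std::endl;
        numbers.pop();
        consumer_count++;
    }
}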