Threading queue in C++

I'm currently working on a project and struggling with threading and a queue at the moment; the issue is that all threads take the same item from the queue.
Reproducible example:
#include <iostream>
#include <queue>
#include <string>
#include <thread>
#include <vector>

using namespace std;

void Test(queue<string> queue){
    while (!queue.empty()) {
        string proxy = queue.front();
        cout << proxy << "\n";
        queue.pop();
    }
}

int main()
{
    queue<string> queue;
    queue.push("101.132.186.39:9090");
    queue.push("95.85.24.83:8118");
    queue.push("185.211.193.162:8080");
    queue.push("87.106.37.89:8888");
    queue.push("159.203.61.169:8080");

    std::vector<std::thread> ThreadVector;
    for (int i = 0; i <= 10; i++){
        ThreadVector.emplace_back([&]() { Test(queue); });
    }
    for (auto& t : ThreadVector){
        t.join();
    }
    ThreadVector.clear();
    return 0;
}

You are giving each thread its own copy of the queue. What you presumably want is for all the threads to work on the same queue, and for that you will need some synchronization mechanism when multiple threads work on the shared queue, since std::queue is not thread safe.
Edit: minor note: with i <= 10 your code spawns 11 threads, not 10 (use i < 10).
Edit 2: OK, try this one to begin with:
std::mutex lock_work;
std::mutex lock_io;

void Test(queue<string>& queue){
    while (true) {
        string proxy;
        {
            std::lock_guard<std::mutex> lock(lock_work);
            if (queue.empty())
                break; // check under the lock: another thread may have taken the last item
            proxy = queue.front();
            queue.pop();
        }
        {
            std::lock_guard<std::mutex> lock(lock_io);
            cout << proxy << "\n";
        }
    }
}

Look at this snippet:
void Test(std::queue<std::string> queue) { /* ... */ }
Here you pass a copy of the queue object to the thread.
This copy is local to each thread and gets destroyed when the thread exits, so in the end your program has no effect on the actual queue object that resides in main().
To fix this, you need to either make the parameter take a reference or a pointer:
void Test(std::queue<std::string>& queue) { /* ... */ }
This makes the parameter directly refer to the queue object present inside main() instead of creating a copy.
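For completeness, the pointer form would look like this (callers would then pass &queue instead of queue):
void Test(std::queue<std::string>* queue) { /* ... */ }
The rest of this answer uses the reference form.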
Now, the above code is still not correct: the shared queue is subject to a data race, and neither std::queue nor std::cout is thread-safe, so one thread can be interrupted by another in the middle of an access. To prevent this, use a std::mutex:
// ...
#include <mutex>
// ...

// The mutex protects the 'queue' object from being subjected to data-race amongst different threads
// Additionally 'io_mut' is used to protect the streaming operations done with 'std::cout'
std::mutex mut, io_mut;

void Test(std::queue<std::string>& queue) {
    std::queue<std::string> tmp;
    {
        // Swap the actual object with a local temporary object while being protected by the mutex
        std::lock_guard<std::mutex> lock(mut);
        std::swap(tmp, queue);
    }
    while (!tmp.empty()) {
        std::string proxy = tmp.front();
        {
            // Call to 'std::cout' needs to be synchronized
            std::lock_guard<std::mutex> lock(io_mut);
            std::cout << proxy << "\n";
        }
        tmp.pop();
    }
}
This synchronizes the threads: each thread atomically takes whatever is currently in the shared queue by swapping it into a local copy under the mutex, so the queue is never accessed by two threads at once, and each thread then processes its own batch without holding the lock.
Edit:
Alternatively, it'd be much faster in my opinion to make the threads wait until they receive a notification for each push to the std::queue. You can do this through the use of std::condition_variable:
// ...
#include <mutex>
#include <condition_variable>
// ...
std::mutex mut;
std::condition_variable cond;
bool done = false; // set by main() once it has finished pushing

void Test(queue<string>& queue) {
    std::unique_lock<std::mutex> lock(mut);
    while (true) {
        // Wait until 'queue' is not empty, or until the producer signals that it is done...
        cond.wait(lock, [&queue] { return !queue.empty() || done; });
        if (queue.empty())
            break;
        std::string proxy = std::move(queue.front());
        queue.pop();
        std::cout << proxy << "\n";
    }
}
// ...
int main() {
    std::queue<std::string> queue;
    std::vector<std::thread> ThreadVector;
    for (int i = 0; i < 10; i++)
        ThreadVector.emplace_back([&]() { Test(queue); });
    // Notify the waiting threads on each 'push()' call to 'queue'
    {
        std::unique_lock<std::mutex> lock(mut);
        queue.push("101.132.186.39:9090");
        cond.notify_one();
    }
    {
        std::unique_lock<std::mutex> lock(mut);
        queue.push("95.85.24.83:8118");
        cond.notify_one();
    }
    {
        std::unique_lock<std::mutex> lock(mut);
        queue.push("185.211.193.162:8080");
        cond.notify_one();
    }
    {
        std::unique_lock<std::mutex> lock(mut);
        queue.push("87.106.37.89:8888");
        cond.notify_one();
    }
    {
        std::unique_lock<std::mutex> lock(mut);
        queue.push("159.203.61.169:8080");
        cond.notify_one();
    }
    // No more items will arrive: wake up the remaining threads so they can exit
    {
        std::unique_lock<std::mutex> lock(mut);
        done = true;
    }
    cond.notify_all();
    for (auto& t : ThreadVector)
        t.join();
    ThreadVector.clear();
}

Related

Correct way to check bool flag in thread

How can I check a bool variable in a class in a thread-safe way?
For example in my code:
// test.h
class Test {
    void threadFunc_run();
    void change(bool _set) { m_flag = _set; }
    ...
    bool m_flag;
};

// test.cpp
void Test::threadFunc_run()
{
    // called "Playing"
    while (m_flag == true) {
        for (int i = 0; i < 99999999 && m_flag; i++) {
            // do something .. 1
        }
        for (int i = 0; i < 111111111 && m_flag; i++) {
            // do something .. 2
        }
    }
}
I want to stop "Playing" as soon as the change(..) function is executed from external code.
The stop should also take effect while the for loops are running.
From searching, I found that there are variable types for picking up changes immediately, such as atomic or volatile.
If it doesn't have to be immediate, is there a better way using a normal bool?
Actually, synchronizing threads safely requires more than a bool.
You will need a state, a mutex and a condition variable, like this.
This approach also allows the loop to react quickly to a stop request from within the loop.
#include <chrono>
#include <condition_variable>
#include <iostream>
#include <future>
#include <mutex>

class Test
{
private:
    // having just a bool to check the state of your thread is NOT enough.
    // your thread will have some intermediate states as well
    enum play_state_t
    {
        idle,     // initial state, not started yet (not scheduled by the OS thread scheduler yet)
        playing,  // running and doing work
        stopping, // request for stop is issued
        stopped   // thread has stopped (could also be checked by std::future synchronization).
    };

public:
    void play()
    {
        // start the play loop; the lambda is not guaranteed to have started
        // after the call returns (depends on thread scheduling of the underlying OS).
        // I use std::async since that has far superior synchronization with the calling thread:
        // the returned future can be used to pass both values & exceptions back to it.
        m_play_future = std::async(std::launch::async, [this]
        {
            // give a signal that the asynchronous function has really started
            set_state(play_state_t::playing);
            std::cout << "play started\n";

            // as long as state is playing keep doing the work
            while (get_state() == play_state_t::playing)
            {
                // loop to show we can break out of it fast when stop is called
                for (std::size_t i = 0; (i < 100l) && (get_state() == play_state_t::playing); ++i)
                {
                    std::cout << ".";
                    std::this_thread::sleep_for(std::chrono::milliseconds(200));
                }
            }

            set_state(play_state_t::stopped);
            std::cout << "play stopped.\n";
        });

        // avoid race conditions: really wait for the
        // thread handling the async call to have started playing
        wait_for_state(play_state_t::playing);
    }

    void stop()
    {
        std::unique_lock<std::mutex> lock{ m_mtx }; // only wait on the condition variable while holding the lock
        if (m_state == play_state_t::playing)
        {
            std::cout << "\nrequest stop.\n";
            m_state = play_state_t::stopping;
            m_cv.wait(lock, [&] { return m_state == play_state_t::stopped; });
        }
    };

    ~Test()
    {
        stop();
    }

private:
    void set_state(const play_state_t state)
    {
        std::unique_lock<std::mutex> lock{ m_mtx }; // only modify the state while holding the lock
        m_state = state;
        m_cv.notify_all(); // let other threads that are waiting on the condition variable wake up to check the new state
    }

    play_state_t get_state() const
    {
        std::unique_lock<std::mutex> lock{ m_mtx }; // only read the state while holding the lock
        return m_state;
    }

    void wait_for_state(const play_state_t state)
    {
        std::unique_lock<std::mutex> lock{ m_mtx };
        m_cv.wait(lock, [&] { return m_state == state; });
    }

    // for more info on condition variables
    // see : https://www.modernescpp.com/index.php/c-core-guidelines-be-aware-of-the-traps-of-condition-variables
    mutable std::mutex m_mtx;
    std::condition_variable m_cv; // a condition variable is not really a variable, more a signal for threads to wake up
    play_state_t m_state{ play_state_t::idle };
    std::future<void> m_play_future;
};

int main()
{
    Test test;
    test.play();
    std::this_thread::sleep_for(std::chrono::seconds(1));
    test.stop();
    return 0;
}

Pause threads from a different thread, and then wait until all are paused

I want to pause a number of worker threads from a creator thread. This can be done with a condition variable, as seen in this code.
#include <iostream>
#include <vector>
#include <thread>
#include <mutex>
#include <condition_variable>
#include <atomic>

#define NR_ITERATIONS 3
#define NR_THREADS 5

class c_threads {
private:
    bool m_worker_threads_pause;
    //std::atomic<int> m_worker_threads_paused;
    std::mutex m_worker_thread_mutex;
    std::condition_variable m_worker_thread_conditional_variable;

    void worker_thread() {
        std::unique_lock<std::mutex> worker_thread_lock(m_worker_thread_mutex);
        m_worker_thread_conditional_variable.wait(worker_thread_lock,
            [this]{ return !this->m_worker_threads_pause; }
        );
        std::cout << "worker thread function" << std::endl;
        //...
    }

    void creator_thread() {
        std::cout << "creator thread function" << std::endl;
        {
            std::lock_guard<std::mutex> lock_guard(m_worker_thread_mutex);
            m_worker_threads_pause = true;
        }
        // wait_until( worker_threads_waiting == NR_THREADS);
        //...
        {
            std::lock_guard<std::mutex> lock_guard(m_worker_thread_mutex);
            m_worker_threads_pause = false;
        }
        m_worker_thread_conditional_variable.notify_all();
    }

public:
    c_threads() : m_worker_threads_pause(true)
        /*m_worker_threads_paused(0)*/ {}

    void start_job() {
        std::vector<std::thread> worker_threads;
        worker_threads.reserve(NR_THREADS);
        for (int i = 0; i < NR_THREADS; i++) {
            worker_threads.emplace_back(&c_threads::worker_thread, this);
        }
        std::thread o_creator_thread(&c_threads::creator_thread, this);
        o_creator_thread.join();
        for (auto& thread : worker_threads) {
            thread.join();
        }
    }
};

int main(int argc, char** argv) {
    c_threads o_threads;
    o_threads.start_job();
}
The problem is that the creator_thread function should wait until all worker_thread functions are waiting on the condition variable before it proceeds.
Every time the creator_thread function is called, it should:
1. Pause the worker threads
2. Wait until they are all paused at the condition variable
3. Proceed
How can I achieve this?
There might be a better way, but I think you're going to have to do something a little more complicated, like create a gatekeeper object. Worker threads generally work like this:
while (iShouldKeepRunning()) {
    ... lock the mutex
    ... look for something to do
    ... if nothing to do, then wait on the condition
}
I think instead you would want some sort of "give me more work" object, or maybe an "is it safe to keep working" object that your creator thread can block on.
while (iShouldKeepRunning()) {
    ... no mutex at all
    ... ask the gatekeeper for something to do / if it's safe to do something
    ... and the gatekeeper blocks as necessary
    ... do the work
}
The gatekeeper locks the mutex, checks if it's safe to give out work, and if it isn't, increments an "I'm making this guy wait" counter before blocking on the condvar.
Something like that.
The blocker might look something like:
class BlockMyThreads {
public:
    int runningCount = 0;
    int blockedCount = 0;
    bool mayWork = true;
    std::mutex myMutex;
    std::condition_variable condVar;

    void iAmWorking() {
        std::unique_lock<std::mutex> lock(myMutex);
        ++runningCount;
    }
    void letMeWork() {
        std::unique_lock<std::mutex> lock(myMutex);
        while (!mayWork) {
            ++blockedCount;
            condVar.wait(lock);
            --blockedCount;
        }
    }
    void block() {
        std::unique_lock<std::mutex> lock(myMutex);
        mayWork = false;
    }
    void release() {
        std::unique_lock<std::mutex> lock(myMutex);
        mayWork = true;
        condVar.notify_all();
    }
};
I haven't tested this, so there might be errors. Your worker threads would need to call iAmWorking() at the start (to give you a thread count), and you'd want to add a matching decrement for them to call when they're done, I suppose.
The main thread can call block() and release() as you desire.
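For illustration, a hypothetical worker loop using this class might look like the following (doOneUnitOfWork and iShouldKeepRunning are placeholders from the sketches above, not part of the class):
void workerThread(BlockMyThreads& gate) {
    gate.iAmWorking();            // register this worker once at the start
    while (iShouldKeepRunning()) {
        gate.letMeWork();         // blocks here whenever block() has been called
        doOneUnitOfWork();        // placeholder for the actual work
    }
    // the matching "I'm done" decrement mentioned above would go here
}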

C++ condition_variable wait_for() blocks forever [duplicate]

I'm trying to create a producer-consumer program, where the consumers must keep running until all the producers are finished, then consume what's left in the queue (if there's anything left) and then end. You can check my code below; I think I know where the problem (probably a deadlock) is, but I don't know how to make it work properly.
#include <iostream>
#include <cstdlib>
#include <queue>
#include <thread>
#include <mutex>
#include <condition_variable>
#include <vector>

using namespace std;

class Company {
public:
    Company() : producers_done(false) {}
    void start(int n_producers, int n_consumers); // start consumer&producer threads
    void stop(); // join all threads
    void consumer();
    void producer();
    /* some other stuff */
private:
    condition_variable cond;
    mutex mut;
    bool producers_done;
    queue<int> products;
    vector<thread> producers_threads;
    vector<thread> consumers_threads;
    /* some other stuff */
};

void Company::consumer(){
    while(!products.empty()){
        unique_lock<mutex> lock(mut);
        while(products.empty() && !producers_done){
            cond.wait(lock); // <- I think this is where the deadlock happens
        }
        if (products.empty()){
            break;
        }
        products.pop();
        cout << "Removed product " << products.size() << endl;
    }
}

void Company::producer(){
    while(true){
        if((rand() % 10) == 0){
            break;
        }
        unique_lock<mutex> lock(mut);
        products.push(1);
        cout << "Added product " << products.size() << endl;
        cond.notify_one();
    }
}

void Company::stop(){
    for(auto &producer_thread : producers_threads){
        producer_thread.join();
    }
    unique_lock<mutex> lock(mut);
    producers_done = true;
    cout << "producers done" << endl;
    cond.notify_all();
    for(auto &consumer_thread : consumers_threads){
        consumer_thread.join();
    }
    cout << "consumers done" << endl;
}

void Company::start(int n_producers, int n_consumers){
    for(int i = 0; i < n_producers; ++i){
        producers_threads.push_back(thread(&Company::producer, this));
    }
    for(int i = 0; i < n_consumers; ++i){
        consumers_threads.push_back(thread(&Company::consumer, this));
    }
}

int main(){
    Company c;
    c.start(2, 2);
    c.stop();
    return true;
}
I know, there are a lot of producer-consumer related questions here, and I've scrolled through at least 10 of them, but none provided answer to my issue.
When people use std::atomic along with std::mutex and std::condition_variable, the result is a deadlock in almost 100% of cases. This is because modifications to that atomic variable are not protected by the mutex, and hence condition variable notifications get lost when that variable is updated after the mutex is locked but before the condition variable wait in the consumer.
A fix would be to not use std::atomic and only modify and read producers_done while the mutex is held. E.g.:
void Company::consumer(){
    for(;;){
        unique_lock<mutex> lock(mut);
        while(products.empty() && !producers_done)
            cond.wait(lock);
        if(products.empty())
            break;
        products.pop();
    }
}
Another error in the code is that in while(!products.empty()) it calls products.empty() without holding the mutex, resulting in a race condition.
The next error is keeping the mutex locked while waiting for the consumer threads to terminate. Fix:
{
    unique_lock<mutex> lock(mut);
    producers_done = true;
    // mutex gets unlocked here.
}
cond.notify_all();
for(auto &consumer_thread : consumers_threads)
    consumer_thread.join();
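For completeness, the producer can follow the same convention of only touching the shared state while the mutex is held and notifying after the lock is released; a minimal sketch, keeping the original random stop condition:
void Company::producer(){
    while((rand() % 10) != 0){
        {
            unique_lock<mutex> lock(mut);
            products.push(1);
            cout << "Added product " << products.size() << endl;
        }
        cond.notify_one();
    }
}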

Thread synchronization problem in thread's procedure

I have a question. I add objects to a map, and in a thread I call the run() procedure for all elements in the map.
Do I understand correctly that this code has a synchronization problem in the process procedure? Can I add a mutex, given that this procedure is called in the thread?
class Network {
public:
    Network() {
        std::cout << "Network constructor" << std::endl;
    }

    void NetworkInit(const std::string& par1) {
        this->par1 = par1;
    }

    ~Network() {
        std::cout << "Network destructor" << std::endl;
        my_map.clear();
    }

    void addLogic(uint32_t Id, std::shared_ptr<Logic> lgc) {
        std::lock_guard<std::mutex> lk(mutex);
        my_map.insert(std::pair<uint32_t, std::shared_ptr<Logic>>(Id, lgc));
        cv.notify_one();
    }

    void removeLogic(uint32_t Id) {
        std::unique_lock<std::mutex> lk(mutex);
        cv.wait(lk, [this]{ return !my_map.empty(); });
        auto p = this->my_map.find(Id);
        if (p != end(this->my_map)) {
            this->my_map.erase(Id);
        }
        lk.unlock();
    }

    /**
     * Start thread
     */
    void StartThread(int id = 1) {
        running = true;
        first = std::thread([this, id] { process(id); });
        first.detach();
    }

    /**
     * Stop thread
     */
    void StopThread() {
        running = false;
    }

private:
    std::thread first;
    std::atomic<bool> running = ATOMIC_VAR_INIT(true);

    void process(int id) {
        while (running) {
            for (const auto& it : my_map) {
                it.second->run();
            }
            std::this_thread::sleep_for(10ms);
        }
    }

private:
    std::mutex mutex;
    std::condition_variable cv;
    using MyMapType = std::map<uint32_t, std::shared_ptr<Logic>>;
    MyMapType my_map;
    std::string par1;
};
The first idea is to protect the map as a whole with a mutex that is released during run. This works for addLogic, because inserting into a map invalidates no iterators, but not for removeLogic, which might invalidate the very iterator value being used by process.
More efficient, lock-free approaches like hazard pointers may be applicable here, but the basic idea is to use a deferred deletion list. Assuming that the intent of concurrent deletion is cancellation of the task (not merely cleanup after all work is completed), it's sensible to have the consumer thread check immediately before execution. Using a set (to correspond to your map) lets the deletion list be dynamic and those checks be efficient.
So have another mutex protect the deletion list and take it at the beginning of each iteration in process:
void addLogic(uint32_t Id, std::shared_ptr<Logic> lgc) {
    std::lock_guard<std::mutex> lk(mutex);
    my_map.insert(std::pair<uint32_t, std::shared_ptr<Logic>>(Id, lgc));
}

void removeLogic(uint32_t Id) {
    std::lock_guard<std::mutex> kg(kill_mutex);
    kill.insert(Id);
}

private:
    std::set<uint32_t> kill;
    std::mutex mutex, kill_mutex;

    void process(int id) {
        for(; running; std::this_thread::sleep_for(10ms)) {
            std::unique_lock<std::mutex> lg(mutex);
            for(auto i = my_map.begin(), e = my_map.end(); i != e;) {
                if(std::lock_guard<std::mutex>(kill_mutex), kill.erase(i->first)) {
                    i = my_map.erase(i);
                    continue; // test i != e again
                }
                lg.unlock();
                i->second->run();
                lg.lock();
                ++i;
            }
        }
    }
This code omits your condition_variable usage: it’s not necessary to wait before enqueuing something for deletion.
The solution with low level concurrency primitives usually does not scale and is not easy to maintain.
A better alternative would be to have a thread-safe "control" queue of map update or worker termination instructions.
Something like this:
enum Op {
    ADD,
    DROP,
    STOP
};

struct Request {
    Op op;
    uint32_t id;
    std::function<void()> action;
};

...

// the map which required protection in your code
std::map<uint32_t, std::function<void()>> subs;

// requests queue and its mutex (not very optimal, just to demonstrate the idea)
std::vector<Request> requests;
std::mutex mutex;

// the worker thread
std::thread worker([&](){
    // the temporary buffer the requests are drained into from the queue before processing
    decltype(requests) buffer;
    // the main loop
    while (true) {
        // requests collection (requires synchronization)
        {
            std::lock_guard<decltype(mutex)> const guard {mutex};
            buffer.swap(requests);
        }
        // requests processing
        for (auto&& request : buffer) {
            switch (request.op) {
                case ADD:
                    subs[request.id] = std::move(request.action);
                    break;
                case DROP:
                    subs.erase(request.id);
                    break;
                case STOP:
                    goto endloop;
            }
        }
        // clear the processed requests so the next swap does not put them back into the queue
        buffer.clear();
        // map iteration
        for (auto&& entry : subs) {
            entry.second();
        }
    }
endloop:;
});
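Other threads then only touch the requests vector while holding the mutex. For example, a hypothetical way to submit requests from the main thread (the submit helper is illustrative, not part of the snippet above) could be:
auto submit = [&](Request r) {
    std::lock_guard<decltype(mutex)> const guard {mutex};
    requests.push_back(std::move(r));
};
submit({ADD, 1, []{ std::cout << "logic 1\n"; }}); // register a callable under id 1
submit({DROP, 1, nullptr});                        // later remove it again
submit({STOP, 0, nullptr});                        // ask the worker to terminate
worker.join();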

How to say to std::thread to stop?

I have two questions.
1) I want to launch some function with an infinite loop that works like a server, checking for messages in a separate thread. However, I want to be able to close it from the parent thread when I want. I'm confused about how to use std::future or std::condition_variable in this case. Or is it better to create some global variable and change it to true/false from the parent thread?
2) I'd like to have something like the following. Why does this example crash at run time?
#include <iostream>
#include <chrono>
#include <thread>
#include <mutex>
#include <future>

std::mutex mu;
bool stopServer = false;

bool serverFunction()
{
    while (true)
    {
        // checking for messages...
        // processing messages
        std::this_thread::sleep_for(std::chrono::seconds(1));

        mu.lock();
        if (stopServer)
            break;
        mu.unlock();
    }
    std::cout << "Exiting func..." << std::endl;
    return true;
}

int main()
{
    std::thread serverThread(serverFunction);
    // some stuff
    system("pause");

    mu.lock();
    stopServer = true;
    mu.unlock();

    serverThread.join();
}
Why does this example crash at run time?
When you leave the inner loop of your thread, you leave the mutex locked, so the parent thread may be blocked forever if you use that mutex again.
You should use std::unique_lock or something similar to avoid problems like that.
You leave your mutex locked. Don't lock mutexes manually in 999/1000 cases.
In this case, you can use std::unique_lock<std::mutex> to create a RAII lock-holder that will avoid this problem. Simply create it in a scope, and have the lock area end at the end of the scope.
{
    std::unique_lock<std::mutex> lock(mu);
    stopServer = true;
}
in main and
{
    std::unique_lock<std::mutex> lock(mu);
    if (stopServer)
        break;
}
in serverFunction.
Now in this case your mutex is pointless. Remove it. Replace bool stopServer with std::atomic<bool> stopServer, and remove all references to mutex and mu from your code.
An atomic variable can safely be read/written to from different threads.
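A minimal sketch of serverFunction with that change (the rest of the original main() stays the same, with the mutex calls removed):
#include <atomic>

std::atomic<bool> stopServer{false};

bool serverFunction()
{
    while (!stopServer)
    {
        // checking for messages...
        // processing messages
        std::this_thread::sleep_for(std::chrono::seconds(1));
    }
    std::cout << "Exiting func..." << std::endl;
    return true;
}
// in main(), simply: stopServer = true; before serverThread.join();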
However, your code is still busy-waiting. The right way to handle a server processing messages is a condition variable guarding the message queue. You then stop it by front-queuing a stop server message (or a flag) in the message queue.
This results in a server thread that doesn't wake up and pointlessly spin nearly as often. Instead, it blocks on the condition variable (with some spurious wakeups, but rare) and only really wakes up when there are new messages or it is told to shut down.
template<class T>
struct cross_thread_queue {
    void push( T t ) {
        {
            auto l = lock();
            data.push_back(std::move(t));
        }
        cv.notify_one();
    }
    boost::optional<T> pop() {
        auto l = lock();
        cv.wait( l, [&]{ return halt || !data.empty(); } );
        if (halt) return {};
        T r = data.front();
        data.pop_front();
        return std::move(r); // returning to optional<T>, so we'll explicitly `move` here.
    }
    void terminate() {
        {
            auto l = lock();
            data.clear();
            halt = true;
        }
        cv.notify_all();
    }
private:
    std::mutex m;
    std::unique_lock<std::mutex> lock() {
        return std::unique_lock<std::mutex>(m);
    }
    bool halt = false;
    std::deque<T> data;
    std::condition_variable cv;
};
We use boost::optional for the return type of pop -- if the queue is halted, pop returns an empty optional. Otherwise, it blocks until there is data.
You can replace this with anything optional-like, even a std::pair<bool, T> where the first element says if there is anything to return, or a std::unique_ptr<T>, or a std::experimental::optional, or a myriad of other choices.
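For instance, a sketch of pop() rewritten to return std::pair<bool, T> instead (this assumes T is default-constructible; callers would check .first before using .second rather than testing the optional):
std::pair<bool, T> pop() {
    auto l = lock();
    cv.wait( l, [&]{ return halt || !data.empty(); } );
    if (halt) return { false, T{} };   // nothing to return, queue was terminated
    T r = std::move(data.front());
    data.pop_front();
    return { true, std::move(r) };
}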
cross_thread_queue<int> queue;

bool serverFunction()
{
    while (auto message = queue.pop()) {
        // processing *message
        std::cout << "Processing " << *message << std::endl;
    }
    std::cout << "Exiting func..." << std::endl;
    return true;
}

int main()
{
    std::thread serverThread(serverFunction);
    // some stuff
    queue.push(42);
    system("pause");
    queue.terminate();
    serverThread.join();
}