I have one thread that pushes to the queue and one that consumes elements from the queue.
Processing of one of the elements is asynchronous, but I do not want to process other elements while that one is being processed.
(Let's assume that the output stream and the queue are thread safe.)
I wonder what is the best way to implement the consuming thread... I think while(true) and conditions is not the best choice.
Here is a simple implementation (process2 has to be asynchronous):
#include <iostream>
#include <queue>
#include <thread>
#include <atomic>
#include <future>
#include <cstdlib>

std::atomic_bool isProcess2Processing{false};

void process0()
{
    std::cout << "process0" << std::endl;
}

void process1()
{
    std::cout << "process1" << std::endl;
}

void process2()
{
    std::async(std::launch::async, []() {
        isProcess2Processing = true;
        std::cout << "start process2" << std::endl;
        while (std::rand() > 10000) {};
        std::cout << "finished process2" << std::endl;
        isProcess2Processing = false;
    });
}

void consume(int x)
{
    if (x == 0)
    {
        process0();
    }
    else if (x == 1)
    {
        process1();
    }
    else
    {
        process2();
    }
}

int main()
{
    std::queue<int> q;
    std::thread consumingThread([&q]() {
        while (true) {
            if (!q.empty() && !isProcess2Processing) {
                consume(q.front());
                q.pop();
            }
        }
    });
    while (true)
    {
        q.push(std::rand() % 3);
    }
}
I wonder what is the best way to implement the consuming thread... I think while(true) and conditions is not the best choice.
Your contemplation here is justified: The biggest problem with using a while-loop like this (i.e. without any waiting involved) is that you are wasting CPU time and power. Your secondary thread (and per the code given your main thread too) keeps a CPU core all to itself for some time for no apparent reason, so that other tasks cannot get CPU time for themselves.
The most naive way of going about changing that is to add some kind of sleep, like this:
std::thread consumingThread([&q]() {
    while (true) {
        if (!q.empty() && !isProcess2Processing) {
            consume(q.front());
            q.pop();
        }
        std::this_thread::sleep_for(std::chrono::milliseconds(5));
    }
});
Here you'll be sleeping for 5 ms, during which the scheduler will be able to let other tasks do their work.
In addition, make sure each loop has an exit condition, and call consumingThread.join(); before you leave main().
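A better approach than polling at all (even with a sleep) is to block on a std::condition_variable, so the consumer really sleeps until there is something to do. This is only a sketch, not your exact code: it guards the queue with its own mutex instead of relying on the "thread safe queue" assumption, and it reuses your consume() function:
#include <condition_variable>
#include <mutex>
#include <queue>

std::queue<int> q;
std::mutex m;
std::condition_variable cv;
bool done = false;                      // set to true (under the lock) to stop the consumer

void consumerLoop()                     // body of consumingThread
{
    for (;;) {
        int x;
        {
            std::unique_lock<std::mutex> lock(m);
            cv.wait(lock, [] { return done || !q.empty(); }); // sleeps instead of spinning
            if (done && q.empty())
                return;                 // exit condition
            x = q.front();
            q.pop();
        }
        consume(x);                     // consume() is the dispatch function from the question
    }
}

void push(int x)                        // producer side
{
    {
        std::lock_guard<std::mutex> lock(m);
        q.push(x);
    }
    cv.notify_one();                    // wake the consumer
}
If process2() must also block further consumption, the simplest option is to have consume() keep the std::future returned by std::async and wait on it, instead of polling an atomic flag.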
For the program below, the thread pool always picks the same thread ID 0x7000095f9000! Why so?
Shouldn't every push's cond.notify_all() wake up all threads at the same time? What could be the reason the same thread ID gets picked?
My computer supports 3 threads.
Any other info on using function objects would be helpful!
Output:
Checking if not empty
Not Empty
0x700009576000 0
Checking if not empty
Checking if not empty
Checking if not empty
Not Empty
0x7000095f9000 1
Checking if not empty
Not Empty
0x7000095f9000 2
Checking if not empty
Not Empty
0x7000095f9000 3
Checking if not empty
Not Empty
0x7000095f9000 4
Checking if not empty
Not Empty
0x7000095f9000 5
Checking if not empty
Code
#include <iostream>
#include <vector>
#include <queue>
#include <thread>
#include <condition_variable>
#include <chrono>
#include <functional>
using namespace std;

class TestClass{
public:
    void producer(int i) {
        unique_lock<mutex> lockGuard(mtx);
        Q.push(i);
        cond.notify_all();
    }

    void consumer() {
        {
            unique_lock<mutex> lockGuard(mtx);
            cout << "Checking if not empty" << endl;
            cond.wait(lockGuard, [this]() {
                return !Q.empty();
            });
            cout << "Not Empty" << endl;
            cout << this_thread::get_id() << " " << Q.front() << endl;
            Q.pop();
        }
    };

    void consumerMain() {
        while(1) {
            consumer();
            std::this_thread::sleep_for(chrono::seconds(1));
        }
    }

private:
    mutex mtx;
    condition_variable cond;
    queue<int> Q;
};

int main()
{
    std::vector<std::thread> vecOfThreads;
    std::function<void(TestClass&)> func = [&](TestClass &obj) {
        while(1) {
            obj.consumer();
        }
    };
    unsigned MAX_THREADS = std::thread::hardware_concurrency()-1;
    TestClass obj;
    for(int i=0; i<MAX_THREADS; i++) {
        std::thread th1(func, std::ref(obj));
        vecOfThreads.emplace_back(std::move(th1));
    }
    for(int i=0; i<4*MAX_THREADS/2; i++) {
        obj.producer(i);
    }
    for (std::thread & th : vecOfThreads)
    {
        if (th.joinable())
            th.join();
    }
    return 0;
}
Any other pointers? Thanks in advance!
The mutex is unlocked only very briefly in the consumer threads, so in your case the thread that is already running will most probably acquire the lock again, and again and again.
If you instead simulate some work being done after the workload has been picked from the queue, by calling consumerMain (which sleeps a little) instead of consumer, you will likely see different threads picking up the workload.
while(1) {
    obj.consumerMain();
}
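For illustration, here is a hypothetical variant of the func lambda from the question that achieves the same effect directly: the simulated work (a short sleep) happens outside the critical section, so the mutex stays free long enough for another thread to grab it.
std::function<void(TestClass&)> func = [](TestClass &obj) {
    while (true) {
        obj.consumer();   // takes and releases the lock internally
        // simulate work outside the lock so other threads can win the mutex
        std::this_thread::sleep_for(std::chrono::milliseconds(10));
    }
};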
So I have two threads which share the same variable, 'counter'. I want to synchronize my threads by only continuing execution once both threads have reached that point. Unfortunately I enter a deadlock state, as my thread isn't seeing a change in the variable it is checking. The way I have it is:
volatile int counter = 0;

Thread() {
    - some calculations -
    counter++;
    while(counter != 2) {
        std::this_thread::yield();
    }
    counter = 0;
    - rest of the calculations -
}
The idea is that since I have 2 threads, once they reach that point (at different times) they will each increment the counter. If the counter isn't equal to 2, then the thread that reached it first will have to wait until the other has incremented the counter, so that they are synced up. Does anyone know where the issue lies here?
To add more information about the problem: I have two threads which each perform half of the operations on an array. Once they are done, I want to make sure that they have both finished their calculations. Once they have, I can signal the printer thread to wake up and perform its operation of printing and clearing the array. If I do this before both threads have completed, there will be issues.
Pseudo code:
Thread() {
    getLock()
    1/2 of the calculations on array
    releaseLock()
    wait for both to finish - this is the issue
    wake up printer thread
}
In situations like this, you must use an atomic counter:
std::atomic_uint counter{0};
A plain volatile int is not enough: counter++ is not an atomic read-modify-write, and volatile provides none of the ordering guarantees needed for inter-thread synchronization (in standard C++, unsynchronized concurrent access is a data race, i.e. undefined behaviour). Also, in the pseudocode given there is no sign of where counter gets initialized.
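A minimal sketch of the one-shot rendezvous with an atomic counter (the function and thread names here are illustrative, not the asker's code):
#include <atomic>
#include <iostream>
#include <thread>

std::atomic_uint counter{0};

void worker(const char* name)
{
    // ... first half of the calculations ...
    ++counter;                          // announce arrival at the rendezvous point
    while (counter.load() != 2)         // wait until the other thread has arrived too
        std::this_thread::yield();
    // ... rest of the calculations ...
    std::cout << name << " passed the rendezvous\n";
}

int main()
{
    std::thread a(worker, "A");
    std::thread b(worker, "B");
    a.join();
    b.join();
}
Note that this rendezvous works only once. Resetting the counter for reuse, as the counter = 0; line in the question does, reintroduces a race: one thread can zero the counter before the other has observed it reach 2, leaving that thread spinning forever.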
You are probably looking for std::condition_variable: a condition variable allows one thread to signal another. Because it doesn't look like you are using the counter for anything other than synchronisation, here is some code from another answer (disclaimer: it's one of my answers) that shows a std::condition_variable processing logic on different threads and performing synchronisation around a value:
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <thread>

unsigned int accountAmount;
std::mutex mx;
std::condition_variable cv;

void depositMoney()
{
    // go to the bank etc...
    // wait in line...
    {
        std::unique_lock<std::mutex> lock(mx);
        std::cout << "Depositing money" << std::endl;
        accountAmount += 5000;
    }
    // Notify others we're finished
    cv.notify_all();
}

void withdrawMoney()
{
    std::unique_lock<std::mutex> lock(mx);
    // Wait until we know the money is there; the predicate also protects
    // against spurious wakeups and against the notification arriving first
    cv.wait(lock, [] { return accountAmount >= 2000; });
    std::cout << "Withdrawing money" << std::endl;
    accountAmount -= 2000;
}

int main()
{
    accountAmount = 0;
    // Run both threads simultaneously:
    std::thread deposit(&depositMoney);
    std::thread withdraw(&withdrawMoney);
    // Wait for both threads to finish
    deposit.join();
    withdraw.join();
    std::cout << "All transactions processed. Final amount: " << accountAmount << std::endl;
    return 0;
}
I would look into using a countdown latch. The idea is to have one or more threads block until the desired operation is completed. In this case you want to wait until both threads are finished modifying the array.
Here is a simple example:
#include <condition_variable>
#include <mutex>
#include <thread>

class countdown_latch
{
public:
    countdown_latch(int count)
        : count_(count)
    {
    }

    void wait()
    {
        std::unique_lock<std::mutex> lock(mutex_);
        while (count_ > 0)
            condition_variable_.wait(lock);
    }

    void countdown()
    {
        std::lock_guard<std::mutex> lock(mutex_);
        --count_;
        if (count_ == 0)
            condition_variable_.notify_all();
    }

private:
    int count_;
    std::mutex mutex_;
    std::condition_variable condition_variable_;
};
and usage would look like this
#include <atomic>
#include <iostream>

std::atomic<int> result{0};
countdown_latch latch(2);

void perform_work()
{
    ++result;
    latch.countdown();
}

int main()
{
    std::thread t1(perform_work);
    std::thread t2(perform_work);
    latch.wait();
    std::cout << "result = " << result;
    t1.join();
    t2.join();
}
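If a C++20 compiler is available, the standard library provides this exact tool as std::latch, so the hand-written class is no longer needed; a minimal sketch of the same usage:
#include <atomic>
#include <iostream>
#include <latch>
#include <thread>

std::atomic<int> result{0};
std::latch latch(2);

void perform_work()
{
    ++result;
    latch.count_down();   // equivalent to countdown() above
}

int main()
{
    std::thread t1(perform_work);
    std::thread t2(perform_work);
    latch.wait();         // blocks until the count reaches zero
    std::cout << "result = " << result << "\n";
    t1.join();
    t2.join();
}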
I have an XML file with a sequence of nodes. Each node represents an element that I need to parse and add to a sorted list (the order must be the same as the order of the nodes found in the file).
At the moment I am using a sequential solution:
struct Graphic
{
    bool parse()
    {
        // parsing...
        return parse_outcome;
    }
};

vector<unique_ptr<Graphic>> graphics;

void producer()
{
    for (size_t i = 0; i < N_GRAPHICS; i++)
    {
        auto g = new Graphic();
        if (g->parse())
            graphics.emplace_back(g);
        else
            delete g;
    }
}
So, only if the graphic (which actually is an instance of a class derived from Graphic, such as a Line, a Rectangle and so on, which is why the new) can be properly parsed will it be added to my data structure.
Since I only care about the order in which these graphics are added to my list, I thought of calling the parse method asynchronously, so that the producer has the task of reading each node from the file and adding the graphic to the data structure, while the consumers have the task of parsing each graphic whenever a new one is ready to be parsed.
Now I have several consumer threads (created in the main) and my code looks like the following:
queue<pair<Graphic*, size_t>> q;
mutex m;
atomic<size_t> n_elements;

void producer()
{
    for (size_t i = 0; i < N_GRAPHICS; i++)
    {
        auto g = new Graphic();
        graphics.emplace_back(g);
        q.emplace(make_pair(g, i));
    }
    n_elements = graphics.size();
}

void consumer()
{
    pair<Graphic*, size_t> item;
    while (true)
    {
        {
            std::unique_lock<std::mutex> lk(m);
            if (n_elements == 0)
                return;
            n_elements--;
            item = q.front();
            q.pop();
        }
        if (!item.first->parse())
        {
            // here I should remove the item from the vector
            assert(graphics[item.second].get() == item.first);
            delete item.first;
            graphics[item.second] = nullptr;
        }
    }
}
I run the producer first of all in my main, so that when the first consumer starts the queue is already completely full.
int main()
{
    producer();
    vector<thread> threads;
    for (auto i = 0; i < N_THREADS; i++)
        threads.emplace_back(consumer);
    for (auto& t : threads)
        t.join();
    return 0;
}
The concurrent version seems to be at least twice as fast as the original one.
The full code has been uploaded here.
Now I am wondering:
Are there any (synchronization) errors in my code?
Is there a way to achieve the same result faster (or better)?
Also, I noticed that on my computer I get the best result (in terms of elapsed time) if I set the number of threads equal to 8. More (or fewer) threads give me worse results. Why?
Are there any (synchronization) errors in my code?
There aren't any synchronization errors, but your memory management could be better: your code will leak if parse() throws an exception.
Is there a way to achieve the same result faster (or better)?
Probably. You could use a simple implementation of a thread pool and a lambda that does the parse() for you.
The code below illustrates this approach. I use the thread pool implementation from here:
#include <iostream>
#include <stdexcept>
#include <vector>
#include <memory>
#include <chrono>
#include <utility>
#include <future>
#include <cassert>
#include <ThreadPool.h>

using namespace std;
using namespace std::chrono;

#define N_GRAPHICS (1000*1000*1)
#define N_THREADS 8

struct Graphic;
using GPtr = std::unique_ptr<Graphic>;
static vector<GPtr> graphics;

struct Graphic
{
    Graphic()
        : status(false)
    {
    }

    bool parse()
    {
        // waste time
        try
        {
            throw runtime_error("");
        }
        catch (const runtime_error&)
        {
        }
        status = true;
        //return false;
        return true;
    }

    bool status;
};

int main()
{
    auto start = system_clock::now();

    auto producer_unit = []() -> GPtr {
        std::unique_ptr<Graphic> g(new Graphic);
        if (!g->parse()) {
            g.reset(); // if g doesn't parse, return nullptr
        }
        return g;
    };

    using ResultPool = std::vector<std::future<GPtr>>;
    ResultPool results;

    // ThreadPool pool(thread::hardware_concurrency());
    ThreadPool pool(N_THREADS);

    for (int i = 0; i < N_GRAPHICS; ++i) {
        // Running async task
        results.emplace_back(pool.enqueue(producer_unit));
    }

    for (auto& t : results) {
        auto value = t.get();
        if (value) {
            graphics.emplace_back(std::move(value));
        }
    }

    auto duration = duration_cast<milliseconds>(system_clock::now() - start);
    cout << "Elapsed: " << duration.count() << endl;

    for (size_t i = 0; i < graphics.size(); i++)
    {
        if (!graphics[i]->status)
        {
            cerr << "Assertion failed! (" << i << ")" << endl;
            break;
        }
    }

    cin.get();
    return 0;
}
It is a bit faster (about 1 s) on my machine, more readable, and removes the need for shared data (synchronization is evil; avoid it or hide it in a reliable and efficient way).
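If pulling in an external ThreadPool.h is not an option, a similar effect can be achieved with just the standard library by splitting the work into one chunk per thread with std::async. This is only a sketch under the same assumptions as the code above (Graphic, GPtr, N_GRAPHICS and N_THREADS as defined there); the order of the nodes is preserved because the chunks are collected in submission order:
#include <algorithm>
#include <cstddef>
#include <future>
#include <iterator>
#include <memory>
#include <vector>

// Parse a contiguous range [begin, end) of graphics and return the survivors,
// in order. Failed parses are simply skipped.
static std::vector<GPtr> parse_range(std::size_t begin, std::size_t end)
{
    std::vector<GPtr> out;
    out.reserve(end - begin);
    for (std::size_t i = begin; i < end; ++i) {
        GPtr g(new Graphic);
        if (g->parse())
            out.emplace_back(std::move(g));
    }
    return out;
}

int main()
{
    const std::size_t chunk = (N_GRAPHICS + N_THREADS - 1) / N_THREADS;

    // One future per chunk; each chunk is parsed on its own thread.
    std::vector<std::future<std::vector<GPtr>>> parts;
    for (std::size_t begin = 0; begin < N_GRAPHICS; begin += chunk)
        parts.emplace_back(std::async(std::launch::async, parse_range, begin,
                                      std::min<std::size_t>(begin + chunk, N_GRAPHICS)));

    // Collect the chunks in submission order, which preserves the file order.
    std::vector<GPtr> graphics;
    for (auto& f : parts)
    {
        auto part = f.get();
        std::move(part.begin(), part.end(), std::back_inserter(graphics));
    }
}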
Say we have a simple async call that we want to kill/terminate/eliminate on timeout:
// future::wait_for
#include <iostream>       // std::cout
#include <future>         // std::async, std::future
#include <chrono>         // std::chrono::milliseconds

// a non-optimized way of checking for prime numbers:
bool is_prime (int x) {
    for (int i=2; i<x; ++i) if (x%i==0) return false;
    return true;
}

int main ()
{
    // call function asynchronously:
    std::future<bool> fut = std::async (is_prime,700020007);

    // do something while waiting for function to set future:
    std::cout << "checking, please wait";
    std::chrono::milliseconds span (100);
    while (fut.wait_for(span)==std::future_status::timeout)
        std::cout << '.';

    bool x = fut.get();
    std::cout << "\n700020007 " << (x?"is":"is not") << " prime.\n";
    return 0;
}
We want to kill it as soon as the first timeout happens. I can't find a method for that in std::future.
The closest I could find to stopping a running task was the std::packaged_task reset method, yet it does not say whether it can interrupt a running task. So how does one kill a task that is running asynchronously, without using boost::thread or other non-STL libraries?
It's not possible to stop a std::async out of the box... However, you can do this: pass an atomic flag that terminates the is_prime method, and throw an exception if there is a timeout:
// future::wait_for
#include <iostream>       // std::cout
#include <future>         // std::async, std::future
#include <chrono>         // std::chrono::milliseconds
#include <atomic>         // std::atomic_bool
#include <stdexcept>      // std::runtime_error

// A non-optimized way of checking for prime numbers:
bool is_prime(int x, std::atomic_bool & run) {
    for (int i = 2; i < x && run; ++i)
    {
        if (x%i == 0) return false;
    }
    if (!run)
    {
        throw std::runtime_error("timed out!");
    }
    return true;
}

int main()
{
    // Call function asynchronously:
    std::atomic_bool run;
    run = true;
    std::future<bool> fut = std::async(is_prime, 700020007, std::ref(run));

    // Do something while waiting for function to set future:
    std::cout << "checking, please wait";
    std::chrono::milliseconds span(100);
    while (fut.wait_for(span) == std::future_status::timeout)
    {
        std::cout << '.';
        run = false;
    }

    try
    {
        bool x = fut.get();
        std::cout << "\n700020007 " << (x ? "is" : "is not") << " prime.\n";
    }
    catch (const std::runtime_error & ex)
    {
        // Handle timeout here
    }

    return 0;
}
Why being able to stop a thread is bad:
Stopping threads at an arbitrary point is dangerous and will lead to resource leaks, where the resources can be memory behind pointers, handles to files and folders, and other things the program should have cleaned up.
When killing a thread, the thread may or may not be doing work. Whatever it was doing, it won't get to complete it, and any objects it has successfully created will not get their destructors called, because there is no thread to run them on.
I have outlined some of the issues here.
I think it's not possible to safely interrupt a running loop from outside of the loop itself, so the STL doesn't provide such functionality. Of course, one could try to kill the running thread, but that's not safe, as it may lead to resource leaks.
You can check for a timeout inside the is_prime function and return from it if the timeout happens. Or you can pass a reference to a std::atomic<bool> to is_prime and check its value on each iteration. Then, when the timeout happens, you change the value of the atomic in main so is_prime returns.
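A minimal sketch of the first option (checking a deadline inside the function itself; the deadline parameter is an addition for illustration, not part of the original code):
#include <chrono>
#include <future>
#include <iostream>
#include <stdexcept>

// Gives up with an exception once its time budget is exhausted. The clock is
// only consulted every 1024 iterations because steady_clock::now() is
// comparatively expensive.
bool is_prime(int x, std::chrono::steady_clock::time_point deadline)
{
    for (int i = 2; i < x; ++i) {
        if (x % i == 0)
            return false;
        if ((i & 1023) == 0 && std::chrono::steady_clock::now() > deadline)
            throw std::runtime_error("timed out!");
    }
    return true;
}

int main()
{
    auto deadline = std::chrono::steady_clock::now() + std::chrono::milliseconds(100);
    auto fut = std::async(std::launch::async, is_prime, 700020007, deadline);
    try {
        std::cout << "700020007 " << (fut.get() ? "is" : "is not") << " prime.\n";
    } catch (const std::runtime_error&) {
        std::cout << "gave up after the timeout\n";
    }
    return 0;
}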
I have a total n00b question here on synchronization. I have a 'writer' thread which assigns a different value 'p' to a promise at each iteration. I need 'reader' threads which wait for shared_futures of this value and then process them. My question is: how do I use future/promise to ensure that the reader threads wait for a new update of 'p' before performing their processing task at each iteration? Many thanks.
You can "reset" a promise by assigning it to a blank promise.
myPromise = promise< int >();
A more complete example:
#include <iostream>
#include <future>
#include <thread>
#include <chrono>
using namespace std;

promise< int > myPromise;

void writer()
{
    for( int i = 0; i < 10; ++i )
    {
        cout << "Setting promise.\n";
        myPromise.set_value( i );

        myPromise = promise< int >{};     // Reset the promise.

        cout << "Waiting to set again...\n";
        this_thread::sleep_for( chrono::seconds( 1 ));
    }
}

void reader()
{
    int result;
    do
    {
        auto myFuture = myPromise.get_future();
        cout << "Waiting to receive result...\n";
        result = myFuture.get();
        cout << "Received " << result << ".\n";
    } while( result < 9 );
}

int main()
{
    std::thread write( writer );
    std::thread read( reader );

    write.join();
    read.join();

    return 0;
}
A problem with this approach, however, is that the lack of synchronization between the two threads can cause the writer to call promise::set_value() more than once between the reader's calls to future::get(), or future::get() to be called while the promise is being reset.
So although it's possible to reset a promise by assigning it to a fresh promise, doing so tends to raise broader synchronization issues.
A promise/future pair is designed to carry only a single value (or exception). To do what you're describing, you probably want to adopt a different tool.
If you wish to have multiple threads (your readers) all stop at a common point, you might consider a barrier.
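For the repeated-update pattern in the question, a mutex plus condition variable with a version counter is another natural tool. The sketch below is illustrative (the version counter and the fixed 10 iterations are assumptions, not taken from the question): readers simply block until the writer has published a newer p.
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <thread>

std::mutex m;
std::condition_variable cv;
int p = 0;            // the shared value
long version = 0;     // bumped every time the writer publishes a new p

void writer()
{
    for (int i = 1; i <= 10; ++i) {
        {
            std::lock_guard<std::mutex> lock(m);
            p = i;
            ++version;          // publish a new value
        }
        cv.notify_all();        // wake every waiting reader
    }
}

void reader(int id)
{
    long seen = 0;
    while (seen < 10) {
        std::unique_lock<std::mutex> lock(m);
        cv.wait(lock, [&] { return version > seen; }); // wait for a newer p
        seen = version;
        std::cout << "reader " << id << " saw p = " << p << "\n";
    }
}

int main()
{
    std::thread r1(reader, 1), r2(reader, 2), w(writer);
    w.join();
    r1.join();
    r2.join();
}
A slow reader may skip intermediate values and only see the latest p; if every value must be processed, the usual answer is a queue per reader.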
The following code demonstrates how the producer/consumer pattern can be implemented with future and promise.
There are two promise variables, used by a producer and a consumer thread. Each thread resets one of the two promise variables and waits for the other one.
#include <iostream>
#include <future>
#include <thread>
using namespace std;

// produces integers from 0 to 99
void producer(promise<int>& dataready, promise<void>& consumed)
{
    for (int i = 0; i < 100; ++i) {
        // do some work here ...
        consumed = promise<void>{};    // reset
        dataready.set_value(i);        // make data available
        consumed.get_future().wait();  // wait for the data to be consumed
    }
    dataready.set_value(-1);           // no more data
}

// consumes integers
void consumer(promise<int>& dataready, promise<void>& consumed)
{
    for (;;) {
        int n = dataready.get_future().get();  // wait for data ready
        if (n >= 0) {
            std::cout << n << ",";
            dataready = promise<int>{};        // reset
            consumed.set_value();              // mark data as consumed
            // do some work here ...
        }
        else
            break;
    }
}

int main(int argc, const char* argv[])
{
    promise<int> dataready{};
    promise<void> consumed{};

    thread th1([&] { producer(dataready, consumed); });
    thread th2([&] { consumer(dataready, consumed); });

    th1.join();
    th2.join();
    std::cout << "\n";

    return 0;
}