I am creating an asynchronous class that logs strings to a file. Should I be creating the thread within the class itself? I was thinking of something like this as a start function:
void Async_Log::start (void)
{
    std::thread thread_log(
        [&]()
        {
            std::ofstream fout;
            fout.open(fileName);
            while(true)
            {
                if(q.size())
                {
                    std::lock_guard<std::mutex> lock(m);
                    fout << q.front() << "\t At Time: " << std::clock() << std::endl;
                    q.pop();
                }
            }
            fout.close();
        });
}
Or would it be better to leave the threading up to main? My first concern is whether the thread is unique per instance (if I instantiate the class twice with two different files, will thread_log be overwritten, or will the two threads conflict?).
There is nothing wrong with having a dedicated thread in the class, but I want to note several things:
Inside your thread you implement busy waiting for log messages. This is completely redundant and very expensive! Your thread consumes CPU even when there are no messages in the queue. What you need there is a blocking queue, one that blocks in its pop() method. You can find implementations of a blocking queue for C++ here or here.
You need to provide a way to terminate your logging thread. You can do this either by having a 'terminate' variable that you check in the loop, or by sending a special 'poison pill' message to the logger. A rough sketch combining both notes follows below.
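For illustration only, here is a minimal sketch of such a logger. The class name, the thread being started in the constructor, and the use of an empty string as the poison pill are my assumptions, not your actual interface:

#include <condition_variable>
#include <ctime>
#include <fstream>
#include <mutex>
#include <queue>
#include <string>
#include <thread>

class Async_Log
{
public:
    explicit Async_Log(std::string file) : fileName(std::move(file))
    {
        worker = std::thread([this] { run(); });
    }

    ~Async_Log()
    {
        push("");          // poison pill: an empty message means "stop" (assumption)
        worker.join();
    }

    void push(std::string msg)
    {
        {
            std::lock_guard<std::mutex> lock(m);
            q.push(std::move(msg));
        }
        cv.notify_one();   // wake the logging thread only when there is work
    }

private:
    void run()
    {
        std::ofstream fout(fileName);
        while (true)
        {
            std::unique_lock<std::mutex> lock(m);
            cv.wait(lock, [this] { return !q.empty(); });  // block instead of busy waiting
            std::string msg = std::move(q.front());
            q.pop();
            lock.unlock();

            if (msg.empty())                               // poison pill received, terminate
                break;
            fout << msg << "\t At Time: " << std::clock() << std::endl;
        }
    }

    std::string fileName;
    std::queue<std::string> q;
    std::mutex m;
    std::condition_variable cv;
    std::thread worker;
};

Because the queue, mutex, condition variable, and thread are all members, each instance owns its own logging thread and file, so two Async_Log objects writing to two different files will not conflict.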
Related
I am trying to get the following scenario to work, but have not been successful so far.
I have 2 threads, a worker (that writes) and a reader.
The worker continuously modifies the values of a class "someClassToModify".
The Reader makes a read access every x seconds (unknown) and reads the current state of the class someClassToModify.
How do I make sure that the Reader reads someClassToModify immediately, without delay, when it "wants" to, and that the Worker continues immediately afterwards?
What is important here is that the reader always gets immediate access when it needs to take a "snapshot" of someClassToModify.
At the moment, the Writer always seems to be "faster", and sometimes several more writes happen before it is the reader's turn. That is, the reader then does not get the value it actually wanted.
Example:
Work
Work
Work
Work
Read
Work
Work
Read
Work
Work
Work
Work
Read
....
SomeClass someClassToModify; // class to modify and read

std::thread Worker([this] {
    while (true) {
        // work work work (write)
        // modify someClassToModify
    }
});
Worker.detach();

std::thread Reader([this] {
    while (true) {
        // at an unknown (random) time, read a value from someClassToModify
    }
});
Reader.detach();
Thanks for your help here.
So, first of all, you should note that threads are scheduled unpredictably. That means the worker can run several times before the reader even gets a chance to do anything.
The second thing is that the reader can enter a function, get partway through it (say, to its third line), and then a context switch happens. You want to avoid that.
The way to avoid that is by using a lock or a semaphore.
That means your someClassToModify read method should hold a lock while reading, so the state cannot change under it mid-read. It doesn't need anything special beyond that, since the reader sleeps afterwards for X milliseconds, and during that time the writer can work.
You should look at std::mutex (std::lock itself is for locking several mutexes at once).
Basically you would want something like this; wrap it inside your class:
#include <iostream>
#include <thread>
#include <mutex>
#include <chrono>

int i = 0;
int x = 1000;

// this is the mutex used for waiting
std::mutex myLock;

void read() {
    while (true) {
        // take the lock so the writer cannot touch i while we read it
        myLock.lock();
        std::cout << i << std::endl;
        std::cout << "reader" << std::endl;
        // once finished, unlock it
        myLock.unlock();
        // wait for x milliseconds
        std::this_thread::sleep_for(std::chrono::milliseconds(x));
    }
}

void write() {
    while (true) {
        // the writer also takes the lock so you don't end up with bad values of i
        myLock.lock();
        i++;
        std::cout << "writer" << std::endl;
        myLock.unlock();
    }
}

int main() {
    std::thread th1(read);
    std::thread th2(write);
    th1.join();
    th2.join();
    return 0;
}
I have tried to find a similar problem but was unable to, or my knowledge is not enough to recognize the similarity.
I have a main loop that creates an object, and this object has an infinite loop that processes a matrix and does stuff with it. I call this process function in a separate thread and detach it, so it can process the matrix multiple times while the main loop might just wait for something and do nothing.
After a while the main loop receives a new matrix (represented here by simply creating a new one) and passes it into the object. The idea is that, because the infinite loop waits a little while before processing again, the update function gets a chance to lock the mutex, so the mutex is not held (almost) all the time.
Below I tried to code a minimal example.
class Test
{
public:
    Test();
    ~Test();

    void process(){
        while(1){
            boost::mutex::scoped_lock locker(mtx);
            std::cout << "A" << std::endl;
            // do stuff with Matrix
            std::cout << "B" << std::endl;
            mtx.unlock();
            // wait for a few microseconds
        }
    }

    void updateMatrix(matrix MatrixNew){
        boost::mutex::scoped_lock locker(mtx);
        std::cout << "1" << std::endl;
        Matrix = MatrixNew;
        std::cout << "2" << std::endl;
    }

private:
    boost::mutex mtx;
    matrix Matrix;
};

int main(){
    Test test;
    boost::thread thread_;
    thread_ = boost::thread(&Test::process, boost::ref(test));
    thread_.detach();

    while(once_in_a_while){
        matrix MatrixNew;
        test.updateMatrix(MatrixNew);
    }
}
Unfortunately a race condition occurs. process and updateMatrix each perform multiple steps while the mutex is locked, and I print to the console between these steps. I found that the matrix gets messed up and that the letters/numbers appear interleaved instead of consecutively.
Any ideas why this occurs?
Best wishes and thanks in advance
while(1){
    boost::mutex::scoped_lock locker(mtx);
    std::cout << "A" << std::endl;
    // do stuff with Matrix
    std::cout << "B" << std::endl;
    mtx.unlock();
    // wait for a few microseconds
}
Here you manually unlock mtx. Then sometime later the scoped_lock (called locker) also unlocks the mutex in its destructor (which is the point of that class). I don't know exactly what guarantees boost::mutex makes, but unlocking it more times than you locked it can't lead to anything good.
Instead of mtx.unlock(); you presumably want locker.unlock();
Edit: A recommendation here would be to avoid using Boost for this and use standard C++ instead. Threading has been part of the standard since C++11 (8 years!), so presumably all your tools will now support it. Using standardised code/tools gives you better documentation and better help, as they're much more well known. I'm not knocking Boost (a lot of the standard started in Boost), but once something has been absorbed into the standard you should strongly think about using it.
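For illustration, here is a rough sketch of the same pattern in standard C++ only. The matrix type, the sleep interval, and the class layout are placeholders; the point is that the lock is released by the guard going out of scope, never by a manual unlock:

#include <chrono>
#include <iostream>
#include <mutex>
#include <thread>
#include <vector>

using Matrix = std::vector<std::vector<double>>;   // placeholder matrix type

class Test
{
public:
    void process()
    {
        while (true)
        {
            {
                std::lock_guard<std::mutex> locker(mtx);   // locked here...
                std::cout << "A" << std::endl;
                // do stuff with the matrix
                std::cout << "B" << std::endl;
            }                                              // ...unlocked exactly once, here
            std::this_thread::sleep_for(std::chrono::microseconds(100));
        }
    }

    void updateMatrix(const Matrix& matrixNew)
    {
        std::lock_guard<std::mutex> locker(mtx);
        std::cout << "1" << std::endl;
        matrix = matrixNew;
        std::cout << "2" << std::endl;
    }

private:
    std::mutex mtx;
    Matrix matrix;
};

main() could then start it with std::thread t(&Test::process, &test); and join or detach it exactly as before.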
I want to check whether a thread's job has finished in order to call it again and send another parameter to it. The code is something like this:
void SendMassage(double Speed)
{
    Sleep(200);
    cout << "Speed:" << Speed << endl;
}

int main() {
    int Speed_1 = 0;
    thread f(SendMassage, Speed_1);

    for (int i = 0; i < 50; i++)
    {
        Sleep(20);
        if (?)
        {
            another call of thread // If the last thread is done, call it again; otherwise not.
        }
        Speed_1++;
    }
}
How should I do it?
Use, e.g., an atomic flag to indicate that the thread has finished:
std::atomic<bool> finished_flag{false};

void SendMassage(double Speed) {
    Sleep(200);
    cout << "Speed:" << Speed << endl;
    finished_flag = true;
}

int main() {
    int Speed_1 = 0;
    thread f(SendMassage, Speed_1);

    while (Speed_1 < 50) {
        Sleep(20);
        if (finished_flag) {
            f.join();
            finished_flag = false;
            f = std::thread(SendMassage, Speed_1);
        }
        Speed_1++;
    }
    f.join();
}
Working example: https://wandbox.org/permlink/BrEMHFvlInshBy5V
Note that I assumed that, according to your code, you don't want to block when checking whether the thread f has finished. Otherwise, simply call f.join().
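If you are willing to restructure a little, another option (just a sketch) is to run the job through std::async and poll the resulting std::future with a zero timeout; wait_for(0s) returning std::future_status::ready is a non-blocking "is it done yet?" check. I replaced the Windows Sleep() with std::this_thread::sleep_for so the sketch is self-contained:

#include <chrono>
#include <future>
#include <iostream>
#include <thread>

void SendMassage(double Speed)
{
    std::this_thread::sleep_for(std::chrono::milliseconds(200));
    std::cout << "Speed:" << Speed << std::endl;
}

int main()
{
    int Speed_1 = 0;
    auto task = std::async(std::launch::async, SendMassage, Speed_1);

    while (Speed_1 < 50)
    {
        std::this_thread::sleep_for(std::chrono::milliseconds(20));

        // Non-blocking check: is the previous run finished?
        if (task.wait_for(std::chrono::seconds(0)) == std::future_status::ready)
        {
            task.get();                                                   // collect the finished run
            task = std::async(std::launch::async, SendMassage, Speed_1);  // start it again with the new value
        }
        Speed_1++;
    }
    task.get();  // wait for the last run before exiting
}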
If you want to wait until a thread has finished its job without using Sleep, you need to call its join method, like so:
thread t(SendMassage, Speed_1);
t.join();
//Code here will start executing after returning from join
You can read more about it here http://en.cppreference.com/w/cpp/thread/thread/join
About sending another parameter: I think the best way would be to split the work into another function that you call after the thread has been joined. If you need information that is known only inside the function, you could create a class that stores that information in its fields and use it in the function you're threading (a small sketch follows below).
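A small sketch of that last idea. The struct name and its fields are made up for illustration; the point is that everything the thread function needs travels with it as one object:

#include <iostream>
#include <thread>

// Hypothetical bundle of parameters for the thread function
struct MessageJob
{
    double speed;
    int    repeats;
};

void SendMassage(MessageJob job)
{
    for (int i = 0; i < job.repeats; ++i)
        std::cout << "Speed:" << job.speed << std::endl;
}

int main()
{
    MessageJob job{1.5, 3};
    std::thread t(SendMassage, job);   // the whole struct is passed to the thread
    t.join();                          // code after this runs once the thread is done
}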
The simplest way of doing this is just joining the thread. Nothing clever, but...
OK, but why would you then want another thread at all if your main thread spends all its time sleeping anyway? So you are quite surely looking for something cleverer.
I personally like the principle of queues; you could use e.g. a std::deque for this:
Your producer thread puts values in, your consumer thread takes them out. Of course, you need to protect your queue against race conditions via a std::mutex (or by other appropriate means)...
The consumer would run in an endless loop, processing the queue if entries are available, or sleeping if not. Have a look at this response for how to do the waiting...
There is the danger, though, that your queue runs full, so you might define some threshold at which you stop, or at least slow down, producing new values if you discover that your producer is too fast. The queue has another advantage, though: if your producer is too fast, you can have more than one consumer, all serving the same queue (depending on your needs, putting the results back together might take some extra effort to keep the ordering correct).
Admittedly, that's quite some work; it might be worth the effort, or it might be overkill. If a simpler approach already fits your needs, Daniel's answer is fine, too...
I need to parallelize some tasks in a C++ program and am completely new to parallel programming. I've made some progress through internet searches so far, but am a bit stuck now. I'd like to reuse some threads in a loop, but clearly don't know how to do what I'm trying for.
I am acquiring data from two ADC cards on the computer (acquired in parallel), then I need to perform some operations on the collected data (processed in parallel) while collecting the next batch of data. Here is some pseudocode to illustrate
//Acquire some data, wait for all the data to be acquired before proceeding
std::thread acq1(AcquireData, boardHandle1, memoryAddress1a);
std::thread acq2(AcquireData, boardHandle2, memoryAddress2a);
acq1.join();
acq2.join();

while(user doesn't interrupt)
{
    //Process first batch of data while acquiring new data
    std::thread proc1(ProcessData, memoryAddress1a);
    std::thread proc2(ProcessData, memoryAddress2a);
    acq1(AcquireData, boardHandle1, memoryAddress1b);
    acq2(AcquireData, boardHandle2, memoryAddress2b);
    acq1.join();
    acq2.join();
    proc1.join();
    proc2.join();

    /*Proceed in this manner, alternating which memory address
      is written to and being processed until the user interrupts the program.*/
}
That's the main gist of it. The next run of the loop would write to the "a" memory addresses while processing the "b" data and continue to alternate (I can get the code to do that, just took it out to prevent cluttering up the problem).
Anyway, the problem (as I'm sure some people can already tell) is that the second time I try to use acq1 and acq2, the compiler (VS2012) says "IntelliSense: call of an object of a class type without appropriate operator() or conversion functions to pointer-to-function type". Likewise, if I put std::thread in front of acq1 and acq2 again, it says " error C2374: 'acq1' : redefinition; multiple initialization".
So the question is, can I reassign threads to a new task when they have completed their previous task? I always wait for the previous use of the thread to end before calling it again, but I don't know how to reassign the thread, and since it's in a loop, I can't make a new thread each time (or if I could, that seems wasteful and unnecessary, but I could be mistaken).
Thanks in advance
The easiest way is to use a waitable queue of std::function objects. Like this:
#include <iostream>
#include <thread>
#include <mutex>
#include <condition_variable>
#include <queue>
#include <vector>
#include <functional>
#include <chrono>

class ThreadPool
{
public:
    ThreadPool (int threads) : shutdown_ (false)
    {
        // Create the specified number of threads
        threads_.reserve (threads);
        for (int i = 0; i < threads; ++i)
            threads_.emplace_back (std::bind (&ThreadPool::threadEntry, this, i));
    }

    ~ThreadPool ()
    {
        {
            // Unblock any threads and tell them to stop
            std::unique_lock <std::mutex> l (lock_);

            shutdown_ = true;
            condVar_.notify_all();
        }

        // Wait for all threads to stop
        std::cerr << "Joining threads" << std::endl;
        for (auto& thread : threads_)
            thread.join();
    }

    void doJob (std::function <void (void)> func)
    {
        // Place a job on the queue and unblock a thread
        std::unique_lock <std::mutex> l (lock_);

        jobs_.emplace (std::move (func));
        condVar_.notify_one();
    }

protected:
    void threadEntry (int i)
    {
        std::function <void (void)> job;

        while (1)
        {
            {
                std::unique_lock <std::mutex> l (lock_);

                while (! shutdown_ && jobs_.empty())
                    condVar_.wait (l);

                if (jobs_.empty ())
                {
                    // No jobs to do and we are shutting down
                    std::cerr << "Thread " << i << " terminates" << std::endl;
                    return;
                }

                std::cerr << "Thread " << i << " does a job" << std::endl;
                job = std::move (jobs_.front ());
                jobs_.pop();
            }

            // Do the job without holding any locks
            job ();
        }
    }

    std::mutex lock_;
    std::condition_variable condVar_;
    bool shutdown_;
    std::queue <std::function <void (void)>> jobs_;
    std::vector <std::thread> threads_;
};

void silly (int n)
{
    // A silly job for demonstration purposes
    std::cerr << "Sleeping for " << n << " seconds" << std::endl;
    std::this_thread::sleep_for (std::chrono::seconds (n));
}

int main()
{
    // Create two threads
    ThreadPool p (2);

    // Assign them 4 jobs
    p.doJob (std::bind (silly, 1));
    p.doJob (std::bind (silly, 2));
    p.doJob (std::bind (silly, 3));
    p.doJob (std::bind (silly, 4));
}
The std::thread class is designed to execute exactly one task (the one you give it in the constructor) and then end. If you want to do more work, you'll need a new thread. As of C++11, that's all we have. Thread pools didn't make it into the standard. (I'm uncertain what C++14 has to say about them.)
Fortunately, you can easily implement the required logic yourself. Here is the large-scale picture:
Start n worker threads that all do the following:
    Repeat while there is more work to do:
        Grab the next task t (possibly waiting until one becomes ready).
        Process t.
Keep inserting new tasks into the processing queue.
Tell the worker threads that there is nothing more to do.
Wait for the worker threads to finish.
The most difficult part here (which is still fairly easy) is properly designing the work queue. Usually, a synchronized linked list (from the STL) will do for this. Synchronized means that any thread that wishes to manipulate the queue must only do so after it has acquired a std::mutex so to avoid race conditions. If a worker thread finds the list empty, it has to wait until there is some work again. You can use a std::condition_variable for this. Each time a new task is inserted into the queue, the inserting thread notifies a thread that waits on the condition variable and will therefore stop blocking and eventually start processing the new task.
The second not-so-trivial part is how to signal to the worker threads that there is no more work to do. Clearly, you can set some global flag but if a worker is blocked waiting at the queue, it won't realize any time soon. One solution could be to notify_all() threads and have them check the flag each time they are notified. Another option is to insert some distinct “toxic” item into the queue. If a worker encounters this item, it quits itself.
Representing a queue of tasks is straight-forward using your self-defined task objects or simply lambdas.
All of the above are C++11 features. If you are stuck with an earlier version, you'll need to resort to third-party libraries that provide multi-threading for your particular platform.
While none of this is rocket science, it is still easy to get wrong the first time. And unfortunately, concurrency-related bugs are among the most difficult to debug. Starting by spending a few hours reading through the relevant sections of a good book or working through a tutorial can quickly pay off.
This
std::thread acq1(...)
is a call to the constructor, constructing a new object called acq1.
This
acq1(...)
is an application of the () operator to the existing object acq1. Since no such operator is defined for std::thread, the compiler complains.
As far as I know you may not reuse std::threads: you construct and start them, join them, and throw them away.
Well, it depends on whether you consider moving to be reassigning or not. You can move a thread, but you cannot copy it.
The code below will create a new pair of threads each iteration and move them into the place of the old threads. I imagine this should work, because the new thread objects will be temporaries.
while(user doesn't interrupt)
{
    //Process first batch of data while acquiring new data
    std::thread proc1(ProcessData, memoryAddress1a);
    std::thread proc2(ProcessData, memoryAddress2a);
    acq1 = std::thread(AcquireData, boardHandle1, memoryAddress1b);
    acq2 = std::thread(AcquireData, boardHandle2, memoryAddress2b);
    acq1.join();
    acq2.join();
    proc1.join();
    proc2.join();

    /*Proceed in this manner, alternating which memory address
      is written to and being processed until the user interrupts the program.*/
}
What's going on is that the object does not actually end its lifetime at the end of the iteration, because it is declared in the scope outside the loop. But a new object gets created each time and a move takes place. I don't see what can be saved here, so I imagine this is effectively the same as declaring the acqs inside the loop and simply reusing the name. All in all... yes, it comes down to whether you count "create a temporary and move it" as reuse.
Also, this clearly starts a new thread each loop (after the previously assigned thread has ended); it doesn't make a thread wait for new data and magically feed it to the processing pipe. You would need to implement that differently, e.g. with a pool of worker threads and communication over queues.
References: operator=, (ctor).
I think the errors you get are self-explanatory, so I'll skip explaining them.
I think you need a much simpler answer for running a set of threads more than once; this is the best solution:
int q = 0;
do {
    std::vector<std::thread> thread_vector;
    for (int i = 0; i < nworkers; i++)
    {
        thread_vector.push_back(std::thread(yourFunction, Parameter1, Parameter2, ...));
    }
    for (std::thread& it : thread_vector)
    {
        it.join();
    }
    q++;
} while (q < NTIMES);
You could also make your own Thread class and call its run method, like:
class MyThread
{
public:
    void run(std::function<void()> func) {
        // Note: join() must have been called before run() is called again,
        // otherwise assigning to a still-joinable std::thread terminates the program.
        thread_ = std::thread(func);
    }

    void join() {
        if (thread_.joinable())
            thread_.join();
    }

private:
    std::thread thread_;
};
// Application code... AcquireData takes arguments, so bind them in a lambda,
// since run() expects a void() callable.
MyThread myThread;
myThread.run([&] { AcquireData(boardHandle1, memoryAddress1a); });
I am a newcomer to the Boost library, and am trying to implement a simple producer and consumer threads that operate on a shared queue. My example implementation looks like this:
#include <iostream>
#include <deque>
#include <boost/thread.hpp>

boost::mutex mutex;
std::deque<std::string> queue;

void producer()
{
    while (true) {
        boost::lock_guard<boost::mutex> lock(mutex);
        std::cout << "producer() pushing string onto queue" << std::endl;
        queue.push_back(std::string("test"));
    }
}

void consumer()
{
    while (true) {
        boost::lock_guard<boost::mutex> lock(mutex);
        if (!queue.empty()) {
            std::cout << "consumer() popped string " << queue.front() << " from queue" << std::endl;
            queue.pop_front();
        }
    }
}

int main()
{
    boost::thread producer_thread(producer);
    boost::thread consumer_thread(consumer);

    sleep(5);

    producer_thread.detach();
    consumer_thread.detach();

    return 0;
}
This code runs as I expect, but when main exits, I get
/usr/include/boost/thread/pthread/mutex.hpp:45:
boost::mutex::~mutex(): Assertion `!pthread_mutex_destroy(&m)' failed.
consumer() popped string test from queue
Aborted
(I'm not sure if the output from consumer is relevant in that position, but I've left it in.)
Am I doing something wrong in my usage of Boost?
A bit off-topic but relevant imo (...waits for flames in comments).
The consumer model here is very greedy, looping and continually checking for data on the queue. It will be more efficient (waste fewer CPU cycles) if your consumer threads are woken deterministically when data is available, using inter-thread signalling rather than this lock-and-peek loop. Think about it this way: while the queue is empty, this is essentially a tight loop broken only by the need to acquire the lock. Not ideal?
void consumer()
{
    while (true) {
        boost::lock_guard<boost::mutex> lock(mutex);
        if (!queue.empty()) {
            std::cout << "consumer() popped string " << queue.front() << " from queue" << std::endl;
            queue.pop_front();
        }
    }
}
I understand that you are learning but I would not advise use of this in 'real' code. For learning the library though, it's fine. To your credit, this is a more complex example than necessary to understand how to use the lock_guard, so you are aiming high!
Eventually you will most likely build (or better if available, reuse) code for a queue that signals workers when they are required to do work, and you will then use the lock_guard inside your worker threads to mediate accesses to shared data.
You give your threads (producer & consumer) the mutex object and then detach them. They are supposed to run forever. Then you exit from your program and the mutex object is no longer valid. Nevertheless, your threads still try to use it; they don't know that it is no longer valid. If you had compiled with NDEBUG defined, you would have got a coredump instead of the assertion.
Are you trying to write a daemon application, and is that the reason for detaching the threads?
When main exits, all the global objects are destroyed. Your threads, however, do continue to run. You therefore end up with problems because the threads are accessing a deleted object.
Bottom line is that you must terminate the threads before exiting. The only way to do this, though, is to get the main program to wait (by using boost::thread::join) until the threads have finished running. You may want to provide some way of signalling the threads to finish so that you don't wait too long.
The other issue is that your consumer thread continues to run even when there is no data. You might want to wait on a boost::condition_variable until signalled that there is new data. A rough sketch combining both points follows below.
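Here is a rough sketch of how the two suggestions could look together, reusing the names from the question. The done flag, the condition variable, and joining instead of detaching are my additions; the POSIX sleep(5) is kept from the original code:

#include <iostream>
#include <deque>
#include <string>
#include <unistd.h>
#include <boost/thread.hpp>
#include <boost/thread/condition_variable.hpp>

boost::mutex mutex;
boost::condition_variable queue_not_empty;
std::deque<std::string> queue;
bool done = false;  // set by main when the threads should stop

void producer()
{
    while (true) {
        {
            boost::lock_guard<boost::mutex> lock(mutex);
            if (done) return;
            queue.push_back(std::string("test"));
        }
        queue_not_empty.notify_one();  // wake the consumer only when there is data
    }
}

void consumer()
{
    while (true) {
        boost::unique_lock<boost::mutex> lock(mutex);
        // sleep until there is data (or we are told to stop) instead of spinning
        queue_not_empty.wait(lock, [] { return done || !queue.empty(); });
        if (done && queue.empty()) return;
        std::cout << "consumer() popped string " << queue.front() << " from queue" << std::endl;
        queue.pop_front();
    }
}

int main()
{
    boost::thread producer_thread(producer);
    boost::thread consumer_thread(consumer);

    sleep(5);

    {
        boost::lock_guard<boost::mutex> lock(mutex);
        done = true;
    }
    queue_not_empty.notify_all();

    // join instead of detach: both threads finish before the global mutex is destroyed
    producer_thread.join();
    consumer_thread.join();
    return 0;
}

Because main now joins both threads after setting done, they are guaranteed to have stopped before the global mutex is destroyed, which removes the assertion failure at exit.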