Message passing between threads using a command file - c++

This project asks for 4 threads driven by a command file with
instructions such as SEND, RECEIVE and QUIT. When the file says "2
send", the thread in the second place of the array should wake up and
receive its message. How do I make a thread read its message when the
command file has a message for it?

The biggest issue I see with your design is that each thread reads its line independently of all the others. Each thread would then have to check whether the current line is actually meant for it, i.e. starts with the appropriate number, and what should happen if it isn't? Too complicated.
I would split this into one reader thread and a set of worker threads. The reader reads lines from the file and dispatches them to the workers by pushing each line into the target worker's queue, all synchronized with a per-worker mutex and condition variable. The following is implemented in C++11 but should be doable in pthread_* style as well.
#include <thread>
#include <iostream>
#include <queue>
#include <vector>
#include <string>
#include <mutex>
#include <fstream>
#include <sstream>
#include <condition_variable>

class worker {
public:
    void operator()(int n) {
        while(true) {
            std::unique_lock<std::mutex> l(_m);
            // Wait until the reader has actually queued something for us;
            // the predicate also guards against spurious wakeups and a
            // notification that arrived before we started waiting.
            _c.wait(l, [this] { return !_q.empty(); });
            {
                std::unique_lock<std::mutex> ll(_mm);
                std::cerr << "#" << n << " " << _q.front() << std::endl;
            }
            _q.pop();
        }
    }
private:
    std::mutex _m;
    std::condition_variable _c;
    std::queue<std::string> _q;
    // Only needed to synchronize I/O
    static std::mutex _mm;
    // Reader may write into our queue
    friend class reader;
};

std::mutex worker::_mm;

class reader {
public:
    reader(worker & w0, worker & w1, worker & w2, worker & w3) {
        _v.push_back(&w0);
        _v.push_back(&w1);
        _v.push_back(&w2);
        _v.push_back(&w3);
    }
    void operator()() {
        std::ifstream fi("commands.txt");
        std::string s;
        while(std::getline(fi, s)) {
            std::stringstream ss(s);
            int n;
            if((ss >> n >> std::ws) && n >= 0 && static_cast<size_t>(n) < _v.size()) {
                std::string s0;
                if(std::getline(ss, s0)) {
                    std::unique_lock<std::mutex> l(_v[n]->_m);
                    _v[n]->_q.push(s0);
                    _v[n]->_c.notify_one();
                }
            }
        }
        std::cerr << "done" << std::endl;
    }
private:
    std::vector<worker *> _v;
};

int main() {
    worker w0;
    worker w1;
    worker w2;
    worker w3;
    std::thread tw0([&w0]() { w0(0); });
    std::thread tw1([&w1]() { w1(1); });
    std::thread tw2([&w2]() { w2(2); });
    std::thread tw3([&w3]() { w3(3); });
    reader r(w0, w1, w2, w3);
    std::thread tr([&r]() { r(); });
    tr.join();
    // The worker threads loop forever, so these joins never return;
    // good enough to illustrate the idea.
    tw0.join();
    tw1.join();
    tw2.join();
    tw3.join();
}
The example code only reads from "commands.txt" until EOF. I assume you'd like to read continuously, like the "tail -f" command; that, however, is not doable with std::istream.
The code is of course clumsy, but I guess it gives you the idea. One should, for example, add a blocking mechanism for the case where the workers are too slow to process their work, since otherwise the queues may eat up all your precious RAM.
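For that blocking mechanism, one possibility (a minimal sketch under my own naming, not part of the code above; the bounded_queue class and its capacity parameter are made up for illustration) is to give each worker a bounded queue, so a slow worker throttles the reader instead of letting memory grow without bound:

#include <condition_variable>
#include <mutex>
#include <queue>
#include <string>

// Hypothetical bounded queue: push() blocks while the queue is full,
// pop() blocks while it is empty.
class bounded_queue {
public:
    explicit bounded_queue(std::size_t capacity) : _capacity(capacity) {}

    void push(const std::string& s) {
        std::unique_lock<std::mutex> l(_m);
        _not_full.wait(l, [this] { return _q.size() < _capacity; });
        _q.push(s);
        _not_empty.notify_one();
    }

    std::string pop() {
        std::unique_lock<std::mutex> l(_m);
        _not_empty.wait(l, [this] { return !_q.empty(); });
        std::string s = _q.front();
        _q.pop();
        _not_full.notify_one();
        return s;
    }

private:
    std::size_t _capacity;
    std::mutex _m;
    std::condition_variable _not_full;
    std::condition_variable _not_empty;
    std::queue<std::string> _q;
};

The reader would then call push() on the target worker's queue instead of touching its members directly, and the worker's loop would simply call pop().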

Related

Function Objects & multithreading Pool giving same thread ID

For the program below, the thread pool always picks the same thread ID 0x7000095f9000! Why so?
Shouldn't every push's cond.notify_all() wake up all waiting threads at the same time? What could be the reason the same thread ID gets picked?
The computer supports 3 threads.
Any other info on using function objects would be helpful!
Output:
Checking if not empty
Not Empty
0x700009576000 0
Checking if not empty
Checking if not empty
Checking if not empty
Not Empty
0x7000095f9000 1
Checking if not empty
Not Empty
0x7000095f9000 2
Checking if not empty
Not Empty
0x7000095f9000 3
Checking if not empty
Not Empty
0x7000095f9000 4
Checking if not empty
Not Empty
0x7000095f9000 5
Checking if not empty
Code
#include <iostream>
#include <vector>
#include <queue>
#include <thread>
#include <mutex>
#include <functional>
#include <condition_variable>
#include <chrono>

using namespace std;

class TestClass{
public:
    void producer(int i) {
        unique_lock<mutex> lockGuard(mtx);
        Q.push(i);
        cond.notify_all();
    }

    void consumer() {
        {
            unique_lock<mutex> lockGuard(mtx);
            cout << "Checking if not empty" << endl;
            cond.wait(lockGuard, [this]() {
                return !Q.empty();
            });
            cout << "Not Empty" << endl;
            cout << this_thread::get_id() << " " << Q.front() << endl;
            Q.pop();
        }
    };

    void consumerMain() {
        while(1) {
            consumer();
            std::this_thread::sleep_for(chrono::seconds(1));
        }
    }

private:
    mutex mtx;
    condition_variable cond;
    queue<int> Q;
};

int main()
{
    std::vector<std::thread> vecOfThreads;
    std::function<void(TestClass&)> func = [&](TestClass &obj) {
        while(1) {
            obj.consumer();
        }
    };

    unsigned MAX_THREADS = std::thread::hardware_concurrency()-1;
    TestClass obj;
    for(int i=0; i<MAX_THREADS; i++) {
        std::thread th1(func, std::ref(obj));
        vecOfThreads.emplace_back(std::move(th1));
    }
    for(int i=0; i<4*MAX_THREADS/2; i++) {
        obj.producer(i);
    }
    for (std::thread & th : vecOfThreads)
    {
        if (th.joinable())
            th.join();
    }
    return 0;
}
Any other pointers? Thanks in advance!
The very short unlocking of the mutex in the consumer threads will, in your case, most probably let the running thread acquire the lock again, and again, and again.
If you instead simulate some work being done after the workload has been picked from the queue, by calling consumerMain (which sleeps a little) instead of consumer, you will likely see different threads picking up the workload:
while(1) {
    obj.consumerMain();
}
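To make the same effect visible without the one-second sleep, here is a hypothetical variant of consumer (consumerWithWork is a made-up member of TestClass, not part of the question's code) that pops the item under the lock but does the simulated work outside the critical section, which gives the other threads a realistic chance to acquire the mutex:

// hypothetical member of TestClass: hold the lock only while touching the queue
void consumerWithWork() {
    int item;
    {
        unique_lock<mutex> lockGuard(mtx);
        cond.wait(lockGuard, [this]() {
            return !Q.empty();
        });
        item = Q.front();
        Q.pop();
    } // mutex released here
    cout << this_thread::get_id() << " " << item << endl;       // print outside the lock
    std::this_thread::sleep_for(chrono::milliseconds(100));     // simulated work
}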

Concurrent parsing and filtering with unique_lock

I want to read an XML document, filter out anything I don't want (the filtering is not part of the question), and then write the result to cout or an ofstream.
However, I want to do this concurrently, as follows: thread1 (the parser) reads the XML file one line at a time (std::string line in my code) and supplies that line to thread2, whose only job is to filter the line. When it is finished, it hands the line back to thread1, which writes it to the output.
However, my program right now is not safe at all. First of all, sometimes thread2 is able to acquire the mutex first, and I get an error even if I use std::unique_lock locker(mu, std::defer_lock), or even if I make thread2 wait for 100 ms with std::this_thread::sleep_for(100ms). I'm not sure how to make sure that thread1 acquires the mutex first.
A more important problem is that even when thread1 acquires the mutex first, I get an error at the locker.unlock() line of function2, right after the program has read the first line of the document. Searching for this error, it seems I may be trying to unlock a mutex that is already unlocked. Shouldn't the initialization of the unique_lock in function2 lock the mutex?
Finally, I think I might be using the wrong primitives, or the structure of my program might be wrong. Could I have some suggestions to make the code better?
#include <iostream>
#include <string>
#include <fstream>
#include <thread>
#include <mutex>
#include <condition_variable>

std::mutex mu;
std::condition_variable cond;
bool checkThread1 = false;
bool checkThread2 = false;
std::string line;
bool checkIfEndOfFile = false;
//std::once_flag flag;

void function1()
{
    std::ifstream in("example.xml");
    std::unique_lock<std::mutex> locker(mu);
    while (getline(in, line))
    {
        locker.unlock();
        cond.notify_one();
        checkThread2 = !checkThread2;
        cond.wait(locker, []() {return checkThread1; });
    }
}

void function2()
{
    std::string substring = "";
    std::unique_lock<std::mutex> locker(mu);
    cond.wait(locker, []() {return checkThread2; });

    std::string tmp2 = "";
    std::string tmp1;
    for (int i = 0; i < line.length(); i++)
    {
        // Filter
    }
    locker.unlock();
    cond.notify_one();
    checkThread1 = !checkThread1;
}

int main()
{
    std::thread parser(function1);
    std::thread filterer(function2);

    parser.join();
    filterer.join();
}
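For comparison, here is a minimal sketch (not the asker's code; names such as lineReadyForFilter are invented for illustration) of the ping-pong hand-off described in the question, where every cond.wait() uses a predicate and is only ever called on a lock that currently owns the mutex:

#include <condition_variable>
#include <fstream>
#include <iostream>
#include <mutex>
#include <string>
#include <thread>

std::mutex mu;
std::condition_variable cond;
std::string line;
bool lineReadyForFilter = false;
bool done = false;

void parser()
{
    std::ifstream in("example.xml");
    std::string current;
    while (std::getline(in, current))
    {
        std::unique_lock<std::mutex> locker(mu);
        line = current;
        lineReadyForFilter = true;
        cond.notify_one();
        // wait until the filter has handed the (filtered) line back
        cond.wait(locker, [] { return !lineReadyForFilter; });
        std::cout << line << '\n';              // write the filtered line
    }
    std::lock_guard<std::mutex> locker(mu);
    done = true;
    cond.notify_one();
}

void filter()
{
    while (true)
    {
        std::unique_lock<std::mutex> locker(mu);
        cond.wait(locker, [] { return lineReadyForFilter || done; });
        if (done) return;
        // <<filter `line` in place here>>
        lineReadyForFilter = false;
        cond.notify_one();
    }
}

int main()
{
    std::thread t1(parser);
    std::thread t2(filter);
    t1.join();
    t2.join();
}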

Implementing Double Buffering using Futures and Promises in C++11

I started learning multi-threading and came across futures and promises for synchronizing threads over shared resources. So I thought of implementing the classic double-buffering problem (single producer, single consumer) using futures and promises.
The basic methodology I have in mind is:
ProducerThread:
    loop:
        locks_buffer1_mutex
        fills_buffer1
        unlocks_buffer1_mutex
        passes number 1 to Consumer thread using promise.set_value()
        locks_buffer2_mutex
        fills_buffer2
        unlocks_buffer2_mutex
        passes number 2 to Consumer thread using promise.set_value()
        back_to_loop

ConsumerThread:
    loop:
        wait_for_value_from_promise
        switch
            case 1:
                lock_buffer1_mutex
                process(buffer1)
                unlock_buffer1_mutex
                print_values
                break
            case 2:
                lock_buffer2_mutex
                process(buffer2)
                unlock_buffer2_mutex
                print_values
                break
        back_to_loop
Here is the code:
#include <iostream>
#include <thread>
#include <vector>
#include <future>
#include <mutex>
#include <chrono>
#include <algorithm>
#include <iterator>

std::mutex print_mutex;
std::mutex buffer1_mutex;
std::mutex buffer2_mutex;
std::vector<int> buffer1;
std::vector<int> buffer2;
bool notify;

void DataAcquisition(std::promise<int> &p)
{
    std::this_thread::sleep_for(std::chrono::seconds(2));
    while(true)
    {
        {
            std::lock_guard<std::mutex> buff1_lock(buffer1_mutex);
            for(int i=0;i<200;i++)
            {
                buffer1.push_back(i);
            }
        }
        p.set_value(1);
        {
            std::lock_guard<std::mutex> buff2_lock(buffer2_mutex);
            for(int i=0;i<200;i++)
            {
                buffer2.push_back(199-i);
            }
        }
        p.set_value(2);
    }
}

void DataExtraction(std::future<int> &f)
{
    std::vector<int>::const_iterator first,last;
    std::vector<int> new_vector;
    std::ostream_iterator<int> outit(std::cout, " ");
    while(true)
    {
        int i = f.get();
        std::cout << "The value of i is :" << i << std::endl;
        switch(i)
        {
        case 1:
            {
                std::lock_guard<std::mutex> buff1_lock(buffer1_mutex);
                first = buffer1.begin();
                last = first + 10;
            }
            new_vector = std::vector<int>(first,last);
            {
                std::lock_guard<std::mutex> print_lock(print_mutex);
                std::copy(new_vector.begin(),new_vector.end(),outit);
            }
            break;
        case 2:
            {
                std::lock_guard<std::mutex> buff2_lock(buffer2_mutex);
                first = buffer2.begin();
                last = first + 10;
            }
            new_vector = std::vector<int>(first,last);
            {
                std::lock_guard<std::mutex> print_lock(print_mutex);
                std::copy(new_vector.begin(),new_vector.end(),outit);
            }
            break;
        }
    }
}

int main()
{
    std::promise<int> p;
    std::future<int> f = p.get_future();

    std::thread thread1(DataAcquisition,std::ref(p));
    std::thread thread2(DataExtraction,std::ref(f));

    thread1.join();
    thread2.join();

    return 0;
}
When I execute this code, I run into this gigantic problem, which I am fully unaware of:
terminate called after throwing an instance of 'std::future_error' terminate called recursively
what(): 0 1 2 3 4 5 6 7 8 9 Promise already satisfied
Press <RETURN> to close the window
I have googled this error; linking with the -lpthread switch at compile and link time was suggested, but that didn't resolve the issue.
Please help me. Where am I going wrong?
You can't call set_value on a promise more than once, which is illustrated by the following code:
#include <future>

int main() {
    std::promise<int> p;
    p.set_value(1);
    p.set_value(2); // Promise already satisfied
}
You have to look for another approach. For example, you could use two std::condition_variables: set them in the producer and wait on them in the consumer.
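A minimal single-producer/single-consumer sketch of that suggestion (illustrative names, not taken from the question's code) for one of the two buffers; buffer2 would get a second, symmetric flag and condition variable:

#include <condition_variable>
#include <mutex>
#include <vector>

std::mutex buffer1_mutex;
std::condition_variable buffer1_cv;
bool buffer1_full = false;
std::vector<int> buffer1;

void fill_buffer1()                          // producer side
{
    std::unique_lock<std::mutex> l(buffer1_mutex);
    buffer1_cv.wait(l, [] { return !buffer1_full; });   // wait until drained
    for (int i = 0; i < 200; ++i) buffer1.push_back(i);
    buffer1_full = true;
    buffer1_cv.notify_one();                 // wake the consumer
}

void drain_buffer1()                         // consumer side
{
    std::unique_lock<std::mutex> l(buffer1_mutex);
    buffer1_cv.wait(l, [] { return buffer1_full; });    // wait until filled
    buffer1.clear();                         // "process" the data
    buffer1_full = false;
    buffer1_cv.notify_one();                 // wake the producer
}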

Threading in C++

Let's say I want to take input from the user and perform a search in a text file for that input. The search is performed for every character the user inputs. One loop performs the search, and another loop checks whether the user has entered a new character; the second loop restarts the first whenever a new character is given.
Please explain how to do this in C++. I think threads need to be created.
The variables below will be used to maintain the shared values:

static var;
bool change;

The search loop would look like this:

while(!change)
{
    change=false;
    <<do something, like search in file>>
}

The other loop will be like below:

while(1)
{
    if(user enters another char)
    {
        var=new value input by the user;
        change=true;
    }
    else change=false;
}
Thanks!
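A minimal sketch of the two loops described above, using std::atomic flags so the updates made by the input thread are guaranteed to be visible to the search thread (searchLoop, inputLoop and the 10 ms polling interval are illustrative choices, not part of the question):

#include <atomic>
#include <chrono>
#include <iostream>
#include <thread>

std::atomic<bool> change(false);
std::atomic<bool> quit(false);
std::atomic<char> var('\0');

void searchLoop()
{
    while (!quit)
    {
        if (change.exchange(false))          // a new character arrived
        {
            char c = var.load();
            std::cout << "restarting search for '" << c << "'\n";
            // <<do the actual file search for c here, checking `change`
            //   periodically so a newer character can restart it>>
        }
        std::this_thread::sleep_for(std::chrono::milliseconds(10));
    }
}

void inputLoop()
{
    char c;
    while (std::cin >> c)                    // one character per input token
    {
        var = c;
        change = true;                       // signal the search loop to restart
    }
    quit = true;                             // EOF: stop the search thread
}

int main()
{
    std::thread searcher(searchLoop);
    inputLoop();
    searcher.join();
}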
Something like this? I wrote this on ideone and their threads didn't work for me, so I wasn't able to test it, but something close to this should work. Probably a bad example; a thread pool would be best.
#include <iostream>
#include <thread>
#include <atomic>
#include <queue>
#include <functional>
#include <string>
#include <mutex>
#include <chrono>

std::mutex lock;
std::atomic<bool> stop(false);
std::queue<std::function<void()>> jobs;

void One()
{
    while(!stop)
    {
        // inspect the queue only while holding the mutex to avoid a data race
        if (lock.try_lock())
        {
            if (!jobs.empty())
            {
                std::function<void()> job = jobs.front();
                jobs.pop();
                lock.unlock();
                job();
            }
            else
            {
                lock.unlock();
            }
        }
        std::this_thread::sleep_for(std::chrono::milliseconds(100));
    }
}

void Two()
{
    std::string var;
    while(true)
    {
        if (std::cin >> var)
        {
            std::lock_guard<std::mutex> glock(lock);
            jobs.push([] { std::cout << "Task added to the queue..\n"; });
        }
        else
            break;

        std::this_thread::sleep_for(std::chrono::milliseconds(100));
    }
}

int main()
{
    std::thread T(One);
    Two();
    stop = true;
    T.join();
    return 0;
}
Create two threads: one for reading user input, and another for performing the search.
Use a binary semaphore to synchronize the two threads in a consumer-producer manner, i.e., one thread acquires the semaphore and the other thread releases it:
static BinarySemaphore binSem;
static int inputCharacter = 0;

static void ReadUserInput();
static void PerformSearch();

void Run()
{
    BinarySemaphore_Init(&binSem, 0);
    CreateThread(ReadUserInput, LOWER_PRIORITY);
    CreateThread(PerformSearch, HIGHER_PRIORITY);
}

static void ReadUserInput()
{
    while (inputCharacter != '\n')
    {
        inputCharacter = getc(stdin);
        BinarySemaphore_Set(&binSem);
    }
}

static void PerformSearch()
{
    while (inputCharacter != '\n')
    {
        BinarySemaphore_Get(&binSem, WAIT_FOREVER);
        // <<do something, like search in file>>
    }
}
Please note that you need to create the thread which performs the search with a higher priority than the thread which reads user input (as in the code above).
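The BinarySemaphore_* and CreateThread calls above are a pseudo-API standing in for your platform's primitives. A rough C++11 stand-in for the semaphore (a sketch built on a mutex and condition variable; C++20 offers std::binary_semaphore directly) could look like this:

#include <condition_variable>
#include <mutex>

// Minimal binary semaphore: set() releases it, get() blocks until it is released.
class BinarySemaphore
{
public:
    explicit BinarySemaphore(bool available = false) : available_(available) {}

    void set()                              // like BinarySemaphore_Set
    {
        std::lock_guard<std::mutex> l(m_);
        available_ = true;
        cv_.notify_one();
    }

    void get()                              // like BinarySemaphore_Get(..., WAIT_FOREVER)
    {
        std::unique_lock<std::mutex> l(m_);
        cv_.wait(l, [this] { return available_; });
        available_ = false;
    }

private:
    std::mutex m_;
    std::condition_variable cv_;
    bool available_;
};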

boost::asio, thread pools and thread monitoring

I've implemented a thread pool using boost::asio and a number of boost::thread objects calling boost::asio::io_service::run(). However, a requirement I've been given is to have a way to monitor all threads for "health". My intent is to make a simple sentinel object that can be passed through the thread pool; if it makes it through, we can assume the thread is still processing work.
However, given my implementation, I'm not sure how (or if) I can monitor all the threads in the pool reliably. I've simply delegated the thread function to boost::asio::io_service::run(), so posting a sentinel object into the io_service instance won't guarantee which thread actually gets that sentinel and does the work.
One option may be to just periodically insert the sentinel and hope that it gets picked up by each thread at least once in some reasonable amount of time, but that obviously isn't ideal.
Take the following example. Due to the way the handler is coded, in this instance each thread will do the same amount of work, but in reality I will not have control over the handler implementations; some may be long-running while others will be almost immediate.
#include <iostream>
#include <memory>
#include <vector>
#include <boost/asio.hpp>
#include <boost/thread.hpp>
#include <boost/bind.hpp>

void handler()
{
    std::cout << boost::this_thread::get_id() << "\n";
    boost::this_thread::sleep(boost::posix_time::milliseconds(100));
}

int main(int argc, char **argv)
{
    boost::asio::io_service svc(3);
    std::unique_ptr<boost::asio::io_service::work> work(new boost::asio::io_service::work(svc));

    boost::thread one(boost::bind(&boost::asio::io_service::run, &svc));
    boost::thread two(boost::bind(&boost::asio::io_service::run, &svc));
    boost::thread three(boost::bind(&boost::asio::io_service::run, &svc));

    svc.post(handler);
    svc.post(handler);
    svc.post(handler);
    svc.post(handler);
    svc.post(handler);
    svc.post(handler);
    svc.post(handler);
    svc.post(handler);
    svc.post(handler);
    svc.post(handler);

    work.reset();

    three.join();
    two.join();
    one.join();

    return 0;
}
You can use a common io_service instance shared between all the threads and a private io_service instance for every thread. Every thread executes a method like this:
void Mythread::threadLoop()
{
    while(/* termination condition */)
    {
        commonIoService.run_one();
        privateIoService.run_one();
        commonConditionVariable.timed_wait(time);
    }
}
This way, if you want to ensure that some task is executed in a specific thread, you only have to post that task into the thread's own io_service.
To post a task to your thread pool you can do:
void MyThreadPool::post(Handler handler)
{
    commonIoService.post(handler);
    commonConditionVariable.notify_all();
}
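Building on that idea, a hypothetical health check (WorkerContext and checkHealth are illustrative names, not part of the answer's code) could post a sentinel into every thread's private io_service; since only the owning thread services that io_service, a flag that never flips identifies a stuck thread:

#include <atomic>
#include <memory>
#include <vector>
#include <boost/asio.hpp>
#include <boost/thread.hpp>

// One context per pool thread: its private io_service plus a health flag.
struct WorkerContext
{
    boost::asio::io_service privateIoService;
    std::atomic<bool> healthy{true};
};

void checkHealth(std::vector<std::unique_ptr<WorkerContext>>& workers,
                 boost::condition_variable& commonConditionVariable)
{
    for (auto& w : workers)
    {
        WorkerContext* ctx = w.get();
        ctx->healthy = false;
        // only the thread that owns this io_service will ever run the sentinel
        ctx->privateIoService.post([ctx] { ctx->healthy = true; });
    }
    commonConditionVariable.notify_all();   // wake the per-thread loops shown above
    // ...after a grace period, any worker whose healthy flag is still false
    // has not serviced its private io_service and should be reported.
}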
The solution I used relies on the fact that I own the implementation of the thread pool objects. I created a wrapper type that updates statistics and copies the user-defined handlers that are posted to the thread pool. Only this wrapper type is ever posted to the underlying io_service. This method lets me keep track of the handlers that are posted and executed, without having to be intrusive into the user code.
Here's a stripped-down and simplified example:
#include <iostream>
#include <memory>
#include <vector>
#include <map>
#include <functional>
#include <algorithm>
#include <boost/thread.hpp>
#include <boost/asio.hpp>

// Supports scheduling anonymous jobs that are
// executable as returning nothing and taking
// no arguments
typedef std::function<void(void)> functor_type;

// some way to store per-thread statistics
typedef std::map<boost::thread::id, int> thread_jobcount_map;

// only this type is actually posted to
// the asio proactor, this delegates to
// the user functor in operator()
struct handler_wrapper
{
    handler_wrapper(const functor_type& user_functor, thread_jobcount_map& statistics)
        : user_functor_(user_functor)
        , statistics_(statistics)
    {
    }

    void operator()()
    {
        user_functor_();

        // just for illustration purposes, assume a long running job
        boost::this_thread::sleep(boost::posix_time::milliseconds(100));

        // increment executed jobs
        ++statistics_[boost::this_thread::get_id()];
    }

    functor_type         user_functor_;
    thread_jobcount_map& statistics_;
};

// anonymous thread function, just runs the proactor
void thread_func(boost::asio::io_service& proactor)
{
    proactor.run();
}

class ThreadPool
{
public:
    ThreadPool(size_t thread_count)
    {
        threads_.reserve(thread_count);

        work_.reset(new boost::asio::io_service::work(proactor_));

        for(size_t curr = 0; curr < thread_count; ++curr)
        {
            boost::thread th(thread_func, boost::ref(proactor_));

            // inserting into this map before any work can be scheduled
            // on it means that we don't have to lock it for lookups,
            // since we don't dynamically add threads
            thread_jobcount_.insert(std::make_pair(th.get_id(), 0));

            threads_.emplace_back(std::move(th));
        }
    }

    // the only way for a user to get work into
    // the pool is to use this function, which ensures
    // that the handler_wrapper type is used
    void schedule(const functor_type& user_functor)
    {
        handler_wrapper to_execute(user_functor, thread_jobcount_);
        proactor_.post(to_execute);
    }

    void join()
    {
        // join all threads in pool:
        work_.reset();
        proactor_.stop();

        std::for_each(
            threads_.begin(),
            threads_.end(),
            [] (boost::thread& t)
            {
                t.join();
            });
    }

    // just an example showing statistics
    void log()
    {
        std::for_each(
            thread_jobcount_.begin(),
            thread_jobcount_.end(),
            [] (const thread_jobcount_map::value_type& it)
            {
                std::cout << "Thread: " << it.first << " executed " << it.second << " jobs\n";
            });
    }

private:
    std::vector<boost::thread> threads_;
    std::unique_ptr<boost::asio::io_service::work> work_;
    boost::asio::io_service proactor_;
    thread_jobcount_map thread_jobcount_;
};

struct add
{
    add(int lhs, int rhs, int* result)
        : lhs_(lhs)
        , rhs_(rhs)
        , result_(result)
    {
    }

    void operator()()
    {
        *result_ = lhs_ + rhs_;
    }

    int  lhs_, rhs_;
    int* result_;
};

int main(int argc, char **argv)
{
    // some "state objects" that are
    // manipulated by the user functors
    int x = 0, y = 0, z = 0;

    // pool of three threads
    ThreadPool pool(3);

    // schedule some handlers to do some work
    pool.schedule(add(5, 4, &x));
    pool.schedule(add(2, 2, &y));
    pool.schedule(add(7, 8, &z));

    // give all the handlers time to execute
    boost::this_thread::sleep(boost::posix_time::milliseconds(1000));

    std::cout
        << "x = " << x << "\n"
        << "y = " << y << "\n"
        << "z = " << z << "\n";

    pool.join();
    pool.log();
}
Output:
x = 9
y = 4
z = 15
Thread: 0000000000B25430 executed 1 jobs
Thread: 0000000000B274F0 executed 1 jobs
Thread: 0000000000B27990 executed 1 jobs