Shared container accessed by different threads - c++

I have a concurrent_vector (PPL) container declared as a global variable and shared between two functions/threads.
I want it to be accessed by both threads simultaneously (one for reading and one for writing/resizing). My program (in C++) includes a section where I check whether the container is empty. One thread reports that the buffer is empty while the other doesn't, so it looks as if the two threads are operating on two distinct containers even though I defined only one.
#include "stdafx.h"
#include "ppl.h"
concurrent_vector<dataElm> ResultImage;
int AcquireImages(CameraPtr pCam){
continue_recording = true;
pCam->BeginAcquisition();
int imageCnt = 0;
while (continue_recording == true)
{
ImagePtr _p = pCam->GetNextImage(1000);
imageCnt = imageCnt + 1;
dataElm obj = constructelm(_p, &loc, imageCnt - 1);
ResultImage.push_back(obj);
cout << "is buffer empty? " << ResultImage.empty() << endl;
}
//...
}
void Cam()
{
    //...
    pCam->Init();
    INodeMap& nodeMap = pCam->GetNodeMap();
    result = result | AcquireImages(pCam);
    pCam = nullptr;
    //...
}
void saveImages()
{
    //...
    cout << "ResultImage.empty() = " << ResultImage.empty() << endl;
    if (ResultImage.empty() == false)
    {
        // saving the image
    }
    else
    {
        Sleep(20);
    }
}
int main(int, char**)
{
    std::thread producer(Cam);
    std::thread consumer1(saveImages);
    producer.join();
    consumer1.join();
    return 0;
}
(screenshot of the error message)
Also, do I need to add synchronization primitives even though I'm using concurrent_vector?
I'm new to multi-threading, so I'm sorry if my question seems stupid, and please excuse my English; I'm not a native speaker.

Related

Is this a good method for counting threads in C++?

I've been looking for a reliable way to count the number of threads available for a program to use. I didn't want to use a constant, though, and assume that every system has the same number of available threads. I've devised this method of trying to figure it out. Is it a good method?
#include "pch.h"
#include <iostream>
#include <thread>
using namespace std;
struct list
{
void *data;
list *next;
list(list *x = nullptr)
{
data = x;
next = nullptr;
}
void add()
{
next = new list;
}
};
void sleepo(int xz)
{
for (int x = 0; x < 10000000; x++)
{
xz++;
}
}
int main()
{
int count = 1;
list *iterator = new list;
cout << "Attmepting to count threads..." << endl;
while (true)
{
try
{
iterator->data = new thread(sleepo, count);
iterator->add();
iterator = iterator->next;
count++;
}
catch(system_error)
{
break;
}
}
cout << "There are " << count << " threads." << endl;
}
No, there is no standard way to count the number of threads that have been started by a program. Neither in the C++ standard, nor for example in the POSIX standard.
I've devised this method of trying to figure it out. Is it a good method?
If you are in control of the creation of every thread, then that would work just fine. But it won't work if, for example, you use a library that also creates threads, unless you can somehow inject code that increments your counter.
In order to know the current number of threads instead of the total number of started threads, you would need to decrement the counter each time you join.
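For illustration, here is a minimal sketch of that bookkeeping, assuming you control every thread you start (the run_counted wrapper and the live_threads counter are made-up names, not a standard facility):

    #include <atomic>
    #include <iostream>
    #include <thread>
    #include <utility>
    #include <vector>

    std::atomic<int> live_threads{0};   // threads started but not yet joined

    // Hypothetical helper: bumps the counter whenever a thread is created.
    template <class F, class... Args>
    std::thread run_counted(F&& f, Args&&... args)
    {
        ++live_threads;
        return std::thread(std::forward<F>(f), std::forward<Args>(args)...);
    }

    int main()
    {
        std::vector<std::thread> pool;
        for (int i = 0; i < 4; ++i)
            pool.push_back(run_counted([] { /* do some work */ }));

        std::cout << "currently live: " << live_threads << '\n';

        for (auto& t : pool) {
            t.join();
            --live_threads;              // decrement on join, as described above
        }
        std::cout << "after joining: " << live_threads << '\n';
    }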

How to apply a concurrent solution to a Producer-Consumer like situation

I have an XML file with a sequence of nodes. Each node represents an element that I need to parse and add to a sorted list (the order must be the same as that of the nodes in the file).
At the moment I am using a sequential solution:
struct Graphic
{
bool parse()
{
// parsing...
return parse_outcome;
}
};
vector<unique_ptr<Graphic>> graphics;
void producer()
{
for (size_t i = 0; i < N_GRAPHICS; i++)
{
auto g = new Graphic();
if (g->parse())
graphics.emplace_back(g);
else
delete g;
}
}
So a graphic (which is actually an instance of a class derived from Graphic, such as a Line or a Rectangle, hence the new) is added to my data structure only if it can be parsed properly.
Since I only care about the order in which these graphics are added to my list, I thought of calling the parse method asynchronously, so that the producer reads each node from the file and adds the graphic to the data structure, while the consumers parse each graphic whenever a new one is ready to be parsed.
Now I have several consumer threads (created in the main) and my code looks like the following:
queue<pair<Graphic*, size_t>> q;
mutex m;
atomic<size_t> n_elements;
void producer()
{
for (size_t i = 0; i < N_GRAPHICS; i++)
{
auto g = new Graphic();
graphics.emplace_back(g);
q.emplace(make_pair(g, i));
}
n_elements = graphics.size();
}
void consumer()
{
pair<Graphic*, size_t> item;
while (true)
{
{
std::unique_lock<std::mutex> lk(m);
if (n_elements == 0)
return;
n_elements--;
item = q.front();
q.pop();
}
if (!item.first->parse())
{
// here I should remove the item from the vector
assert(graphics[item.second].get() == item.first);
delete item.first;
graphics[item.second] = nullptr;
}
}
}
I run the producer first in my main, so that by the time the first consumer starts the queue is already completely full.
int main()
{
producer();
vector<thread> threads;
for (auto i = 0; i < N_THREADS; i++)
threads.emplace_back(consumer);
for (auto& t : threads)
t.join();
return 0;
}
The concurrent version seems to be at least twice as fast as the original one.
The full code has been uploaded here.
Now I am wondering:
Are there any (synchronization) errors in my code?
Is there a way to achieve the same result faster (or better)?
Also, I noticed that on my computer I get the best result (in terms of elapsed time) if I set the number of threads to 8. More (or fewer) threads give me worse results. Why?
Are there any (synchronization) errors in my code?
There are no synchronization errors, but the memory management could be better: your code leaks if parse() throws an exception.
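As an aside, a minimal way to make the sequential producer from the question exception-safe is to keep each graphic in a std::unique_ptr until it has been parsed. This is only a sketch built on the question's own names (Graphic, graphics, N_GRAPHICS) and assumes C++14 for std::make_unique:

    #include <memory>   // std::make_unique, std::unique_ptr

    void producer()
    {
        for (size_t i = 0; i < N_GRAPHICS; i++)
        {
            auto g = std::make_unique<Graphic>();    // owned on every exit path
            if (g->parse())                          // even if parse() throws, g is released
                graphics.emplace_back(std::move(g)); // hand ownership over to the vector
            // otherwise g goes out of scope and deletes the Graphic automatically
        }
    }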
Is there a way to achieve the same result faster (or better)?
Probably. You could use a simple implementation of a thread pool and a lambda that does the parse() for you.
The code below illustrates this approach. I use the thread pool implementation from
here
#include <iostream>
#include <stdexcept>
#include <vector>
#include <memory>
#include <chrono>
#include <utility>
#include <cassert>
#include <ThreadPool.h>
using namespace std;
using namespace std::chrono;
#define N_GRAPHICS (1000*1000*1)
#define N_THREADS 8
struct Graphic;
using GPtr = std::unique_ptr<Graphic>;
static vector<GPtr> graphics;
struct Graphic
{
Graphic()
: status(false)
{
}
bool parse()
{
// waste time
try
{
throw runtime_error("");
}
catch (runtime_error)
{
}
status = true;
//return false;
return true;
}
bool status;
};
int main()
{
auto start = system_clock::now();
auto producer_unit = []()-> GPtr {
std::unique_ptr<Graphic> g(new Graphic);
if(!g->parse()){
g.reset(); // if g don't parse, return nullptr
}
return g;
};
using ResultPool = std::vector<std::future<GPtr>>;
ResultPool results;
// ThreadPool pool(thread::hardware_concurrency());
ThreadPool pool(N_THREADS);
for(int i = 0; i <N_GRAPHICS; ++i){
// Running async task
results.emplace_back(pool.enqueue(producer_unit));
}
for(auto &t : results){
auto value = t.get();
if(value){
graphics.emplace_back(std::move(value));
}
}
auto duration = duration_cast<milliseconds>(system_clock::now() - start);
cout << "Elapsed: " << duration.count() << endl;
for (size_t i = 0; i < graphics.size(); i++)
{
if (!graphics[i]->status)
{
cerr << "Assertion failed! (" << i << ")" << endl;
break;
}
}
cin.get();
return 0;
}
It is a bit faster (1 s) on my machine, more readable, and removes the need for shared data (synchronization is evil; avoid it or hide it in a reliable and efficient way).

OpenMPI/C++: defining an MPI data type for a class with members of variable length (pointers to malloc'ed memory)?

I am currently learning to use OpenMPI; my aim is to parallelize a simple program whose code I will post below.
The program is for testing my concept of parallelizing a much bigger program; I hope to learn everything I need for my actual problem if I succeed with this.
Basically it is a definition of a simple C++ class for lists. A list consists of two arrays, one of integers and one of doubles. Entries with the same index belong together, in the sense that the integer entry is some kind of list-entry identifier (maybe an object ID) and the double entry is some kind of quantifier (maybe the weight of an object).
The basic purpose of the program is to add lists together (this is the task I want to parallelize). Adding works as follows: for each entry in one list, it is checked whether the same integer entry exists in the other list; if so, the double entry gets added to the double entry in the other list; if there is no such entry in the other list, both the integer and the double entries get appended to the end of the list.
Basically, each summand in this list addition represents a storage, and each entry is a type of object with a given amount (int is the type and double is the amount), so adding two lists means putting the stuff from the second storage into the first.
The order of the list entries is irrelevant, which means that the addition of lists is not only associative but commutative too!
My plan is to add a very large number of such lists (a few billion), so one way to parallelize would be to let each thread add a subset of the lists first and, when this is finished, distribute all the resulting sublists (one per thread) to all of the threads.
My current understanding of OpenMPI is that only the last step (distributing the finished sublists) needs anything special or non-standard. Basically I need an Allreduce, but with a custom data type and a custom operation.
The first problem I have is understanding how to create a fitting MPI data type. I came to the conclusion that I probably need MPI_Type_create_struct to create a struct type.
I found this site with a nice example: http://mpi.deino.net/mpi_functions/MPI_Type_create_struct.html
from which I learned a lot, but the problem is that in this case there are fixed-size member arrays. In my case I have lists with arbitrarily sized member variables, or rather with pointers pointing to memory blocks of arbitrary size. So doing it like in the example would lead to creating a new MPI datatype for each list size (using fixed-size lists could help, but only in this minimalistic case; I want to learn how to do it with arbitrarily sized lists as preparation for my actual problem).
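Just to make clear what I mean by the fixed-size case, here is a rough sketch of how the linked example's approach would look if my list had compile-time fixed-length arrays (FixedList and LIST_N are invented purely for illustration; this is exactly what I cannot assume in my real problem, because it only works when the arrays live inside the struct rather than behind pointers):

    #include <mpi.h>

    const int LIST_N = 5;           // invented fixed length

    struct FixedList {
        int    n;
        int    ilist[LIST_N];       // arrays inside the struct, not malloc'ed
        double dlist[LIST_N];
    };

    MPI_Datatype make_fixedlist_type()
    {
        FixedList probe;
        int          blocklens[3] = { 1, LIST_N, LIST_N };
        MPI_Datatype types[3]     = { MPI_INT, MPI_INT, MPI_DOUBLE };
        MPI_Aint     displs[3], base;

        MPI_Get_address(&probe,          &base);
        MPI_Get_address(&probe.n,        &displs[0]);
        MPI_Get_address(&probe.ilist[0], &displs[1]);
        MPI_Get_address(&probe.dlist[0], &displs[2]);
        for (int i = 0; i < 3; i++)
            displs[i] -= base;      // displacements relative to the struct start

        MPI_Datatype t;
        MPI_Type_create_struct(3, blocklens, displs, types, &t);
        MPI_Type_commit(&t);
        return t;
    }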
So my question is: how do I create a data type for this special case? What is the best way?
I even thought about writing some non-MPI code to serialize my class/object to a single block of bits (which would be a lot of work for my real problem, but in this example it should be easy). Then I could simply use an MPI function to distribute those blocks to all threads, translate them back into the actual objects, and then let each thread simply add the "number-of-threads" lists together to obtain the same fully reduced list on all threads (because the operation is commutative, it does not matter whether the order is the same on each thread in the end).
The problem is that I do not know which MPI function to use to distribute such memory blocks to each thread so that in the end every thread has an array of "number-of-threads" such blocks (similar to an Allreduce, but with blocks).
But that's just another idea; I would like to hear from you what the best way is.
Thank you. Here is my fully working example program (ignore the MPI parts, that's just preparation; you can simply compile it with g++).
As you can see, I needed to write a custom copy constructor and assignment operator because the compiler-generated ones would not work with the pointer members. I hope that's not a problem for MPI?
#include <iostream>
#include <cstdlib>
#if (CFG_MPI > 0)
#include <mpi.h>
#else
#define MPI_Barrier(xxx) // dummy code if not parallel
#endif
class list {
private:
int *ilist;
double *dlist;
int n;
public:
list(int n, int *il, double *dl) {
int i;
if (n>0) {
this->ilist = (int*)malloc(n*sizeof(int));
this->dlist = (double*)malloc(n*sizeof(double));
if (!ilist || !dlist) std::cout << "ERROR: malloc in constructor failed!" << std::endl;
} else {
this->ilist = NULL;
this->dlist = NULL;
}
for (i=0; i<n; i++) {
this->ilist[i] = il[i];
this->dlist[i] = dl[i];
}
this->n = n;
}
~list() {
free(ilist);
free(dlist);
ilist = NULL;
dlist = NULL;
this->n=0;
}
list(const list& cp) {
int i;
this->n = cp.n;
this->ilist = NULL;
this->dlist = NULL;
if (this->n > 0) {
this->ilist = (int*)malloc(this->n*sizeof(int));
this->dlist = (double*)malloc(this->n*sizeof(double));
if (!ilist || !dlist) std::cout << "ERROR: malloc in copy constructor failed!" << std::endl;
}
for (i=0; i<this->n; i++) {
this->ilist[i] = cp.ilist[i];
this->dlist[i] = cp.dlist[i];
}
}
list& operator=(const list& cp) {
if(this == &cp) return *this;
this->~list();
int i;
this->n = cp.n;
if (this->n > 0) {
this->ilist = (int*)malloc(this->n*sizeof(int));
this->dlist = (double*)malloc(this->n*sizeof(double));
if (!ilist || !dlist) std::cout << "ERROR: malloc in copy constructor failed!" << std::endl;
} else {
this->ilist = NULL;
this->dlist = NULL;
}
for (i=0; i<this->n; i++) {
this->ilist[i] = cp.ilist[i];
this->dlist[i] = cp.dlist[i];
}
return *this;
}
void print() {
int i;
for (i=0; i<this->n; i++)
std::cout << i << " : " << "[" << this->ilist[i] << " - " << (double)dlist[i] << "]" << std::endl;
}
list& operator+=(const list& cp) {
int i,j;
if(this == &cp) {
for (i=0; i<this->n; i++)
this->dlist[i] *= 2;
return *this;
}
double *dl;
int *il;
il = (int *) realloc(this->ilist, (this->n+cp.n)*sizeof(int));
dl = (double *) realloc(this->dlist, (this->n+cp.n)*sizeof(double));
if (!il || !dl)
std::cout << "ERROR: 1st realloc in operator += failed!" << std::endl;
else {
this->ilist = il;
this->dlist = dl;
il = NULL;
dl = NULL;
}
for (i=0; i<cp.n; i++) {
for (j=0; j<this->n; j++) {
if (this->ilist[j] == cp.ilist[i]) {
this->dlist[j] += cp.dlist[i];
break;
}
} if (j == this->n) {// no matching entry found in this
this->ilist[this->n] = cp.ilist[i];
this->dlist[this->n] = cp.dlist[i];
this->n++;
}
}
il = (int *) realloc(this->ilist, (this->n)*sizeof(int));
dl = (double *) realloc(this->dlist, (this->n)*sizeof(double));
if (!il || !dl)
std::cout << "ERROR: 2nd realloc in operator += failed!" << std::endl;
else {
this->ilist = il;
this->dlist = dl;
}
return *this;
}
};
int main(int argc, char **argv) {
int npe, myid;
#if (CFG_MPI > 0)
MPI_Init(&argc, &argv);
MPI_Comm_size(MPI_COMM_WORLD,&npe);
MPI_Comm_rank(MPI_COMM_WORLD,&myid);
#else
npe=1;
myid=0;
#endif
if (!myid) // reduce output
std::cout << "NPE = " << npe << " MYID = " << myid << std::endl;
int ilist[5] = {14,17,4,29,0};
double dlist[5] = {0.0, 170.0, 0.0, 0.0, 24.523};
int ilist2[6] = {14,117,14,129,0, 34};
double dlist2[6] = {0.5, 170.5, 0.5, 0.5, 24.0, 1.2};
list tlist(5, ilist, dlist);
list tlist2(6, ilist2, dlist2);
if (!myid) {
tlist.print();
tlist2.print();
}
tlist +=tlist2;
if (myid) tlist.print();
#if (CFG_MPI > 0)
MPI_Finalize();
#endif
return 0;
}

C++ Syncing threads in most elegant way

I am trying to solve the following problem. I know there are multiple solutions, but I'm looking for the most elegant way (the least code) to solve it.
I have 4 threads; 3 of them try to write a unique value (0, 1, or 2) to a volatile integer variable in an infinite loop, and the fourth thread tries to read the value of this variable and print it to stdout, also in an infinite loop.
I'd like to synchronize the threads so that the thread that writes 0 runs, then the "print" thread, then the thread that writes 1, then the print thread again, and so on...
So what I finally expect to see in the output of the "print" thread is a sequence of zeros, then a sequence of ones, then twos, then zeros, and so on...
What is the most elegant and easy way to synchronize these threads?
This is the program code:
volatile int value;
int thid[4];
int main() {
HANDLE handle[4];
for (int ii=0;ii<4;ii++) {
thid[ii]=ii;
handle[ii] = (HANDLE) CreateThread( NULL, 0, (LPTHREAD_START_ROUTINE) ThreadProc, &thid[ii], 0, NULL);
}
return 0;
}
void WINAPI ThreadProc( LPVOID param ) {
int h=*((int*)param);
switch (h) {
case 3:
while(true) {
cout << value << endl;
}
break;
default:
while(true) {
// setting a unique value to the volatile variable
value=h;
}
break;
}
}
Your problem can be solved with the producer-consumer pattern.
I got inspired by Wikipedia, so here is the link if you want some more details:
https://en.wikipedia.org/wiki/Producer%E2%80%93consumer_problem
I used a random number generator to produce the value written to the volatile variable, but you can change that part.
Here is the code: it could be improved in terms of style (e.g. using C++11 facilities for the random numbers), but it produces what you expect.
#include <iostream>
#include <sstream>
#include <vector>
#include <stack>
#include <thread>
#include <mutex>
#include <atomic>
#include <condition_variable>
#include <chrono>
#include <stdlib.h> /* srand, rand */
using namespace std;
//random number generation
std::mutex mutRand;//mutex for random number generation (given that the random generator is not thread safe).
int GenerateNumber()
{
std::lock_guard<std::mutex> lk(mutRand);
return rand() % 3;
}
// print function for "thread safe" printing using a stringstream
void print(ostream& s) { cout << s.rdbuf(); cout.flush(); s.clear(); }
// Constants
//
const int num_producers = 3; //the three producers of random numbers
const int num_consumers = 1; //the only consumer
const int producer_delay_to_produce = 10; // in miliseconds
const int consumer_delay_to_consume = 30; // in miliseconds
const int consumer_max_wait_time = 200; // in miliseconds - max time that a consumer can wait for a product to be produced.
const int max_production = 1; // When producers has produced this quantity they will stop to produce
const int max_products = 1; // Maximum number of products that can be stored
//
// Variables
//
atomic<int> num_producers_working(0); // When there's no producer working the consumers will stop, and the program will stop.
stack<int> products; // The products stack, here we will store our products
mutex xmutex; // Our mutex, without this mutex our program will cry
condition_variable is_not_full; // to indicate that our stack is not full between the thread operations
condition_variable is_not_empty; // to indicate that our stack is not empty between the thread operations
//
// Functions
//
// Produce function, producer_id will produce a product
void produce(int producer_id)
{
while (true)
{
unique_lock<mutex> lock(xmutex);
int product;
is_not_full.wait(lock, [] { return products.size() != max_products; });
product = GenerateNumber();
products.push(product);
print(stringstream() << "Producer " << producer_id << " produced " << product << "\n");
is_not_empty.notify_all();
}
}
// Consume function, consumer_id will consume a product
void consume(int consumer_id)
{
while (true)
{
unique_lock<mutex> lock(xmutex);
int product;
if(is_not_empty.wait_for(lock, chrono::milliseconds(consumer_max_wait_time),
[] { return products.size() > 0; }))
{
product = products.top();
products.pop();
print(stringstream() << "Consumer " << consumer_id << " consumed " << product << "\n");
is_not_full.notify_all();
}
}
}
// Producer function, this is the body of a producer thread
void producer(int id)
{
++num_producers_working;
for(int i = 0; i < max_production; ++i)
{
produce(id);
this_thread::sleep_for(chrono::milliseconds(producer_delay_to_produce));
}
print(stringstream() << "Producer " << id << " has exited\n");
--num_producers_working;
}
// Consumer function, this is the body of a consumer thread
void consumer(int id)
{
// Wait until there is any producer working
while(num_producers_working == 0) this_thread::yield();
while(num_producers_working != 0 || products.size() > 0)
{
consume(id);
this_thread::sleep_for(chrono::milliseconds(consumer_delay_to_consume));
}
print(stringstream() << "Consumer " << id << " has exited\n");
}
//
// Main
//
int main()
{
vector<thread> producers_and_consumers;
// Create producers
for(int i = 0; i < num_producers; ++i)
producers_and_consumers.push_back(thread(producer, i));
// Create consumers
for(int i = 0; i < num_consumers; ++i)
producers_and_consumers.push_back(thread(consumer, i));
// Wait for consumers and producers to finish
for(auto& t : producers_and_consumers)
t.join();
return 0;
}
Hope that helps, tell me if you need more info or if you disagree with something :-)
And Good Bastille Day to all French people!
If you want to synchronise the threads, then use a sync object to hold each of the threads in a "ping-pong" or "tick-tock" pattern.
In C++11 you can use condition variables; the example here shows something similar to what you are asking for.
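For example, a minimal sketch of that tick-tock pattern with C++11 condition variables could look like the following (the turn counter and the bounded loop counts are my own additions so that the sketch terminates; the three writer threads and the printer strictly take turns, so the output is 0 1 2 0 1 2 ...):

    #include <condition_variable>
    #include <iostream>
    #include <mutex>
    #include <thread>

    std::mutex m;
    std::condition_variable cv;
    int value = 0;   // the shared variable the printer reads
    int turn  = 0;   // 0, 2, 4: writer 0/1/2 may write; 1, 3, 5: printer may print

    void writer(int id)                 // id is 0, 1 or 2
    {
        for (int i = 0; i < 5; ++i)     // bounded so the sketch terminates
        {
            std::unique_lock<std::mutex> lk(m);
            cv.wait(lk, [id] { return turn == 2 * id; });
            value = id;                 // write this thread's unique value
            turn  = 2 * id + 1;         // hand the turn to the printer
            cv.notify_all();
        }
    }

    void printer()
    {
        for (int i = 0; i < 15; ++i)    // 3 writers * 5 rounds each
        {
            std::unique_lock<std::mutex> lk(m);
            cv.wait(lk, [] { return turn % 2 == 1; });
            std::cout << value << '\n';
            turn = (turn + 1) % 6;      // pass the turn to the next writer
            cv.notify_all();
        }
    }

    int main()
    {
        std::thread t0(writer, 0), t1(writer, 1), t2(writer, 2), tp(printer);
        t0.join(); t1.join(); t2.join(); tp.join();
    }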

One producer, two consumers acting on one 'queue' produced by producer

Preface: I'm new to multithreaded programming, and a little rusty with C++. My requirements are to use one mutex, and two conditions mNotEmpty and mEmpty. I must also create and populate the vectors in the way mentioned below.
I have one producer thread creating a vector of random numbers of size n*2, and two consumers inserting those values into two separate vectors of size n.
I am doing the following in the producer:
Lock the mutex: pthread_mutex_lock(&mMutex1)
Wait for consumer to say vector is empty: pthread_cond_wait(&mEmpty,&mMutex1)
Push back a value into the vector
Signal the consumer that the vector isn't empty anymore: pthread_cond_signal(&mNotEmpty)
Unlock the mutex: pthread_mutex_unlock(&mMutex1)
Return to step 1
In the consumer:
Lock the mutex: pthread_mutex_lock(&mMutex1)
Check to see if the vector is empty, and if so signal the producer: pthread_cond_signal(&mEmpty)
Else insert value into one of two new vectors (depending on which thread) and remove from original vector
Unlock the mutex: pthread_mutex_unlock(&mMutex1)
Return to step 1
What's wrong with my process? I keep getting segmentation faults or infinite loops.
Edit: Here's the code:
void Producer()
{
srand(time(NULL));
for(unsigned int i = 0; i < mTotalNumberOfValues; i++){
pthread_mutex_lock(&mMutex1);
pthread_cond_wait(&mEmpty,&mMutex1);
mGeneratedNumber.push_back((rand() % 100) + 1);
pthread_cond_signal(&mNotEmpty);
pthread_mutex_unlock(&mMutex1);
}
}
void Consumer(const unsigned int index)
{
for(unsigned int i = 0; i < mNumberOfValuesPerVector; i++){
pthread_mutex_lock(&mMutex1);
if(mGeneratedNumber.empty()){
pthread_cond_signal(&mEmpty);
}else{
mThreadVector.at(index).push_back[mGeneratedNumber.at(0)];
mGeneratedNumber.pop_back();
}
pthread_mutex_unlock(&mMutex1);
}
}
I'm not sure I understand the rationale behind the way you're doing things. In the usual consumer-provider idiom, the provider pushes as many items as possible into the channel, waiting only if there is insufficient space in the channel; it doesn't wait for empty. So the usual idiom would be:
provider (to push one item):

    pthread_mutex_lock( &mutex );
    while ( ! spaceAvailable() ) {
        pthread_cond_wait( &spaceAvailableCondition, &mutex );
    }
    pushTheItem();
    pthread_cond_signal( &itemAvailableCondition );
    pthread_mutex_unlock( &mutex );
and on the consumer side, to get an item:
pthread_mutex_lock( &mutex );
while ( ! itemAvailable() ) {
pthread_cond_wait( &itemAvailableCondition, &mutex );
}
getTheItem();
pthread_cond_signal( &spaceAvailableCondition );
pthread_mutex_unlock( &mutex );
Note that for each condition, one side signals, and the other waits. (I don't see any wait in your consumer.) And if there is more than one process on either side, I'd recommend using pthread_cond_broadcast, rather than pthread_cond_signal.
There are a number of other issues in your code. Some of them look more like typos: you should copy/paste actual code to avoid this. Do you really mean to read and pop mGeneratedValues, when you push into mGeneratedNumber, and check whether that is empty? (If you actually do have two different queues, then you're popping from a queue where no one has pushed.) And you don't have any loops waiting for the conditions; you keep iterating through the number of elements you expect (incrementing the counter each time, so you're likely to terminate long before you should). I can't see an infinite loop, but I can readily see an endless wait in pthread_cond_wait in the producer. I don't see a core dump offhand, but what happens when one of the processes terminates (probably the consumer, because it never waits for anything)? If it ends up destroying the mutex or the condition variables, you could get a core dump when another process attempts to use them.
In the producer, call pthread_cond_wait only when the queue is not empty. Otherwise you can get blocked forever due to a race condition.
You might want to consider taking the mutex only after the condition is fulfilled, e.g.
producer()
{
    while true
    {
        waitForEmpty();
        takeMutex();
        produce();
        releaseMutex();
    }
}

consumer()
{
    while true
    {
        waitForNotEmpty();
        takeMutex();
        consume();
        releaseMutex();
    }
}
Here is a solution to a problem similar to yours. In this program the producer produces a number, writes it to an array (the buffer) and to a file, and then updates a status (in a status array) about it. When data appears in the buffer, the consumers start to consume it (read it and write it to their own files) and update the status to record that it has been consumed. When the producer sees that both consumers have consumed the data, it overwrites it with a new value and carries on. For convenience I have restricted the code here to run for 2000 numbers.
// Producer-consumer //
#include <iostream>
#include <fstream>
#include <pthread.h>
#define MAX 100
using namespace std;
int dataCount = 2000;
int buffer_g[100];
int status_g[100];
void *producerFun(void *);
void *consumerFun1(void *);
void *consumerFun2(void *);
pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t dataNotProduced = PTHREAD_COND_INITIALIZER;
pthread_cond_t dataNotConsumed = PTHREAD_COND_INITIALIZER;
int main()
{
for(int i = 0; i < MAX; i++)
status_g[i] = 0;
pthread_t producerThread, consumerThread1, consumerThread2;
int retProducer = pthread_create(&producerThread, NULL, producerFun, NULL);
int retConsumer1 = pthread_create(&consumerThread1, NULL, consumerFun1, NULL);
int retConsumer2 = pthread_create(&consumerThread2, NULL, consumerFun2, NULL);
pthread_join(producerThread, NULL);
pthread_join(consumerThread1, NULL);
pthread_join(consumerThread2, NULL);
return 0;
}
void *producerFun(void *)
{
//file to write produced data by producer
const char *producerFileName = "producer.txt";
ofstream producerFile(producerFileName);
int index = 0, producerCount = 0;
while(1)
{
pthread_mutex_lock(&mutex);
if(index == MAX)
{
index = 0;
}
if(status_g[index] == 0)
{
static int data = 0;
data++;
cout << "Produced: " << data << endl;
buffer_g[index] = data;
producerFile << data << endl;
status_g[index] = 5;
index ++;
producerCount ++;
pthread_cond_broadcast(&dataNotProduced);
}
else
{
cout << ">> Producer is in wait.." << endl;
pthread_cond_wait(&dataNotConsumed, &mutex);
}
pthread_mutex_unlock(&mutex);
if(producerCount == dataCount)
{
producerFile.close();
return NULL;
}
}
}
void *consumerFun1(void *)
{
const char *consumerFileName = "consumer1.txt";
ofstream consumerFile(consumerFileName);
int index = 0, consumerCount = 0;
while(1)
{
pthread_mutex_lock(&mutex);
if(index == MAX)
{
index = 0;
}
if(status_g[index] != 0 && status_g[index] != 2)
{
int data = buffer_g[index];
cout << "Cosumer1 consumed: " << data << endl;
consumerFile << data << endl;
status_g[index] -= 3;
index ++;
consumerCount ++;
pthread_cond_signal(&dataNotConsumed);
}
else
{
cout << "Consumer1 is in wait.." << endl;
pthread_cond_wait(&dataNotProduced, &mutex);
}
pthread_mutex_unlock(&mutex);
if(consumerCount == dataCount)
{
consumerFile.close();
return NULL;
}
}
}
void *consumerFun2(void *)
{
const char *consumerFileName = "consumer2.txt";
ofstream consumerFile(consumerFileName);
int index = 0, consumerCount = 0;
while(1)
{
pthread_mutex_lock(&mutex);
if(index == MAX)
{
index = 0;
}
if(status_g[index] != 0 && status_g[index] != 3)
{
int data = buffer_g[index];
cout << "Consumer2 consumed: " << data << endl;
consumerFile << data << endl;
status_g[index] -= 2;
index ++;
consumerCount ++;
pthread_cond_signal(&dataNotConsumed);
}
else
{
cout << ">> Consumer2 is in wait.." << endl;
pthread_cond_wait(&dataNotProduced, &mutex);
}
pthread_mutex_unlock(&mutex);
if(consumerCount == dataCount)
{
consumerFile.close();
return NULL;
}
}
}
There is only one problem: the producer is not independent when producing, that is, it needs to take the lock on the whole array (buffer) before it produces new data, and if the mutex is locked by a consumer it has to wait for it (and vice versa). I am still trying to work around that.