Synchronize two sensors with different frame rates - C++

I want to synchronize the output of two sensors that work at different frame rates (~80 ms vs ~40 ms) in C++ using threads. The idea is like the producer-consumer problem, but with 2 producers and 1 consumer, and without a buffer because only the latest data matters.
These are the points the solution should cover:
Each sensor reading will be managed by a separate thread.
There will be a main thread that must always take the latest data read from the two sensors and process it.
Reading one sensor should not block reading the other; that is, the reading threads should not share the same mutex.
The main/processing thread should not block the reading threads while it is working. I propose locking the data, making a local copy (faster than processing it in place), unlocking, and processing the copy.
If there is no new data, the main thread should wait for it.
This is a time diagram of the requested functionality.
And this is the pseudocode:
std::mutex mutex1; // Protects sensor 1 data
std::mutex mutex2; // Protects sensor 2 data

void getSensor1(Data& data)
{
    while (true)
    {
        mutex1.lock();
        // Read data from sensor 1
        mutex1.unlock();
        std::this_thread::sleep_for(std::chrono::milliseconds(80 + (rand() % 5)));
    }
}

void getSensor2(Data& data)
{
    while (true)
    {
        mutex2.lock();
        // Read data from sensor 2
        mutex2.unlock();
        std::this_thread::sleep_for(std::chrono::milliseconds(40 + (rand() % 5)));
    }
}

int main()
{
    Data sensor1;
    Data sensor2;
    std::thread threadGetScan(getSensor1, std::ref(sensor1));
    std::thread threadGetFrame(getSensor2, std::ref(sensor2));
    while (true)
    {
        // Wait for new data, lock, copy, unlock and process it
        std::this_thread::sleep_for(std::chrono::milliseconds(100 + (rand() % 25)));
    }
    return 0;
}
Thanks in advance.

Since each sensor is only read from one thread, a mutex around the sensor access itself serves no purpose; you can get rid of that. Where you need thread safety is in the means by which the thread that has read from a sensor passes the data to the thread that consumes it.
Have the thread reading from the sensor use only local variables, or variables accessed only by that thread, for its work of reading the sensor. Once it has the data complete, put that data (or better yet, a pointer to it) into a shared queue that the consuming thread will take it from.
Since you only need to keep the latest data, your queue can have a maximum size of 1, which can simply be a single pointer.
Access to this shared pointer must be thread safe. Normally that would mean a mutex, but since it is just a single pointer, you can use std::atomic instead.
The reading thread could look like this:
using namespace std::chrono_literals; // for the 80ms / 100ms literals

void getData(std::atomic<Data*>& dataptr) {
    while (true) {
        Data* mydata = new Data; // local variable!
        // stuff to put data into mydata
        std::this_thread::sleep_for(80ms);
        // Important! This is the only line that touches dataptr, and it is atomic.
        Data* olddata = std::atomic_exchange(&dataptr, mydata);
        // In case the old data was never consumed, don't leak it.
        if (olddata) delete olddata;
    }
}
And the main thread could look like this:
void main_thread(void) {
    std::atomic<Data*> sensorData1{nullptr}; // must start as null
    std::atomic<Data*> sensorData2{nullptr};
    std::thread sensorThread1(getData, std::ref(sensorData1));
    std::thread sensorThread2(getData, std::ref(sensorData2));
    while (true) {
        std::this_thread::sleep_for(100ms);
        Data* data1 = std::atomic_exchange(&sensorData1, (Data*)nullptr);
        Data* data2 = std::atomic_exchange(&sensorData2, (Data*)nullptr);
        // Use data1 and data2 (either may be null if no new sample arrived)
        delete data1;
        delete data2;
    }
}
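One thing this main_thread does not cover is point 5 of the question (block when there is no new data); it just sleeps and may find both pointers null. If C++20 is available, the same exchange scheme can be extended with std::atomic wait/notify. This is only a sketch of that variant, not part of the answer above:
#include <atomic>
#include <chrono>
#include <thread>

using namespace std::chrono_literals;

struct Data { /* sensor payload */ };

void getData(std::atomic<Data*>& dataptr)
{
    while (true) {
        Data* mydata = new Data;           // filled from the sensor (local to this thread)
        std::this_thread::sleep_for(80ms); // simulate the sensor frame time
        delete dataptr.exchange(mydata);   // publish; free a sample that was never consumed
        dataptr.notify_one();              // C++20: wake the consumer if it is waiting
    }
}

void consume(std::atomic<Data*>& sensorData)
{
    while (true) {
        sensorData.wait(nullptr);          // C++20: block while the slot is still empty
        Data* data = sensorData.exchange(nullptr);
        // use *data ...
        delete data;
    }
}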

After some research, I found a solution that does what I wanted using mutexes and condition variables. Below is the code I propose. Improvements and other suitable solutions are still welcome.
#include <iostream>
#include <thread>
#include <mutex>
#include <condition_variable>
#include <chrono>
#include <cstdlib>
#define SIZE_LOOP 1000

// Struct where the sensor data is synchronized
struct Data
{
    int data1; // Data of sensor 1
    int data2; // Data of sensor 2
};

std::mutex mtx1;             // Mutex to access sensor1 shared data
std::mutex mtx2;             // Mutex to access sensor2 shared data
std::condition_variable cv1; // Condition variable to wait for sensor1 data availability
std::condition_variable cv2; // Condition variable to wait for sensor2 data availability
bool ready1;                 // Flag to indicate sensor1 data is available
bool ready2;                 // Flag to indicate sensor2 data is available

// Function that continuously reads data from sensor 1
void getSensor1(int& data1)
{
    // Initialize flag to data not ready
    ready1 = false;
    // Initial delay
    std::this_thread::sleep_for(std::chrono::milliseconds(2000));
    // Reading loop (i represents an incoming new data item)
    for (int i = 0; i < SIZE_LOOP; i++)
    {
        // Lock data access
        std::unique_lock<std::mutex> lck1(mtx1);
        // Read data
        data1 = i;
        std::cout << "Sensor1 (" << data1 << ")" << std::endl;
        // Set data to ready
        ready1 = true;
        // Notify if the processing thread is waiting
        cv1.notify_one();
        // Unlock data access
        lck1.unlock();
        // Sleep to simulate the frame rate
        std::this_thread::sleep_for(std::chrono::milliseconds(2000 + (rand() % 500)));
    }
}

// Function that continuously reads data from sensor 2
void getSensor2(int& data2)
{
    // Initialize flag to data not ready
    ready2 = false;
    // Initial delay
    std::this_thread::sleep_for(std::chrono::milliseconds(3000));
    // Reading loop (i represents an incoming new data item)
    for (int i = 0; i < SIZE_LOOP; i++)
    {
        // Lock data access
        std::unique_lock<std::mutex> lck2(mtx2);
        // Read data
        data2 = i;
        std::cout << "Sensor2 (" << data2 << ")" << std::endl;
        // Set data to ready
        ready2 = true;
        // Notify if the processing thread is waiting
        cv2.notify_one();
        // Unlock data access
        lck2.unlock();
        // Sleep to simulate the frame rate
        std::this_thread::sleep_for(std::chrono::milliseconds(1000 + (rand() % 500)));
    }
}

// Function that waits until sensor 1 data is ready
void waitSensor1(const int& dataRead1, int& dataProc1)
{
    // Lock data access
    std::unique_lock<std::mutex> lck1(mtx1);
    // Wait for new data
    while (!ready1)
    {
        //std::cout << "Waiting sensor1" << std::endl;
        cv1.wait(lck1);
    }
    //std::cout << "No Waiting sensor1" << std::endl;
    // Make a local copy of the data (decouples reading from processing so they can run in parallel)
    dataProc1 = dataRead1;
    std::cout << "Copying sensor1 (" << dataProc1 << ")" << std::endl;
    // Sleep to simulate the copying load
    std::this_thread::sleep_for(std::chrono::milliseconds(200));
    // Set data flag to not ready
    ready1 = false;
    // Unlock data access
    lck1.unlock();
}

// Function that waits until sensor 2 data is ready
void waitSensor2(const int& dataRead2, int& dataProc2)
{
    // Lock data access
    std::unique_lock<std::mutex> lck2(mtx2);
    // Wait for new data
    while (!ready2)
    {
        //std::cout << "Waiting sensor2" << std::endl;
        cv2.wait(lck2);
    }
    //std::cout << "No Waiting sensor2" << std::endl;
    // Make a local copy of the data (decouples reading from processing so they can run in parallel)
    dataProc2 = dataRead2;
    std::cout << "Copying sensor2 (" << dataProc2 << ")" << std::endl;
    // Sleep to simulate the copying load
    std::this_thread::sleep_for(std::chrono::milliseconds(400));
    // Set data flag to not ready
    ready2 = false;
    // Unlock data access
    lck2.unlock();
}

// Main function
int main()
{
    Data dataRead; // Data read
    Data dataProc; // Data to process
    // Threads that read data from sensors 1 and 2 at some frame rate
    std::thread threadGetSensor1(getSensor1, std::ref(dataRead.data1));
    std::thread threadGetSensor2(getSensor2, std::ref(dataRead.data2));
    // Processing loop
    for (int i = 0; i < SIZE_LOOP; i++)
    {
        // Wait until data from sensors 1 and 2 is ready
        std::thread threadWaitSensor1(waitSensor1, std::ref(dataRead.data1), std::ref(dataProc.data1));
        std::thread threadWaitSensor2(waitSensor2, std::ref(dataRead.data2), std::ref(dataProc.data2));
        // Synchronize data/threads
        threadWaitSensor1.join();
        threadWaitSensor2.join();
        // Process the synchronized data while the sensors keep producing new data
        std::cout << "Init processing (" << dataProc.data1 << "," << dataProc.data2 << ")" << std::endl;
        // Sleep to simulate the processing load
        std::this_thread::sleep_for(std::chrono::milliseconds(10000 + (rand() % 1000)));
        std::cout << "End processing" << std::endl;
    }
    // Join the reading threads before returning (otherwise std::terminate is called)
    threadGetSensor1.join();
    threadGetSensor2.join();
    return 0;
}
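As a possible improvement (not part of the code above), the mutex / condition variable / ready-flag triple that is duplicated for each sensor can be packaged into a small single-slot "latest value" type. This is a sketch with names of my own choosing; each reading thread calls put() and the processing thread calls waitAndTake() once per sensor:
#include <mutex>
#include <condition_variable>

// Single-slot "latest value" channel: put() overwrites, waitAndTake() blocks until fresh data arrives.
template <class T>
class LatestValue
{
public:
    void put(const T& value)
    {
        {
            std::lock_guard<std::mutex> lock(m_mutex);
            m_value = value;   // overwrite: only the latest sample matters
            m_ready = true;
        }
        m_cv.notify_one();
    }

    T waitAndTake()
    {
        std::unique_lock<std::mutex> lock(m_mutex);
        m_cv.wait(lock, [this] { return m_ready; }); // wait only if there is no new data
        m_ready = false;
        return m_value;        // returns a copy, so the reader is not blocked during processing
    }

private:
    std::mutex m_mutex;
    std::condition_variable m_cv;
    T m_value{};
    bool m_ready = false;
};

// Usage sketch: LatestValue<int> sensor1Channel;
// reading thread:    sensor1Channel.put(i);
// processing thread: int data1 = sensor1Channel.waitAndTake();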

Related

Why a waiting thread does not wake up after calling notify?

I am trying to simulate a sensor that outputs data at a certain frame rate while another thread waits until a data item is ready; when it is ready, it copies it locally and processes it.
Sensor sensor(1,1000);
Monitor monitor;
// Function that continuously reads data from sensor
void runSensor()
{
// Initial delay
std::this_thread::sleep_for(std::chrono::milliseconds(2000));
for(int i = 0; i < SIZE_LOOP; i++)
{
monitor.captureData<Sensor>(sensor, &Sensor::captureData);
}
}
// Function that waits until sensor data is ready
void waitSensor()
{
monitor.saveData<Sensor>(sensor, &Sensor::saveData);
}
// Main function
int main()
{
// Thread that reads data from the sensor at some frame rate
std::thread threadRunSensor(runSensor);
// Processing loop
for(int i = 0; i < SIZE_LOOP; i++)
{
// Wait until data from sensor is ready
std::thread threadWaitSensor(waitSensor);
// Wait until data is copied
threadWaitSensor.join();
// Process the synchronized data while the sensor keeps producing new data
std::cout << "Init processing (" << sensor.getData() << /*"," << sensor2.getData() << */")"<< std::endl;
// Sleep to simulate processing load
std::this_thread::sleep_for(std::chrono::milliseconds(10000 + (rand() % 1000)));
//std::this_thread::sleep_for(std::chrono::milliseconds(500));
std::cout << "End processing" << std::endl;
}
return 0;
}
This is the sensor class. It has two methods: one that generates the data and another that copies the data locally.
class Sensor
{
private:
int counter;
int id;
int frameRate;
int dataCaptured;
int dataSaved;
public:
Sensor(int f_id, int f_frameRate)
{
id = f_id;
counter = 0;
frameRate = f_frameRate;
};
~Sensor(){};
void captureData()
{
dataCaptured = counter;
counter ++;
std::cout << "Sensor" << id << " (" << dataCaptured << ")"<< std::endl;
std::this_thread::sleep_for(std::chrono::milliseconds(frameRate + (rand() % 500)));
};
void saveData()
{
dataSaved = dataCaptured;
std::cout << "Copying sensor" << id << " (" << dataSaved << ")"<< std::endl;
std::this_thread::sleep_for(std::chrono::milliseconds(1 + (rand() % 5)));
}
int getData()
{
return dataSaved;
}
};
Then there is a class Monitor that ensures these operations are protected against concurrent access.
#include <iostream>
#include <thread>
#include <mutex>
#include <condition_variable>
#include <chrono>
#include <cstdlib>
#define SIZE_LOOP 1000
class Monitor
{
private:
std::mutex m_mutex;
std::condition_variable m_condVar;
bool m_isReady;
public:
Monitor()
{
init();
};
~Monitor()
{
};
void init()
{
m_isReady = false;
};
template<class T>
void captureData(T& objectCaptured, void (T::* f_captureFunction_p)())
{
// Lock read
std::unique_lock<std::mutex> lock = std::unique_lock<std::mutex>(m_mutex);
(objectCaptured.*f_captureFunction_p)();
m_isReady = true;
m_condVar.notify_one();
lock.unlock();
};
template<class T>
void saveData(T& objectSaved, void(T::*f_saveFunction_p)())
{
std::unique_lock<std::mutex> lock = std::unique_lock<std::mutex>(m_mutex);
while(!m_isReady)
{
m_condVar.wait(lock);
}
(objectSaved.*f_saveFunction_p)();
m_isReady = false;
lock.unlock();
};
};
Can anyone tell me why the waiting thread does not wake up, even though the sensor is notifying on every frame?
The idea is to have two threads with this workflow:
ThreadCapture captures data continuously, notifying ThreadProcessing when each data capture is done.
ThreadCapture must wait before capturing new data only if the current captured data is being copied by ThreadProcessing.
ThreadProcessing waits for newly captured data, makes a local copy, notifies ThreadCapture that the copy is done, and processes the data.
The local copy is made in ThreadProcessing so that ThreadCapture can capture new data while ThreadProcessing is processing.
Finally, I found a solution by adding a waiting step after the capture, to give the save step time to run:
template<class T>
void captureData(T& objectCaptured, void (T::* f_captureFunction_p)())
{
std::unique_lock<std::mutex> lockReady = std::unique_lock<std::mutex>(m_mutexReady, std::defer_lock);
std::unique_lock<std::mutex> lockProcess = std::unique_lock<std::mutex>(m_mutexProcess, std::defer_lock);
// Lock, capture, set data ready flag, unlock and notify
lockReady.lock();
(objectCaptured.*f_captureFunction_p)();
m_isReady = true;
lockReady.unlock();
m_conditionVariable.notify_one();
// Wait while data is ready and it is not being processed
lockReady.lock();
lockProcess.lock();
while(m_isReady && !m_isProcessing)
{
lockProcess.unlock();
m_conditionVariable.wait(lockReady);
lockProcess.lock();
}
lockProcess.unlock();
lockReady.unlock();
};
template<class T>
void saveData(T& objectSaved, void(T::*f_saveFunction_p)())
{
std::unique_lock<std::mutex> lockReady(m_mutexReady, std::defer_lock);
std::unique_lock<std::mutex> lockProcess(m_mutexProcess, std::defer_lock);
// Reset processing
lockProcess.lock();
m_isProcessing = false;
lockProcess.unlock();
// Wait until data is ready
lockReady.lock();
while(!m_isReady)
{
m_conditionVariable.wait(lockReady);
}
// Make a copy of the data, reset ready flag, unlock and notify
(objectSaved.*f_saveFunction_p)();
m_isReady = false;
lockReady.unlock();
m_conditionVariable.notify_one();
// Set processing
lockProcess.lock();
m_isProcessing = true;
lockProcess.unlock();
};
};
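For comparison, a common simpler variant of this handshake uses a single mutex and a single flag. Its semantics differ slightly (the capture thread waits until the previous sample has actually been taken, not only while it is being copied), but it avoids juggling two mutexes with one condition variable. A sketch, not the code above:
#include <mutex>
#include <condition_variable>

class SimpleMonitor
{
private:
    std::mutex m_mutex;
    std::condition_variable m_condVar;
    bool m_isReady = false;
public:
    template<class T>
    void captureData(T& objectCaptured, void (T::*f_captureFunction_p)())
    {
        std::unique_lock<std::mutex> lock(m_mutex);
        // Wait until the previous sample has been consumed
        m_condVar.wait(lock, [this] { return !m_isReady; });
        (objectCaptured.*f_captureFunction_p)();
        m_isReady = true;
        m_condVar.notify_one();
    }
    template<class T>
    void saveData(T& objectSaved, void (T::*f_saveFunction_p)())
    {
        std::unique_lock<std::mutex> lock(m_mutex);
        // Wait until a new sample is available
        m_condVar.wait(lock, [this] { return m_isReady; });
        (objectSaved.*f_saveFunction_p)();
        m_isReady = false;
        m_condVar.notify_one();
    }
};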

C++ program using producer and consumer (mutex locks) on a shared buffer

I have C++ code where I am trying to produce values in one thread and consume them using two other consumer threads. Mutex locks are used to provide synchronization. But the output does not show random progress across the threads: the producer produces 5 items at a time, and one of the consumers then consumes them all at once, not giving the other consumer or the producer a chance before it empties the queue. Any help would be much appreciated. I am also attaching the code and sample output.
// CPP program to demonstrate the given task
#include <iostream>
#include <pthread.h>
#include <queue>
#include <stdlib.h>
#include <unistd.h>
#define MAX 10
using namespace std;
// Declaring global variables
int sum_B = 0, sum_C = 0;
int consumerCount1 = 0;
int consumerCount2 = 0;
// Shared queue
queue<int> Q;
int item=0;
// Function declaration of all required functions
void* producerFun(void*);
void* add_B(void*);
void* add_C(void*);
// Getting the mutex
pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t dataNotProduced =
PTHREAD_COND_INITIALIZER;
pthread_cond_t dataNotConsumed =
PTHREAD_COND_INITIALIZER;
// Function to generate random numbers and
// push them into queue using thread A
void* producerFun(void*)
{
static int producerCount = 0;
// Initialising the seed
srand(time(NULL));
while (1) {
// Getting the lock on queue using mutex
pthread_mutex_lock(&mutex);
if (Q.size() < MAX && item < MAX)
{
// Getting the random number
int num = rand() % 10 + 1;
// Pushing the number into queue
Q.push(num);
producerCount++;
item++;
cout << "Produced: " << num << " item: "<<item<<endl;
pthread_cond_broadcast(&dataNotProduced);
}
// If queue is full, release the lock and return
else if (item == MAX) {
pthread_mutex_unlock(&mutex);
cout<<"\nLeaving";
//sleep(1);
continue;
}
// If some other thread is executing, wait
/*else {
cout << ">> Producer is in wait.." << endl;
pthread_cond_wait(&dataNotConsumed, &mutex);
}*/
// Get the mutex unlocked
pthread_mutex_unlock(&mutex);
//sleep(0.5);
}
}
// Function definition for consumer thread B
void* add_B(void*)
{
while (1) {
// Getting the lock on queue using mutex
pthread_mutex_lock(&mutex);
// Pop only when queue has at least 1 element
if (Q.size() > 0) {
// Get the data from the front of queue
int data = Q.front();
// Add the data to the integer variable
// associated with thread B
sum_B += data;
consumerCount1++;
// Pop the consumed data from queue
Q.pop();
item--;
cout << "B thread consumed: " << data << "cc1: "<<consumerCount1<<endl;
//pthread_cond_signal(&dataNotConsumed);
}
// Check if the numbers consumed by both threads
// have reached the MAX value
/*else if (consumerCount2 + consumerCount1 == MAX) {
pthread_mutex_unlock(&mutex);
return NULL;
}*/
// If some other thread is executing, wait
else {
cout << "B is in wait.." << endl;
pthread_cond_wait(&dataNotProduced, &mutex);
}
// Get the mutex unlocked
pthread_mutex_unlock(&mutex);
//sleep(0.5);
}
}
// Function definition for consumer thread C
void* add_C(void*)
{
while (1) {
// Getting the lock on queue using mutex
pthread_mutex_lock(&mutex);
// Pop only when queue has at least 1 element
if (Q.size() > 0) {
// Get the data from the front of queue
int data = Q.front();
// Add the data to the integer variable
// associated with thread B
sum_C += data;
// Pop the consumed data from queue
Q.pop();
item--;
consumerCount2++;
cout << "C thread consumed: " << data << "cc2: "<<consumerCount2<<endl;
//pthread_cond_signal(&dataNotConsumed);
}
// Check if the numbers consumed by both threads
// have reached the MAX value
/*else if (consumerCount2 + consumerCount1 == MAX)
{
pthread_mutex_unlock(&mutex);
return NULL;
}*/
// If some other thread is executing, wait
else {
cout << ">> C is in wait.." << endl;
// Wait on a condition
pthread_cond_wait(&dataNotProduced, &mutex);
}
// Get the mutex unlocked
pthread_mutex_unlock(&mutex);
//sleep(0.5);
}
}
// Driver code
int main()
{
// Declaring integers used to
// identify the thread in the system
pthread_t producerThread, consumerThread1, consumerThread2;
// Create the threads
// (pthread_create() takes 4 arguments)
int retProducer = pthread_create(&producerThread,
NULL, producerFun, NULL);
int retConsumer1 = pthread_create(&consumerThread1,
NULL, *add_B, NULL);
int retConsumer2 = pthread_create(&consumerThread2,
NULL, *add_C, NULL);
// pthread_join suspends execution of the calling
// thread until the target thread terminates
//if (!retProducer)
pthread_join(producerThread, NULL);
//if (!retConsumer1)
pthread_join(consumerThread1, NULL);
//if (!retConsumer2)
pthread_join(consumerThread2, NULL);
// Checking for the final value of thread
if (sum_C > sum_B)
cout << "Winner is Thread C" << endl;
else if (sum_C < sum_B)
cout << "Winner is Thread B" << endl;
else
cout << "Both has same score" << endl;
return 0;
}

Filling and saving shared buffer between threads

I'm working with an API that retrieves I/Q data. Calling the function bbGetIQ(m_handle, &pkt); fills a buffer. This runs in a thread that loops while the user hasn't input "stop". pkt is a structure, and the buffer used is pkt.iqData = &m_buffer[0];, which is a vector of float. The size of the vector is 5000, and on each loop iteration the buffer is filled with 5000 values.
I want to save the data from the buffer into a file. I was doing it right after the call to bbGetIQ, but writing the file that way is time-consuming: data wasn't retrieved fast enough, so the API dropped data in order to keep filling its own buffer.
Here's what my code looked like :
void Acquisition::recordIQ(){
int cpt = 0;
ofstream myfile;
while(1){
while (keep_running)
{
cpt++;
if(cpt < 2)
myfile.open ("/media/ssd/IQ_Data.txt");
bbGetIQ(m_handle, &pkt); //Retrieve I/Q data
//Writing content of buffer into the file.
for(int i=0; i<m_buffer.size(); i++)
myfile << m_buffer[i] << endl;
}
cpt = 0;
myfile.close();
}
}
Then I tried to write to the file only when we leave the loop:
void Acquisition::recordIQ(){
int cpt = 0;
ofstream myfile;
int next=0;
vector<float> data;
while(1){
while ( keep_running)
{
if(keep_running == false){
myfile.open ("/media/ssd/IQ_Data.txt");
for(int i=0; i<data.size(); i++)
myfile << data[i] << endl;
myfile.close();
break;
}
cpt++;
data.resize(next + m_buffer.size());
bbGetIQ(m_handle, &pkt); //retrieve data
std::copy(m_buffer.begin(), m_buffer.end(), data.begin() + next); //copy content of the buffer into final vector
next += m_buffer.size(); //next index
}
cpt = 0;
}
}
I am no longer getting data loss from the API, but the issue is that I'm limited by the size of the data vector. For example, I can't let it retrieve data all night.
My idea is to use 2 threads. One will retrieve the data and the other will write it to a file. The two threads will share a circular buffer: the first thread fills it and the second reads it and writes the content to a file. As it is a shared buffer, I guess I should use mutexes.
I'm new to multithreading and mutexes, so would this be a good idea? I don't really know where to start or how the consumer thread can read the buffer while the producer fills it. Will locking the buffer while reading cause the API to drop data (because it won't be able to write into the circular buffer)?
EDIT: As I want my record thread to run in the background so I can do other things while it's recording, I detached it; the user can start a recording by setting the condition keep_running to true.
thread t1(&Acquisition::recordIQ, &acq);
t1.detach();
You need to use something like this (https://en.cppreference.com/w/cpp/thread/condition_variable):
globals:
std::mutex m;
std::condition_variable cv;
std::vector<std::vector<float>> datas;
std::ofstream myfile; // output file used by the writing thread
bool keep_running = true, start_running = false;
writing thread:
void writing_thread()
{
myfile.open ("/media/ssd/IQ_Data.txt");
while(1) {
// Wait until there is data to write, or until we are told to stop
std::unique_lock<std::mutex> lk(m);
cv.wait(lk, []{return !keep_running || !datas.empty();});
if (!keep_running) break;
auto d = std::move(datas);
lk.unlock();
for(auto &entry : d) {
for(auto &e : entry)
myfile << e << endl;
}
}
}
sending thread:
void sending_thread() {
while(1) {
{
std::unique_lock<std::mutex> lk(m);
cv.wait(lk, []{return !keep_running || start_running;}); // also wake up when asked to terminate
if (!keep_running) break;
}
bbGetIQ(m_handle, &pkt); //retrieve data
std::vector<float> d = m_buffer;
{
std::lock_guard<std::mutex> lk(m);
if (!keep_running) break;
datas.push_back(std::move(d));
}
cv.notify_one();
}
}
void start() {
{
std::unique_lock<std::mutex> lk(m);
start_running = true;
}
cv.notify_all();
}
void stop() {
{
std::unique_lock<std::mutex> lk(m);
start_running = false;
}
cv.notify_all();
}
void terminate() {
{
std::unique_lock<std::mutex> lk(m);
keep_running = false;
}
cv.notify_all();
thread1.join();
thread2.join();
}
In short:
The sending thread receives data from wherever it comes, locks mutex m and moves the data into the datas storage. Then it uses the condition variable cv to notify waiting threads that there is something to do. The writing thread waits for the condition variable to be signaled, locks mutex m, moves the data from the global datas variable into a local one, then releases the mutex and proceeds to write the just-received data to the file. The key is to keep the mutex locked for the least time possible.
EDIT:
To terminate the whole thing, set keep_running to false, then call cv.notify_all(), then join the threads involved. The order is important: you need to join the threads because the writing thread might still be in the process of writing data.
EDIT 2:
Added a delayed start. Now create two threads: run sending_thread in one and writing_thread in the other. Call start() to enable processing and stop() to pause it.
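The terminate() function above joins thread1 and thread2, which are never shown being created. A minimal usage sketch (the thread variable names are assumed globals matching terminate()) could look like this:
#include <thread>

std::thread thread1, thread2; // assumed globals, used by terminate()

int main()
{
    thread1 = std::thread(writing_thread);
    thread2 = std::thread(sending_thread);

    start();     // begin acquiring and writing data
    // ... acquisition and writing run in the background ...
    stop();      // pause acquisition (both threads stay alive)

    terminate(); // sets keep_running to false, notifies, and joins both threads
    return 0;
}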

condition_variable is not working if use it inside structure

condition_variable is not working if I use it inside a structure. If I have it as a global variable, everything works fine. But I need a condition_variable for each packet, as I don't know when I will receive an answer and I need to wait for it for each packet. What am I doing wrong?
This is console output:
Wait: 416
StopWait: 423
From it I can see that I receive the data and try to unblock the thread after it has started waiting.
Structures
struct Waiting {
bool IsWaiting = false;
mutable std::condition_variable cv;
mutable std::mutex m;
clock_t localCLock = 0;
void Wait() const {
const double ms = Utils::MillisecondsSpent(localCLock);
std::cout << "Wait: " << ms << std::endl;
std::unique_lock<std::mutex> lock(m);
cv.wait(lock, [this] { return IsWaiting; });
}
void StopWait() {
const double ms = Utils::MillisecondsSpent(localCLock);
std::cout << "StopWait: " << ms << std::endl;
std::unique_lock<std::mutex> lock(m);
IsWaiting = true;
cv.notify_all();
}
};
struct Packet
{
Packet() : id(0), waiting(new Waiting) {}
int id;
Waiting* waiting;
};
class Map
{
static Map* instance;
Map();
~Map();
Map(const Map&) = delete;
public:
static Map* Instance() {
if (!instance) instance = new Map;
return instance;
}
std::map<int, Packet> packets;
};
Threads
//Send Thread - called first
while(true){
Packet packet;
packet.id = 1;
//some send packet logic here
...
///
Map::Instance()->packets.insert(std::pair<int, Packet>(p.id, p));
Map::Instance()->packets[id].waiting->Wait(); // thread now locked and never unlocks
const Packet received = Map::Instance()->packets[id];
Map::Instance()->packets.erase(id);
}
//Receive Thread - called second
while(true){
//some receive packet logic here
...
///
const Packet packet = ... // receive a packet data;
Map::Instance()->packets[packet.id] = packet;
Map::Instance()->packets[packet.id].answered = true;
Map::Instance()->packets[packet.id].waiting->StopWait(); // i unlock Send Thread, but it won't work
}
Synchronization issues and memory leaks aside: every Packet you construct allocates its own Waiting, and copying or assigning a Packet only copies the raw pointer. There are many different dangling Waiting objects floating around in memory, and there is no reason that calling StopWait on one will trigger the condition_variable of another.
See the code comments I've added.
while(true){
// *** PACKET A ***
Packet packet;
packet.id = 1;
//*** PACKET B ***
Map::Instance()->packets.insert(std::pair<int, Packet>(p.id, p));
Map::Instance()->packets[id].waiting->Wait();
}
while(true){
// *** PACKET C ***
const Packet packet = ...
//You are overwriting PACKET B with a copy of PACKET C which is PACKET D.
//Don't you mean to find a packet which has the same id as the received packet instead of overwriting it?
Map::Instance()->packets[packet.id] = packet;
Map::Instance()->packets[packet.id].answered = true;
// There's no reason calling StopWait on PACKET D's Waiting object will release PACKET B.
Map::Instance()->packets[packet.id].waiting->StopWait();
}
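A sketch of the fix hinted at in the comments above: keep the Packet (and therefore the Waiting object) that is already stored in the map, update it in place, and call StopWait on that same object. Names follow the question; switching the raw pointer to a std::shared_ptr and the onPacketReceived helper are my assumptions:
#include <memory>

struct Packet
{
    Packet() : id(0), answered(false), waiting(std::make_shared<Waiting>()) {}
    int id;
    bool answered;                      // field used in the question's receive loop
    std::shared_ptr<Waiting> waiting;   // shared, so copies refer to the same Waiting
};

// Receive-thread body (sketch): update the existing map entry instead of overwriting it.
void onPacketReceived(const Packet& received)
{
    // (Access to the map itself still needs its own synchronization, as noted above.)
    auto it = Map::Instance()->packets.find(received.id);
    if (it != Map::Instance()->packets.end())
    {
        // copy any payload fields from 'received' into it->second here ...
        it->second.answered = true;
        it->second.waiting->StopWait(); // wakes the same Waiting that Wait() is blocked on
    }
}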

Boost synchronization

I have NUM_THREADS threads, with the following code in each thread:
/*
Calculate some_value;
*/
// Critical section to accumulate all thresholds
{
boost::mutex::scoped_lock lock(write_mutex);
T += some_value;
num_threads++;
if (num_threads == NUM_THREADS){
T = T/NUM_THREADS;
READY = true;
cond.notify_all();
num_threads = 0;
}
}
//Wait for average threshold to be ready
if (!READY)
{
boost::unique_lock<boost::mutex> lock(wait_mutex);
while (!READY){
cond.wait(lock);
}
}
//End critical section
/*
do_something;
*/
Basically, I want all the threads to wait for the READY signal before continuing. num_threads is set to 0 and READY is false before the threads are created. Once in a while, a deadlock occurs. Can anyone help, please?
All the boost variables are globally declared as follows:
boost::mutex write_mutex;
boost::mutex wait_mutex;
boost::condition cond;
The code has a race condition on the READY flag (which I assume is just a bool variable). What may happen (i.e. one possible variant of thread execution interleaving) is:
Thread T1:                                 Thread T2:
if (!READY)
{
    unique_lock<mutex> lock(wait_mutex);   mutex::scoped_lock lock(write_mutex);
    while (!READY)                         /* ... */
    {                                      READY = true;
        /* !!! */                          cond.notify_all();
        cond.wait(lock);
    }
}
The code testing the READY flag is not synchronized with the code setting it (note the locks are different for these critical sections). And when T1 is in a "hole" between the flag test and waiting at cond, T2 may set the flag and send a signal to cond which T1 may miss.
The simplest solution is to lock the right mutex for the update of READY and condition notification:
/*...*/
T = T/NUM_THREADS;
{
boost::mutex::scoped_lock lock(wait_mutex);
READY = true;
cond.notify_all();
}
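For completeness, here is a minimal sketch of the whole critical section with a single mutex guarding T, num_threads and READY, so the test and the update of the flag can no longer interleave. The names follow the question; the worker body is otherwise unchanged:
boost::mutex mutex;          // one mutex for T, num_threads and READY
boost::condition cond;
int num_threads = 0;
double T = 0.0;
bool READY = false;

void worker_body(double some_value)
{
    boost::mutex::scoped_lock lock(mutex);
    T += some_value;
    num_threads++;
    if (num_threads == NUM_THREADS)
    {
        T = T / NUM_THREADS;
        READY = true;
        cond.notify_all();   // every waiter re-checks READY under the same mutex
    }
    else
    {
        while (!READY)
            cond.wait(lock); // releases the mutex while waiting, re-acquires on wakeup
    }
    // lock released here; continue with do_something
}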
It looks like Boost.Thread's barriers might be what you need.
Here's a working example that averages values provided by several worker threads. Each worker thread uses the same shared barrier (via the accumulator instance) to synchronize with the others.
#include <cstdlib>
#include <iostream>
#include <vector>
#include <boost/bind.hpp>
#include <boost/shared_ptr.hpp>
#include <boost/thread.hpp>
boost::mutex coutMutex;
typedef boost::lock_guard<boost::mutex> LockType;
class Accumulator
{
public:
Accumulator(int count) : barrier_(count), sum_(0), count_(count) {}
void accumulateAndWait(float value)
{
{
// Increment value
LockType lock(mutex_);
sum_ += value;
}
barrier_.wait(); // Wait for the other threads to reach the barrier.
}
void wait() {barrier_.wait();} // Wait on barrier without changing sum.
float sum() {LockType lock(mutex_); return sum_;} // Return current sum
float average() {LockType lock(mutex_); return sum_ / count_;}
// Reset the sum. The barrier is automatically reset when triggered.
void reset() {LockType lock(mutex_); sum_ = 0;}
private:
typedef boost::lock_guard<boost::mutex> LockType;
boost::barrier barrier_;
boost::mutex mutex_;
float sum_;
int count_;
};
/* Posts a value for the accumulator to add and waits for other threads
to do the same. */
void workerFunction(Accumulator& accumulator)
{
// Sleep for a random amount of time before posting value
int randomMilliseconds = std::rand() % 3000;
boost::posix_time::time_duration randomDelay =
boost::posix_time::milliseconds(randomMilliseconds);
boost::this_thread::sleep(randomDelay);
// Post some random value
float value = std::rand() % 100;
{
LockType lock(coutMutex);
std::cout << "Thread " << boost::this_thread::get_id() << " posting "
<< value << " after " << randomMilliseconds << "ms\n";
}
accumulator.accumulateAndWait(value);
float avg = accumulator.average();
// Print a message to indicate this thread is past the barrier.
{
LockType lock(coutMutex);
std::cout << "Thread " << boost::this_thread::get_id() << " unblocked. "
<< "Average = " << avg << "\n" << std::flush;
}
}
int main()
{
int workerThreadCount = 5;
Accumulator accumulator(workerThreadCount);
// Create and launch worker threads
boost::thread_group threadGroup;
for (int i=0; i<workerThreadCount; ++i)
{
threadGroup.create_thread(
boost::bind(&workerFunction, boost::ref(accumulator)));
}
// Wait for all worker threads to finish
threadGroup.join_all();
{
LockType lock(coutMutex);
std::cout << "All worker threads finished\n" << std::flush;
}
/* Pause a bit before exiting, to give worker threads a chance to
print their messages. */
boost::this_thread::sleep(boost::posix_time::seconds(1));
}
I get the following output:
Thread 0x100100f80 posting 72 after 1073ms
Thread 0x100100d30 posting 44 after 1249ms
Thread 0x1001011d0 posting 78 after 1658ms
Thread 0x100100ae0 posting 23 after 1807ms
Thread 0x100101420 posting 9 after 1930ms
Thread 0x100101420 unblocked. Average = 45.2
Thread 0x100100f80 unblocked. Average = 45.2
Thread 0x100100d30 unblocked. Average = 45.2
Thread 0x1001011d0 unblocked. Average = 45.2
Thread 0x100100ae0 unblocked. Average = 45.2
All worker threads finished