Why does a waiting thread not wake up after calling notify? - C++

I am trying to simulate a sensor that outputs data at a certain frame rate while another thread waits for the data to be ready; when it is ready, that thread copies it locally and processes it.
Sensor sensor(1,1000);
Monitor monitor;
// Function that continuously reads data from sensor
void runSensor()
{
// Initial delay
std::this_thread::sleep_for(std::chrono::milliseconds(2000));
for(int i = 0; i < SIZE_LOOP; i++)
{
monitor.captureData<Sensor>(sensor, &Sensor::captureData);
}
}
// Function that waits until sensor data is ready
void waitSensor()
{
monitor.saveData<Sensor>(sensor, &Sensor::saveData);
}
// Main function
int main()
{
// Thread that reads data from the sensor at some frame rate
std::thread threadRunSensor(runSensor);
// Processing loop
for(int i = 0; i < SIZE_LOOP; i++)
{
// Wait until data from sensor is ready
std::thread threadWaitSensor(waitSensor);
// Wait until data is copied
threadWaitSensor.join();
// Process synchronized data while the sensor keeps producing new data
std::cout << "Init processing (" << sensor.getData() << /*"," << sensor2.getData() << */")"<< std::endl;
// Sleep to simulate processing load
std::this_thread::sleep_for(std::chrono::milliseconds(10000 + (rand() % 1000)));
//std::this_thread::sleep_for(std::chrono::milliseconds(500));
std::cout << "End processing" << std::endl;
}
return 0;
}
This is the sensor class. It has two methods: one that generates the data and another that copies the data locally.
class Sensor
{
private:
int counter;
int id;
int frameRate;
int dataCaptured;
int dataSaved;
public:
Sensor(int f_id, int f_frameRate)
{
id = f_id;
counter = 0;
frameRate = f_frameRate;
};
~Sensor(){};
void captureData()
{
dataCaptured = counter;
counter ++;
std::cout << "Sensor" << id << " (" << dataCaptured << ")"<< std::endl;
std::this_thread::sleep_for(std::chrono::milliseconds(frameRate + (rand() % 500)));
};
void saveData()
{
dataSaved = dataCaptured;
std::cout << "Copying sensor" << id << " (" << dataSaved << ")"<< std::endl;
std::this_thread::sleep_for(std::chrono::milliseconds(1 + (rand() % 5)));
}
int getData()
{
return dataSaved;
}
};
Then there is a Monitor class that protects these operations against concurrent access.
#include <iostream>
#include <thread>
#include <mutex>
#include <condition_variable>
#include <chrono>
#include <cstdlib>
#define SIZE_LOOP 1000
class Monitor
{
private:
std::mutex m_mutex;
std::condition_variable m_condVar;
bool m_isReady;
public:
Monitor()
{
init();
};
~Monitor()
{
};
void init()
{
m_isReady = false;
};
template<class T>
void captureData(T& objectCaptured, void (T::* f_captureFunction_p)())
{
// Lock read
std::unique_lock<std::mutex> lock = std::unique_lock<std::mutex>(m_mutex);
(objectCaptured.*f_captureFunction_p)();
m_isReady = true;
m_condVar.notify_one();
lock.unlock();
};
template<class T>
void saveData(T& objectSaved, void(T::*f_saveFunction_p)())
{
std::unique_lock<std::mutex> lock = std::unique_lock<std::mutex>(m_mutex);
while(!m_isReady)
{
m_condVar.wait(lock);
}
(objectSaved.*f_saveFunction_p)();
m_isReady = false;
lock.unlock();
};
};
Can anyone tell me why the waiting thread does not wake up even though the sensor is notifying every frame?
The idea is to have two threads with this workflow:
ThreadCapture captures data continuously, notifying ThreadProcessing when each capture is done.
ThreadCapture must wait before capturing new data only if the currently captured data is being copied by ThreadProcessing.
ThreadProcessing waits for newly captured data, makes a local copy, notifies ThreadCapture that the copy is done, and then processes the data.
The local copy is made on ThreadProcessing so that ThreadCapture can capture new data while ThreadProcessing is processing.
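For reference, this kind of one-slot handshake is often written with a single mutex, one condition variable and a ready flag. The sketch below (a hypothetical Monitor2, not the class used elsewhere in this post) makes the capture thread wait until the previous sample has been copied, which is slightly stricter than the requirement above:
#include <mutex>
#include <condition_variable>

class Monitor2
{
private:
    std::mutex m_mutex;
    std::condition_variable m_condVar;
    bool m_isReady = false;
public:
    // Capture side: wait until the previous sample has been consumed, then capture and notify.
    template<class T>
    void captureData(T& object, void (T::*capture)())
    {
        std::unique_lock<std::mutex> lock(m_mutex);
        m_condVar.wait(lock, [this] { return !m_isReady; });
        (object.*capture)();
        m_isReady = true;
        lock.unlock();
        m_condVar.notify_one();
    }
    // Save side: wait for a fresh sample, copy it, then release the capture thread.
    template<class T>
    void saveData(T& object, void (T::*save)())
    {
        std::unique_lock<std::mutex> lock(m_mutex);
        m_condVar.wait(lock, [this] { return m_isReady; });
        (object.*save)();
        m_isReady = false;
        lock.unlock();
        m_condVar.notify_one();
    }
};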

Finally, I found a solution by adding a waiting step after the capture to give the save enough time:
template<class T>
void captureData(T& objectCaptured, void (T::* f_captureFunction_p)())
{
std::unique_lock<std::mutex> lockReady = std::unique_lock<std::mutex>(m_mutexReady, std::defer_lock);
std::unique_lock<std::mutex> lockProcess = std::unique_lock<std::mutex>(m_mutexProcess, std::defer_lock);
// Lock, capture, set data ready flag, unlock and notify
lockReady.lock();
(objectCaptured.*f_captureFunction_p)();
m_isReady = true;
lockReady.unlock();
m_conditionVariable.notify_one();
// Wait while data is ready and it is not being processed
lockReady.lock();
lockProcess.lock();
while(m_isReady && !m_isProcessing)
{
lockProcess.unlock();
m_conditionVariable.wait(lockReady);
lockProcess.lock();
}
lockProcess.unlock();
lockReady.unlock();
};
template<class T>
void saveData(T& objectSaved, void(T::*f_saveFunction_p)())
{
std::unique_lock<std::mutex> lockReady(m_mutexReady, std::defer_lock);
std::unique_lock<std::mutex> lockProcess(m_mutexProcess, std::defer_lock);
// Reset processing
lockProcess.lock();
m_isProcessing = false;
lockProcess.unlock();
// Wait until data is ready
lockReady.lock();
while(!m_isReady)
{
m_conditionVariable.wait(lockReady);
}
// Make a copy of the data, reset ready flag, unlock and notify
(objectSaved.*f_saveFunction_p)();
m_isReady = false;
lockReady.unlock();
m_conditionVariable.notify_one();
// Set processing
lockProcess.lock();
m_isProcessing = true;
lockProcess.unlock();
};
};

Related

Synchronize two sensors with different frame rate

I want to synchronize the output of two sensors that work at different frame rates (~80 ms vs ~40 ms) in C++ using threads. The idea is like the producer-consumer problem, but with two producers and one consumer, and without a buffer, because only the latest data matters.
These are the points the solution should cover:
Each sensor will be read by its own thread.
There will be a main thread that must always take the latest data read from both sensors and process it.
The reading of one sensor should not block the reading of the other; that is, the reading threads should not share the same mutex.
The main/processing thread should not block the reading threads while it is working. I propose locking the data, making a local copy (faster than processing it in place), unlocking, and then processing the copy.
If there is no new data, the main thread should wait for it.
This is a time diagram of the requested functionality.
And this is the pseudocode:
void getSensor1(Data& data)
{
while (true)
{
mutex1.lock();
//Read data from sensor 1
mutex1.unlock();
std::this_thread::sleep_for(std::chrono::milliseconds(80 + (rand() % 5)));
}
}
void getSensor2(Data& data)
{
while (true)
{
mutex2.lock();
//Read data from sensor 2
mutex2.unlock();
std::this_thread::sleep_for(std::chrono::milliseconds(40 + (rand() % 5)));
}
}
int main()
{
Data sensor1;
Data sensor2;
std::thread threadGetScan(getSensor1, std::ref(sensor1));
std::thread threadGetFrame(getSensor2, std::ref(sensor2));
while(true)
{
// Wait for new data, lock, copy, unlock and process it
std::this_thread::sleep_for(std::chrono::milliseconds(100 + (rand() % 25)));
}
return 0;
}
Thanks in advance.
Since each sensor is only read from one thread, a mutex around the sensor access serves no purpose. You can get rid of it. Where you need thread safety is in the means by which the thread that has read from a sensor passes data to the thread that consumes it.
Have the thread reading from the sensor use only local variables, or variables accessed only by that thread, for the work of reading the sensor. Once it has the data complete, put that data (or better yet, a pointer to the data) into a shared queue that the consuming thread will take it from.
Since you need to keep only the latest data, your queue can have a max size of 1, which can just be a pointer.
Access to this shared data structure should be protected with a mutex, but since it is just a single pointer, you can use std::atomic.
The reading thread could look like this:
// Note: this assumes the surrounding file includes <atomic>, <chrono> and <thread>
// and has `using namespace std::chrono_literals;` for the 80ms literal.
void getData(std::atomic<Data*>& dataptr) {
while (true) {
Data* mydata = new Data; // local variable!
// stuff to put data into mydata
std::this_thread::sleep_for(80ms);
// Important! This is the only line that touches dataptr, and the exchange is atomic.
Data* olddata = std::atomic_exchange(&dataptr, mydata);
// In case the old data was never consumed, don't leak it.
if (olddata) delete olddata;
}
}
And the main thread could look like this:
void main_thread(void) {
std::atomic<Data*> sensorData1{nullptr}; // initialize to null so the first exchange never deletes garbage
std::atomic<Data*> sensorData2{nullptr};
std::thread sensorThread1(getData, std::ref(sensorData1));
std::thread sensorThread2(getData, std::ref(sensorData2));
while (true) {
std::this_thread::sleep_for(100ms);
Data* data1 = std::atomic_exchange(&sensorData1, (Data*)nullptr);
Data* data2 = std::atomic_exchange(&sensorData2, (Data*)nullptr);
// Use data1 and data2
delete data1;
delete data2;
}
}
After some research work, I found a solution that does what I wanted using mutexes and condition variables. The code I propose is below. Improvements and other suitable solutions are still welcome.
#include <iostream>
#include <thread>
#include <mutex>
#include <condition_variable>
#include <chrono>
#include <cstdlib>
#define SIZE_LOOP 1000
// Struct where the data sensors is synchronized
struct Data
{
int data1; // Data of sensor 1
int data2; // Data of sensor 2
};
std::mutex mtx1; // Mutex to access sensor1 shared data
std::mutex mtx2; // Mutex to access sensor2 shared data
std::condition_variable cv1; // Condition variable to wait for sensor1 data availability
std::condition_variable cv2; // Condition variable to wait for sensor2 data availability
bool ready1; // Flag to indicate sensor1 data is available
bool ready2; // Flag to indicate sensor2 data is available
// Function that continuously reads data from sensor 1
void getSensor1(int& data1)
{
// Initialize flag to data not ready
ready1 = false;
// Initial delay
std::this_thread::sleep_for(std::chrono::milliseconds(2000));
// Reading loop (i represents an incoming new data)
for(int i = 0; i < SIZE_LOOP; i++)
{
// Lock data access
std::unique_lock<std::mutex> lck1(mtx1);
// Read data
data1 = i;
std::cout << "Sensor1 (" << data1 << ")"<< std::endl;
// Set data to ready
ready1 = true;
// Notify if processing thread is waiting
cv1.notify_one();
// Unlock data access
lck1.unlock();
// Sleep to simulate frame rate
std::this_thread::sleep_for(std::chrono::milliseconds(2000 + (rand() % 500)));
}
}
// Function that continuously reads data from sensor 2
void getSensor2(int& data2)
{
// Initialize flag to data not ready
ready2 = false;
// Initial delay
std::this_thread::sleep_for(std::chrono::milliseconds(3000));
// Reading loop (i represents an incoming new data)
for(int i = 0; i < SIZE_LOOP; i++)
{
// Lock data access
std::unique_lock<std::mutex> lck2(mtx2);
// Read data
data2 = i;
std::cout << "Sensor2 (" << data2 << ")"<< std::endl;
// Set data to ready
ready2 = true;
// Notify if processing thread is waiting
cv2.notify_one();
// Unlock data access
lck2.unlock();
// Sleep to simulate frame rate
std::this_thread::sleep_for(std::chrono::milliseconds(1000 + (rand() % 500)));
}
}
// Function that waits until sensor 1 data is ready
void waitSensor1(const int& dataRead1, int& dataProc1)
{
// Lock data access
std::unique_lock<std::mutex> lck1(mtx1);
// Wait for new data
while(!ready1)
{
//std::cout << "Waiting sensor1" << std::endl;
cv1.wait(lck1);
}
//std::cout << "No Waiting sensor1" << std::endl;
// Make a local copy of the data (decouples the read and processing tasks so they can run in parallel)
dataProc1 = dataRead1;
std::cout << "Copying sensor1 (" << dataProc1 << ")"<< std::endl;
// Sleep to simulate copying load
std::this_thread::sleep_for(std::chrono::milliseconds(200));
// Set data flag to not ready
ready1 = false;
// Unlock data access
lck1.unlock();
}
// Function that waits until sensor 2 data is ready
void waitSensor2(const int& dataRead2, int& dataProc2)
{
// Lock data access
std::unique_lock<std::mutex> lck2(mtx2);
// Wait for new data
while(!ready2)
{
//std::cout << "Waiting sensor2" << std::endl;
cv2.wait(lck2);
}
//std::cout << "No Waiting sensor2" << std::endl;
// Make a local copy of the data (decouples the read and processing tasks so they can run in parallel)
dataProc2 = dataRead2;
std::cout << "Copying sensor2 (" << dataProc2 << ")"<< std::endl;
// Sleep to simulate copying load
std::this_thread::sleep_for(std::chrono::milliseconds(400));
// Set data flag to not ready
ready2 = false;
// Unlock data access
lck2.unlock();
}
// Main function
int main()
{
Data dataRead; // Data read
Data dataProc; // Data to process
// Threads that read data from sensors 1 and 2 at their own frame rates
std::thread threadGetSensor1(getSensor1, std::ref(dataRead.data1));
std::thread threadGetSensor2(getSensor2, std::ref(dataRead.data2));
// Processing loop
for(int i = 0; i < SIZE_LOOP; i++)
{
// Wait until data from sensor 1 and 2 is ready
std::thread threadWaitSensor1(waitSensor1, std::ref(dataRead.data1), std::ref(dataProc.data1));
std::thread threadWaitSensor2(waitSensor2, std::ref(dataRead.data2), std::ref(dataProc.data2));
// Synchronize data/threads
threadWaitSensor1.join();
threadWaitSensor2.join();
// Process synchronized data while the sensors keep producing new data
std::cout << "Init processing (" << dataProc.data1 << "," << dataProc.data2 << ")"<< std::endl;
// Sleep to simulate processing load
std::this_thread::sleep_for(std::chrono::milliseconds(10000 + (rand() % 1000)));
std::cout << "End processing" << std::endl;
}
return 0;
}

Writing to a file from a shared buffer: missing data, and program crash without cout

I am making a program using threads and a shared buffer. The two threads run indefinitely in the background: one thread fills a shared buffer with data and the other writes the content of the shared buffer into a file.
The user can start or stop the data filling, which results in the filling thread entering a waiting state until the user starts it again. Each loop, the buffer is filled with 50 floats.
This is the code :
#include <iostream>
#include <vector>
#include <iterator>
#include <utility>
#include <fstream>
#include <condition_variable>
#include <mutex>
#include <thread>
using namespace std;
std::mutex m;
std::condition_variable cv;
std::vector<std::vector<float>> datas;
bool keep_running = true, start_running = false;
void writing_thread()
{
ofstream myfile;
bool opn = false;
while(1)
{
while(keep_running)
{
// Open the file only once
if(!opn)
{
myfile.open("IQ_Datas.txt");
opn = true;
}
// Wait until main() sends data
std::unique_lock<std::mutex> lk(m);
cv.wait(lk, [] {return !datas.empty();});
auto d = std::move(datas);
lk.unlock();
for(auto &entry : d)
{
for(auto &e : entry)
myfile << e << endl;
}
}
if(opn)
{
myfile.close();
opn = false;
}
}
}
void sending_thread()
{
std::vector<float> m_buffer;
int cpt=0;
//Fill the buffer with 50 floats
for(float i=0; i<50; i++)
m_buffer.push_back(i);
while(1)
{
{
std::unique_lock<std::mutex> lk(m);
cv.wait(lk, [] {return keep_running && start_running;});
}
while(keep_running)
{
//Each loop d is containing 50 floats
std::vector<float> d = m_buffer;
cout << "in3" << endl; //Commenting this line makes the program crash
{
std::lock_guard<std::mutex> lk(m);
if (!keep_running)break;
datas.push_back(std::move(d));
}
cv.notify_one();
cpt++;
}
cout << "Total data: " << cpt*50 << endl;
cpt = 0;
}
}
void start()
{
{
std::unique_lock<std::mutex> lk(m);
start_running = true;
}
cv.notify_all();
}
void stop()
{
{
std::unique_lock<std::mutex> lk(m);
start_running = false;
}
cv.notify_all();
}
int main()
{
int go = 0;
thread t1(sending_thread);
thread t2(writing_thread);
t1.detach();
t2.detach();
while(1)
{
std::cin >> go;
if(go == 1)
{
start();
keep_running = true;
}
else if(go == 0)
{
stop();
keep_running = false;
}
}
return 0;
}
I have 2 issues with this code :
When commenting out the line cout << "in3" << endl;, the program crashes after ~20-40 seconds with the error message: terminate called after throwing an instance of 'std::bad_alloc'
what(): std::bad_alloc. If I leave the cout in, the program runs without problems.
When the program is working, after stopping sending_thread I display the total amount of data that has been copied with cout << "Total data: " << cpt*50 << endl;. For small amounts of data, all of it is written correctly to the file, but when the amount is big there is missing data (the total number of lines in the file does not match the total amount of data).
Why does the program run correctly with the cout? And what is causing the missing data? Is it because sending_thread fills the buffer too fast while writing_thread takes too much time to write to the file?
EDIT: Some precisions: adding more cout calls into sending_thread seems to fix all the issues. The first thread produced 21 million floats and the second thread successfully wrote 21 million floats to the file. It seems that without the cout, the producer thread works too fast for the consumer thread to keep retrieving data from the shared buffer and writing it to the file.
To avoid this analyzer warning:
Moved-from object 'datas' of type 'std::vector' is moved:
auto d = std::move(datas);
^~~~~~~~~~~~~~~~
Replace this:
// Wait until main() sends data
std::unique_lock<std::mutex> lk(m);
cv.wait(lk, [] {return !datas.empty();});
auto d = std::move(datas);
lk.unlock();
With this:
// Wait until main() sends data
std::vector<std::vector<float>> d;
{
std::unique_lock<std::mutex> lk(m);
cv.wait(lk, [] { return !datas.empty(); });
datas.swap(d);
}
Also replace your bool variables that are accessed from multiple threads with std::atomic_bool or std::atomic_flag.
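For example, a minimal sketch of such a flag as std::atomic<bool> (the names mirror the question; note the atomic protects only the flag itself, not the data that the condition variable guards):
#include <atomic>
#include <chrono>
#include <thread>

std::atomic<bool> keep_running{true}; // shared stop flag; no mutex needed for the flag itself

void worker()
{
    while (keep_running.load())       // safe to read from any thread
        std::this_thread::sleep_for(std::chrono::milliseconds(10));
}

int main()
{
    std::thread t(worker);
    std::this_thread::sleep_for(std::chrono::milliseconds(100));
    keep_running.store(false);        // safe to write from any thread
    t.join();
}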
The bad_alloc comes from sending_thread being much faster than writing_thread, so it eventually runs out of memory. When you slow sending_thread down enough (with the printing), the problem is less visible, but you should have proper synchronization to fix it. You could make a wrapper class around the shared buffer and provide insert and extraction methods to make sure all access is synchronized properly, and also give it a maximum number of elements. An example:
template<typename T>
class atomic2dvector {
public:
atomic2dvector(size_t max_elements) : m_max_elements(max_elements) {}
atomic2dvector(const atomic2dvector&) = delete;
atomic2dvector(atomic2dvector&&) = delete;
atomic2dvector& operator=(const atomic2dvector&) = delete;
atomic2dvector& operator=(atomic2dvector&&) = delete;
~atomic2dvector() { shutdown(); }
bool insert_one(std::vector<T>&& other) {
std::unique_lock<std::mutex> lock(m_mtx);
while(m_current_elements + m_data.size() > m_max_elements && m_shutdown == false)
m_cv.wait(lock);
if(m_shutdown) return false;
m_current_elements += other.size();
m_data.emplace_back(std::forward<std::vector<T>>(other));
m_cv.notify_one();
return true;
}
std::vector<std::vector<T>> extract_all() {
std::vector<std::vector<T>> return_value;
std::unique_lock<std::mutex> lock(m_mtx);
while(m_data.empty() && m_shutdown == false) m_cv.wait(lock);
if(m_shutdown == false) {
m_current_elements = 0;
return_value.swap(m_data);
} else {
// return an empty vector if we should shutdown
}
m_cv.notify_one();
return return_value;
}
bool is_active() const { return m_shutdown == false; }
void shutdown() {
m_shutdown = true;
m_cv.notify_all();
}
private:
size_t m_max_elements;
size_t m_current_elements = 0;
std::atomic<bool> m_shutdown = false;
std::condition_variable m_cv{};
std::mutex m_mtx{};
std::vector<std::vector<T>> m_data{};
};
If you'd like to keep extracting data even after shutdown, you can change extract_all() to this:
std::vector<std::vector<T>> extract_all() {
std::vector<std::vector<T>> return_value;
std::unique_lock<std::mutex> lock(m_mtx);
while(m_data.empty() && m_shutdown == false) m_cv.wait(lock);
m_current_elements = 0;
return_value.swap(m_data);
m_cv.notify_one();
return return_value;
}
A full example could look like this:
#include <atomic>
#include <chrono>
#include <condition_variable>
#include <fstream>
#include <iostream>
#include <iterator>
#include <mutex>
#include <thread>
#include <utility>
#include <vector>
using namespace std;
template<typename T>
class atomic2dvector {
public:
atomic2dvector(size_t max_elements) : m_max_elements(max_elements) {}
atomic2dvector(const atomic2dvector&) = delete;
atomic2dvector(atomic2dvector&&) = delete;
atomic2dvector& operator=(const atomic2dvector&) = delete;
atomic2dvector& operator=(atomic2dvector&&) = delete;
~atomic2dvector() { shutdown(); }
bool insert_one(std::vector<T>&& other) {
std::unique_lock<std::mutex> lock(m_mtx);
while(m_current_elements + m_data.size() > m_max_elements &&
m_shutdown == false)
m_cv.wait(lock);
if(m_shutdown) return false;
m_current_elements += other.size();
m_data.emplace_back(std::forward<std::vector<T>>(other));
m_cv.notify_one();
return true;
}
std::vector<std::vector<T>> extract_all() {
std::vector<std::vector<T>> return_value;
std::unique_lock<std::mutex> lock(m_mtx);
while(m_data.empty() && m_shutdown == false) m_cv.wait(lock);
m_current_elements = 0;
return_value.swap(m_data);
m_cv.notify_one();
return return_value;
}
bool is_active() const { return m_shutdown == false; }
void shutdown() {
m_shutdown = true;
m_cv.notify_all();
}
private:
size_t m_max_elements;
size_t m_current_elements = 0;
std::atomic<bool> m_shutdown = false;
std::condition_variable m_cv{};
std::mutex m_mtx{};
std::vector<std::vector<T>> m_data{};
};
std::mutex m;
std::condition_variable cv;
atomic2dvector<float> datas(256 * 1024 * 1024 / sizeof(float)); // 0.25 GiB limit
std::atomic_bool start_running = false;
void writing_thread() {
std::ofstream myfile("IQ_Datas.txt");
if(myfile) {
std::cout << "writing_thread waiting\n";
std::vector<std::vector<float>> d;
while((d = datas.extract_all()).empty() == false) {
std::cout << "got " << d.size() << "\n";
for(auto& entry : d) {
for(auto& e : entry) myfile << e << "\n";
}
std::cout << "wrote " << d.size() << "\n\n";
}
}
std::cout << "writing_thread shutting down\n";
}
void sending_thread() {
std::vector<float> m_buffer;
std::uintmax_t cpt = 0;
// Fill the buffer with 50 floats
for(float i = 0; i < 50; i++) m_buffer.push_back(i);
while(true) {
{
std::unique_lock<std::mutex> lk(m);
cv.wait(lk, [] {
return start_running == true || datas.is_active() == false;
});
}
if(datas.is_active() == false) break;
std::cout << "sending...\n";
while(start_running == true) {
// Each loop d is containing 50 floats
std::vector<float> d = m_buffer;
if(datas.insert_one(std::move(d)) == false) break;
cpt++;
}
cout << "Total data: " << cpt * 50 << endl;
cpt = 0;
}
std::cout << "sending_thread shutting down\n";
}
void start() {
std::unique_lock<std::mutex> lk(m);
start_running = true;
cv.notify_all();
}
void stop() {
std::unique_lock<std::mutex> lk(m);
start_running = false;
cv.notify_all();
}
void quit() {
datas.shutdown();
cv.notify_all();
}
int main() {
int go = 0;
thread t1(sending_thread);
thread t2(writing_thread);
std::this_thread::sleep_for(std::chrono::milliseconds(100));
std::cout << "Enter 1 to make the sending thread send and 0 to make it stop "
"sending. Enter a non-integer to shutdown.\n";
while(std::cin >> go) {
if(go == 1) {
start();
} else if(go == 0) {
stop();
}
}
std::cout << "--- shutting down ---\n";
quit();
std::cout << "joining threads\n";
t1.join();
std::cout << "t1 joined\n";
t2.join();
std::cout << "t2 joined\n";
}

Do I need to implement blocking when using boost::asio?

My question is: if I run io_service::run() on multiple threads, do I need to implement blocking in these asynchronous functions?
example:
int i = 0;
int j = 0;
void test_timer(boost::system::error_code ec)
{
// Do I need a lock here?
if (i++ == 10)
{
j = i * 10;
}
timer.expires_at(timer.expires_at() + boost::posix_time::milliseconds(500));
timer.async_wait(&test_timer);
}
void threadMain()
{
io_service.run();
}
int main()
{
boost::thread_group workers;
timer.async_wait(&test_timer);
for (int i = 0; i < 5; i++){
workers.create_thread(&threadMain);
}
io_service.run();
workers.join_all();
return 0;
}
The definition of async is that it is non-blocking.
If you mean to ask "do I have to synchronize access to shared objects from different threads" - that question is unrelated and the answer depends on the thread-safety documented for the object you are sharing.
For Asio, basically (rough summary) you need to synchronize concurrent access (concurrent as in: from multiple threads) to all types except boost::asio::io_context¹,².
Your Sample
Your sample uses multiple threads running the io service, meaning handlers run on any of those threads. This means that effectively you're sharing the globals and indeed they need protection.
However, because your application logic (the async call chain) dictates that only one operation is ever pending, and the next async operation on the shared timer object is always scheduled from within that chain, the access is logically all from a single thread (called an implicit strand). See Why do I need strand per connection when using boost::asio?
The simplest thing that would work:
Logical Strand
Live On Coliru
#include <boost/asio.hpp>
#include <boost/thread.hpp>
#include <iostream>
boost::asio::io_service io_service;
boost::asio::deadline_timer timer { io_service };
struct state_t {
int i = 0;
int j = 0;
} state;
void test_timer(boost::system::error_code ec)
{
if (ec != boost::asio::error::operation_aborted) {
{
if (state.i++ == 10) {
state.j = state.i * 10;
if (state.j > 100)
return; // stop after 5 seconds
}
}
timer.expires_at(timer.expires_at() + boost::posix_time::milliseconds(50));
timer.async_wait(&test_timer);
}
}
int main()
{
boost::thread_group workers;
timer.expires_from_now(boost::posix_time::milliseconds(50));
timer.async_wait(&test_timer);
for (int i = 0; i < 5; i++){
workers.create_thread([] { io_service.run(); });
}
workers.join_all();
std::cout << "i = " << state.i << std::endl;
std::cout << "j = " << state.j << std::endl;
}
Note I removed the io_service::run() from the main thread as it is redundant with the join() (unless you really wanted 6 threads running the handlers, not 5).
Prints
i = 11
j = 110
Caveat
There's a pitfall lurking here. Say you didn't want to bail at a fixed number like I did, but wanted to stop the chain from outside; you'd be tempted to call:
timer.cancel();
from main. That's not legal, because the deadline_timer object is not thread safe. You'd need to either
use a global atomic_bool to signal the request for termination (see the sketch after this list), or
post the timer.cancel() on the same strand as the timer async chain. However, there is only an implicit strand here, so you can't do that without changing the code to use an explicit strand.
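A minimal sketch of the first option, applied to the logical-strand sample above (the stop_requested flag and the two-second sleep are my additions, not part of the original code):
#include <boost/asio.hpp>
#include <boost/thread.hpp>
#include <atomic>

boost::asio::io_service io_service;
boost::asio::deadline_timer timer { io_service };
std::atomic<bool> stop_requested { false }; // written by main, read inside the handler

void test_timer(boost::system::error_code ec)
{
    if (ec == boost::asio::error::operation_aborted || stop_requested)
        return; // do not reschedule; run() returns once no work is left

    // ... update the shared state here, as in the sample above ...

    timer.expires_at(timer.expires_at() + boost::posix_time::milliseconds(50));
    timer.async_wait(&test_timer);
}

int main()
{
    boost::thread_group workers;
    timer.expires_from_now(boost::posix_time::milliseconds(50));
    timer.async_wait(&test_timer);
    for (int i = 0; i < 5; i++)
        workers.create_thread([] { io_service.run(); });

    boost::this_thread::sleep_for(boost::chrono::seconds(2));
    stop_requested = true; // the wait that is already pending still fires once, then the chain ends
    workers.join_all();
}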
More Timers
Let's complicate things by having two timers, with their own implicit strands. This means access to the timer instances still need not be synchronized, but access to i and j does need to be.
Note: In this demo I use synchronized_value<> for elegance. You can write similar logic manually using a mutex and lock_guard; a hand-rolled sketch appears after the output below.
Live On Coliru
#include <boost/asio.hpp>
#include <boost/thread.hpp>
#include <boost/thread/synchronized_value.hpp>
#include <iostream>
boost::asio::io_service io_service;
struct state {
int i = 0;
int j = 0;
};
boost::synchronized_value<state> shared_state;
struct TimerChain {
boost::asio::deadline_timer _timer;
TimerChain() : _timer{io_service} {
_timer.expires_from_now(boost::posix_time::milliseconds(50));
resume();
}
void resume() {
_timer.async_wait(boost::bind(&TimerChain::test_timer, this, _1));
};
void test_timer(boost::system::error_code ec)
{
if (ec != boost::asio::error::operation_aborted) {
{
auto state = shared_state.synchronize();
if (state->i++ == 10) {
state->j = state->i * 10;
}
if (state->j > 100) return; // stop after some iterations
}
_timer.expires_at(_timer.expires_at() + boost::posix_time::milliseconds(50));
resume();
}
}
};
int main()
{
boost::thread_group workers;
TimerChain timer1;
TimerChain timer2;
for (int i = 0; i < 5; i++){
workers.create_thread([] { io_service.run(); });
}
workers.join_all();
auto state = shared_state.synchronize();
std::cout << "i = " << state->i << std::endl;
std::cout << "j = " << state->j << std::endl;
}
Prints
i = 12
j = 110
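For comparison, here is the same critical section hand-rolled with a mutex and lock_guard instead of synchronized_value<> (a sketch: only the shared-state declarations and TimerChain::test_timer change, everything else stays as in the listing above):
struct state {
    int i = 0;
    int j = 0;
};
state shared_state;
boost::mutex state_mutex; // guards every access to shared_state

void TimerChain::test_timer(boost::system::error_code ec)
{
    if (ec != boost::asio::error::operation_aborted) {
        {
            boost::lock_guard<boost::mutex> guard(state_mutex); // same scope the synchronize() proxy covered
            if (shared_state.i++ == 10) {
                shared_state.j = shared_state.i * 10;
            }
            if (shared_state.j > 100) return; // stop after some iterations
        }
        _timer.expires_at(_timer.expires_at() + boost::posix_time::milliseconds(50));
        resume();
    }
}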
Adding The Explicit Strands
Now it's pretty straight-forward to add them:
struct TimerChain {
boost::asio::io_service::strand _strand;
boost::asio::deadline_timer _timer;
TimerChain() : _strand{io_service}, _timer{io_service} {
_timer.expires_from_now(boost::posix_time::milliseconds(50));
resume();
}
void resume() {
_timer.async_wait(_strand.wrap(boost::bind(&TimerChain::test_timer, this, _1)));
};
void stop() { // thread safe
_strand.post([this] { _timer.cancel(); });
}
// ...
Live On Coliru
#include <boost/asio.hpp>
#include <boost/thread.hpp>
#include <boost/thread/synchronized_value.hpp>
#include <iostream>
boost::asio::io_service io_service;
struct state {
int i = 0;
int j = 0;
};
boost::synchronized_value<state> shared_state;
struct TimerChain {
boost::asio::io_service::strand _strand;
boost::asio::deadline_timer _timer;
TimerChain() : _strand{io_service}, _timer{io_service} {
_timer.expires_from_now(boost::posix_time::milliseconds(50));
resume();
}
void resume() {
_timer.async_wait(_strand.wrap(boost::bind(&TimerChain::test_timer, this, _1)));
};
void stop() { // thread safe
_strand.post([this] { _timer.cancel(); });
}
void test_timer(boost::system::error_code ec)
{
if (ec != boost::asio::error::operation_aborted) {
{
auto state = shared_state.synchronize();
if (state->i++ == 10) {
state->j = state->i * 10;
}
}
// continue indefinitely
_timer.expires_at(_timer.expires_at() + boost::posix_time::milliseconds(50));
resume();
}
}
};
int main()
{
boost::thread_group workers;
TimerChain timer1;
TimerChain timer2;
for (int i = 0; i < 5; i++){
workers.create_thread([] { io_service.run(); });
}
boost::this_thread::sleep_for(boost::chrono::seconds(10));
timer1.stop();
timer2.stop();
workers.join_all();
auto state = shared_state.synchronize();
std::cout << "i = " << state->i << std::endl;
std::cout << "j = " << state->j << std::endl;
}
Prints
i = 400
j = 110
¹ (or using the legacy name boost::asio::io_service)
² lifetime mutations are not considered member operations in this respect (you have to manually synchronize construction/destruction of shared objects even for thread-safe objects)

Boost synchronization

I have NUM_THREADS threads, with the following codes in my thread:
/*
Calculate some_value;
*/
//Critical section to accummulate all thresholds
{
boost::mutex::scoped_lock lock(write_mutex);
T += some_value;
num_threads++;
if (num_threads == NUM_THREADS){
T = T/NUM_THREADS;
READY = true;
cond.notify_all();
num_threads = 0;
}
}
//Wait for average threshold to be ready
if (!READY)
{
boost::unique_lock<boost::mutex> lock(wait_mutex);
while (!READY){
cond.wait(lock);
}
}
//End critical section
/*
do_something;
*/
Basically, I want all the threads to wait for the READY signal before continuing. num_threads is set to 0 and READY is false before the threads are created. Once in a while, a deadlock occurs. Can anyone help, please?
All the boost variables are globally declared as follows:
boost::mutex write_mutex;
boost::mutex wait_mutex;
boost::condition cond;
The code has a race condition on the READY flag (which I assume is just a bool variable). What may happen (i.e. one possible variant of thread execution interleaving) is:
Thread T1:                                   Thread T2:

if (!READY)
{
    unique_lock<mutex> lock(wait_mutex);     mutex::scoped_lock lock(write_mutex);
    while (!READY)                           /* ... */
    {                                        READY = true;
        /* !!! */                            cond.notify_all();
        cond.wait(lock);
    }
}
The code testing the READY flag is not synchronized with the code setting it (note the locks are different for these critical sections). And when T1 is in a "hole" between the flag test and waiting at cond, T2 may set the flag and send a signal to cond which T1 may miss.
The simplest solution is to lock the right mutex for the update of READY and condition notification:
/*...*/
T = T/NUM_THREADS;
{
boost::mutex::scoped_lock lock(wait_mutex);
READY = true;
cond.notify_all();
}
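For what it's worth, the subtlety disappears entirely if a single mutex guards T, num_threads and READY together. A rough sketch with the same names as the question (the reset of num_threads and READY for a second round is left out, which is exactly the bookkeeping a barrier would handle for you):
boost::mutex state_mutex;        // guards T, num_threads and READY
boost::condition_variable cond;

// ... in each thread, after computing some_value:
{
    boost::unique_lock<boost::mutex> lock(state_mutex);
    T += some_value;
    if (++num_threads == NUM_THREADS)
    {
        T = T / NUM_THREADS;
        READY = true;
        cond.notify_all();       // every waiter re-checks READY under the same mutex
    }
    while (!READY)
        cond.wait(lock);         // releases state_mutex while waiting
}
// do_something with the averaged T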
It looks like Boost.Thread's barriers might be what you need.
Here's a working example that averages values provided by several worker threads. Each worker thread uses the same shared barrier (via the accumulator instance) to synchronize with the others.
#include <cstdlib>
#include <iostream>
#include <vector>
#include <boost/bind.hpp>
#include <boost/shared_ptr.hpp>
#include <boost/thread.hpp>
boost::mutex coutMutex;
typedef boost::lock_guard<boost::mutex> LockType;
class Accumulator
{
public:
Accumulator(int count) : barrier_(count), sum_(0), count_(count) {}
void accumulateAndWait(float value)
{
{
// Increment value
LockType lock(mutex_);
sum_ += value;
}
barrier_.wait(); // Wait for the other threads to reach the barrier.
}
void wait() {barrier_.wait();} // Wait on barrier without changing sum.
float sum() {LockType lock(mutex_); return sum_;} // Return current sum
float average() {LockType lock(mutex_); return sum_ / count_;}
// Reset the sum. The barrier is automatically reset when triggered.
void reset() {LockType lock(mutex_); sum_ = 0;}
private:
typedef boost::lock_guard<boost::mutex> LockType;
boost::barrier barrier_;
boost::mutex mutex_;
float sum_;
int count_;
};
/* Posts a value for the accumulator to add and waits for other threads
to do the same. */
void workerFunction(Accumulator& accumulator)
{
// Sleep for a random amount of time before posting value
int randomMilliseconds = std::rand() % 3000;
boost::posix_time::time_duration randomDelay =
boost::posix_time::milliseconds(randomMilliseconds);
boost::this_thread::sleep(randomDelay);
// Post some random value
float value = std::rand() % 100;
{
LockType lock(coutMutex);
std::cout << "Thread " << boost::this_thread::get_id() << " posting "
<< value << " after " << randomMilliseconds << "ms\n";
}
accumulator.accumulateAndWait(value);
float avg = accumulator.average();
// Print a message to indicate this thread is past the barrier.
{
LockType lock(coutMutex);
std::cout << "Thread " << boost::this_thread::get_id() << " unblocked. "
<< "Average = " << avg << "\n" << std::flush;
}
}
int main()
{
int workerThreadCount = 5;
Accumulator accumulator(workerThreadCount);
// Create and launch worker threads
boost::thread_group threadGroup;
for (int i=0; i<workerThreadCount; ++i)
{
threadGroup.create_thread(
boost::bind(&workerFunction, boost::ref(accumulator)));
}
// Wait for all worker threads to finish
threadGroup.join_all();
{
LockType lock(coutMutex);
std::cout << "All worker threads finished\n" << std::flush;
}
/* Pause a bit before exiting, to give worker threads a chance to
print their messages. */
boost::this_thread::sleep(boost::posix_time::seconds(1));
}
I get the following output:
Thread 0x100100f80 posting 72 after 1073ms
Thread 0x100100d30 posting 44 after 1249ms
Thread 0x1001011d0 posting 78 after 1658ms
Thread 0x100100ae0 posting 23 after 1807ms
Thread 0x100101420 posting 9 after 1930ms
Thread 0x100101420 unblocked. Average = 45.2
Thread 0x100100f80 unblocked. Average = 45.2
Thread 0x100100d30 unblocked. Average = 45.2
Thread 0x1001011d0 unblocked. Average = 45.2
Thread 0x100100ae0 unblocked. Average = 45.2
All worker threads finished

Implementing an event timer using boost::asio

The sample code looks long, but actually it's not so complicated :-)
What I'm trying to do is, when a user calls EventTimer.Start(), it will execute the callback handler (which is passed into the ctor) every interval milliseconds for repeatCount times.
You just need to look at the function EventTimer::Stop()
#include <iostream>
#include <string>
#include <boost/asio.hpp>
#include <boost/bind.hpp>
#include <boost/thread.hpp>
#include <boost/function.hpp>
#include <boost/date_time/posix_time/posix_time.hpp>
#include <ctime>
#include <sys/timeb.h>
#include <Windows.h>
std::string CurrentDateTimeTimestampMilliseconds() {
double ms = 0.0; // Milliseconds
struct timeb curtime;
ftime(&curtime);
ms = (double) (curtime.millitm);
char timestamp[128];
time_t now = time(NULL);
struct tm *tp = localtime(&now);
sprintf(timestamp, "%04d%02d%02d-%02d%02d%02d.%03.0f",
tp->tm_year + 1900, tp->tm_mon + 1, tp->tm_mday, tp->tm_hour, tp->tm_min, tp->tm_sec, ms);
return std::string(timestamp);
}
class EventTimer
{
public:
static const int kDefaultInterval = 1000;
static const int kMinInterval = 1;
static const int kDefaultRepeatCount = 1;
static const int kInfiniteRepeatCount = -1;
static const int kDefaultOffset = 10;
public:
typedef boost::function<void()> Handler;
EventTimer(Handler handler = NULL)
: interval(kDefaultInterval),
repeatCount(kDefaultRepeatCount),
handler(handler),
timer(io),
exeCount(-1)
{
}
virtual ~EventTimer()
{
}
void SetInterval(int value)
{
// if (value < 1)
// throw std::exception();
interval = value;
}
void SetRepeatCount(int value)
{
// if (value < 1)
// throw std::exception();
repeatCount = value;
}
bool Running() const
{
return exeCount >= 0;
}
void Start()
{
io.reset(); // I don't know why I have to put io.reset here,
// since it's already been called in Stop()
exeCount = 0;
timer.expires_from_now(boost::posix_time::milliseconds(interval));
timer.async_wait(boost::bind(&EventTimer::EventHandler, this));
io.run();
}
void Stop()
{
if (Running())
{
// How to reset everything when stop is called???
//io.stop();
timer.cancel();
io.reset();
exeCount = -1; // Reset
}
}
private:
virtual void EventHandler()
{
// Execute the requested operation
//if (handler != NULL)
// handler();
std::cout << CurrentDateTimeTimestampMilliseconds() << ": exeCount = " << exeCount + 1 << std::endl;
// Check if one more time of handler execution is required
if (repeatCount == kInfiniteRepeatCount || ++exeCount < repeatCount)
{
timer.expires_at(timer.expires_at() + boost::posix_time::milliseconds(interval));
timer.async_wait(boost::bind(&EventTimer::EventHandler, this));
}
else
{
Stop();
std::cout << CurrentDateTimeTimestampMilliseconds() << ": Stopped" << std::endl;
}
}
private:
int interval; // Milliseconds
int repeatCount; // Number of times to trigger the EventHandler
int exeCount; // Number of executed times
boost::asio::io_service io;
boost::asio::deadline_timer timer;
Handler handler;
};
int main()
{
EventTimer etimer;
etimer.SetInterval(1000);
etimer.SetRepeatCount(1);
std::cout << CurrentDateTimeTimestampMilliseconds() << ": Started" << std::endl;
etimer.Start();
// boost::thread thrd1(boost::bind(&EventTimer::Start, &etimer));
Sleep(3000); // Keep the main thread active
etimer.SetInterval(2000);
etimer.SetRepeatCount(1);
std::cout << CurrentDateTimeTimestampMilliseconds() << ": Started again" << std::endl;
etimer.Start();
// boost::thread thrd2(boost::bind(&EventTimer::Start, &etimer));
Sleep(5000); // Keep the main thread active
}
/* Current Output:
20110520-125506.781: Started
20110520-125507.781: exeCount = 1
20110520-125507.781: Stopped
20110520-125510.781: Started again
*/
/* Expected Output (timestamp might be slightly different with some offset)
20110520-125506.781: Started
20110520-125507.781: exeCount = 1
20110520-125507.781: Stopped
20110520-125510.781: Started again
20110520-125512.781: exeCount = 1
20110520-125512.781: Stopped
*/
I don't know why my second call to EventTimer::Start() does not work at all. My questions are:
What should I do in EventTimer::Stop() in order to reset everything so that the next call to Start() will work?
Is there anything else I have to modify?
If I use another thread to call EventTimer::Start() (see the commented code in the main function), when does that thread actually exit?
Thanks.
Peter
As Sam hinted, depending on what you're attempting to accomplish, most of the time it is considered a design error to stop an io_service. You do not need to stop()/reset() the io_service in order to reschedule a timer.
Normally you would leave a thread or thread pool running attached to an io_service, and then you would schedule whatever event you need with the io_service. With the io_service machinery in place, leave it up to the io_service to dispatch your scheduled work as requested; then you only have to work with the events or work requests that you schedule with the io_service.
It's not entirely clear to me what you are trying to accomplish, but there are a couple of things that are incorrect in the code you have posted:
io_service::reset() should only be invoked after a previous invocation of io_service::run() was stopped or ran out of work, as the documentation describes.
you should not need explicit calls to Sleep(); the call to io_service::run() will block as long as it has work to do (see the sketch after this list).
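As a rough sketch of that approach (RepeatingTimer is a made-up name, not a drop-in replacement for EventTimer): the timer re-arms itself from its own completion handler, and io_service::run() is simply left to block until the repetitions are exhausted, with no stop()/reset() and no Sleep():
#include <boost/asio.hpp>
#include <boost/bind.hpp>
#include <boost/date_time/posix_time/posix_time.hpp>
#include <iostream>

class RepeatingTimer
{
public:
    RepeatingTimer(boost::asio::io_service& io, int intervalMs, int repeatCount)
        : timer_(io), intervalMs_(intervalMs), remaining_(repeatCount) {}

    void Start()
    {
        timer_.expires_from_now(boost::posix_time::milliseconds(intervalMs_));
        timer_.async_wait(boost::bind(&RepeatingTimer::Handle, this, _1));
    }

private:
    void Handle(const boost::system::error_code& ec)
    {
        if (ec == boost::asio::error::operation_aborted || remaining_ <= 0)
            return;                   // cancelled or done: simply stop rescheduling
        std::cout << "tick, remaining = " << --remaining_ << std::endl;
        if (remaining_ > 0)
        {
            timer_.expires_at(timer_.expires_at() + boost::posix_time::milliseconds(intervalMs_));
            timer_.async_wait(boost::bind(&RepeatingTimer::Handle, this, _1));
        }
    }

    boost::asio::deadline_timer timer_;
    int intervalMs_;
    int remaining_;
};

int main()
{
    boost::asio::io_service io;
    RepeatingTimer first(io, 1000, 1);
    RepeatingTimer second(io, 2000, 1);
    first.Start();
    second.Start();
    io.run(); // blocks until both timers have run out of work; no stop()/reset() needed
}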
I figured it out, but I don't know why I have to put io.reset() in Start(), since it's already been called in Stop().
See the updated code in the post.