Simplest/most effective approach for calling back a method from a lib file - C++

I am currently calling some methods from an external lib file. Is there a way for these methods to call back functions in my application once they are done, given that these methods might be running in separate threads?
The following diagram shows what I am trying to achieve.
I wanted to know what is the best way of sending a message back to the calling application? Are there any Boost components that might help?

Update after the edit:
It's not clear what you have. Do you control the thread entry point for the thread started by the external library (this would surprise me)?
Assuming:
the library function accepts a callback
assuming you don't control the source for the library function, nor the thread function started by this library function in a background thread
you want to have the callback processed on the original thread
you could have the callback store a record in some kind of queue that you regularly check from the main thread (no busy loops, of course). Use a lock-free queue, or synchronize access to the queue using e.g. a std::mutex.
Update: Here's such a queuing version, Live on Coliru as well:
#include <thread>
#include <vector>
#include <chrono>   // std::chrono::milliseconds
#include <cstdlib>  // rand

//////////////////////////////////////////////////////////
// fake external library taking a callback
extern void library_function(int, void(*cb)(int,int));

//////////////////////////////////////////////////////////
// our client code
#include <iostream>
#include <mutex>

void callback_handler(int i, int input)
{
    static std::mutex mx;
    std::lock_guard<std::mutex> lk(mx);
    std::cout << "Callback #" << i << " from task for input " << input << "\n";
}

//////////////////////////////////////////////////////////
// callback queue
#include <deque>
#include <functional> // std::bind
#include <future>

namespace {
    using pending_callback = std::packaged_task<void()>;
    std::deque<pending_callback> callbacks;
    std::mutex callback_mutex;

    int process_pending_callbacks() {
        std::lock_guard<std::mutex> lk(callback_mutex);
        int processed = 0;

        while (!callbacks.empty()) {
            callbacks.front()();
            ++processed;
            callbacks.pop_front();
        }

        return processed;
    }

    void enqueue(pending_callback cb) {
        std::lock_guard<std::mutex> lk(callback_mutex);
        callbacks.push_back(std::move(cb));
    }
}

// this wrapper "fakes" a callback: instead of invoking the real
// callback_handler it queues it for later processing
void queue_callback(int i, int input)
{
    enqueue(pending_callback(std::bind(callback_handler, i, input)));
}

int main()
{
    // do something with delayed processing:
    library_function(3, queue_callback);
    library_function(5, queue_callback);

    // wait for completion, periodically checking for pending callbacks
    for (
        int still_pending = 3 + 5;
        still_pending > 0;
        std::this_thread::sleep_for(std::chrono::milliseconds(10))) // no busy wait
    {
        still_pending -= process_pending_callbacks();
    }
}

//////////////////////////////////////////////////////////
// somewhere, in another library:
void library_function(int some_input, void(*cb)(int,int))
{
    std::thread([=] {
        for (int i = 1; i <= some_input; ++i) {
            std::this_thread::sleep_for(std::chrono::milliseconds(rand() % 5000)); // TODO abolish rand()
            cb(i, some_input);
        }
    }).detach();
}
Typical output:
Callback #1 from task for input 5
Callback #2 from task for input 5
Callback #1 from task for input 3
Callback #3 from task for input 5
Callback #2 from task for input 3
Callback #4 from task for input 5
Callback #5 from task for input 5
Callback #3 from task for input 3
Note that
output is interspersed for both worker threads
but because the callbacks queue is FIFO, the sequence of callbacks per worker thread is preserved
This is what I thought of, before you edited the question: Live on Coliru
#include <thread>
#include <vector>
#include <functional> // std::bind

extern int library_function(bool);

static std::vector<std::thread> workers; // TODO implement a proper pool

void await_workers()
{
    for (auto& th : workers)
        if (th.joinable()) th.join();
}

template <typename F, typename C>
void do_with_continuation(F f, C continuation)
{
    workers.emplace_back([=]() mutable {
        auto result = f();
        continuation(result);
    });
}

#include <iostream>
#include <mutex>

void callback(int result)
{
    static std::mutex mx;
    std::lock_guard<std::mutex> lk(mx);
    std::cout << "Resulting value from callback " << result << "\n";
}

int main()
{
    // do something with delayed processing:
    do_with_continuation(std::bind(library_function, false), callback);
    do_with_continuation(std::bind(library_function, true), callback);

    await_workers();
}

// somewhere, in another library:
#include <chrono>

int library_function(bool some_input)
{
    std::this_thread::sleep_for(std::chrono::seconds(some_input ? 6 : 3));
    return some_input ? 42 : 0;
}
It will always print the output in the order:
Resulting value from callback 0
Resulting value from callback 42
Notes:
make sure you synchronize access to shared state from within such a callback (in this case, std::cout is protected by a lock)
you'd want to make a thread pool, instead of an ever-growing vector of (used) threads

Related

What could be a better alternative to condition_variables?

I am trying to make a multi-threaded logging function. It looks like:
namespace { // Anonymous namespace instead of static functions.
    std::mutex log_mutex;

    void Background() {
        while (IsAlive) {
            std::queue<std::string> log_records;
            {
                // Exchange data for minimizing lock time.
                std::unique_lock lock(log_mutex);
                logs.swap(log_records);
            }
            if (log_records.empty()) {
                Sleep(200);
                continue;
            }
            while (!log_records.empty()) {
                ShowLog(log_records.front());
                log_records.pop();
            }
        }
    }

    void Log(std::string log) {
        std::unique_lock lock(log_mutex);
        logs.push(std::move(log));
    }
}
I use Sleep to prevent high CPU usage caused by continuously looping even when logs are empty. But this has a very visible drawback: it prints the logs in batches. I tried to get over this problem by using condition variables, but there the problem is that if there are many logs in a short time, the cv is blocked and woken up many times, leading to even more CPU usage. What can I do to solve this issue?
You can assume there may be many calls to log per second.
I would probably think of using a counting semaphore for this:
The semaphore would keep a count of the number of messages in the logs (initially zero).
Log clients would write a message and increment by one the number of messages by releasing the semaphore.
A log server would do an acquire on the semaphore, blocking until there was any message in the logs, and then decrementing by one the number of messages.
Notice:
Log clients get the logs queue lock, push a message, and only then do the release on the semaphore.
The log server can do the acquire before getting the logs queue lock; this would be possible even if there were more readers. For instance: 1 message in the log queue, server 1 does an acquire, server 2 does an acquire and blocks because semaphore count is 0, server 1 goes on and gets the logs queue lock...
#include <algorithm> // for_each
#include <chrono>    // chrono_literals
#include <future>    // async, future
#include <iostream>  // cout
#include <mutex>     // mutex, unique_lock
#include <queue>
#include <semaphore> // counting_semaphore
#include <string>
#include <thread>    // sleep_for
#include <vector>

std::mutex mtx{};
std::queue<std::string> logs{};
std::counting_semaphore c_semaphore{ 0 };

int main()
{
    auto log = [](std::string message) {
        std::unique_lock lock{ mtx };
        logs.push(std::move(message));
        c_semaphore.release();
    };

    auto log_client = [&log]() {
        using namespace std::chrono_literals;
        static size_t s_id{ 1 };
        size_t id{ s_id++ };
        for (;;)
        {
            log(std::to_string(id));
            std::this_thread::sleep_for(id * 100ms);
        }
    };

    auto log_server = []() {
        for (;;)
        {
            c_semaphore.acquire();
            std::unique_lock lock{ mtx };
            std::cout << logs.front() << " ";
            logs.pop();
        }
    };

    std::vector<std::future<void>> log_clients(10);
    std::for_each(std::begin(log_clients), std::end(log_clients),
        [&log_client](auto& lc_fut) {
            lc_fut = std::async(std::launch::async, log_client);
        });
    auto ls_fut{ std::async(std::launch::async, log_server) };

    std::for_each(std::begin(log_clients), std::end(log_clients),
        [](auto& lc_fut) { lc_fut.wait(); });
    ls_fut.wait();
}

What's a good way to pass data to a thread in C++?

I'm learning multi-threaded coding using C++. What I need to do is continuously read a word from the keyboard and pass it to a data thread for processing. I used the global variable word[] to pass the data: word[0] != 0 means there is new input from the keyboard, and the data thread sets word[0] to 0 once it has read the data. It works! But I'm not sure whether it is safe, or whether there are better ways to do this. Here is my code:
#include <iostream>
#include <thread>
#include <cstdio>
#include <cstring>

using namespace std;

static const int buff_len = 32;
static char* word = new char[buff_len];

static void data_thread () { // thread to handle data
    while (1)
    {
        if (word[0]) { // have a new word
            char* w = new char[buff_len];
            strcpy(w, word);
            cout << "Data processed!\n";
            word[0] = 0; // Inform the producer that we consumed the word
        }
    }
};

static void read_keyboard () {
    char * linebuf = new char[buff_len];
    thread * worker = new thread( data_thread );
    while (1) //enter "end" to terminate the loop
    {
        if (!std::fgets( linebuf, buff_len, stdin)) // EOF?
            return;
        linebuf[strcspn(linebuf, "\n")] = '\0'; //remove new line '\n' from the string
        word = linebuf; // Pass the word to the worker thread
        while (word[0]); // Wait for the worker thread to consume it
    }
    worker->join(); // Wait for the worker to terminate
}

int main ()
{
    read_keyboard();
    return 0;
}
The problem with this type of multithreading implementation is busy waiting: the input reader and the data consumer are both busy-waiting and wasting CPU cycles. To overcome this you need a semaphore.
Semaphore s_full(0);
Semaphore s_empty(1);

void data_processor ()
{
    while (true) {
        // Wait for data availability.
        s_full.wait();
        // Data is available to you, consume it.
        process_data();
        // Unblock the data producer.
        s_empty.signal();
    }
}

void input_reader()
{
    while (true) {
        // Wait for empty buffer.
        s_empty.wait();
        // Read data.
        read_input_data();
        // Unblock data consumer.
        s_full.signal();
    }
}
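The Semaphore type used in this pseudocode is not a standard C++ type before C++20. A minimal stand-in built from a mutex and a condition variable could look like the sketch below (an assumed helper, only to make the pseudocode concrete; with C++20 you could use std::counting_semaphore instead, as the logging answer above does):
#include <condition_variable>
#include <mutex>

class Semaphore {
public:
    explicit Semaphore(int count) : count_(count) {}

    void wait() {                     // block until count > 0, then decrement
        std::unique_lock<std::mutex> lk(m_);
        cv_.wait(lk, [this] { return count_ > 0; });
        --count_;
    }

    void signal() {                   // increment and wake one waiter
        {
            std::lock_guard<std::mutex> lk(m_);
            ++count_;
        }
        cv_.notify_one();
    }

private:
    std::mutex m_;
    std::condition_variable cv_;
    int count_;
};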
In addition, this solution will work only for a single data consumer thread. For multiple data consumer threads you'll need a thread-safe buffer queue and a proper implementation of the producer-consumer problem.
See below blog links for additional information to solve this problem:
Thread safe buffer queue:
https://codeistry.wordpress.com/2018/03/08/buffer-queue-handling-in-multithreaded-environment/
Producer - consumer problem:
https://codeistry.wordpress.com/2018/03/09/unordered-producer-consumer/
There are a few problems with your approach:
This method is not scalable. What if you have more than 1 processing thread?
You would need a mutex to synchronise read-write access to the memory pointed to by word. At the scale of this example, not a big deal. In a "serious" application you might not have the luxury of waiting until the data thread stops processing. In that case, you might be tempted to remove the while(word[0]), but that is unsafe.
You fire off a "daemon" thread (not exactly, but close enough) to handle your computations. Most of the time the thread is waiting for your input and cannot proceed without it. This is inefficient, and modern C++ gives you a way around it without explicitly handling raw threads, using the std::async paradigm.
#include <future>
#include <string>
#include <iostream>

static std::string worker(const std::string &input)
{
    // assume this is a lengthy operation
    return input.substr(1);
}

int main()
{
    while (true)
    {
        std::string input;
        std::getline(std::cin, input);
        if (input.empty())
            break;

        std::future<std::string> fut = std::async(std::launch::async, &worker, input);

        // Other tasks
        // size_t n_stars = count_number_of_stars();
        //

        std::string result = fut.get(); // wait for the task to complete
        printf("Output : %s\n", result.c_str());
    }
    return 0;
}
Something like this, in my opinion, is the better approach. std::async will launch a thread (if the std::launch::async option is specified) and return a waitable future. The computation will continue in the background, and you can do other work in the main thread. When you need the result of your computation, you call get() on the future (by the way, the future can be void too).
Also, there are a lot of C-isms in your C++ code. Unless there is a reason to do so, why would you not use std::string?
In modern C++ multithreading, you should be using a condition_variable, a mutex, and a queue to handle this. The mutex prevents concurrent access to the queue, and the condition variable makes the reader thread sleep until the writer has written something. The following is an example:
#include <atomic>
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <queue>
#include <string>
#include <thread>

static void data_thread (std::queue<char> & dataToProcess, std::mutex & mut, std::condition_variable & cv, std::atomic<bool>& finished) { // thread to handle data
    std::string readData;
    while (!finished)
    {
        {
            std::unique_lock lock{mut};
            cv.wait(lock, [&] { return !dataToProcess.empty() || finished; });
            if (finished) {
                while (!dataToProcess.empty()) {
                    readData += dataToProcess.front();
                    dataToProcess.pop();
                }
            }
            else {
                readData += dataToProcess.front();
                dataToProcess.pop();
            }
        }
        std::cout << "\nData processed\n";
    }
    std::cout << readData;
};

static void read_keyboard () {
    std::queue<char> data;
    std::condition_variable cv;
    std::mutex mut;
    std::atomic<bool> finished = false;
    std::thread worker = std::thread( data_thread, std::ref(data), std::ref(mut), std::ref(cv), std::ref(finished) );
    char temp;
    while (true) //enter "end" to terminate the loop
    {
        if (!std::cin.get(temp)) // EOF?
        {
            std::cin.clear();
            finished = true;
            cv.notify_all();
            break;
        }
        {
            std::lock_guard lock {mut};
            data.push(temp);
        }
        cv.notify_all();
    }
    worker.join(); // Wait for the worker to terminate
}

int main ()
{
    read_keyboard();
    return 0;
}
What you are looking for is a message queue. This needs a mutex and a condition variable.
Here is one on github (not mine but it popped up when I searched) https://github.com/khuttun/PolyM
and another
https://www.justsoftwaresolutions.co.uk/threading/implementing-a-thread-safe-queue-using-condition-variables.html
I will get told off for posting links, but I am not going to type the entire code here and github's not going anywhere soon
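For reference, the core of such a message queue is small. A minimal sketch along the lines of the linked articles (not the code from either link) could be:
#include <condition_variable>
#include <mutex>
#include <queue>

template <typename T>
class blocking_queue {
public:
    void push(T value) {
        {
            std::lock_guard<std::mutex> lk(m_);
            q_.push(std::move(value));
        }
        cv_.notify_one();
    }

    T pop() {                          // blocks until an element is available
        std::unique_lock<std::mutex> lk(m_);
        cv_.wait(lk, [this] { return !q_.empty(); });
        T value = std::move(q_.front());
        q_.pop();
        return value;
    }

private:
    std::mutex m_;
    std::condition_variable cv_;
    std::queue<T> q_;
};
The keyboard thread would push() each line it reads and the worker would loop on pop(), so neither side busy-waits.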

Modern C++. Return data structure from working thread continuing its execution

I need to launch a working thread, perform some initialization, return a data structure as the initialization result, and continue thread execution. What is the best (or a possible) way to achieve this using modern C++ features only? Note that the launched thread should continue its execution (the thread is not terminated as usual). Unfortunately, most solutions assume worker thread termination.
Pseudo code:
// Executes in WorkerThread context
void SomeClass::Worker_treadfun_with_init()
{
    // 1. Initialization calls...
    // 2. Pass/signal initialization results to caller
    // 3. Continue execution of WorkerThread
}

// Executes in CallerThread context
void SomeClass::Caller()
{
    // 1. Create WorkerThread with the SomeClass::Worker_treadfun_with_init() thread function
    // 2. Sleep thread until initialization results are available
    // 3. Grab results
    // 4. Continue execution of CallerThread
}
I think std::future meets your requirements.
// Executes in WorkerThread context
void SomeClass::Worker_treadfun_with_init(std::promise<Result> &pro)
{
    // 1. Initialization calls...
    // 2. Pass/signal initialization results to caller
    pro.set_value(yourInitResult);
    // 3. Continue execution of WorkerThread
}

// Executes in CallerThread context
void SomeClass::Caller()
{
    // 1. Create WorkerThread with the SomeClass::Worker_treadfun_with_init() thread function
    std::promise<Result> pro;
    auto f = pro.get_future();
    std::thread([this, &pro]() { Worker_treadfun_with_init(pro); }).detach();
    // 2. Wait for the initialization results
    auto result = f.get();
    // 3. Grab results
    // 4. Continue execution of CallerThread
}
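One note on the snippet above: the detached thread holds a reference to pro, which lives on the caller's stack. That is safe here only because f.get() does not return until set_value has been called and the worker never touches the promise afterwards. If you would rather not keep a reference to a stack object inside a detached thread at all, the promise can be moved into it (a C++14 variation of the same idea, not part of the original answer):
std::promise<Result> pro;
auto f = pro.get_future();
std::thread([this, p = std::move(pro)]() mutable {
    Worker_treadfun_with_init(p);   // the thread now owns the promise
}).detach();
auto result = f.get();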
Try using a pointer or reference to the data structure with the answer in it, and std::condition_variable to let you know when the answer has been computed:
#include <iostream>
#include <thread>
#include <mutex>
#include <condition_variable>
#include <chrono>
#include <vector>

std::vector<double> g_my_answer;
std::mutex g_mtx;
std::condition_variable g_cv;
bool g_ready = false;

void Worker_treadfun_with_init()
{
    //Do your initialization here
    {
        std::unique_lock<std::mutex> lck( g_mtx );
        for( double val = 0; val < 10; val += 0.3 )
            g_my_answer.push_back( val );
        g_ready = true;
        lck.unlock();
        g_cv.notify_one();
    }

    //Keep doing your other work..., here we'll just sleep
    for( int i = 0; i < 100; ++i )
    {
        std::this_thread::sleep_for( std::chrono::seconds(1) );
    }
}

void Caller()
{
    std::unique_lock<std::mutex> lck(g_mtx);
    std::thread worker_thread = std::thread( Worker_treadfun_with_init );

    //Calling wait will cause the current thread to sleep until g_cv.notify_one() is called.
    //Note: g_ready is a global, so it is not (and cannot be) captured by the lambda.
    g_cv.wait( lck, [](){ return g_ready; } );

    //Print out the answer as the worker thread continues doing its work
    for( auto val : g_my_answer )
        std::cout << val << std::endl;

    //Unlock mutex (or better yet have unique_lock go out of scope)
    // in case the worker thread needs to lock again to finish
    lck.unlock();

    //...
    //Make sure to join the worker thread some time later on.
    worker_thread.join();
}
Of course in actual code you wouldn't use global variables, and would instead pass them by pointer or reference (or as member variables of SomeClass) to the worker function, but you get the point.
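For instance, the globals could be bundled into a structure and passed by reference (a sketch of that refactor using the same headers as the example above; the InitResult name is mine):
struct InitResult {
    std::vector<double> values;
    std::mutex mtx;
    std::condition_variable cv;
    bool ready = false;
};

void Worker_treadfun_with_init(InitResult &res)
{
    {
        std::lock_guard<std::mutex> lck(res.mtx);
        for (double val = 0; val < 10; val += 0.3)
            res.values.push_back(val);
        res.ready = true;
    }
    res.cv.notify_one();
    // ...continue with the rest of the worker's job
}

void Caller()
{
    InitResult res;
    std::thread worker_thread([&res] { Worker_treadfun_with_init(res); });
    {
        std::unique_lock<std::mutex> lck(res.mtx);
        res.cv.wait(lck, [&res] { return res.ready; });
    }
    // use res.values here while the worker continues, then join later on
    worker_thread.join();
}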

What is the equivalent of QTimer in C++ using std or Boost libraries?

I have to perform some task every 5 seconds till the program exits. I don't want to use a thread here.
In Qt I could do it like this:
QTimer *timer = new QTimer(this);
connect(timer, SIGNAL(timeout()), this, SLOT(update()));
timer->start(1000);
but how do I do this in C++ using std or Boost libraries?
Thank you
I have to assume that, by "I don't want to use a thread", you mean you don't want to create threads in your own code every time you need a timer. That's because doing it without threads is actually quite hard.
Assuming C++11, you can actually do this with just the core language (no Boost or any other stuff needed) and using a separate class handling the threading so that all you need in your own code is something like (for example, harassing your ex partner with spam emails, a rather dubious use case):
Periodic spamEx(std::chrono::seconds(60), SendEmailToEx);
The following complete program, compiled with g++ -std=c++11 -o periodic periodic.cpp -lpthread, will run a periodic callback function every second for five seconds (a):
#include <thread>
#include <chrono>
#include <functional>
#include <atomic>

// Not needed if you take couts out of Periodic class.
#include <iostream>

class Periodic {
public:
    explicit Periodic(
        const std::chrono::milliseconds &period,
        const std::function<void ()> &func
    )
        : m_period(period)
        , m_func(func)
        , m_inFlight(true)
    {
        std::cout << "Constructing periodic" << std::endl;
        m_thread = std::thread([this] {
            while (m_inFlight) {
                std::this_thread::sleep_for(m_period);
                if (m_inFlight) {
                    m_func();
                }
            }
        });
    }

    ~Periodic() {
        std::cout << "Destructing periodic" << std::endl;
        m_inFlight = false;
        m_thread.join();
        std::cout << "Destructed periodic" << std::endl;
    }

private:
    std::chrono::milliseconds m_period;
    std::function<void ()> m_func;
    std::atomic<bool> m_inFlight;
    std::thread m_thread;
};

// This is a test driver, the "meat" is above this.
#include <iostream>

void callback() {
    static int counter = 0;
    std::cout << "Callback " << ++counter << std::endl;
}

int main() {
    std::cout << "Starting main" << std::endl;
    Periodic p(std::chrono::seconds(1), callback);
    std::this_thread::sleep_for(std::chrono::seconds(5));
    std::cout << "Ending main" << std::endl;
}
When you create an instance of Periodic, it saves the relevant information and starts a thread to do the work. The thread (a lambda) is simply a loop which first delays for the period then calls your function. It continues to do this until the destructor indicates it should stop.
The output is, as expected:
Starting main
Constructing periodic
Callback 1
Callback 2
Callback 3
Callback 4
Ending main
Destructing periodic
Destructed periodic
(a) Note that the time given above is actually the time from the end of one callback to the start of the next, not the time from start to start (what I would call the true cycle time). Provided your callback is sufficiently quick compared to the period, the difference will hopefully be unnoticeable.
In addition, the thread does this delay no matter what, so the destructor may be delayed for up to a full period before returning.
If you do require a start-to-start period and fast clean-up, you can use the following thread instead. It does true start-to-start timing by working out the duration of the callback and only delaying by the rest of the period (or not delaying at all if the callback used the entire period).
It also uses a smaller sleep so that clean-up is fast. The thread function would be:
m_thread = std::thread([this] {
    // Ensure we wait the initial period, then start loop.
    auto lastCallback = std::chrono::steady_clock::now();
    while (m_inFlight) {
        // Small delay, then get current time.
        std::this_thread::sleep_for(std::chrono::milliseconds(100));
        auto timeNow = std::chrono::steady_clock::now();

        // Only callback if still active and current period has expired.
        if (m_inFlight && timeNow - lastCallback >= m_period) {
            // Start new period and call callback.
            lastCallback = timeNow;
            m_func();
        }
    }
});
Be aware that, if your callback takes longer than the period, you will basically be calling it almost continuously (there'll be a 100ms gap at least).
You realize that QTimer does use a thread - or polls the timer in the main event loop. You can do the same. The conceptual problem you're likely having is that you don't have a UI and therefore, probably didn't create an event loop.
Here's the simplest way to leverage Boost Asio to have an event loop:
Live On Coliru
#include <boost/asio.hpp>
#include <boost/asio/high_resolution_timer.hpp>
#include <functional>
#include <chrono>
#include <iostream>

using namespace std::chrono_literals;
using boost::system::error_code;
namespace ba = boost::asio;

int main() {
    ba::io_service svc; // prefer io_context in recent boost versions
    ba::high_resolution_timer timer{svc};

    std::function<void()> resume;
    resume = [&] {
        timer.expires_from_now(50ms); // just for demo, don't wait 5s but 50ms
        timer.async_wait([=,&timer](error_code ec) {
            std::cout << "Timer: " << ec.message() << "\n";
            if (!ec)
                resume();
        });
    };
    resume();

    svc.run_for(200ms); // probably getting 3 or 4 successful callbacks

    timer.cancel();
    svc.run(); // graceful shutdown
}
Prints:
Timer: Success
Timer: Success
Timer: Success
Timer: Success
Timer: Operation canceled
That may not make too much sense depending on the rest of your application. In such cases, you can do the same but use a separate thread (yes) to run that event loop.
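For example, the event loop can be run on its own std::thread while the main thread does other work. A sketch along the same lines as the snippet above (same includes and aliases, plus <thread>):
int main() {
    ba::io_service svc;
    ba::high_resolution_timer timer{svc};

    std::function<void()> resume;
    resume = [&] {
        timer.expires_from_now(50ms);
        timer.async_wait([&](error_code ec) {
            std::cout << "Timer: " << ec.message() << "\n";
            if (!ec)
                resume();
        });
    };
    resume();                                      // schedule the first tick

    std::thread io_thread([&svc] { svc.run(); });  // the event loop runs on its own thread

    // the main thread is free to do other work here
    std::this_thread::sleep_for(200ms);

    svc.post([&timer] { timer.cancel(); });        // stop the timer chain from the loop's thread
    io_thread.join();                              // run() returns once nothing is pending
}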

Thread pool gets stuck

I created a thread pool to distribute 100 computations between 4 threads.
I cannot understand why the following code gets stuck after 4 computations. After each computation the thread should be released, and I expect that .joinable() returns false so the program will continue.
Results:
[[[01] calculated
] calculated
2] calculated
[3] calculated
Code:
#include <string>
#include <iostream>
#include <vector>
#include <thread>
#include <boost/date_time/posix_time/posix_time.hpp>
#include <boost/thread/thread.hpp>
#include <cmath>

class AClass
{
public:
    void calculation_single(std::vector<double> *s, int index)
    {
        (*s)[index] = sin(double(index));
        std::cout << "[" << index << "] calculated \n";
    }

    void calculation()
    {
        const uint N_nums = 100;
        const uint N_threads = 4;

        std::vector<double> A;
        A.assign(N_nums, 0.0);

        std::vector<std::thread> thread_pool;
        for (uint i = 0; i < N_threads; i++)
            thread_pool.push_back(std::thread());

        uint A_index = 0;
        while (A_index < N_nums)
        {
            int free_thread = -1;
            for (uint i = 0; i < N_threads && free_thread < 0; i++)
                if (!thread_pool[i].joinable())
                    free_thread = i;
            if (free_thread > -1)
            {
                thread_pool[free_thread] =
                    std::thread(
                        &AClass::calculation_single,
                        this,
                        &A,
                        int(A_index));
                A_index++;
            }
            else
            {
                boost::this_thread::sleep(boost::posix_time::milliseconds(1));
            }
        }

        // wait for tasks to finish
        for (std::thread& th : thread_pool)
            th.join();
    }
};

int main()
{
    AClass obj;
    obj.calculation();
    return 0;
}
A thread is joinable if it isn't empty basically.
A thread with a completed task is not empty.
std::thread bob;
bob is not joinable.
Your threads are. Nothing you do makes them not joinable.
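To see why, note that a std::thread stays joinable after its task has finished; only join() or detach() clear that, so testing joinable() cannot tell you that a worker is idle. A small illustration (mine, not the poster's code):
#include <cassert>
#include <chrono>
#include <thread>

int main() {
    std::thread t([] {});                                   // the task finishes almost immediately
    std::this_thread::sleep_for(std::chrono::seconds(1));
    assert(t.joinable());                                    // still joinable, even though it is done
    t.join();
    assert(!t.joinable());                                   // only join()/detach() change this
}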
Also, busy waiting is a crappy thread pool.
Create a producer-consumer queue, with a pool of threads consuming tasks and an abort method. Feed tasks into the queue via a packaged task and return a std::future<T>. Don't spawn a new thread per task.
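A sketch of what that could look like applied to this question's 100 computations (the queue wiring is illustrative, not the poster's code): a fixed pool of four consumers takes std::function<void()> tasks from a queue, and each task wraps a std::packaged_task so results can be collected through futures.
#include <cmath>
#include <condition_variable>
#include <functional>
#include <future>
#include <iostream>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

int main() {
    std::mutex m;
    std::condition_variable cv;
    std::queue<std::function<void()>> tasks;
    bool done = false;

    // fixed pool of 4 consumer threads
    std::vector<std::thread> pool;
    for (int t = 0; t < 4; ++t)
        pool.emplace_back([&] {
            for (;;) {
                std::function<void()> task;
                {
                    std::unique_lock<std::mutex> lk(m);
                    cv.wait(lk, [&] { return done || !tasks.empty(); });
                    if (done && tasks.empty()) return;
                    task = std::move(tasks.front());
                    tasks.pop();
                }
                task();
            }
        });

    // producer: queue 100 packaged tasks and keep their futures
    std::vector<std::future<double>> results;
    for (int i = 0; i < 100; ++i) {
        auto task = std::make_shared<std::packaged_task<double()>>(
            [i] { return std::sin(double(i)); });
        results.push_back(task->get_future());
        {
            std::lock_guard<std::mutex> lk(m);
            tasks.push([task] { (*task)(); });
        }
        cv.notify_one();
    }

    // collect results in order; get() blocks until each task has run
    for (std::size_t i = 0; i < results.size(); ++i)
        std::cout << "[" << i << "] = " << results[i].get() << "\n";

    // shut the pool down
    {
        std::lock_guard<std::mutex> lk(m);
        done = true;
    }
    cv.notify_all();
    for (auto &th : pool)
        th.join();
}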