boost::asio async_wait seems to be blocking - C++

I was reading the Boost.Asio documentation and came across this deadline_timer example.
#include <iostream>
#include <boost/asio.hpp>
#include <boost/bind.hpp>
#include <boost/date_time/posix_time/posix_time.hpp>
/*This timer example shows a timer that fires once every second.*/
void print(const boost::system::error_code& e, boost::asio::deadline_timer* t, int* count)
{
    if (*count < 5)
    {
        std::cout << *count << std::endl;
        ++(*count);
        t->expires_at(t->expires_at() + boost::posix_time::seconds(1));
        t->async_wait(boost::bind(print, boost::asio::placeholders::error, t, count));
    }
}

int main()
{
    boost::asio::io_service io;
    int count = 0;
    boost::asio::deadline_timer t(io, boost::posix_time::seconds(10));

    auto myfunc = boost::bind(print, boost::asio::placeholders::error, &t, &count);
    t.async_wait(myfunc);
    std::cout << "async wait " << std::endl;

    io.run();

    std::cout << "Just called io.run() " << std::endl;
    std::cout << "Final count is " << count << std::endl;
    return 0;
}
The async_wait() call seems to be blocking (i.e. waiting for the 10-second timer to expire).
The output from the above program is as follows.
async wait
0
1
2
3
4
Just called io.run()
Final count is 5
I would expect async_wait() to create a separate thread and wait for the timer to expire there, while the main thread keeps executing.
That is, I would expect the program to print
Just called io.run()
Final count is 5
while the timer is still waiting to expire. This implementation looks more like a blocking wait. Is my understanding of async_wait() wrong? What am I missing?

The io.run(); statement is the key to the difference between the output you're getting and the output you're expecting.
In the Asio framework, asynchronous operations need a thread on which to run their callbacks. But because Asio is relatively low-level, it expects you to provide that thread yourself.
As a result, calling io.run(); from the main thread tells the framework that you intend to run all the callbacks on the main thread. That's acceptable, but it also means the program will block inside io.run();.
If you intend the callbacks to run on a separate thread, you'll have to write something like this (which also needs #include <thread>):
std::thread run_thread([&]() {
    io.run();
});

std::cout << "Just called io.run() " << std::endl;
std::cout << "Final count is " << count << std::endl;

run_thread.join();
return 0;

The async_wait function isn't blocking; run is. That's run's job. If you don't want a thread to block in the io_service's processing loop, don't have that thread call run.
The async_wait function doesn't create any threads. That would make it expensive and make it much harder to control the number of threads servicing the io_service.
Your expectation is unreasonable because returning from main terminates the process. So who or what would wait for the timer?
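To see this for yourself, here is a minimal sketch (using the same io_service and deadline_timer types as the question) showing that async_wait() returns immediately and that io.run() is the call that actually blocks:
#include <boost/asio.hpp>
#include <boost/date_time/posix_time/posix_time.hpp>
#include <iostream>

int main()
{
    boost::asio::io_service io;
    boost::asio::deadline_timer t(io, boost::posix_time::seconds(2));

    // async_wait only queues the handler and returns right away.
    t.async_wait([](const boost::system::error_code&) {
        std::cout << "handler ran inside io.run()" << std::endl;
    });
    std::cout << "async_wait() has already returned" << std::endl;

    io.run();   // this is the call that blocks until the handler has run
    std::cout << "io.run() returned" << std::endl;
    return 0;
}
The first message prints immediately; the handler's message only appears two seconds later, from inside io.run().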

Related

ASIO io_service does not process post handlers on second run() call

I want to be able to post a group of handlers to boost::asio::io_service and then run all of them. When all handlers have finished, I want to add a new group and run() again, and repeat this forever in one thread.
But I have a problem: after the first run() call, subsequently posted jobs are ignored.
Here is a small example (coliru):
#include <iostream>
#include <boost/asio.hpp>

int main()
{
    boost::asio::io_service io;

    io.post([]{ std::cout << "Hello"; });
    io.run();

    io.post([]{ std::cout << ", World!" << std::endl; });
    io.run();
}
It prints only the "Hello" message and then exits successfully.
Why does this example not print "Hello, World!"?
Boost version: 1.71.0
You have to call restart:
A normal exit from the run() function implies that the io_context
object is stopped (the stopped() function returns true). Subsequent
calls to run(), run_one(), poll() or poll_one() will return
immediately unless there is a prior call to restart().
io.post([]{ std::cout << "Hello";});
io.run();
io.post([]{ std::cout << ", World!" << std::endl; });
io.restart(); // just here
io.run();
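For completeness, the full program with the fix applied would look roughly like this (restart() is the io_context spelling available from Boost 1.66 onward; older releases spelled it reset()):
#include <iostream>
#include <boost/asio.hpp>

int main()
{
    boost::asio::io_service io;

    io.post([]{ std::cout << "Hello"; });
    io.run();       // returns once the queue is empty; io is now stopped

    io.post([]{ std::cout << ", World!" << std::endl; });
    io.restart();   // clear the stopped state before running again
    io.run();
}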

Are threads determined at compile time or at runtime?

I was asking myself: when I create a pool of threads in code and then compile it, does the compiled code contain a copy of the code for each thread?
And if I use a function-like macro and pass it to the threads, is the macro expanded at compile time (which is what I think) or at runtime?
And if it is expanded at compile time, why does the following code need a mutex?
#include <boost/asio.hpp>
#include <boost/thread.hpp>
#include <boost/date_time.hpp>
#include <iostream>

namespace asio = boost::asio;

#define PRINT_ARGS(msg) do {\
    boost::lock_guard<boost::mutex> lg(mtx); \
    std::cout << '[' << boost::this_thread::get_id() \
              << "] " << msg << std::endl; \
} while (0)

int main() {
    asio::io_service service;
    boost::mutex mtx;

    for (int i = 0; i < 20; ++i) {
        service.post([i, &mtx]() {
            PRINT_ARGS("Handler[" << i << "]");
            boost::this_thread::sleep(
                boost::posix_time::seconds(1));
        });
    }

    boost::thread_group pool;
    for (int i = 0; i < 4; ++i) {
        pool.create_thread([&service]() { service.run(); });
    }
    pool.join_all();
}
Here the lock_guard makes the std::cout statement a critical section, although only the main thread posts to the io_service.
The threads running the tasks then work on an already-built queue of already-built lambda functions, which makes me think there is no need for a mutex. Is this thinking right?
Here I will simulate the macro expansion during compilation:
#include <boost/asio.hpp>
#include <boost/thread.hpp>
#include <boost/date_time.hpp>
#include <iostream>

namespace asio = boost::asio;

#define PRINT_ARGS(msg) do {\
    boost::lock_guard<boost::mutex> lg(mtx); \
    std::cout << '[' << boost::this_thread::get_id() \
              << "] " << msg << std::endl; \
} while (0)

int main() {
    asio::io_service service;
    boost::mutex mtx;

    for (int i = 0; i < 20; ++i) {
        service.post([i, &mtx]() {
            // PRINT_ARGS("Handler[" << i << "]"); expands to:
            do {
                boost::lock_guard<boost::mutex> lg(mtx);
                std::cout << '[' << boost::this_thread::get_id()
                          << "] " << "Handler[" << i << "]" << std::endl;
            } while (0);
            boost::this_thread::sleep(
                boost::posix_time::seconds(1));
        });
    }

    boost::thread_group pool;
    for (int i = 0; i < 4; ++i) {
        pool.create_thread([&service]() { service.run(); });
    }
    pool.join_all();
}
And then the program will run in the following order:
1- main thread: creates the io_service instance
2- main thread: creates the mutex instance
3- main thread: runs the for loop 20 times; each time it posts a task (the lambda function), which the book describes as adding that function object to an internal queue in the io_service.
So my question is: does the main thread add 20 lambda function objects to the queue, each one with its own value of i?
Then, when the 4 new threads start work, each is given the thread function run, which according to the same book removes the function objects one by one and executes them, so:
thread 1: removes lambda 1 and executes it, as a separate instance with its own unique i
thread 2: removes lambda 2 and executes it, as a separate instance with its own unique i
thread 3: removes lambda 3 and executes it, as a separate instance with its own unique i
thread 4: removes lambda 4 and executes it, as a separate instance with its own unique i
and then thread 1 gets lambda 5 again, and so on.
This is based on my understanding that the queue holds 20 function objects (the lambdas, possibly wrapped in some sort of wrapper), so each thread takes a separate object and for that reason needs no mutex ("20 separate pieces of compiled code").
But if the tasks in the queue are just references to the same single piece of code (instantiated when the for loop runs), then a mutex would be needed to prevent 2 threads from entering the critical code at the same time.
Which of these scenarios is actually the case here, judging from the code?
Macros are always expanded at compile time, but the compiler has only very rudimentary knowledge of threads (mostly the ability to say that certain variables are thread local).
The code only exists once, both in the on-disk image and in the in-memory copy that actually runs.
Locking the mutex in PRINT_ARGS ensures that each handler's message is printed in its entirety without being interrupted by another thread. (Otherwise an operation might start to print its message, get interrupted by another operation on a different thread which prints its own message, and only then would the remainder of the first operation's message get printed.)
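To make that concrete, here is a small stand-alone sketch (using std::thread and std::mutex rather than the Boost equivalents, purely for brevity). Each message is built from several << calls; the lock makes the whole sequence atomic with respect to the other threads, and removing it allows fragments from different threads to interleave on one line.
#include <iostream>
#include <mutex>
#include <thread>
#include <vector>

std::mutex cout_mtx;

void print_message(int id)
{
    // Each << below is a separate call into std::cout. The lock keeps the
    // whole line together; without it, another thread's output could land
    // between any two of these calls.
    std::lock_guard<std::mutex> lg(cout_mtx);
    std::cout << '[' << std::this_thread::get_id()
              << "] Handler[" << id << "]" << std::endl;
}

int main()
{
    std::vector<std::thread> pool;
    for (int i = 0; i < 4; ++i)
        pool.emplace_back(print_message, i);
    for (auto& t : pool)
        t.join();
}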

Execute callback function on main thread from std::thread

I have a requirement to execute a callback function when a std::thread exits, and the callback must run on the main thread.
After creating the thread I need to detach it, and I cannot block the main loop waiting for the thread to complete.
I tried using std::signal, but that does not seem to execute the callback on the main thread:
#include <thread>
#include <csignal>
#include <iostream>

std::thread::id main_thread_id;

void func2()
{
    for(int i = 0; i < 10000000; i++)
    {
        // do something
    }
}

void func()
{
    for(int i = 0; i < 10; i++)
    {
        func2();
    }
    std::raise(SIGUSR1);
}

void callback(int signal)
{
    std::cout << "SIGNAL: " << signal << " THREAD ID:" <<
        std::this_thread::get_id() << std::endl;
    bool b = std::this_thread::get_id() == main_thread_id;
    std::cout << "IS EXECUTED ON MAIN THREAD: " << b << std::endl;
}

int main()
{
    main_thread_id = std::this_thread::get_id();
    std::cout << "MAIN THREAD ID: " << std::this_thread::get_id() << std::endl;
    std::signal(SIGUSR1, callback);

    std::thread t1(func);
    t1.detach();

    for(int i = 0; i < 20; i++)
    {
        func2();
    }

    if(t1.joinable())
        t1.join();
}
The result I get is that the callback function is not executed on the main thread. Please suggest a way in which I can create a worker thread and have a callback function called on the main thread when that thread exits.
Thanks for the help.
There are a few ways to do this.
First, your main thread could be running a message loop. In that case, you queue up a message whose payload tells the main thread to run some code (either carry the code to run as part of the message, e.g. via a pointer, or put it in some known spot that the main thread checks).
A second approach is to return something like a std::future<std::function<void()>> object, and have the main thread check whether the future is ready. When it is ready, it runs the code.
A third approach is to create a concurrent queue that the main thread waits on, and push your message (containing the code to run) onto that queue.
All of these require the active cooperation of the main thread. The main thread cannot be preempted and told to run different code without its cooperation.
Which is best depends on features of your program you did not mention in your question. If you have a GUI with a message loop, use the message loop. If you are writing a streaming processor that parallelizes some work, don't need prompt execution, and will eventually want to block on the parallel work, a future might be best. If you are writing a message-passing, channel-style app, a set of queues might be best.
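As a rough illustration of the third approach, here is a minimal sketch. The CallbackQueue type and its names are made up for this example: the worker pushes a std::function onto a mutex-protected queue, and the main thread drains it at points of its own choosing.
#include <functional>
#include <iostream>
#include <mutex>
#include <queue>
#include <thread>

// A very small thread-safe queue of callbacks for the main thread to drain.
class CallbackQueue {
public:
    void push(std::function<void()> fn) {
        std::lock_guard<std::mutex> lg(mtx_);
        queue_.push(std::move(fn));
    }

    // Runs every queued callback on the calling thread (intended: main).
    void drain() {
        std::queue<std::function<void()>> local;
        {
            std::lock_guard<std::mutex> lg(mtx_);
            std::swap(local, queue_);
        }
        while (!local.empty()) {
            local.front()();
            local.pop();
        }
    }

private:
    std::mutex mtx_;
    std::queue<std::function<void()>> queue_;
};

int main()
{
    CallbackQueue callbacks;

    std::thread worker([&callbacks] {
        // ... do the actual work here ...
        // When done, hand the callback to the main thread instead of running it.
        callbacks.push([] {
            std::cout << "callback running on the main thread" << std::endl;
        });
    });

    // The main loop cooperates by draining the queue periodically.
    for (int i = 0; i < 20; ++i) {
        // ... main-thread work ...
        callbacks.drain();
    }

    worker.join();      // or detach, as in the question
    callbacks.drain();  // pick up anything queued after the loop ended
}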

boost::asio signal_set handler only executes after first signal is caught and ignores consecutive signals of the same type

I have a program and would like to stop it by sending SIGINT, so that it writes some data to a file instead of exiting immediately. However, if the user of the program sends SIGINT again, the program should quit immediately and forget about writing the data.
For portability reasons I would like to use boost::asio for this purpose.
My initial (simplified) approach (see below) did not work. Is this not possible, or am I missing something?
The handler seems to be called only once (printing the message), and the program always stops only when the loop has reached the maximum iteration number.
void handler(
    const boost::system::error_code& error,
    int signal_number) {
    if (!error) {
        static bool first = true;
        if (first) {
            std::cout << " A signal(SIGINT) occurred." << std::endl;
            // do something like writing data to a file
            first = false;
        }
        else {
            std::cout << " A signal(SIGINT) occurred, exiting...." << std::endl;
            exit(0);
        }
    }
}

int main() {
    // Construct a signal set registered for process termination.
    boost::asio::io_service io;
    boost::asio::signal_set signals(io, SIGINT);

    // Start an asynchronous wait for one of the signals to occur.
    signals.async_wait(handler);
    io.run();

    size_t i;
    for (i = 0; i < std::numeric_limits<size_t>::max(); ++i) {
        // time stepping loop, do some computations
    }
    std::cout << i << std::endl;
    return 0;
}
When your first event is handled, you don't post any new work on the service object, so it exits.
This means that only then (after the io_service has exited) does the tight loop start, which may not be what you expected.
If you want to listen for SIGINT again, you have to wait on the signal set again from within the handler:
#include <boost/asio.hpp>
#include <boost/asio/signal_set.hpp>
#include <boost/bind.hpp>
#include <boost/atomic.hpp>
#include <iostream>

void handler(boost::asio::signal_set& this_, boost::system::error_code error, int signal_number) {
    if (!error) {
        static boost::atomic_bool first(true);
        if (first) {
            // do something like writing data to a file
            std::cout << " A signal(SIGINT) occurred." << std::endl;
            first = false;
            this_.async_wait(boost::bind(handler, boost::ref(this_), _1, _2));
        }
        else {
            std::cout << " A second signal(SIGINT) occurred, exiting...." << std::endl;
            exit(1);
        }
    }
}

int main() {
    // Construct a signal set registered for process termination.
    boost::asio::io_service io;
    boost::asio::signal_set signals(io, SIGINT);

    // Start an asynchronous wait for one of the signals to occur.
    signals.async_wait(boost::bind(handler, boost::ref(signals), _1, _2));
    io.run();

    return 2;
}
As you can see I bound the signal_set& reference to the handler in order to be able to async_wait on it after receiving the first signal. Also, as a matter of principle, I made first an atomic (although that's not necessary until you run the io_service on multiple threads).
Did you actually wish to run the io_service in the background? In that case, make it look like so:
signals.async_wait(boost::bind(handler, boost::ref(signals), _1, _2));
boost::thread(boost::bind(&boost::asio::io_service::run, boost::ref(io))).detach();

while (true)
{
    std::cout << "Some work on the main thread...\n";
    boost::this_thread::sleep_for(boost::chrono::seconds(1));
}
With typical output:
Some work on the main thread...
Some work on the main thread...
Some work on the main thread...
^CSome work on the main thread...
A signal(SIGINT) occurred.
Some work on the main thread...
Some work on the main thread...
^CSome work on the main thread...
A second signal(SIGINT) occurred, exiting....

What exactly is join() in Boost::thread? (C++)

In Java, I would do something like:
Thread t = new MyThread();
t.start();
I start the thread by calling the start() method. So later I can do something like:
for (int i = 0; i < limit; ++i)
{
    Thread t = new MyThread();
    t.start();
}
to create a group of threads that each execute the code in their run() method.
However, in C++ there is no such start() method. Using Boost, it seems that if I want a thread to start running, I have to call the join() method to make the thread run.
#include <iostream>
#include <boost/thread.hpp>

class Worker
{
public:
    Worker()
    {
        // the thread is not-a-thread until we call start()
    }

    void start(int N)
    {
        m_Thread = boost::thread(&Worker::processQueue, this, N);
    }

    void join()
    {
        m_Thread.join();
    }

    void processQueue(unsigned N)
    {
        float ms = N * 1e3;
        boost::posix_time::milliseconds workTime(ms);

        std::cout << "Worker: started, will work for "
                  << ms << "ms"
                  << std::endl;

        // We're busy, honest!
        boost::this_thread::sleep(workTime);

        std::cout << "Worker: completed" << std::endl;
    }

private:
    boost::thread m_Thread;
};

int main(int argc, char* argv[])
{
    std::cout << "main: startup" << std::endl;

    Worker worker, w2, w3, w5;
    worker.start(3);
    w2.start(3);
    w3.start(3);
    w5.start(3);

    worker.join();
    w2.join();
    w3.join();
    w5.join();

    for (int i = 0; i < 100; ++i)
    {
        Worker w;
        w.start(3);
        w.join();
    }

    //std::cout << "main: waiting for thread" << std::endl;
    std::cout << "main: done" << std::endl;

    return 0;
}
In the code above, the for loop creates 100 threads. Normally I would use a boost::thread_group to add the thread functions and then run them all with join_all(). However, I don't know how to do that when the thread function is a member of a class that uses various class members.
Also, the loop above does not behave like the loop in Java: it makes each thread execute sequentially, not all at once like the separately created threads above it, each of which has its own join() called.
What exactly is join() in Boost? And please help me create a group of threads that share the same class.
join doesn't start the thread; it blocks you until the thread you're joining finishes. You use it when you need to wait for the thread you started to finish its run (for example, if it computes something and you need the result).
What starts the thread is boost::thread, which creates the thread and calls the thread function you passed to it (in your case, Worker::processQueue).
The reason you had a problem with the loop is not that the threads didn't start, but that your main thread didn't wait for them to execute before finishing. I'm guessing you didn't see this problem in Java because of scheduling differences, aka "undefined behavior". Edit: in Java the threading behaves slightly differently; see the comment below for details. That explains why you didn't see it in Java.
Here's a question about boost::thread_group. Read the code in the question and the answers; it will help you.
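Since the question also asks how to use boost::thread_group with a member function, here is a minimal sketch. The Worker below is a cut-down stand-in for the one in the question; the key point is that all the threads are created first and only then joined, so they run concurrently.
#include <boost/thread.hpp>
#include <boost/bind.hpp>
#include <iostream>
#include <vector>

// Cut-down stand-in for the Worker class from the question.
struct Worker {
    void processQueue(unsigned n)
    {
        boost::this_thread::sleep(boost::posix_time::seconds(n));
        std::cout << "Worker: completed" << std::endl;
    }
};

int main()
{
    std::vector<Worker> workers(4);   // the objects whose member function the threads run
    boost::thread_group pool;

    // Start every thread first; each runs a member function on its own Worker.
    for (std::size_t i = 0; i < workers.size(); ++i)
        pool.create_thread(boost::bind(&Worker::processQueue, &workers[i], 3u));

    // Only now block; all four threads have been running concurrently.
    pool.join_all();
    return 0;
}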
Joining a thread does the same thing in Boost as it does in Java: it waits for the thread to finish running.
Plus, if I remember correctly, Boost's threads run upon construction. You don't start them explicitly.
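A tiny sketch of both points: a default-constructed boost::thread is not-a-thread, constructing it with a callable is what starts it, and join() only waits for it to finish.
#include <boost/thread.hpp>
#include <iostream>

void work()
{
    std::cout << "work() is running" << std::endl;
}

int main()
{
    boost::thread t;           // not-a-thread: nothing is running yet
    t = boost::thread(work);   // the thread starts here, on construction
    t.join();                  // join only blocks until work() has finished
    return 0;
}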