Any way to obtain additional information about io_service - c++

While trying to hack together a clean shutdown of an asio app, I find it quite irritating that I can't tell whether the io_service stopped because I called .stop() or because it ran out of handlers.
Also, when I want to kill it, I can't find a way to see whether it still has handlers in its handler queue, or even whether some handlers are running at the moment.
So:
1) Any way to see what stopped the io_service: .stop() or running out of work (other than the awful manual bIsAppShuttingDown flag)?
2) Any way to see whether the io_service (after I called stop) is still processing something? So that I can write:
ios->stop();
while (!ios->finished())
    sleep(1); // :/
delete ios;

Typically the pattern is to dispatch on the io_service in a separate thread, for example:
_thread.reset(new std::thread([&]() { _service.run(); })); // so the dispatching here is in a thread
Subsequently, if you want to stop it and wait for it to finish cleanly, then the best way is:
_service.stop();
_thread->join();
This way the calling thread is blocked until the dispatch thread terminates (which happens when run() returns after executing its last handler). There is no way (AFAIK) of knowing whether the io_service ran out of work or whether stop() was called; you can certainly prevent the former by instantiating an io_service::work on the service. See the docs.
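A minimal sketch of that pattern (variable names are my own, not from the question), with an io_service::work object added to rule out the run-out-of-work case:
#include <boost/asio.hpp>
#include <memory>
#include <thread>

int main() {
    boost::asio::io_service service;

    // Keeps run() from returning while there happen to be no queued handlers.
    std::unique_ptr<boost::asio::io_service::work> work(
        new boost::asio::io_service::work(service));

    std::thread ioThread([&] { service.run(); });

    // ... post handlers / start async operations here ...

    work.reset();       // allow run() to return once outstanding handlers finish
    service.stop();     // or omit stop() if you prefer to drain pending handlers
    ioThread.join();    // blocks until the dispatch thread terminates
}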

Related

How to interrupt a websocket (using Boost.Beast) from another thread?

I'm using Boost.Beast 1.74.0. In another thread I try to close the websocket, but the code breaks at "acceptor.accept(socket, endpoint)" and I receive "Signal: SIG32 (Real-time event 32)" after calling close.
Here is part of the code that listens for connections. What do I need to change to interrupt the accept correctly?
...
_acceptor = &acceptor;
_keepAlive = true;
while (_keepAlive) {
    tcp::socket socket{ioc};

    // Block until we get a connection
    acceptor.accept(socket, endpoint);

    // Launch the session, transferring ownership of the socket
    std::thread(
        &WebSocketServer::doSession,
        std::move(socket),
        this,
        this,
        getHeaderServer()
    ).detach();
}
The close function is called from another thread:
void WebSocketServer::close() {
    if (_acceptor != nullptr) this->close();
    _keepAlive = false;
}
glibc uses SIG32 to signal the cancellation of threads created using the pthread library. Are you trying to use pthread_kill?
If not, you may be witnessing that only because you are running it under GDB, which should be fixable by telling GDB to ignore it:
handle SIG32 nostop noprint
Finally, to the original question:
There are interruption points in Boost.Thread. They could help you, if and only if you can switch from std::thread to boost::thread. Also, you have to change the thread's code to actually check for interruptions: https://www.boost.org/doc/libs/1_75_0/doc/html/thread/thread_management.html#thread.thread_management.tutorial.interruption
Since it actually sounds like you want to terminate the accept loop, why not "simply" cancel the acceptor? I'm not entirely sure this works with synchronous operations, but you could of course easily use an async accept.
Take care to synchronize access to the acceptor object itself. That means either running cancel() on the same thread that does the async_accept, or going through the same strand. By this point it surely sounds like it's easier to just do the whole thing asynchronously.
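To make that last suggestion concrete, here is a minimal sketch (my own illustration, not the asker's code; the port and overall structure are made up) that drives the acceptor with async_accept and stops it by posting cancel() onto the same io_context:
#include <boost/asio.hpp>
#include <functional>
#include <thread>

namespace asio = boost::asio;
using tcp = asio::ip::tcp;

int main() {
    asio::io_context ioc;
    tcp::acceptor acceptor{ioc, tcp::endpoint{tcp::v4(), 8080}};

    // Re-arming accept loop; operation_aborted means cancel() was called.
    std::function<void()> doAccept = [&] {
        acceptor.async_accept([&](boost::system::error_code ec, tcp::socket socket) {
            if (ec == asio::error::operation_aborted)
                return;                 // shutting down, leave the loop
            if (!ec) {
                // hand `socket` off to a session here
            }
            doAccept();                 // wait for the next client
        });
    };
    doAccept();

    std::thread ioThread([&] { ioc.run(); });

    // From any other thread: stop accepting by posting onto the io_context,
    // so the acceptor is only ever touched from the thread running it.
    asio::post(ioc, [&] { acceptor.cancel(); });

    ioThread.join();
}
With this shape there is no blocked accept() call left to interrupt; operation_aborted ends the loop cleanly.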

Cross-thread call, a.k.a. run on main/UI thread from another thread, without needing dependencies

I'm working on a C++ mobile product, and I need the app's main thread to keep running without blocking while heavy work is done on a background thread, and then to run code back on the main thread afterwards. But I realized there is no runOnMainThread/runOnUIThread in the C++ thread API. While trying to figure this out, I found that you either need to depend on a library or create your own thread event queue. That's fine, but I'd like to have behavior equivalent to runOnUIThread.
How it does not work: the mentioned library creates a timer, installs a SIGALRM signal handler, and dispatches queued tasks when the signals fire. This allows tasks to be processed on the main thread even when it is busy. However, POSIX permits only a small set of async-signal-safe functions to be invoked inside a signal handler. Running arbitrary C++ code inside a signal handler violates that restriction and leaves the application in a hopelessly doomed state.
After some research and development, I've created a library called NonBlockpp.
It is a small C++ library that allows a C++ mobile application to process heavy, time-consuming tasks on a background thread and then get back to the main thread again. It has been tested and fires the main-thread event.
It also allows saving tasks and firing them later; the tasks don't block each other and are thread safe.
How it works: the project has moved from signal handlers to pollEvent, since running arbitrary code in a signal handler might not be safe. Please take a look at the new changes.
If you have any query or suggestion, please don't hesitate to raise an issue and we can discuss it together.
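For readers who just want the general idea, here is a minimal sketch of a poll-based main-thread queue (my own illustration; this is not the actual NonBlockpp API): background threads push tasks into a locked queue, and the main/UI thread drains it on every tick.
#include <functional>
#include <iostream>
#include <mutex>
#include <queue>
#include <thread>

class MainThreadQueue {
public:
    void post(std::function<void()> task) {            // callable from any thread
        std::lock_guard<std::mutex> lock(_mutex);
        _tasks.push(std::move(task));
    }
    void pollEvents() {                                 // call from the main thread
        std::queue<std::function<void()>> pending;
        {
            std::lock_guard<std::mutex> lock(_mutex);
            std::swap(pending, _tasks);
        }
        while (!pending.empty()) {
            pending.front()();
            pending.pop();
        }
    }
private:
    std::mutex _mutex;
    std::queue<std::function<void()>> _tasks;
};

int main() {
    MainThreadQueue queue;
    std::thread worker([&] {
        int result = 42;                                // pretend heavy work
        queue.post([result] { std::cout << "back on main thread: " << result << "\n"; });
    });
    worker.join();
    queue.pollEvents();                                 // e.g. called every frame / app tick
}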

Qt: The relation between Worker thread and GUI Events

I have an ordinary GUI thread (the main window) and want to attach a Worker to it. The Worker object will be instantiated, moved to its own thread, and then left to run independently, executing a messaging routine (non-blocking).
This is where the worker is created:
void MainWindow::on_connectButton_clicked()
{
    Worker* workwork;
    workwork = new Worker();
    connect(workwork, SIGNAL(invokeTestResultsUpdate(int,quint8)),
            this, SLOT(updateTestResults(int,quint8)), Qt::QueuedConnection);
    connect(this, SIGNAL(emitInit()), workwork, SLOT(init()));
    workwork->startBC();
}
This is where the Worker starts:
void Worker::startBC()
{
    t1553 = new QThread();
    this->moveToThread(t1553);
    connect(t1553, SIGNAL(started()), this, SLOT(run1553Process()));
    t1553->start();
}
I have two problems here, regarding the event queue of the new thread:
The first and minor problem is that, while I can receive the signals from the Worker thread (namely invokeTestResultsUpdate), I cannot invoke the init method by emitting the emitInit signal from MainWindow. It just doesn't fire unless I call it directly or connect it via Qt::DirectConnection. Why is this happening? Is it because I have to start the Worker thread's own message loop explicitly? Or something else I'm not aware of? (I really fail to wrap my head around the concepts of threads, event loops, and the signal/slot mechanism and how they relate to each other, even though I try. I welcome any fresh perspective here too.)
The second and more obscure problem is: the run1553Process method does some heavy work. By heavy work, I mean a very high rate of data. There is a loop running, and I try to receive the data flowing from a device (in real time) as soon as it lands in the buffer, mostly using extern API functions. It then fires the mentioned invokeTestResultsUpdate signal towards the GUI each time it receives a message, updating the message-number box. It's nothing more than that.
What I'm experiencing is weird: normally the messaging routine runs mostly unhindered, but when I resize, move, or hide/show the main window, the Worker thread skips many messages. And the resizing action is really slow (it doesn't respond very quickly). It's really driving me crazy.
(Note: I have tried subclassing QThread before, it did not mitigate the problem.)
I've been reading all the "Thread Affinity" topics and tried to apply them, but it still behaves as if it is somehow interrupted by the GUI thread's events at some point. I can understand MainWindow's troubles, since there are many messages in its queue to be executed (both the invoked slots and the GUI events). But I cannot see why a background thread is affected by the GUI events. I really need an extremely robust and unhindered message routine running separately in the background, firing and forgetting the signals and not giving a damn about anything else.
I'm really desperate for any help right now, so any bit of information is useful for me. Please do not hesitate to throw ideas.
TL;DR: call QCoreApplication::processEvents(); periodically inside run1553Process.
Full explanation:
Signals from the main thread are put in a queue and executed once the event loop in the second thread takes control. In your implementation you call run1553Process as soon as the thread starts. Control will not go back to the event loop until the end of that function, or until QCoreApplication::processEvents is manually invoked, so the signals will just sit there waiting for the event loop to pick them up.
P.S.
You are leaking both the worker and the thread in the code above.
P.P.S.
Data streams from devices normally provide an asynchronous API instead of you having to poll them indefinitely.
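For illustration, a rough sketch of what the TL;DR means in practice (the member and function names here are placeholders, not the asker's actual code):
#include <QCoreApplication>

void Worker::run1553Process()
{
    while (_keepRunning) {
        readNextMessage();                             // poll / read the device buffer
        emit invokeTestResultsUpdate(_msgCount, _status);

        // Give this thread's event loop a chance to deliver queued
        // slot invocations such as init().
        QCoreApplication::processEvents();
    }
}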
I finally found the problem.
The crucial mistake was connecting QThread's built-in started() signal to the run1553Process() slot. I had thought of this as replacing run() with this method and expected everything to be fine. But this caused the actual run() method to be blocked, preventing the event loop from starting.
As stated in qthread.cpp:
void QThread::run()
{
    (void) exec();
}
To fix this, I didn't touch the original started() signal; instead I connected another signal to my run1553Process() slot independently. I first started the thread ordinarily, allowed the event loop to start, and then fired my other signal. That did it; now my Worker receives all the messages.
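A rough sketch of that fix (the startProcessing signal name is assumed, not taken from the asker's code):
void Worker::startBC()
{
    t1553 = new QThread();
    this->moveToThread(t1553);

    // Declared in Worker: signals: void startProcessing();
    connect(this, SIGNAL(startProcessing()),
            this, SLOT(run1553Process()), Qt::QueuedConnection);

    t1553->start();            // the default run() calls exec(), starting the event loop
    emit startProcessing();    // queued, so it is executed by t1553's event loop once it is up
}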
I think now I understand the relation between threads and events better.
By the way, this solution did not take care of the message skipping problem entirely, but I feel that's caused by another factor (like my message reading implementation).
Thanks everyone for the ideas. I hope the solution helps some other poor guy like me.

How does boost::asio::io_service prioritize work?

I am using boost::asio::io_service to manage some asynchronous TCP communication. That means I create a boost::asio::ip::tcp::socket and give the io_service to it. When I start the communication it goes schematically like this:
Async Resolve -> Callback -> Async Connect -> Callback -> Async Write -> Callback -> Async Read
I omitted parts like resolve and bind. Just assume the socket has been bound to a port and the hostname is resolved (so Connect means establishing the real connection to the endpoint).
Now the point is that I may start several async connections with the same io_service object. This means, for example, that while the program in my io_service thread is about to Async Write some data, the main thread calls Async Resolve on another socket (but with the same io_service).
This means that my io_service now has some parallel work to do. What I'd like to know is: how will it prioritize that work?
For example, could it go like this?
Main Thread              | io_service Thread
-------------------------+-----------------------------------------------
SocketA->Async Connect   |
// Some other stuff      | SocketA->Callback from Async Connect
                         | SocketA->Async Write
SocketB->Async Connect   |
                         | --> ?
Now at this point I have to admit I am not quite sure how the io_service works. In the fourth line there are two different asynchronous operations that need to be executed.
Is the io_service capable of doing the Async Connect and the Async Write simultaneously? If so, it is clear that the callback of whichever operation finishes first will be called first.
If the io_service is not capable of doing so, in which order will it do the work? If SocketA's Async Write is called first, its callback will also be called first. Actually, there will always be work until the whole operation on SocketA is finished.
EDIT:
Following ereOn's comment, I'll try to make my question a bit more precise:
From the point of view of the io_service thread, is the SocketA Async Connect call asynchronous or synchronous? From the point of view of my main thread it is of course asynchronous (it just dispatches the command and then goes on). But within the io_service thread, will this specific Connect call block other operations?
In other words: Is one single io_service capable of Connecting to one Socket while it is reading on another?
Another example would be if I just call two Async Connects in my main function, right after each other:
SocketA->AsyncConnect();
SocketB->AsyncConnect();
Let's say the host for SocketA is a bit slow and takes two seconds to answer. While SocketA is trying to connect, would SocketB also connect in the meantime, or would it have to wait until SocketA is done or has timed out?
All the work is done in the thread where io_service.run() runs.
However, a call to any async_ method won't block this specific thread: it behaves exactly as if io_service.run() called select() on several events and "returns" (calls a callback) whenever such an event is raised. That is, if you call:
socketA->async_connect();
socketB->async_connect();
socketB may well connect before socketA, and the associated callback would then be called first, still in the thread where io_service.run() runs.
That's all the beauty of Boost.Asio: it takes very good care of polling, waiting, and raising events at the most appropriate time, leaving you with the "easy" part.
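A minimal sketch of exactly that situation (addresses and names are made up): both connects are in flight at once, and run() invokes whichever callback is ready first, all on one thread.
#include <boost/asio.hpp>
#include <iostream>

int main() {
    boost::asio::io_service service;
    using tcp = boost::asio::ip::tcp;

    tcp::socket socketA(service), socketB(service);
    tcp::endpoint hostA(boost::asio::ip::address::from_string("192.0.2.1"), 80);  // assumed slow host
    tcp::endpoint hostB(boost::asio::ip::address::from_string("192.0.2.2"), 80);

    socketA.async_connect(hostA, [](const boost::system::error_code& ec) {
        std::cout << "A connected: " << ec.message() << "\n";
    });
    socketB.async_connect(hostB, [](const boost::system::error_code& ec) {
        std::cout << "B connected: " << ec.message() << "\n";
    });

    // Both connection attempts are now pending; run() dispatches whichever
    // completes first, all within this single thread.
    service.run();
}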
You shouldn't try to predict the order of execution of asynchronous operations here. async_connect just signals the io_service and returns immediately. The real work gets done in the io_service object's event processing loop (io_service::run), but you don't know the exact specifics; it most likely uses OS-specific asynchronous IO functions.
It's not clear what you're trying to achieve. Maybe you should use synchronous operations. Maybe you should use thread synchronization functionality.
Maybe io_service::run_one will help you (it executes at most one handler).
Maybe you'll want to call io_service::run multiple times in separate threads, creating a thread pool. That way one long completion handler won't block all the others.
boost::asio::io_service service;
const size_t ASIO_THREAD_COUNT = 3;

boost::thread_group threadGroup;
for (size_t i = 0; i < ASIO_THREAD_COUNT; ++i)
    threadGroup.create_thread(boost::bind(&boost::asio::io_service::run,
                                          &service, boost::system::error_code()));

boost::asio, threads and synchronization

This is somewhat related to this question, but I think I need to know a little bit more. I've been trying to get my head around how to do this for a few days (whilst working on other parts), but the time has come for me to bite the bullet and get multi-threaded. Also, I'm after a bit more information than the question linked.
Firstly, about multi-threading. As I have been testing my code, I've not bothered with any multi-threading. It's just a console application that starts a connection to a test server and everything else is then handled. The main loop is this:
while (true)
{
    Root::instance().performIO(); // calls io_service::run_one();
}
When I write my main application, I'm guessing this solution won't be acceptable, as it would have to be called in the message loop which, whilst possible, would have issues when the message queue blocks waiting for a message. You could change it so that the message loop doesn't block, but then isn't that going to whack the CPU usage through the roof?
The solution, it seems, is to throw another thread at it. Okay, fine. But then I've read that io_service::run() returns when there is no work to do. What does that mean? Is it when there's no data, or no connections? If at least one connection exists, does it stay alive? If so, that's not so much of a problem, as I only have to start a new thread when the first connection is made, and I'm happy if it all stops when there is nothing going on at all. I guess I am confused by the definition of 'no work to do'.
Then I have to worry about synchronizing my boost thread with my main GUI thread. So, I guess my questions are:
What is the best-practice way of using boost::asio in a client application with regard to threads and keeping them alive?
When writing to a socket from the main thread to the IO thread, is synchronization achieved using boost::asio::post, so that the call happens later in the io_service?
When data is received, how do people get the data back to the UI thread? In the past when I used completion ports, I made a special event that could post the data back to the main UI thread using a ::SendMessage. It wasn't elegant, but it worked.
I'll be reading some more today, but it would be great to get a heads-up from someone who has done this already. The Boost.Asio documentation isn't great, and most of my work so far has been based on bits of the documentation, some trial and error, and some example code from the web.
1) Have a look at io_service::work. As long as a work object exists, io_service::run will not return. So when you start your clean-up: destroy the work object, cancel any outstanding operations (for example an async_read on a socket), wait for run to return, and clean up your resources.
2) io_service::post will asynchronously execute the given handler from a thread running the io_service. A callback can be used to get the result of the operation executed.
3) You need some form of messaging system to inform your GUI thread of the new data. There are several possibilities here.
As for your remark about the documentation, I think Asio is one of the better-documented Boost libraries, and it comes with clear examples.
io_service::run() will return only when there's nothing to do, i.e. no async operations are pending: async accept/connect, async read/write, or async timer wait. So before calling io_service::run() you first have to start some async operation.
I didn't catch whether you have a console or a GUI app. In any case, multithreading looks like overkill here; you can use Asio in conjunction with your message loop. If it's a Win32 GUI you can call io_service::run_one() from your OnIdle() handler. For a console application you can set up a deadline_timer that regularly checks (every 200 ms?) for user input and use it with io_service::run(). Everything in a single thread, which greatly simplifies the solution.
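A minimal sketch of that single-threaded timer idea (checkForUserInput is hypothetical):
#include <boost/asio.hpp>

boost::asio::io_service service;
boost::asio::deadline_timer inputTimer(service, boost::posix_time::milliseconds(200));

void pollInput(const boost::system::error_code& ec) {
    if (ec) return;                            // timer cancelled: stop polling
    // checkForUserInput();                    // hypothetical: read console/UI input here
    inputTimer.expires_from_now(boost::posix_time::milliseconds(200));
    inputTimer.async_wait(&pollInput);         // re-arm the timer
}

int main() {
    inputTimer.async_wait(&pollInput);
    service.run();                             // the sockets' async ops share this same loop
}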
1) What is the best-practice way of using boost::asio in a client application with regard to threads and keeping them alive?
As the documentation suggests, a pool of threads invoking io_service::run is the most scalable and easiest to implement.
2) When writing to a socket from the main thread to the IO thread, is synchronization achieved using boost::asio::post, so that the call happens later in the io_service?
You will need to use a strand to protect any handlers that can be invoked by multiple threads. See this answer as it may help you, as well as this example.
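As a small illustration (not taken from the linked answer), here is a write posted from the GUI thread through a strand, so that handlers touching the same socket never run concurrently even with a thread pool:
#include <boost/asio.hpp>
#include <memory>
#include <string>

class Connection {
public:
    explicit Connection(boost::asio::io_service& service)
        : _strand(service), _socket(service) {}

    // Safe to call from the GUI thread: the work runs serialized by the strand.
    void queueWrite(std::string message) {
        auto msg = std::make_shared<std::string>(std::move(message));
        _strand.post([this, msg] {
            boost::asio::async_write(_socket, boost::asio::buffer(*msg),
                _strand.wrap([msg](const boost::system::error_code&, std::size_t) {
                    // msg is kept alive until the write completes
                }));
        });
    }

private:
    boost::asio::io_service::strand _strand;
    boost::asio::ip::tcp::socket _socket;
};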
3) When data is received, how do people get the data back to the UI thread? In the past when I used completion ports, I made a special event that could post the data back to the main UI thread using a ::SendMessage. It wasn't elegant, but it worked.
How about providing a callback in the form of a boost::function when you post an asynchronous event to the io_service? Then the event's handler can invoke the callback and update the UI with the results.
When data is received, how do people get the data back to the UI thread? In the past when I used completion ports, I made a special event that could post the data back to the main UI thread using a ::SendMessage. It wasn't elegant, but it worked
::PostMessage may be more appropriate.
Unless everything runs in one thread, these mechanisms must be used to safely post events to the UI thread.
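A small Win32-flavoured sketch of that approach (the message id and helper names are made up):
#include <windows.h>
#include <string>

const UINT WM_APP_DATA_RECEIVED = WM_APP + 1;   // assumed custom message id

// Called from the asio/IO thread when data arrives.
void onDataReceived(HWND uiWindow, std::string data) {
    // Heap-allocate so the payload outlives this handler until the UI picks it up.
    auto* payload = new std::string(std::move(data));
    ::PostMessage(uiWindow, WM_APP_DATA_RECEIVED, 0, reinterpret_cast<LPARAM>(payload));
}

// In the window procedure:
//   case WM_APP_DATA_RECEIVED: {
//       std::unique_ptr<std::string> data(reinterpret_cast<std::string*>(lParam));
//       updateUi(*data);   // hypothetical UI update
//       break;
//   }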