I keep reading on SO that no GUI work should be done from a worker thread. I understand that the following async function could be driven from an MFC class using PostMessage(), but I am still tempted to use a callback lambda instead. I want to know the downside of this, if any.
std::future<void> refresh(std::function<void()> win_func)
{
    // capture win_func by value so it stays valid after refresh() returns,
    // and return the future so the caller can keep the task alive
    return std::async(std::launch::async, [win_func]()
    {
        // do long process
        win_func();   // note: this runs on the worker thread
    });
}
void CWinDlg::OnClick()
{
    refresh([&]()
    {
        // want to avoid PostMessage() here
        // sync with mfc required here?
        clist_ctrl1.Update();
    });
}
I am using the AMQP-CPP library with the libev backend. I am trying to create a class that opens a connection and consumes messages. I want to run the connection's event loop in a worker thread so that it does not block the main thread. That part of the code looks like this:
...
m_thread.reset(new std::thread([this]()
{
    ev_run(m_loop, 0);
}));
...
Then at some point I want to stop the loop. I have read that this can be done with the ev_break() function; however, it must be called from the same thread that called ev_run(). Further searching suggested that ev_async_send() might help with that, but I cannot figure out how.
How can I do this? Any ideas?
Here is an example:
void asyncCallback(EV_P_ ev_async*, int)
{
    // EV_P_ passes in the loop the callback was invoked on,
    // so the break happens inside the loop's own thread
    ev_break(EV_A_ EVBREAK_ONE);
}

void MyClass::stopLoop()
{
    ev_async_init(&m_asyncWatcher, asyncCallback);
    ev_async_start(m_loop, &m_asyncWatcher);
    ev_async_send(m_loop, &m_asyncWatcher);
    m_thread->join();
}

// the async watcher has to be declared as a member of the class
ev_async m_asyncWatcher;
Calling stopLoop() from another thread stops the loop that was started in the m_thread worker thread.
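As a side note, ev_async_send() is the libev call intended to be made from other threads, while watcher setup is normally done from the loop's own thread. A slightly more conservative layout registers the watcher before the loop thread is launched and leaves stopLoop() with only the send and the join. This is just a sketch; startLoop() is an assumed helper that is not part of the original post, and asyncCallback is the callback defined above.

void MyClass::startLoop()
{
    ev_async_init(&m_asyncWatcher, asyncCallback);   // register before ev_run()
    ev_async_start(m_loop, &m_asyncWatcher);

    m_thread.reset(new std::thread([this]()
    {
        ev_run(m_loop, 0);   // blocks until ev_break() runs in the callback
    }));
}

void MyClass::stopLoop()
{
    ev_async_send(m_loop, &m_asyncWatcher);   // safe to call from another thread
    m_thread->join();
}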
I'm trying to implement a network application using Boost.Asio, and I have a problem with multiple layers of callbacks. In languages that natively support async/await syntax, I could write my logic like this:
void do_send(args...) {
    if (!endpoint_resolved) {
        await resolve_async(...);  // results are stored in member variables
    }
    if (!connected) {
        await connect_async(...);
    }
    await send_async(...);
    await receive_async(...);
}
Right now I have to write it using multiple layers of callbacks
void do_send(args...) {
    if (!endpoint_resolved) {
        resolve_async(..., [captures...](args...) {
            if (!connected) {
                connect_async(..., [captures...](args...) {
                    send_async(..., [captures...](args...) {
                        receive_async(..., [captures...](args...) {
                            // do something
                        }); // receive_async
                    }); // send_async
                }); // connect_async
            }
        });
    }
}
This is cumbersome and error-prone. An alternative is to use std::bind to bind member functions as callbacks, but this does not solve the problem because either way I have to write complicated logic in the callbacks to determine what to do next.
I'm wondering if there are better solutions. Ideally I would like to write code in a synchronous way while I can await asynchronously on any I/O operations.
I've also checked std::async, std::future, etc. But they don't seem to fit into my situation.
Boost.Asio's stackful coroutines would provide a good solution. Stackful coroutines allow asynchronous code to be written in a manner that reads synchronously. One can create a stackful coroutine via the spawn function. Within the coroutine, passing the yield_context as the handler to an asynchronous operation will start the operation and suspend the coroutine; the coroutine is resumed automatically when the operation completes. Here is the echo example from the documentation:
boost::asio::spawn(my_strand, do_echo);

// ...

void do_echo(boost::asio::yield_context yield)
{
    try
    {
        char data[128];
        for (;;)
        {
            std::size_t length =
                my_socket.async_read_some(
                    boost::asio::buffer(data), yield);

            boost::asio::async_write(my_socket,
                boost::asio::buffer(data, length), yield);
        }
    }
    catch (std::exception& e)
    {
        // ...
    }
}
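Applied to the question, the resolve/connect/send/receive chain collapses into straight-line code once every step takes the yield_context. The following is only a sketch under assumptions: it presumes a reasonably recent Boost, reuses the endpoint_resolved and connected flags from the question, and invents the resolver, socket, endpoints, host, port, request and reply members, which are not in the original post.

void do_send(boost::asio::yield_context yield)
{
    if (!endpoint_resolved) {
        // suspends here until resolution completes, then resumes
        endpoints = resolver.async_resolve(host, port, yield);
        endpoint_resolved = true;
    }
    if (!connected) {
        boost::asio::async_connect(socket, endpoints, yield);
        connected = true;
    }
    boost::asio::async_write(socket, boost::asio::buffer(request), yield);
    std::size_t n = socket.async_read_some(boost::asio::buffer(reply), yield);
    // do something with the first n bytes of reply
}

It would be launched the same way as do_echo above, via boost::asio::spawn, and errors surface as exceptions unless the yield_context is combined with an error_code (yield[ec]).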
I have a function foo that is called in the UI thread. Inside it, I call functionA whose return value will determine whether I call functionB or not. Inside functionA, I call funcFromAnotherProject which actually runs in a worker thread. I need to wait for this to end before I can proceed with functionC.
void foo() {
    bool succeeded = functionA();
    if (succeeded) functionB();
}

bool functionA() {
    if (someCondition) {
        funcFromAnotherProject();
    }
    return functionC();
}
Fortunately, funcFromAnotherProject can accept a callback parameter so I can actually pass functionC as a callback so the order is preserved. However, if I do this, I won't be able to get functionC's return value which I need in foo.
I then decided to do the following (the bool variable is actually a shared pointer to a class that wraps a HANDLE, but that detail is too complicated to show here):
bool functionA() {
    bool finishedFuncFromAnotherProject = false;
    auto callback = [&finishedFuncFromAnotherProject]() {
        finishedFuncFromAnotherProject = true;
    };
    if (someCondition) {
        funcFromAnotherProject(callback);
        waitUntilAboveFuncFinishes();
    }
    return functionC();
}
The problem with this is that I am calling wait in the UI thread and funcFromAnotherProject calls the callback in the UI thread as well. The callback is never called because the wait is blocking everything else.
Running foo in a worker thread would solve the above problem; however, I need to block the UI thread until functionB finishes.
funcFromAnotherProject will always run in a worker thread so I can't change that. If it comes down to it, what I can do is add a flag for funcFromAnotherProject on whether it should run the callback in the UI thread or not. But since this is a utility in our program, I'd rather not touch it.
Is there another way to go about this? I feel like this should be very simple and I'm just overthinking things.
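One pattern that fits these constraints (waiting on the UI thread without starving the message queue that the callback is dispatched on) is a wait that keeps pumping messages. This is only a sketch under assumptions: it presumes the wrapped HANDLE mentioned above is exposed to functionA as an event handle, and that waitUntilAboveFuncFinishes could be implemented roughly like this.

#include <windows.h>

// Wait until hEvent is signalled, but keep dispatching messages so that the
// worker's callback, which is delivered to the UI thread, can still run.
void waitUntilAboveFuncFinishes(HANDLE hEvent)
{
    for (;;)
    {
        DWORD r = MsgWaitForMultipleObjects(1, &hEvent, FALSE,
                                            INFINITE, QS_ALLINPUT);
        if (r == WAIT_OBJECT_0)
            return;                          // event signalled: callback has run

        MSG msg;                             // otherwise drain pending messages
        while (PeekMessage(&msg, nullptr, 0, 0, PM_REMOVE))
        {
            TranslateMessage(&msg);
            DispatchMessage(&msg);
        }
    }
}

The trade-off is re-entrancy: while nominally blocked, the UI still receives clicks and timer messages, so the relevant controls may need to be disabled for the duration.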
I'm trying to write my own torrent program based on libtorrent-rasterbar and I'm having problems getting the alert mechanism to work correctly. Libtorrent offers the function
void set_alert_notify (boost::function<void()> const& fun);
which is supposed to
The intention of the function is that the client wakes up its main thread, to poll for more alerts using pop_alerts(). If the notify function fails to do so, it won't be called again, until pop_alerts is called for some other reason.
So far so good; I think I understand the intention behind this function. However, my actual implementation doesn't work so well. My code so far looks like this:
std::unique_lock<std::mutex> ul(_alert_m);
session.set_alert_notify([&]() { _alert_cv.notify_one(); });

while (!_alert_loop_should_stop) {
    if (!session.wait_for_alert(std::chrono::seconds(0))) {
        _alert_cv.wait(ul);
    }

    std::vector<libtorrent::alert*> alerts;
    session.pop_alerts(&alerts);
    for (auto alert : alerts) {
        LTi_ << alert->message();
    }
}
However, there is a race condition: if wait_for_alert() returns NULL (no alerts yet) but the function passed to set_alert_notify() is called before _alert_cv.wait(ul), the whole loop waits forever (because of the second sentence of the quote above).
For the moment my solution is just to change _alert_cv.wait(ul); to _alert_cv.wait_for(ul, std::chrono::milliseconds(250));, which keeps the number of wake-ups per second low enough while keeping latency acceptable.
But it's really more of a workaround than a solution, and I keep thinking there must be a proper way to handle this.
You need a variable to record the notification, protected by the same mutex that guards the condition variable. Because the flag is only ever set while holding that mutex, and the wait predicate is evaluated under the same mutex, a notification can no longer slip in unobserved: either the flag is already set when the predicate runs, or the notify arrives while the loop is actually waiting.
bool _alert_pending = false;

session.set_alert_notify([&]() {
    std::lock_guard<std::mutex> lg(_alert_m);
    _alert_pending = true;
    _alert_cv.notify_one();
});

std::unique_lock<std::mutex> ul(_alert_m);
while (!_alert_loop_should_stop) {
    _alert_cv.wait(ul, [&]() {
        return _alert_pending || _alert_loop_should_stop;
    });

    if (_alert_pending) {
        _alert_pending = false;
        ul.unlock();          // don't hold the mutex while handling alerts
        session.pop_alerts(...);
        ...
        ul.lock();
    }
}
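One detail the snippet leaves implicit: for the loop to wake up on shutdown, whatever sets _alert_loop_should_stop has to take the same mutex and notify the condition variable, otherwise the wait predicate is never re-evaluated. A minimal sketch of such a stop function (the name stopAlertLoop is made up here), using the member names from the snippet above:

void stopAlertLoop()
{
    {
        std::lock_guard<std::mutex> lg(_alert_m);
        _alert_loop_should_stop = true;   // seen by the wait predicate
    }
    _alert_cv.notify_one();               // wake the loop if it is waiting
}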
I'm implementing a simple server with boost::asio and thinking of an io-service-per-CPU model (each io_service has one thread).
What I want to do is let one io_service request jobs from another io_service (something like message passing).
I think boost::asio::io_service::post can help me.
There are two io_services, ios1 and ios2, a job (a function) bool func(arg *), and a completion handler void callback(bool).
So I want ios1 to request a job, ios2 to run it and notify ios1 when finished, and finally ios2 to run the handler.
ios2.post(
    [&ios1, arg_ptr, callback, func]
    {
        bool result = func(arg_ptr);
        ios1.post( []{ callback(result) } );
    } );
Will this code work? And is there a smarter, simpler way?
EDIT:
I found that the second lambda inside the ios1.post() can't reach the function pointer callback. It's out of scope... so I'm trying another way using boost::bind().
ios2.post(
    [&ios1, arg_ptr, callback, func]
    {
        ios1.post( boost::bind( callback, func(arg_ptr) ) );
    } );
I removed one stack variable (the bool) and it seems better.
But using a C++11 lambda and boost::bind together doesn't look very clean.
How can I do this without boost::bind?
I found that the second lambda inside the ios1.post() can't reach the function pointer callback. It's out of scope
I don't think that's the problem.
You're trying to capture callback, but that's not a function pointer, it's a function. You don't need to capture a function, you can just call it! The same applies to func: don't capture it, just call it. Finally, your inner lambda refers to result without capturing it.
It will work if you fix these problems:
ios2.post(
    [&ios1, arg_ptr]
    {
        bool result = func(arg_ptr);
        ios1.post( [result]{ callback(result); } );
    }
);
Your second version is not quite doing what you might think: boost::bind evaluates its arguments immediately, so func(arg_ptr) still runs on ios2's thread, and only the call to callback is deferred to ios1. In any case, I'm not sure either version fits your description:
So I want ios1 to request a job, ios2 to run it and notify ios1 when finished, and finally ios2 to run the handler.
In both your code samples ios1 runs the callback handler.
#include <boost/asio/io_service.hpp>
#include <boost/function.hpp>

typedef int arg;

int main()
{
    // skeleton only: nothing is ever run, it just shows that the captures compile
    arg* arg_ptr = nullptr;
    boost::function<void(bool)> callback;
    boost::function<bool(arg *)> func;

    boost::asio::io_service ios1, ios2;

    ios2.post(
        [&ios1, arg_ptr, callback, func]
        {
            bool result = func(arg_ptr);
            auto callback1 = callback;   // local copy so the inner lambda can capture it
            ios1.post( [=]{ callback1(result); } );
        } );
}
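To actually exercise this in the io-service-per-thread model from the question, each io_service needs its own thread calling run(), plus a work object (or equivalent) so run() does not return while handlers are still expected. A rough sketch of that wiring, not part of the original answer (it additionally requires the <thread> header):

boost::asio::io_service ios1, ios2;
boost::asio::io_service::work work1(ios1), work2(ios2);   // keep run() alive

std::thread t1([&]{ ios1.run(); });   // handlers posted to ios1 execute here
std::thread t2([&]{ ios2.run(); });   // handlers posted to ios2 execute here

// ... post the job to ios2 as shown above; its completion handler is then
// posted back to ios1 and runs on t1 ...

ios1.stop();
ios2.stop();
t1.join();
t2.join();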