I'm implementing a simple server with boost::asio and I'm considering an io_service-per-CPU model (each io_service has one thread).
What I want to do is let one io_service request a job from another io_service (something like message passing).
I think boost::asio::io_service::post can help me.
There are two io_services, ios1 and ios2,
a job (function) bool func(arg *),
and a completion handler void callback(bool).
So I want ios1 to request a job, ios2 to run it and notify ios1 when it's done, and finally ios2 to run the handler.
ios2.post(
    [&ios1, arg_ptr, callback, func]
    {
        bool result = func(arg_ptr);
        ios1.post( []{ callback(result) } );
    } );
Does this code work? And is there a smarter, simpler way?
EDIT:
I found that the second lambda inside the ios1.post() can't reach the function pointer callback. It's out of scope... so I'm trying another way using boost::bind().
ios2.post(
    [&ios1, arg_ptr, callback, func]
    {
        ios1.post( boost::bind( callback, func(arg_ptr) ) );
    } );
I removed one bool stack variable and it seems better.
But using a C++11 lambda and boost::bind together doesn't look so clean.
How can I do this without boost::bind?
I found that the second lambda inside the ios1.post() can't reach the function pointer callback. It's out of scope
I don't think that's the problem.
You're trying to capture callback, but that's not a function pointer, it's a function. You don't need to capture a function, you can just call it! The same applies to func: don't capture it, just call it. Finally, your inner lambda refers to result without capturing it.
It will work if you fix these problems:
ios2.post(
    [&ios1, arg_ptr]
    {
        bool result = func(arg_ptr);
        ios1.post( [result]{ callback(result); } );
    }
);
Your second version behaves essentially the same: boost::bind evaluates func(arg_ptr) eagerly when the bind expression is constructed, so it still runs on the thread of ios2, just without storing the result in a named local. In any case, I'm not sure either version fits your description:
So I want ios1 to request a job, ios2 to run it and notify ios1 when it's done, and finally ios2 to run the handler.
In both your code samples ios1 runs the callback handler.
#include <boost/asio/io_service.hpp>
#include <boost/function.hpp>

typedef int arg;

int main()
{
    arg* arg_ptr = nullptr;                  // placeholders, just to show the captures compile
    boost::function<void(bool)> callback;
    boost::function<bool(arg*)> func;
    boost::asio::io_service ios1, ios2;

    ios2.post(
        [&ios1, arg_ptr, callback, func]
        {
            bool result = func(arg_ptr);
            auto callback1 = callback;       // copy so the inner lambda can capture it
            ios1.post( [=]{ callback1(result); } );
        } );
}
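Note that the example above only shows that the captures compile; neither io_service is ever run, so nothing executes. A rough, self-contained sketch (illustrative names, crude sleep-based shutdown) that drives each io_service on its own thread and has ios2 run the final handler, as your description asks, could look like this:

#include <boost/asio/io_service.hpp>
#include <chrono>
#include <iostream>
#include <thread>

int main()
{
    boost::asio::io_service ios1, ios2;
    boost::asio::io_service::work work1(ios1), work2(ios2); // keep run() from returning early
    std::thread t1([&]{ ios1.run(); });
    std::thread t2([&]{ ios2.run(); });

    int value = 42;
    int* arg_ptr = &value;
    auto func = [](int* p) { return *p > 0; };                // the "job"
    auto callback = [](bool ok) { std::cout << ok << '\n'; }; // completion handler

    ios2.post([&ios1, &ios2, arg_ptr, func, callback]
    {
        bool result = func(arg_ptr);                          // runs on ios2's thread
        ios1.post([&ios2, result, callback]
        {
            // ios1 has been "notified"; hand the completion back to ios2
            ios2.post([result, callback] { callback(result); });
        });
    });

    // crude shutdown, for the sketch only
    std::this_thread::sleep_for(std::chrono::seconds(1));
    ios1.stop();
    ios2.stop();
    t1.join();
    t2.join();
}

The work objects keep run() from returning while the queues are momentarily empty.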
Related
The following code starts a non-blocking timer that will launch the function myFunc after one second:
MyClass.h:

std::future<void> timer_future_;

MyClass.cpp:

timer_future_ = std::async(
    std::launch::async,
    [this] { QTimer::singleShot(1000,
                 [this] { this->myFunc(); }
             );
    }
);
I would like to replace the lambda functions with std::functions. I have successfully replaced the second lambda as follows:
timer_future_ = std::async(
    std::launch::async,
    [this] { QTimer::singleShot(1000,
                 std::bind(&MyClass::myFunc, this)
             );
    }
);
How can I now replace the first lambda with another std::bind() call?
Note that the function QTimer::singleShot is from the Qt libraries; its documentation is here. Its prototype is:
void QTimer::singleShot(int msec, Functor functor)
As per this question, the definition of the Functor type can be found in QObject.h. It says:
template <class FunctorT, class R, typename... Args> class Functor { /*...*/ }
After some research, I understand that the std::bind() that will replace the first lambda must take account of the following:
QTimer::singleShot is an overloaded function, so I must use a cast to disambiguate the call to it (a generic sketch of such a cast appears after this list)
QTimer::singleShot is a static member function, so the pointer to it must resemble a pointer to a non-member function
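As a generic illustration of the first point (no Qt involved), disambiguating an overloaded free function for std::bind with a cast looks like this; the notify() overloads here are made up for the sketch:

#include <functional>
#include <iostream>

void notify(int ms) { std::cout << "after " << ms << " ms\n"; }
void notify(int ms, std::function<void()> f) { f(); }

int main()
{
    // Without the cast, std::bind cannot tell which notify() overload is meant.
    auto bound = std::bind(
        static_cast<void (*)(int, std::function<void()>)>(&notify),
        1000,
        std::function<void()>([] { std::cout << "fired\n"; }));
    bound();
}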
I have made several unsuccessful attempts, the last of which was:
timer_future_ = std::async(
    std::launch::async,
    std::bind( ( void(*) (int, Functor<const std::function<void(void)>,void>) )&QTimer::singleShot,
               1000,
               std::bind(&MyClass::myFunc, this)
    )
);
For this code, the MSVC compiler returned the error message
error: C2059: syntax error: ')'
on the third line.
Why don’t I just use the lambdas which are already working? The answer is simply that trying to use std::bind() instead is teaching me more about the various features of the C++ language and how to use them.
EDIT: Code that implements Kuba Ober's answer:
QTimer::singleShot(1000, [this] {
    timer_future_ = std::async(
        std::launch::async,
        std::bind(&MyClass::myFunc, this)
    );
});
The timer requires an event loop, and std::async will invoke it in a worker thread that doesn't have a running event loop. I question why you would ever want to do this.
If you want to run something in a worker thread after a delay, run the timer in a thread that has an event loop, and fire off the async action from that timer.
Count opening and closing brackets and add a semicolon
I have a program (client + server) that works with no issue with this write:
boost::asio::write(this->socket_, boost::asio::buffer(message.substr(count,length_to_send)));
where socket_ is boost::asio::ssl::stream<boost::asio::ip::tcp::socket> and message is an std::string.
I would like to make this better and non-blocking, so I created a function that could replace it; it's called as follows:
write_async_sync(socket_,message.substr(count,length_to_send));
The purpose of this function is:
To make the call async, intrinsically
To keep the interface unchanged
The function I implemented simply uses promise/future to simulate sync behavior, which I will modify later (after it works) to be cancellable:
std::size_t
SSLClient::write_async_sync(boost::asio::ssl::stream<boost::asio::ip::tcp::socket>& socket,
                            const std::string& message_to_send)
{
    boost::system::error_code write_error;
    std::promise<std::size_t> write_promise;
    auto write_future = write_promise.get_future();
    boost::asio::async_write(socket,
                             boost::asio::buffer(message_to_send),
                             [this,&write_promise,&write_error,&message_to_send]
                             (const boost::system::error_code& error,
                              std::size_t size_written)
                             {
                                 logger.write("HANDLING WRITING");
                                 if(!error)
                                 {
                                     write_error = error;
                                     write_promise.set_value(size_written);
                                 }
                                 else
                                 {
                                     write_promise.set_exception(std::make_exception_ptr(std::runtime_error(error.message())));
                                 }
                             });
    std::size_t size_written = write_future.get();
    return size_written;
}
The problem: I'm unable to get the async functionality to work. The sync version works fine, but the async one simply freezes and never enters the lambda (the write never happens). What am I doing wrong?
Edit: I realized that using poll_one() makes the function execute and proceed, but I don't understand why. This is how I'm calling run() for the io_service (before starting the client):
io_service_work = std::make_shared<boost::asio::io_service::work>(io_service);
io_service_thread.reset(new std::thread([this](){io_service.run();}));
where both of these are basically shared_ptrs. Is this wrong? Does this approach necessitate using poll_one()?
Re. EDIT:
You have the io_service::run() part right. This tells me you are blocking on the future inside a (completion) handler. That, obviously, prevents run() from progressing the event loop.
The question asked by #florgeng was NOT whether you have an io_service instance.
The question is whether you are calling run() (or poll()) on it suitably for async operations to proceed.
Besides, you can already use the built-in future<> support:
http://www.boost.org/doc/libs/1_64_0/doc/html/boost_asio/overview/cpp2011/futures.html
Example: http://www.boost.org/doc/libs/1_64_0/doc/html/boost_asio/example/cpp11/futures/daytime_client.cpp
std::future<std::size_t> recv_length = socket.async_receive_from(
    boost::asio::buffer(recv_buf),
    sender_endpoint,
    boost::asio::use_future);
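Applied to the write in your question, a sketch with use_future could look like the following. It still blocks the calling thread on get(), so it must not be invoked from a handler running on the same io_service, which is exactly the deadlock described above:

#include <boost/asio/use_future.hpp>
#include <future>

std::size_t
SSLClient::write_async_sync(boost::asio::ssl::stream<boost::asio::ip::tcp::socket>& socket,
                            const std::string& message_to_send)
{
    // async_write returns a std::future<std::size_t> when given use_future;
    // get() rethrows any failure as boost::system::system_error
    std::future<std::size_t> written = boost::asio::async_write(
        socket,
        boost::asio::buffer(message_to_send),
        boost::asio::use_future);
    return written.get();
}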
I keep hearing on SO: no GUI work from a worker thread. I understand the following async function could instead be driven from the MFC class using PostMessage() (a rough sketch of that route follows after the code below), but I am still tempted to use a callback lambda function. I want to know the downside of this, if any.
std::future<void> refresh(std::function<void()> win_func)
{
    // capture win_func by value and return the future; the caller must keep
    // the returned future alive, or its destructor will block
    return std::async(std::launch::async, [win_func]()
    {
        // do long process
        win_func();   // runs on the worker thread
    });
}
void CWinDlg::OnClick()
{
    refresh( [&]()
    {
        // want to avoid PostMessage() here
        // sync with mfc required here?
        clist_ctrl1.Update();
    });
}
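For comparison, here is a rough sketch of the PostMessage() route mentioned above (WM_APP_REFRESH_DONE, OnRefreshDone, and refresh_future_ are made-up names): the worker thread only posts a message, and the list control is updated on the GUI thread inside the message handler.

#define WM_APP_REFRESH_DONE (WM_APP + 1)

void CWinDlg::refresh()
{
    HWND hwnd = GetSafeHwnd();   // grab the HWND while still on the GUI thread
    // refresh_future_ is a hypothetical std::future<void> member; keeping the
    // future alive matters, because a discarded std::async future blocks in its destructor
    refresh_future_ = std::async(std::launch::async, [hwnd]()
    {
        // do long process (worker thread, no GUI calls here)
        ::PostMessage(hwnd, WM_APP_REFRESH_DONE, 0, 0);
    });
}

// message map entry:   ON_MESSAGE(WM_APP_REFRESH_DONE, &CWinDlg::OnRefreshDone)
// class declaration:   afx_msg LRESULT OnRefreshDone(WPARAM, LPARAM);
LRESULT CWinDlg::OnRefreshDone(WPARAM, LPARAM)
{
    clist_ctrl1.Update();        // back on the GUI thread
    return 0;
}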
I would like to implement a command queue which handles incoming commands concurrently with a thread pool (so the queue grows temporarily when all threads are working). I would like to post a callback to the callers when a command worker is started and finished. My implementation is based on this example from the Asio website.
Is there a way to hook into these events and signal somehow? I would like to avoid the command functors knowing about the callbacks (since obviously I could call the callbacks inside the command functors).
Pseudocode to illustrate (initialization and error handling omitted for brevity):
class CommandQueue
{
public:
    void handle_command(CmdId id, int param)
    {
        io_service.post(boost::bind(&(dispatch_map[id]), param));
        // PSEUDOCODE:
        // when one of the worker threads starts on this item, I want to call
        callback_site.cmd_started(id, param);
        // when the command functor returns and the thread finishes
        callback_site.cmd_finished(id, param);
    }

private:
    boost::asio::io_service io_service;
    boost::asio::io_service::work work;
    std::map<CmdId, CommandHandler> dispatch_map; // CommandHandler is a functor taking an int parameter
    CallbackSite callback_site;
};
Is there a way to do this without having the command functors depend on the CallbackSite?
My initial response would be that std::futures are what you want, given that Boost.Asio now even has built-in support for them. However, you have tagged this as c++03, so you will have to make do with boost::future.
Basically, you pass a boost::promise into the task you hand to Asio, but beforehand call get_future on it and store the returned future, which shares state with the promise. When the task finishes, you call promise::set_value. In another thread you can check whether this has happened by calling future::is_ready (non-blocking) or future::wait (blocking), and then retrieve the value before calling the appropriate callback function.
For example, the value set could be a CmdId, as in your example, to determine which callback to call.
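A rough C++03 sketch of that idea with Boost.Thread futures (depending on your Boost version the future type is boost::unique_future or boost::future; run_command and the shared_ptr plumbing are illustrative, not from your code):

#include <boost/thread/future.hpp>
#include <boost/shared_ptr.hpp>
#include <boost/bind.hpp>

void run_command(CommandHandler handler, int param,
                 boost::shared_ptr<boost::promise<CmdId> > done, CmdId id)
{
    handler(param);
    done->set_value(id);            // signals completion to whoever holds the future
}

// Posting side:
//   boost::shared_ptr<boost::promise<CmdId> > done(new boost::promise<CmdId>());
//   boost::unique_future<CmdId> finished = done->get_future();
//   io_service.post(boost::bind(&run_command, dispatch_map[id], param, done, id));
//
// Elsewhere (another thread), poll or wait on the future:
//   if (finished.is_ready())
//       callback_site.cmd_finished(finished.get(), param);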
So what you want is to build in something that happens when one of the run() threads starts processing a command, and then does something when it returns.
Personally, I do this by wrapping the function call:
class CommandQueue
{
public:
    void handle_command(CmdId id, int param)
    {
        io_service.post(boost::bind(&CommandQueue::DispatchCommand, this, id, param));
    }

private:
    boost::asio::io_service io_service;
    boost::asio::io_service::work work;
    std::map<CmdId, CommandHandler> dispatch_map; // CommandHandler is a functor taking an int parameter
    CallbackSite callback_site;

    void DispatchCommand(CmdId id, int param)
    {
        // when one of the worker threads starts on this item, call
        callback_site.cmd_started(id, param);
        dispatch_map[id](param);
        // when the command functor returns and the thread finishes, call
        callback_site.cmd_finished(id, param);
    }
};
This is also the pattern I use when I want to handle exceptions in the dispatched commands. You can also post different events instead of running them inline.
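For example, a sketch of the same wrapper with exception handling (cmd_failed is a hypothetical CallbackSite member, not from your code):

void DispatchCommand(CmdId id, int param)
{
    callback_site.cmd_started(id, param);
    try
    {
        dispatch_map[id](param);
        callback_site.cmd_finished(id, param);
    }
    catch (const std::exception& e)
    {
        callback_site.cmd_failed(id, param, e.what());
    }
}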
I am trying to wrap some C++ functionality into Python with the help of boost::python. I have some trouble getting a particular callback mechanism to work. The following code snippet explains what I am trying to do:
// C++ side
class LoopClass {
public:
    // some class attributes
    void call_once(std::function<void()> const& fun) const;
};

void callOnce(LoopClass& loop, boost::python::object const& function) {
    auto fun = [&]() {
        function();
    };
    loop.call_once(fun);
}

boost::python::class_<LoopClass>("LoopClass")
    .def("call_once", &callOnce);
# Python side
def foo():
    print "foo"

loop = LoopClass()
loop.call_once(foo)
Here is the deal: the function call_once() takes a std::function and puts it in a queue. LoopClass maintains an eternal loop which is run in a separate thread and, at a certain point, processes the queue of stored callback functions. To treat a boost::python::object as a function, the cast operator has to be called explicitly. This is why I didn't wrap call_once() directly but wrote the little conversion function callOnce(), which forwards the cast operator call through a lambda.
Anyhow, when I try to run this code, accessing the boost::python::object fails with a segmentation fault. I guess it's just not that easy to share Python objects between two threads. But how can this be done?
Thanks in advance for any help!
Update
I tried to follow the advice of #JanneKarila
See Non-Python created threads. – Janne Karila
I guess this is the right place to find a solution, but unfortunately I am not able to figure out how to apply it.
I tried
void callOnce(LoopClass& loop, boost::python::object const& function) {
    auto fun = [&]() {
        PyGILState_STATE gstate;
        gstate = PyGILState_Ensure();
        function();
        PyGILState_Release(gstate);
    };
    loop.call_once(fun);
}
which doesn't work. Am I missing something or just too dumb?
Have you called PyEval_InitThreads()?
If so, maybe this piece can help: http://www.codevate.com/blog/7-concurrency-with-embedded-python-in-a-multi-threaded-c-application
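For what it's worth, a rough sketch of how the pieces could fit together (the module name loop_module is made up, the LoopClass definition from the question is assumed to be visible, and copying the boost::python::object into the lambda is an extra change so it stays alive until the loop thread runs it):

#include <boost/python.hpp>

void callOnce(LoopClass& loop, boost::python::object function)  // take a copy, not a const&
{
    auto fun = [function]() mutable {
        PyGILState_STATE gstate = PyGILState_Ensure();  // acquire the GIL in the loop thread
        function();
        PyGILState_Release(gstate);
    };
    loop.call_once(fun);
}

BOOST_PYTHON_MODULE(loop_module)
{
    PyEval_InitThreads();  // initialize the GIL machinery before other threads call into Python
    boost::python::class_<LoopClass>("LoopClass")
        .def("call_once", &callOnce);
}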