I have a program (client + server) that works with no issue with this write:
boost::asio::write(this->socket_, boost::asio::buffer(message.substr(count,length_to_send)));
where socket_ is boost::asio::ssl::stream<boost::asio::ip::tcp::socket> and message is an std::string.
I would like to make this better and non-blocking, so I created a replacement function, which is called as follows:
write_async_sync(socket_,message.substr(count,length_to_send));
The purpose of this function is:
To make the call asynchronous internally
To keep the interface unchanged
The function I implemented simply uses a promise/future pair to simulate synchronous behavior, which I will modify later (once it works) to be cancellable:
std::size_t
SSLClient::write_async_sync(boost::asio::ssl::stream<boost::asio::ip::tcp::socket>& socket,
                            const std::string& message_to_send)
{
    boost::system::error_code write_error;
    std::promise<std::size_t> write_promise;
    auto write_future = write_promise.get_future();
    boost::asio::async_write(socket,
        boost::asio::buffer(message_to_send),
        [this, &write_promise, &write_error, &message_to_send]
        (const boost::system::error_code& error, std::size_t size_written)
        {
            logger.write("HANDLING WRITING");
            if (!error)
            {
                write_error = error;
                write_promise.set_value(size_written);
            }
            else
            {
                write_promise.set_exception(
                    std::make_exception_ptr(std::runtime_error(error.message())));
            }
        });
    std::size_t size_written = write_future.get();
    return size_written;
}
The problem: I'm unable to get the async functionality to work. The sync version works fine, but the async version simply freezes and never enters the lambda (the write never happens). What am I doing wrong?
Edit: I realized that calling poll_one() makes the function execute and proceed, but I don't understand why. This is how I'm calling run() on the io_service (before starting the client):
io_service_work = std::make_shared<boost::asio::io_service::work>(io_service);
io_service_thread.reset(new std::thread([this](){io_service.run();}));
where these are shared_ptrs. Is this wrong? Does doing it this way necessitate using poll_one()?
Re. EDIT:
You are handling io_service::run() correctly. That tells me you are blocking on the future inside a (completion) handler, which, obviously, prevents run() from progressing the event loop.
The question asked by @florgeng was NOT whether you have an io_service instance.
The question is whether you are calling run() (or poll()) on it suitably for async operations to proceed.
Besides, Asio already has built-in future<> support:
http://www.boost.org/doc/libs/1_64_0/doc/html/boost_asio/overview/cpp2011/futures.html
Example: http://www.boost.org/doc/libs/1_64_0/doc/html/boost_asio/example/cpp11/futures/daytime_client.cpp
std::future<std::size_t> recv_length = socket.async_receive_from(
    boost::asio::buffer(recv_buf),
    sender_endpoint,
    boost::asio::use_future);
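Applied to your case, here is a minimal sketch of write_async_sync using use_future instead of the hand-rolled promise. It assumes, as above, that io_service::run() is executing on a separate thread and that the caller is never that thread:

// needs <boost/asio/use_future.hpp> and <future>
using ssl_socket = boost::asio::ssl::stream<boost::asio::ip::tcp::socket>;

std::size_t write_async_sync(ssl_socket& socket, const std::string& message_to_send)
{
    // use_future makes async_write return a std::future directly.
    // message_to_send must stay alive until the operation completes;
    // blocking on get() below guarantees that here.
    std::future<std::size_t> written = boost::asio::async_write(
        socket, boost::asio::buffer(message_to_send), boost::asio::use_future);

    // Only safe on a thread that is NOT running io_service::run();
    // otherwise this blocks the event loop (the deadlock described above).
    return written.get(); // rethrows boost::system::system_error on failure
}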
Related
I am currently debugging a server (Win32/64) that uses Boost.Asio 1.78.
The code is a blend of legacy, older legacy, and some newer code. None of this code is mine; I can't answer for why something is done in a certain way. I'm just trying to understand why this is happening and hopefully fix it without rewriting it from scratch. This code has been running for years on 50+ servers with no errors. Just these 2 servers misbehave.
I have one client (dot.net) that is connected to two servers. Client is sending the same data to the 2 servers. The servers run the same code, as follows in code sect.
All is working well, but now and then communication halts. No errors or exceptions on either end; it just halts. Never on both servers at the same time. This happens very seldom, like every 3 months or less often. I have no way of reproducing it in a debugger because I don't know where to look for this behavior.
On the client side the socket appears to be working/open but does not accept new data. No errors are detected on the socket.
Here's shortened code describing the functions. I want to stress that I can't detect any errors or exceptions during these failures. The code just stops at m_socket->read_some().
The only way to "unblock" it right now is to close the socket manually and restart the acceptor. When I manually close the socket, the read_some method returns with an error code, so I know it stops inside there.
Questions:
What could go wrong here to cause this behavior?
What parameters should I log to enable me to determine what is happening, and from where?
main code:
std::shared_ptr<boost::asio::io_service> io_service_is = std::make_shared<boost::asio::io_service>();
auto is_work = std::make_shared<boost::asio::io_service::work>(*io_service_is.get());
auto acceptor = std::make_shared<TcpAcceptorWrapper>(*io_service_is.get(), port);
acceptor->start();
auto threadhandle = std::thread([&io_service_is]() {io_service_is->run();});
TcpAcceptorWrapper:
void start(){
m_asio_tcp_acceptor.open(boost::asio::ip::tcp::v4());
m_asio_tcp_acceptor.bind(boost::asio::ip::tcp::endpoint(boost::asio::ip::tcp::v4(), m_port));
m_asio_tcp_acceptor.listen();
start_internal();
}
void start_internal(){
m_asio_tcp_acceptor.async_accept(m_socket, [this](boost::system::error_code error) { /* Handler code */ });
}
Handler code:
m_current_session = std::make_shared<TcpSession>(&m_socket);
std::condition_variable condition;
std::mutex mutex;
bool stopped(false);
m_current_session->run(condition, mutex, stopped);
{
std::unique_lock<std::mutex> lock(mutex);
condition.wait(lock, [&stopped] { return stopped; });
}
TcpSession runner:
void run(std::condition_variable& complete, std::mutex& mutex, bool& stopped){
auto self(shared_from_this());
std::thread([this, self, &complete, &mutex, &stopped]() {
{ // mutex scope
// Lock and hold mutex from tcp_acceptor scope
std::lock_guard<std::mutex> lock(mutex);
while (true) {
std::array<char, M_BUFFER_SIZE> buffer;
try {
boost::system::error_code error;
/* Next call just hangs/blocks but only rarely. like once every 3 months or more seldom */
std::size_t read = m_socket->read_some(boost::asio::buffer(buffer, M_BUFFER_SIZE), error);
if (error || read == -1) {
// This never happens
break;
}
// inside this all is working
process(buffer);
} catch (std::exception& ex) {
// This never happens
break;
} catch (...) {
// Neither does this
break;
}
}
stopped = true;
} // mutex released
complete.notify_one();
}).detach();
}
This:
m_acceptor.async_accept(m_socket, [this](boost::system::error_code error) { /* Handler code */ });
Handler code:
std::condition_variable condition;
std::mutex mutex;
bool stopped(false);
m_current_session->run(condition, mutex, stopped);
{
std::unique_lock<std::mutex> lock(mutex);
condition.wait(lock, [&stopped] { return stopped; });
}
is strange. It suggests you are using an "async" accept, but the handler blocks unconditionally until the session completes. That's the opposite of asynchrony. You could write the same code much more simply without the asynchrony, and also without the thread and the synchronization around it.
My intuition says something is blocking on the mutex. Have you established that the session's stack is actually inside the read_some frame, e.g. by breaking in a debugger during a "lock-up"?
When I manually close the socket the read_some method returns with error code so I know it is inside there I have an issue.
You can't legally do that. Your socket is in use on one thread (in a blocking read), and you are closing it from a separate thread. That's a race condition (see the docs). If you want cancellable operations, use async_read*.
There are more code smells (read_some is a low-level primitive that is rarely what you want at the application level; detached threads with manual synchronization on termination could be packaged tasks; shared boolean flags could be atomics; notify_one outside the mutex could lead to thread starvation on some platforms; etc.).
If you can share more code I'll be happy to sketch simplified solutions that remove the problems.
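For instance, here is a minimal sketch (names are illustrative, not your real types) of what the session could look like with async_read_some instead of a detached thread blocking in read_some:

#include <boost/asio.hpp>
#include <array>
#include <memory>

class TcpSession : public std::enable_shared_from_this<TcpSession> {
public:
    explicit TcpSession(boost::asio::ip::tcp::socket socket)
        : m_socket(std::move(socket)) {}

    void run() { do_read(); } // no extra thread, no condition_variable

private:
    void do_read() {
        auto self = shared_from_this(); // keep the session alive during the op
        m_socket.async_read_some(
            boost::asio::buffer(m_buffer),
            [this, self](boost::system::error_code ec, std::size_t n) {
                if (ec) return;  // session ends; socket closed by destructor
                process(n);      // handle the n bytes now in m_buffer
                do_read();       // chain the next read
            });
    }

    void process(std::size_t /*n*/) { /* application logic */ }

    boost::asio::ip::tcp::socket m_socket;
    std::array<char, 1024> m_buffer;
};

Because the read is now asynchronous, cancelling it (socket.cancel() or close() posted to the io_service thread) is well-defined, which removes the race described above.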
I have a problem where two threads are called like this, one after another.
new boost::thread( &SERVER::start_receive, this);
new boost::thread( &SERVER::run_io_service, this);
Where the first thread calls this function.
void start_receive()
{
udp_socket.async_receive(....);
}
and the second thread calls,
void run_io_service()
{
io_service.run();
}
and sometimes the io_service thread ends up finishing before the start_receive() thread, and then the server cannot receive packets.
I thought about putting a sleep between the two threads to wait a while for start_receive() to complete, and that works, but I wondered if there is a more surefire way to make this happen?
When you call io_service.run(), the thread will block, dispatching posted handlers until either:
There are no io_service::work objects associated with the io_service, or
io_service.stop() is called.
If either of these happens, the io_service enters the stopped state and will refuse to dispatch any more handlers in the future until its reset() method is called.
Every time you initiate an asynchronous operation on an io object associated with the io_service, an io_service::work object is embedded in the asynchronous handler.
For this reason, point (1) above cannot happen until the asynchronous handler has run.
This code therefore guarantees that the async process completes and that the asserts pass:
asio::io_service ios; // ios is not in stopped state
assert(!ios.stopped());
auto obj = some_io_object(ios);
bool completed = false;
obj.async_something(..., [&](auto const& ec) { completed = true; });
// nothing will happen yet. There is now 1 work object associated with ios
assert(!completed);
auto ran = ios.run();
assert(completed);
assert(ran == 1); // only 1 async op waiting for completion.
assert(ios.stopped()); // io_service is exhausted and no work remaining
ios.reset();
assert(!ios.stopped()); // io_service is ready to run again
If you want to keep the io_service running, create a work object:
boost::asio::io_service svc;
auto work = std::make_shared<boost::asio::io_service::work>(svc);
svc.run(); // this will block as long as the work object is valid.
The nice thing about this approach is that the work object above will keep the svc object "running", but not block any other operations on it.
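Applied to the question, a sketch (reusing start_receive() from the question; std::thread stands in for the boost::thread used there): keep a work object alive before starting the run() thread, and it no longer matters whether the receive has been initiated yet:

#include <boost/asio.hpp>
#include <memory>
#include <thread>

boost::asio::io_service io_service;

// The work object keeps run() from returning even before any
// async operation has been initiated.
auto work = std::make_shared<boost::asio::io_service::work>(io_service);

std::thread io_thread([&io_service] { io_service.run(); });

start_receive();   // may now happen before or after run() begins -- no race

// ... at shutdown:
work.reset();      // release the work; run() returns once handlers drain
io_thread.join();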
I'm building a network service with boost::asio and I'm unsure about thread safety.
io_service.run() is called only once, from a thread dedicated to the io_service work.
send_message(), on the other hand, can be called either by the code inside the second io_service's handlers mentioned later, or by the main thread upon user interaction. That is why I'm getting nervous.
std::deque<message> out_queue;
// send_message will be called by two different threads
void send_message(MsgPtr msg){
while (out_queue->size() >= 20){
Sleep(50);
}
io_service_.post([this, msg]() { deliver(msg); });
}
// from my understanding, deliver will only be called by the thread which called io_service.run()
void deliver(const MsgPtr msg){
bool write_in_progress = !out_queue.empty();
out_queue.push_back(msg);
if (!write_in_progress)
{
write();
}
}
void write()
{
auto self(shared_from_this());
asio::async_write(socket_,
asio::buffer(out_queue.front().header(),
message::header_length), [this, self](asio::error_code ec, std::size_t /*length*/)
{
if (!ec)
{
asio::async_write(socket_,
asio::buffer(out_queue.front().data(),
out_queue.front().paddedPayload_size()),
[this, self](asio::error_code ec, std::size_t /*length*/)
{
if (!ec)
{
out_queue.pop_front();
if (!out_queue.empty())
{
write();
}
}
});
}
});
}
Is this scenario safe?
A similar second scenario: When the network thread receives a message, it posts them into another asio::io_service which is also run by its own dedicated thread. This io_service uses an std::unordered_map to store callback functions etc.
std::unordered_map<int, eventSink> eventSinkMap_;
//...
// called by the main thread (GUI), writes a callback function object to the map
int IOReactor::registerEventSink(std::function<void(int, std::shared_ptr<message>)> fn, QObject* window, std::string endpointId){
util::ScopedLock lock(&sync_);
eventSink es;
es.id = generateRandomId();
// ....
std::pair<int, eventSink> eventSinkPair(es.id, es);
eventSinkMap_.insert(eventSinkPair);
return es.id;
}
// called by the second thread, the network service thread when a message was received
void IOReactor::onMessageReceived(std::shared_ptr<message> msg, ConPtr con)
{
reactor_io_service_.post([=](){ handleReceive(msg, con); });
}
// should be called only by the one thread running the reactor_io_service.run()
// read and write access to the map
void IOReactor::handleReceive(std::shared_ptr<message> msg, ConPtr con){
util::ScopedLock lock(&sync_);
auto es = eventSinkMap_.find(msg->requestId);
if (es != eventSinkMap_.end())
{
auto fn = es->second.handler;
auto ctx = es->second.context;
QMetaObject::invokeMethod(ctx, "runInMainThread", Qt::QueuedConnection, Q_ARG(std::function<void(int, std::shared_ptr<msg::IMessage>)>, fn), Q_ARG(int, CallBackResult::SUCCESS), Q_ARG(std::shared_ptr<msg::IMessage>, msg));
eventSinkMap_.erase(es);
    }
}
First of all: do I even need to use a lock here?
Of course both methods access the map, but they are not accessing the same elements (the receive handler cannot try to access or read an element that has not yet been registered/inserted into the map). Is that thread-safe?
First of all, a lot of context is missing (where is onMessageReceived invoked, and what is ConPtr?), and you have too many questions. I'll give you some specific pointers that will help you, though.
You should be nervous here:
void send_message(MsgPtr msg){
while (out_queue->size() >= 20){
Sleep(50);
}
io_service_.post([this, msg]() { deliver(msg); });
}
The check out_queue->size() >= 20 requires synchronization unless out_queue is thread safe.
The call to io_service_.post is safe, because io_service is thread safe. Since you have one dedicated IO thread, this means that deliver() will run on that thread. Right now, you need synchronization there too.
I strongly suggest using a proper thread-safe queue there.
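A minimal sketch of such a queue (illustrative; bounded_queue is not part of Asio or your code) that replaces the Sleep(50) polling with a condition variable:

#include <condition_variable>
#include <deque>
#include <mutex>

template <typename T>
class bounded_queue {
public:
    explicit bounded_queue(std::size_t capacity) : m_capacity(capacity) {}

    // Blocks until there is room, instead of polling with Sleep(50).
    void push(T value) {
        std::unique_lock<std::mutex> lock(m_mutex);
        m_not_full.wait(lock, [this] { return m_queue.size() < m_capacity; });
        m_queue.push_back(std::move(value));
    }

    // Non-blocking pop for the consumer side.
    bool try_pop(T& out) {
        std::lock_guard<std::mutex> lock(m_mutex);
        if (m_queue.empty()) return false;
        out = std::move(m_queue.front());
        m_queue.pop_front();
        m_not_full.notify_one();
        return true;
    }

private:
    std::size_t m_capacity;
    std::mutex m_mutex;
    std::condition_variable m_not_full;
    std::deque<T> m_queue;
};

push() would block producers while the queue is full; try_pop() would be called from deliver() on the IO thread.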
Q. first of all: Do I even need to use a lock here?
Yes, you need to lock to do the map lookup (otherwise you get a data race with the main thread inserting sinks).
You do not need to hold the lock during the invocation (in fact, that seems like a very unwise idea that could lead to performance issues or lockups). The reference remains valid due to the container's iterator/reference invalidation rules.
The deletion of course requires the lock again. I'd revise the code to do the lookup and removal at once, and invoke the sink only after releasing the lock. NOTE: you will have to think about exceptions here; in your code, when there is an exception during invocation, the sink doesn't get removed (ever?). This might be important to you.
A revised version:
void handleReceive(std::shared_ptr<message> msg, ConPtr con){
util::ScopedLock lock(&sync_);
auto es = eventSinkMap_.find(msg->requestId);
if (es != eventSinkMap_.end())
{
auto fn = es->second.handler;
auto ctx = es->second.context;
eventSinkMap_.erase(es); // invalidates es
lock.unlock();
// invoke in whatever way you require
fn(static_cast<int>(CallBackResult::SUCCESS), std::static_pointer_cast<msg::IMessage>(msg));
}
}
Can anyone tell me under what conditions boost::asio's io_service::run() method will return? The documentation for io_service::run() seems to suggest that as long as there is work to be done or handlers to be dispatched, run() won't return.
The reason I'm asking this is that we have a legacy https client that contacts a server and executes http POST's. The separation of concerns in the client is a bit different than what we'd like so we're changing a few things about it, but we're running into problems.
Right now, the client basically has a mis-named connect() call that effectively drives the entire protocol conversation with the server. The connect() call starts off by creating a boost::asio::ip::tcp::resolver object and calling ::async_resolve() on it. This starts a chain where new asio calls are made from within asio callbacks.
void connect()
{
m_resolver.async_resolve( query, bind( &clientclass::resolve_callback, this, _1, _2 ) );
thread = new boost::thread( bind( &boost::asio::io_service::run, m_io_service ) );
}
void resolve_callback( error_code & e, resolver::iterator i )
{
if (!e)
{
tcp::endpoint = *i;
m_socket.lowest_layer().async_connect(endpoint, bind(&clientclass::connect_callback,this,_1,++i));
}
}
void connect_callback( error_code & e, resolver::iterator i )
{
if (!e)
{
m_socket.async_handshake(boost::asio::ssl::stream_base::client,
bind(&clientclass::handshake_callback,this,_1));
}
}
void handshake_callback( error_code &e )
{
if (!e)
{
mesg = format_hello_message();
http_send( mesg, bind(&clientclass::hello_resp_handler,this,_1,_2) );
}
}
void http_send( stringstream & mesg, reply_handler handler )
{
async_write(m_socket, m_request_buffer, bind(&clientclass::write_complete_callback,this,_1,handler));
}
void write_complete_callback( error_code &e, reply_handler handler )
{
if (!e)
{
async_read_until(m_socket,m_reply_buffer,"\r\n\r\n", bind(&clientclass::handle_reply,this,_1,handler));
}
}
...
Anyways, this continues through the protocol until the protocol conversation is done. From the code here you can see that while connect() is running on the main thread, all of the subsequent callbacks and requests are coming back on the worker thread that is created in connect(). This is 'working' code.
When I try to break this chain up and expose it via an external interface, it stops working. In particular, I'm having handle_handshake() called from outside the clientclass object. Then http_send() is part of the interface (or is called by the external interface), and it creates a new worker thread to call io_service::run(). What happens is that even though async_write() has been called and write_complete_callback() hasn't run yet, io_service::run() exits. It exits without error and claims that no handlers were dispatched, but there's still 'work' to be done?
So what I'm wondering is what is io_service::run()'s definition of 'work'? Is it a pending request? Why is it that io_service::run() never returns during this chain of requests and responses in the existing code, but when I try to start the thread up again and start a new chain, it returns almost immediately before it's finished its work?
The definition of work in the context of the run() call is any pending asynchronous operations on that io_service object. This includes the invocations of the handlers in response to an operation. So, if a handler for one operation starts another operation, there is always work available.
In addition, there is an io_service::work class that can be used to create work on an io_service that never completes until the object is destroyed.
When a single chain completes, the io_service has completed all asynchronous operations, and all of the handlers have been invoked without starting a new operation, so it returns. Until you call io_service::reset(), further calls to run() will return without executing any operations.
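A minimal sketch of that reset()-between-runs behavior (the posted lambdas stand in for your asynchronous chains):

#include <boost/asio.hpp>
#include <cassert>

int main() {
    boost::asio::io_service ios;

    ios.post([] { /* first chain of handlers */ });
    ios.run();      // returns when the chain has completed
    assert(ios.stopped());

    ios.reset();    // required before run() will dispatch work again

    ios.post([] { /* second chain */ });
    ios.run();      // now executes the new work
}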
I'm pretty new to boost. I needed a cross-platform low-level C++ network API, so I chose asio. Now, I've successfully connected and written to a socket, but since I'm using the asynchronous read/write, I need a way to keep track of the requests (to have some kind of IDs, if you will). I've looked at the documentation/reference and found no way to pass user data to my handler. The only option I can think of is creating a special class that acts as a callback and keeps track of its id, then passing it to the socket as a callback. Is there a better way, or is this the best way to do it?
The async_xxx functions are templated on the type of the completion handler. The handler does not have to be a plain "callback", and it can be anything that exposes the right operator() signature.
You should thus be able to do something like this:
// Warning: Not tested
struct MyReadHandler
{
MyReadHandler(Whatever ContextInformation) : m_Context(ContextInformation){}
void
operator()(const boost::system::error_code& error, std::size_t bytes_transferred)
{
// Use m_Context
// ...
}
Whatever m_Context;
};
boost::asio::async_read(socket, buffer, MyReadHandler(the_context));
Alternatively, you could also have your handler as a plain function and bind it at the call site, as described in the asio tutorial. The example above would then be:
void
HandleRead(
const boost::system::error_code& error,
std::size_t bytes_transferred,
Whatever context
)
{
//...
}
boost::asio::async_read(socket, buffer, boost::bind(&HandleRead,
    boost::asio::placeholders::error,
    boost::asio::placeholders::bytes_transferred,
    the_context
));
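Not part of the original answer, but with C++11 you can also skip boost::bind and capture the context in a lambda, which achieves the same thing (socket, buffer, and the_context as in the examples above):

boost::asio::async_read(socket, buffer,
    [the_context](const boost::system::error_code& error,
                  std::size_t bytes_transferred)
    {
        // the_context is captured by value and usable here
    });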