boost::asio error "The I/O operation has been aborted..." - c++

I am receiving this error message
"The I/O operation has been aborted because of either a thread exit or an application request"
when using boost::asio::socket::async_read_some(). What does the error mean? What should I be looking for?
Here is the relevant code:
void tcp_connection::start()
{
    printf("Connected to simulator\n");
    socket_.async_read_some(boost::asio::buffer(myBuffer, 256),
        boost::bind(&tcp_connection::read_sim_handler, this,
            boost::asio::placeholders::error,
            boost::asio::placeholders::bytes_transferred));
}
void tcp_connection::read_sim_handler(
    const boost::system::error_code& error, // Result of operation.
    std::size_t len)                        // Number of bytes read.
{
    try {
        if (error == boost::asio::error::eof) {
            // Connection closed cleanly by peer.
            printf("Sim connection closed\n");
            return;
        } else if (error) {
            throw boost::system::system_error(error); // Some other error.
        }
        socket_.async_read_some(boost::asio::buffer(myBuffer, 256),
            boost::bind(&tcp_connection::read_sim_handler, this,
                boost::asio::placeholders::error,
                boost::asio::placeholders::bytes_transferred));
    }
    catch (std::exception& e)
    {
        std::cerr << e.what() << std::endl;
    }
}
When I replace the call to async_read_some() with read_some() in the start() method, everything works fine (except that the server blocks waiting for a message!).
Following a comment, I see that tcp_connection is going out of scope. I copied the code from http://www.boost.org/doc/libs/1_45_0/doc/html/boost_asio/tutorial/tutdaytime3.html
which says this:
"We will use shared_ptr and enable_shared_from_this because we want to keep the tcp_connection object alive as long as there is an operation that refers to it."
I confess that I do not know what all that means. So I have broken it somehow?
Following further comments, the answer is
void tcp_connection::start()
{
    printf("Connected to simulator\n");
    socket_.async_read_some(boost::asio::buffer(myBuffer, 256),
        boost::bind(&tcp_connection::read_sim_handler,
            shared_from_this(),
            boost::asio::placeholders::error,
            boost::asio::placeholders::bytes_transferred));
}
Passing shared_from_this() rather than this uses the clever (too clever?) keep-alive infrastructure established by the server code, so the connection stays alive even though it is no longer in scope by normal means. For technical details, see the comments under the accepted answer.

Your tcp_connection object or your buffer object is likely going out of scope before the async operation completes.
Since your program is based on one of the tutorial examples, why don't you check out another of the examples that reads some data as well: http://www.boost.org/doc/libs/1_45_0/doc/html/boost_asio/example/echo/async_tcp_echo_server.cpp
The reason your class goes out of scope is that you are no longer using shared_from_this(). That call creates a shared_ptr to your class instance which is stored inside the bound handler, so the shared_ptr keeps your class alive until the handler is called.
This is also why you need to inherit from enable_shared_from_this.
The last shared_ptr that goes out of scope will delete your class instance.
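For reference, here is a minimal sketch of the pattern the answer describes, assuming the server creates connections only through a factory that hands out a shared_ptr (the create() helper is illustrative; the rest follows the tutorial):

class tcp_connection : public boost::enable_shared_from_this<tcp_connection>
{
public:
    typedef boost::shared_ptr<tcp_connection> pointer;

    // Always construct through the factory so a shared_ptr owns the object
    // before any handler is bound with shared_from_this().
    static pointer create(boost::asio::io_service& io_service)
    {
        return pointer(new tcp_connection(io_service));
    }

    boost::asio::ip::tcp::socket& socket() { return socket_; }

    void start()
    {
        socket_.async_read_some(boost::asio::buffer(myBuffer, 256),
            boost::bind(&tcp_connection::read_sim_handler,
                shared_from_this(), // the handler holds a shared_ptr, keeping *this alive
                boost::asio::placeholders::error,
                boost::asio::placeholders::bytes_transferred));
    }

private:
    explicit tcp_connection(boost::asio::io_service& io_service)
        : socket_(io_service) {}

    void read_sim_handler(const boost::system::error_code& error, std::size_t len);

    boost::asio::ip::tcp::socket socket_;
    char myBuffer[256];
};

As long as read_sim_handler chains another async_read_some bound to shared_from_this(), the object lives exactly as long as there is an outstanding read; when the handler returns without starting a new operation (for example on eof), the last shared_ptr goes away and the connection is destroyed.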

Related

Cancelling boost::asio::async_read gracefully

I have a class that looks like this:
class MyConnector : public boost::noncopyable, public boost::enable_shared_from_this<MyConnector>
{
public:
    typedef MyConnector this_type;
    boost::asio::ip::tcp::socket _plainSocket;
    boost::shared_ptr<std::vector<uint8_t>> _readBuffer;
    // lot of obvious stuff removed....

    void readProtocol()
    {
        _readBuffer = boost::make_shared<std::vector<uint8_t>>(12, 0);
        boost::asio::async_read(_plainSocket, boost::asio::buffer(&_readBuffer->at(0), 12),
            boost::bind(&this_type::handleReadProtocol, shared_from_this(),
                boost::asio::placeholders::bytes_transferred, boost::asio::placeholders::error));
    }

    void handleReadProtocol(size_t bytesRead, const boost::system::error_code& error)
    {
        // handling code removed
    }
};
This class instance generally waits to receive a 12-byte protocol header before trying to read the full message. However, when I try to cancel this read operation and destroy the object, it doesn't happen. When I call _plainSocket.cancel(ec), handleReadProtocol is not called with that ec. The socket disconnects, but the handler is not called.
boost::system::error_code ec;
_plainSocket.cancel(ec);
And the shared_ptr to the MyConnector object that was passed using shared_from_this() is not released. The object remains like a zombie on the heap. How do I cancel the async_read() in such a way that the MyConnector object's reference count is decremented, allowing the object to destroy itself?
Two things: first, in handleReadProtocol, make sure that readProtocol is not called if there is an error. Cancelled operations still call the handler, but with an error code set.
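A minimal sketch of that check (the handler body is assumed, since it was removed from the question):

void handleReadProtocol(size_t bytesRead, const boost::system::error_code& error)
{
    if (error)
    {
        // After cancel(), the handler runs with error == operation_aborted.
        // Do not start another read here, so the shared_ptr bound into the
        // handler is the last owner and the object can finally be destroyed.
        return;
    }
    // ... parse the 12-byte header, then issue the next read ...
}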
Second, asio recommends shutting down and closing the socket if you're finished with the connection. For example:
asio::post([this] {
    if (_plainSocket.is_open()) {
        asio::error_code ec;
        /* For portable behaviour with respect to graceful closure of a connected socket,
         * call shutdown() before closing the socket. */
        _plainSocket.shutdown(asio::ip::tcp::socket::shutdown_both, ec);
        if (ec) {
            Log(fmt::format("Socket shutdown error {}.", ec.message()));
            ec.clear();
        }
        _plainSocket.close(ec);
        if (ec)
            Log(fmt::format("Socket close error {}.", ec.message()));
    }
});

bad_weak_ptr with boost smart pointer

I am developing a desktop chat with Boost Asio and Beast (for browser support).
I use this architecture:
But when building it, I have an issue: bad_weak_ptr. I don't know what is wrong :s
Here is a link to the source:
https://onlinegdb.com/BkFhDGHe4
Update 1:
I removed the run() call from the constructor and moved it into the handle_accept function of the tcp_server class, like this:
void tcp_server::handle_accept(const boost::system::error_code ec, websocket_session_ptr new_websocket)
{
    if (!ec)
    {
        // Happens when the timer closes the socket
        if (ec == boost::asio::error::operation_aborted)
            return;

        new_websocket->run(); // Here
        chatwebsocketsessionpointer session = chat_websocket_session::create(room, new_websocket);
        room->join(session);
        wait_for_connection();
    }
}
I can see that the chat_websocket_session is deleted, but I still have the bad_weak_ptr issue.
Update 2:
I found where the issue is.
If I never call the do_read() function, there is no error, and I can connect to the server with ws.
If I call it in wait_for_data from the chat_websocket_session class, I get the issue.
So I must find out how to call do_read().
Update 3:
If I do
websocket_session_ptr new_websocket(new websocket_session(std::move(socket)));
acceptor.async_accept(
    socket,
    boost::bind(
        &tcp_server::websocket_accept,
        this,
        boost::asio::placeholders::error,
        new_websocket
    ));
following the Boost Beast websocket example, I accept the socket first, and afterwards I accept the websocket with m_ws.async_accept(), but now I get Bad file descriptor, which means the socket is not open.
P.S.: I updated the IDE URL (GDB online debugger).
You're using the shared pointer to this from inside the constructor:
websocket_session::websocket_session(tcp::socket socket)
    : m_ws(std::move(socket))
    , strand(socket.get_executor())
{
    run();
}
Inside run() you do
void websocket_session::run() {
    // Accept the websocket handshake
    std::cout << "Accepted connection" << std::endl;
    m_ws.async_accept(boost::asio::bind_executor(
        strand, std::bind(&websocket_session::on_accept, shared_from_this(), std::placeholders::_1)));
}
That uses shared_from_this(), which will try to lock the uninitialized weak_ptr from enable_shared_from_this. As you can see in the documentation, that throws the std::bad_weak_ptr exception (see item 11 there).
The documentation for shared_from_this explicitly warns against this:
It is permitted to call shared_from_this only on a previously shared object, i.e. on an object managed by std::shared_ptr (in particular, shared_from_this cannot be called in a constructor).
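A minimal sketch of one way to fix this, assuming the session is only ever created through a factory so that a shared_ptr owns the object before run() is called (the create() helper is illustrative; names follow the question):

class websocket_session : public std::enable_shared_from_this<websocket_session>
{
public:
    explicit websocket_session(tcp::socket socket)
        : m_ws(std::move(socket))
    {
        // Do not call run() (or anything else that uses shared_from_this()) here.
    }

    static std::shared_ptr<websocket_session> create(tcp::socket socket)
    {
        auto session = std::make_shared<websocket_session>(std::move(socket));
        session->run(); // safe: shared_from_this() now has an owner to lock
        return session;
    }

    void run(); // calls m_ws.async_accept(...) with shared_from_this(), as above

private:
    boost::beast::websocket::stream<tcp::socket> m_ws;
};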

What causes a random crash in boost::coroutine?

I have a multithreaded application which uses boost::asio and boost::coroutine via its integration in boost::asio. Every thread has its own io_service object. The only shared state between threads is the connection pools, which are locked with a mutex when a connection is taken from or returned to a pool.

When there are not enough connections in the pool, I push an infinite asio::steady_timer into the internal structure of the pool, asynchronously wait on it, and yield from the coroutine function. When another thread returns a connection to the pool, it checks whether there are waiting timers; if so, it takes a waiting timer from the internal structure, gets its io_service object, and posts a lambda which wakes up the timer to resume the suspended coroutine.

I have random crashes in the application. I tried to investigate the problem with valgrind. It finds some issues, but I cannot understand them because they happen in boost::coroutine and boost::asio internals. Here are fragments from my code and from the valgrind output. Can someone see and explain the problem?
Here is the calling code:
template <class ContextsType>
void executeRequests(ContextsType& avlRequestContexts)
{
    AvlRequestDataList allRequests;
    for(auto& requestContext : avlRequestContexts)
    {
        if(!requestContext.pullProvider || !requestContext.toAskGDS())
            continue;

        auto& requests = requestContext.pullProvider->getRequestsData();
        copy(requests.begin(), requests.end(), back_inserter(allRequests));
    }

    if(allRequests.size() == 0)
        return;

    boost::asio::io_service ioService;
    curl::AsioMultiplexer multiplexer(ioService);

    for(auto& request : allRequests)
    {
        using namespace boost::asio;

        spawn(ioService, [&multiplexer, &request](yield_context yield)
        {
            request->prepare(multiplexer, yield);
        });
    }

    while(true)
    {
        try
        {
            VLOG_DEBUG(avlGeneralLogger, "executeRequests: Starting ASIO event loop.");
            ioService.run();
            VLOG_DEBUG(avlGeneralLogger, "executeRequests: ASIO event loop finished.");
            break;
        }
        catch(const std::exception& e)
        {
            VLOG_ERROR(avlGeneralLogger, "executeRequests: Error while executing GDS request: " << e.what());
        }
        catch(...)
        {
            VLOG_ERROR(avlGeneralLogger, "executeRequests: Unknown error while executing GDS request.");
        }
    }
}
Here is the prepare function implementation which is called in the spawned lambda:
void AvlRequestData::prepareImpl(curl::AsioMultiplexer& multiplexer,
                                 boost::asio::yield_context yield)
{
    auto& ioService = multiplexer.getIoService();
    _connection = _pool.getConnection(ioService, yield);
    _connection->prepareRequest(xmlRequest, xmlResponse, requestTimeoutMS);

    multiplexer.addEasyHandle(_connection->getHandle(),
        [this](const curl::EasyHandleResult& result)
        {
            if(0 == result.responseCode)
                returnQuota();
            VLOG_DEBUG(lastSeatLogger, "Response " << id << ": " << xmlResponse);
            _pool.addConnection(std::move(_connection));
        });
}

void AvlRequestData::prepare(curl::AsioMultiplexer& multiplexer,
                             boost::asio::yield_context yield)
{
    try
    {
        prepareImpl(multiplexer, yield);
    }
    catch(const std::exception& e)
    {
        VLOG_ERROR(lastSeatLogger, "Error while preparing request: " << e.what());
        returnQuota();
    }
    catch(...)
    {
        VLOG_ERROR(lastSeatLogger, "Unknown error while preparing request.");
        returnQuota();
    }
}
The returnQuota function is a pure virtual method of the AvlRequestData class, and its implementation for the TravelportRequestData class (which is used in all my tests) is the following:
void returnQuota() const override
{
    auto& avlQuotaManager = AvlQuotaManager::getInstance();
    avlQuotaManager.consumeQuotaTravelport(-1);
}
Here are the push and pop methods of the connection pool.
auto AvlConnectionPool::getConnection(
    TimerPtr timer,
    asio::yield_context yield) -> ConnectionPtr
{
    lock_guard<mutex> lock(_mutex);

    while(_connections.empty())
    {
        _timers.emplace_back(timer);
        timer->expires_from_now(
            asio::steady_timer::clock_type::duration::max());

        _mutex.unlock();
        coroutineAsyncWait(*timer, yield);
        _mutex.lock();
    }

    ConnectionPtr connection = std::move(_connections.front());
    _connections.pop_front();

    VLOG_TRACE(defaultLogger, str(format("Getted connection from pool: %s. Connections count %d.")
                                  % _connectionPoolName % _connections.size()));

    ++_connectionsGiven;
    return connection;
}

void AvlConnectionPool::addConnection(ConnectionPtr connection,
                                      Side side /* = Back */)
{
    lock_guard<mutex> lock(_mutex);

    if(Front == side)
        _connections.emplace_front(std::move(connection));
    else
        _connections.emplace_back(std::move(connection));

    VLOG_TRACE(defaultLogger, str(format("Added connection to pool: %s. Connections count %d.")
                                  % _connectionPoolName % _connections.size()));

    if(_timers.empty())
        return;

    auto timer = _timers.back();
    _timers.pop_back();

    auto& ioService = timer->get_io_service();
    ioService.post([timer](){ timer->cancel(); });

    VLOG_TRACE(defaultLogger, str(format("Connection pool %s: Waiting thread resumed.")
                                  % _connectionPoolName));
}
This is the implementation of coroutineAsyncWait:
inline void coroutineAsyncWait(boost::asio::steady_timer& timer,
                               boost::asio::yield_context yield)
{
    boost::system::error_code ec;
    timer.async_wait(yield[ec]);
    if(ec && ec != boost::asio::error::operation_aborted)
        throw std::runtime_error(ec.message());
}
And finally the first part of the valgrind output:
==8189== Thread 41:
==8189== Invalid read of size 8
==8189== at 0x995F84: void boost::coroutines::detail::trampoline_push_void, void, boost::asio::detail::coro_entry_point, void (anonymous namespace)::executeRequests > >(std::vector<(anonymous namespace)::AvlRequestContext, std::allocator<(anonymous namespace)::AvlRequestContext> >&)::{lambda(boost::asio::basic_yield_context >)#1}>&, boost::coroutines::basic_standard_stack_allocator > >(long) (trampoline_push.hpp:65)
==8189== Address 0x2e3b5528 is not stack'd, malloc'd or (recently) free'd
When I use valgrind with the debugger attached, it stops in the following function in trampoline_push.hpp in the boost::coroutine library:
53│ template< typename Coro >
54│ void trampoline_push_void( intptr_t vp)
55│ {
56│ typedef typename Coro::param_type param_type;
57│
58│ BOOST_ASSERT( vp);
59│
60│ param_type * param(
61│ reinterpret_cast< param_type * >( vp) );
62│ BOOST_ASSERT( 0 != param);
63│
64│ Coro * coro(
65├> reinterpret_cast< Coro * >( param->coro) );
66│ BOOST_ASSERT( 0 != coro);
67│
68│ coro->run();
69│ }
Ultimately I found that when objects need to be deleted, boost::asio doesn't handle it gracefully without proper use of shared_ptr and weak_ptr. When crashes do occur, they are very difficult to debug, because it's hard to look into what the io_service queue is doing at the time of failure.
After doing a full asynchronous client architecture recently and running into random crashing issues, I have a few tips to offer. Unfortunately, I cannot know whether these will solve your issues, but hopefully it provides a good start in the right direction.
Boost Asio Coroutine Usage Tips
Use boost::asio::asio_handler_invoke instead of io_service.post():
auto& ioService = timer->get_io_service();
ioService.post([timer](){ timer->cancel(); });
Using post/dispatch within a coroutine is usually a bad idea. Always use asio_handler_invoke when you are called from a coroutine. In this case, however, you can probably safely call timer->cancel() without posting it to the message loop anyway.
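For illustration, a sketch of the end of addConnection with the direct cancel (assuming, as the tip says, that cancelling without a post is safe in your threading design):

// Instead of posting the cancel to the timer's io_service, cancel it directly;
// the coroutine blocked in coroutineAsyncWait() resumes with operation_aborted.
auto timer = _timers.back();
_timers.pop_back();
timer->cancel();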
Your timers do not appear to use shared_ptr objects. Regardless of what is going on in the rest of your application, there is no way to know for sure when these objects should be destroyed. I would highly recommend using shared_ptr objects for all of your timer objects. Also, any pointer to a class method should be bound with shared_from_this() rather than a plain this. Using a plain this can be quite dangerous if the object is destroyed (on the stack) or released by a shared_ptr somewhere else. Whatever you do, do not use shared_from_this() in the constructor of an object!
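A minimal sketch of that idea, assuming the pool's TimerPtr is a boost::shared_ptr to a steady_timer (the helper function is illustrative):

#include <boost/asio.hpp>
#include <boost/make_shared.hpp>
#include <boost/shared_ptr.hpp>

typedef boost::shared_ptr<boost::asio::steady_timer> TimerPtr;

void waitOnSharedTimer(boost::asio::io_service& ioService)
{
    TimerPtr timer = boost::make_shared<boost::asio::steady_timer>(ioService);
    timer->expires_from_now(boost::asio::steady_timer::clock_type::duration::max());

    // The handler captures the shared_ptr by value, so the timer object stays
    // alive until every handler that references it has run.
    timer->async_wait([timer](const boost::system::error_code& ec)
    {
        if (ec == boost::asio::error::operation_aborted)
            return; // cancelled by whoever returned a connection to the pool
        // ... resume whatever was waiting on the timer ...
    });
}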
If you're getting a crash when a handler within the io_service is being executed, but part of the handler is no longer valid, this is a seriously difficult thing to debug. The handler object that is pumped into the io_service includes any pointers to timers, or pointers to objects that might be necessary to execute the handler.
I highly recommend going overboard with shared_ptr objects wrapped around any asio classes. If the problem goes away, then it's likely an order-of-destruction issue.
Is the failure address somewhere on the heap, or is it pointing to the stack? This will help you diagnose whether it's an object going out of scope in a method at the wrong time, or something else. For instance, this proved to me that all of my timers must become shared_ptr objects even within a single-threaded application.

Persistent ASIO connections

I am working on a project where I need to use a few persistent connections to talk to different servers over long periods of time. This server will have a fairly high throughput. I am having trouble figuring out a way to set up the persistent connections correctly. The best way I could think of was to create a persistent connection class. Ideally I would connect to the server once, do async_writes as information comes in to me, and read information as it comes back. I don't think I am structuring my class correctly, though.
Here is what I have built right now:
persistent_connection::persistent_connection(std::string ip, std::string port):
    io_service_(), socket_(io_service_), strand_(io_service_), is_setup_(false), outbox_()
{
    boost::asio::ip::tcp::resolver resolver(io_service_);
    boost::asio::ip::tcp::resolver::query query(ip, port);
    boost::asio::ip::tcp::resolver::iterator iterator = resolver.resolve(query);
    boost::asio::ip::tcp::endpoint endpoint = *iterator;
    socket_.async_connect(endpoint, boost::bind(&persistent_connection::handler_connect, this, boost::asio::placeholders::error, iterator));
    io_service_.poll();
}

void persistent_connection::handler_connect(const boost::system::error_code &ec, boost::asio::ip::tcp::resolver::iterator endpoint_iterator)
{
    if(ec)
    {
        std::cout << "Couldn't connect" << ec << std::endl;
        return;
    }
    else
    {
        boost::asio::socket_base::keep_alive option(true);
        socket_.set_option(option);
        boost::asio::async_read_until(socket_, buf_, "\r\n\r\n", boost::bind(&persistent_connection::handle_read_headers, this, boost::asio::placeholders::error));
    }
}

void persistent_connection::write(const std::string &message)
{
    write_impl(message);
    //strand_.post(boost::bind(&persistent_connection::write_impl, this, message));
}

void persistent_connection::write_impl(const std::string &message)
{
    outbox_.push_back(message);
    if(outbox_.size() > 1)
    {
        return;
    }
    this->write_to_socket();
}

void persistent_connection::write_to_socket()
{
    std::string message = "GET /" + outbox_[0] + " HTTP/1.0\r\n";
    message += "Host: 10.1.10.120\r\n";
    message += "Accept: */*\r\n";
    boost::asio::async_write(socket_, boost::asio::buffer(message.c_str(), message.size()), strand_.wrap(
        boost::bind(&persistent_connection::handle_write, this, boost::asio::placeholders::error, boost::asio::placeholders::bytes_transferred)));
}

void persistent_connection::handle_write(const boost::system::error_code& ec, std::size_t bytes_transfered)
{
    outbox_.pop_front();
    if(ec)
    {
        std::cout << "Send error" << boost::system::system_error(ec).what() << std::endl;
    }
    if(!outbox_.empty())
    {
        this->write_to_socket();
    }
    boost::asio::async_read_until(socket_, buf_, "\r\n\r\n", boost::bind(&persistent_connection::handle_read_headers, this, boost::asio::placeholders::error));
}
The first message I send from this seems to go out fine; the server gets it and responds with a valid response. Unfortunately, I see two problems:
1) My handle_write is never called after the async_write command; I have no clue why.
2) The program never reads the response. I am guessing this is related to #1, since async_read_until is not called until that function happens.
3) I was also wondering if someone could tell me why my commented-out strand_.post call would not work.
I am guessing most of this has to do with my lack of knowledge of how I should be using my io_service, so if somebody could give me any pointers that would be greatly appreciated. And if you need any additional information, I would be glad to provide more.
Thank you
Edit: call to write:
int main()
{
    persistent_connection p("10.1.10.220", "80");
    p.write("100");
    p.write("200");
    barrier b(1, 30000); //Timed mutex, waits for 300 seconds.
    b.wait();
}
and
void persistent_connection::handle_read_headers(const boost::system::error_code &ec)
{
    std::istream is(&buf_);
    std::string read_stuff;
    std::getline(is, read_stuff);
    std::cout << read_stuff << std::endl;
}
The behavior described is the result of the io_service_'s event loop no longer being processed.
The constructor invokes io_service::poll(), which will run handlers that are ready to run and will not block waiting for work to finish, whereas io_service::run() will block until all work has finished. Thus, when polling, if the other side of the connection has not written any data, then no handlers may be ready to run, and execution will return from poll().
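A minimal sketch of one way to keep the event loop alive, assuming the connection keeps owning its io_service as in the question (the runner class and its names are illustrative):

#include <boost/asio.hpp>
#include <memory>
#include <thread>

class io_runner
{
public:
    io_runner()
        : work_(new boost::asio::io_service::work(io_service_)),
          io_thread_([this] { io_service_.run(); }) // run() blocks until work_ is reset
    {
    }

    ~io_runner()
    {
        work_.reset();     // let run() return once pending handlers finish
        io_thread_.join();
    }

    boost::asio::io_service& get_io_service() { return io_service_; }

private:
    boost::asio::io_service io_service_;
    std::unique_ptr<boost::asio::io_service::work> work_;
    std::thread io_thread_;
};

The work object prevents run() from returning while there are momentarily no ready handlers, which is exactly the gap that poll() in the constructor leaves open.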
With regards to threading, if each connection will have its own thread and the communication is a half-duplex protocol such as HTTP, then the application code may be simpler if it is written synchronously. On the other hand, if each connection will have its own thread but the code is written asynchronously, then consider handling exceptions thrown from within the event loop. It may be worth reading Boost.Asio's documentation on the effect of exceptions thrown from handlers.
Also, persistent_connection::write_to_socket() introduces undefined behavior. When invoking boost::asio::async_write(), it is documented that the caller retains ownership of the buffer and must guarantee that the buffer remains valid until the handler is called. In this case, the message buffer is an automatic variable, whose lifespan may end before the persistent_connection::handle_write handler is invoked. One solution could be to change the lifespan of message to match that of persistent_connection by making it a member variable.
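A minimal sketch of that fix, assuming message_ becomes a std::string member of persistent_connection (an alternative, not shown, would be to bind a shared_ptr<std::string> into the completion handler):

void persistent_connection::write_to_socket()
{
    // message_ is now a member variable, so the buffer outlives the
    // asynchronous operation instead of dying with the local scope.
    message_  = "GET /" + outbox_[0] + " HTTP/1.0\r\n";
    message_ += "Host: 10.1.10.120\r\n";
    message_ += "Accept: */*\r\n";
    boost::asio::async_write(socket_, boost::asio::buffer(message_),
        strand_.wrap(boost::bind(&persistent_connection::handle_write, this,
            boost::asio::placeholders::error,
            boost::asio::placeholders::bytes_transferred)));
}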

Am I getting a race condition with my boost asio async_read?

bool Connection::Receive()
{
    std::vector<uint8_t> buf(1000);
    boost::asio::async_read(socket_, boost::asio::buffer(buf, 1000),
        boost::bind(&Connection::handler, this, boost::asio::placeholders::error, boost::asio::placeholders::bytes_transferred));
    int rcvlen = buf.size();
    ByteBuffer b((std::shared_ptr<uint8_t>)buf.data(), rcvlen);
    if(rcvlen <= 0)
    {
        buf.clear();
        return false;
    }
    OnReceived(b);
    buf.clear();
    return true;
}
The method works fine, but only when I set a breakpoint inside it. Is there an issue with timing as it waits to receive? Without the breakpoint, nothing is received.
You are trying to read from the receive buffer immediately after starting the asynchronous operation, without waiting for it to complete; that is why it works when you set a breakpoint.
The code after your async_read belongs in Connection::handler, since that is the callback you told async_read to invoke after receiving some data.
What you usually want is a start_read and a handle_read_some function:
void connection::start_read()
{
    socket_->async_read_some(boost::asio::buffer(read_buffer_),
        boost::bind(&connection::handle_read_some, shared_from_this(),
            boost::asio::placeholders::error,
            boost::asio::placeholders::bytes_transferred));
}

void connection::handle_read_some(const boost::system::error_code& error, size_t bytes_transferred)
{
    if (!error)
    {
        // Use the data here!
        start_read();
    }
}
Note the shared_from_this; it's important if you want the lifetime of your connection to be automatically taken care of by the number of outstanding I/O requests. Make sure to derive your class from boost::enable_shared_from_this<connection> and to only create it with make_shared<connection>.
To enforce this, your constructor should be private and you can add a friend declaration (C++0x version; if your compiler does not support this, you will have to insert the correct number of arguments yourself):
template<typename T, typename... Arg> friend boost::shared_ptr<T> boost::make_shared(const Arg&...);
Also make sure your receive buffer is still alive by the time the callback is invoked, preferably by using a statically sized buffer member variable of your connection class.
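A minimal sketch of how such a connection class might be declared and used (the constructor is left public here for brevity; the friend declaration above lets you keep it private):

class connection : public boost::enable_shared_from_this<connection>
{
public:
    explicit connection(boost::shared_ptr<boost::asio::ip::tcp::socket> socket)
        : socket_(socket) {}

    void start_read();   // as shown above

private:
    void handle_read_some(const boost::system::error_code& error, size_t bytes_transferred);

    boost::shared_ptr<boost::asio::ip::tcp::socket> socket_;
    boost::array<uint8_t, 1000> read_buffer_;   // statically sized member buffer
};

void accept_handler(boost::shared_ptr<boost::asio::ip::tcp::socket> socket)
{
    // This shared_ptr, plus the copies bound into handlers via shared_from_this(),
    // keep the connection alive for as long as reads are outstanding.
    boost::shared_ptr<connection> conn = boost::make_shared<connection>(socket);
    conn->start_read();
}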