Will my buffer be used by all connections? - c++

I have a main which creates an io_service and passes it to an instance of TcpServer.
TcpServer has a member std::array<char, 8192> m_buffer. It has 4 methods: the constructor, startAccept, handleAccept and handleRead.
The constructor only initializes some members and calls startAccept.
startAccept creates a shared pointer to a TcpConnection, which extends std::enable_shared_from_this<TcpConnection>. After that, startAccept calls m_acceptor.async_accept and binds the completion handler to the handleAccept method mentioned before.
And this is my handleAccept method. It calls async_read_some with a boost::asio::buffer that wraps the member variable declared in TcpServer.
void TcpServer::handleAccept(std::shared_ptr<TcpConnection> newConnection, const boost::system::error_code &error)
{
    if (!error) {
        //newConnection->start();
        std::cout << "Accepting new connection" << std::endl;
        newConnection->getSocket().async_read_some(
            boost::asio::buffer(m_buffer),
            boost::bind(&TcpServer::handleRead, this, newConnection,
                boost::asio::placeholders::error,
                boost::asio::placeholders::bytes_transferred)
        );
    }
    startAccept();
}
I am not sure, but if there are multiple connections, all of them will use the same buffer object, right? And they will probably overwrite it, won't they?

Yes, all connections will use the same buffer, the one defined in TcpServer. You should store the buffer in the connection rather than in the server.
boost::asio::buffer(m_buffer) resolves to the std::array overload, so the data from every read will be stored in your single m_buffer. Store the buffer in the connection, or use some synchronization (e.g. an is_in_read boolean flag, but that is a bad idea).
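For illustration, here is a minimal sketch of keeping the buffer inside the connection instead. The constructor, start() method, and the lambda handler are assumptions made for the sake of a self-contained example, not your actual TcpConnection:
#include <array>
#include <cstddef>
#include <iostream>
#include <memory>
#include <boost/asio.hpp>

class TcpConnection : public std::enable_shared_from_this<TcpConnection>
{
public:
    explicit TcpConnection(boost::asio::io_service& io) : m_socket(io) {}

    boost::asio::ip::tcp::socket& getSocket() { return m_socket; }

    void start()
    {
        // Each connection reads into its own m_buffer, so concurrent
        // connections can no longer overwrite each other's data.
        std::shared_ptr<TcpConnection> self = shared_from_this();
        m_socket.async_read_some(
            boost::asio::buffer(m_buffer),
            [self](const boost::system::error_code& error, std::size_t bytesRead) {
                if (!error) {
                    std::cout << "Read " << bytesRead << " bytes" << std::endl;
                    self->start(); // issue the next read on this connection
                }
            });
    }

private:
    boost::asio::ip::tcp::socket m_socket;
    std::array<char, 8192> m_buffer; // per-connection buffer
};
In handleAccept you would then call newConnection->start() (the line you currently have commented out) and drop m_buffer from TcpServer entirely.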

Related

boost::asio problem passing dynamically sized data to async handler

I am processing a custom TCP data packet with Boost. Since all operations are asynchronous, a handler must be called to process the data. The main problem is that I don't know how to pass the data to the handler when the size is not known at compile time.
For example, say you receive the header bytes, parse them which tells you the length of the body:
int length = header.body_size();
I somehow need to allocate an array with the size of the body and then call the handler (which is a class member, not a static function) and pass the data to it. How do I do that properly?
I tried different things but always ended up getting a segfault, or I had to provide a fixed size for the body buffer, which is not what I want. One attempt can be found below.
After receiving the header information:
char data[header.body_size()];
boost::asio::async_read(_socket, boost::asio::buffer(data, header.body_size()),
    boost::bind(&TCPClient::handle_read_body, this,
        boost::asio::placeholders::error,
        boost::asio::placeholders::bytes_transferred,
        data));
The handler:
void TCPClient::handle_read_body(const boost::system::error_code &error, std::size_t bytes_transferred,
                                 const char *buffer) {
    Logger::log_info("Reading body. Body size: " + std::to_string(bytes_transferred));
}
This example throws a segfault.
How can I allocate a buffer for the body after knowing the size?
And how can I then call the handler and passing over the error_code, the bytes_transferred and the body data?
An example snippet would be really appreciated since the boost-chat examples that do this are not very clear to me.
char data[header.body_size()]; is not standard C++ (it is a VLA), and data becomes invalid once it goes out of scope, while async_read requires the buffer to remain alive until the completion callback is invoked. So you should probably add a field to TCPClient holding the data buffers (probably std::vector) that are pending to be received.
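For illustration, a minimal sketch of that member-buffer approach follows. It uses a single _body_buffer member rather than a list, and the Header type and the read_body/_header names are assumptions made only to keep the example compilable:
#include <cstddef>
#include <vector>
#include <boost/asio.hpp>
#include <boost/bind.hpp>

// Hypothetical stand-in for whatever type the question parses the header into.
struct Header
{
    std::size_t body_size() const { return size_; }
    std::size_t size_ = 0;
};

class TCPClient
{
public:
    explicit TCPClient(boost::asio::io_service& io) : _socket(io) {}

    void read_body()
    {
        // Resize the member buffer from the parsed header; the buffer lives
        // as long as the TCPClient, so it outlives the async_read.
        _body_buffer.resize(_header.body_size());
        boost::asio::async_read(_socket, boost::asio::buffer(_body_buffer),
            boost::bind(&TCPClient::handle_read_body, this,
                boost::asio::placeholders::error,
                boost::asio::placeholders::bytes_transferred));
    }

private:
    void handle_read_body(const boost::system::error_code& error, std::size_t bytes_transferred)
    {
        // On success, _body_buffer now holds bytes_transferred bytes of the body.
    }

    boost::asio::ip::tcp::socket _socket;
    Header _header;
    std::vector<char> _body_buffer;
};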
All you need to do is create the buffer on the heap instead of on the stack. In place of the VLA char[sizeAtRuntime] you can use std::string or std::vector held by a std::shared_ptr. The string/vector lets the buffer take any size known only at runtime, and the shared_ptr prolongs the lifetime of the buffer.
Version with bind:
void foo()
{
    std::shared_ptr<std::vector<char>> buf = std::make_shared<std::vector<char>>(); // buf is local
    buf->resize( header.body_size() );
    // ditto with std::string
    boost::asio::async_read(_socket, boost::asio::buffer(*buf),
        boost::bind(&TCPClient::handle_read_body,
            this, boost::asio::placeholders::error,
            boost::asio::placeholders::bytes_transferred,
            buf)); // buf is passed by value
}

void handle_read_body(const boost::system::error_code&,
                      size_t,
                      std::shared_ptr<std::vector<char>>)
{
}
In the above example buf is created on the stack and points to a vector on the heap. Because bind takes its arguments by value, buf is copied and the reference counter is increased, which means your buffer still exists even after foo returns, right up until async_read completes and the handler has run.
You can achieve the same behaviour with a lambda; then buf should be captured by value:
void foo()
{
    std::shared_ptr<std::vector<char>> buf = std::make_shared<std::vector<char>>(); // buf is local
    buf->resize( header.body_size() );
    // ditto with std::string
    boost::asio::async_read(_socket, boost::asio::buffer(*buf),
        [buf](const boost::system::error_code&, size_t)
        // ^^^ capturing buf by value increases the reference counter of the shared_ptr
        {
        });
}

Use case of shared_from_this and this

In this source file there are two classes: tcp_connection and tcp_server. I've selected the bits of code that are relevant in my opinion, but you might want to refer to the full source code for more information.
class tcp_connection : public boost::enable_shared_from_this<tcp_connection>
{
public:
    typedef boost::shared_ptr<tcp_connection> pointer;

    void start()
    {
        message_ = make_daytime_string();
        boost::asio::async_write(socket_, boost::asio::buffer(message_),
            boost::bind(&tcp_connection::handle_write, shared_from_this()));
    }
};
class tcp_server
{
private:
    void start_accept()
    {
        tcp_connection::pointer new_connection =
            tcp_connection::create(acceptor_.get_io_service());

        acceptor_.async_accept(new_connection->socket(),
            boost::bind(&tcp_server::handle_accept, this, new_connection,
                boost::asio::placeholders::error));
    }
};
My question is simple: why would we use shared_from_this as a bind argument within the async_write call, but this as a bind argument within the async_accept call?
Shared pointers govern the lifetime of a dynamically allocated object. Each held pointer increases a reference count, and when all held pointers are gone the referred-to object is freed.
The Server
There's only one server, and it's not dynamically allocated. The instance lives longer than the acceptor (and possibly the io_service), so all async operations can trust the object to stay alive long enough.
The Connections
Each client spawns a new connection, dynamically allocating (make_shared) a tcp_connection instance, and then starting asynchronous operations on it.
The server does not keep a copy of the shared pointer, so when all async operations on the connection complete (e.g. because the connection was dropped), the tcp_connection object will be freed.
However, because the object must not be destroyed while an async operation is in progress, you need to bind the completion handler to the shared pointer (shared_from_this) instead of this.
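To make the ownership flow concrete, here is a hedged sketch of the accept handler (the exact body differs from the tutorial; the comments spell out the lifetime reasoning above):
void tcp_server::handle_accept(tcp_connection::pointer new_connection,
                               const boost::system::error_code& error)
{
    if (!error)
        new_connection->start(); // start() binds shared_from_this() into async_write

    start_accept();
}   // new_connection, the server's only copy, goes out of scope here; from now
    // on only the pending async_write's handler owns the connection, and the
    // connection is freed once that handler has run.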

How do you correctly close sockets in boost::asio?

I am trying to wrap my head around resource management in boost::asio. I am seeing callbacks called after the corresponding sockets are already destroyed. A good example of this is in the boost::asio official example: http://www.boost.org/doc/libs/1_60_0/doc/html/boost_asio/example/cpp11/chat/chat_client.cpp
I am particularly concerned with the close method:
void close()
{
    io_service_.post([this]() { socket_.close(); });
}
If you call this function and afterwards destroy the chat_client instance that holds socket_, socket_ will be destroyed before the posted close is called on it. Also, any pending async_* callbacks can be invoked after the chat_client has been destroyed.
How would you correctly handle this?
You can call socket_.close() almost any time you want, but you should keep a few things in mind:
- If you have threads, the call should be wrapped in a strand or you can crash. See the Boost strand documentation (and the sketch after this list).
- io_service may already have queued handlers, and they will be called anyway with the old state/error code.
- close can throw an exception.
- close does NOT destroy the ip::tcp::socket; it just closes the system socket.
- You must manage object lifetime yourself to ensure objects are destroyed only when there are no more handlers. Usually this is done with enable_shared_from_this on your Connection or socket object.
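For the threaded case, here is a minimal sketch of posting the close through a strand. The connection class, its strand_ member, and the use of enable_shared_from_this are assumptions made for illustration, not part of the question's code:
#include <boost/asio.hpp>
#include <boost/enable_shared_from_this.hpp>
#include <boost/shared_ptr.hpp>

class connection : public boost::enable_shared_from_this<connection>
{
public:
    explicit connection(boost::asio::io_service& io)
        : strand_(io), socket_(io) {}

    void close()
    {
        // Run the close on the strand so it cannot race with handlers
        // executing on other threads; the captured shared_ptr keeps the
        // object alive until the posted handler has run.
        boost::shared_ptr<connection> self = shared_from_this();
        strand_.post([self]() {
            boost::system::error_code ec;
            self->socket_.close(ec); // error_code overload: never throws
        });
    }

private:
    boost::asio::io_service::strand strand_;
    boost::asio::ip::tcp::socket socket_;
};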
Invoking socket.close() does not destroy the socket. However, the application may need to manage the lifetime of the objects upon which the operation and completion handlers depend, and that is not necessarily the socket object itself. For instance, consider a client class that holds a buffer and a socket and has a single outstanding read operation with a completion handler of client::handle_read(). One can close() and explicitly destroy the socket, but the buffer and the client instance must remain valid until at least the handler is invoked:
class client
{
    ...
    void read()
    {
        // Post handler that will start a read operation.
        io_service_.post([this]() {
            async_read(*socket_, boost::asio::buffer(buffer_),
                boost::bind(&client::handle_read, this,
                    boost::asio::placeholders::error,
                    boost::asio::placeholders::bytes_transferred));
        });
    }

    void handle_read(
        const boost::system::error_code& error,
        std::size_t bytes_transferred
    )
    {
        // make use of data members...if socket_ is not used, then it
        // is safe for socket to have already been destroyed.
    }

    void close()
    {
        io_service_.post([this]() {
            socket_->close();
            // As long as outstanding completion handlers do not
            // invoke operations on socket_, then socket_ can be
            // destroyed.
            socket_.reset();
        });
    }

private:
    boost::asio::io_service& io_service_;
    // Not a typical pattern, but used to exemplify that outstanding
    // operations on `socket_` are not explicitly dependent on the
    // lifetime of `socket_`.
    std::unique_ptr<boost::asio::ip::tcp::socket> socket_;
    std::array<char, 512> buffer_;
    ...
};
The application is responsible for managing the lifetime of the objects upon which the operations and handlers depend. The chat client example accomplishes this by guaranteeing that the chat_client instance is destroyed only after it is no longer in use, waiting within the thread's join() for io_service.run() to return:
int main(...)
{
    try
    {
        ...
        boost::asio::io_service io_service;
        chat_client c(...);

        std::thread t([&io_service](){ io_service.run(); });
        ...
        c.close();
        t.join(); // Wait for `io_service.run` to return, guaranteeing
                  // that `chat_client` is no longer in use.
    } // The `chat_client` instance is destroyed.
    catch (std::exception& e)
    {
        ...
    }
}
One common idiom for managing object lifetime is to have the I/O object be owned by a single class that inherits from enable_shared_from_this<>. When a class inherits from enable_shared_from_this, it provides a shared_from_this() member function that returns a valid shared_ptr instance managing this. A copy of the shared_ptr is passed to completion handlers, either captured by value in a lambda or passed as the instance handle to bind(), causing the lifetime of the I/O object to be extended to at least as long as the handler. See the Boost.Asio asynchronous TCP daytime server tutorial for an example using this approach.
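A minimal sketch of that idiom with a lambda, assuming a session class that owns the socket (the class and member names are illustrative, not taken from the chat example):
#include <array>
#include <cstddef>
#include <memory>
#include <utility>
#include <boost/asio.hpp>

class session : public std::enable_shared_from_this<session>
{
public:
    explicit session(boost::asio::ip::tcp::socket socket)
        : socket_(std::move(socket)) {}

    void start() { do_read(); }

private:
    void do_read()
    {
        // The copied shared_ptr lives inside the handler, so *this stays
        // alive at least until the handler has been invoked and destroyed.
        std::shared_ptr<session> self = shared_from_this();
        socket_.async_read_some(boost::asio::buffer(buffer_),
            [this, self](const boost::system::error_code& ec, std::size_t /*length*/) {
                if (!ec)
                    do_read(); // keep the chain (and the session) alive
            });
    }

    boost::asio::ip::tcp::socket socket_;
    std::array<char, 512> buffer_;
};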

Why does this ASIO example use members variables to pass state rather than using bind?

In the ASIO HTTP Server 3 example there is code like this:
void server::start_accept()
{
    new_connection_.reset(new connection(io_service_, request_handler_));
    acceptor_.async_accept(new_connection_->socket(),
        boost::bind(&server::handle_accept, this,
            boost::asio::placeholders::error));
}

void server::handle_accept(const boost::system::error_code& e)
{
    if (!e)
    {
        new_connection_->start();
    }
    start_accept();
}
Essentially, new_connection_ is a member of the server class and is used to pass a connection from start_accept to handle_accept. Now, I'm curious as to why new_connection_ is implemented as a member variable.
Wouldn't it also work to pass the connection using bind instead of a member variable? Like this:
void server::start_accept()
{
    std::shared_ptr<connection> new_connection(new connection(io_service_, request_handler_));
    acceptor_.async_accept(new_connection->socket(),
        boost::bind(&server::handle_accept, this,
            boost::asio::placeholders::error,
            new_connection));
}

void server::handle_accept(boost::system::error_code const& error, std::shared_ptr<connection> new_connection)
{
    if (!error) {
        new_connection->start();
    }
    start_accept();
}
If so, why does the example use member variables? Is it to avoid the copying involved?
(note: I'm not comfortable with ASIO yet and so there may be a fundamental misconception here)
Passing the connection inside a functor created with std::bind is more or less the same as retaining it as a member variable in the http::server3::server class. Using bind will create temporaries, whereas using the member variable will not. I don't think that is a big concern here, as a std::shared_ptr is not terribly expensive to copy, nor is this code path in the example a performance-critical section.
When writing my own applications I find myself using both techniques. If the asynchronous call chain is very long I will typically retain the variables as members to simplify the handlers' function signatures and prevent code repetition. For shorter call chains, keeping the state variables in the functor created by bind makes the code's logic a bit easier to follow.

Am I getting a race condition with my boost asio async_read?

bool Connection::Receive(){
    std::vector<uint8_t> buf(1000);
    boost::asio::async_read(socket_, boost::asio::buffer(buf, 1000),
        boost::bind(&Connection::handler, this,
            boost::asio::placeholders::error,
            boost::asio::placeholders::bytes_transferred));

    int rcvlen = buf.size();
    ByteBuffer b((std::shared_ptr<uint8_t>)buf.data(), rcvlen);
    if(rcvlen <= 0){
        buf.clear();
        return false;
    }
    OnReceived(b);
    buf.clear();
    return true;
}
The method works fine but only when I make a breakpoint inside it. Is there an issue with timing as it waits to receive? Without the breakpoint, nothing is received.
You are trying to read from the receive buffer immediately after starting the asynchronous operation, without waiting for it to complete, that is why it works when you set a breakpoint.
The code after your async_read belongs in Connection::handler, since that is the callback you told async_read to invoke after receiving some data.
What you usually want is a start_read and a handle_read_some function:
void connection::start_read()
{
    socket_->async_read_some(boost::asio::buffer(read_buffer_),
        boost::bind(&connection::handle_read_some, shared_from_this(),
            boost::asio::placeholders::error,
            boost::asio::placeholders::bytes_transferred));
}

void connection::handle_read_some(const boost::system::error_code& error, size_t bytes_transferred)
{
    if (!error)
    {
        // Use the data here!
        start_read();
    }
}
Note the shared_from_this, it's important if you want the lifetime of your connection to be automatically taken care of by the number of outstanding I/O requests. Make sure to derive your class from boost::enable_shared_from_this<connection> and to only create it with make_shared<connection>.
To enforce this, your constructor should be private and you can add a friend declaration (C++0x version; if your compiler does not support this, you will have to insert the correct number of arguments yourself):
template<typename T, typename... Arg> friend boost::shared_ptr<T> boost::make_shared(const Arg&...);
Also make sure your receive buffer is still alive by the time the callback is invoked, preferably by using a statically sized buffer member variable of your connection class.
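Putting the pieces together, here is a hedged sketch of such a connection class. A static create() returning a shared_ptr is used here instead of the make_shared friend declaration above, as a simpler way to enforce the same "only ever held by shared_ptr" invariant:
#include <array>
#include <cstddef>
#include <boost/asio.hpp>
#include <boost/bind.hpp>
#include <boost/enable_shared_from_this.hpp>
#include <boost/shared_ptr.hpp>

class connection : public boost::enable_shared_from_this<connection>
{
public:
    typedef boost::shared_ptr<connection> pointer;

    static pointer create(boost::asio::io_service& io)
    {
        return pointer(new connection(io));
    }

    boost::asio::ip::tcp::socket& socket() { return socket_; }

    void start_read()
    {
        socket_.async_read_some(boost::asio::buffer(read_buffer_),
            boost::bind(&connection::handle_read_some, shared_from_this(),
                boost::asio::placeholders::error,
                boost::asio::placeholders::bytes_transferred));
    }

private:
    explicit connection(boost::asio::io_service& io) : socket_(io) {}

    void handle_read_some(const boost::system::error_code& error, std::size_t bytes_transferred)
    {
        if (!error)
        {
            // read_buffer_ is a member, so it is still alive here; use the
            // first bytes_transferred bytes, then keep reading.
            start_read();
        }
    }

    boost::asio::ip::tcp::socket socket_;
    std::array<char, 1024> read_buffer_; // statically sized, lives as long as the connection
};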