boost::asio::async_write and buffers over 65536 bytes - c++

I have a very simple method, with the purpose of responding to an incoming message, and then closing the connection:
void respond ( const std::string message )
{
    std::string str = "<?xml version=\"1.0\"?>";
    Controller & controller = Controller::Singleton();
    if ( auto m = handleNewMessage( message ) )
    {
        auto reply = controller.FIFO( m );
        str.append( reply );
    }
    else
        str.append( "<Error/>" );

    std::size_t bytes = str.size() * sizeof( std::string::value_type );
    std::cout << "Reply bytesize " << bytes << std::endl;

    boost::asio::async_write(
        socket_,
        boost::asio::buffer( str ),
        boost::bind(
            &TCPConnection::handle_write,
            shared_from_this(),
            boost::asio::placeholders::error,
            boost::asio::placeholders::bytes_transferred
        ));
}
void handle_write ( const boost::system::error_code & error, size_t bytes_transferred )
{
    if ( error )
    {
        std::cerr << "handle_write Error: " << error.message() << std::endl;
        std::cerr << "handle_write Bytes sent: " << bytes_transferred << std::endl;
    }
    else
    {
        std::cerr << "handle_write Bytes sent: " << bytes_transferred << std::endl;
        socket_.close();
    }
}
I know the problem is that boost::asio::async_write does not complete the writing operation, because the output from the above operations is:
Reply bytesize: 354275
handle_write Bytes sent: 65536
Implying that the maximum buffer size (65536) was not enough to write the data?
Searching around Stack Overflow, I discovered that my problem is that the buffer created by the method:
boost::asio::buffer( str )
goes out of scope before the operation has a chance to finish sending all the data.
It seems I can't use a boost::asio::mutable_buffer, but only a boost::asio::streambuf.
Furthermore, and more importantly, a second error complains when boost::asio::async_write is passed a boost::asio::const_buffer OR boost::asio::mutable_buffer:
/usr/include/boost/asio/detail/consuming_buffers.hpp:164:5: error: no type named ‘const_iterator’ in ‘class boost::asio::mutable_buffer’
const_iterator;
^
/usr/include/boost/asio/detail/consuming_buffers.hpp:261:36: error: no type named ‘const_iterator’ in ‘class boost::asio::mutable_buffer’
typename Buffers::const_iterator begin_remainder_;
So I am left with only one choice: to use a boost::asio::streambuf.
I've tried using:
boost::asio::streambuf _out_buffer;
as a class member, and then modified the respond method:
std::ostream os( &_out_buffer );
os << str;

boost::asio::async_write(
    socket_,
    _out_buffer,
    boost::asio::transfer_exactly( bytes ),
    boost::bind(
        &TCPConnection::handle_write,
        shared_from_this(),
        boost::asio::placeholders::error,
        boost::asio::placeholders::bytes_transferred
    ));
However, although I get no errors, not all of the data is sent!
So I am guessing the entire string is not written into the streambuf?
Alternatively, I would love to know what is the most elegant way to write using boost::asio::async_write, data that is larger than 65536 bytes!

Alex, you are misunderstanding how asio async operations work. Your problem is all about the lifetime of the buffer and the socket.
The buffer has to stay alive and the socket open for the whole transmission time (from the asio::async_write call until the handle_write callback is invoked by the Asio io_service dispatcher).
To better understand how it works, consider that every time you call some boost::asio::async_{operation} you are posting a pointer to the operation's data and a pointer to a callback function onto the job queue. It is Asio's decision when to execute your job (though of course it tries to do it as fast as possible =)). When the whole (possibly big) I/O operation completes, Asio informs you via the specified callback. Only then can you freely release the resources.
So, to make your code work you have to ensure that std::string str still exists and _socket is not closed until the handle_write callback runs. You can replace the stack-allocated std::string str variable with a member variable of the class that aggregates _socket. And make sure socket_.close(); is only called from handle_write, after the write has completed.
Hope I helped you.
P.S. When you do boost::asio::buffer( str ), you don't copy the contents of the string; you just create a thin wrapper over the string's data.

The code:
_out_buffer( static_cast<void*>( &str.front() ), bytes );
is only valid when initializing _out_buffer, i.e. before the body of your class's constructor begins.
That code is equivalent to
_out_buffer.operator()( static_cast<void*>( &str.front() ), bytes )
Of course there is no such operator in class mutable_buffer, and that's what the compiler is complaining about.
I think the simplest thing to do (but not the best) is to change that line to:
_out_buffer = boost::asio::mutable_buffer(
    static_cast<void*>( &str.front() ),
    bytes
);

Related

How can I effectively use boost::process::async_pipe for both writing and reading?

I've already seen the boost::process tutorial... but there the example is a single write then a single read from the child process. I want to know if it is possible to have both async_pipes alive (read and write) during the child process lifetime.
I'm having trouble reading from child->parent pipe (read_pipe, in my code). I can only read from it after closing the parent->child pipe (write_pipe), but that means that I won't write anything ever again on this child process, right? This makes sense, but there is some workaround to maintain a bi-directional channel? My final objective is to continuously alternate between reading and writing chunks.
#include <boost/asio.hpp>
#include <boost/process.hpp>
#include <functional>
#include <iostream>
#include <string>
#include <vector>

typedef std::function<void(const boost::system::error_code & ec, std::size_t n)> Handler;

int main(){
    std::vector<char> vread_buffer(4096);
    boost::asio::io_service ios;
    boost::process::async_pipe read_pipe{ios};
    boost::process::async_pipe write_pipe{ios};
    auto child_process = boost::process::child( "stockfish_12_win_x64/stockfish_20090216_x64.exe" ,
                                                boost::process::std_in < write_pipe ,
                                                boost::process::std_out > read_pipe );
    Handler on_stdout, on_stdin;
    std::string read_string;
    on_stdout = [&](const boost::system::error_code & ec, size_t n){
        std::cout << "I'm reading " << n << " characters from the child process. Can't wait for more!" << std::endl;
        read_string.reserve( read_string.size() + n );
        read_string.insert( read_string.end() , vread_buffer.begin() , vread_buffer.begin() + n );
        if(!ec) boost::asio::async_read( read_pipe , boost::asio::buffer(vread_buffer) , on_stdout );
    };
    on_stdin = [&]( const boost::system::error_code & ec, std::size_t n ){
        std::cout << "I know that " << n << " characters were sent to the process. Yay! " << std::endl;
    };
    // {...} Suppose an undefined amount of time has passed
    // Expected: to see the initial print from the program calling on_stdout
    // ... but it doesn't call anything.
    boost::asio::async_read( read_pipe , boost::asio::buffer(vread_buffer) , on_stdout );
    ios.poll();
    ios.restart();
    // Expected: to see the "on_stdin" handler being called... and it was!
    std::string write_string = "uci\n";
    boost::asio::async_write( write_pipe , boost::asio::buffer(write_string) , on_stdin );
    ios.poll();
    ios.restart();
    // A subsequent async_read will never do anything unless I close the write_pipe.
    // How can I tell the io_service that the last write was done so I can read again?
    boost::asio::async_read( read_pipe , boost::asio::buffer(vread_buffer) , on_stdout );
    ios.poll();
    ios.restart();
}
I think I figured it out. I should use boost::asio::async_read_until, passing a delimiter (char/regex/string), instead of boost::asio::async_read.
This way, boost::asio::buffer should be changed to boost::asio::dynamic_buffer:
std::string read_string;
auto d_buffer = boost::asio::dynamic_buffer(read_string);
// First read
boost::asio::async_read_until( read_pipe , d_buffer , '\n' , on_stdout );
ios.poll();
ios.restart();
// Any number of writes can be done here with async_write
// Second read
// New content is appended to read_string either way; consuming just keeps
// read_string from growing indefinitely. I also noticed unexpected behavior
// (new content appended at random middle points) when not consuming all content.
d_buffer.consume(d_buffer.size());
boost::asio::async_read_until( read_pipe , d_buffer , '\n' , on_stdout );
ios.poll();
ios.restart();

boost::asio::io_service async_write in loop while doing an async_read

So my problem is the following: I have an async TCP server and the respective async client. What I need is a way to be able to write to my client (continuously) a real-time variable, while at the same time being able to receive commands from the client.
What I have right now is: if the client sends the command that should trigger this operation, the server sends just a test message. But I'm only finding a way to send one message, because after that the server hangs waiting for a client command.
This function is the one that handles the commands sent from the client and after receiving passes them to the function h_read:
void conn::h_write() {
    memset( data_, '\0', sizeof(char)*max_length );
    async_read_until( sock_, input_buffer_, '\n',
        boost::bind(&conn::h_read, shared_from_this()) );
}
Here I check if the command is the one that should trigger continuous writing of the real-time buffer to the client; in this case the command is "c".
void conn::h_read(){
    std::string line;
    std::istream is(&input_buffer_);
    std::getline(is, line);
    std::string output = "";
    output.reserve(5000);
    if ( line == "exit" ){
        return;
    }
    if ( line.empty() ){
        memset( data_, '\0', sizeof(char)*max_length );
        async_read_until(sock_ , input_buffer_, '\n', boost::bind(&conn::h_read, shared_from_this()));
        return;
    }
    clientMessage_ = line;
    clientMessage_ += '\n';
    if ( clientMessage_.substr(0,1) == "c" ){
        std::stringstream toSend;
        streamON = true;
        toSend.str("c l 1 ");
        toSend << std::fixed << std::setprecision( 2 ) << luxID1[0];
        toSend.str(" ");
        // Here I send the real-time value for the first time
        boost::asio::async_write(sock_, boost::asio::buffer( toSend.str() ), boost::bind(&conn::sendRealTime, shared_from_this()));
    }
    else{ // Doesn't really matter to this example
        // Do some stuff here and send to client
        boost::asio::async_write(sock_, boost::asio::buffer( I2CrxBuf_ ), boost::bind(&conn::h_write, shared_from_this()));
    }
}
Now this is the function that should handle the continuous sending of the variable while at the same time being able to read the client commands:
void conn::sendRealTime(){
    if ( streamON ){
        boost::asio::async_write(sock_, boost::asio::buffer( "This is a test\n" ), boost::bind(&conn::h_write, shared_from_this()));
        memset( data_, '\0', sizeof(char)*max_length );
        async_read_until(sock_ , input_buffer_, '\n', boost::bind(&conn::h_read, shared_from_this()));
    }
    else{
        memset( data_, '\0', sizeof(char)*max_length );
        async_read_until(sock_ , input_buffer_, '\n', boost::bind(&conn::h_read, shared_from_this()));
    }
}
The problem is that it blocks after the first call to the async_read_until functions.
I don't even know if what I want is even possible, but if it is could someone please help me on how to do it?
What I need is a way to be able to write to my client (continuously),
a real time variable, while at the same time being able to receive
commands from the client.
This is not a problem.
Separate your read and write functions. Call async_read_until once after the socket was successfully connected / initialized. Then in your read-handler call it again. Nowhere else. This is the usual way of performing read operations.
Please also refer to the documentation.
The program must ensure that the stream performs no other read
operations (such as async_read, async_read_until, the stream's
async_read_some function, or any other composed operations that
perform reads) until this operation completes.
Remember that the data in the read-buffer may contain more than you expect it to be.
After a successful async_read_until operation, the streambuf may
contain additional data beyond the delimiter. An application will
typically leave that data in the streambuf for a subsequent
async_read_until operation to examine.

Boost::asio::async_write doesn't seem to free memory

I have a cluster program using boost asio to handle the network part.
I'm using the async_write function to write messages from the server to the client:
boost::asio::async_write( *m_Socket,
    boost::asio::buffer( iData, iSize ),
    boost::bind(
        &MyObject::handle_write, this,
        boost::asio::placeholders::error ) );
My handle_write method :
void
MyObject::handle_write( const boost::system::error_code& error )
{
    std::cout << "handle_write" << std::endl;
    if (error)
    {
        std::cout << "Write error !" << std::endl;
        m_Server->RemoveSession(this);
    }
}
It seems to work well. When I use a memory leak detector program, there is no leak at all.
But my program is supposed to run for many days without interruption, and during testing it turned out that I don't have enough memory... After some inspection, I found that my program was allocating around 0.3 MB per second. And with a memory validator I found that this was happening inside boost::asio::async_write...
I checked the documentation and I think I use it in the correct way... Am I missing something?
EDIT 1:
This is how I call the function that calls async_write itself:
NetworkMessage* msg = new NetworkMessage;
sprintf( msg->Body(), "%s", iData );
m_BytesCount += msg->Length();
uint32 nbSessions = m_Sessions.size();
// Send to all clients
for( uint32 i = 0; i < nbSessions; i++ )
{
    m_Sessions[i]->Write( msg->Data(), msg->Length() );
}
delete msg;
msg->Data() is the data passed to async_write.

boost::asio async server design

Currently I'm using a design where the server reads the first 4 bytes of the stream, then reads N bytes after decoding the header.
But I found that the time between the first async_read and the second read is 3-4 ms. I just printed timestamps from the callbacks to the console for measuring. I sent 10 bytes of data in total. Why does it take so much time to read?
I'm running it in debug mode, but I think that 1 connection in debug is not so much as to cause a 3 ms delay between reads from the socket. Maybe I need another approach to cut the TCP stream into "packets"?
UPDATE: I've posted some code here
void parseHeader(const boost::system::error_code& error)
{
    cout << "[parseHeader] " << lib::GET_SERVER_TIME() << endl;
    if (error) {
        close();
        return;
    }
    GenTCPmsg::header result = msg.parseHeader();
    if (result.error == GenTCPmsg::parse_error::__NO_ERROR__) {
        msg.setDataLength(result.size);
        boost::asio::async_read(*socket,
            boost::asio::buffer(msg.data(), result.size),
            (*_strand).wrap(
                boost::bind(&ConnectionInterface::parsePacket, shared_from_this(), boost::asio::placeholders::error)));
    } else {
        close();
    }
}
void parsePacket(const boost::system::error_code& error)
{
    cout << "[parsePacket] " << lib::GET_SERVER_TIME() << endl;
    if (error) {
        close();
        return;
    }
    protocol->parsePacket(msg);
    msg.flush();
    boost::asio::async_read(*socket,
        boost::asio::buffer(msg.data(), config::HEADER_SIZE),
        (*_strand).wrap(
            boost::bind(&ConnectionInterface::parseHeader, shared_from_this(), boost::asio::placeholders::error)));
}
As you can see, the unix timestamps differ by 3-4 ms. I want to understand why so much time elapses between parseHeader and parsePacket. This is not a client problem: the total data is 10 bytes, but I can't send much, much more, and the delay is exactly between the calls. I'm using flash client version 11. What I do is just send a ByteArray through the opened socket. I'm not sure the delay is on the client. I send all 10 bytes at once. How can I debug where the actual delay is?
There are far too many unknowns to identify the root cause of the delay from the posted code. Nevertheless, there are a few approaches and considerations that can be taken to help to identify the problem:
Enable handler tracking for Boost.Asio 1.47+. Simply define BOOST_ASIO_ENABLE_HANDLER_TRACKING and Boost.Asio will write debug output, including timestamps, to the standard error stream. These timestamps can be used to help filter out delays introduced by application code (parseHeader(), parsePacket(), etc.).
Verify that byte-ordering is being handled properly. For example, if the protocol defines the header's size field as two bytes in network-byte-order and the server is handling the field as a raw short, then upon receiving a message that has a body size of 10:
A big-endian machine will call async_read reading 10 bytes. The read operation should complete quickly as the socket already has the 10 byte body available for reading.
A little-endian machine will call async_read reading 2560 bytes. The read operation will likely remain outstanding, as far more bytes are trying to be read than is intended.
Use tracing tools such as strace, ltrace, etc.
Modify Boost.Asio, adding timestamps throughout the callstack. Boost.Asio is shipped as a header-file only library. Thus, users may modify it to provide as much verbosity as desired. While not the cleanest or easiest of approaches, adding a print statement with timestamps throughout the callstack may help provide visibility into timing.
Try duplicating the behavior in a short, simple, self-contained example. Start with the simplest of examples to determine if the delay is systematic. Then iteratively expand upon the example so that it becomes closer to the real code with each iteration.
Here is a simple example from which I started:
#include <iostream>
#include <boost/array.hpp>
#include <boost/asio.hpp>
#include <boost/bind.hpp>
#include <boost/date_time/posix_time/posix_time.hpp>
#include <boost/enable_shared_from_this.hpp>
#include <boost/make_shared.hpp>
#include <boost/shared_ptr.hpp>

class tcp_server
    : public boost::enable_shared_from_this< tcp_server >
{
private:
    enum
    {
        header_size = 4,
        data_size   = 10,
        buffer_size = 1024,
        max_stamp   = 50
    };

    typedef boost::asio::ip::tcp tcp;

public:
    typedef boost::array< boost::posix_time::ptime, max_stamp > time_stamps;

public:
    tcp_server( boost::asio::io_service& service,
                unsigned short port )
        : strand_( service ),
          acceptor_( service, tcp::endpoint( tcp::v4(), port ) ),
          socket_( service ),
          index_( 0 )
    {}

    /// @brief Returns collection of timestamps.
    time_stamps& stamps()
    {
        return stamps_;
    }

    /// @brief Start the server.
    void start()
    {
        acceptor_.async_accept(
            socket_,
            boost::bind( &tcp_server::handle_accept, this,
                         boost::asio::placeholders::error ) );
    }

private:
    /// @brief Accept connection.
    void handle_accept( const boost::system::error_code& error )
    {
        if ( error )
        {
            std::cout << error.message() << std::endl;
            return;
        }
        read_header();
    }

    /// @brief Read header.
    void read_header()
    {
        boost::asio::async_read(
            socket_,
            boost::asio::buffer( buffer_, header_size ),
            boost::bind( &tcp_server::handle_read_header, this,
                         boost::asio::placeholders::error,
                         boost::asio::placeholders::bytes_transferred ) );
    }

    /// @brief Handle reading header.
    void
    handle_read_header( const boost::system::error_code& error,
                        std::size_t bytes_transferred )
    {
        if ( error )
        {
            std::cout << error.message() << std::endl;
            return;
        }

        // If no more stamps can be recorded, then stop the async-chain so
        // that io_service::run can return.
        if ( !record_stamp() ) return;

        // Read data.
        boost::asio::async_read(
            socket_,
            boost::asio::buffer( buffer_, data_size ),
            boost::bind( &tcp_server::handle_read_data, this,
                         boost::asio::placeholders::error,
                         boost::asio::placeholders::bytes_transferred ) );
    }

    /// @brief Handle reading data.
    void handle_read_data( const boost::system::error_code& error,
                           std::size_t bytes_transferred )
    {
        if ( error )
        {
            std::cout << error.message() << std::endl;
            return;
        }

        // If no more stamps can be recorded, then stop the async-chain so
        // that io_service::run can return.
        if ( !record_stamp() ) return;

        // Start reading header again.
        read_header();
    }

    /// @brief Record time stamp.
    bool record_stamp()
    {
        stamps_[ index_++ ] = boost::posix_time::microsec_clock::local_time();
        return index_ < max_stamp;
    }

private:
    boost::asio::io_service::strand strand_;
    tcp::acceptor acceptor_;
    tcp::socket socket_;
    boost::array< char, buffer_size > buffer_;
    time_stamps stamps_;
    unsigned int index_;
};

int main()
{
    boost::asio::io_service service;

    // Create and start the server.
    boost::shared_ptr< tcp_server > server =
        boost::make_shared< tcp_server >( boost::ref(service), 33333 );
    server->start();

    // Run. This will exit once enough time stamps have been sampled.
    service.run();

    // Iterate through the stamps.
    tcp_server::time_stamps& stamps = server->stamps();
    typedef tcp_server::time_stamps::iterator stamp_iterator;
    using boost::posix_time::time_duration;
    for ( stamp_iterator iterator = stamps.begin() + 1,
                         end      = stamps.end();
          iterator != end;
          ++iterator )
    {
        // Obtain the delta between the current stamp and the previous.
        time_duration delta = *iterator - *(iterator - 1);
        std::cout << "Delta: " << delta.total_milliseconds() << " ms"
                  << std::endl;
    }

    // Calculate the total delta.
    time_duration delta = *stamps.rbegin() - *stamps.begin();
    std::cout << "Total"
              << "\n  Start: " << *stamps.begin()
              << "\n  End:   " << *stamps.rbegin()
              << "\n  Delta: " << delta.total_milliseconds() << " ms"
              << std::endl;
}
A few notes about the implementation:
There is only one thread (main) and one asynchronous chain read_header->handle_read_header->handle_read_data. This should minimize the amount of time a ready-to-run handler spends waiting for an available thread.
To focus on boost::asio::async_read, noise is minimized by:
Using a pre-allocated buffer.
Not using shared_from_this() or strand::wrap.
Recording the timestamps, and performing processing post-collection.
I compiled on CentOS 5.4 using gcc 4.4.0 and Boost 1.50. To drive the data, I opted to send 1000 bytes using netcat:
$ ./a.out > output &
[1] 18623
$ echo "$(for i in {0..1000}; do echo -n "0"; done)" | nc 127.0.0.1 33333
[1]+ Done ./a.out >output
$ tail output
Delta: 0 ms
Delta: 0 ms
Delta: 0 ms
Delta: 0 ms
Delta: 0 ms
Delta: 0 ms
Total
Start: 2012-Sep-10 21:22:45.585780
End: 2012-Sep-10 21:22:45.586716
Delta: 0 ms
Observing no delay, I expanded upon the example by modifying the boost::asio::async_read calls, replacing this with shared_from_this() and wrapping the ReadHandlers with strand_.wrap(). I ran the updated example and still observed no delay. Unfortunately, that is as far as I could get based on the code posted in the question.
Consider expanding upon the example, adding in a piece from the real implementation with each iteration. For example:
Start with using the msg variable's type to control the buffer.
Next, send valid data, and introduce the parseHeader() and parsePacket() functions.
Finally, introduce the lib::GET_SERVER_TIME() print.
If the example code is as close as possible to the real code, and no delay is being observed with boost::asio::async_read, then the ReadHandlers may be ready-to-run in the real code, but they are waiting on synchronization (the strand) or a resource (a thread), resulting in a delay:
If the delay is the result of synchronization with the strand, then consider Robin's suggestion by reading a larger block of data to potentially reduce the amount of reads required per-message.
If the delay is the result of waiting for a thread, then consider having an additional thread call io_service::run().
One thing that makes Boost.Asio awesome is using the async feature to the fullest. Relying on a specific number of bytes read in one batch, possibly ditching some of what could already have been read, isn't really what you should be doing.
Instead, look at the example for the webserver, especially this: http://www.boost.org/doc/libs/1_51_0/doc/html/boost_asio/example/http/server/connection.cpp
A boost tribool is used to either a) complete the request if all data is available in one batch, b) ditch it if it's available but not valid, and c) just read more when the io_service chooses to if the request was incomplete. The connection object is shared with the handler through a shared pointer.
Why is this superior to most other methods? You can possibly save the time between reads by already parsing the request. This is sadly not followed through in the example, but ideally you'd thread the handler so it can work on the data already available while the rest is added to the buffer. The only time it's blocking is when the data is incomplete.
Hope this helps; I can't shed any light on why there is a 3 ms delay between reads though.

Consume only part of data in boost::asio basic_stream_socket::async_read_some handler

I am new to boost::asio so my question might be dumb - sorry if it is such.
I am writing asynchronous server application with keepalive (multiple requests may be sent on single connection).
Connection handling routine is simple:
In a loop:
schedule read request with socket->async_read_some(buffer, handler)
from handler schedule write response with async_write.
The problem I am facing is that when the handler passed to async_read_some is called by one of the io_service threads, the buffer may actually contain more data than a single request (e.g. part of the next request sent by the client).
I do not want to (and cannot, if it is only part of a request) handle these remaining bytes at the moment.
I would like to do it after handling the previous request is finished.
It would be easy to address this if I had the possibility to reinject the unnecessary remaining data back into the socket, so it is handled on the next async_read_some call.
Is there such a possibility in boost::asio, or do I have to store the remaining data somewhere aside and handle it myself with extra code?
I think what you are looking for is asio::streambuf.
Basically, you can inspect your seeded streambuf as a char*, read as much as you see fit, and then inform how much was actually processed by consume(amount).
Working code-example to parse HTTP-header as a client:
#include <boost/asio.hpp>
#include <boost/bind.hpp>
#include <iostream>
#include <string>
namespace asio = boost::asio;
std::string LINE_TERMINATION = "\r\n";
class Connection {
    asio::streambuf _buf;
    asio::ip::tcp::socket _socket;
public:
    Connection(asio::io_service& ioSvc, asio::ip::tcp::endpoint server)
        : _socket(ioSvc)
    {
        _socket.connect(server);
        _socket.send(boost::asio::buffer("GET / HTTP/1.1\r\nHost: localhost\r\nConnection: close\r\n\r\n"));
        readMore();
    }

    void readMore() {
        // Allocate 13 bytes of space on the end of the buffer. Evil prime number to prove the algorithm works.
        asio::streambuf::mutable_buffers_type buf = _buf.prepare(13);

        // Perform read
        _socket.async_read_some(buf, boost::bind(
            &Connection::onRead, this,
            asio::placeholders::bytes_transferred, asio::placeholders::error
        ));
    }

    void onRead(size_t read, const boost::system::error_code& ec) {
        if ((!ec) && (read > 0)) {
            // Mark in the buffer how much was actually read
            _buf.commit(read);

            // Use some ugly parsing to extract whole lines.
            const char* data_ = boost::asio::buffer_cast<const char*>(_buf.data());
            std::string data(data_, _buf.size());
            size_t start = 0;
            size_t end = data.find(LINE_TERMINATION, start);
            while (end < data.size()) {
                std::cout << "LINE:" << data.substr(start, end-start) << std::endl;
                start = end + LINE_TERMINATION.size();
                end = data.find(LINE_TERMINATION, start);
            }
            _buf.consume(start);

            // Wait for next data
            readMore();
        }
    }
};

int main(int, char**) {
    asio::io_service ioSvc;

    // Set up a connection and run
    asio::ip::address localhost = asio::ip::address::from_string("127.0.0.1");
    Connection c(ioSvc, asio::ip::tcp::endpoint(localhost, 80));
    ioSvc.run();
}
One way of tackling this when using a reliable and ordered transport like TCP is to:
Write a header of known size, containing the size of the rest of the message
Write the rest of the message
And on the receiving end:
Read just enough bytes to get the header
Read the rest of the message and no more
If you know the messages are going to be of a fixed length, you can do something like the following:
//-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~
void
Connection::readMore()
{
    if (m_connected)
    {
        // Asynchronously read some data from the connection into the buffer.
        // Using shared_from_this() will prevent this Connection object from
        // being destroyed while data is being read.
        boost::asio::async_read(
            m_socket,
            boost::asio::buffer(
                m_readMessage.getData(),
                MessageBuffer::MESSAGE_LENGTH
            ),
            boost::bind(
                &Connection::messageBytesRead,
                shared_from_this(),
                boost::asio::placeholders::error,
                boost::asio::placeholders::bytes_transferred
            ),
            boost::bind(
                &Connection::handleRead,
                shared_from_this(),
                boost::asio::placeholders::error
            )
        );
    }
}
//-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~
std::size_t
Connection::messageBytesRead(const boost::system::error_code& _errorCode,
                             std::size_t _bytesRead)
{
    return MessageBuffer::MESSAGE_LENGTH - _bytesRead;
}
//-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~
void
Connection::handleRead(const boost::system::error_code& _errorCode)
{
    if (!_errorCode)
    {
        /// Do something with the populated m_readMessage here.
        readMore();
    }
    else
    {
        disconnect();
    }
}
The messageBytesRead callback indicates to boost::asio::async_read when a complete message has been read. This snippet was pulled from an existing Connection object in running code, so I know it works...