BOOST::ASIO - UDP - endpoint gets overwritten - c++

I am trying to implement a keep-alive service over UDP using BOOST::ASIO. These are the general steps:
I send keep-alives to 2 processes on the same machine; they listen on the same IP but on different ports.
In a loop I call async_send_to for both; the send callback is a function that calls async_receive_from with a callback F().
Both receives refer to the same endpoint and data buffers.
A while loop with io_service.run_one() inside drives the service.
The processes reply immediately.
The issue is that sporadically, when F() runs and I check the endpoints' ports, I either get the two different ports (the wanted case) or I get the same port twice.
It seems as if the endpoint (and probably the data buffer) is getting overwritten by the later packet.
I was thinking that since I'm using run_one() the packets would be processed one by one and there would be no overwriting.
Initial send -
void GetInstancesHeartbeat(udp::endpoint &sender_endpoint)
{
int instanceIndex = 0;
for (; instanceIndex <= amountOfInstances ; instanceIndex++)
{
udp::endpoint endpoint = udp::endpoint(IP, Port+ instanceIndex);
m_instancesSocket->async_send_to(
boost::asio::buffer((char*)&(message),
sizeof(message)),endpoint,
boost::bind(&ClusterManager::handle_send_to_instance,
this, boost::asio::placeholders::error,
boost::asio::placeholders::bytes_transferred,
sender_endpoint));
}
}
Then the handler -
void handle_send_to_instance(const boost::system::error_code& error, size_t
bytes_recvd, udp::endpoint &sender_endpoint)
{
m_instancesSocket->async_receive_from(
boost::asio::buffer(m_dataBuffer, m_maxLength), m_endpoint,
boost::bind(&ClusterManager::handle_receive_from_instance, this,
boost::asio::placeholders::error,
boost::asio::placeholders::bytes_transferred,
sender_endpoint));
}
While loop -
while(true){
io_service.run_one();
}
And the handle receive where the port results twice the same -
void handle_receive_from_instance(const boost::system::error_code& error, size_t
bytes_recvd, udp::endpoint&sender_endpoint)
{
if (!error && bytes_recvd > 0)
{
int instancePort = m_endpoint.port();
} else {
//PRINT ERROR
}
}

The actual operations are asynchronous, so there's no telling when the endpoint reference gets written to. That's the nature of asynchronous calls.
So, what you need is a separate receiving endpoint (and buffer) per asynchronous call; you might store them per instance index, as sketched below.
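A minimal sketch of that idea, giving each outstanding receive its own buffer and endpoint object (the class name, ports and payload below are illustrative, not taken from the original code):

#include <boost/asio.hpp>
#include <boost/bind.hpp>
#include <array>
#include <iostream>

using boost::asio::ip::udp;

class KeepAlive {
    udp::socket socket_;
    // One buffer and one sender endpoint per outstanding receive, so concurrent
    // completions cannot clobber each other.
    struct Slot {
        std::array<char, 1024> data;
        udp::endpoint sender;
    };
    std::array<Slot, 2> slots_;

public:
    explicit KeepAlive(boost::asio::io_service& io)
        : socket_(io, udp::endpoint(udp::v4(), 0)) {}

    void probe(const udp::endpoint& target, std::size_t index) {
        static const char ping[] = "ping"; // 5 bytes including the terminating '\0'
        socket_.async_send_to(boost::asio::buffer(ping), target,
            boost::bind(&KeepAlive::handle_send, this, index,
                boost::asio::placeholders::error,
                boost::asio::placeholders::bytes_transferred));
    }

private:
    void handle_send(std::size_t index, const boost::system::error_code& ec, std::size_t) {
        if (ec) return;
        Slot& slot = slots_[index];
        // Whichever datagram completes this receive fills *this* slot's endpoint;
        // two completions can no longer overwrite one shared endpoint.
        socket_.async_receive_from(boost::asio::buffer(slot.data), slot.sender,
            boost::bind(&KeepAlive::handle_receive, this, index,
                boost::asio::placeholders::error,
                boost::asio::placeholders::bytes_transferred));
    }

    void handle_receive(std::size_t index, const boost::system::error_code& ec, std::size_t bytes) {
        if (!ec && bytes > 0)
            std::cout << "reply in slot " << index << " from port "
                      << slots_[index].sender.port() << "\n";
    }
};

int main() {
    boost::asio::io_service io;
    KeepAlive ka(io);
    // Hypothetical instance ports for the sketch.
    ka.probe(udp::endpoint(boost::asio::ip::address::from_string("127.0.0.1"), 7001), 0);
    ka.probe(udp::endpoint(boost::asio::ip::address::from_string("127.0.0.1"), 7002), 1);
    io.run(); // preferable to while(true) { io.run_one(); }; blocks until the replies arrive
}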
There are a number of other really suspicious bits:
what's the type of message? For most types you'd write just boost::asio::buffer(message) (which deals with T [], std::vector<T>, array<T> etc). This works when T is char or any POD type.
If message is actually a struct of some type, consider using a single-element array to avoid the dangerous cast:
POD message[1] = {pod};
s.async_send_to(boost::asio::buffer(message), udp::endpoint{{}, 6767}, [](boost::system::error_code ec, size_t transferred) {
std::cout << "Transferred: " << transferred << " (" << ec.message() << ")\n";
});
(Sends 12 bytes on a typical system).
Whatever you do, don't write the unsafe C-style cast (Why use static_cast<int>(x) instead of (int)x?).
You have while(true) { io.run_one(); }, which is an infinite loop. A better way to write it would be while(io.run_one()) {}.
However, that is basically the same as io.run(), only less correct and less efficient (see https://www.boost.org/doc/libs/1_68_0/boost/asio/detail/impl/scheduler.ipp line 138), so why not use io.run()?

boost::asio write: Broken pipe

I have a TCP server that handles new connections; for each new connection, two threads are created (std::thread, detached).
void Gateway::startServer(boost::asio::io_service& io_service, unsigned short port) {
tcp::acceptor TCPAcceptor(io_service, tcp::endpoint(tcp::v4(), port));
bool UARTToWiFiGatewayStarted = false;
for (;;) { std::cout << "\nstartServer()\n";
auto socket(std::shared_ptr<tcp::socket>(new tcp::socket(io_service)));
/*!
* Accept a new connected WiFi client.
*/
TCPAcceptor.accept(*socket);
socket->set_option( tcp::no_delay( true ) );
// This will set the boolean `Gateway::communicationSessionStatus` variable to true.
Gateway::enableCommunicationSession();
// start one thread
std::thread(WiFiToUARTWorkerSession, socket, this->SpecialUARTPort, this->SpecialUARTPortBaud).detach();
// start the second thread
std::thread(UARTToWifiWorkerSession, socket, this->UARTport, this->UARTbaud).detach();
}
}
The first of the two worker functions looks like this (here I'm reading using the shared socket):
void Gateway::WiFiToUARTWorkerSession(std::shared_ptr<tcp::socket> socket, std::string SpecialUARTPort, unsigned int baud) {
std::cout << "\nEntered: WiFiToUARTWorkerSession(...)\n";
std::shared_ptr<FastUARTIOHandler> uart(new FastUARTIOHandler(SpecialUARTPort, baud));
try {
while(true == Gateway::communicationSessionStatus) { std::cout << "WiFi->UART\n";
unsigned char WiFiDataBuffer[max_incoming_wifi_data_length];
boost::system::error_code error;
/*!
* Read the TCP data.
*/
size_t length = socket->read_some(boost::asio::buffer(WiFiDataBuffer), error);
/*!
* Handle possible read errors.
*/
if (error == boost::asio::error::eof) {
// this will set the shared boolean variable from "true" to "false", causing the while loop (from the both functions and threads) to stop.
Gateway::disableCommunicationSession();
break; // Connection closed cleanly by peer.
}
else if (error) {
Gateway::disableCommunicationSession();
throw boost::system::system_error(error); // Some other error.
}
uart->write(WiFiDataBuffer, length);
}
}
catch (std::exception &exception) {
std::cerr << "[APP::exception] Exception in thread: " << exception.what() << std::endl;
}
std::cout << "\nExiting: WiFiToUARTWorkerSession(...)\n";
}
And the second one (here I'm writing using the thread-shared socket):
void Gateway::UARTToWifiWorkerSession(std::shared_ptr<tcp::socket> socket, std::string UARTport, unsigned int baud) {
std::cout << "\nEntered: UARTToWifiWorkerSession(...)\n";
/*!
* Buffer used for storing the UART-incoming data.
*/
unsigned char UARTDataBuffer[max_incoming_uart_data_length];
std::vector<unsigned char> outputBuffer;
std::shared_ptr<FastUARTIOHandler> uartHandler(new FastUARTIOHandler(UARTport, baud));
while(true == Gateway::communicationSessionStatus) { std::cout << "UART->WiFi\n";
/*!
* Read the UART-available data.
*/
auto bytesReceived = uartHandler->read(UARTDataBuffer, max_incoming_uart_data_length);
/*!
* If there was some data, send it over TCP.
*/
if(bytesReceived > 0) {
boost::asio::write((*socket), boost::asio::buffer(UARTDataBuffer, bytesReceived));
std::cout << "\nSending data to app...\n";
}
}
std::cout << "\nExited: UARTToWifiWorkerSession(...)\n";
}
To stop these two threads I do the following: in WiFiToUARTWorkerSession(...), if read(...) fails (with boost::asio::error::eof or any other error) I set the Gateway::communicationSessionStatus boolean (which is shared (global) between both functions) to false; this way both functions should return and the threads should exit gracefully.
When I connect for the first time this works well, but when I disconnect from the server, the execution flow in WiFiToUARTWorkerSession(...) takes the else if (error) branch, sets the while-condition variable to false, and then throws boost::system::system_error(error) (which actually means "Connection reset by peer").
Then when I try to connect again, I get the following exception and the program terminates:
terminate called after throwing an instance of 'boost::exception_detail::clone_impl<boost::exception_detail::error_info_injector<boost::system::system_error> >'
what(): write: Broken pipe
What could be the problem?
EDIT: From what I found about this error, it seems that I call write(...) after the client disconnects, but how could that be possible?
EDIT2: I have debugged the code some more and it seems that the thread running UARTToWifiWorkerSession(...) doesn't actually exit, because the execution flow stops at a blocking read(...) call. That thread hangs until read(...) receives some data, and when I reconnect, another two threads are created, causing data races.
Can someone confirm that this could be the problem?
The actual problem was that UARTToWifiWorkerSession(...) didn't actually exit (because of the blocking read(...) call), so two threads (the hanging one and one of the two newly created ones) ended up calling write(...) on the same socket without any concurrency control.
The solution was to set a read(...) timeout, so the function can return (and the thread can exit) without waiting for input.
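For illustration, a minimal sketch of such a read-with-timeout, shown with boost::asio::serial_port and a deadline_timer (FastUARTIOHandler is a custom class, so the names, the lambda style and the assumption that the io_service serves only this port are all illustrative):

#include <boost/asio.hpp>
#include <boost/date_time/posix_time/posix_time.hpp>
#include <cstddef>

// Reads up to `size` bytes from `port`, giving up after `timeout_ms` milliseconds.
// Returns the number of bytes read (0 on timeout or error).
std::size_t read_with_timeout(boost::asio::serial_port& port,
                              unsigned char* data, std::size_t size,
                              int timeout_ms)
{
    boost::asio::io_service& io = port.get_io_service(); // assumes this io_service is dedicated to the port
    boost::asio::deadline_timer timer(io);
    std::size_t bytes_read = 0;

    timer.expires_from_now(boost::posix_time::milliseconds(timeout_ms));
    timer.async_wait([&](const boost::system::error_code& ec) {
        if (!ec)            // the timer really expired (it was not cancelled)
            port.cancel();  // abort the pending read; its handler runs with operation_aborted
    });

    port.async_read_some(boost::asio::buffer(data, size),
        [&](const boost::system::error_code& ec, std::size_t n) {
            if (!ec) {
                bytes_read = n;
                timer.cancel(); // got data in time, stop the timer
            }
        });

    io.reset();
    io.run();               // returns only after both handlers have run
    return bytes_read;
}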

boost::asio async server design

Currently I'm using a design where the server reads the first 4 bytes of the stream, then reads N more bytes after decoding the header.
But I found that the time between the first async_read and the second read is 3-4 ms. I just printed timestamps to the console from the callbacks to measure it. I sent 10 bytes of data in total. Why does it take so much time to read?
I am running it in debug mode, but I think one connection in debug is not enough to cause a 3 ms delay between reads from the socket. Maybe I need another approach to cut the TCP stream into "packets"?
UPDATE: I post some code here
void parseHeader(const boost::system::error_code& error)
{
cout<<"[parseHeader] "<<lib::GET_SERVER_TIME()<<endl;
if (error) {
close();
return;
}
GenTCPmsg::header result = msg.parseHeader();
if (result.error == GenTCPmsg::parse_error::__NO_ERROR__) {
msg.setDataLength(result.size);
boost::asio::async_read(*socket,
boost::asio::buffer(msg.data(), result.size),
(*_strand).wrap(
boost::bind(&ConnectionInterface::parsePacket, shared_from_this(), boost::asio::placeholders::error)));
} else {
close();
}
}
void parsePacket(const boost::system::error_code& error)
{
cout<<"[parsePacket] "<<lib::GET_SERVER_TIME()<<endl;
if (error) {
close();
return;
}
protocol->parsePacket(msg);
msg.flush();
boost::asio::async_read(*socket,
boost::asio::buffer(msg.data(), config::HEADER_SIZE),
(*_strand).wrap(
boost::bind(&ConnectionInterface::parseHeader, shared_from_this(), boost::asio::placeholders::error)));
}
As you can see, the unix timestamps differ by 3-4 ms. I want to understand why so much time elapses between parseHeader and parsePacket. This is not a client problem: the total payload is only 10 bytes and the delay is exactly between the two calls. I'm using a Flash client, version 11; all it does is send a ByteArray through the opened socket, all 10 bytes at once, so I don't think the delay is on the client side. How can I debug where the actual delay is?
There are far too many unknowns to identify the root cause of the delay from the posted code. Nevertheless, there are a few approaches and considerations that can help identify the problem:
Enable handler tracking for Boost.Asio 1.47+. Simply define BOOST_ASIO_ENABLE_HANDLER_TRACKING and Boost.Asio will write debug output, including timestamps, to the standard error stream. These timestamps can be used to help filter out delays introduced by application code (parseHeader(), parsePacket(), etc.).
Verify that byte-ordering is being handled properly. For example, if the protocol defines the header's size field as two bytes in network-byte-order and the server is handling the field as a raw short, then upon receiving a message that has a body size of 10:
A big-endian machine will call async_read reading 10 bytes. The read operation should complete quickly as the socket already has the 10 byte body available for reading.
A little-endian machine will call async_read reading 2560 bytes. The read operation will likely remain outstanding, as far more bytes are being requested than intended (see the short sketch after this list).
Use tracing tools such as strace, ltrace, etc.
Modify Boost.Asio, adding timestamps throughout the callstack. Boost.Asio is shipped as a header-file only library. Thus, users may modify it to provide as much verbosity as desired. While not the cleanest or easiest of approaches, adding a print statement with timestamps throughout the callstack may help provide visibility into timing.
Try duplicating the behavior in a short, simple, self-contained example. Start with the simplest of examples to determine if the delay is systematic. Then iteratively expand upon the example so that it becomes closer to the real code with each iteration.
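As a quick illustration of the byte-ordering point above, a tiny self-contained sketch (the two-byte big-endian size field is hypothetical):

#include <cstddef>
#include <cstdint>
#include <cstring>
#include <iostream>

// Returns the body size encoded in the first two header bytes, which are
// assumed to be in network byte order (big-endian) on the wire.
std::size_t body_size_from(const unsigned char* header)
{
    return (static_cast<std::size_t>(header[0]) << 8) | header[1];
}

int main()
{
    // A header whose size field encodes a body length of 10.
    const unsigned char header[2] = { 0x00, 0x0A };

    // Correct: decode the field as big-endian before passing it to async_read.
    std::cout << "decoded size: " << body_size_from(header) << "\n"; // 10

    // Incorrect on a little-endian host: reinterpret the raw bytes as a short.
    std::uint16_t raw;
    std::memcpy(&raw, header, sizeof raw);
    std::cout << "raw short: " << raw << "\n"; // 2560 on x86
}

The 2560-byte read in the second case is exactly the over-long async_read described above.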
Here is a simple example from which I started:
#include <iostream>
#include <boost/array.hpp>
#include <boost/asio.hpp>
#include <boost/bind.hpp>
#include <boost/date_time/posix_time/posix_time.hpp>
#include <boost/enable_shared_from_this.hpp>
#include <boost/make_shared.hpp>
#include <boost/shared_ptr.hpp>
class tcp_server
: public boost::enable_shared_from_this< tcp_server >
{
private:
enum
{
header_size = 4,
data_size = 10,
buffer_size = 1024,
max_stamp = 50
};
typedef boost::asio::ip::tcp tcp;
public:
typedef boost::array< boost::posix_time::ptime, max_stamp > time_stamps;
public:
tcp_server( boost::asio::io_service& service,
unsigned short port )
: strand_( service ),
acceptor_( service, tcp::endpoint( tcp::v4(), port ) ),
socket_( service ),
index_( 0 )
{}
/// @brief Returns collection of timestamps.
time_stamps& stamps()
{
return stamps_;
}
/// @brief Start the server.
void start()
{
acceptor_.async_accept(
socket_,
boost::bind( &tcp_server::handle_accept, this,
boost::asio::placeholders::error ) );
}
private:
/// @brief Accept connection.
void handle_accept( const boost::system::error_code& error )
{
if ( error )
{
std::cout << error.message() << std::endl;
return;
}
read_header();
}
/// @brief Read header.
void read_header()
{
boost::asio::async_read(
socket_,
boost::asio::buffer( buffer_, header_size ),
boost::bind( &tcp_server::handle_read_header, this,
boost::asio::placeholders::error,
boost::asio::placeholders::bytes_transferred ) );
}
/// @brief Handle reading header.
void
handle_read_header( const boost::system::error_code& error,
std::size_t bytes_transferred )
{
if ( error )
{
std::cout << error.message() << std::endl;
return;
}
// If no more stamps can be recorded, then stop the async-chain so
// that io_service::run can return.
if ( !record_stamp() ) return;
// Read data.
boost::asio::async_read(
socket_,
boost::asio::buffer( buffer_, data_size ),
boost::bind( &tcp_server::handle_read_data, this,
boost::asio::placeholders::error,
boost::asio::placeholders::bytes_transferred ) );
}
/// @brief Handle reading data.
void handle_read_data( const boost::system::error_code& error,
std::size_t bytes_transferred )
{
if ( error )
{
std::cout << error.message() << std::endl;
return;
}
// If no more stamps can be recorded, then stop the async-chain so
// that io_service::run can return.
if ( !record_stamp() ) return;
// Start reading header again.
read_header();
}
/// @brief Record time stamp.
bool record_stamp()
{
stamps_[ index_++ ] = boost::posix_time::microsec_clock::local_time();
return index_ < max_stamp;
}
private:
boost::asio::io_service::strand strand_;
tcp::acceptor acceptor_;
tcp::socket socket_;
boost::array< char, buffer_size > buffer_;
time_stamps stamps_;
unsigned int index_;
};
int main()
{
boost::asio::io_service service;
// Create and start the server.
boost::shared_ptr< tcp_server > server =
boost::make_shared< tcp_server >( boost::ref(service ), 33333 );
server->start();
// Run. This will exit once enough time stamps have been sampled.
service.run();
// Iterate through the stamps.
tcp_server::time_stamps& stamps = server->stamps();
typedef tcp_server::time_stamps::iterator stamp_iterator;
using boost::posix_time::time_duration;
for ( stamp_iterator iterator = stamps.begin() + 1,
end = stamps.end();
iterator != end;
++iterator )
{
// Obtain the delta between the current stamp and the previous.
time_duration delta = *iterator - *(iterator - 1);
std::cout << "Delta: " << delta.total_milliseconds() << " ms"
<< std::endl;
}
// Calculate the total delta.
time_duration delta = *stamps.rbegin() - *stamps.begin();
std::cout << "Total"
<< "\n Start: " << *stamps.begin()
<< "\n End: " << *stamps.rbegin()
<< "\n Delta: " << delta.total_milliseconds() << " ms"
<< std::endl;
}
A few notes about the implementation:
There is only one thread (main) and one asynchronous chain read_header->handle_read_header->handle_read_data. This should minimize the amount of time a ready-to-run handler spends waiting for an available thread.
To focus on boost::asio::async_read, noise is minimized by:
Using a pre-allocated buffer.
Not using shared_from_this() or strand::wrap.
Recording the timestamps, and performing the processing post-collection.
I compiled on CentOS 5.4 using gcc 4.4.0 and Boost 1.50. To drive the data, I opted to send 1000 bytes using netcat:
$ ./a.out > output &
[1] 18623
$ echo "$(for i in {0..1000}; do echo -n "0"; done)" | nc 127.0.0.1 33333
[1]+ Done ./a.out >output
$ tail output
Delta: 0 ms
Delta: 0 ms
Delta: 0 ms
Delta: 0 ms
Delta: 0 ms
Delta: 0 ms
Total
Start: 2012-Sep-10 21:22:45.585780
End: 2012-Sep-10 21:22:45.586716
Delta: 0 ms
Observing no delay, I expanded upon the example by modifying the boost::asio::async_read calls, replacing this with shared_from_this() and wrapping the ReadHandlers with strand_.wrap(). I ran the updated example and still observed no delay. Unfortunately, that is as far as I could get based on the code posted in the question.
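For reference, a sketch of what that modification looks like in read_header() from the class above (the same change applies to the async_read call in handle_read_header()):

/// @brief Read header, with the handler bound to shared_from_this() and
///        dispatched through the strand.
void read_header()
{
    boost::asio::async_read(
        socket_,
        boost::asio::buffer( buffer_, header_size ),
        strand_.wrap(
            boost::bind( &tcp_server::handle_read_header, shared_from_this(),
                boost::asio::placeholders::error,
                boost::asio::placeholders::bytes_transferred ) ) );
}

For shared_from_this() to be valid, tcp_server must derive from enable_shared_from_this and be owned by a shared_ptr; both are already true in the example above.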
Consider expanding upon the example, adding in a piece from the real implementation with each iteration. For example:
Start with using the msg variable's type to control the buffer.
Next, send valid data, and introduce parseHeader() and parsePacket functions.
Finally, introduce the lib::GET_SERVER_TIME() print.
If the example code is as close as possible to the real code, and no delay is being observed with boost::asio::async_read, then the ReadHandlers may be ready-to-run in the real code, but they are waiting on synchronization (the strand) or a resource (a thread), resulting in a delay:
If the delay is the result of synchronization with the strand, then consider Robin's suggestion by reading a larger block of data to potentially reduce the amount of reads required per-message.
If the delay is the result of waiting for a thread, then consider having an additional thread call io_service::run().
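If it turns out to be the latter, a minimal sketch of a small thread pool driving a single io_service (the thread count and the work object are illustrative):

#include <boost/asio.hpp>
#include <boost/thread.hpp>
#include <cstddef>

int main()
{
    boost::asio::io_service service;
    boost::asio::io_service::work work( service ); // keeps run() from returning while idle

    // Two threads servicing the same io_service, so a ready-to-run handler
    // does not have to wait for a single thread to become free.
    boost::thread_group pool;
    for ( std::size_t i = 0; i < 2; ++i )
        pool.create_thread( [&service]() { service.run(); } );

    // ... set up the acceptor / async chain here; handlers that touch shared
    //     state must be serialized through a strand, which the code in the
    //     question already does ...

    service.stop();   // for this sketch, stop immediately so the program exits
    pool.join_all();
}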
One thing that makes Boost.Asio awesome is using the async feature to the fullest. Relying on a specific number of bytes read in one batch, and possibly ditching some of what could already have been read, isn't really what you should be doing.
Instead, look at the example for the webserver especially this: http://www.boost.org/doc/libs/1_51_0/doc/html/boost_asio/example/http/server/connection.cpp
A Boost tribool is used to either a) complete the request if all data is available in one batch, b) ditch it if it's available but not valid, or c) just read more when the io_service chooses to, if the request was incomplete. The connection object is shared with the handler through a shared pointer.
Why is this superior to most other methods? You can possibly save the time between reads by already parsing the request. This is sadly not followed through in the example, but ideally you'd thread the handler so it can work on the data already available while the rest is added to the buffer. The only time it blocks is when the data is incomplete.
Hope this helps, can't shed any light on why there is a 3ms delay between reads though.

async_receive_from stops receiving after a few packets under Linux

I have a setup with multiple peers broadcasting udp packets (containing images) every 200ms (5fps).
While receiving both the local stream and the external streams works fine under Windows, the same code (except for the socket->cancel(); in Windows XP, see comment in code) produces rather strange behavior under Linux:
The first few (5~7) packets sent by another machine (when this machine starts streaming) are received as expected;
After this, the packets from the other machine are received after irregular, long intervals (12s, 5s, 17s, ...) or get a time out (defined after 20 seconds). At certain moments, there is again a burst of (3~4) packets received as expected.
The packets sent by the machine itself are still being received as expected.
Using Wireshark, I see both the local and the external packets arriving as they should, with correct time intervals between consecutive packets. The behavior also presents itself when the local machine is only listening to a single other stream, with the local stream disabled.
This is some code from the receiver (with some updates as suggested below, thanks!):
Receiver::Receiver(port p)
{
this->port = p;
this->stop = false;
}
int Receiver::run()
{
io_service io_service;
boost::asio::ip::udp::socket socket(
io_service,
boost::asio::ip::udp::endpoint(boost::asio::ip::udp::v4(),
this->port));
while(!stop)
{
const int bufflength = 65000;
int timeout = 20000;
char sockdata[bufflength];
boost::asio::ip::udp::endpoint remote_endpoint;
int rcvd;
bool read_success = this->receive_with_timeout(
sockdata, bufflength, &rcvd, &socket, remote_endpoint, timeout);
if(read_success)
{
std::cout << "read succes " << remote_endpoint.address().to_string() << std::endl;
}
else
{
std::cout << "read fail" << std::endl;
}
}
return 0;
}
void handle_receive_from(
bool* toset, boost::system::error_code error, size_t length, int* outsize)
{
if(!error || error == boost::asio::error::message_size)
{
*toset = length>0?true:false;
*outsize = length;
}
else
{
std::cout << error.message() << std::endl;
}
}
// Update: error check
void handle_timeout( bool* toset, boost::system::error_code error)
{
if(!error)
{
*toset = true;
}
else
{
std::cout << error.message() << std::endl;
}
}
bool Receiver::receive_with_timeout(
char* data, int buffl, int* outsize,
boost::asio::ip::udp::socket *socket,
boost::asio::ip::udp::endpoint &sender_endpoint, int msec_tout)
{
bool timer_overflow = false;
bool read_result = false;
deadline_timer timer( socket->get_io_service() );
timer.expires_from_now( boost::posix_time::milliseconds(msec_tout) );
timer.async_wait( boost::bind(&handle_timeout, &timer_overflow,
boost::asio::placeholders::error) );
socket->async_receive_from(
boost::asio::buffer(data, buffl), sender_endpoint,
boost::bind(&handle_receive_from, &read_result,
boost::asio::placeholders::error,
boost::asio::placeholders::bytes_transferred, outsize));
socket->get_io_service().reset();
while ( socket->get_io_service().run_one())
{
if ( read_result )
{
timer.cancel();
}
else if ( timer_overflow )
{
//not to be used on Windows XP, Windows Server 2003, or earlier
socket->cancel();
// Update: added run_one()
socket->get_io_service().run_one();
}
}
// Update: added run_one()
socket->get_io_service().run_one();
return read_result;
}
When the timer exceeds the 20 seconds, the error message "Operation canceled" is returned, but it is difficult to get any other information about what is going on.
Can anyone identify a problem or give me some hints to get some more information about what is going wrong? Any help is appreciated.
Okay, what you're doing is that when you call receive_with_timeout, you set up two asynchronous requests (one for the receive, one for the timeout). When the first one completes, you cancel the other.
However, you never invoke io_service::run_one() again to allow the cancelled operation's callback to complete. When you cancel an operation in boost::asio, it still invokes the handler, usually with an error code indicating that the operation has been aborted or cancelled. In this case, I believe you have a handler left dangling once you destroy the deadline_timer, since that handler holds a pointer onto the stack where it stores its result.
The solution is to call run_one() again to process the cancelled callback prior to exiting the function. You should also check the error code being passed to your timeout handler, and only treat it as a timeout if there was no error.
Also, in the case where you do have a timeout, you need to execute run_one() so that the async_receive_from handler can execute and report that it was cancelled.
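A compact sketch of that pattern, draining both handlers before returning (the UDP setup and names are illustrative, and C++11 lambdas are used for brevity):

#include <boost/asio.hpp>
#include <boost/date_time/posix_time/posix_time.hpp>
#include <cstddef>

// Races a receive against a timer, cancels whichever loses, and keeps calling
// run_one() until *both* handlers have run, so neither is left holding
// pointers onto this function's stack. Returns true if data arrived in time.
bool receive_with_timeout(boost::asio::ip::udp::socket& socket,
                          char* data, std::size_t size,
                          std::size_t& received, int msec_tout)
{
    boost::asio::io_service& io = socket.get_io_service();
    boost::asio::deadline_timer timer(io);
    boost::asio::ip::udp::endpoint sender;            // one endpoint per call

    bool read_handler_ran = false, timer_handler_ran = false;
    boost::system::error_code read_ec;
    received = 0;

    timer.expires_from_now(boost::posix_time::milliseconds(msec_tout));
    timer.async_wait([&](const boost::system::error_code&) {
        timer_handler_ran = true;                     // runs on expiry *and* on cancel
    });

    socket.async_receive_from(boost::asio::buffer(data, size), sender,
        [&](const boost::system::error_code& ec, std::size_t n) {
            read_handler_ran = true;                  // runs on data *and* on cancel
            read_ec = ec;
            received = n;
        });

    io.reset();
    while (!(read_handler_ran && timer_handler_ran) && io.run_one()) {
        if (read_handler_ran && !timer_handler_ran) timer.cancel();  // read won
        if (timer_handler_ran && !read_handler_ran) socket.cancel(); // timeout won
    }
    return !read_ec && received > 0;
}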
After a clean installation with Xubuntu 12.04 instead of an old install of Ubuntu 10.04, everything now works as expected. Maybe it is because the new install runs a newer kernel, probably with improved networking? In any case, reinstalling with a newer version of the distribution solved my problem.
If anyone else gets unexpected network behavior with an older kernel, I would advise trying it on a system with a newer kernel installed.

Consume only part of data in boost::asio basic_stream_socket::async_read_some handler

I am new to boost::asio, so my question might be dumb - sorry if it is.
I am writing an asynchronous server application with keep-alive (multiple requests may be sent on a single connection).
Connection handling routine is simple:
In a loop:
schedule read request with socket->async_read_some(buffer, handler)
from handler schedule write response with async_write.
The problem I am facing is that when the handler passed to async_read_some is called by one of the io_service threads, the buffer may actually contain more data than a single request (e.g. part of the next request sent by the client).
I do not want to (and cannot, if it is only part of a request) handle these remaining bytes at the moment.
I would like to do it after handling of the previous request is finished.
It would be easy to address this if I could reinject the unneeded remaining data back into the socket, so that it is handled on the next async_read_some call.
Is there such a possibility in boost::asio, or do I have to store the remaining data somewhere aside and handle it myself with extra code?
I think what you are looking for is asio::streambuf.
Basically, you can inspect the data accumulated in the streambuf as a char*, process as much of it as you see fit, and then tell the streambuf how much was actually used with consume(amount).
Working code-example to parse HTTP-header as a client:
#include <boost/asio.hpp>
#include <boost/bind.hpp>
#include <iostream>
#include <string>
namespace asio = boost::asio;
std::string LINE_TERMINATION = "\r\n";
class Connection {
asio::streambuf _buf;
asio::ip::tcp::socket _socket;
public:
Connection(asio::io_service& ioSvc, asio::ip::tcp::endpoint server)
: _socket(ioSvc)
{
_socket.connect(server);
_socket.send(boost::asio::buffer("GET / HTTP/1.1\r\nHost: localhost\r\nConnection: close\r\n\r\n"));
readMore();
}
void readMore() {
// Allocate 13 bytes space on the end of the buffer. Evil prime number to prove algorithm works.
asio::streambuf::mutable_buffers_type buf = _buf.prepare(13);
// Perform read
_socket.async_read_some(buf, boost::bind(
&Connection::onRead, this,
asio::placeholders::bytes_transferred, asio::placeholders::error
));
}
void onRead(size_t read, const boost::system::error_code& ec) {
if ((!ec) && (read > 0)) {
// Mark to buffer how much was actually read
_buf.commit(read);
// Use some ugly parsing to extract whole lines.
const char* data_ = boost::asio::buffer_cast<const char*>(_buf.data());
std::string data(data_, _buf.size());
size_t start = 0;
size_t end = data.find(LINE_TERMINATION, start);
while (end < data.size()) {
std::cout << "LINE:" << data.substr(start, end-start) << std::endl;
start = end + LINE_TERMINATION.size();
end = data.find(LINE_TERMINATION, start);
}
_buf.consume(start);
// Wait for next data
readMore();
}
}
};
int main(int, char**) {
asio::io_service ioSvc;
// Setup a connection and run
asio::ip::address localhost = asio::ip::address::from_string("127.0.0.1");
Connection c(ioSvc, asio::ip::tcp::endpoint(localhost, 80));
ioSvc.run();
}
One way of tackling this when using a reliable and ordered transport like TCP is to:
Write a header of known size, containing the size of the rest of the message
Write the rest of the message
And on the receiving end:
Read just enough bytes to get the header
Read the rest of the message and no more
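A sketch of the variable-length version of that idea on the receiving side: read exactly four header bytes, decode the body length, then read exactly that many body bytes (the 4-byte big-endian length field and the class/function names are assumptions for the sketch):

#include <boost/asio.hpp>
#include <boost/bind.hpp>
#include <boost/enable_shared_from_this.hpp>
#include <cstddef>
#include <vector>

class FramedConnection
    : public boost::enable_shared_from_this<FramedConnection>
{
public:
    explicit FramedConnection(boost::asio::io_service& io) : socket_(io) {}
    boost::asio::ip::tcp::socket& socket() { return socket_; }

    void start() { readHeader(); }

private:
    void readHeader() {
        // async_read completes only when all 4 header bytes have arrived.
        boost::asio::async_read(socket_, boost::asio::buffer(header_),
            boost::bind(&FramedConnection::onHeader, shared_from_this(),
                boost::asio::placeholders::error));
    }

    void onHeader(const boost::system::error_code& ec) {
        if (ec) return;
        // Decode a 4-byte big-endian body length from the header.
        std::size_t length = (std::size_t(header_[0]) << 24) |
                             (std::size_t(header_[1]) << 16) |
                             (std::size_t(header_[2]) << 8)  |
                              std::size_t(header_[3]);
        body_.resize(length);
        // Read exactly `length` bytes and no more.
        boost::asio::async_read(socket_, boost::asio::buffer(body_),
            boost::bind(&FramedConnection::onBody, shared_from_this(),
                boost::asio::placeholders::error));
    }

    void onBody(const boost::system::error_code& ec) {
        if (ec) return;
        // body_ now holds exactly one message; hand it off, then wait for the next one.
        readHeader();
    }

    boost::asio::ip::tcp::socket socket_;
    unsigned char header_[4];
    std::vector<unsigned char> body_;
};

The class is meant to be created through a boost::shared_ptr (as with the Connection code below), so that shared_from_this() is valid.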
If you know the messages are going to be of a fixed length, you can do something like the following:
//-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~
void
Connection::readMore()
{
if (m_connected)
{
// Asynchronously read some data from the connection into the buffer.
// Using shared_from_this() will prevent this Connection object from
// being destroyed while data is being read.
boost::asio::async_read(
m_socket,
boost::asio::buffer(
m_readMessage.getData(),
MessageBuffer::MESSAGE_LENGTH
),
boost::bind(
&Connection::messageBytesRead,
shared_from_this(),
boost::asio::placeholders::error,
boost::asio::placeholders::bytes_transferred
),
boost::bind(
&Connection::handleRead,
shared_from_this(),
boost::asio::placeholders::error
)
);
}
}
//-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~
std::size_t
Connection::messageBytesRead(const boost::system::error_code& _errorCode,
std::size_t _bytesRead)
{
return MessageBuffer::MESSAGE_LENGTH - _bytesRead;
}
//-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~
void
Connection::handleRead(const boost::system::error_code& _errorCode)
{
if (!_errorCode)
{
/// Do something with the populated m_readMessage here.
readMore();
}
else
{
disconnect();
}
}
The messageBytesRead callback will indicate to boost::asio::async_read when a complete message has been read. This snippet was pulled from an existing Connection object from running code, so I know it works...
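As a side note, the same completion condition can be expressed with boost::asio::transfer_exactly, assuming the buffer passed in is exactly MessageBuffer::MESSAGE_LENGTH bytes (a sketch, not taken from the code above):

boost::asio::async_read(
    m_socket,
    boost::asio::buffer(
        m_readMessage.getData(),
        MessageBuffer::MESSAGE_LENGTH
    ),
    boost::asio::transfer_exactly( MessageBuffer::MESSAGE_LENGTH ),
    boost::bind(
        &Connection::handleRead,
        shared_from_this(),
        boost::asio::placeholders::error
    )
);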

Boost async_read doesn't give me an "end of frame" flag

I'm still working on some kind of client for communication with an IP camera. Now I have the following issue:
I send a request to the camera (an RTSP DESCRIBE in particular). Now I get its answer, which looks like this:
RTSP/1.0 200 OK
CSeq: 2
Date: Thu, Jan 01 1970 00:31:41 GMT
Content-Base: rtsp://192.168.0.42/mpeg4?mode=Live&stream=-1&buffer=0&seek=0&fps=100& metainfo=/
Content-Type: application/sdp
Content-Length: 517
This is the header of the answer, followed by a so-called Session Description which has the size shown in the Content-Length field. Actually I don't care much about the Session Description, I'm just interested in the Content-Base field. But still, since there is more communication following on the same socket, I need to get rid of all the data.
For receiving I'm using the async_read calls from boost::asio.
My code looks ( simplified ) like this:
CommandReadBuffer::CallbackFromAsyncWrite()
{
boost::asio::async_read_until(*m_Socket, m_ReceiveBuffer,"\r\n\r\n",
boost::bind(&CommandReadBuffer::handle_rtsp_describe, this->shared_from_this(),
boost::asio::placeholders::error,boost::asio::placeholders::bytes_transferred));
}
This one reads at least the header (shown above) since it's terminated by a blank line. As usual for async_read_until, it may read some more of the data beyond the delimiter, but never mind. Now to the next callback function:
void CommandReadBuffer::handle_rtsp_describe(const boost::system::error_code& err,size_t bytesTransferred)
{
std::istream response_stream(&m_ReceiveBuffer);
std::string header;
// Just dump the data on the console
while (std::getline(response_stream, header))
{
// Normally I would search here for the desired content-base field
std::cout << header << "\n";
}
boost::asio::async_read(*m_Socket, m_ReceiveBuffer, boost::asio::transfer_at_least(1),
boost::bind(&CommandReadBuffer::handle_rtsp_setup, this->shared_from_this(),
boost::asio::placeholders::error, boost::asio::placeholders::bytes_transferred));
}
Now this works fine as well, if I print out the number of received bytes it's always 215.
Now we go on to the critical callback:
void CommandReadBuffer::handle_rtsp_setup(const boost::system::error_code& err, size_t bytesTransferred)
{
std::cout << "Error: " << err.message() << "\n";
if (!err)
{
// Write all of the data that has been read so far.
std::cout << &m_ReceiveBuffer;
// Continue reading remaining data until EOF.
m_DeadlineTimer->async_wait(boost::bind(&CommandReadBuffer::handleTimeout, this->shared_from_this(),boost::asio::placeholders::error));
boost::asio::async_read(*m_Socket, m_ReceiveBuffer, boost::asio::transfer_at_least(1),
boost::bind(&CommandReadBuffer::handle_rtsp_setup, this->shared_from_this(),
boost::asio::placeholders::error, boost::asio::placeholders::bytes_transferred));
}
else if (err != boost::asio::error::eof)
{
std::cout << "Error: " << err.message() << "\n";
}
else
{
std::cout << "End of Frame " << err.message() << "\n";
}
}
This part reads 220 bytes. If I look at the console output from this call and compare it with the actual payload of the frame (as seen in Wireshark), I can see that all data has been received. Now I would expect async_read to give me the eof error. But instead the error code is success, and so it calls async_read again. This time there is no data to be received and it never calls the callback function (since there will be no more incoming data).
Now I actually don't know how I can determine that all data has been sent. I would actually expect the error flag to be set.
This is very similar to the Boost example for an async HTTP client, which does it the same way; I implemented that in another call and there it actually works.
Now in my opinion it should make no difference to the async_read call whether it is HTTP or RTSP: end of frame is end of frame if there is no more data to read.
I'm also aware that according to the boost documentation I am using
void async_read(
AsyncReadStream & s,
basic_streambuf< Allocator > & b,
CompletionCondition completion_condition,
ReadHandler handler);
which means the function will continue until
The supplied buffer is full (that is, it has reached maximum size).
The completion_condition function object returns 0.
So if there is no more data to read, it just continues.
But I also tried the overloaded function without the CompletionCondition parameter, which should return when an error occurs (EOF!!!), and it just won't call back either...
Any suggestions? I just don't get what I'm doing wrong...
I have written an RTSP client and server library using boost asio and can offer the following advice:
The RTSP message syntax is generic: there is no need for different DESCRIBE and SETUP handlers. In general
write an RTSP request
to read the response do a boost::asio::async_read_until("\r\n\r\n")
then check for the Content-Length header
if content_length > 0 do a boost::asio::transfer_at_least(content_length)
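A sketch of that sequence using a single streambuf (synchronous calls for brevity; the Content-Length parsing is simplified, and the second read accounts for any body bytes that read_until has already pulled into the buffer):

#include <boost/asio.hpp>
#include <cstdlib>
#include <istream>
#include <string>

void read_rtsp_response(boost::asio::ip::tcp::socket& socket)
{
    boost::asio::streambuf buf;

    // 1. Read up to the blank line that terminates the header. Note that
    //    read_until may pull in part of the body as well; it stays in `buf`.
    boost::asio::read_until(socket, buf, "\r\n\r\n");

    // 2. Parse the header lines, picking out Content-Length (and Content-Base, etc.).
    std::istream header_stream(&buf);
    std::string line;
    std::size_t content_length = 0;
    while (std::getline(header_stream, line) && line != "\r") {
        if (line.compare(0, 15, "Content-Length:") == 0)
            content_length = std::strtoul(line.c_str() + 15, 0, 10);
    }

    // 3. Read whatever part of the body is not already buffered, then discard it.
    if (content_length > buf.size())
        boost::asio::read(socket, buf,
            boost::asio::transfer_exactly(content_length - buf.size()));
    buf.consume(content_length);
}

The same structure works with the async_ variants; the important part is sizing the second read from the Content-Length header rather than waiting for an EOF that never comes.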
Further, why are you expecting an EOF? The connection is still open: the server is waiting for either another SETUP or a PLAY request and typically won't close the connection until the RTSP TCP connection has been timed out, which has a default value of 60 seconds according to RFC2326.
If in your application, you have completed interaction with the RTSP server, close the connection after you have read the response.