Boost ASIO async_read_some - C++

I am having difficulties in implementing a simple TCP server. The following code is taken from boost::asio examples, "Http Server 1" to be precise.
void connection::start() {
  socket_.async_read_some(
      boost::asio::buffer(buffer_),
      boost::bind(
          &connection::handle_read, shared_from_this(),
          boost::asio::placeholders::error,
          boost::asio::placeholders::bytes_transferred
      )
  );
}

void connection::handle_read(const boost::system::error_code& e, std::size_t bytes_transferred) {
  if (!e && bytes_transferred) {
    std::cout << " " << bytes_transferred << "b" << std::endl;
    data_.append(buffer_.data(), buffer_.data() + bytes_transferred);
    //(1) what here?
    socket_.async_read_some(
        boost::asio::buffer(buffer_),
        boost::bind(
            &connection::handle_read, shared_from_this(),
            boost::asio::placeholders::error,
            boost::asio::placeholders::bytes_transferred
        )
    );
  }
  else // if (e != boost::asio::error::operation_aborted)
  {
    std::cout << data_ << std::endl;
    connection_manager_.stop(shared_from_this());
  }
}
In the original code the buffer_ is big enough to hold the entire request. That's not what I need, so I've changed its size to 32 bytes.
The server compiles and listens at port 80 of localhost, so I try to connect to it via my web browser.
Now if statement (1) is commented out, only the first 32 bytes of the request are read and the connection hangs. The web browser keeps waiting for the response, and the server does... I don't know what.
If (1) is uncommented, the entire request is read (and appended to data_), but it never stops: I have to cancel the request in my browser, and only then does the else { } part run and I see my request on stdout.
Question 1: How should I handle a large request?
Question 2: How should I cache the request (currently I append the buffer to a string)?
Question 3: How can I tell that the request is over? In HTTP there is always a response, so my web browser keeps waiting for it and doesn't close the connection, but how can my server know that the request is over (and perhaps close the connection or reply with some "200 OK")?

Suppose the browser sends you 1360 bytes of data, and you tell asio to read some of it into a buffer that you declared as only 32 bytes long.
The first time around, your handler is called with the first 32 bytes of the data. If you comment out (1), the rest of the data (which the browser has in fact already sent, and which sits in the OS receive buffer waiting for you to pick it up) is never consumed, and you are left blocked behind io_service::run waiting for some miracle!
If you uncomment (1), then, as you say, your loop starts: you read the first block, then the next, and another, and so on until the data the browser sent runs out. But after that, when you ask asio to read yet more data, it waits for data that will never arrive (the browser has already sent its request and is waiting for your answer). When you cancel the request in the browser, the browser closes its socket, and your handler is then called with an error saying it cannot read any more data because the connection is closed.
What you should do to make this work is learn the HTTP format, so that you know what data the browser sent you and can provide a proper answer; then your communication with the client can proceed. In this case the end of the request is marked by \r\n\r\n; once you see it you should not read any more data, but process what you have read so far and then send a response to the browser.
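A minimal sketch of such a check, inserted at point (1) of handle_read above (the write_response() helper is hypothetical; a real server would parse the headers properly rather than just look for the terminator):

void connection::handle_read(const boost::system::error_code& e, std::size_t bytes_transferred) {
  if (!e && bytes_transferred) {
    data_.append(buffer_.data(), buffer_.data() + bytes_transferred);

    // (1) End of the HTTP request headers reached?
    if (data_.find("\r\n\r\n") != std::string::npos) {
      write_response();   // hypothetical: build a "200 OK" and async_write it back
      return;             // do not issue another read
    }

    // Headers not complete yet: keep reading.
    socket_.async_read_some(
        boost::asio::buffer(buffer_),
        boost::bind(
            &connection::handle_read, shared_from_this(),
            boost::asio::placeholders::error,
            boost::asio::placeholders::bytes_transferred));
  }
  else {
    connection_manager_.stop(shared_from_this());
  }
}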

Related

How to send websocket message while waiting to receive?

My goal is to register to a websocket service to get real-time company quotations.
So I based my code on the following example, mostly by calling async_read again once a quotation is received, in order to accept future quotations:
https://www.boost.org/doc/libs/master/libs/beast/example/websocket/client/async-ssl/websocket_client_async_ssl.cpp
The problem is that while I am waiting for a new quotation (which can sometimes take minutes or hours for small companies), the program is blocked waiting for a message and I do not have the opportunity to ask for another company.
I tried to use the "post" function to call async_write again in the right context thread, but the program crashed.
Is there any way to force the completion of the on_read callback, so that I then have the opportunity to send a new message?
Here is the function I modified (simplified without mutexes):
void
on_read(
    beast::error_code ec,
    std::size_t bytes_transferred)
{
  boost::ignore_unused(bytes_transferred);
  if (ec)
    return fail2(ec, "read");
  std::string mycontent = beast::buffers_to_string(buffer_.data());
  cout << mycontent << endl;
  buffer_.clear();
  ws_.async_read(
      buffer_,
      beast::bind_front_handler(
          &session::on_read,
          shared_from_this()));
}

void subscribe(const std::string &symbol)
{
  // We save the message in the queue
  std::string text = "{\"action\": \"subscribe\", \"symbols\": \"" + symbol + "\"}";
  msgqueue_.push_back(text);
  boost::asio::post(ioc_, beast::bind_front_handler(&session::_subscription_to_post, shared_from_this()));
}

void _subscription_to_post()
{
  if (msgqueue_.empty())
    return;
  // We send the message
  ws_.async_write(
      net::buffer(msgqueue_.front()),
      beast::bind_front_handler(
          &session::on_write,
          shared_from_this()));
  msgqueue_.pop_front();
}
And the program crashes immediately when trying to async_write.
The problem is when I am waiting for a new quotation [...] the program is blocked waiting for a message
It isn't technically blocked, because you are using async_read.
I tried to use the "post" function to call async_write again in the right context thread, but the program crashed.
That means you're doing something wrong. You can post a question with your self-contained minimal code, and we can tell you what is wrong.
In general, you can use a single read operation and a single write operation concurrently (as in: in flight, asynchronously, you still need to synchronize threads accessing all related resources).
Typically, you have a single async-read chain active at all times, and an outbound message queue that is drained by a single async-write chain (which obviously ends when the queue is empty, so it needs to be initiated whenever the first outbound message is queued).
I have many answers on this site (literally dozens) that you may be able to find by searching for outbox or outbox_, for example. Keep in mind that the majority of them deal with plain or SSL sockets instead of websockets, but the pattern is practically the same.
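A minimal sketch of that outbox pattern, adapted to the session class from the question (ws_, ioc_, msgqueue_, on_write and fail2 are the question's own names; everything is assumed to be touched only from the io_context thread, so no locking is shown):

void send(std::string msg)
{
    // Hop to the io_context thread before touching the queue.
    net::post(ioc_, beast::bind_front_handler(&session::do_queue, shared_from_this(), std::move(msg)));
}

void do_queue(std::string msg)
{
    msgqueue_.push_back(std::move(msg));
    // Only start a write chain if one isn't already in flight.
    if (msgqueue_.size() == 1)
        do_write();
}

void do_write()
{
    // The front element must stay alive until on_write runs, so it is not popped here.
    ws_.async_write(
        net::buffer(msgqueue_.front()),
        beast::bind_front_handler(&session::on_write, shared_from_this()));
}

void on_write(beast::error_code ec, std::size_t)
{
    if (ec)
        return fail2(ec, "write");
    msgqueue_.pop_front();
    if (!msgqueue_.empty())
        do_write(); // keep draining the queue
}

With this in place, subscribe() would simply call send(text) instead of posting _subscription_to_post directly.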
Is there any way to force the completion of the on_read callback, so that I then have the opportunity to send a new message?
You can technically cancel() it, which completes it with operation_aborted. But that is not what you need. You want full duplex, so calling cancel() is the opposite of what you want.

C++ websocket client without receive callback

We have a WebSocket server and clients in C#. The server is designed to slow down depending on how fast or slow the client reads and processes the messages. The C# client reads one message at a time.
I'm looking to write a client in C++, and all the libraries I have looked at so far have a message handler or callback mechanism for receiving messages from the server.
This would mean that the client receives messages continuously and queues them, and the client then reads from the queue. This is not the behavior we're looking for. We require the client to read a message, process it, and once the processing is complete, read the next message. Is there any library available we could use to achieve this?
I've so far checked cpprestsdk, websocketpp, libwebsocket
You can use the Boost.ASIO library.
I made a server that works with WebSocket and multiple connections. Each connection receives data asynchronously, and each message is handled before the next one is read:
/**
 * This method is used to read the incoming message on the WebSocket
 * and handle it before reading the next message.
 */
void wscnx::read()
{
    if (!is_ssl && !m_ws->is_open())
        return;
    else if (is_ssl && !m_wss->is_open())
        return;

    auto f_read = [self{shared_from_this()}](const boost::system::error_code &ec, std::size_t bytes_transferred)
    {
        boost::ignore_unused(bytes_transferred);

        // This indicates that the session was closed
        if (ec == websocket::error::closed)
        {
            self->close(beast::websocket::close_code::normal, ec);
            return;
        }
        if (ec)
        {
            self->close(beast::websocket::close_code::abnormal, ec);
            return;
        }

        std::string data = beast::buffers_to_string(self->m_rx_buffer.data());
        self->m_rx_buffer.consume(bytes_transferred);

        if (!self->m_server.expired())
        {
            std::string_view vdata(data.c_str());
            /*************************************
              Here is where the data is handled
            **************************************/
            self->m_server.lock()->on_data_rx(self->m_cnx_info.id, vdata, self->cnx_info());
        }

        /*************************************
          Read the next message in the buffer.
        **************************************/
        self->read();
    }; // lambda

    if (!is_ssl)
        m_ws->async_read(m_rx_buffer, f_read);
    else
        m_wss->async_read(m_rx_buffer, f_read);
}
In this sample the connection can be plain or secure (SSL); both cases use a lambda function to receive the data.
I tested this service with a JS application in the browser, and there is no problem with the sequence of handling each message.
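The same read-then-process-then-read chain works on the client side. Below is a minimal sketch under assumed names (a client class holding a beast::websocket::stream ws_, a beast::flat_buffer buffer_, and a process_message() function); the next async_read is only issued after the current message has been fully processed, so the client never reads ahead:

void client::do_read()
{
    ws_.async_read(buffer_,
        [self = shared_from_this()](beast::error_code ec, std::size_t /*bytes_transferred*/)
        {
            if (ec)
                return; // connection closed or failed
            // Handle the message completely before asking for the next one.
            self->process_message(beast::buffers_to_string(self->buffer_.data()));
            self->buffer_.consume(self->buffer_.size());
            self->do_read(); // only now request the next message
        });
}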

Receiving large binary data over Boost::Beast websocket

I am trying to receive a large amount of data using a boost::beast::websocket, fed by another boost::beast::websocket. Normally this data is sent to a connected browser, but I'd like to set up a purely C++ unit test validating certain components of the traffic. I set auto-fragmentation to true on the sender with a maximum size of 1 MB, but after a few messages the receiver spits out:
Read 258028 bytes of binary
Read 1547176 bytes of binary
Read 168188 bytes of binary
"Failed read: The WebSocket message exceeded the locally configured limit"
Now, I have no expectation that a fully developed and well-supported browser should exhibit the same characteristics as my possibly poorly architected unit test, and it does not. The browser has no issue reading 25 MB messages over the websocket. My boost::beast::websocket, on the other hand, hits a limit.
So before I go down a rabbit hole, I'd like to see if anyone has any thoughts on this. My read sections looks like this:
void on_read(boost::system::error_code ec, std::size_t bytes_transferred)
{
    boost::ignore_unused(bytes_transferred);
    if (ec)
    {
        m_log.error("Failed read: " + ec.message());
        // Stop the websocket
        stop();
        return;
    }

    std::string data(boost::beast::buffers_to_string(m_buffer.data()));

    // Yes I know this looks dangerous. The sender always sends as binary but occasionally sends JSON
    if (data.at(0) == '{')
        m_log.debug("Got message: " + data);
    else
        m_log.debug("Read " + utility::to_string(m_buffer.data().buffer_bytes()) + " of binary data");

    // Do the things with the incoming data
    for (auto&& callback : m_read_callbacks)
        callback(data);

    // Toss the data
    m_buffer.consume(bytes_transferred);

    // Wait for some more data
    m_websocket.async_read(
        m_buffer,
        std::bind(
            &WebsocketClient::on_read,
            shared_from_this(),
            std::placeholders::_1,
            std::placeholders::_2));
}
I saw in a separate example that instead of doing an async read, you can do a for/while loop reading some data until the message is done (https://www.boost.org/doc/libs/1_67_0/libs/beast/doc/html/beast/using_websocket/send_and_receive_messages.html). Would this be the right approach for an always open websocket that could send some pretty massive messages? Would I have to send some indicator to the client that the message is indeed done? And would I run into the exceeded buffer limit issue using this approach?
If your use pattern is fixed:
std::string data(boost::beast::buffers_to_string(m_buffer.data()));
And then, in particular
callback(data);
Then there will be no use at all in reading block-wise, since you will be allocating the same memory anyway. Instead, you can raise the "locally configured limit":
ws.read_message_max(20ull << 20); // sets the limit to 20 MiB
The default value is 16 MiB (as of Boost 1.75).
Side Note
You can probably also use ws.got_binary() to detect whether the last message received was binary or not.
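For instance, a minimal sketch of where both calls could go in the client from the question (m_websocket, m_log and utility::to_string are the question's own members; the 32 MiB limit is just an example value):

// Once, before the first async_read (e.g. right after the websocket handshake):
m_websocket.read_message_max(32ull << 20);   // allow messages up to 32 MiB

// Inside on_read, instead of inspecting the first character of the payload:
if (m_websocket.got_binary())
    m_log.debug("Read " + utility::to_string(bytes_transferred) + " bytes of binary data");
else
    m_log.debug("Got message: " + data);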

boost asio ssl not working as expected when used with null_buffers

We have a boost asio ssl server that reads data from a client. We have a requirement to perform the actual read of the data in our own code, as opposed to having it read directly by the boost async_read_some() routines by passing a buffer to them. Hence we pass null_buffers() to async_read_some() and later do the actual read using the socket->read_some() API. Our sockets are always non-blocking for read and write. http://www.boost.org/doc/libs/1_40_0/doc/html/boost_asio/overview/core/reactor.html
This works fine with normal stream (tcp) socket. However, with ssl socket, it is not working correctly.
1. The read callback constantly gets called even when there is no data available. This churns the CPU forever. When ssl_socket->read_some() is called, we get 0 bytes and the error code is set to 11 (Resource unavailable).
2. In the read callback, we always trigger future reads again by calling socket->async_read_some(null_buffers(), ...).
3. If we don't do (2), then not all data is received when large amounts of data need to be read. If we do (2), all data is correctly received (over many read callbacks), but the read callback keeps getting called even after all data has been read and there is no more data to read...
To verify the same, I used the example boost ssl server and client code and notice a similar behavior with following change to the server code
http://www.boost.org/doc/libs/1_40_0/doc/html/boost_asio/example/ssl/server.cpp
The handle_read() callback constantly gets called forever... If we use a normal boost stream (tcp) socket (not boost ssl), then we do not see such callbacks when no data is available to read.
We see the same behavior with the latest boost and ssl code as well. Is the null_buffers() read/write mode well tested and supported with ssl sockets? I could not find much documentation for this anywhere.
Can someone please help? Thank you so much!
diff --git a/server.cc b/server.cc
index 3f2b028..bfd65c7 100644
--- a/server.cc
+++ b/server.cc
@@ -40,7 +40,10 @@ public:
     {
       if (!error)
       {
-        socket_.async_read_some(boost::asio::buffer(data_, max_length),
+        socket().non_blocking(true);
+
+        // socket_.async_read_some(boost::asio::buffer(data_, max_length),
+        socket_.async_read_some(boost::asio::null_buffers(),
             boost::bind(&session::handle_read, this,
               boost::asio::placeholders::error,
               boost::asio::placeholders::bytes_transferred));
@@ -56,6 +59,8 @@ public:
     {
       if (!error)
       {
+        boost::system::error_code err;
+        bytes_transferred = socket_.read_some(boost::asio::mutable_buffers_1(boost::asio::buffer(data_, max_length)), err);
         boost::asio::async_write(socket_,
             boost::asio::buffer(data_, bytes_transferred),
             boost::bind(&session::handle_write, this,
               boost::asio::placeholders::error));
+
+        // Post for read again..
+        socket_.async_read_some(boost::asio::null_buffers(),
+            boost::bind(&session::handle_read, this,
+              boost::asio::placeholders::error,
+              boost::asio::placeholders::bytes_transferred));

Boost asio-acceptor unblocks without a new connection?

I am using the C++ boost asio library, where I listen for new connections on a socket. On getting a connection I process the request and then listen for a new connection on another socket, in a loop.
while (true)
{
    tcp::socket soc(this->blitzIOService);
    this->blitzAcceptor.listen();

    boost::system::error_code ec;
    this->blitzAcceptor.accept(soc, ec);
    if (ec)
    {
        // Some error occurred
        cerr << "Error Value: " << ec.value() << endl;
        cerr << "Error Message: " << ec.message() << endl;
        soc.close();
        break;
    }
    else
    {
        this->HandleRequest(soc);
        soc.shutdown(tcp::socket::shutdown_both);
        soc.close();
    }
}
According to my understanding it should always block at this->blitzAcceptor.accept(soc,ec); and every time a new connection is made it should handle it in this->HandleRequest(soc); and block again at this->blitzAcceptor.accept(soc,ec);
But what I see is that the first time it blocks at this->blitzAcceptor.accept(soc,ec), and when a new connection is made it handles the request; but instead of blocking again at this->blitzAcceptor.accept(soc,ec), it goes ahead into this->HandleRequest(soc); and blocks at soc.receive(); inside.
This doesn't happen always, but it happens most of the time. What could be the reason for this behavior, and how can I ensure that it always blocks at this->blitzAcceptor.accept(soc,ec) until a new request is made?
What could be the reason for this behavior?
This behavior is entirely dependent on the client code. If it connects but does not send a request, the server will block when receiving data.
How can I ensure that it always blocks at this->blitzAcceptor.accept(soc,ec) until a new request is made?
You can't. But your server can initiate a timeout that starts immediately after accepting the connection. If the client does not send a request within that duration, close the socket. To do that, you should switch to using asynchronous methods rather than synchronous methods.
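A minimal sketch of that idea with asynchronous calls (blitzAcceptor and blitzIOService are the names from the question; the 5-second timeout and the HandleRequestAsync helper, which would be expected to cancel the timer once a request actually arrives, are assumptions):

void Server::StartAccept()
{
    auto soc = std::make_shared<tcp::socket>(this->blitzIOService);
    this->blitzAcceptor.async_accept(*soc,
        [this, soc](const boost::system::error_code& ec)
        {
            if (!ec)
            {
                // Arm a timeout that starts right after the connection is accepted.
                auto timer = std::make_shared<boost::asio::deadline_timer>(this->blitzIOService);
                timer->expires_from_now(boost::posix_time::seconds(5)); // assumed timeout
                timer->async_wait([soc, timer](const boost::system::error_code& e)
                {
                    if (!e)           // timer expired: no request arrived in time
                        soc->close(); // makes the pending receive fail instead of hanging
                });
                HandleRequestAsync(soc, timer); // hypothetical asynchronous version of HandleRequest
            }
            StartAccept(); // keep accepting further connections
        });
}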
Be sure you're not blocking on a read(2) call for the file descriptor that you are listen(2)'ing on vs the file descriptor that you accept(2)'ed. I think if you print out the file descriptor numbers you'll very quickly find your problem.