How to send websocket message while waiting to receive? - c++

My goal is to register to a websocket service to get real-time company quotations.
So I based my code on the following example, mostly calling async_read again once a quotation is received, in order to accept future quotations:
https://www.boost.org/doc/libs/master/libs/beast/example/websocket/client/async-ssl/websocket_client_async_ssl.cpp
The problem is that while I am waiting for a new quotation (which can sometimes take minutes or hours for small companies), the program is blocked waiting for a message, and I do not have the opportunity to ask for another company.
I tried to use the "post" function to call async_write again on the right execution context, but the program crashed.
Is there any way to force the completion of the on_read callback, so that I then have the opportunity to send a new message?
Here is the function I modified (simplified without mutexes):
void on_read(beast::error_code ec, std::size_t bytes_transferred)
{
    boost::ignore_unused(bytes_transferred);
    if (ec)
        return fail2(ec, "read");

    std::string mycontent = beast::buffers_to_string(buffer_.data());
    cout << mycontent << endl;
    buffer_.clear();

    ws_.async_read(
        buffer_,
        beast::bind_front_handler(
            &session::on_read,
            shared_from_this()));
}
void subscribe(const std::string& symbol)
{
    // We save the message in the queue
    std::string text = "{\"action\": \"subscribe\", \"symbols\": \"" + symbol + "\"}";
    msgqueue_.push_back(text);
    boost::asio::post(ioc_,
        beast::bind_front_handler(&session::_subscription_to_post, shared_from_this()));
}
void _subscription_to_post()
{
    if (msgqueue_.empty())
        return;

    // We send the message
    ws_.async_write(
        net::buffer(msgqueue_.front()),
        beast::bind_front_handler(
            &session::on_write,
            shared_from_this()));
    msgqueue_.pop_front();
}
And the program crashes immediately when trying to async_write.

The problem is that while I am waiting for a new quotation [...] the program is blocked waiting for a message
It isn't technically blocked, because you are using async_read.
I tried to use the "post" function to call async_write again on the right execution context, but the program crashed.
That means you're doing something wrong. You can post a question with your self-contained minimal code, and we can tell you what is wrong.
In general, you can have a single read operation and a single write operation in flight concurrently (asynchronously; you still need to synchronize the threads accessing all related resources).
Typically, you have a single async-read chain active at all times, and an outbound message queue that is drained by a single async-write chain (which obviously ends when the queue is empty, so it needs to be initiated again when the next outbound message is queued).
I have many answers on this site (literally dozens) that you may be able to find by searching for e.g. outbox or outbox_. Keep in mind that the majority of them deal with plain (SSL) sockets instead of websockets, but the pattern is practically the same.
Is there any way to force the completion of the on_read callback, so that I then have the opportunity to send a new message?
You can technically cancel() it, which completes it with operation_aborted. But that is not what you need. You want full duplex, so calling cancel() is the opposite of what you want.
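For reference, a minimal sketch of that outbox pattern (member names like outbox_, send and do_write are illustrative; everything is assumed to run on the stream's strand). Note that the front message must stay in the queue until on_write fires, because async_write only borrows the buffer:
void session::send(std::string msg)
{
    // Hop onto the stream's executor so outbox_ is only ever touched there
    net::post(ws_.get_executor(),
        [self = shared_from_this(), msg = std::move(msg)]() mutable {
            self->outbox_.push_back(std::move(msg));
            if (self->outbox_.size() == 1) // no write in flight yet:
                self->do_write();          // start the write chain
        });
}

void session::do_write()
{
    ws_.async_write(net::buffer(outbox_.front()),
        beast::bind_front_handler(&session::on_write, shared_from_this()));
}

void session::on_write(beast::error_code ec, std::size_t)
{
    if (ec)
        return; // report/handle the error
    outbox_.pop_front();  // only now is the buffer safe to discard
    if (!outbox_.empty())
        do_write();       // keep draining until the queue is empty
}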

Related

Can you unblock boost::asio::io_context while waiting for async_read?

I'm trying to connect to a server via Boost Asio and Beast. I need to send heartbeats to the server every 40 seconds, but when I try to, my write requests get stuck in a queue and never get executed unless the server sends something first.
I have this code to look for new messages that come in.
this->ioContext.run();
thread heartbeatThread(&client::heartbeatCycle, this);
while (this->p->is_socket_open()) {
    this->ioContext.restart();
    this->p->asyncQueue("", true);
    this->ioContext.run();
}
The asyncQueue function just calls async_read and blocks the io context. The heartbeatCycle tries to send heartbeats, but they get stuck in the queue. If I force it to send anyway, I get:
Assertion failed: (id_ != T::id), function try_lock, file soft_mutex.hpp, line 89.
When the server sends a message, the queue is unblocked, and all the queued messages go through, until there is no more work, and the io_context starts blocking again.
So my main question is: is there any way to unblock the io context without having the server send a message? If not, is there a way to emulate the server sending a message?
Thanks!
EDIT:
I have this queue function that queues messages being sent called asyncQueue.
void session::asyncQueue(const string& payload, const bool& madeAfterLoop)
{
    if (!payload.empty())
    {
        queue_.emplace_back(payload);
    }
    if (payload.empty() && madeAfterLoop)
    {
        queue_.emplace_back("KEEPALIVE");
    }
    // If there is something to write, write it.
    if (!currentlyQueued_ && !queue_.empty() && queue_.at(0) != "KEEPALIVE")
    {
        currentlyQueued_ = true;
        ws_.async_write(
            net::buffer(queue_.at(0)),
            beast::bind_front_handler(
                &session::on_write,
                shared_from_this()));
        queue_.erase(queue_.begin());
    }
    // If there is nothing to write, read the buffer to keep stream alive
    if (!currentlyQueued_ && !queue_.empty())
    {
        currentlyQueued_ = true;
        ws_.async_read(
            buffer_,
            beast::bind_front_handler(
                &session::on_read,
                shared_from_this()));
        queue_.erase(queue_.begin());
    }
}
The problem is that when the code has no work left to do, it calls async_read and gets stuck until the server sends something.
In the function where I initialized the io_context, I also created a separate thread to send heartbeats every x seconds.
void client::heartbeatCycle()
{
    while (this->p->is_socket_open())
    {
        this->p->asyncQueue(bot::websocket::sendEvents::getHeartbeatEvent(cache_), true);
        this_thread::sleep_for(chrono::milliseconds(10000));
    }
}
Lastly, I have these two lines in my on_read function, which runs whenever an async_read completes.
currentlyQueued_ = false;
asyncQueue();
Once there is no more work to do, the program calls async_read, but currentlyQueued_ is never set back to false.
The problem is that the io_context is stuck looking for something to read. What can I do to stop the io_context from blocking the heartbeats from sending?
The only thing I have found that stops the io_context from blocking is the server sending me a message. When it does, currentlyQueued_ is set to false, and the queue is able to run and gets cleared.
That is the reason I'm looking for something that can emulate the server sending me a message. So is there a function in asio/beast that can do that? Or am I going about this the wrong way?
Thanks so much for your help.
The idea is to run the io_service elsewhere (on a thread, or in main, after starting an async chain).
Right now you're calling restart() on it, which simply doesn't afford continuous operation. Why stop() it, or let it run out of work, at all?
Note, manually starting threads is atypical and unsafe.
I would give examples, but lots already exist (also on this site). I'd need to see question code with more detail to give concrete suggestions.
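That said, a minimal sketch of the shape this could take (session::send and make_heartbeat are hypothetical stand-ins for your own queueing code): keep the heartbeat on a timer inside the io_context, start the read chain once, and let run() do all the work:
boost::asio::steady_timer heartbeat_timer_{ws_.get_executor()};

void session::schedule_heartbeat()
{
    heartbeat_timer_.expires_after(std::chrono::seconds(40));
    heartbeat_timer_.async_wait(
        [self = shared_from_this()](boost::system::error_code ec) {
            if (ec) return;               // cancelled: session is shutting down
            self->send(make_heartbeat()); // enqueue; the async_write chain drains it
            self->schedule_heartbeat();   // re-arm for the next beat
        });
}

// In main, once: start the async_read chain and the first heartbeat, then
// let the context run until there is no more work - no restart(), no
// sleeping thread:
session_->start_reading();
session_->schedule_heartbeat();
ioContext.run();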

Understand the usage of timeout in beast::tcp_stream?

Reference:
https://www.boost.org/doc/libs/1_78_0/libs/beast/example/websocket/client/async/websocket_client_async.cpp
https://www.boost.org/doc/libs/1_78_0/libs/beast/doc/html/beast/using_io/timeouts.html
https://www.boost.org/doc/libs/1_78_0/libs/beast/doc/html/beast/ref/boost__beast__tcp_stream.html
void on_resolve(beast::error_code ec, tcp::resolver::results_type results)
{
    if (ec) return fail(ec, "resolve");

    // Set the timeout for the operation
    beast::get_lowest_layer(ws_).expires_after(std::chrono::seconds(30));

    // Make the connection on the IP address we get from a lookup
    beast::get_lowest_layer(ws_).async_connect(
        results,
        beast::bind_front_handler(
            &session::on_connect, shared_from_this()));
}
void on_connect(beast::error_code ec, tcp::resolver::results_type::endpoint_type ep)
{
    if (ec) return fail(ec, "connect");

    // Turn off the timeout on the tcp_stream, because
    // the websocket stream has its own timeout system.
    // beast::get_lowest_layer(ws_).expires_never(); // Note: do NOT call this line for this question!!!
    ...
    host_ += ':' + std::to_string(ep.port());

    // Perform the websocket handshake
    ws_.async_handshake(host_, "/",
        beast::bind_front_handler(&session::on_handshake, shared_from_this()));
}
Question 1>
Will the timeout of beast::tcp_stream continue to run after a previous asynchronous operation finishes on time?
For example, in the code above the timeout expires after 30 seconds. If async_connect doesn't finish within 30 seconds, session::on_connect will receive error::timeout as the value of ec. Now let's assume async_connect takes 10 seconds:
can I assume that async_handshake then needs to finish within the remaining 20 (i.e. 30-10) seconds, otherwise error::timeout will be sent to session::on_handshake? I infer this from the comment inside the on_connect function (i.e.
Turn off the timeout on the tcp_stream
). In other words, a timeout is only turned off once the specified expiration period has elapsed or it is disabled by expires_never. Is my understanding correct?
Question 2> I also want to know what a good pattern for timeouts is in both the initiating function and the callback function.
When we call an async operation:
void func_async_calling()
{
    // set some timeout here (i.e. XXXX seconds)
    Step 1> beast::get_lowest_layer(ws_).expires_after(std::chrono::seconds(XXXX));
    Step 2> ws_.async_operation(..., func_async_callback);
    Step 3> beast::get_lowest_layer(ws_).expires_never();
}
When we define an async callback handler for an asynchronous operation:
void func_async_callback()
{
    Step 1> Either call
        // Disable the timeout for the next logical operation.
        beast::get_lowest_layer(ws_).expires_never();
    or
        // Enable a new timeout
        beast::get_lowest_layer(ws_).expires_after(std::chrono::seconds(YYYY));
    Step 2> call another asynchronous function
    Step 3> beast::get_lowest_layer(ws_).expires_never();
}
Does this make sense?
Thank you
Question 1
Yes, that's correct. The linked page has the confirmation:
// The timer is still running. If we don't want the next
// operation to time out 30 seconds relative to the previous
// call to `expires_after`, we need to turn it off before
// starting another asynchronous operation.
stream.expires_never();
Question 2
That looks fine. The only subtleties I can think of are:
Often, because of thread safety, the initiation as well as the completion happen on the same (implicit) strand.
If that's the case, then in your completion-handler example the expires_never() would be redundant.
If the completion handler is not on the same strand, you want to actively avoid touching the expiry, because that would be a data race.
An alternative pattern is to set the expiry only once for a lengthier episode (e.g. a multi-message conversation between client/server). Obviously, in this pattern nobody touches the expiry after the initial setting. This seems pretty obvious, but I thought I'd mention it before someone casts this pattern in stone and never thinks about it again.
Always do what you need, and prefer simple code. I think your basic understanding of the feature is right. (No wonder: this documentation is a piece of art.)
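To make the per-operation variant concrete, here is a sketch following the member names of the linked example (single-threaded, so touching the expiry from the completion handler is fine):
void session::on_resolve(beast::error_code ec, tcp::resolver::results_type results)
{
    if (ec) return fail(ec, "resolve");

    // Deadline for the connect
    beast::get_lowest_layer(ws_).expires_after(std::chrono::seconds(30));
    beast::get_lowest_layer(ws_).async_connect(results,
        beast::bind_front_handler(&session::on_connect, shared_from_this()));
}

void session::on_connect(beast::error_code ec, tcp::resolver::results_type::endpoint_type)
{
    if (ec) return fail(ec, "connect");

    // Fresh deadline for the handshake; without this, the handshake would
    // still be limited to 30s relative to the previous expires_after call
    beast::get_lowest_layer(ws_).expires_after(std::chrono::seconds(30));
    ws_.async_handshake(host_, "/",
        beast::bind_front_handler(&session::on_handshake, shared_from_this()));
}

void session::on_handshake(beast::error_code ec)
{
    if (ec) return fail(ec, "handshake");

    // From here on, the websocket stream's own timeout system takes over
    beast::get_lowest_layer(ws_).expires_never();
}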

Concurrent request processing with Boost Beast

I'm referring to this sample program from the Beast repository: https://www.boost.org/doc/libs/1_67_0/libs/beast/example/http/server/fast/http_server_fast.cpp
I've made some changes to the code to check the ability to process multiple requests simultaneously.
boost::asio::io_context ioc{1};
tcp::acceptor acceptor{ioc, {address, port}};

std::list<http_worker> workers;
for (int i = 0; i < 10; ++i)
{
    workers.emplace_back(acceptor, doc_root);
    workers.back().start();
}

ioc.run();
My understanding of the above is that I will now have 10 worker objects to run I/O, i.e. handle incoming connections.
So, my first question: is this understanding correct?
Assuming that the above is correct, I've made some changes to the lambda (handler) passed to the tcp::acceptor:
void accept()
{
    // Clean up any previous connection.
    boost::beast::error_code ec;
    socket_.close(ec);
    buffer_.consume(buffer_.size());

    acceptor_.async_accept(
        socket_,
        [this](boost::beast::error_code ec)
        {
            if (ec)
            {
                accept();
            }
            else
            {
                boost::system::error_code ec2;
                boost::asio::ip::tcp::endpoint endpoint = socket_.remote_endpoint(ec2);

                // Request must be fully processed within 60 seconds.
                request_deadline_.expires_after(
                    std::chrono::seconds(60));

                std::cerr << "Remote Endpoint address: " << endpoint.address()
                          << " port: " << endpoint.port() << "\n";

                read_request();
            }
        });
}
And also in process_request():
void process_request(http::request<request_body_t, http::basic_fields<alloc_t>> const& req)
{
    switch (req.method())
    {
    case http::verb::get:
        std::cerr << "Simulate processing\n";
        std::this_thread::sleep_for(std::chrono::seconds(30));
        send_file(req.target());
        break;
    default:
        // We return responses indicating an error if
        // we do not recognize the request method.
        send_bad_response(
            http::status::bad_request,
            "Invalid request-method '" + req.method_string().to_string() + "'\r\n");
        break;
    }
}
And here's my problem: if I send two simultaneous GET requests to my server, they are processed sequentially. I know this because the second "Simulate processing" statement is printed ~30 seconds after the previous one, which means execution gets blocked on the first request.
I've tried to read the documentation of boost::asio to better understand this, but to no avail.
The documentation for acceptor::async_accept says:
Regardless of whether the asynchronous operation completes immediately or not, the handler will not be invoked from within this function. Invocation of the handler will be performed in a manner equivalent to using boost::asio::io_service::post().
And the documentation for boost::asio::io_service::post() says:
The io_service guarantees that the handler will only be called in a thread in which the run(), run_one(), poll() or poll_one() member functions is currently being invoked.
So, if 10 workers are in the run() state, then why would the two requests get queued?
And also, is there a way to workaround this behavior without adapting to a different example? (e.g. https://www.boost.org/doc/libs/1_67_0/libs/beast/example/http/server/async/http_server_async.cpp)
io_context does not create threads internally to execute the tasks; rather, it uses the threads that call io_context::run explicitly. In the example, io_context::run is called from just one thread (the main thread). So you have just one thread for task execution, which gets blocked in the sleep, leaving no other thread to execute the other tasks.
To make this example work you have to:
Add more threads to the pool (like in the second example you referred to):
size_t const threads_count = 4;
std::vector<std::thread> v;
v.reserve(threads_count - 1);
for (size_t i = 0; i < threads_count - 1; ++i) { // add threads_count - 1 extra threads to the pool
    v.emplace_back([&ioc] { ioc.run(); });
}
ioc.run(); // add the main thread to the pool as well
Add synchronization (for example, using a strand like in the second example) where it is needed (at least for the socket reads and writes), because your application is now multi-threaded.
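For example, a sketch of what that synchronization could look like (strand_ and read_handler are illustrative names, not part of the example):
// Give each worker its own strand and bind its completion handlers to it,
// so they never run concurrently even though several threads call ioc.run()
boost::asio::strand<boost::asio::io_context::executor_type> strand_{ioc.get_executor()};

socket_.async_read_some(boost::asio::buffer(buffer_),
    boost::asio::bind_executor(strand_,
        [this](boost::beast::error_code ec, std::size_t bytes_transferred) {
            // Runs on strand_: safe to touch this worker's socket and buffers
            read_handler(ec, bytes_transferred);
        }));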
UPDATE 1
Answering the question "What is the purpose of the list of workers in the Beast example (the first one referred to) if in fact io_context is only running on one thread?"
Notice that, regardless of the thread count, the IO operations here are asynchronous, meaning http::async_write(socket_, ...) does not block the thread. And note that I am explaining the original example here (not your modified version). One worker deals with one round-trip of request-response.
Imagine the situation: there are two clients, client1 and client2. Client1 has a poor internet connection (or requests a very big file) and client2 has the opposite conditions. Client1 makes a request, then client2 makes a request. With just one worker, client2 would have to wait until client1 finished the whole request-response round-trip. But because there is more than one worker, client2 gets its response immediately, without waiting for client1 (keep in mind the IO does not block your single thread).
The example is optimized for the situation where the bottleneck is the IO, not the actual work. In your modified example you have quite the opposite situation: the work (30 s) is very expensive compared to the IO. For that case, better to use the second example.

How to recover from network interruption using boost::asio

I am writing a server that accepts data from a device and processes it. Everything works fine unless there is an interruption in the network (i.e., if I unplug the Ethernet cable, then reconnect it). I'm using read_until() because the protocol that the device uses terminates the packet with a specific sequence of bytes. When the data stream is interrupted, read_until() blocks, as expected. However, when the stream starts up again, it remains blocked. If I look at the data stream with Wireshark, the device continues transmitting and each packet is being ACK'ed by the network stack. But if I look at bytes_readable it is always 0.
How can I detect the interruption, and how can I re-establish a connection to the data stream? Below is a code snippet; thanks in advance for any help you can offer. [Go easy on me, this is my first Stack Overflow question... and yes, I did try to search for an answer.]
using boost::asio::ip::tcp;
boost::asio::io_service IOservice;
tcp::acceptor acceptor(IOservice, tcp::endpoint(tcp::v4(), listenPort));
tcp::socket socket(IOservice);
acceptor.accept(socket);
for (;;)
{
    len = boost::asio::read_until(socket, sbuf, end);
    // Process sbuf
    // etc.
}
Remember, the client initiates a connection, so the only thing you need to achieve is to re-create the socket and start accepting again. I will keep the format of your snippet but I hope your real code is properly encapsulated.
using SocketType = boost::asio::ip::tcp::socket;

std::unique_ptr<SocketType> CreateSocketAndAccept(
    boost::asio::io_service& io_service,
    boost::asio::ip::tcp::acceptor& acceptor) {
  auto socket = std::make_unique<boost::asio::ip::tcp::socket>(io_service);
  boost::system::error_code ec;
  acceptor.accept(*socket.get(), ec);
  if (ec) {
    //TODO: Add handler.
  }
  return socket;
}
...
auto socket = CreateSocketAndAccept(IOservice, acceptor);
for (;;) {
  boost::system::error_code ec;
  auto len = boost::asio::read_until(*socket.get(), sbuf, end, ec);
  if (ec) // you could be more picky here of course,
          // e.g. check against connection_reset, connection_aborted
    socket = CreateSocketAndAccept(IOservice, acceptor);
  ...
}
Footnote: Should go without saying, socket needs to stay in scope.
Edit: Based on the comments below.
The listening socket itself does not know whether a client is silent or whether it got cut off. All operations, especially synchronous ones, should impose a time limit on completion. Consider setting SO_RCVTIMEO or SO_KEEPALIVE (per socket, or system-wide; for more info see How to use SO_KEEPALIVE option properly to detect that the client at the other end is down?).
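For instance, a sketch of both socket options on the accepted socket (the SO_RCVTIMEO call is shown POSIX-style; on Windows the value is a DWORD in milliseconds):
// Portable keep-alive via the asio socket option
boost::system::error_code ec;
socket.set_option(boost::asio::socket_base::keep_alive(true), ec);

// Receive timeout via SO_RCVTIMEO on the native handle, so the blocking
// read_until() fails with an error instead of hanging forever
timeval tv{};
tv.tv_sec = 30;
setsockopt(socket.native_handle(), SOL_SOCKET, SO_RCVTIMEO,
           reinterpret_cast<const char*>(&tv), sizeof(tv));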
Another option is to go async and implement a full-fledged "shared" socket server (the Boost example page is a great start).
Either way, you might run into data consistency issues and be forced to deal with them, e.g. when the client detects an interrupted connection, it would resend the data (or something more complex using higher-level protocols).
If you want to stay synchronous, the way I've seen things handled is to destroy the socket when you detect an interruption. The blocking call should throw an exception that you can catch and then start accepting connections again.
for (;;)
{
    try {
        len = boost::asio::read_until(socket, sbuf, end);
        // Process sbuf
        // etc.
    }
    catch (const boost::system::system_error& e) {
        // clean up. Start accepting new connections.
    }
}
As Tom mentions in his answer, there is no difference between inactivity and an ungraceful disconnect, so you need an external mechanism to detect this.
If you're expecting continuous data transfer, a per-connection timeout on the server side may be enough. A simple ping could also work: after accepting a connection, ping your client every X seconds and declare the connection dead if it doesn't answer. A sketch of this idea follows.
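Sketch of that watchdog idea with a timer (this assumes you have gone async as suggested above, so the io_service is actually running; the names are illustrative):
boost::asio::deadline_timer watchdog_(io_service_);

void rearm_watchdog()
{
    // Re-arming cancels the previous wait (it completes with operation_aborted)
    watchdog_.expires_from_now(boost::posix_time::seconds(30));
    watchdog_.async_wait([this](const boost::system::error_code& ec) {
        if (!ec)             // not cancelled: the client has gone silent
            socket_.close(); // pending reads complete with an error
    });
}

// Call rearm_watchdog() after accept() and again after every successful read.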

Boost ASIO async_read_some

I am having difficulties implementing a simple TCP server. The following code is taken from the boost::asio examples ("HTTP Server 1", to be precise).
void connection::start() {
    socket_.async_read_some(
        boost::asio::buffer(buffer_),
        boost::bind(
            &connection::handle_read, shared_from_this(),
            boost::asio::placeholders::error,
            boost::asio::placeholders::bytes_transferred
        )
    );
}
void connection::handle_read(const boost::system::error_code& e, std::size_t bytes_transferred) {
    if (!e && bytes_transferred) {
        std::cout << " " << bytes_transferred << "b" << std::endl;
        data_.append(buffer_.data(), buffer_.data() + bytes_transferred);

        //(1) what here?
        socket_.async_read_some(
            boost::asio::buffer(buffer_),
            boost::bind(
                &connection::handle_read, shared_from_this(),
                boost::asio::placeholders::error,
                boost::asio::placeholders::bytes_transferred
            )
        );
    }
    else // if (e != boost::asio::error::operation_aborted)
    {
        std::cout << data_ << std::endl;
        connection_manager_.stop(shared_from_this());
    }
}
In the original code, buffer_ is big enough to hold the entire request. That's not what I need, so I've changed the size to 32 bytes.
The server compiles and listens on port 80 of localhost, so I try to connect to it via my web browser.
Now, if statement (1) is commented out, only the first 32 bytes of the request are read and the connection hangs. The web browser keeps waiting for the response; the server waits for... I don't know what.
If (1) is uncommented, the entire request is read (and appended to data_), but it never stops: I have to cancel the request in my browser, and only then does the else { } part run and I see my request on stdout.
Question 1: How should I handle a large request?
Question 2: How should I cache the request (currently I append the buffer to a string)?
Question 3: How can I tell that the request is over? In HTTP there is always a response, so my web browser keeps waiting for it and doesn't close the connection. But how can my server know that the request is over (and perhaps close the connection, or reply with some "200 OK")?
Suppose the browser sends you 1360 bytes of data, and you tell asio to read some data into a buffer that only holds 32 bytes.
The first time around, your handler will be called with the first 32 bytes of the data. If you comment out (1), the rest of the browser's data (which it has actually already sent; it is sitting in the OS buffer waiting for you to pull it from there) is never read, and you are blocked behind io_service::run waiting for some miracle!
If you uncomment (1), your loop starts as you describe: you read the first block, then the next, and another... until the data the browser sent runs out. But after that, when you ask asio to read some more data, it waits for data that never comes from the browser (since the browser has already sent its information and is waiting for your answer). When you cancel the request in the browser, it closes its socket, and your handler is then called with an error saying "I can't read more data, the connection is closed."
What you should do to make it work is learn the HTTP format, so you know what data the browser sent you and can provide a proper answer to it; then your communication with the client can proceed. In this case the end of the request is marked by \r\n\r\n, and when you see it you shouldn't read any more data: process what you have read so far and then send a response to the browser.
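To make that concrete, here is a sketch of how handle_read could detect the end of the headers and reply (illustrative only: it assumes a connection::handle_write member exists, and it ignores request bodies, e.g. POST with Content-Length):
void connection::handle_read(const boost::system::error_code& e, std::size_t bytes_transferred) {
    if (!e && bytes_transferred) {
        data_.append(buffer_.data(), buffer_.data() + bytes_transferred);

        if (data_.find("\r\n\r\n") != std::string::npos) {
            // End of the request headers: stop reading and send a reply.
            static const std::string response =
                "HTTP/1.1 200 OK\r\nContent-Length: 0\r\nConnection: close\r\n\r\n";
            boost::asio::async_write(socket_, boost::asio::buffer(response),
                boost::bind(&connection::handle_write, shared_from_this(),
                            boost::asio::placeholders::error));
            return;
        }

        // Headers not complete yet: keep reading.
        socket_.async_read_some(boost::asio::buffer(buffer_),
            boost::bind(&connection::handle_read, shared_from_this(),
                        boost::asio::placeholders::error,
                        boost::asio::placeholders::bytes_transferred));
    }
    else {
        std::cout << data_ << std::endl;
        connection_manager_.stop(shared_from_this());
    }
}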