I am writing a C++ WebSocket server with Boost.Beast 1.70 and the MySQL 8 C connector. The server will have several clients connected simultaneously. The particularity is that each client will perform around 100 WebSocket requests in a row. Each request is "CPU light" for my server, but the server performs a "time heavy" SQL query for each request.
I started my server from the websocket_server_coro.cpp example. The server's steps are:
1) a websocket read
2) a sql request
3) a websocket write
The problem is that for a given client, the server is "locked" at step 2 and cannot read again until steps 2 and 3 have finished. Thus the 100 requests are handled sequentially, which is too slow for my use case.
I have read that non-blocking reads/writes are not possible with Boost.Beast. What I am trying now is to execute async_read and async_write in a coroutine.
void ServerCoro::accept(websocket::stream<beast::tcp_stream> &ws) {
    beast::error_code ec;
    ws.set_option(websocket::stream_base::timeout::suggested(beast::role_type::server));
    ws.set_option(websocket::stream_base::decorator([](websocket::response_type &res) {
        res.set(http::field::server, std::string(BOOST_BEAST_VERSION_STRING) + " websocket-Server-coro");
    }));
    ws.async_accept(yield[ec]);
    if (ec) return fail(ec, "accept");
    while (!_bStop) {
        beast::flat_buffer buffer;
        ws.async_read(buffer, yield[ec]);
        if (ec == websocket::error::closed) {
            std::cout << "=> get closed" << std::endl;
            return;
        }
        if (ec) return fail(ec, "read");
        auto buffer_str = new std::string(boost::beast::buffers_to_string(buffer.cdata()));
        net::post([&, buffer_str] {
            // sql async request such as:
            // while (status == (mysql_real_query_nonblocking(this->con, sqlRequest.c_str(), sqlRequest.size()))) {
            //     ioc.poll_one(ec);
            // }
            // more sql ...
            ws.async_write(net::buffer(worker->getResponse()), yield[ec]); // this line triggers: void boost::coroutines::detail::pull_coroutine_impl<void>::pull(): Assertion `! is_running()' failed.
            if (ec) return fail(ec, "write");
        });
    }
}
The problem is that the line with async_write throws an error:
void boost::coroutines::detail::pull_coroutine_impl<void>::pull(): Assertion `! is_running()' failed.
If I replace this line with a synchronous write, it works, but the server remains sequential for a given client.
I have tried executing this code on a single-threaded server. I have also tried using the same strand for async_read and async_write. I still get the assertion error.
Is such a server impossible with Boost.Beast for WebSockets?
Thank you.
Following Vinnie Falco's suggestion, I rewrote the code using the "websocket chat" and "async server" examples. Here is the final working version of the code:
void Session::on_read(beast::error_code ec, std::size_t bytes_transferred)
{
    boost::ignore_unused(bytes_transferred);
    if (ec == websocket::error::closed) return; // This indicates that the Session was closed
    if (ec) return fail(ec, "read");
    net::post([&, that = shared_from_this(), ss = std::make_shared<std::string const>(boost::beast::buffers_to_string(_buffer.cdata()))] {
        /* SQL work that calls ioc.poll_one(ec) HERE; for me the SQL response goes into worker.getResponse() used below */
        net::dispatch(_wsStrand, [&, that = shared_from_this(), sss = std::make_shared<std::string const>(worker.getResponse())] {
            async_write(sss);
        });
    });
    _buffer.consume(_buffer.size()); // we remove from the buffer what we just read
    do_read(); // go for another read
}

void Session::async_write(const std::shared_ptr<std::string const> &message) {
    _writeMessages.push_back(message);
    if (_writeMessages.size() > 1) {
        BOOST_LOG_TRIVIAL(warning) << "WRITE IS LOCKED";
    } else {
        _ws.text(_ws.got_text());
        _ws.async_write(net::buffer(*_writeMessages.front()),
                        boost::asio::bind_executor(_wsStrand,
                            beast::bind_front_handler(&Session::on_write, this)));
    }
}

void Session::on_write(beast::error_code ec, std::size_t)
{
    // Handle the error, if any
    if (ec) return fail(ec, "write");
    // Remove the string from the queue
    _writeMessages.erase(_writeMessages.begin());
    // Send the next message if any
    if (!_writeMessages.empty())
        _ws.async_write(net::buffer(*_writeMessages.front()),
                        boost::asio::bind_executor(_wsStrand,
                            beast::bind_front_handler(&Session::on_write, this)));
}
Thank you.
I have a question regarding asio and TCP sockets.
Currently I am using async_read_until to read until a sequence of two newlines ("\n\n") arrives. If there is no error I do some database work and return a response to the client. This is basically done following the Asio examples. My question is this: if I run into an error (the asio::error_code ec is set), how can I make sure that I can still write an answer back to the client? Because if I call do_write(...) from within the error branch, around half of my responses don't reach the client.
void do_read(/* some data */) {
    asio::async_read_until(socket_, buf_, "\n\n",
        [this, self](const asio::error_code& ec, std::size_t bytes_transferred)
        {
            if (ec) {
                do_write("This answer does not reach the client 50% of the time, sometimes more, sometimes less");
            } else {
                do_write("You did well my young apprentice! Always works");
            }
        });
}

void do_write(const std::string& response) {
    asio::async_write(socket_, asio::buffer(response.c_str(), response.length()),
        [this, self, response](std::error_code ec, std::size_t bytes_transferred) {
            if (ec) {
                // no error here whatsoever!
            } else {
                // OK: bytes_transferred == response.size()
            }
        });
}
Surely I am doing something wrong here, but what? At first I thought that since this is all asynchronous, the parameters might have gone out of scope. But do_write is called regardless of whether ec is set, and I even tested it using verbatim string literals, as in this example.
Archlinux
ASIO 1.18.1
netcat as client
GCC 10.2.0
Not using boost
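One thing worth checking here (a sketch, not a confirmed diagnosis): asio::async_write returns immediately, and asio::buffer(response.c_str(), ...) points at the do_write parameter, not at the lambda's captured copy. If the caller's string (e.g. a temporary built from a string literal) has been destroyed by the time the write is actually performed, the payload is garbage or lost. A common pattern is to own the data in a shared_ptr that the completion handler captures:

```cpp
void do_write(std::string response) {
    // Own the payload for the whole duration of the async operation;
    // the handler's capture keeps *msg alive until the write completes,
    // so the buffer passed to async_write stays valid.
    auto msg = std::make_shared<std::string>(std::move(response));
    asio::async_write(socket_, asio::buffer(*msg),
        [this, self, msg](std::error_code ec, std::size_t bytes_transferred) {
            if (!ec) {
                // OK: bytes_transferred == msg->size()
            }
        });
}
```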
I am using the boost::beast library for both a WebSocket and a TCP server.
Because of a requirement, I have to use the same port for both. Thus I implemented the server as follows.
void on_run() {
    // Set suggested timeout settings for the websocket
    m_ws.set_option(...);
    m_ws.async_accept(
        beast::bind_front_handler(
            &WsSessionNoSSL::on_accept,
            shared_from_this()));
}

virtual void on_accept(beast::error_code ec) {
    if (ec) {
        std::string msg = ec.message();
        CONSOLE_INFO("err: {}", msg);
        if (msg != "bad method") {
            return fail(ec, "accept");
        } else {
            doReadTcp();
            return;
        }
    }
    doReadWs();
}

void doReadTcp() {
    m_ws.next_layer().async_read_some(boost::asio::buffer(m_recvData, 15),
        [this, self = shared_from_this()](const boost::system::error_code &error,
                                          size_t bytes_transferred) {
            if (error) {
                return fail(error, "tcp read fail");
            }
            CONSOLE_INFO("recvs: {}", bytes_transferred);
            doReadTcp();
        });
}

void doReadWs() {
    m_ws.async_read(...);
}
After the accept function fails, I try to read the raw TCP data, but I cannot recover the data that was already consumed; I can only learn the failure reason via ec.message(). When the accept fails, can I still get at the data that was sent?
If that is impossible, how can I solve this problem?
I found a solution.
m_ws.async_accept(net::buffer(m_untilStr),
    beast::bind_front_handler(
        &WsSessionNoSSL::on_accept,
        shared_from_this()));
websocket::stream supports a buffered accept overload.
So first fill a buffer with the handshake data already read from the raw TCP socket, then call async_accept(buffer, handler).
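The detect-then-accept flow can be sketched roughly like this (a hypothetical sketch: doDetect and handleRawTcp are made-up names, m_recvData is assumed to be a char array member as above, and for simplicity it assumes the "GET " of a WebSocket upgrade arrives in the first read):

```cpp
void doDetect() {
    // Pre-read some raw bytes without committing to a protocol.
    m_ws.next_layer().async_read_some(boost::asio::buffer(m_recvData),
        [this, self = shared_from_this()](boost::system::error_code ec,
                                          std::size_t bytes_transferred) {
            if (ec) return fail(ec, "detect");
            if (bytes_transferred >= 4 &&
                std::memcmp(m_recvData, "GET ", 4) == 0) {
                // Looks like an HTTP upgrade request: replay the bytes we
                // already consumed through the buffered accept overload, so
                // the handshake parser still sees them.
                m_ws.async_accept(boost::asio::buffer(m_recvData, bytes_transferred),
                    beast::bind_front_handler(&WsSessionNoSSL::on_accept,
                                              shared_from_this()));
            } else {
                handleRawTcp(bytes_transferred); // plain-TCP protocol path
            }
        });
}
```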
I am trying out the Boost.Beast examples for an asynchronous WebSocket server and client.
I am running the server and client as below:
server.exe 127.0.0.1 4242 1
client.exe 127.0.0.1 4242 "Hello"
If everything works, I believe it should print "Hello" in the server's command prompt.
Below is the code:
void
on_read(
    beast::error_code ec,
    std::size_t bytes_transferred)
{
    boost::ignore_unused(bytes_transferred);

    // This indicates that the session was closed
    if (ec == websocket::error::closed)
        return;

    if (ec)
        fail(ec, "read");

    // Echo the message
    ws_.text(ws_.got_text());
    std::cout << "writing received value " << std::endl;
    ws_.async_write(
        buffer_.data(),
        beast::bind_front_handler(
            &session::on_write,
            shared_from_this()));
    std::cout << buffer_.data().data() << std::endl;
}
ws_.async_write() is not printing anything to the console; however, buffer_.data().data() renders
00000163E044EE80
How do I make sure this is working fine? How do I retrieve the string value from the socket buffer?
The line printing the content of the sent message should be placed before async_write:
std::cout << buffer_.data().data() << std::endl;
ws_.async_write(
    buffer_.data(),
    beast::bind_front_handler(
        &session::on_write,
        shared_from_this()));
Why?
All functions from Boost.Asio/Beast whose names start with async_ ALWAYS return immediately. They initiate a task that is performed in the background by the Asio core, and when it is done, the handler is called.
Look at the on_write handler:
void
on_write(
    beast::error_code ec,
    std::size_t bytes_transferred)
{
    boost::ignore_unused(bytes_transferred);

    if (ec)
        return fail(ec, "write");

    // Clear the buffer
    buffer_.consume(buffer_.size()); /// <---
consume removes a block of bytes whose length is buffer_.size() from the beginning of buffer_.
Your problem is that the buffer has probably already been cleared by the time it is printed:

  thread 1           thread 2       step
  -------------------------------------------
  async_write                       [1]
                     consume        [2]
  cout << buffer_                   [3]
In addition to using the buffer before it is consumed, in order to convert the buffer I had to write a to_string_ function which takes a flat buffer and returns a string:
std::string to_string_(beast::flat_buffer const& buffer)
{
    return std::string(boost::asio::buffer_cast<char const*>(
                           beast::buffers_front(buffer.data())),
                       boost::asio::buffer_size(buffer.data()));
}
I later found out this can just as easily be done with beast::buffers_to_string(buffer_.data()).
Reference : trying-to-understand-the-boostbeast-multibuffer
I am not very familiar with the boost::asio fundamentals. I am working on a task where I connect to a web server and read the response. The response arrives at a random time, i.e. as and when it is generated.
For this I am using the boost::beast library, which is built on top of the boost::asio fundamentals.
I have found that the async_read() function waits until it receives a response.
Now, the thing is: in the documentation and examples, the asynchronous way of WebSocket communication is shown where the WebSocket is closed after it receives the response.
That is accomplished by this code (Beast docs):
void read_resp(boost::system::error_code ec, std::size_t bytes_transferred) {
    boost::ignore_unused(bytes_transferred);
    if (ec)
        return fail(ec, "write");
    // Read a message into our buffer
    ws_.async_read(buffer_, std::bind(&session::close_ws_, shared_from_this(),
                                      std::placeholders::_1, std::placeholders::_2));
}

void close_ws_(boost::system::error_code ec, std::size_t bytes_transferred) {
    boost::ignore_unused(bytes_transferred);
    if (ec)
        return fail(ec, "read");
    // Close the WebSocket connection
    ws_.async_close(websocket::close_code::normal,
                    std::bind(&session::on_close, shared_from_this(), std::placeholders::_1));
}
In this program it is assumed that the sending is completed before receiving, and there is only one response to expect from the server, after which the client goes ahead and closes the WebSocket.
But in my program:
I have to check whether the writing part has ended; if not, while the writing is in progress the WebSocket should check the response for whatever has been written so far.
For this, I have put in an if/else which tells my program whether or not my writing has completed. If not, the program goes back to the write section and writes the required data; if yes, it goes on to close the connection.
void write_bytes(/* some parameters */) {
    // write on websocket
    // go to read_resp
}

void read_resp(boost::system::error_code ec, std::size_t bytes_transferred) {
    boost::ignore_unused(bytes_transferred);
    if (ec)
        return fail(ec, "write");
    // Read a message into our buffer
    if (write_completed) {
        ws_.async_read(buffer_, std::bind(&session::close_ws_, shared_from_this(),
                                          std::placeholders::_1, std::placeholders::_2));
    } else {
        ws_.async_read(buffer_, std::bind(&session::write_bytes, shared_from_this(),
                                          std::placeholders::_1, std::placeholders::_2));
    }
}

void close_ws_(/* some parameters */) {
    // Same as before
}
Now what I want to do is: after the write is completed, wait for 3 seconds, read the WebSocket every second, and after the 3rd second go on to close the WebSocket.
For that I have added one more if/else to read_resp that checks the 3-second condition:
void read_resp(boost::system::error_code ec, std::size_t bytes_transferred) {
    boost::ignore_unused(bytes_transferred);
    if (ec)
        return fail(ec, "write");
    // Read a message into our buffer
    if (write_completed) {
        if (3_seconds_completed) {
            ws_.async_read(buffer_, std::bind(&session::close_ws_, shared_from_this(),
                                              std::placeholders::_1, std::placeholders::_2));
        } else {
            usleep(1000000); // wait for a second
            ws_.async_read(buffer_, std::bind(&session::read_resp, shared_from_this(),
                                              std::placeholders::_1, std::placeholders::_2));
        }
    } else {
        ws_.async_read(buffer_, std::bind(&session::write_bytes, shared_from_this(),
                                          std::placeholders::_1, std::placeholders::_2));
    }
}
But the WebSocket waits in async_read until it receives something, and in doing so it runs into the session timeout.
How can I just check whether there is something to read, and then move on to execute the callback?
This might just be an answer for my particular problem here; I can't guarantee the solution for future readers.
I have removed the read_resp() self-loop and simply let async_read() move on to close_ws_ when write_completed == true.
async_read will wait for as long as it takes to receive a response and will not move on to the next step, which was causing the timeout.
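If the goal is "read, but give up after a few seconds", one alternative to sleeping in the handler is to let the stream's built-in timeout fail the pending async_read. A sketch, assuming Boost.Beast 1.70+ and that ws_ is a websocket::stream<beast::tcp_stream> (the built-in timeouts require that stream type); the handler body here is illustrative, not the original session code:

```cpp
// Configure an idle timeout so a pending async_read completes with
// beast::error::timeout instead of blocking until the session dies.
websocket::stream_base::timeout opt{};
opt.handshake_timeout = std::chrono::seconds(30);
opt.idle_timeout      = std::chrono::seconds(3); // no traffic for 3s -> timeout
opt.keep_alive_pings  = false;
ws_.set_option(opt);

ws_.async_read(buffer_,
    [self = shared_from_this()](beast::error_code ec, std::size_t) {
        if (ec == beast::error::timeout)
            return self->close_ws_(ec, 0); // nothing arrived in time: close
        if (ec)
            return fail(ec, "read");
        // ... handle the message, then issue another async_read ...
    });
```

Note the trade-off: the idle timeout tears the connection down when it fires, so this fits the "wait a bounded time, then close" requirement but is not a general "peek" mechanism.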
I'm writing a secure SSL echo server with Boost.Asio and coroutines. I'd like this server to be able to serve multiple concurrent clients; this is my code:
try {
    boost::asio::io_service io_service;
    boost::asio::spawn(io_service, [&io_service](boost::asio::yield_context yield) {
        auto ctx = boost::asio::ssl::context{ boost::asio::ssl::context::sslv23 };
        ctx.set_options(
            boost::asio::ssl::context::default_workarounds
            | boost::asio::ssl::context::no_sslv2
            | boost::asio::ssl::context::single_dh_use);
        ctx.use_private_key_file(..);         // My data setup
        ctx.use_certificate_chain_file(...);  // My data setup
        boost::asio::ip::tcp::acceptor acceptor(io_service,
            boost::asio::ip::tcp::endpoint(boost::asio::ip::tcp::v4(), port));
        for (;;) {
            boost::asio::ssl::stream<boost::asio::ip::tcp::socket> sock{ io_service, ctx };
            acceptor.async_accept(sock.next_layer(), yield);
            sock.async_handshake(boost::asio::ssl::stream_base::server, yield);
            auto ec = boost::system::error_code{};
            char data_[1024];
            auto nread = sock.async_read_some(boost::asio::buffer(data_, 1024), yield[ec]);
            if (ec == boost::asio::error::eof)
                return; // connection closed cleanly by peer
            else if (ec)
                throw boost::system::system_error(ec); // some other error, is this desirable?
            sock.async_write_some(boost::asio::buffer(data_, nread), yield[ec]);
            if (ec == boost::asio::error::eof)
                return; // connection closed cleanly by peer
            else if (ec)
                throw boost::system::system_error(ec); // some other error
            // Shutdown gracefully
            sock.async_shutdown(yield[ec]);
            if (ec && (ec.category() == boost::asio::error::get_ssl_category())
                && (SSL_R_PROTOCOL_IS_SHUTDOWN == ERR_GET_REASON(ec.value())))
            {
                sock.lowest_layer().close();
            }
        }
    });
    io_service.run();
}
catch (std::exception& e)
{
    std::cerr << "Exception: " << e.what() << "\n";
}
Anyway, I'm not sure the code above will do: in theory, calling async_accept returns control to the io_service manager.
Will another connection be accepted if one has already been accepted, i.e. once execution is past the async_accept line?
It's a bit hard to understand the specifics of your question, since the code is incomplete (e.g., there's a return in your block, but it's unclear what that block is part of).
Notwithstanding, the documentation contains an example of a TCP echo server using coroutines. It seems you basically need to add SSL support to it, to adapt it to your needs.
If you look at main, it has the following chunk:
boost::asio::spawn(io_service,
[&](boost::asio::yield_context yield)
{
tcp::acceptor acceptor(io_service,
tcp::endpoint(tcp::v4(), std::atoi(argv[1])));
for (;;)
{
boost::system::error_code ec;
tcp::socket socket(io_service);
acceptor.async_accept(socket, yield[ec]);
if (!ec) std::make_shared<session>(std::move(socket))->go();
}
});
This loops endlessly, and, following each (successful) call to async_accept, handles accepting the next connection (while this connection and others might still be active).
Again, I'm not sure about your code, but it contains exits from the loop like
return; //connection closed cleanly by peer
To illustrate the point, here are two applications.
The first is a Python multiprocessing echo client, adapted from PyMOTW:
import socket
import sys
import multiprocessing

def session(i):
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server_address = ('localhost', 5000)
    print 'connecting to %s port %s' % server_address
    sock.connect(server_address)
    print 'connected'
    for _ in range(300):
        try:
            # Send data
            message = 'client ' + str(i) + ' message'
            print 'sending "%s"' % message
            sock.sendall(message)

            # Look for the response
            amount_received = 0
            amount_expected = len(message)
            while amount_received < amount_expected:
                data = sock.recv(16)
                amount_received += len(data)
                print 'received "%s"' % data
        except:
            print >>sys.stderr, 'closing socket'
            sock.close()

if __name__ == '__main__':
    pool = multiprocessing.Pool(8)
    pool.map(session, range(8))
The details are not that important (although it's Python, and therefore easy to read), but the point is that it opens up 8 processes, and each engages the same asio echo server (below) with 300 messages.
When run, it outputs
...
received "client 1 message"
sending "client 1 message"
received "client 2 message"
sending "client 2 message"
received "client 3 message"
received "client 0 message"
sending "client 3 message"
sending "client 0 message"
...
showing that the echo sessions are indeed interleaved.
Now for the echo server. I've slightly adapted the example from the docs:
#include <cstdlib>
#include <iostream>
#include <memory>
#include <utility>
#include <boost/asio.hpp>

using boost::asio::ip::tcp;

class session :
    public std::enable_shared_from_this<session> {
public:
    session(tcp::socket socket) : socket_(std::move(socket)) {}

    void start() { do_read(); }

private:
    void do_read() {
        auto self(shared_from_this());
        socket_.async_read_some(
            boost::asio::buffer(data_, max_length),
            [this, self](boost::system::error_code ec, std::size_t length) {
                if (!ec)
                    do_write(length);
            });
    }

    void do_write(std::size_t length) {
        auto self(shared_from_this());
        socket_.async_write_some(
            boost::asio::buffer(data_, length),
            [this, self](boost::system::error_code ec, std::size_t /*length*/) {
                if (!ec)
                    do_read();
            });
    }

private:
    tcp::socket socket_;
    enum { max_length = 1024 };
    char data_[max_length];
};

class server {
public:
    server(boost::asio::io_service& io_service, short port) :
        acceptor_(io_service, tcp::endpoint(tcp::v4(), port)),
        socket_(io_service) {
        do_accept();
    }

private:
    void do_accept() {
        acceptor_.async_accept(
            socket_,
            [this](boost::system::error_code ec) {
                if (!ec)
                    std::make_shared<session>(std::move(socket_))->start();
                do_accept();
            });
    }

    tcp::acceptor acceptor_;
    tcp::socket socket_;
};

int main(int argc, char* argv[]) {
    const int port = 5000;
    try {
        boost::asio::io_service io_service;
        server s{io_service, port};
        io_service.run();
    }
    catch (std::exception& e) {
        std::cerr << "Exception: " << e.what() << "\n";
    }
}
This shows that this server indeed interleaves.
Note that this is not the coroutine version. While I once played with the coroutine version a bit, I just couldn't get it to build on my current box (also, as sehe notes in the comments below, you might anyway prefer this more mainstream version for now).
However, this is not a fundamental difference w.r.t. your question. The non-coroutine version has callbacks explicitly launching new operations and supplying the next callback; the coroutine version uses a more sequential-looking paradigm. In both versions, each call returns to Asio's control loop, which monitors all the current operations that can proceed.
From the asio coroutine docs:
Coroutines let you create a structure that mirrors the actual program logic. Asynchronous operations don’t split functions, because there are no handlers to define what should happen when an asynchronous operation completes. Instead of having handlers call each other, the program can use a sequential structure.
It's not that the sequential structure makes all operations sequential - that would eradicate the entire need for asio.
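To tie this back to the original question: in the coroutine version the same interleaving is obtained by spawning one coroutine per accepted connection, so the accept loop never waits on any client's I/O. A sketch (not the original code; assumes the same headers as the echo server above, and for the SSL case the async_handshake would be the first call inside the per-connection coroutine):

```cpp
boost::asio::spawn(io_service, [&](boost::asio::yield_context yield) {
    tcp::acceptor acceptor(io_service, tcp::endpoint(tcp::v4(), port));
    for (;;) {
        boost::system::error_code ec;
        // shared_ptr keeps the socket alive inside the spawned coroutine
        auto sock = std::make_shared<tcp::socket>(io_service);
        acceptor.async_accept(*sock, yield[ec]);
        if (ec) continue;
        // One coroutine per connection: the accept loop resumes immediately.
        boost::asio::spawn(io_service, [sock](boost::asio::yield_context yield) {
            char data[1024];
            for (;;) {
                boost::system::error_code ec;
                std::size_t n = sock->async_read_some(boost::asio::buffer(data), yield[ec]);
                if (ec) return; // eof or error ends this session only
                boost::asio::async_write(*sock, boost::asio::buffer(data, n), yield[ec]);
                if (ec) return;
            }
        });
    }
});
```

The structure within each coroutine is sequential, but yielding at every async_ call returns control to the io_service, which is what lets many such coroutines make progress concurrently.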