How to see raw tcp data on async_accept failure? - c++

I am using the boost::beast library for both a WebSocket and a plain TCP server.
Because of a requirement, I have to serve both on the same port, so I implemented the server as follows:
void on_run() {
    // Set suggested timeout settings for the websocket
    m_ws.set_option(...);

    m_ws.async_accept(
        beast::bind_front_handler(
            &WsSessionNoSSL::on_accept,
            shared_from_this()));
}

virtual void on_accept(beast::error_code ec) {
    if(ec) {
        std::string msg = ec.message();
        CONSOLE_INFO("err: {}", msg);
        if(msg != "bad method") {
            return fail(ec, "accept");
        } else {
            doReadTcp();
            return;
        }
    }
    doReadWs();
}

void doReadTcp() {
    m_ws.next_layer().async_read_some(boost::asio::buffer(m_recvData, 15),
        [this, self = shared_from_this()](const boost::system::error_code &error,
                                          size_t bytes_transferred) {
            if(error) {
                return fail(error, "tcp read fail");
            }
            CONSOLE_INFO("recvs: {}", bytes_transferred);
            doReadTcp();
        });
}

void doReadWs() {
    m_ws.async_read(...);
}
After the accept function fails, I try to read the raw TCP data, but I am not able to recover the data that was already passed. I can only learn the failure reason via ec.message(). When the accept function fails, can I get at the data that was already received?
If that is impossible, how can I solve this problem?

I found a solution:
m_ws.async_accept(net::buffer(m_untilStr),
    beast::bind_front_handler(
        &WsSessionNoSSL::on_accept,
        shared_from_this()));
websocket::stream supports a buffered accept overload: first read the initial handshake data from the TCP socket into a buffer, then pass that buffer to async_accept(buffer, handler) so the already-consumed bytes are not lost.
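To make the whole flow concrete, here is a sketch of how the pieces can fit together: read the first bytes from the raw socket yourself, then hand them to the buffered async_accept if they look like an HTTP upgrade, otherwise carry on treating the connection as plain TCP. The m_buf member (a small char array) and the looksLikeHttpUpgrade() helper are hypothetical; only the buffered async_accept overload itself comes from Beast.

void on_run() {
    // Peek at the first bytes on the raw socket before deciding which protocol this is.
    m_ws.next_layer().async_read_some(net::buffer(m_buf),
        [self = shared_from_this()](beast::error_code ec, std::size_t n) {
            if(ec)
                return fail(ec, "read");
            self->m_untilStr.assign(self->m_buf.data(), n);
            if(looksLikeHttpUpgrade(self->m_untilStr)) {
                // Replay the already-consumed bytes into the WebSocket handshake.
                self->m_ws.async_accept(net::buffer(self->m_untilStr),
                    beast::bind_front_handler(
                        &WsSessionNoSSL::on_accept,
                        self));
            } else {
                // Not a WebSocket client: the bytes are still in m_untilStr, continue as raw TCP.
                self->doReadTcp();
            }
        });
}

Because the buffered overload consumes the supplied bytes before reading more from the socket, it does not matter whether the first read caught the whole upgrade request or only part of it.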

Related

Async accept an ssl socket using asio and c++

I am trying to write an async server using asio with SSL-encrypted sockets. Currently I have code that does not use SSL, and after following this tutorial I have a basic idea of how to accept an SSL socket; however, I do not know how to adapt this code to accept an SSL connection:
void waitForClients() {
    acceptor.async_accept(
        [this](std::error_code ec, asio::ip::tcp::socket socket) {
            if (!ec) {
                Conn newConn = std::make_shared<Connection>(ctx, std::move(socket));
                connections.push_back(newConn);
            } else {
                std::cerr << "[SERVER] New connection error: " << ec.message() << "\n";
            }
            waitForClients();
        }
    );
}
// this is how the tutorial shows how to accept a connection
ssl_socket socket(io_context, ssl_context);
acceptor.accept(socket.next_layer());
The issue is that the callback for acceptor.async_accept gives an ordinary asio::ip::tcp::socket rather than an asio::ssl::stream<asio::ip::tcp::socket>, and I cannot find any documentation that suggests there is a method of async_accepting an SSL socket in such a way. The only method I have seen is to construct a socket first and then accept it afterwards, which cannot be done in this asynchronous manner.
Any help would be much appreciated.
I solved the problem by realising that the first constructor argument of asio::ssl::stream<asio::ip::tcp::socket> is any initialiser for the underlying type asio::ip::tcp::socket (the ssl context comes second). Thus the problem can be solved:
void waitForClients() {
    acceptor.async_accept(
        [this](std::error_code ec, asio::ip::tcp::socket socket) {
            if (!ec) {
                // initialise an ssl stream from the already created socket
                asio::ssl::stream<asio::ip::tcp::socket> sslStream(std::move(socket), sslCtx);
                // then pass it on to be used
                Conn newConn = std::make_shared<Connection>(ctx, std::move(sslStream));
                connections.push_back(newConn);
            } else {
                std::cerr << "[SERVER] New connection error: " << ec.message() << "\n";
            }
            waitForClients();
        }
    );
}
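One detail the snippet above leaves out is the TLS handshake: the stream has to complete it before the Connection can read or write. A minimal sketch of that step, assuming Connection can take ownership of a moved-in stream (recent Asio versions make ssl::stream movable); the shared_ptr wrapper is only there to keep the stream alive across the async call and is not part of the original answer:

auto sslStream = std::make_shared<asio::ssl::stream<asio::ip::tcp::socket>>(
    std::move(socket), sslCtx);
sslStream->async_handshake(asio::ssl::stream_base::server,
    [this, sslStream](std::error_code ec) {
        if (!ec) {
            // hand the fully established stream over to the Connection
            Conn newConn = std::make_shared<Connection>(ctx, std::move(*sslStream));
            connections.push_back(newConn);
        } else {
            std::cerr << "[SERVER] Handshake error: " << ec.message() << "\n";
        }
    });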

Boost::Beast : server with websocket pipelining

I am writing a C++ websocket server with Boost.Beast 1.70 and the MySQL 8 C connector. The server will have several clients connected simultaneously. The particularity is that each client performs something like 100 websocket requests in a row. Each request is CPU-light for my server, but the server performs a time-heavy SQL request for every one of them.
I started my server from the websocket_server_coro.cpp example. The server steps are:
1) a websocket read
2) a sql request
3) a websocket write
The problem is that, for a given user, the server is "locked" at step 2 and cannot read again until steps 2 and 3 are finished. Thus, the 100 requests are handled sequentially. This is too slow for my use case.
I have read that non-blocking reads/writes are not possible with Boost.Beast. However, what I am trying to do now is to execute async_read and async_write from inside a coroutine.
void ServerCoro::accept(websocket::stream<beast::tcp_stream> &ws) {
    beast::error_code ec;
    ws.set_option(websocket::stream_base::timeout::suggested(beast::role_type::server));
    ws.set_option(websocket::stream_base::decorator([](websocket::response_type &res) {
        res.set(http::field::server, std::string(BOOST_BEAST_VERSION_STRING) + " websocket-Server-coro");
    }));
    ws.async_accept(yield[ec]);
    if (ec) return fail(ec, "accept");

    while (!_bStop) {
        beast::flat_buffer buffer;
        ws.async_read(buffer, yield[ec]);
        if (ec == websocket::error::closed) {
            std::cout << "=> get closed" << std::endl;
            return;
        }
        if (ec) return fail(ec, "read");

        auto buffer_str = new std::string(boost::beast::buffers_to_string(buffer.cdata()));
        net::post([&, buffer_str] {
            // sql async request such as :
            // while (status == (mysql_real_query_nonblocking(this->con, sqlRequest.c_str(), sqlRequest.size()))) {
            //     ioc.poll_one(ec);
            // }
            // more sql ...
            ws.async_write(net::buffer(worker->getResponse()), yield[ec]); // this line is throwing void boost::coroutines::detail::pull_coroutine_impl<void>::pull(): Assertion `! is_running()' failed.
            if (ec) return fail(ec, "write");
        });
    }
}
The problem is that the line with async_write throws an error:
void boost::coroutines::detail::pull_coroutine_impl::pull(): Assertion `! is_running()' failed.
If I replace this line with a synchronous write, it works, but the server remains sequential for a given user.
I have tried running this code on a single-threaded server. I have also tried using the same strand for async_read and async_write. I still get the assertion error.
Is such a server impossible with Boost.Beast for websockets?
Thank you.
Following the suggestion of Vinnie Falco, I rewrote the code using the "websocket chat" and "async server" examples. Here is the final working code:
void Session::on_read(beast::error_code ec, std::size_t bytes_transferred)
{
    boost::ignore_unused(bytes_transferred);
    if(ec == websocket::error::closed) return; // This indicates that the Session was closed
    if(ec) return fail(ec, "read");

    net::post([&, that = shared_from_this(), ss = std::make_shared<std::string const>(std::move(boost::beast::buffers_to_string(_buffer.cdata())))] {
        /* SQL work that calls ioc.poll_one(ec) goes HERE; for me the SQL response ends up in worker.getResponse() used below */
        net::dispatch(_wsStrand, [&, that = shared_from_this(), sss = std::make_shared<std::string const>(worker.getResponse())] {
            async_write(sss);
        });
    });

    _buffer.consume(_buffer.size()); // we remove from the buffer what we just read
    do_read();                       // go for another read
}

void Session::async_write(const std::shared_ptr<std::string const> &message) {
    _writeMessages.push_back(message);
    if (_writeMessages.size() > 1) {
        BOOST_LOG_TRIVIAL(warning) << "WRITE IS LOCKED";
    } else {
        _ws.text(_ws.got_text());
        _ws.async_write(net::buffer(*_writeMessages.front()),
            boost::asio::bind_executor(_wsStrand, beast::bind_front_handler(&Session::on_write, this)));
    }
}

void Session::on_write(beast::error_code ec, std::size_t)
{
    // Handle the error, if any
    if(ec) return fail(ec, "write");

    // Remove the string from the queue
    _writeMessages.erase(_writeMessages.begin());

    // Send the next message if any
    if(!_writeMessages.empty())
        _ws.async_write(net::buffer(*_writeMessages.front()),
            boost::asio::bind_executor(_wsStrand, beast::bind_front_handler(&Session::on_write, this)));
}
Thank you.

How does beast async_read() work & is there an option for it?

I am not very familiar with the boost::asio fundamentals. I am working on a task where I have connected to a web server and am reading the response. The response arrives at a random time, i.e. as and when it is generated.
For this I am using the boost::beast library which is wrapped over the boost::asio fundamentals.
I have found that the async_read() function waits until it receives a response.
Now, the thing is: in the documentation and examples, the asynchronous way of websocket communication is shown where the websocket is closed after it receives the response.
That is accomplished by this code (from the Beast docs):
void read_resp(boost::system::error_code ec, std::size_t bytes_transferred) {
    boost::ignore_unused(bytes_transferred);
    if(ec)
        return fail(ec, "write");

    // Read a message into our buffer
    ws_.async_read(buffer_, std::bind(&session::close_ws_, shared_from_this(),
                                      std::placeholders::_1, std::placeholders::_2));
}

void close_ws_(boost::system::error_code ec, std::size_t bytes_transferred) {
    boost::ignore_unused(bytes_transferred);
    if(ec)
        return fail(ec, "read");

    // Close the WebSocket connection
    ws_.async_close(websocket::close_code::normal,
                    std::bind(&session::on_close, shared_from_this(), std::placeholders::_1));
}
In this program it is assumed that the sending is completed before receiving, and that there is only one response to expect from the server, after which the client goes ahead and closes the websocket.
But in my program:
I have to check whether the writing part has ended; if not, while the writing is still in progress, the websocket should come back and check the response for whatever has been written so far.
For this, I have put in an if/else which tells my program whether or not my writing is completed. If not, the program should go back to the write section and write the required thing; if yes, it should go on and close the connection.
void write_bytes(/* some parameters */) {
    // write on the websocket
    // then go to read_resp
}

void read_resp(boost::system::error_code ec, std::size_t bytes_transferred) {
    boost::ignore_unused(bytes_transferred);
    if(ec)
        return fail(ec, "write");

    // Read a message into our buffer
    if (write_completed) {
        ws_.async_read(buffer_, std::bind(&session::close_ws_, shared_from_this(),
                                          std::placeholders::_1, std::placeholders::_2));
    } else {
        ws_.async_read(buffer_, std::bind(&session::write_bytes, shared_from_this(),
                                          std::placeholders::_1, std::placeholders::_2));
    }
}

void close_ws_(/* some parameters */) {
    // Same as before
}
Now what I want to do is: after the write is completed, wait for 3 seconds, reading the websocket every second, and after the 3rd second go and close the websocket.
For that I have added one more if/else to read_resp to check the 3-second condition:
void read_resp(boost::system::error_code ec, std::size_t bytes_transferred) {
    boost::ignore_unused(bytes_transferred);
    if(ec)
        return fail(ec, "write");

    // Read a message into our buffer
    if (write_completed) {
        if (3_seconds_completed) {
            ws_.async_read(buffer_, std::bind(&session::close_ws_, shared_from_this(),
                                              std::placeholders::_1, std::placeholders::_2));
        } else {
            usleep(1000000); // wait for a second
            ws_.async_read(buffer_, std::bind(&session::read_resp, shared_from_this(),
                                              std::placeholders::_1, std::placeholders::_2));
        }
    } else {
        ws_.async_read(buffer_, std::bind(&session::write_bytes, shared_from_this(),
                                          std::placeholders::_1, std::placeholders::_2));
    }
}
But the websocket waits in async_read until it receives something, and in doing so it just runs into the session timeout.
How can I just check whether there is something to read and then move on to execute the callback?
This might just be an answer for my problem here; I can't guarantee the solution for future readers.
I have removed the read_resp() self-loop and simply let async_read() move on to close_ws_ when write_completed == true.
async_read waits until it receives a response and does not move on to the next step, which is what was causing the timeout.
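For readers who still want the original goal (keep reading for roughly three more seconds after writing finishes, then close) without blocking inside a handler, the usual approach is to arm a timer next to the outstanding read instead of calling usleep(). This is only a rough sketch under those assumptions: timer_ is a hypothetical boost::asio::steady_timer member alongside ws_ and buffer_, and everything is assumed to run on a single implicit strand.

void start_final_reads() {
    // Close the websocket when the deadline passes, whether or not a read is pending.
    timer_.expires_after(std::chrono::seconds(3));
    timer_.async_wait(
        [self = shared_from_this()](boost::system::error_code ec) {
            if(!ec)
                self->ws_.async_close(websocket::close_code::normal,
                    std::bind(&session::on_close, self, std::placeholders::_1));
        });
    do_timed_read();
}

void do_timed_read() {
    ws_.async_read(buffer_,
        std::bind(&session::on_timed_read, shared_from_this(),
                  std::placeholders::_1, std::placeholders::_2));
}

void on_timed_read(boost::system::error_code ec, std::size_t bytes_transferred) {
    boost::ignore_unused(bytes_transferred);
    if(ec == websocket::error::closed || ec == boost::asio::error::operation_aborted)
        return;                       // the timer closed the connection
    if(ec)
        return fail(ec, "read");
    // Handle whatever arrived in buffer_, then keep reading until the timer fires.
    buffer_.consume(buffer_.size());
    do_timed_read();
}

The timer never sleeps the io_context, so the pending read keeps delivering messages until the deadline closes the stream.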

boost asio ssl writing part of data

My client-server app communicates through Boost.Asio using the functions below.
When the connection starts, the client sends the server a bunch of requests and the server sends back responses.
After adding asio::ssl to the project I get the following problem.
Sometimes, about 1 time in 5, the server reads only the first, fixed part of the requests. When the client disconnects, the server receives all the missed requests.
On the client side everything seems fine: the callbacks are called with no errors and the written sizes are correct. But the output from a packet sniffer shows that the client is not sending this part of the requests.
Client:
The size of each "frame" is located in its header, so at least the header must be read first.
The thread Worker is used for background work and for pushing ready packets to storage.
using SSLSocket = boost::asio::ssl::stream<boost::asio::ip::tcp::socket>;

class AsyncStrategy :
    public NetworkStrategy
{
    // other data...
    void _WriteHandler(const boost::system::error_code& err, size_t bytes);
    bool Connect(const boost::asio::ip::tcp::endpoint& endpoint);
    void _BindMessage();
    void _BindMessageRemainder(size_t size);
    void _AcceptMessage(const boost::system::error_code& err_code, size_t bytes);
    void _AcceptMessageRemainder(const boost::system::error_code& err_code, size_t bytes);

    // to keep io_service running
    void _BindTimer();
    void _DumpTimer(const boost::system::error_code& error);

    void _SolveProblem(const boost::system::error_code& err_code);
    void _Disconnect();

    bool verify_certificate(bool preverified,
                            boost::asio::ssl::verify_context& ctx);

    PacketQuery query;

    boost::array<Byte, PacketMaxSize> WriteBuff;
    boost::array<Byte, PacketMaxSize> ReadBuff;

    boost::asio::ip::tcp::endpoint ep;
    boost::asio::io_service service;
    boost::asio::deadline_timer _Timer{ service };
    boost::asio::ssl::context _SSLContext;
    SSLSocket sock;
    boost::thread Worker;

    bool _ThreadWorking;
    bool _Connected = false;
};

AsyncStrategy::AsyncStrategy(MessengerAPI& api)
    : API{ api }, _SSLContext{ service, boost::asio::ssl::context::sslv23 },
      sock{ service, _SSLContext }, _Timer{ service },
      Worker{ [&]() {
          _BindTimer();
          service.run();
      } },
      _ThreadWorking{ true }
{
    _SSLContext.set_verify_mode(boost::asio::ssl::verify_peer);
    _SSLContext.set_verify_callback(
        boost::bind(&AsyncStrategy::verify_certificate, this, _1, _2));
    _SSLContext.load_verify_file("ca.pem");
}

bool AsyncStrategy::verify_certificate(bool preverified,
                                       boost::asio::ssl::verify_context& ctx)
{
    return preverified;
}

void AsyncStrategy::_BindMessage()
{
    boost::asio::async_read(sock, buffer(ReadBuff, BaseHeader::HeaderSize()),
        boost::bind(&AsyncStrategy::_AcceptMessage, this, _1, _2));
}

bool AsyncStrategy::Connect(const boost::asio::ip::tcp::endpoint& endpoint)
{
    ep = endpoint;

    boost::system::error_code err;
    sock.lowest_layer().connect(ep, err);
    if (err)
        throw __ConnectionRefused{};

    // need blocking handshake
    sock.handshake(boost::asio::ssl::stream_base::client, err);
    if (err)
        throw __ConnectionRefused{};

    _BindMessage();
    return true;
}
void AsyncStrategy::_AcceptMessage(const boost::system::error_code& err_code, size_t bytes)
{
    // check the header to see whether the packet ends here or not;
    // if there is more data in the packet, read the rest by binding the remainder handler
    // (pseudocode)
    if (need_load_more)
    {
        _BindMessageRemainder(BytesToReceive(FrameSize));
        return;
    }
    // if not, check the packet and bind this function again for the next message
    _CheckPacket(ReadBuff.c_array(), bytes);
    _BindMessage();
}

void AsyncStrategy::_AcceptMessageRemainder(const boost::system::error_code& err_code, size_t bytes)
{
    if (err_code)
    {
        _SolveProblem(err_code);
        return;
    }
    _CheckPacket(ReadBuff.c_array(), bytes + BaseHeader::HeaderSize());
    _BindMessage();
}

bool AsyncStrategy::Send(const TransferredData& Data)
{
    // already known that the data fits in the buffer
    Data.ToBuffer(WriteBuff.c_array());

    boost::asio::async_write(sock,
        buffer(WriteBuff, Data.NeededSize()),
        boost::bind(&AsyncStrategy::_WriteHandler, this, _1, _2));
    return true;
}

void AsyncStrategy::_WriteHandler(const boost::system::error_code& err, size_t bytes)
{
    if (err)
        _SolveProblem(err);
}
After removing all the SSL stuff, data transfer is normal. As I mentioned, everything worked properly before the SSL integration.
Looking for a solution, I discovered that if I send with a delay (I tried 200 ms), all the data is transferred normally.
Win10, Boost 1.60, OpenSSL 1.0.2n.
I guess there may be an error in my code, but I have tried almost everything I could think of. Looking for advice.
We can't see how Send is actually called.
Perhaps it needs to be synchronized.
We can see that it reuses the same buffer each time, so two overlapping writes will clobber that buffer.
We can also see that you're not verifying that the size of the Data argument fits into the PacketMaxSize buffer.
This means that not only will you lose data if you exceed the expected buffer size, you will also invoke Undefined Behaviour.
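One way to address both points is to give every packet its own buffer and let only one async_write be in flight at a time. The following is a sketch only, not part of the answer: the std::deque<std::vector<Byte>> member _writeQueue, the _WriteNext helper, and the post() indirection are assumptions layered on the class shown in the question.

bool AsyncStrategy::Send(const TransferredData& Data)
{
    std::vector<Byte> packet(Data.NeededSize());
    Data.ToBuffer(packet.data());
    // post to the io_service thread so the queue is only ever touched there
    service.post([this, packet = std::move(packet)]() mutable {
        bool writing = !_writeQueue.empty();
        _writeQueue.push_back(std::move(packet));
        if (!writing)
            _WriteNext();
    });
    return true;
}

void AsyncStrategy::_WriteNext()
{
    // the front packet stays alive in the deque until its write completes
    boost::asio::async_write(sock,
        boost::asio::buffer(_writeQueue.front()),
        [this](const boost::system::error_code& err, size_t) {
            if (err)
                return _SolveProblem(err);
            _writeQueue.pop_front();
            if (!_writeQueue.empty())
                _WriteNext();
        });
}

With this shape, a burst of Send() calls can never interleave two async_write operations on the SSL stream, which is the usual cause of the "missing until disconnect" symptom described above.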

boost asio tcp async read/write

I have an understanding problem with how boost asio handles this:
For the request/response on the client side I can use the following boost example.
But I don't understand what happens if the server sends some status information to the client every X ms. Do I have to open a separate socket for this, or can my client tell apart which message is the request, the response and the cycle message?
Can it happen that the client sends a request and reads the cycle message as its response, because it is also waiting in async_read for that message?
class TcpConnectionServer : public boost::enable_shared_from_this<TcpConnectionServer>
{
public:
    typedef boost::shared_ptr<TcpConnectionServer> pointer;

    static pointer create(boost::asio::io_service& io_service)
    {
        return pointer(new TcpConnectionServer(io_service));
    }

    boost::asio::ip::tcp::socket& socket()
    {
        return m_socket;
    }

    void Start()
    {
        SendCycleMessage();
        boost::asio::async_read(
            m_socket, boost::asio::buffer(m_data, m_dataSize),
            boost::bind(&TcpConnectionServer::handle_read_data, shared_from_this(), boost::asio::placeholders::error));
    }

private:
    TcpConnectionServer(boost::asio::io_service& io_service)
        : m_socket(io_service), m_cycleUpdateRate(io_service, boost::posix_time::seconds(1))
    {
    }

    void handle_read_data(const boost::system::error_code& error_code)
    {
        if (!error_code)
        {
            std::string answer = doSomeThingWithData(m_data);
            writeImpl(answer);
            boost::asio::async_read(
                m_socket, boost::asio::buffer(m_data, m_dataSize),
                boost::bind(&TcpConnectionServer::handle_read_data, shared_from_this(), boost::asio::placeholders::error));
        }
        else
        {
            std::cout << error_code.message() << "ERROR DELETE READ \n";
            // delete this;
        }
    }

    void SendCycleMessage()
    {
        std::string data = "some usefull data";
        writeImpl(data);
        m_cycleUpdateRate.expires_from_now(boost::posix_time::seconds(1));
        m_cycleUpdateRate.async_wait(boost::bind(&TcpConnectionServer::SendTracedParameter, this));
    }

    void writeImpl(const std::string& message)
    {
        m_messageOutputQueue.push_back(message);
        if (m_messageOutputQueue.size() > 1)
        {
            // outstanding async_write
            return;
        }
        this->write();
    }

    void write()
    {
        m_message = m_messageOutputQueue[0];
        boost::asio::async_write(
            m_socket,
            boost::asio::buffer(m_message),
            boost::bind(&TcpConnectionServer::writeHandler, this, boost::asio::placeholders::error,
                        boost::asio::placeholders::bytes_transferred));
    }

    void writeHandler(const boost::system::error_code& error, const size_t bytesTransferred)
    {
        m_messageOutputQueue.pop_front();
        if (error)
        {
            std::cerr << "could not write: " << boost::system::system_error(error).what() << std::endl;
            return;
        }
        if (!m_messageOutputQueue.empty())
        {
            // more messages to send
            this->write();
        }
    }

    boost::asio::ip::tcp::socket m_socket;
    boost::asio::deadline_timer m_cycleUpdateRate;
    std::string m_message;
    const size_t m_sizeOfHeader = 5;
    boost::array<char, 5> m_headerData;
    std::vector<char> m_bodyData;
    std::deque<std::string> m_messageOutputQueue;
};
With this implementation I will not need boost::asio::strand, right? Because I will not modify m_messageOutputQueue from another thread.
But when, on my client side, I have an m_messageOutputQueue which I can access from another thread, at that point I will need a strand, because then I need the synchronization? Did I understand something wrong?
The differentiation of the message is part of your application protocol.
ASIO merely provides transport.
Now, indeed, if you want to have a "keepalive" message you will have to design your protocol in such a way that the client can distinguish the messages.
The trick is to think of it at a higher level. Don't deal with async_read on the client directly. Instead, make async_read put messages on a queue (or several queues; the status messages might not even go into a queue, but instead supersede a previous, not-yet-handled status update, for example).
Then code your client against those queues.
A simple thing that is typically done is to introduce message framing and a message type id:
FRAME offset 0: message length(N)
FRAME offset 4: message data
FRAME offset 4+N: message checksum
FRAME offset 4+N+sizeof checksum: sentinel (e.g. 0x00, or a larger unique signature)
The structure there makes the protocol more extensible. It's easy to add encryption/compression without touching all the other code. There's built-in error detection, etc.
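To make that concrete, here is one possible client-side shape. It is an illustration only: the exact field sizes, the type-id values and the ClientSession/m_responseQueue names are assumptions, not something prescribed by the answer. The reader pulls one framed message at a time and dispatches it by type, so cycle/status messages can never be mistaken for the response to a request.

// Assumed layout: [uint32 length N][1-byte type + (N-1)-byte payload][uint32 checksum][1-byte sentinel]
class ClientSession : public boost::enable_shared_from_this<ClientSession>
{
public:
    explicit ClientSession(boost::asio::io_service& io) : m_socket(io) {}

    boost::asio::ip::tcp::socket& socket() { return m_socket; }

    void readFrame()
    {
        boost::asio::async_read(m_socket, boost::asio::buffer(m_header),
            boost::bind(&ClientSession::onHeader, shared_from_this(),
                        boost::asio::placeholders::error));
    }

private:
    void onHeader(const boost::system::error_code& error)
    {
        if (error) return;
        std::memcpy(&m_length, m_header.data(), sizeof m_length);  // assumes an agreed byte order
        m_frame.resize(m_length + sizeof(uint32_t) + 1);            // body + checksum + sentinel
        boost::asio::async_read(m_socket, boost::asio::buffer(m_frame),
            boost::bind(&ClientSession::onBody, shared_from_this(),
                        boost::asio::placeholders::error));
    }

    void onBody(const boost::system::error_code& error)
    {
        if (error) return;
        // checksum and sentinel verification omitted in this sketch
        const char type = m_frame[0];
        if (type == 1)        // response to a request: queue it for the application
            m_responseQueue.push_back(std::string(m_frame.begin() + 1, m_frame.begin() + m_length));
        else if (type == 2)   // cycle/status message: just supersede the previous one
            m_lastStatus.assign(m_frame.begin() + 1, m_frame.begin() + m_length);
        readFrame();          // keep reading; application code only ever looks at the queues
    }

    boost::asio::ip::tcp::socket m_socket;
    boost::array<char, 4> m_header;
    uint32_t m_length = 0;
    std::vector<char> m_frame;
    std::deque<std::string> m_responseQueue;
    std::string m_lastStatus;
};

The server in the question can keep its writeImpl()/write() queue exactly as it is; the only change is that every string it queues now starts with the length/type prefix described above.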