boost-beast websocket server that also accepts http connections - c++

I need to implement a simple asynchronous WebSocket server using Boost.Beast that can accept both WebSocket and plain HTTP connections.
I've tried something like this:
...
// ws is a boost::beast::websocket::stream<boost::asio::ip::tcp::socket>
ws.async_accept_ex(
    [](boost::beast::websocket::response_type& res)
    {
        res.set(boost::beast::http::field::server, "MyServer");
    },
    [self](boost::beast::error_code e)
    {
        if (e) self->ReadHttp();
        else   self->ReadWs();
    }
);
...
void ReadHttp()
{
    auto self(shared_from_this());
    ws.next_layer().async_read_some(
        boost::asio::buffer(data, max_length),
        [self](boost::system::error_code ec, std::size_t length)
        {
            if (!self->ws.next_layer().is_open() || ec == boost::asio::error::eof || ec == boost::asio::error::connection_reset)
            {
                // handle disconnection
            }
            else if (ec)
            {
                // handle error
            }
            else
            {
                std::string s(self->data, length);
                cout << "HTTP rx: " << s << endl;
                self->ReadHttp();
            }
        }
    );
}
void ReadWs()
{
    auto self(shared_from_this());
    ws.async_read(
        rxData,
        [self](boost::beast::error_code ec, std::size_t /*length*/)
        {
            if (ec == boost::beast::websocket::error::closed)
            {
                // handle disconnection
            }
            else if (ec)
            {
                // handle error
            }
            else
            {
                std::string s((std::istreambuf_iterator<char>(&self->rxData)), std::istreambuf_iterator<char>());
                cout << "WS rx: " << s << endl;
                self->rxData.consume(self->rxData.size());
                self->ReadWs();
            }
        }
    );
}
but when an HTTP client connects, the server misses the first message sent.
Obviously, this is not the correct approach :-)
Can anyone help me with this?
Thanks

The reason the first HTTP message is missed is that async_accept_ex has already read the initial request from the socket before invoking your handler with an error, so those bytes are gone by the time ReadHttp() runs. The correct approach is the reverse: read the HTTP request yourself first, then check whether it is a WebSocket upgrade. The advanced-server and advanced-server-flex examples demonstrate how to build a server that handles normal HTTP requests and that also handles the WebSocket upgrade request:
https://github.com/boostorg/beast/tree/e23ecc8ac903b303b9d1a9824b97c092cb3c09bd/example/advanced/server
https://github.com/boostorg/beast/tree/e23ecc8ac903b303b9d1a9824b97c092cb3c09bd/example/advanced/server-flex
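The key point in those examples is that the request is read with http::async_read first and only then routed. A minimal sketch of that pattern (hypothetical Session class and member names, shown only to illustrate the flow; not code from the examples):

// Sketch: read the HTTP request first, then branch on the upgrade check.
// Assumed members: tcp::socket socket_; boost::beast::flat_buffer buffer_;
// boost::beast::http::request<boost::beast::http::string_body> req_;
// boost::optional<boost::beast::websocket::stream<boost::asio::ip::tcp::socket>> ws_;
void Session::OnConnect()
{
    auto self(shared_from_this());
    boost::beast::http::async_read(socket_, buffer_, req_,
        [self](boost::beast::error_code ec, std::size_t)
        {
            if (ec)
                return; // handle disconnection / error

            if (boost::beast::websocket::is_upgrade(self->req_))
            {
                // Hand the already-parsed request to the websocket stream;
                // this async_accept overload performs the upgrade without
                // reading the request from the socket again.
                self->ws_.emplace(std::move(self->socket_));
                self->ws_->async_accept(self->req_,
                    [self](boost::beast::error_code ec2)
                    {
                        if (!ec2) self->ReadWs();
                    });
            }
            else
            {
                self->HandleHttpRequest(); // plain HTTP path (hypothetical)
            }
        });
}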

Related

Fastest way to close a websocket and free up file descriptors

I am using Boost.Beast for making WebSocket connections. A process of mine may have a large number of streams/connections, and while terminating the process I am calling a blocking close on each WebSocket in the destructor, using:
if (m_ws.next_layer().next_layer().is_open())
{
    boost::system::error_code ec;
    m_ws.close(boost::beast::websocket::close_code::normal, ec);
}
Having a lot of WebSockets makes terminating the process block for a long time. Is there a way to force-terminate (delete) a connection and free up the underlying resources faster? Thanks in advance.
As I told you in the comments, closing the connection should be a fast operation on a socket; it shouldn't take long or block the thread. I don't know how much work your program does to close each socket, but keep in mind that if your main thread ends, meaning the process has ended, the OS releases all the resources it was using without closing each socket individually. I use this technique, and the WebSocket clients detect the end of the connections; I only close a socket explicitly when I have a problem with the protocol or the remote endpoint has disconnected abruptly.
It would be useful if you shared your code so we can see what other work it is doing.
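If you just need to tear connections down as quickly as possible at shutdown, one option is to skip the WebSocket closing handshake entirely and close the lowest TCP layer directly; a minimal sketch, assuming the TCP socket is reachable via next_layer() as in your snippet:

// Sketch: forceful teardown without the WebSocket closing handshake.
// The peer will observe an abnormal closure instead of a clean close.
void force_close(boost::asio::ip::tcp::socket& sock) // e.g. m_ws.next_layer().next_layer()
{
    boost::system::error_code ec;
    sock.cancel(ec);   // abort any pending async operations
    sock.shutdown(boost::asio::ip::tcp::socket::shutdown_both, ec);
    sock.close(ec);    // release the file descriptor immediately
}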
By the way, let me share my code, where I have no problem closing the WebSocket:
void wscnx::close(beast::websocket::close_code reason, const boost::system::error_code &ec)
{
    // std::cout << "\nwscnx::close\n";
    if (is_ssl && m_wss->is_open())
        m_wss->close(beast::websocket::close_code::normal);
    else if (!is_ssl && m_ws->is_open())
        m_ws->close(beast::websocket::close_code::normal);

    if (!m_server.expired())
        m_server.lock()->cnx_closed(m_cnx_info, ec);
}
In my case, I'm using asynchronous methods to read and synchronous methods to write; I avoid asynchronous writes so that two simultaneous write operations can never be in flight at the same time.
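For completeness, an alternative that keeps writes asynchronous is to queue outgoing messages so that only one async_write is ever in flight; a minimal sketch of that pattern (hypothetical m_tx_queue member, not part of the code below, and assuming all handlers run on one strand/thread):

// Sketch: serialize writes with a queue instead of a mutex.
// Hypothetical member, shown for the non-SSL stream only:
//   std::deque<std::string> m_tx_queue; // pending outgoing messages
void wscnx::send(std::string msg)
{
    m_tx_queue.push_back(std::move(msg));
    if (m_tx_queue.size() == 1)          // no write in flight: start one
        do_write();
}

void wscnx::do_write()
{
    m_ws->async_write(
        net::buffer(m_tx_queue.front()),
        [self{shared_from_this()}](const boost::system::error_code &ec, std::size_t)
        {
            if (ec)
                return;                  // close as in the read path
            self->m_tx_queue.pop_front();
            if (!self->m_tx_queue.empty())
                self->do_write();        // start the next queued message
        });
}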
Also, it's important to say that I accept new connections asynchronously.
Here is the code to accept the socket connection, where you can set the timeouts for reads and writes instead of using timers:
void wscnx::accept_tcp()
{
    m_ws->set_option(
        websocket::stream_base::timeout::suggested(
            beast::role_type::server));

    m_ws->set_option(websocket::stream_base::decorator(
        [](websocket::response_type &res)
        {
            res.set(http::field::server,
                    std::string(BOOST_BEAST_VERSION_STRING) +
                        " websocket-server-async");
        }));

    // std::cout << "wscnx::[" << m_id << "]:: TCP async_handshake\n";
    m_ws->async_accept(
        [self{shared_from_this()}](const boost::system::error_code &ec)
        {
            if (ec)
            {
                self->close(beast::websocket::close_code::protocol_error, ec);
                return;
            }
            // self->read_tcp();
            self->read();
        });
}
The code to read:
void wscnx::read()
{
    if (!is_ssl && !m_ws->is_open())
        return;
    else if (is_ssl && !m_wss->is_open())
        return;

    auto f_read = [self{shared_from_this()}](const boost::system::error_code &ec, std::size_t bytes_transferred)
    {
        boost::ignore_unused(bytes_transferred);

        // This indicates that the session was closed
        if (ec == websocket::error::closed)
        {
            self->close(beast::websocket::close_code::normal, ec);
            return;
        }
        if (ec)
        {
            self->close(beast::websocket::close_code::abnormal, ec);
            return;
        }

        std::string data = beast::buffers_to_string(self->m_rx_buffer.data());
        self->m_rx_buffer.consume(bytes_transferred);

        if (!self->m_server.expired())
        {
            std::string_view vdata(data.c_str());
            self->m_server.lock()->on_data_rx(self->m_cnx_info.id, vdata, self->cnx_info());
        }
        self->read();
    }; // lambda

    if (!is_ssl)
        m_ws->async_read(m_rx_buffer, f_read);
    else
        m_wss->async_read(m_rx_buffer, f_read);
}
The code to write:
void wscnx::write(std::string_view data, bool close_on_write)
{
    std::unique_lock<std::mutex> u_lock(m_mtx_write);

    if ((!is_ssl && !m_ws->is_open()) || (is_ssl && !m_wss->is_open()))
        return;

    boost::system::error_code ec;
    size_t bytes_transferred{0};

    if (is_ssl)
        bytes_transferred = m_wss->write(net::buffer(data), ec);
    else
        bytes_transferred = m_ws->write(net::buffer(data), ec);
    boost::ignore_unused(bytes_transferred);

    // This indicates that the session was closed
    if (ec == websocket::error::closed)
    {
        // std::cout << "[wscnx::[" << m_id << "]::on write] Error: " << ec.message() << "\n";
        close(beast::websocket::close_code::normal, ec);
        return;
    }
    if (ec)
    {
        // std::cout << "[wscnx::[" << m_id << "]::on write] Error writing: " << ec.message() << "\n";
        close(beast::websocket::close_code::abnormal, ec);
        return;
    }
    if (close_on_write)
        close(beast::websocket::close_code::normal, ec);
}
If you want to see the whole code, this is the link. The project is still in the development phase, but it works.

HTTP proxy example in C++

So I've been trying to write a proxy in C++ using Boost.Asio. My initial project includes a client that writes a string message into a socket, a server that receives this message and writes a string message into a socket, and a proxy that works with the two mentioned sockets.
The proxy code looks like this (the future intention is to handle multiple connections and to use the transferred data somehow, and the callbacks would perform some actual work other than logging):
#include "commondata.h"
#include <boost/bind.hpp>
#include <boost/enable_shared_from_this.hpp>
using namespace boost::asio;
using ip::tcp;
using std::cout;
using std::endl;
class con_handler : public boost::enable_shared_from_this<con_handler> {
private:
tcp::socket client_socket;
tcp::socket server_socket;
enum { max_length = 1024 };
char client_data[max_length];
char server_data[max_length];
public:
typedef boost::shared_ptr<con_handler> pointer;
con_handler(boost::asio::io_service& io_service):
server_socket(io_service),
client_socket(io_service) {
memset(client_data, 0, max_length);
memset(server_data, 0, max_length);
server_socket.connect( tcp::endpoint( boost::asio::ip::address::from_string(SERVERIP), SERVERPORT ));
}
// creating the pointer
static pointer create(boost::asio::io_service& io_service) {
return pointer(new con_handler(io_service));
}
//socket creation
tcp::socket& socket() {
return client_socket;
}
void start() {
//read the data into the input buffer
client_socket.async_read_some(
boost::asio::buffer(client_data, max_length),
boost::bind(&con_handler::handle_read,
shared_from_this(),
client_data,
boost::asio::placeholders::error,
boost::asio::placeholders::bytes_transferred));
server_socket.async_write_some(
boost::asio::buffer(client_data, max_length),
boost::bind(&con_handler::handle_write,
shared_from_this(),
client_data,
boost::asio::placeholders::error,
boost::asio::placeholders::bytes_transferred));
server_socket.async_read_some(
boost::asio::buffer(server_data, max_length),
boost::bind(&con_handler::handle_read,
shared_from_this(),
server_data,
boost::asio::placeholders::error,
boost::asio::placeholders::bytes_transferred));
client_socket.async_write_some(
boost::asio::buffer(server_data, max_length),
boost::bind(&con_handler::handle_write,
shared_from_this(),
server_data,
boost::asio::placeholders::error,
boost::asio::placeholders::bytes_transferred));
}
void handle_read(const char* data, const boost::system::error_code& err, size_t bytes_transferred) {
if (!err) {
cout << "proxy handle_read" << endl;
cout << data << endl;
} else {
std::cerr << "error: " << err.message() << std::endl;
client_socket.close();
}
}
void handle_write(const char* data, const boost::system::error_code& err, size_t bytes_transferred) {
if (!err) {
cout << "proxy handle_write" << endl;
cout << data << endl;
} else {
std::cerr << "error: " << err.message() << endl;
client_socket.close();
}
}
};
class Server {
private:
boost::asio::io_service io_service;
tcp::acceptor acceptor_;
void start_accept() {
// socket
con_handler::pointer connection = con_handler::create(io_service);
// asynchronous accept operation and wait for a new connection.
acceptor_.async_accept(connection->socket(),
boost::bind(&Server::handle_accept, this, connection,
boost::asio::placeholders::error));
}
public:
//constructor for accepting connection from client
Server()
: acceptor_(io_service, tcp::endpoint(tcp::v4(), PROXYPORT)) {
start_accept();
}
void handle_accept(const con_handler::pointer& connection, const boost::system::error_code& err) {
if (!err) {
connection->start();
}
start_accept();
}
boost::asio::io_service& get_io_service() {
return io_service;
}
};
int main(int argc, char *argv[]) {
try {
Server server;
server.get_io_service().run();
} catch(std::exception& e) {
std::cerr << e.what() << endl;
}
return 0;
}
If the messages sent are plain strings (which I used initially to test whether my code works at all), then all of the callbacks are called the way I wanted them to be called, and the thing seems to work.
Here's the stdout of the proxy for that case:
user@laptop:$ ./proxy
proxy handle_read
message from the client
proxy handle_write
message from the client
proxy handle_read
message from server
proxy handle_write
message from server
So the client sends the "message from the client" string, which is received and saved by the proxy; the same string is sent to the server; then the server sends back the "message from server" string, which is also received and saved by the proxy and then sent to the client.
The problem appears when I try to make an actual web server (Apache) and an application like JMeter talk to each other through the proxy. This is the stdout for this case:
user@laptop:$ ./proxy
proxy handle_write
proxy handle_write
proxy handle_read
GET / HTTP/1.1
Connection: keep-alive
Host: 127.0.0.1:1337
User-Agent: Apache-HttpClient/4.5.5 (Java/11.0.8)
error: End of file
The JMeter test then fails with a timeout (that is when the proxy gets the EOF error), and no data seems to be sent to the Apache web server. My questions for now are: why are the callbacks called in a different order compared to the case where plain string messages are sent, and why is the data not being transferred to the server socket? Thanks in advance for any help!
Abbreviating from start():
client_socket.async_read_some (buffer(client_data), ...);
server_socket.async_write_some (buffer(client_data), ...);
server_socket.async_read_some (buffer(server_data), ...);
client_socket.async_write_some (buffer(server_data), ...);
That's... not how async operations work. They run asynchronously, meaning that they will all immediately return.
You're simultaneously reading and writing from some buffers, without waiting for valid data. Also, you're writing the full buffer always, regardless of how much was received.
All of this spells Undefined Behaviour.
Start simple
Conceptually you just want to read:
void start() {
    // read the data into the input buffer
    client_socket.async_read_some(
        boost::asio::buffer(client_data, max_length),
        boost::bind(&con_handler::handle_read,
                    shared_from_this(),
                    client_data,
                    boost::asio::placeholders::error,
                    boost::asio::placeholders::bytes_transferred));
}
Now, once you received data, you might want to relay that:
void handle_read(const char* data, const boost::system::error_code& err, size_t bytes_transferred) {
    if (!err) {
        std::cout << "proxy handle_read" << std::endl;
        server_socket.async_write_some(
            boost::asio::buffer(client_data, bytes_transferred),
            boost::bind(&con_handler::handle_write,
                        shared_from_this(),
                        client_data,
                        boost::asio::placeholders::error,
                        boost::asio::placeholders::bytes_transferred));
    } else {
        std::cerr << "error: " << err.message() << std::endl;
        client_socket.close();
    }
}
Note that it seems a bit arbitrary to only close one side of the connection on errors. You probably at least want to cancel() any async operations on both, optionally shutdown() and then just let the shared_ptr destruct your con_handler.
Full Duplex
Now, for full-duplex operation you can indeed start the reverse relay at the same time. It gets a little unwieldy to maintain the call chains in separate methods (after all, you don't just switch the buffers, but also the socket pairs).
It might be instructive to realize that you're doing the same thing twice:
client -> [...buffer...] -> server
server -> [...buffer...] -> client
You can encapsulate each side in a class, and avoid duplicating all the code:
struct relay {
    tcp::socket &from, &to;
    std::array<char, max_length> buf{};

    void run_relay(pointer self) {
        from.async_read_some(asio::buffer(buf),
            [this, self](error_code ec, size_t n) {
                if (ec) return handle(from, ec);
                /*
                 *std::cout
                 *    << "From " << from.remote_endpoint()
                 *    << ": " << std::quoted(std::string_view(buf.data(), n))
                 *    << std::endl;
                 */
                async_write(to, asio::buffer(buf, n), [this, self](error_code ec, size_t) {
                    if (ec) return handle(to, ec);
                    run_relay(self);
                });
            });
    }

    void handle(tcp::socket& which, error_code ec = {}) {
        if (ec == asio::error::eof) {
            // soft "error" - allow write to complete
            std::cout << "EOF on " << which.remote_endpoint() << std::endl;
            which.shutdown(tcp::socket::shutdown_receive, ec);
        }
        if (ec) {
            from.cancel();
            to.cancel();
            std::string reason = ec.message();
            auto fep = from.remote_endpoint(ec),
                 tep = to.remote_endpoint(ec);
            std::cout << "Stopped relay " << fep << " -> " << tep << " due to " << reason << std::endl;
        }
    }
} c_to_s{client_socket, server_socket, {0}},
  s_to_c{server_socket, client_socket, {0}};
Note:
- we sidestepped the bind mess by using lambdas
- we cancel both ends of the relay on error
- we use a std::array buffer - safer and easier to use
- we only write as many bytes as were received, regardless of the size of the buffer
- we don't schedule another read until the write has completed, to avoid clobbering the data in buf
Let's implement con_handler start again
Using the relay from just above:
void start() {
    c_to_s.run_relay(shared_from_this());
    s_to_c.run_relay(shared_from_this());
}
That's all. We pass ourselves so the con_handler stays alive until all operations complete.
DEMO Live On Coliru
#define PROXYPORT 8899
#define SERVERIP "173.203.57.63" // coliru IP at the time
#define SERVERPORT 80

#include <boost/enable_shared_from_this.hpp>
#include <boost/asio.hpp>
#include <iostream>
#include <iomanip>

namespace asio = boost::asio;
using boost::asio::ip::tcp;
using boost::system::error_code;
using namespace std::chrono_literals;

class con_handler : public boost::enable_shared_from_this<con_handler> {
public:
    con_handler(asio::io_service& io_service):
        server_socket(io_service),
        client_socket(io_service)
    {
        server_socket.connect({asio::ip::address::from_string(SERVERIP), SERVERPORT});
    }

    // creating the pointer
    using pointer = boost::shared_ptr<con_handler>;
    static pointer create(asio::io_service& io_service) {
        return pointer(new con_handler(io_service));
    }

    // socket creation
    tcp::socket& socket() {
        return client_socket;
    }

    void start() {
        c_to_s.run_relay(shared_from_this());
        s_to_c.run_relay(shared_from_this());
    }

private:
    tcp::socket server_socket;
    tcp::socket client_socket;
    enum { max_length = 1024 };

    struct relay {
        tcp::socket &from, &to;
        std::array<char, max_length> buf{};

        void run_relay(pointer self) {
            from.async_read_some(asio::buffer(buf),
                [this, self](error_code ec, size_t n) {
                    if (ec) return handle(from, ec);
                    /*
                     *std::cout
                     *    << "From " << from.remote_endpoint()
                     *    << ": " << std::quoted(std::string_view(buf.data(), n))
                     *    << std::endl;
                     */
                    async_write(to, asio::buffer(buf, n), [this, self](error_code ec, size_t) {
                        if (ec) return handle(to, ec);
                        run_relay(self);
                    });
                });
        }

        void handle(tcp::socket& which, error_code ec = {}) {
            if (ec == asio::error::eof) {
                // soft "error" - allow write to complete
                std::cout << "EOF on " << which.remote_endpoint() << std::endl;
                which.shutdown(tcp::socket::shutdown_receive, ec);
            }
            if (ec) {
                from.cancel();
                to.cancel();
                std::string reason = ec.message();
                auto fep = from.remote_endpoint(ec),
                     tep = to.remote_endpoint(ec);
                std::cout << "Stopped relay " << fep << " -> " << tep << " due to " << reason << std::endl;
            }
        }
    } c_to_s{client_socket, server_socket, {0}},
      s_to_c{server_socket, client_socket, {0}};
};

class Server {
    asio::io_service io_service;
    tcp::acceptor acceptor_;

    void start_accept() {
        // socket
        auto connection = con_handler::create(io_service);
        // asynchronous accept operation and wait for a new connection.
        acceptor_.async_accept(
            connection->socket(),
            [connection, this](error_code ec) {
                if (!ec) connection->start();
                start_accept();
            });
    }

public:
    Server() : acceptor_(io_service, {{}, PROXYPORT}) {
        start_accept();
    }

    void run() {
        io_service.run_for(5s); // .run();
    }
};

int main() {
    Server().run();
}
When run with
printf "GET / HTTP/1.1\r\nHost: coliru.stacked-crooked.com\r\n\r\n" | nc 127.0.0.1 8899
The server prints:
EOF on 127.0.0.1:36452
And netcat receives the reply:
HTTP/1.1 200 OK
Content-Type: text/html;charset=utf-8
Content-Length: 8616
Server: WEBrick/1.4.2 (Ruby/2.5.1/2018-03-29) OpenSSL/1.0.2g
Date: Sat, 01 Aug 2020 00:25:10 GMT
Connection: Keep-Alive
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Frameset//EN">
<html>
....
</html>
Summary
Thinking clearly about what you are trying to achieve avoids accidental complexity. It allowed us to come up with a good building block (the relay), and the complexity evaporated.

Something went wrong when sending a created packet with boost::asio to a Minecraft client

Goal and introduction to the protocol for Notchian communication
I have a server application that uses boost::asio's asynchronous read/write functions to communicate with connecting Notchian clients. So far so good: I read the protocol documentation website and have only written a status handshake packet so far. In Minecraft you can get these packets from any Notchian server. These packets use specific data types. My server just sends a string as a JSON response to the client.
Code Section | How I wrote the ByteBuffer
typedef unsigned char byte; /* Sending unsigned bytes */

class LBuffer {
    std::vector<byte> buf;
public:
    std::vector<byte>& getBuf() {
        return buf;
    }
    void write(byte data) {
        buf.push_back(data);
    }
    void writeInt(int32_t data) {
        // big-endian: extract the four bytes from most to least significant
        buf.push_back(data >> 24);
        buf.push_back((data << 8) >> 24);
        buf.push_back((data << 16) >> 24);
        buf.push_back((data << 24) >> 24);
    }
    void writeString(std::string data) {
        std::copy(data.begin(), data.end(), std::back_inserter(buf));
    }
};
Code Section | How I wrote the Packet to the Buffer
LBuffer createHandshakeStatusResponsePacket() {
    LBuffer buffer;
    buffer.write(0x00);
    buffer.writeString("{{\"version\":{\"name\":\"1.8.7\",\"protocol\":47},\"players\":{\"max\":100,\"online\":5,\"sample\":[{\"name\":\"thinkofdeath\",\"id\":\"4566e69f-c907-48ee-8d71-d7ba5aa00d20\"}]},\"description\":{\"text\":\"Helloworld\"}}}");
    return buffer;
}
Code Section | Writing Server with the ResponseBuf
int main() {
    boost::asio::io_service svc;
    tcp::acceptor a(svc);
    a.open(tcp::v4());
    a.set_option(tcp::acceptor::reuse_address(true));
    a.bind({ {}, 6767 });
    a.listen(5);

    using session = std::shared_ptr<tcp::socket>;
    std::function<void()> doAccept;
    std::function<void(session)> doSession;

    doSession = [&](session s) {
        auto buf = std::make_shared<std::vector<byte>>(1024);
        s->async_read_some(boost::asio::buffer(*buf), [&, s, buf](error_code ec, size_t bytes) {
            if (ec)
                std::cerr << "read failed: " << ec.message() << "\n";
            else {
                /*
                   As you see, I don't read the request from the client...
                   But that's not relevant when I just want to send the data
                   so it receives the MOTD and so on...
                */
                if (ec)
                    std::cerr << "endpoint failed: " << ec.message() << std::endl;
                else {
                    std::vector<byte> responseBuf = createHandshakeStatusResponsePacket().getBuf();
                    async_write(*s, boost::asio::buffer(responseBuf), [&, s, buf](error_code ec, size_t) {
                        if (ec) std::cerr << "write failed: " << ec.message() << "\n";
                    });
                }
                doSession(s);
            }
        });
    };

    doAccept = [&] {
        auto s = std::make_shared<session::element_type>(svc);
        a.async_accept(*s, [&, s](error_code ec) {
            if (ec)
                std::cerr << "accept failed: " << ec.message() << "\n";
            else {
                doSession(s);
                doAccept();
            }
        });
    };

    doAccept();
    svc.run();
}
Results and Problems
When my Notchian client reads the packet that I sent as a response from the server, it immediately gives me this result:
Can't connect to Server
The log from my Notchian client said:
[04:42:54] [Client thread/ERROR]: Can't ping 127.0.0.1:6767: Internal Exception: io.netty.handler.codec.DecoderException:
java.io.IOException: Bad packet id 123
But how can the packet id be 123? I'm sending packet id 0 first.
Declaration
Notchian: typically software written by Notch (that's where I picked the term up)
ByteBuffer: sends bytes in a specific order.
I'm hoping for tips and solutions,
thanks
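For what it's worth, 123 is 0x7B, the ASCII code for '{', the first byte of the JSON payload. The documented protocol frames every packet as a VarInt length, then a VarInt packet id, then the payload (strings also carry their own VarInt length prefix), so sending the bytes with no length prefix likely causes the client's decoder to interpret a JSON byte as a packet id. A minimal sketch of VarInt encoding that would fit the LBuffer above (a hypothetical helper, not code from the question):

// Sketch: protocol VarInt = base-128, 7 bits per byte, low bits first,
// high bit of each byte set while more bytes follow.
void writeVarInt(LBuffer& buffer, int32_t value) {
    uint32_t v = static_cast<uint32_t>(value);
    do {
        byte b = v & 0x7F;        // low 7 bits
        v >>= 7;
        if (v != 0)
            b |= 0x80;            // continuation bit
        buffer.write(b);
    } while (v != 0);
}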

boost async ioservice, slow execution of a thread on linux

I am using Boost.Asio in my application. I have created two io_services: one for handling async operations on UDP sockets, the other for handling async TCP operations. On receiving data over a UDP socket, based on the data I open a TCP connection to some other server. I am using async function calls everywhere.
Below is high-level pseudocode.
int g_requestID;

// boost async_recv handler
class UDPSocket
{
    UDPSocket::OnDataReceived(const boost::system::error_code& error, std::size_t bytes_transferred)
    {
        TCPSession *pSesson = new TCPSession();
        g_requestID = pSesson->sendDataOverTCP(data);
    }
};

// TCP Response Callback
void tcpResponse(int reqId)
{
    if (g_requestID == reqId)
    {
        // received response for the request
    }
}

class TCPSession
{
    boost::asio::streambuf request_;
    static int requestId;
    boost::tcp::socket m_socket;
    int currentsessionId = requestId++;

    int sendDataOverTCP(char* address, char* data)
    {
        m_socket.async_resolve();
        return currentsessionId++;
    }

    void handle_resolve(const boost::system::error_code& err, tcp::resolver::iterator endpoint_iterator)
    {
        if (!err)
        {
            boost::asio::async_connect(m_socket, endpoint_iterator, boost::bind(&client::handle_connect, this, boost::asio::placeholders::error));
        }
        else
        {
            std::cout << "Error: " << err.message() << "\n";
        }
    }

    void handle_connect(const boost::system::error_code& err)
    {
        if (!err)
        {
            // The connection was successful. Send the request.
            boost::asio::async_write(m_socket, request_, boost::bind(&client::handle_write, this, boost::asio::placeholders::error));
        }
        else
        {
            std::cout << "Error: " << err.message() << "\n";
        }
    }

    void handle_write(const boost::system::error_code& err)
    {
        if (!err)
        {
            boost::asio::async_read_until(socket_, response_, "\r\n", boost::bind(&client::handle_receive, this, boost::asio::placeholders::error));
        }
        else
        {
            std::cout << "Error: " << err.message() << "\n";
        }
    }

    void handle_receive(const boost::system::error_code& err)
    {
        if (!err)
        {
            tcpResponse(currentsessionId);
        }
        else
        {
            std::cout << "Error: " << err << "\n";
        }
    }
};
Now, coming to my problem: in the tcpResponse function, g_requestID contains a garbage value. When I debugged by adding log statements, I found that sendDataOverTCP returns only after the tcpResponse callback has been received. Inside sendDataOverTCP I am calling async_resolve only; it should return immediately, but it does not.
After debugging, I found the following behaviour: async_resolve works as expected and returns immediately, but sendDataOverTCP returns only after the tcpResponse callback.
Can anybody provide a solution? Is it because of some thread scheduling?
The same code works fine on Windows; I am facing this issue only on Linux.
I am using boost 1.53.0 on Ubuntu 13.04 in VirtualBox.

Boost.Asio HTTP Server Session Closing

I have an HTTP server based on the Boost.Asio example HTTP Server 2 (using an io_service-per-CPU design).
Each request gets parsed and checked: is it a POST request, is the path correct, is the Content-Length header present? If one of these conditions does not hold, I generate a Bad-Request response.
If everything is okay, I send an OK response.
That's how it gets started:
int main(int argc, char* argv[])
{
    try
    {
        http::server s("127.0.0.1", "88", 1024);
        s.run();
    }
    catch (std::exception& e)
    {
        std::cerr << "Exception: " << e.what() << "\n";
    }
    return 0;
}
Now I made a quick C# app which starts 16 threads, and each thread continuously sends POST requests to this server in a while(true) loop.
For a while it runs smoothly and fast. Then at some point the server becomes unresponsive (Cannot connect to remote server).
A cause for this could be that the sessions on the server don't get closed properly.
Here's the code for Bad-Request/OK Responses:
void session::handle_request()
{
    // Generate OK Response
    reply_ = reply::stock_reply(reply::ok);
    // Send OK Response
    boost::asio::async_write(client_socket_, reply_.to_buffers(),
        boost::bind(&session::handle_client_write, shared_from_this(),
                    boost::asio::placeholders::error,
                    boost::asio::placeholders::bytes_transferred));
}

void session::send_bad_request()
{
    // Generate Bad Request Response
    reply_ = reply::stock_reply(reply::bad_request);
    // Send Bad Request Response
    boost::asio::async_write(client_socket_, reply_.to_buffers(),
        boost::bind(&session::handle_client_write, shared_from_this(),
                    boost::asio::placeholders::error,
                    boost::asio::placeholders::bytes_transferred));
}
And handle_client_write, which both of them trigger:
void session::handle_client_write(const boost::system::error_code& err, size_t len)
{
    if (!err) {
        // Dummy error code var
        boost::system::error_code ignored_ec;
        // Shutdown socket
        client_socket_.shutdown(boost::asio::ip::tcp::socket::shutdown_both, ignored_ec);
        client_socket_.close();
        //std::cout << "Closed" << ignored_ec.message() << "\n";
    }
    else {
        std::cout << "Error: " << err.message() << "\n";
    }
}
Like I said, at some point it becomes unresponsive, and after a little while it responds again.