boost asio tcp async read/write - c++

I have a problem understanding how Boost.Asio handles this:
For the request/response exchange on the client side I can work from the Boost asynchronous TCP example.
But I don't understand what happens if the server sends some status information to the client every X ms. Do I have to open a separate socket for this, or can my client distinguish between the request, the response, and the cycleMessage?
Can it happen that the client sends a request and then reads the cycleMessage as if it were the response, since it is also waiting in async_read for that message?
class TcpConnectionServer : public boost::enable_shared_from_this<TcpConnectionServer>
{
public:
typedef boost::shared_ptr<TcpConnectionServer> pointer;
static pointer create(boost::asio::io_service& io_service)
{
return pointer(new TcpConnectionServer(io_service));
}
boost::asio::ip::tcp::socket& socket()
{
return m_socket;
}
void Start()
{
SendCycleMessage();
boost::asio::async_read(
m_socket, boost::asio::buffer(m_data, m_dataSize),
boost::bind(&TcpConnectionServer::handle_read_data, shared_from_this(), boost::asio::placeholders::error));
}
private:
TcpConnectionServer(boost::asio::io_service& io_service)
: m_socket(io_service),m_cycleUpdateRate(io_service,boost::posix_time::seconds(1))
{
}
void handle_read_data(const boost::system::error_code& error_code)
{
if (!error_code)
{
std::string answer=doSomeThingWithData(m_data);
writeImpl(answer);
boost::asio::async_read(
m_socket, boost::asio::buffer(m_data, m_dataSize),
boost::bind(&TcpConnectionServer::handle_read_data, shared_from_this(), boost::asio::placeholders::error));
}
else
{
std::cout << error_code.message() << "ERROR DELETE READ \n";
// delete this;
}
}
void SendCycleMessage()
{
std::string data = "some usefull data";
writeImpl(data);
m_cycleUpdateRate.expires_from_now(boost::posix_time::seconds(1));
m_cycleUpdateRate.async_wait(boost::bind(&TcpConnectionServer::SendCycleMessage, this));
}
void writeImpl(const std::string& message)
{
m_messageOutputQueue.push_back(message);
if (m_messageOutputQueue.size() > 1)
{
// outstanding async_write
return;
}
this->write();
}
void write()
{
m_message = m_messageOutputQueue[0];
boost::asio::async_write(
m_socket,
boost::asio::buffer(m_message),
boost::bind(&TcpConnectionServer::writeHandler, this, boost::asio::placeholders::error,
boost::asio::placeholders::bytes_transferred));
}
void writeHandler(const boost::system::error_code& error, const size_t bytesTransferred)
{
m_messageOutputQueue.pop_front();
if (error)
{
std::cerr << "could not write: " << boost::system::system_error(error).what() << std::endl;
return;
}
if (!m_messageOutputQueue.empty())
{
// more messages to send
this->write();
}
}
boost::asio::ip::tcp::socket m_socket;
boost::asio::deadline_timer m_cycleUpdateRate;
std::string m_message;
const size_t m_sizeOfHeader = 5;
boost::array<char, 5> m_headerData;
std::vector<char> m_bodyData;
std::deque<std::string> m_messageOutputQueue;
};
With this implementation I will not need boost::asio::strand, right? Because I do not modify m_messageOutputQueue from another thread.
But if on my client side I have an m_messageOutputQueue that is accessed from another thread, then at that point I will need a strand, because I need the synchronization? Or did I misunderstand something?

The differentiation of the message is part of your application protocol.
ASIO merely provides transport.
Now, indeed, if you want to have a "keepalive" message you will have to design your protocol in such a way that the client can distinguish the messages.
The trick is to think of it at a higher level. Don't deal with async_read on the client directly. Instead, make async_read put messages on a queue (or several queues; the status messages might not even go into a queue, but instead supersede a previous, not-yet-handled status update, for example).
Then code your client against those queues.
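For illustration, here is a minimal, self-contained sketch of that idea (no networking; the names Message, MsgType and InboundQueues are mine, not part of the answer): the read loop would call dispatch() for every complete message it decodes, and the rest of the client only ever touches the queues.
// Responses are queued in order; a status update supersedes any unhandled one.
#include <deque>
#include <iostream>
#include <optional>
#include <string>

enum class MsgType { Response, StatusUpdate };

struct Message {
    MsgType type;
    std::string payload;
};

class InboundQueues {
public:
    void dispatch(Message msg) {
        if (msg.type == MsgType::StatusUpdate)
            latest_status_ = std::move(msg);       // supersede any unhandled status
        else
            responses_.push_back(std::move(msg));  // responses are consumed in order
    }
    std::optional<Message> pop_response() {
        if (responses_.empty()) return std::nullopt;
        Message m = std::move(responses_.front());
        responses_.pop_front();
        return m;
    }
    std::optional<Message> take_status() {
        auto s = std::move(latest_status_);
        latest_status_.reset();
        return s;
    }
private:
    std::deque<Message> responses_;
    std::optional<Message> latest_status_;
};

int main() {
    InboundQueues q;
    q.dispatch({MsgType::StatusUpdate, "cpu=10%"});
    q.dispatch({MsgType::Response, "answer to request 1"});
    q.dispatch({MsgType::StatusUpdate, "cpu=42%"});   // replaces the first status
    std::cout << q.pop_response()->payload << "\n";   // "answer to request 1"
    std::cout << q.take_status()->payload << "\n";    // "cpu=42%"
}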
A simple thing that is typically done is to introduce message framing and a message type id:
FRAME offset 0: message length(N)
FRAME offset 4: message data
FRAME offset 4+N: message checksum
FRAME offset 4+N+sizeof checksum: sentinel (e.g. 0x00, or a larger unique signature)
The structure there makes the protocol more extensible. It's easy to add encryption/compression without touching all the other code. There's built-in error detection, etc.
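As a rough illustration of that frame layout, here is a small encode/decode sketch. The 4-byte little-endian length, single-byte additive checksum and single 0x00 sentinel are my assumptions for concreteness; a real protocol would likely pick a stronger checksum and also carry the message type id mentioned above.
#include <cassert>
#include <cstdint>
#include <optional>
#include <string>
#include <vector>

static uint8_t checksum(const std::string& data) {
    uint8_t sum = 0;
    for (unsigned char c : data) sum = static_cast<uint8_t>(sum + c);
    return sum;
}

// length | payload | checksum | sentinel
std::vector<uint8_t> encode_frame(const std::string& payload) {
    std::vector<uint8_t> frame;
    uint32_t n = static_cast<uint32_t>(payload.size());
    for (int i = 0; i < 4; ++i)                       // offset 0: length, little endian
        frame.push_back(static_cast<uint8_t>(n >> (8 * i)));
    frame.insert(frame.end(), payload.begin(), payload.end()); // offset 4: data
    frame.push_back(checksum(payload));               // offset 4+N: checksum
    frame.push_back(0x00);                            // sentinel
    return frame;
}

// Returns the payload if buf starts with one complete, valid frame.
std::optional<std::string> decode_frame(const std::vector<uint8_t>& buf) {
    if (buf.size() < 6) return std::nullopt;          // length + checksum + sentinel
    uint32_t n = 0;
    for (int i = 0; i < 4; ++i) n |= static_cast<uint32_t>(buf[i]) << (8 * i);
    if (buf.size() < static_cast<std::size_t>(n) + 6) return std::nullopt; // incomplete
    std::string payload(buf.begin() + 4, buf.begin() + 4 + n);
    if (buf[4 + n] != checksum(payload) || buf[4 + n + 1] != 0x00)
        return std::nullopt;                          // corrupt frame
    return payload;
}

int main() {
    auto f = encode_frame("hello");
    assert(decode_frame(f) && *decode_frame(f) == "hello");
}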

Related

How to see raw tcp data on async_accept failure?

I am using the boost::beast library for both a WebSocket and a TCP server.
Because of a requirement, I have to use the same port for both. Thus I implemented the server as follows.
void on_run() {
// Set suggested timeout settings for the websocket
m_ws.set_option(...);
m_ws.async_accept(
beast::bind_front_handler(
&WsSessionNoSSL::on_accept,
shared_from_this()));
}
virtual void on_accept(beast::error_code ec) {
if(ec) {
std::string msg = ec.message();
CONSOLE_INFO("err: {}", msg);
if(msg != "bad method") {
return fail(ec, "accept");
} else {
doReadTcp();
return;
}
}
doReadWs();
}
void doReadTcp() {
m_ws.next_layer().async_read_some(boost::asio::buffer(m_recvData, 15),
[this, self = shared_from_this()](const boost::system::error_code &error,
size_t bytes_transferred) {
if(error) {
return fail(error, "tcp read fail");
}
CONSOLE_INFO("recvs: {}", bytes_transferred);
doReadTcp();
});
}
void doReadWs() {
m_ws.async_read(...);
}
After the accept function fails, I try to read the raw TCP data, but I am not able to see the data that was passed. I can only see the failure reason via ec.message(). When the accept function fails, can I get at the data that was sent?
If that is impossible, how can I solve this problem?
I found a solution.
m_ws.async_accept(net::buffer(m_untilStr),
beast::bind_front_handler(
&WsSessionNoSSL::on_accept,
shared_from_this()));
websocket::stream supports a buffered accept overload.
So first fill a buffer with the handshake data read from the TCP socket, then call async_accept(buffer, handler).
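For more context, here is a hedged sketch of how such a buffered accept could be wired up: peek at the first bytes on the raw TCP socket, and only if they look like an HTTP upgrade hand them to websocket::stream::async_accept. The class and helper names are mine, and the "starts with GET" test is only a crude heuristic for illustration.
#include <boost/asio.hpp>
#include <boost/beast.hpp>
#include <array>
#include <cstring>
#include <iostream>
#include <memory>

namespace net = boost::asio;
namespace beast = boost::beast;
namespace websocket = beast::websocket;
using tcp = net::ip::tcp;

class Session : public std::enable_shared_from_this<Session> {
public:
    explicit Session(tcp::socket socket) : m_ws(std::move(socket)) {}

    void run() {
        // Peek at the first bytes on the raw TCP socket before deciding.
        m_ws.next_layer().async_read_some(
            net::buffer(m_peek),
            [self = shared_from_this()](beast::error_code ec, std::size_t n) {
                if (ec) return;
                if (self->looks_like_http(n)) {
                    // Hand the already-read bytes to the WebSocket accept.
                    self->m_ws.async_accept(
                        net::buffer(self->m_peek.data(), n),
                        [self](beast::error_code ec2) {
                            if (!ec2) self->do_read_ws();
                        });
                } else {
                    self->do_read_tcp(n);  // plain TCP client; m_peek holds its first bytes
                }
            });
    }

private:
    bool looks_like_http(std::size_t n) const {
        return n >= 4 && std::memcmp(m_peek.data(), "GET ", 4) == 0;
    }
    void do_read_ws()  { /* m_ws.async_read(...) as in the original code */ }
    void do_read_tcp(std::size_t already_read) {
        std::cout << "raw tcp, first " << already_read << " bytes already buffered\n";
        // continue with m_ws.next_layer().async_read_some(...)
    }

    websocket::stream<tcp::socket> m_ws;
    std::array<char, 4096> m_peek{};
};

int main() {}  // sketch only; wiring up an acceptor is omitted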

boost asio ssl writing part of data

My client-server app communicates through Boost.Asio using the functions below.
When the connection starts, the client sends the server a bunch of requests, and the server sends back responses.
After adding asio::ssl to the project I get the following problem.
Sometimes, about 1 time in 5, the server reads only the first fixed part of the requests. When the client disconnects, the server receives all the missed requests.
On the client side everything seems fine: the callbacks are called with no errors and the written sizes are correct. But the packet sniffer shows that the client is not sending this part of the requests.
Client:
The size of each "frame" is located in the header, so at least the header must be read first.
The Worker thread is used for background work and for pushing ready packets to storage.
using SSLSocket = boost::asio::ssl::stream<boost::asio::ip::tcp::socket>;
class AsyncStrategy :
public NetworkStrategy
{
// other data...
void _WriteHandler(const boost::system::error_code& err, size_t bytes);
bool Connect(const boost::asio::ip::tcp::endpoint& endpoint);
void _BindMessage();
void _BindMessageRemainder(size_t size);
void _AcceptMessage(const boost::system::error_code& err_code, size_t bytes);
void _AcceptMessageRemainder(const boost::system::error_code& err_code, size_t bytes);
// to keep io_service running
void _BindTimer();
void _DumpTimer(const boost::system::error_code& error);
void _SolveProblem(const boost::system::error_code& err_code);
void _Disconnect();
bool verify_certificate(bool preverified,
boost::asio::ssl::verify_context& ctx);
PacketQuery query;
boost::array <Byte, PacketMaxSize> WriteBuff;
boost::array <Byte, PacketMaxSize> ReadBuff;
boost::asio::ip::tcp::endpoint ep;
boost::asio::io_service service;
boost::asio::deadline_timer _Timer{ service };
boost::asio::ssl::context _SSLContext;
SSLSocket sock;
boost::thread Worker;
bool _ThreadWorking;
bool _Connected = false;
};
AsyncStrategy::AsyncStrategy( MessengerAPI& api)
: API{api},_SSLContext{service,boost::asio::ssl::context::sslv23 },
sock{ service,_SSLContext }, _Timer{service},
Worker{ [&]() {
_BindTimer();
service.run();
} },
_ThreadWorking{ true }
{
_SSLContext.set_verify_mode(boost::asio::ssl::verify_peer);
_SSLContext.set_verify_callback(
boost::bind(&AsyncStrategy::verify_certificate, this, _1, _2));
_SSLContext.load_verify_file("ca.pem");
}
bool AsyncStrategy::verify_certificate(bool preverified,
boost::asio::ssl::verify_context& ctx)
{
return preverified;
}
void AsyncStrategy::_BindMessage()
{
boost::asio::async_read(sock, buffer(ReadBuff,BaseHeader::HeaderSize()),
boost::bind(&AsyncStrategy::_AcceptMessage, this, _1, _2));
}
bool AsyncStrategy::Connect(const boost::asio::ip::tcp::endpoint& endpoint)
{
ep = endpoint;
boost::system::error_code err;
sock.lowest_layer().connect(ep, err);
if (err)
throw __ConnectionRefused{};
// need blocking handshake
sock.handshake(boost::asio::ssl::stream_base::client, err);
if (err)
throw __ConnectionRefused{};
_BindMessage();
return true;
}
void AsyncStrategy::_AcceptMessage(const boost::system::error_code& err_code, size_t bytes)
{
    // check the header to see whether the packet ends here or not (pseudocode)
    // if there is more data in the packet, read the rest by binding the remainder handler
    if (need_load_more)
    {
        _BindMessageRemainder(BytesToReceive(FrameSize));
        return;
    }
    // if not, process the packet and bind this function for the next read
    _CheckPacket(ReadBuff.c_array(), bytes);
    _BindMessage();
}
void AsyncStrategy::_AcceptMessageRemainder(const boost::system::error_code& err_code, size_t bytes)
{
if (err_code)
{
_SolveProblem(err_code);
return;
}
_CheckPacket(ReadBuff.c_array(), bytes + BaseHeader::HeaderSize());
_BindMessage();
}
bool AsyncStrategy::Send(const TransferredData& Data)
{
// already known that the data fits in the buffer
Data.ToBuffer(WriteBuff.c_array());
boost::asio::async_write(sock,
buffer(WriteBuff, Data.NeededSize()),
boost::bind(&AsyncStrategy::_WriteHandler, this, _1, _2));
return true;
}
void AsyncStrategy::_WriteHandler(const boost::system::error_code& err, size_t bytes)
{
if (err)
_SolveProblem(err);
}
After removing all the SSL code, data transfer is normal. As I mentioned, everything worked properly before the SSL integration.
While looking for a solution, I discovered that if I send with a delay (I tried 200 ms), all the data is transferred normally.
Win10, Boost 1.60, OpenSSL 1.0.2n.
I guess there may be an error in my code, but I have tried almost everything I could think of. Looking for advice.
We can't see how Send is actually called.
Perhaps it needs to be synchronized.
We can see that it reuses the same buffer each time, so two overlapping writes will clobber that buffer.
We can also see that you're not verifying that the size of the Data argument fits into the PacketMaxSize buffer.
This means that if you exceed the expected buffer size you will not only lose data, it will also invoke Undefined Behaviour.
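A minimal sketch of the usual remedy for both points (my code, not the poster's): check the size up front, give every outgoing message its own storage, and keep at most one async_write in flight. SSLSocket is the alias from the question; the fixed PacketMaxSize value and the pre-serialized std::vector<char> argument are assumptions for illustration.
#include <boost/asio.hpp>
#include <boost/asio/ssl.hpp>
#include <cstddef>
#include <deque>
#include <vector>

using SSLSocket = boost::asio::ssl::stream<boost::asio::ip::tcp::socket>;
constexpr std::size_t PacketMaxSize = 4096;

class Sender {
public:
    explicit Sender(SSLSocket& sock) : sock_(sock) {}

    // Call only from the io_service thread (or post()/strand-wrap the call).
    bool Send(const std::vector<char>& serialized) {
        if (serialized.size() > PacketMaxSize)
            return false;                       // would overflow: refuse instead of UB
        queue_.push_back(serialized);           // the queue owns this message's bytes
        if (queue_.size() == 1)
            WriteNext();                        // nothing in flight yet, start writing
        return true;
    }

private:
    void WriteNext() {
        boost::asio::async_write(
            sock_, boost::asio::buffer(queue_.front()),
            [this](const boost::system::error_code& ec, std::size_t /*bytes*/) {
                queue_.pop_front();             // safe: its write has fully completed
                if (!ec && !queue_.empty())
                    WriteNext();                // start the next queued message
            });
    }

    SSLSocket& sock_;
    std::deque<std::vector<char>> queue_;
};

int main() {}  // sketch only; connecting and handshaking the SSLSocket is omitted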

Handling multiple clients with async_accept

I'm writing a secure SSL echo server with Boost.Asio and coroutines. I'd like this server to be able to serve multiple concurrent clients; this is my code:
try {
boost::asio::io_service io_service;
boost::asio::spawn(io_service, [&io_service](boost::asio::yield_context yield) {
auto ctx = boost::asio::ssl::context{ boost::asio::ssl::context::sslv23 };
ctx.set_options(
boost::asio::ssl::context::default_workarounds
| boost::asio::ssl::context::no_sslv2
| boost::asio::ssl::context::single_dh_use);
ctx.use_private_key_file(..); // My data setup
ctx.use_certificate_chain_file(...); // My data setup
boost::asio::ip::tcp::acceptor acceptor(io_service,
boost::asio::ip::tcp::endpoint(boost::asio::ip::tcp::v4(), port));
for (;;) {
boost::asio::ssl::stream<boost::asio::ip::tcp::socket> sock{ io_service, ctx };
acceptor.async_accept(sock.next_layer(), yield);
sock.async_handshake(boost::asio::ssl::stream_base::server, yield);
auto ec = boost::system::error_code{};
char data_[1024];
auto nread = sock.async_read_some(boost::asio::buffer(data_, 1024), yield[ec]);
if (ec == boost::asio::error::eof)
return; //connection closed cleanly by peer
else if (ec)
throw boost::system::system_error(ec); //some other error, is this desirable?
sock.async_write_some(boost::asio::buffer(data_, nread), yield[ec]);
if (ec == boost::asio::error::eof)
return; //connection closed cleanly by peer
else if (ec)
throw boost::system::system_error(ec); //some other error
// Shutdown gracefully
sock.async_shutdown(yield[ec]);
if (ec && (ec.category() == boost::asio::error::get_ssl_category())
&& (SSL_R_PROTOCOL_IS_SHUTDOWN == ERR_GET_REASON(ec.value())))
{
sock.lowest_layer().close();
}
}
});
io_service.run();
}
catch (std::exception& e)
{
std::cerr << "Exception: " << e.what() << "\n";
}
Anyway, I'm not sure whether the code above will do: in theory, calling async_accept returns control to the io_service.
Will another connection be accepted if one has already been accepted, i.e. once execution is already past the async_accept line?
It's a bit hard to understand the specifics of your question, since the code is incomplete (e.g., there's a return in your block, but it's unclear what block it is part of).
Notwithstanding, the documentation contains an example of a TCP echo server using coroutines. It seems you basically need to add SSL support to it, to adapt it to your needs.
If you look at main, it has the following chunk:
boost::asio::spawn(io_service,
[&](boost::asio::yield_context yield)
{
tcp::acceptor acceptor(io_service,
tcp::endpoint(tcp::v4(), std::atoi(argv[1])));
for (;;)
{
boost::system::error_code ec;
tcp::socket socket(io_service);
acceptor.async_accept(socket, yield[ec]);
if (!ec) std::make_shared<session>(std::move(socket))->go();
}
});
This loops endlessly, and, following each (successful) call to async_accept, handles accepting the next connection (while this connection and others might still be active).
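Applied to the asker's situation, the accept loop would hand every accepted socket to its own coroutine instead of serving it inline. The following is only a sketch under that assumption (port number, buffer size and the omitted SSL context setup are placeholders), not a drop-in replacement for the asker's code:
#include <boost/asio.hpp>
#include <boost/asio/spawn.hpp>
#include <boost/asio/ssl.hpp>
#include <memory>

namespace asio = boost::asio;
using asio::ip::tcp;
using ssl_socket = asio::ssl::stream<tcp::socket>;

int main() {
    asio::io_service io_service;
    asio::ssl::context ctx{asio::ssl::context::sslv23};
    // ctx.use_private_key_file(...); ctx.use_certificate_chain_file(...);

    asio::spawn(io_service, [&](asio::yield_context yield) {
        tcp::acceptor acceptor(io_service, tcp::endpoint(tcp::v4(), 5000));
        for (;;) {
            boost::system::error_code ec;
            auto sock = std::make_shared<ssl_socket>(io_service, ctx);
            acceptor.async_accept(sock->next_layer(), yield[ec]);
            if (ec) continue;

            // One coroutine per client: the accept loop is free again immediately.
            asio::spawn(io_service, [sock](asio::yield_context yield2) {
                boost::system::error_code ec2;
                sock->async_handshake(asio::ssl::stream_base::server, yield2[ec2]);
                if (ec2) return;
                char data[1024];
                for (;;) {
                    auto n = sock->async_read_some(asio::buffer(data), yield2[ec2]);
                    if (ec2) break;             // eof or error ends this session only
                    asio::async_write(*sock, asio::buffer(data, n), yield2[ec2]);
                    if (ec2) break;
                }
                sock->async_shutdown(yield2[ec2]);
            });
        }
    });

    io_service.run();
}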
Again, I'm not sure about your code, but it contains exits from the loop like
return; //connection closed cleanly by peer
To illustrate the point, here are two applications.
The first is a Python multiprocessing echo client, adapted from PMOTW:
import socket
import sys
import multiprocessing

def session(i):
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server_address = ('localhost', 5000)
    print 'connecting to %s port %s' % server_address
    sock.connect(server_address)
    print 'connected'
    for _ in range(300):
        try:
            # Send data
            message = 'client ' + str(i) + ' message'
            print 'sending "%s"' % message
            sock.sendall(message)
            # Look for the response
            amount_received = 0
            amount_expected = len(message)
            while amount_received < amount_expected:
                data = sock.recv(16)
                amount_received += len(data)
                print 'received "%s"' % data
        except:
            print >>sys.stderr, 'closing socket'
            sock.close()

if __name__ == '__main__':
    pool = multiprocessing.Pool(8)
    pool.map(session, range(8))
The details are not that important (although it's Python, and therefore easy to read), but the point is that it opens up 8 processes, and each engages the same asio echo server (below) with 300 messages.
When run, it outputs
...
received "client 1 message"
sending "client 1 message"
received "client 2 message"
sending "client 2 message"
received "client 3 message"
received "client 0 message"
sending "client 3 message"
sending "client 0 message"
...
showing that the echo sessions are indeed interleaved.
Now for the echo server. I've slightly adapted the example from the docs:
#include <cstdlib>
#include <iostream>
#include <memory>
#include <utility>
#include <boost/asio.hpp>
using boost::asio::ip::tcp;
class session :
public std::enable_shared_from_this<session> {
public:
session(tcp::socket socket) : socket_(std::move(socket)) {}
void start() { do_read(); }
private:
void do_read() {
auto self(
shared_from_this());
socket_.async_read_some(
boost::asio::buffer(data_, max_length),
[this, self](boost::system::error_code ec, std::size_t length) {
if(!ec)
do_write(length);
});
}
void do_write(std::size_t length) {
auto self(shared_from_this());
socket_.async_write_some(
boost::asio::buffer(data_, length),
[this, self](boost::system::error_code ec, std::size_t /*length*/) {
if (!ec)
do_read();
});
}
private:
tcp::socket socket_;
enum { max_length = 1024 };
char data_[max_length];
};
class server {
public:
server(boost::asio::io_service& io_service, short port) :
acceptor_(io_service, tcp::endpoint(tcp::v4(), port)),
socket_(io_service) {
do_accept();
}
private:
void do_accept() {
acceptor_.async_accept(
socket_,
[this](boost::system::error_code ec) {
if(!ec)
std::make_shared<session>(std::move(socket_))->start();
do_accept();
});
}
tcp::acceptor acceptor_;
tcp::socket socket_;
};
int main(int argc, char* argv[]) {
const int port = 5000;
try {
boost::asio::io_service io_service;
server s{io_service, port};
io_service.run();
}
catch (std::exception& e) {
std::cerr << "Exception: " << e.what() << "\n";
}
}
This shows that this server indeed interleaves.
Note that this is not the coroutine version. While I once played with the coroutine version a bit, I just couldn't get it to build on my current box (also, as sehe notes in the comments below, you might anyway prefer this more mainstream version for now).
However, this is not a fundamental difference with respect to your question. In the non-coroutine version, the callbacks explicitly launch new operations and supply the next callback; the coroutine version uses a more sequential-looking paradigm. In both versions, each call returns to asio's control loop, which monitors all the current operations that can proceed.
From the asio coroutine docs:
Coroutines let you create a structure that mirrors the actual program logic. Asynchronous operations don’t split functions, because there are no handlers to define what should happen when an asynchronous operation completes. Instead of having handlers call each other, the program can use a sequential structure.
It's not that the sequential structure makes all operations sequential - that would eradicate the entire need for asio.

Unsolicited messages in boost::asio crashes application, without SSL it works fine, why?

I want to send unsolicited messages over an SSL connection, meaning that the server sends a message not in response to a request from a client, but because some event happened that the client needs to know about.
I just use the SSL server example from the Boost site and added a timer that sends 'hello' after 10 seconds. Everything works fine before the timer expires (the server echoes everything), and the 'hello' is also received, but after that the application crashes the next time a text is sent to the server.
Even stranger to me is the fact that when I disable the SSL code, i.e. use a plain socket and do the same using telnet, it works fine and keeps on working fine!
I have run into this problem for the second time now, and I really have no idea why it happens the way it does.
Below is the complete source that I altered to demonstrate the problem. Compile it without the SSL define and use telnet, and everything works OK; define SSL and use openssl (or the SSL client example from the Boost website), and it crashes.
#include <cstdlib>
#include <iostream>
#include <boost/bind.hpp>
#include <boost/asio.hpp>
#include <boost/asio/ssl.hpp>
//#define SSL
typedef boost::asio::ssl::stream<boost::asio::ip::tcp::socket> ssl_socket;
class session
{
public:
session(boost::asio::io_service& io_service,
boost::asio::ssl::context& context)
#ifdef SSL
: socket_(io_service, context)
#else
: socket_(io_service)
#endif
{
}
ssl_socket::lowest_layer_type& socket()
{
return socket_.lowest_layer();
}
void start()
{
#ifdef SSL
socket_.async_handshake(boost::asio::ssl::stream_base::server,
boost::bind(&session::handle_handshake, this,
boost::asio::placeholders::error));
#else
socket_.async_read_some(boost::asio::buffer(data_, max_length),
boost::bind(&session::handle_read, this,
boost::asio::placeholders::error,
boost::asio::placeholders::bytes_transferred));
boost::shared_ptr< boost::asio::deadline_timer > timer(new boost::asio::deadline_timer( socket_.get_io_service() ));
timer->expires_from_now( boost::posix_time::seconds( 10 ) );
timer->async_wait( boost::bind( &session::SayHello, this, _1, timer ) );
#endif
}
void handle_handshake(const boost::system::error_code& error)
{
if (!error)
{
socket_.async_read_some(boost::asio::buffer(data_, max_length),
boost::bind(&session::handle_read, this,
boost::asio::placeholders::error,
boost::asio::placeholders::bytes_transferred));
boost::shared_ptr< boost::asio::deadline_timer > timer(new boost::asio::deadline_timer( socket_.get_io_service() ));
timer->expires_from_now( boost::posix_time::seconds( 10 ) );
timer->async_wait( boost::bind( &session::SayHello, this, _1, timer ) );
}
else
{
delete this;
}
}
void SayHello(const boost::system::error_code& error, boost::shared_ptr< boost::asio::deadline_timer > timer) {
std::string hello = "hello";
boost::asio::async_write(socket_,
boost::asio::buffer(hello, hello.length()),
boost::bind(&session::handle_write, this,
boost::asio::placeholders::error));
timer->expires_from_now( boost::posix_time::seconds( 10 ) );
timer->async_wait( boost::bind( &session::SayHello, this, _1, timer ) );
}
void handle_read(const boost::system::error_code& error,
size_t bytes_transferred)
{
if (!error)
{
boost::asio::async_write(socket_,
boost::asio::buffer(data_, bytes_transferred),
boost::bind(&session::handle_write, this,
boost::asio::placeholders::error));
}
else
{
std::cout << "session::handle_read() -> Delete, ErrorCode: "<< error.value() << std::endl;
delete this;
}
}
void handle_write(const boost::system::error_code& error)
{
if (!error)
{
socket_.async_read_some(boost::asio::buffer(data_, max_length),
boost::bind(&session::handle_read, this,
boost::asio::placeholders::error,
boost::asio::placeholders::bytes_transferred));
}
else
{
std::cout << "session::handle_write() -> Delete, ErrorCode: "<< error.value() << std::endl;
delete this;
}
}
private:
#ifdef SSL
ssl_socket socket_;
#else
boost::asio::ip::tcp::socket socket_;
#endif
enum { max_length = 1024 };
char data_[max_length];
};
class server
{
public:
server(boost::asio::io_service& io_service, unsigned short port)
: io_service_(io_service),
acceptor_(io_service,
boost::asio::ip::tcp::endpoint(boost::asio::ip::tcp::v4(), port)),
context_(boost::asio::ssl::context::sslv23)
{
#ifdef SSL
context_.set_options(
boost::asio::ssl::context::default_workarounds
| boost::asio::ssl::context::no_sslv2
| boost::asio::ssl::context::single_dh_use);
context_.set_password_callback(boost::bind(&server::get_password, this));
context_.use_certificate_chain_file("server.crt");
context_.use_private_key_file("server.key", boost::asio::ssl::context::pem);
context_.use_tmp_dh_file("dh512.pem");
#endif
start_accept();
}
std::string get_password() const
{
return "test";
}
void start_accept()
{
session* new_session = new session(io_service_, context_);
acceptor_.async_accept(new_session->socket(),
boost::bind(&server::handle_accept, this, new_session,
boost::asio::placeholders::error));
}
void handle_accept(session* new_session,
const boost::system::error_code& error)
{
if (!error)
{
new_session->start();
}
else
{
delete new_session;
}
start_accept();
}
private:
boost::asio::io_service& io_service_;
boost::asio::ip::tcp::acceptor acceptor_;
boost::asio::ssl::context context_;
};
int main(int argc, char* argv[])
{
try
{
boost::asio::io_service io_service;
using namespace std; // For atoi.
server s(io_service, 7777 /*atoi(argv[1])*/);
io_service.run();
}
catch (std::exception& e)
{
std::cerr << "Exception: " << e.what() << "\n";
}
return 0;
}
I use Boost 1.49 and OpenSSL 1.0.0i-fips (19 Apr 2012). I have tried to investigate this problem as much as possible. The last time I had this problem (a couple of months ago), I received an error number that I could trace to this error message: "error: decryption failed or bad record mac".
But I have no idea what is going wrong or how to fix it; any suggestions are welcome.
The problem is multiple concurrent async reads and writes. I was able to crash this program even with raw sockets (glibc detected a double free or corruption). Let's see what happens after the session starts (in parentheses I put the number of concurrently scheduled async reads and writes):
1. An async read is scheduled (1, 0).
2. (Assume that data comes.) handle_read is executed; it schedules an async write (0, 1).
3. (The data is written.) handle_write is executed; it schedules an async read (1, 0).
Now, it could loop over 1.-3. indefinitely without any problem. But then the timer expires...
4. (Assume that no new data comes from the client, so there is still one async read scheduled.) The timer expires, so SayHello is executed; it schedules an async write, still no problem (1, 1).
5. (The data from SayHello is written, but still no new data comes from the client.) handle_write is executed; it schedules an async read (2, 0).
Now we are in trouble. If any new data comes from the client, part of it could be read by one async read and part by another. With raw sockets it might even seem to work (despite the possibility that two concurrent writes are scheduled, so the echo on the client side might look mixed). With SSL this can corrupt the incoming data stream, and that is probably what happens.
How to fix it:
A strand will not help in this case (the problem is not concurrent handler execution, but concurrently scheduled async reads and writes).
It is not enough to give the async write in SayHello a handler that does nothing (there would then be no concurrent reads, but concurrent writes could still occur).
If you really want to have two different kinds of writes (echo and timer), you have to implement some kind of queue of messages to write, to avoid mixing the writes from the echo and from the timer.
General remark: this was a simple example, but using shared_ptr instead of delete this is a much better way of handling object lifetime with boost::asio. It prevents missed errors from resulting in memory leaks.
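To make the suggested fix concrete, here is a small sketch (my code, not part of the original answer) of a session that funnels both the echo path and the timer path through one outgoing queue, and that uses shared_from_this() for lifetime management instead of delete this:
#include <boost/asio.hpp>
#include <deque>
#include <memory>
#include <string>

using boost::asio::ip::tcp;

class session : public std::enable_shared_from_this<session> {
public:
    explicit session(tcp::socket socket) : socket_(std::move(socket)) {}

    // Called by the read handler (echo) and by the timer handler (hello).
    void deliver(std::string msg) {
        outbox_.push_back(std::move(msg));
        if (outbox_.size() == 1)       // no write currently in flight
            do_write();
    }

private:
    void do_write() {
        auto self = shared_from_this();
        boost::asio::async_write(
            socket_, boost::asio::buffer(outbox_.front()),
            [this, self](boost::system::error_code ec, std::size_t /*bytes*/) {
                outbox_.pop_front();
                if (!ec && !outbox_.empty())
                    do_write();        // exactly one async_write at a time
            });
    }

    tcp::socket socket_;
    std::deque<std::string> outbox_;
};

int main() {}  // sketch only; acceptor, timer and read loop are omitted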

boost asio - write equivalent piece of code

I have this piece of code using standard sockets:
void set_fds(int sock1, int sock2, fd_set *fds) {
    FD_ZERO(fds);
    FD_SET(sock1, fds);
    FD_SET(sock2, fds);
}

void do_proxy(int client, int conn, char *buffer) {
    fd_set readfds;
    int result, nfds = max(client, conn) + 1;
    set_fds(client, conn, &readfds);
    while ((result = select(nfds, &readfds, 0, 0, 0)) > 0) {
        if (FD_ISSET(client, &readfds)) {
            int recvd = recv(client, buffer, 256, 0);
            if (recvd <= 0)
                return;
            send_sock(conn, buffer, recvd);
        }
        if (FD_ISSET(conn, &readfds)) {
            int recvd = recv(conn, buffer, 256, 0);
            if (recvd <= 0)
                return;
            send_sock(client, buffer, recvd);
        }
        set_fds(client, conn, &readfds);
    }
}
I have the sockets client and conn and I need to "proxy" traffic between them (this is part of a SOCKS5 server implementation; see https://github.com/mfontanini/Programs-Scripts/blob/master/socks5/socks5.cpp). How can I achieve this under Asio?
I should mention that until this point both sockets were operated in blocking mode.
I tried to use this, without success:
ProxySession::ProxySession(ba::io_service& ioService, socket_ptr socket, socket_ptr clientSock): ioService_(ioService), socket_(socket), clientSock_(clientSock)
{
}
void ProxySession::Start()
{
socket_->async_read_some(boost::asio::buffer(data_, 1),
boost::bind(&ProxySession::HandleProxyRead, this,
boost::asio::placeholders::error,
boost::asio::placeholders::bytes_transferred));
}
void ProxySession::HandleProxyRead(const boost::system::error_code& error,
size_t bytes_transferred)
{
if (!error)
{
boost::asio::async_write(*clientSock_,
boost::asio::buffer(data_, bytes_transferred),
boost::bind(&ProxySession::HandleProxyWrite, this,
boost::asio::placeholders::error));
}
else
{
delete this;
}
}
void ProxySession::HandleProxyWrite(const boost::system::error_code& error)
{
if (!error)
{
socket_->async_read_some(boost::asio::buffer(data_, max_length),
boost::bind(&ProxySession::HandleProxyRead, this,
boost::asio::placeholders::error,
boost::asio::placeholders::bytes_transferred));
}
else
{
delete this;
}
}
The issue is that if I do ba::read(*socket_, ba::buffer(data_,256)) I can read the data that comes from my browser client through the SOCKS proxy, but in the version above, ProxySession::Start never leads to HandleProxyRead being called under any circumstances.
I don't really need an async way of exchanging data here; it's just the solution I came up with. Also, at the point where I called ProxySession::Start from my code, I needed to introduce a sleep, because otherwise the thread context from which this was executing was being shut down.
Update 2: see one of my updates below. The question block is getting too big.
The problem can be solved by using asynchronous read/write functions in order to get something similar to the presented code. Basically, use async_read_some()/async_write(), or other async functions in these categories. Also, for async processing to work, one must call boost::asio::io_service::run(), which dispatches the completion handlers of the async operations.
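A sketch of what that can look like for the proxy case: one independent read loop per direction, each forwarding what it just read to the other socket before reading again. The helper names, the 256-byte buffers and the use of std::shared_ptr are my choices for illustration, not code from the question:
#include <boost/asio.hpp>
#include <array>
#include <memory>

namespace ba = boost::asio;
using socket_ptr = std::shared_ptr<ba::ip::tcp::socket>;

// One direction: read from `from`, write everything to `to`, then read again.
void relay(socket_ptr from, socket_ptr to,
           std::shared_ptr<std::array<char, 256>> buf) {
    from->async_read_some(
        ba::buffer(*buf),
        [from, to, buf](const boost::system::error_code& ec, std::size_t n) {
            if (ec) {                                     // stop both directions
                boost::system::error_code ignored;
                from->close(ignored);
                to->close(ignored);
                return;
            }
            ba::async_write(
                *to, ba::buffer(*buf, n),
                [from, to, buf](const boost::system::error_code& wec, std::size_t) {
                    if (wec) {
                        boost::system::error_code ignored;
                        from->close(ignored);
                        to->close(ignored);
                        return;
                    }
                    relay(from, to, buf);                 // wait for the next chunk
                });
        });
}

// Start both directions; io_service.run() must be running for handlers to fire.
void start_proxy(socket_ptr client, socket_ptr conn) {
    relay(client, conn, std::make_shared<std::array<char, 256>>());
    relay(conn, client, std::make_shared<std::array<char, 256>>());
}

int main() {}  // sketch only; accepting/connecting the two sockets is omitted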
I have managed to come up with this. This solution solves the problem of "data exchange" between the 2 sockets (which must happen according to the SOCKS5 proxy protocol), but it is very compute intensive. Any ideas?
std::size_t readable = 0;
std::size_t transf = 0;
boost::asio::socket_base::bytes_readable command1(true);
boost::asio::socket_base::bytes_readable command2(true);
try
{
    while (1)
    {
        socket_->io_control(command1);
        clientSock_->io_control(command2);
        if ((readable = command1.get()) > 0)
        {
            transf = ba::read(*socket_, ba::buffer(data_, readable));
            ba::write(*clientSock_, ba::buffer(data_, transf));
            boost::this_thread::sleep(boost::posix_time::milliseconds(500));
        }
        if ((readable = command2.get()) > 0)
        {
            transf = ba::read(*clientSock_, ba::buffer(data_, readable));
            ba::write(*socket_, ba::buffer(data_, transf));
            boost::this_thread::sleep(boost::posix_time::milliseconds(500));
        }
    }
}
catch (std::exception& ex)
{
    std::cerr << "Exception in thread while exchanging: " << ex.what() << "\n";
    return;
}