boost asio - write equivalent piece of code - c++

I have this piece of code using standard sockets:
void set_fds(int sock1, int sock2, fd_set *fds) {
    FD_ZERO(fds);
    FD_SET(sock1, fds);
    FD_SET(sock2, fds);
}

void do_proxy(int client, int conn, char *buffer) {
    fd_set readfds;
    int result, nfds = max(client, conn) + 1;
    set_fds(client, conn, &readfds);
    while ((result = select(nfds, &readfds, 0, 0, 0)) > 0) {
        if (FD_ISSET(client, &readfds)) {
            int recvd = recv(client, buffer, 256, 0);
            if (recvd <= 0)
                return;
            send_sock(conn, buffer, recvd);
        }
        if (FD_ISSET(conn, &readfds)) {
            int recvd = recv(conn, buffer, 256, 0);
            if (recvd <= 0)
                return;
            send_sock(client, buffer, recvd);
        }
        set_fds(client, conn, &readfds);
    }
}
I have two sockets, client and conn, and I need to "proxy" traffic between them (this is part of a socks5 server implementation; see https://github.com/mfontanini/Programs-Scripts/blob/master/socks5/socks5.cpp). How can I achieve this under asio?
I should mention that up to this point both sockets have been operated in blocking mode.
I tried the following, without success:
ProxySession::ProxySession(ba::io_service& ioService, socket_ptr socket, socket_ptr clientSock)
    : ioService_(ioService), socket_(socket), clientSock_(clientSock)
{
}

void ProxySession::Start()
{
    socket_->async_read_some(boost::asio::buffer(data_, 1),
        boost::bind(&ProxySession::HandleProxyRead, this,
            boost::asio::placeholders::error,
            boost::asio::placeholders::bytes_transferred));
}

void ProxySession::HandleProxyRead(const boost::system::error_code& error,
                                   size_t bytes_transferred)
{
    if (!error)
    {
        boost::asio::async_write(*clientSock_,
            boost::asio::buffer(data_, bytes_transferred),
            boost::bind(&ProxySession::HandleProxyWrite, this,
                boost::asio::placeholders::error));
    }
    else
    {
        delete this;
    }
}

void ProxySession::HandleProxyWrite(const boost::system::error_code& error)
{
    if (!error)
    {
        socket_->async_read_some(boost::asio::buffer(data_, max_length),
            boost::bind(&ProxySession::HandleProxyRead, this,
                boost::asio::placeholders::error,
                boost::asio::placeholders::bytes_transferred));
    }
    else
    {
        delete this;
    }
}
The issue is that if I do ba::read(*socket_, ba::buffer(data_, 256)) I can read the data that comes from my browser client through the socks proxy, but in the version above ProxySession::Start never leads to HandleProxyRead being called.
I don't really need an async way of exchanging data here; it's just the solution I came up with. Also, in the code from which I called ProxySession->Start I had to introduce a sleep, because otherwise the thread context this was executing in was shut down.
Update 2: See one of my updates below; the question block is getting too big.

The problem can be solved by using asynchronous read/write functions in order to get something similar to the presented code. Basically, use async_read_some()/async_write() (or other async functions in those categories). Also, for async processing to work, one must call boost::asio::io_service::run(), which dispatches the completion handlers of the async operations.
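For instance, here is a minimal sketch of full-duplex proxying built from exactly those calls. It assumes the members from the attempt above (socket_, clientSock_, max_length) plus two separate per-direction buffers so the chains don't overwrite each other; ReadFrom, Stop, dataUp_ and dataDown_ are hypothetical names:
void ProxySession::Start()
{
    ReadFrom(socket_, clientSock_, dataUp_);   // browser -> remote
    ReadFrom(clientSock_, socket_, dataDown_); // remote -> browser
}

void ProxySession::ReadFrom(socket_ptr from, socket_ptr to, char* buf)
{
    from->async_read_some(boost::asio::buffer(buf, max_length),
        [this, from, to, buf](const boost::system::error_code& ec, std::size_t n) {
            if (ec) { Stop(); return; } // Stop(): hypothetical helper that closes both sockets
            boost::asio::async_write(*to, boost::asio::buffer(buf, n),
                [this, from, to, buf](const boost::system::error_code& ec2, std::size_t) {
                    if (ec2) { Stop(); return; }
                    ReadFrom(from, to, buf); // re-arm this direction
                });
        });
}
As long as io_service::run() is executing somewhere, the two chains interleave by themselves; no select(), polling, or sleeping is needed.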

I have managed to come up with this. This solution solves the problem of "data exchange" between the 2 sockets (which must happen according to the socks5 proxy protocol), but it is very compute intensive. Any ideas?
std::size_t readable = 0;
std::size_t transf = 0;
boost::asio::socket_base::bytes_readable command1(true);
boost::asio::socket_base::bytes_readable command2(true);
try
{
    while (1)
    {
        socket_->io_control(command1);
        clientSock_->io_control(command2);
        if ((readable = command1.get()) > 0)
        {
            transf = ba::read(*socket_, ba::buffer(data_, readable));
            ba::write(*clientSock_, ba::buffer(data_, transf));
            boost::this_thread::sleep(boost::posix_time::milliseconds(500));
        }
        if ((readable = command2.get()) > 0)
        {
            transf = ba::read(*clientSock_, ba::buffer(data_, readable));
            ba::write(*socket_, ba::buffer(data_, transf));
            boost::this_thread::sleep(boost::posix_time::milliseconds(500));
        }
    }
}
catch (std::exception& ex)
{
    std::cerr << "Exception in thread while exchanging: " << ex.what() << "\n";
    return;
}

Related

How to see raw tcp data on async_accept failure?

I am using the boost::beast library for both a WebSocket and a TCP server.
Because of a requirement, I have to use the same port for both, so I implemented the server as follows:
void on_run() {
    // Set suggested timeout settings for the websocket
    m_ws.set_option(...);
    m_ws.async_accept(
        beast::bind_front_handler(
            &WsSessionNoSSL::on_accept,
            shared_from_this()));
}

virtual void on_accept(beast::error_code ec) {
    if (ec) {
        std::string msg = ec.message();
        CONSOLE_INFO("err: {}", msg);
        if (msg != "bad method") {
            return fail(ec, "accept");
        } else {
            doReadTcp();
            return;
        }
    }
    doReadWs();
}

void doReadTcp() {
    m_ws.next_layer().async_read_some(boost::asio::buffer(m_recvData, 15),
        [this, self = shared_from_this()](const boost::system::error_code& error,
                                          size_t bytes_transferred) {
            if (error) {
                return fail(error, "tcp read fail");
            }
            CONSOLE_INFO("recvs: {}", bytes_transferred);
            doReadTcp();
        });
}

void doReadWs() {
    m_ws.async_read(...);
}
After the accept fails, I try to read the raw TCP data, but I cannot see what was actually sent; I only get the failure reason via ec.message(). When the accept fails, can I recover the data that was passed? If that is impossible, how else can I solve this problem?
I found a solution:
m_ws.async_accept(net::buffer(m_untilStr),
    beast::bind_front_handler(
        &WsSessionNoSSL::on_accept,
        shared_from_this()));
websocket::stream supports a buffered accept overload: first read the initial handshake bytes from the raw TCP socket into a buffer, then call async_accept(buffer, handler) so the handshake is parsed from that buffer instead of from the socket.
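A minimal sketch of the whole flow, assuming m_untilStr is a std::string member and m_recvData a char array (names carried over from the code above):
void on_run() {
    // Read whatever the client sent first; the bytes are kept in m_untilStr
    // instead of being consumed directly by the websocket parser.
    m_ws.next_layer().async_read_some(
        boost::asio::buffer(m_recvData, sizeof(m_recvData)),
        [this, self = shared_from_this()](beast::error_code ec, std::size_t n) {
            if (ec)
                return fail(ec, "initial read");
            m_untilStr.assign(m_recvData, n);
            // Feed the buffered bytes to the websocket handshake parser. If they
            // are not an HTTP Upgrade request, on_accept sees the error and can
            // fall back to plain TCP, still holding the data in m_untilStr.
            m_ws.async_accept(net::buffer(m_untilStr),
                beast::bind_front_handler(&WsSessionNoSSL::on_accept,
                                          shared_from_this()));
        });
}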

Elegant way of reconnecting loop with boost::asio?

I am trying to write a very elegant way of handling a reconnect loop with boost async_connect(...). The problem is, I don't see how I could elegantly solve the following:
I have a TCP client that should try to connect asynchronously to a server, if the connection fails because the server is offline or any other error occurs, wait a given amount of time and try to reconnect. There are multiple things to take into consideration here:
Avoidance of global variables if possible
It has to be async connect
A very basic client is instantiated like so:
tcpclient::tcpclient(std::string host, int port)
    : _endpoint(boost::asio::ip::address::from_string(host), port), _socket(_ios) {
    logger::log_info("Initiating client ...");
}
Attempt to connect to the server:
void tcpclient::start() {
    bool is_connected = false;
    while (!is_connected) {
        _socket.async_connect(_endpoint, connect_handler);
        _ios.run();
    }
    // read write data (?)
}
The handler:
void tcpclient::connect_handler(const boost::system::error_code &error) {
    if (error) {
        // trigger disconnect (?)
        logger::log_error(error.message());
        return;
    }
    // Connection is established at this point
    // Update timer state and start authentication on server ?
    logger::log_info("Connected?");
}
How can I properly start reconnecting every time the connection fails (or is dropped)? Since the handler is static, I cannot modify a class attribute that indicates the connection status, and I want to avoid hacky global-variable workarounds.
How can I solve this issue in a proper way?
My attempt would be something like this:
tcpclient.h
enum ConnectionStatus {
    NOT_CONNECTED,
    CONNECTED
};

class tcpclient {
public:
    tcpclient(std::string host, int port);
    void start();
private:
    ConnectionStatus _status = NOT_CONNECTED;
    void connect_handler(const boost::system::error_code& error);
    boost::asio::io_service _ios;
    boost::asio::ip::tcp::endpoint _endpoint;
    boost::asio::ip::tcp::socket _socket;
};
tcpclient.cpp
#include "tcpclient.h"
#include <boost/chrono.hpp>
#include "../utils/logger.h"
tcpclient::tcpclient(std::string host, int port) : _endpoint(boost::asio::ip::address::from_string(host), port),
_socket(_ios) {
logger::log_info("Initiating client ...");
logger::log_info("Server endpoint: " + _endpoint.address().to_string());
}
void tcpclient::connect_handler(const boost::system::error_code &error) {
if(!error){
_status = CONNECTED;
logger::log_info("Connected.");
}
else{
_status = NOT_CONNECTED;
logger::log_info("Failed to connect");
_socket.close();
}
}
void tcpclient::start() {
while (_status == NOT_CONNECTED) {
std::this_thread::sleep_for(std::chrono::milliseconds(2000));
_socket.close();
_socket.async_connect(_endpoint, std::bind(&tcpclient::connect_handler, this, std::placeholders::_1));
_ios.run();
}
}
The problem is that the reconnect is not working properly and the application seems to freeze. Aside from that, reconnecting also seems problematic once a connection was established and then dropped (e.g. due to the server crashing/closing).
std::this_thread::sleep_for(std::chrono::milliseconds(2000)); will freeze the program for 2 seconds. What you can do instead is launch an async timer when a connection attempt fails:
::boost::asio::steady_timer m_timer{_ios, boost::asio::chrono::seconds{2}};

void tcpclient::connect_handler(const boost::system::error_code &error)
{
    if (!error)
    {
        _status = CONNECTED;
        logger::log_info("Connected.");
    }
    else
    {
        _status = NOT_CONNECTED;
        logger::log_info("Failed to connect");
        _socket.close();
        m_timer.expires_from_now(boost::asio::chrono::seconds{2});
        m_timer.async_wait(std::bind(&tcpclient::on_ready_to_reconnect, this, std::placeholders::_1));
    }
}

void tcpclient::on_ready_to_reconnect(const boost::system::error_code &error)
{
    try_connect();
}

void tcpclient::try_connect()
{
    _socket.async_connect(_endpoint, std::bind(&tcpclient::connect_handler, this, std::placeholders::_1));
}

void tcpclient::start()
{
    try_connect();
    _ios.run();
}
There is also no need for the while (_status == NOT_CONNECTED) loop, because the io service stays busy and _ios.run() won't return until the connection is established.
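The question also mentions connections dropping after being established. That case is not covered by the code above, but the same timer path can be re-entered from a read handler; a hedged sketch, with handle_read as a hypothetical member:
void tcpclient::handle_read(const boost::system::error_code &error, std::size_t /*n*/)
{
    if (error)
    {
        // The established connection dropped: go back through the reconnect path.
        _status = NOT_CONNECTED;
        _socket.close();
        m_timer.expires_from_now(boost::asio::chrono::seconds{2});
        m_timer.async_wait(std::bind(&tcpclient::on_ready_to_reconnect, this, std::placeholders::_1));
        return;
    }
    // ... process the received data, then re-arm the async read ...
}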

Handling multiple clients with async_accept

I'm writing a secure SSL echo server with boost ASIO and coroutines. I'd like this server to be able to serve multiple concurrent clients; this is my code:
try {
    boost::asio::io_service io_service;
    boost::asio::spawn(io_service, [&io_service](boost::asio::yield_context yield) {
        auto ctx = boost::asio::ssl::context{ boost::asio::ssl::context::sslv23 };
        ctx.set_options(
            boost::asio::ssl::context::default_workarounds
            | boost::asio::ssl::context::no_sslv2
            | boost::asio::ssl::context::single_dh_use);
        ctx.use_private_key_file(..); // My data setup
        ctx.use_certificate_chain_file(...); // My data setup
        boost::asio::ip::tcp::acceptor acceptor(io_service,
            boost::asio::ip::tcp::endpoint(boost::asio::ip::tcp::v4(), port));
        for (;;) {
            boost::asio::ssl::stream<boost::asio::ip::tcp::socket> sock{ io_service, ctx };
            acceptor.async_accept(sock.next_layer(), yield);
            sock.async_handshake(boost::asio::ssl::stream_base::server, yield);
            auto ec = boost::system::error_code{};
            char data_[1024];
            auto nread = sock.async_read_some(boost::asio::buffer(data_, 1024), yield[ec]);
            if (ec == boost::asio::error::eof)
                return; // connection closed cleanly by peer
            else if (ec)
                throw boost::system::system_error(ec); // some other error, is this desirable?
            sock.async_write_some(boost::asio::buffer(data_, nread), yield[ec]);
            if (ec == boost::asio::error::eof)
                return; // connection closed cleanly by peer
            else if (ec)
                throw boost::system::system_error(ec); // some other error
            // Shutdown gracefully
            sock.async_shutdown(yield[ec]);
            if (ec && (ec.category() == boost::asio::error::get_ssl_category())
                && (SSL_R_PROTOCOL_IS_SHUTDOWN == ERR_GET_REASON(ec.value())))
            {
                sock.lowest_layer().close();
            }
        }
    });
    io_service.run();
}
catch (std::exception& e)
{
    std::cerr << "Exception: " << e.what() << "\n";
}
Anyway, I'm not sure the code above will do what I want: in theory, calling async_accept will return control to the io_service manager.
Will another connection be accepted if one has already been accepted, i.e. we're already past the async_accept line?
It's a bit hard to understand the specifics of your question, since the code is incomplete (e.g., there's a return in your block, but it's unclear what block that return exits).
Notwithstanding, the documentation contains an example of a TCP echo server using coroutines. It seems you basically need to add SSL support to it to adapt it to your needs.
If you look at main, it has the following chunk:
boost::asio::spawn(io_service,
    [&](boost::asio::yield_context yield)
    {
        tcp::acceptor acceptor(io_service,
            tcp::endpoint(tcp::v4(), std::atoi(argv[1])));
        for (;;)
        {
            boost::system::error_code ec;
            tcp::socket socket(io_service);
            acceptor.async_accept(socket, yield[ec]);
            if (!ec) std::make_shared<session>(std::move(socket))->go();
        }
    });
This loops endlessly, and, following each (successful) call to async_accept, handles accepting the next connection (while this connection and others might still be active).
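Applied to your SSL setup, the same pattern would look roughly like this hedged, untested sketch (my adaptation, not from the docs): the outer coroutine only accepts, and each connection gets its own spawned coroutine, so one slow client never blocks the loop.
boost::asio::spawn(io_service, [&](boost::asio::yield_context yield) {
    // ctx and port as set up in your code
    tcp::acceptor acceptor(io_service, tcp::endpoint(tcp::v4(), port));
    for (;;) {
        boost::system::error_code ec;
        // shared_ptr keeps the stream alive for the per-connection coroutine
        auto sock = std::make_shared<boost::asio::ssl::stream<tcp::socket>>(io_service, ctx);
        acceptor.async_accept(sock->next_layer(), yield[ec]);
        if (ec) continue;
        boost::asio::spawn(io_service, [sock](boost::asio::yield_context yield) {
            boost::system::error_code ec;
            sock->async_handshake(boost::asio::ssl::stream_base::server, yield[ec]);
            if (ec) return;
            char data[1024];
            for (;;) {
                auto n = sock->async_read_some(boost::asio::buffer(data), yield[ec]);
                if (ec) break; // eof or error: leave the echo loop
                boost::asio::async_write(*sock, boost::asio::buffer(data, n), yield[ec]);
                if (ec) break;
            }
            sock->async_shutdown(yield[ec]); // best-effort TLS shutdown
        });
    }
});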
Again, I'm not sure about your code, but it contains exits from the loop like
return; //connection closed cleanly by peer
To illustrate the point, here are two applications.
The first is a Python multiprocessing echo client, adapted from PMOTW:
import socket
import sys
import multiprocessing

def session(i):
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server_address = ('localhost', 5000)
    print 'connecting to %s port %s' % server_address
    sock.connect(server_address)
    print 'connected'
    for _ in range(300):
        try:
            # Send data
            message = 'client ' + str(i) + ' message'
            print 'sending "%s"' % message
            sock.sendall(message)
            # Look for the response
            amount_received = 0
            amount_expected = len(message)
            while amount_received < amount_expected:
                data = sock.recv(16)
                amount_received += len(data)
                print 'received "%s"' % data
        except:
            print >>sys.stderr, 'closing socket'
            sock.close()

if __name__ == '__main__':
    pool = multiprocessing.Pool(8)
    pool.map(session, range(8))
The details are not that important (although it's Python, and therefore easy to read), but the point is that it opens 8 processes, each of which engages the same asio echo server (below) with 300 messages.
When run, it outputs
...
received "client 1 message"
sending "client 1 message"
received "client 2 message"
sending "client 2 message"
received "client 3 message"
received "client 0 message"
sending "client 3 message"
sending "client 0 message"
...
showing that the echo sessions are indeed interleaved.
Now for the echo server. I've slightly adapted the example from the docs:
#include <cstdlib>
#include <iostream>
#include <memory>
#include <utility>
#include <boost/asio.hpp>

using boost::asio::ip::tcp;

class session :
    public std::enable_shared_from_this<session> {
public:
    session(tcp::socket socket) : socket_(std::move(socket)) {}

    void start() { do_read(); }

private:
    void do_read() {
        auto self(shared_from_this());
        socket_.async_read_some(
            boost::asio::buffer(data_, max_length),
            [this, self](boost::system::error_code ec, std::size_t length) {
                if (!ec)
                    do_write(length);
            });
    }

    void do_write(std::size_t length) {
        auto self(shared_from_this());
        socket_.async_write_some(
            boost::asio::buffer(data_, length),
            [this, self](boost::system::error_code ec, std::size_t /*length*/) {
                if (!ec)
                    do_read();
            });
    }

private:
    tcp::socket socket_;
    enum { max_length = 1024 };
    char data_[max_length];
};

class server {
public:
    server(boost::asio::io_service& io_service, short port) :
        acceptor_(io_service, tcp::endpoint(tcp::v4(), port)),
        socket_(io_service) {
        do_accept();
    }

private:
    void do_accept() {
        acceptor_.async_accept(
            socket_,
            [this](boost::system::error_code ec) {
                if (!ec)
                    std::make_shared<session>(std::move(socket_))->start();
                do_accept();
            });
    }

    tcp::acceptor acceptor_;
    tcp::socket socket_;
};

int main(int argc, char* argv[]) {
    const int port = 5000;
    try {
        boost::asio::io_service io_service;
        server s{io_service, port};
        io_service.run();
    }
    catch (std::exception& e) {
        std::cerr << "Exception: " << e.what() << "\n";
    }
}
This shows that this server indeed interleaves.
Note that this is not the coroutine version. While I once played with the coroutine version a bit, I just couldn't get it to build on my current box (also, as sehe notes in the comments below, you might anyway prefer this more mainstream version for now).
However, this is not a fundamental difference w.r.t. your question. The non-coroutine version has callbacks explicitly launching new operations and supplying the next callback; the coroutine version uses a more sequential-looking paradigm. In both versions, each call returns to asio's control loop, which monitors all the current operations that can proceed.
From the asio coroutine docs:
Coroutines let you create a structure that mirrors the actual program logic. Asynchronous operations don’t split functions, because there are no handlers to define what should happen when an asynchronous operation completes. Instead of having handlers call each other, the program can use a sequential structure.
It's not that the sequential structure makes all operations sequential - that would eradicate the entire need for asio.

boost asio tcp async read/write

I have a problem understanding how boost asio handles this:
For watching my request/response on the client side, I can use the following boost example: Example
But I don't understand what happens if the server sends some status information to the client every X ms. Do I have to open a separate socket for this, or can my client tell apart the request, the response, and the cycleMessage?
Can it happen that the client sends a request and reads the cycleMessage as its response, because it is also waiting in async_read for that message?
class TcpConnectionServer : public boost::enable_shared_from_this<TcpConnectionServer>
{
public:
    typedef boost::shared_ptr<TcpConnectionServer> pointer;

    static pointer create(boost::asio::io_service& io_service)
    {
        return pointer(new TcpConnectionServer(io_service));
    }

    boost::asio::ip::tcp::socket& socket()
    {
        return m_socket;
    }

    void Start()
    {
        SendCycleMessage();
        boost::asio::async_read(
            m_socket, boost::asio::buffer(m_data, m_dataSize),
            boost::bind(&TcpConnectionServer::handle_read_data, shared_from_this(), boost::asio::placeholders::error));
    }

private:
    TcpConnectionServer(boost::asio::io_service& io_service)
        : m_socket(io_service), m_cycleUpdateRate(io_service, boost::posix_time::seconds(1))
    {
    }

    void handle_read_data(const boost::system::error_code& error_code)
    {
        if (!error_code)
        {
            std::string answer = doSomeThingWithData(m_data);
            writeImpl(answer);
            boost::asio::async_read(
                m_socket, boost::asio::buffer(m_data, m_dataSize),
                boost::bind(&TcpConnectionServer::handle_read_data, shared_from_this(), boost::asio::placeholders::error));
        }
        else
        {
            std::cout << error_code.message() << "ERROR DELETE READ \n";
            // delete this;
        }
    }

    void SendCycleMessage()
    {
        std::string data = "some usefull data";
        writeImpl(data);
        m_cycleUpdateRate.expires_from_now(boost::posix_time::seconds(1));
        m_cycleUpdateRate.async_wait(boost::bind(&TcpConnectionServer::SendCycleMessage, this));
    }

    void writeImpl(const std::string& message)
    {
        m_messageOutputQueue.push_back(message);
        if (m_messageOutputQueue.size() > 1)
        {
            // outstanding async_write
            return;
        }
        this->write();
    }

    void write()
    {
        m_message = m_messageOutputQueue[0];
        boost::asio::async_write(
            m_socket,
            boost::asio::buffer(m_message),
            boost::bind(&TcpConnectionServer::writeHandler, this, boost::asio::placeholders::error,
                        boost::asio::placeholders::bytes_transferred));
    }

    void writeHandler(const boost::system::error_code& error, const size_t bytesTransferred)
    {
        m_messageOutputQueue.pop_front();
        if (error)
        {
            std::cerr << "could not write: " << boost::system::system_error(error).what() << std::endl;
            return;
        }
        if (!m_messageOutputQueue.empty())
        {
            // more messages to send
            this->write();
        }
    }

    boost::asio::ip::tcp::socket m_socket;
    boost::asio::deadline_timer m_cycleUpdateRate;
    std::string m_message;
    const size_t m_sizeOfHeader = 5;
    boost::array<char, 5> m_headerData;
    std::vector<char> m_bodyData;
    std::deque<std::string> m_messageOutputQueue;
};
With this implementation I will not need boost::asio::strand, right? Because I will not modify m_messageOutputQueue from another thread.
But when, on the client side, I have an m_messageOutputQueue that I access from another thread, at that point I will need a strand, because then I need the synchronization? Did I misunderstand something?
The differentiation of the message is part of your application protocol.
ASIO merely provides transport.
Now, indeed, if you want to have a "keepalive" message you will have to design your protocol in such a way that the client can distinguish the messages.
The trick is to think of it at a higher level. Don't deal with async_read on the client directly. Instead, make async_read put messages on a queue (or several queues; status messages, for example, need not even go into a queue but could supersede a previous, not-yet-handled status update).
Then code your client against those queues.
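A minimal sketch of that idea (the names are mine, not from the thread): the read chain classifies each complete message, and the rest of the client only ever looks at the queues.
#include <deque>
#include <string>

enum class MessageType { Status, Response };

class Client {
public:
    // Called by the read chain once a complete message has been parsed.
    void handle_message(MessageType type, std::string payload) {
        if (type == MessageType::Status)
            m_latestStatus = std::move(payload);           // supersede, don't queue
        else
            m_responseQueue.push_back(std::move(payload)); // responses stay FIFO
    }

private:
    std::string m_latestStatus;              // only the newest status matters
    std::deque<std::string> m_responseQueue; // matched to requests, in order
};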
A simple thing that is typically done is to introduce message framing and a message type id:
FRAME offset 0: message length(N)
FRAME offset 4: message data
FRAME offset 4+N: message checksum
FRAME offset 4+N+sizeof checksum: sentinel (e.g. 0x00, or a larger unique signature)
The structure there makes the protocol more extensible. It's easy to add encryption/compression without touching all the other code. There's built-in error detection, etc.
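As a hedged illustration of that layout (the checksum and sentinel here are toy placeholders, and a real protocol would also pin down the byte order):
#include <cstdint>
#include <cstring>
#include <string>
#include <vector>

// Build one frame: [length(4)][data(N)][checksum(4)][sentinel(1)].
std::vector<char> make_frame(const std::string& payload)
{
    std::vector<char> frame(4);
    std::uint32_t len = static_cast<std::uint32_t>(payload.size());
    std::memcpy(frame.data(), &len, 4);                        // FRAME offset 0: length
    frame.insert(frame.end(), payload.begin(), payload.end()); // FRAME offset 4: data
    std::uint32_t sum = 0;                                     // toy checksum; use e.g. CRC32 in practice
    for (unsigned char c : payload) sum += c;
    char sumBytes[4];
    std::memcpy(sumBytes, &sum, 4);
    frame.insert(frame.end(), sumBytes, sumBytes + 4);         // FRAME offset 4+N: checksum
    frame.push_back('\0');                                     // FRAME offset 4+N+4: sentinel
    return frame;
}
On the receiving side, read a fixed 4-byte header first, then read exactly N+5 more bytes; a message type id would go right after the length field.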

udp point to point communication on windows multi-homed server

In my multicast receiver application I join a group and receive multicast data successfully.
Also, there's an api for filling any gaps. This uses udp. When a gap is detected a retransmission request is sent. There's a thread dedicated to receiving and processing udp datagrams that come back in response to a request.
This all worked fine on a windows machine with one interface card. Now we need to get this to run on a multi-homed machine with 2 NIC cards.
To get the multicast to work we had to add routes so that the app would send the joins out the correct NIC. Again, this part works fine.
However, the point-to-point udp receive_from method throws an error immediately upon entry. My understanding is that the sender_endpoint is filled in by the method itself. Do I need to do something special for this udp socket so it doesn't throw on entry? Do I need to bind the socket in some way, or set up special routes at the host level?
Any help would be appreciated.
Here's the error that comes back:
ERR:reRequestMoldUdp64::waitForResponseNT got error:An invalid argument was supplied
boost::asio::ip::address m_targetIP;
short m_port;
udp::endpoint m_receiver_endpoint;

m_receiver_endpoint.address(m_targetIP);
m_receiver_endpoint.port(m_port);
if (!m_socket.is_open()) {
    openUdpSocket();
}
while (1)
{
    m_last_ReceiveLen = 0;
    udp::endpoint sender_endpoint;
    try {
        //m_last_ReceiveLen = m_socket.receive_from( (boost::asio::buffer(m_Buffer, max_length)), sender_endpoint);
        m_last_ReceiveLen = m_socket.receive_from((boost::asio::buffer(m_Buffer, max_length)), sender_endpoint, 0, ec);
        if (ec) {
            _snprintf(logBuf, sizeof(logBuf), "%s got error:%s", __FUNCTION__, ec.message().c_str());
            MyLog(LOG_ERROR, logBuf);
            myExit(__FILE__, __LINE__, 1);
        }
    }
    catch (std::exception& e) {
        _snprintf(logBuf, sizeof(logBuf), "INFO:%s sender_endpoint.address[%s] error:[%s]",
                  __FUNCTION__,
                  sender_endpoint.address().to_string().c_str(),
                  e.what());
        MyLogAlways(logBuf);
        if (!m_socket.is_open()) {
            openUdpSocket();
        }
        if (++m_waitForResponseNTRetryCnt > 100) {
            myExit(__FILE__, __LINE__, 1);
        }
        else {
            Sleep(100); // 100 ms
            continue;
        }
    }
    std::string dummyStr;
    const UINT64 dummySeqno(0);
    udpResponseAPI(BuildMap, dummySeqno, dummyStr);
}
} // closes the enclosing function (header not shown)
void reRequestMoldUdp64::openUdpSocket()
{
    char logBuf[512];
    if (!m_socket.is_open()) {
        m_socket.open(udp::v4(), m_Error);
    }
    else {
        return;
    }
    if (!m_socket.is_open()) {
        _snprintf(logBuf, sizeof(logBuf), "%s udp socket didn't reopen error:[%s]", __FUNCTION__, m_Error.message().c_str());
        MyLog(LOG_ERROR, logBuf);
    }
    else {
        _snprintf(logBuf, sizeof(logBuf), "%s udp socket successfully opened", __FUNCTION__);
        MyLogAlways(logBuf);
    }
}
//----------------------------------------------------------------------------------------------------
void reRequestMoldUdp64::sendRequest(std::string request)
{
    MyLog("INFO:Begin sendRequest");
    m_lastRequestSent = request;
    char logBuf[512];
    char myBuff[128];
    int len(request.length());
    memcpy(myBuff, request.c_str(), len);
#if 0
    _snprintf(logBuf, sizeof(logBuf), "INFO:Before sendto() len:%d ip:%s port:%hu",
              len,
              m_receiver_endpoint.address().to_string().c_str(),
              m_receiver_endpoint.port());
    MyLogAlways(logBuf);
#endif
    _snprintf(logBuf, sizeof(logBuf), "%s sending udp rerequest to:%s", __FUNCTION__,
              m_receiver_endpoint.address().to_string().c_str());
    MyLogAlways(logBuf);
    if (m_socket.is_open()) {
        m_socket.send_to(boost::asio::buffer(myBuff, len), m_receiver_endpoint);
    }
    else {
        MyLogAlways("ERR:reRequest socket is not open");
        m_socket.open(udp::v4(), m_Error);
        m_socket.send_to(boost::asio::buffer(myBuff, len), m_receiver_endpoint);
    }
}
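One common cause of "An invalid argument was supplied" from receive_from (an assumption about this case, not a confirmed diagnosis): on Windows, receiving on a UDP socket that was opened but never bound fails with WSAEINVAL. A hedged sketch of openUdpSocket with an explicit bind; the local address and port are placeholders:
void reRequestMoldUdp64::openUdpSocket()
{
    if (m_socket.is_open())
        return;
    m_socket.open(udp::v4(), m_Error);
    // Bind to the local NIC/port that the retransmission server replies to,
    // so the stack knows which interface should receive the datagrams.
    udp::endpoint local(
        boost::asio::ip::address::from_string("192.0.2.10"), // placeholder: local NIC address
        m_port);                                             // placeholder: local receive port
    m_socket.bind(local, m_Error);
}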