Async accept an SSL socket using asio and C++

I am trying to write an async server using asio with SSL-encrypted sockets. Currently I have code that does not use SSL, and after following this tutorial I have a basic idea of how to accept an SSL socket; however, I do not know how to adapt my code to accept an SSL connection:
void waitForClients() {
    acceptor.async_accept(
        [this](std::error_code ec, asio::ip::tcp::socket socket) {
            if (!ec) {
                Conn newConn = std::make_shared<Connection>(ctx, std::move(socket));
                connections.push_back(newConn);
            } else {
                std::cerr << "[SERVER] New connection error: " << ec.message() << "\n";
            }
            waitForClients();
        }
    );
}

// This is how the tutorial shows to accept a connection
ssl_socket socket(io_context, ssl_context);
acceptor.accept(socket.next_layer());
The issue is that the callback for acceptor.async_accept gives an ordinary asio::ip::tcp::socket rather than an asio::ssl::stream<asio::ip::tcp::socket> (the ssl_socket type used in the tutorial), and I cannot find any documentation that suggests there is a way to async_accept an SSL socket directly. The only method I have seen is to construct the SSL socket first and then accept into its next_layer(), which cannot be done in this asynchronous manner.
Any help would be much appreciated.

I solved this by realising that the constructor of asio::ssl::stream<asio::ip::tcp::socket> takes an initialiser for the underlying asio::ip::tcp::socket together with the SSL context, so an already-accepted socket can simply be moved into the stream:
void waitForClients() {
    acceptor.async_accept(
        [this](std::error_code ec, asio::ip::tcp::socket socket) {
            if (!ec) {
                // Initialise an SSL stream from the already-accepted socket
                asio::ssl::stream<asio::ip::tcp::socket> sslStream(std::move(socket), sslCtx);
                // Then pass it on to be used (the stream has to be moved, it is not copyable)
                Conn newConn = std::make_shared<Connection>(ctx, std::move(sslStream));
                connections.push_back(newConn);
            } else {
                std::cerr << "[SERVER] New connection error: " << ec.message() << "\n";
            }
            waitForClients();
        }
    );
}
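Note that wrapping the socket is not enough on its own: the server side still has to complete the TLS handshake via async_handshake before any application reads or writes. A rough sketch, assuming Connection exposes its wrapped asio::ssl::stream through a stream() accessor (a name made up here):

// Sketch only: perform the server-side handshake on a freshly wrapped socket.
// Connection::stream() is a hypothetical accessor returning the
// asio::ssl::stream<asio::ip::tcp::socket> owned by the connection.
void startHandshake(Conn newConn) {
    newConn->stream().async_handshake(
        asio::ssl::stream_base::server,
        [newConn](std::error_code ec) {
            if (!ec) {
                // Handshake complete: it is now safe to start async reads/writes.
            } else {
                std::cerr << "[SERVER] Handshake error: " << ec.message() << "\n";
            }
        });
}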

Related

How could I use a different port for transmitting data with asio (non-Boost version)?

Normally, the port set up for listening is used for accepting connections only, right? When a new connection is established, the server will use a different port for transmitting data. Here are some parts of the code:
void WaitForConnection()
{
    m_asioAcceptor.async_accept(
        [this](std::error_code ec, asio::ip::tcp::socket socket)
        {
            if (!ec)
            {
                std::shared_ptr<connection<T>> newconn =
                    std::make_shared<connection<T>>(connection<T>::owner::server,
                        m_asioContext, std::move(socket),
                        m_qMessagesIn);
                ...
            }
            else
            {
                std::cout << "[SERVER] New Connection Error: " << ec.message() << "\n";
            }
            WaitForConnection();
        });
}
Then I found that the server uses the same port for both listening and transmitting data with all clients. How could I instruct asio to use a unique port for each connection?
Thank you for your help!
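For reference, TCP itself does not need a separate server-side port per connection: every accepted socket keeps the acceptor's local port and is distinguished by the client's address and port. A small sketch (standalone asio assumed, the helper name printEndpoints is made up here) that makes this visible for an accepted socket:

#include <asio.hpp>
#include <iostream>

// Sketch: print both ends of an accepted connection. local_endpoint() reports
// the same port as the acceptor for every client; remote_endpoint() differs,
// and that (address, port) pair is what tells the connections apart.
void printEndpoints(const asio::ip::tcp::socket& socket)
{
    std::cout << "local:  " << socket.local_endpoint()
              << "  remote: " << socket.remote_endpoint() << "\n";
}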

UDP broadcast using boost::asio under Windows

I'm having problems with the UDP broadcast subsection of an application. I am using Boost 1.62.0 under Windows 10.
void test_udp_broadcast(void)
{
    boost::asio::io_service io_service;
    boost::asio::ip::udp::socket socket(io_service);
    boost::asio::ip::udp::endpoint remote_endpoint;

    socket.open(boost::asio::ip::udp::v4());
    socket.set_option(boost::asio::ip::udp::socket::reuse_address(true));
    socket.set_option(boost::asio::socket_base::broadcast(true));

    remote_endpoint = boost::asio::ip::udp::endpoint(boost::asio::ip::address_v4::any(), 4000);

    try {
        socket.bind(remote_endpoint);
        socket.send_to(boost::asio::buffer("abc", 3), remote_endpoint);
    } catch (boost::system::system_error e) {
        std::cout << e.what() << std::endl;
    }
}
From the catch block I receive:
send_to: The requested address is not valid in its context
I've attempted to change the endpoint from any() to broadcast(); however, this only throws the same error, on bind() instead.
I normally program under Linux, and this code works on my normal target, so I'm scratching my head as to what I'm doing wrong here. Can anyone give me a poke in the right direction?
I believe you want to bind your socket to a local endpoint with any() (if you wish to receive broadcast packets - see this question), and send to a remote endpoint using broadcast() (see this question).
The following compiles for me and does not throw any errors:
void test_udp_broadcast(void)
{
    boost::asio::io_service io_service;
    boost::asio::ip::udp::socket socket(io_service);
    boost::asio::ip::udp::endpoint local_endpoint;
    boost::asio::ip::udp::endpoint remote_endpoint;

    socket.open(boost::asio::ip::udp::v4());
    socket.set_option(boost::asio::ip::udp::socket::reuse_address(true));
    socket.set_option(boost::asio::socket_base::broadcast(true));

    local_endpoint = boost::asio::ip::udp::endpoint(boost::asio::ip::address_v4::any(), 4000);
    remote_endpoint = boost::asio::ip::udp::endpoint(boost::asio::ip::address_v4::broadcast(), 4000);

    try {
        socket.bind(local_endpoint);
        socket.send_to(boost::asio::buffer("abc", 3), remote_endpoint);
    } catch (boost::system::system_error e) {
        std::cout << e.what() << std::endl;
    }
}
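As a quick sanity check, a minimal blocking receiver bound to any() on the same port can confirm the datagram actually goes out; a sketch along these lines:

#include <boost/asio.hpp>
#include <iostream>

// Sketch of a matching receiver: bind to 0.0.0.0:4000 and wait for one datagram
// (e.g. the "abc" broadcast sent above), then print its size and sender.
void test_udp_receive(void)
{
    boost::asio::io_service io_service;
    boost::asio::ip::udp::socket socket(io_service);
    boost::asio::ip::udp::endpoint sender_endpoint;
    char data[128];

    socket.open(boost::asio::ip::udp::v4());
    socket.set_option(boost::asio::ip::udp::socket::reuse_address(true));
    socket.bind(boost::asio::ip::udp::endpoint(boost::asio::ip::address_v4::any(), 4000));

    std::size_t n = socket.receive_from(boost::asio::buffer(data), sender_endpoint);
    std::cout << "received " << n << " bytes from " << sender_endpoint << std::endl;
}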

Handling multiple clients with async_accept

I'm writing a secure SSL echo server with Boost.Asio and coroutines. I'd like this server to be able to serve multiple concurrent clients; this is my code:
try {
    boost::asio::io_service io_service;
    boost::asio::spawn(io_service, [&io_service](boost::asio::yield_context yield) {
        auto ctx = boost::asio::ssl::context{ boost::asio::ssl::context::sslv23 };
        ctx.set_options(
            boost::asio::ssl::context::default_workarounds
            | boost::asio::ssl::context::no_sslv2
            | boost::asio::ssl::context::single_dh_use);
        ctx.use_private_key_file(..); // My data setup
        ctx.use_certificate_chain_file(...); // My data setup

        boost::asio::ip::tcp::acceptor acceptor(io_service,
            boost::asio::ip::tcp::endpoint(boost::asio::ip::tcp::v4(), port));

        for (;;) {
            boost::asio::ssl::stream<boost::asio::ip::tcp::socket> sock{ io_service, ctx };
            acceptor.async_accept(sock.next_layer(), yield);
            sock.async_handshake(boost::asio::ssl::stream_base::server, yield);

            auto ec = boost::system::error_code{};
            char data_[1024];
            auto nread = sock.async_read_some(boost::asio::buffer(data_, 1024), yield[ec]);
            if (ec == boost::asio::error::eof)
                return; //connection closed cleanly by peer
            else if (ec)
                throw boost::system::system_error(ec); //some other error, is this desirable?

            sock.async_write_some(boost::asio::buffer(data_, nread), yield[ec]);
            if (ec == boost::asio::error::eof)
                return; //connection closed cleanly by peer
            else if (ec)
                throw boost::system::system_error(ec); //some other error

            // Shutdown gracefully
            sock.async_shutdown(yield[ec]);
            if (ec && (ec.category() == boost::asio::error::get_ssl_category())
                && (SSL_R_PROTOCOL_IS_SHUTDOWN == ERR_GET_REASON(ec.value())))
            {
                sock.lowest_layer().close();
            }
        }
    });
    io_service.run();
}
catch (std::exception& e)
{
    std::cerr << "Exception: " << e.what() << "\n";
}
Anyway, I'm not sure the code above will do what I want: in theory, calling async_accept returns control to the io_service manager.
Will another connection be accepted if one has already been accepted, i.e. once execution is already past the async_accept line?
It's a bit hard to understand the specifics of your question, since the code is incomplete (e.g., there's a return in your block, but it's unclear what that block is part of).
Notwithstanding, the documentation contains an example of a TCP echo server using coroutines. It seems you basically need to add SSL support to it, to adapt it to your needs.
If you look at main, it has the following chunk:
boost::asio::spawn(io_service,
    [&](boost::asio::yield_context yield)
    {
        tcp::acceptor acceptor(io_service,
            tcp::endpoint(tcp::v4(), std::atoi(argv[1])));
        for (;;)
        {
            boost::system::error_code ec;
            tcp::socket socket(io_service);
            acceptor.async_accept(socket, yield[ec]);
            if (!ec) std::make_shared<session>(std::move(socket))->go();
        }
    });
This loops endlessly, and, following each (successful) call to async_accept, handles accepting the next connection (while this connection and others might still be active).
Again, I'm not sure about your code, but it contains exits from the loop like
return; //connection closed cleanly by peer
To illustrate the point, here are two applications.
The first is a Python multiprocessing echo client, adapted from PMOTW:
import socket
import sys
import multiprocessing

def session(i):
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server_address = ('localhost', 5000)
    print 'connecting to %s port %s' % server_address
    sock.connect(server_address)
    print 'connected'
    for _ in range(300):
        try:
            # Send data
            message = 'client ' + str(i) + ' message'
            print 'sending "%s"' % message
            sock.sendall(message)

            # Look for the response
            amount_received = 0
            amount_expected = len(message)
            while amount_received < amount_expected:
                data = sock.recv(16)
                amount_received += len(data)
                print 'received "%s"' % data
        except:
            print >>sys.stderr, 'closing socket'
            sock.close()

if __name__ == '__main__':
    pool = multiprocessing.Pool(8)
    pool.map(session, range(8))
The details are not that important (although it's Python, and therefore easy to read), but the point is that it opens up 8 processes, and each engages the same asio echo server (below) with 300 messages.
When run, it outputs
...
received "client 1 message"
sending "client 1 message"
received "client 2 message"
sending "client 2 message"
received "client 3 message"
received "client 0 message"
sending "client 3 message"
sending "client 0 message"
...
showing that the echo sessions are indeed interleaved.
Now for the echo server. I've slightly adapted the example from the docs:
#include <cstdlib>
#include <iostream>
#include <memory>
#include <utility>
#include <boost/asio.hpp>

using boost::asio::ip::tcp;

class session :
    public std::enable_shared_from_this<session> {
public:
    session(tcp::socket socket) : socket_(std::move(socket)) {}

    void start() { do_read(); }

private:
    void do_read() {
        auto self(shared_from_this());
        socket_.async_read_some(
            boost::asio::buffer(data_, max_length),
            [this, self](boost::system::error_code ec, std::size_t length) {
                if (!ec)
                    do_write(length);
            });
    }

    void do_write(std::size_t length) {
        auto self(shared_from_this());
        socket_.async_write_some(
            boost::asio::buffer(data_, length),
            [this, self](boost::system::error_code ec, std::size_t /*length*/) {
                if (!ec)
                    do_read();
            });
    }

private:
    tcp::socket socket_;
    enum { max_length = 1024 };
    char data_[max_length];
};

class server {
public:
    server(boost::asio::io_service& io_service, short port) :
        acceptor_(io_service, tcp::endpoint(tcp::v4(), port)),
        socket_(io_service) {
        do_accept();
    }

private:
    void do_accept() {
        acceptor_.async_accept(
            socket_,
            [this](boost::system::error_code ec) {
                if (!ec)
                    std::make_shared<session>(std::move(socket_))->start();
                do_accept();
            });
    }

    tcp::acceptor acceptor_;
    tcp::socket socket_;
};

int main(int argc, char* argv[]) {
    const int port = 5000;
    try {
        boost::asio::io_service io_service;
        server s{io_service, port};
        io_service.run();
    }
    catch (std::exception& e) {
        std::cerr << "Exception: " << e.what() << "\n";
    }
}
This shows that this server indeed interleaves.
Note that this is not the coroutine version. While I once played with the coroutine version a bit, I just couldn't get it to build on my current box (also, as sehe notes in the comments below, you might anyway prefer this more mainstream version for now).
However, this is not a fundamental difference with respect to your question. The non-coroutine version has callbacks explicitly launching new operations and supplying the next callback; the coroutine version uses a more sequential-looking paradigm. In both versions, each call returns to asio's control loop, which monitors all the current operations that can proceed.
From the asio coroutine docs:
Coroutines let you create a structure that mirrors the actual program logic. Asynchronous operations don’t split functions, because there are no handlers to define what should happen when an asynchronous operation completes. Instead of having handlers call each other, the program can use a sequential structure.
It's not that the sequential structure makes all operations sequential - that would eradicate the entire need for asio.
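Applied to your SSL code, the same idea would look roughly like the sketch below (untested; it assumes io_service, ctx and port are already set up as in your snippet, with the certificate and key loaded before the loop): accept in one coroutine, then hand each connection to its own coroutine, so the loop goes straight back to async_accept.

// Sketch: one coroutine accepts, one coroutine per client echoes, so accepting
// new clients is not blocked by clients that are still being served.
boost::asio::spawn(io_service, [&](boost::asio::yield_context yield) {
    boost::asio::ip::tcp::acceptor acceptor(io_service,
        boost::asio::ip::tcp::endpoint(boost::asio::ip::tcp::v4(), port));
    for (;;) {
        auto sock = std::make_shared<
            boost::asio::ssl::stream<boost::asio::ip::tcp::socket>>(io_service, ctx);
        boost::system::error_code ec;
        acceptor.async_accept(sock->next_layer(), yield[ec]);
        if (ec) continue; // keep accepting even if one accept fails

        // Per-connection coroutine: handshake, echo until EOF/error, shut down.
        boost::asio::spawn(io_service, [sock](boost::asio::yield_context yield) {
            boost::system::error_code ec;
            sock->async_handshake(boost::asio::ssl::stream_base::server, yield[ec]);
            if (ec) return;
            char data[1024];
            for (;;) {
                auto n = sock->async_read_some(boost::asio::buffer(data), yield[ec]);
                if (ec) break; // eof or error ends this session only, not the server
                boost::asio::async_write(*sock, boost::asio::buffer(data, n), yield[ec]);
                if (ec) break;
            }
            sock->async_shutdown(yield[ec]);
        });
    }
});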

Stopping async_connect

I currently use Windows 7 64bit, MSVC2010 and Boost.Asio 1.57. I would like to connect to a TCP server with a timeout. If the timeout expires, I should close the connection as soon as possible as the IP address (chosen by a user) is probably wrong.
I know I should use async requests because the sync requests have no timeout option. So I'm using async_connect with an external timeout. This is a solution I have found in many places, including Stack Overflow.
The problem is that the following code does not behave as I wished: async_connect is not "cancelled" by socket.close(). On my computer, closing the socket takes about 15 seconds to complete, which makes my program unresponsive for a while...
I would like to have a decent timeout (approx. 3 seconds) and close the socket after this time, so that the user can try to connect with another IP address (from the HMI).
#include <iostream>
#include <boost/asio.hpp>
#include <boost/shared_ptr.hpp>
#include <boost/bind.hpp>

using boost::asio::ip::tcp;

class tcp_client
{
public:
    tcp_client(boost::asio::io_service& io_service, tcp::endpoint& endpoint, long long timeout = 3000000)
        : m_io_service(io_service),
          m_endpoint(endpoint),
          m_timer(io_service),
          m_timeout(timeout)
    {
        connect();
    }

    void stop()
    {
        m_socket->close();
    }

private:
    void connect()
    {
        m_socket.reset(new tcp::socket(m_io_service));

        std::cout << "TCP Connection in progress" << std::endl;
        m_socket->async_connect(m_endpoint,
            boost::bind(&tcp_client::handle_connect, this,
                m_socket,
                boost::asio::placeholders::error)
            );

        m_timer.expires_from_now(boost::posix_time::microseconds(m_timeout));
        m_timer.async_wait(boost::bind(&tcp_client::HandleWait, this, boost::asio::placeholders::error));
    }

    void handle_connect(boost::shared_ptr<tcp::socket> socket, const boost::system::error_code& error)
    {
        if (!error)
        {
            std::cout << "TCP Connection : connected !" << std::endl;
            m_timer.expires_at(boost::posix_time::pos_infin); // Stop the timer !
            // Read normally
        }
        else
        {
            std::cout << "TCP Connection failed" << std::endl;
        }
    }

public:
    void HandleWait(const boost::system::error_code& error)
    {
        if (!error)
        {
            std::cout << "Connection not established..." << std::endl;
            std::cout << "Trying to close socket..." << std::endl;
            stop();
            return;
        }
    }

    boost::asio::io_service& m_io_service;
    boost::shared_ptr<tcp::socket> m_socket;
    tcp::endpoint m_endpoint;
    boost::asio::deadline_timer m_timer;
    long long m_timeout;
};

int main()
{
    boost::asio::io_service io_service;
    tcp::endpoint endpoint(boost::asio::ip::address_v4::from_string("192.168.10.74"), 7171); // invalid address
    tcp_client tcpc(io_service, endpoint);
    io_service.run();
    system("pause");
}
The only solution I have found is to run io_service::run() in many threads and create a new socket for each connection. But this solution does not appear valid to me, as I would have to specify a number of threads in advance and I don't know how many wrong addresses the user will enter in my HMI. Yes, some users are not as clever as others...
What's wrong with my code? How do I interrupt a TCP connection attempt in a clean and fast way?
Best regards,
Poukill
There's nothing elementary wrong with the code, and it does exactly what you desire on my Linux box:
TCP Connection in progress
Connection not established...
Trying to close socket...
TCP Connection failed
real 0m3.003s
user 0m0.002s
sys 0m0.000s
Notes:
You may have success adding a cancel() call to the stop() function:
void stop()
{
    m_socket->cancel();
    m_socket->close();
}
You should also check whether the timer's wait was aborted, though:
void HandleWait(const boost::system::error_code& error)
{
    if (error != boost::asio::error::operation_aborted)
    {
        std::cout << "Connection not established..." << std::endl;
        std::cout << "Trying to close socket..." << std::endl;
        stop();
        return;
    }
}
Otherwise the implicit cancel of the timer after successful connect will still close() the socket :)
If you want to run (many) connection attempts in parallel, you don't need any more threads or even more than one io_service. This is the essence of Boost Asio: you can do asynchronous IO operations on a single thread.
This answer gives a pretty isolated picture of this (even though the connections are done using ZMQ there): boost asio deadline_timer async_wait(N seconds) twice within N seconds cause operation canceled
another example, this time about timing out many sessions independently on a single io_service: boost::asio::deadline_timer::async_wait not firing callback
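To make that concrete, a sketch (reusing the tcp_client class and the using-declaration from the question above; the addresses are only examples) of several connection attempts sharing one io_service and one thread, each timing out independently:

#include <memory>
#include <string>
#include <vector>

// Sketch: every tcp_client owns its own socket and deadline_timer, so each
// attempt times out on its own while a single io_service::run() call on one
// thread drives all of them.
int main()
{
    boost::asio::io_service io_service;

    std::vector<std::string> addresses = { "192.168.10.74", "192.168.10.75" }; // example IPs
    std::vector<std::unique_ptr<tcp_client>> clients;
    for (const auto& addr : addresses)
    {
        tcp::endpoint endpoint(boost::asio::ip::address_v4::from_string(addr), 7171);
        clients.emplace_back(new tcp_client(io_service, endpoint));
    }

    io_service.run(); // returns once every attempt has connected or timed out
}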

How to check if SSL socket gets closed (async)

I've been using Boost.Asio for networking for some time, but never with SSL sockets. Now I'm required to use SSL sockets and they work pretty well, but I am not able to find out when a socket gets closed. I usually did this with regular sockets by checking the error value in the callback passed to boost::asio::async_read_until().
Here are some relevant code snippets:
boost::asio::streambuf streambuf;
boost::asio::ssl::context sslctx(io_service, boost::asio::ssl::context::tlsv1);
boost::asio::ssl::stream<boost::asio::ip::tcp::socket> sock(io_service, sslctx);

void DoAsyncRead()
{
    boost::asio::async_read_until(sock, streambuf, "\n", MyReadHandler);
}

void MyReadHandler(const boost::system::error_code& error, size_t bytes_transferred)
{
    if (error) {
        std::cout << "Read error: " << error.message() << std::endl;
    } else {
        // ...
    }
}
The error condition is never true, even if I kill the server or drop the client connection. How can I tell whether the connection has been closed?
EOS is not an error condition in most APIs. It is a sentinel value returned instead of a byte count, typically zero (Unix) or -1 (Java).
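For Boost.Asio in particular, there is no byte-count sentinel: the async handler is told about the end of the stream through its error_code argument, and with SSL streams the subtlety is which code shows up. A sketch of a read handler along those lines, assuming a reasonably recent Boost (1.64 or later, which provides boost::asio::ssl::error::stream_truncated):

#include <boost/asio.hpp>
#include <boost/asio/ssl.hpp>
#include <iostream>

// Sketch: a clean close (preceded by an SSL shutdown) shows up as
// boost::asio::error::eof; a peer that simply disappears typically shows up
// as the SSL "short read" condition, exposed as ssl::error::stream_truncated.
void MyReadHandler(const boost::system::error_code& error, std::size_t bytes_transferred)
{
    const bool closed =
        error == boost::asio::error::eof
        || error == boost::asio::ssl::error::stream_truncated;

    if (closed) {
        std::cout << "Connection closed by peer" << std::endl;
    } else if (error) {
        std::cout << "Read error: " << error.message() << std::endl;
    } else {
        std::cout << "Read " << bytes_transferred << " bytes" << std::endl;
    }
}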