I have to write a program that initializes an array of TCP sockets and uses async I/O to read data through a thread pool. I'm new to async I/O, thread pools, and shared_ptrs. What I have now is a working program with one socket. Here's the clipping:
boost::shared_ptr< asio::ip::tcp::socket > sock1(
new asio::ip::tcp::socket( *io_service )
);
boost::shared_ptr< asio::ip::tcp::acceptor > acceptor( new asio::ip::tcp::acceptor( *io_service ) );
asio::ip::tcp::endpoint endpoint(asio::ip::tcp::v4(), portNum);
acceptor->open( endpoint.protocol() );
acceptor->set_option( asio::ip::tcp::acceptor::reuse_address( false ) );
acceptor->bind( endpoint );
acceptor->listen();
I am stuck on getting similar code for an "array of sockets"; that is, I want to have acceptor[] entries that are bound to endpoint[] entries. I must pass around pointers to the acceptors and sockets, so shared_ptr comes in, and I am unable to get it right.
for (int i = 0; i < 10; i++) {
// init socket[i] with *io_service
// init endpoint[i]
// init acceptor[i] with *io_service
acceptor[i]->listen();
}
(btw, do I really need a socket[] array for this purpose?) Can someone please help me?
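A minimal sketch of that loop (my assumptions: ten consecutive ports starting at portNum, and asio is an alias for boost::asio, as in the snippet above):
// needs <vector> and <boost/shared_ptr.hpp>
std::vector<boost::shared_ptr<asio::ip::tcp::acceptor>> acceptors;
for (int i = 0; i < 10; i++) {
    // one endpoint and one acceptor per port; the shared_ptrs can be passed around freely
    asio::ip::tcp::endpoint endpoint(asio::ip::tcp::v4(), portNum + i);
    boost::shared_ptr<asio::ip::tcp::acceptor> acceptor(
        new asio::ip::tcp::acceptor(*io_service));
    acceptor->open(endpoint.protocol());
    acceptor->bind(endpoint);
    acceptor->listen();
    acceptors.push_back(acceptor);
}
// No socket[] is needed up front: a socket (or session object) can be created per
// accepted connection when calling acceptor->async_accept(), as the answer below does.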
Here is a full example of using Boost ASIO to implement a TCP echo server listening on multiple ports, with a thread pool to distribute work across multiple cores. It is based on this example from the Boost documentation (which provides a single-threaded TCP echo server).
Session class
The session class represents a single active socket connection with a client. It reads from the socket and then writes the same data back into the socket to echo it to the client. The implementation uses the async_... functions provided by Boost ASIO: these functions register a callback with the I/O service that will be triggered when the I/O operation has finished.
session.h
#pragma once
#include <array>
#include <cstdint>
#include <memory>
#include <boost/asio.hpp>
/**
* A TCP session opened on the server.
*/
class session : public std::enable_shared_from_this<session> {
using endpoint_t = boost::asio::ip::tcp::endpoint;
using socket_t = boost::asio::ip::tcp::socket;
public:
session(boost::asio::io_service &service);
/**
* Start reading from the socket.
*/
void start();
/**
* Callback for socket reads.
*/
void handle_read(const boost::system::error_code &ec,
size_t bytes_transferred);
/**
* Callback for socket writes.
*/
void handle_write(const boost::system::error_code &ec);
/**
* Get a reference to the session socket.
*/
socket_t &socket() { return socket_; }
private:
/**
* Session socket
*/
socket_t socket_;
/**
* Buffer to be used for r/w operations.
*/
std::array<uint8_t, 4096> buffer_;
};
session.cpp
#include "session.h"
#include <functional>
#include <iostream>
#include <thread>
using boost::asio::async_write;
using boost::asio::buffer;
using boost::asio::io_service;
using boost::asio::error::connection_reset;
using boost::asio::error::eof;
using boost::system::error_code;
using boost::system::system_error;
using std::placeholders::_1;
using std::placeholders::_2;
session::session(io_service &service) : socket_{service} {}
void session::start() {
auto handler = std::bind(&session::handle_read, shared_from_this(), _1, _2);
socket_.async_read_some(buffer(buffer_), handler);
}
void session::handle_read(const error_code &ec, size_t bytes_transferred) {
if (ec) {
if (ec == eof || ec == connection_reset) {
return;
}
throw system_error{ec};
}
std::cout << "Thread " << std::this_thread::get_id() << ": Received "
<< bytes_transferred << " bytes on " << socket_.local_endpoint()
<< " from " << socket_.remote_endpoint() << std::endl;
auto handler = std::bind(&session::handle_write, shared_from_this(), _1);
async_write(socket_, buffer(buffer_.data(), bytes_transferred), handler);
}
void session::handle_write(const error_code &ec) {
if (ec) {
throw system_error{ec};
}
auto handler = std::bind(&session::handle_read, shared_from_this(), _1, _2);
socket_.async_read_some(buffer(buffer_), handler);
}
Server class
The server class creates an acceptor for each given port. Each acceptor listens on its port and dispatches a socket for each incoming connection request. Waiting for an incoming connection is again implemented with an async_... function.
server.h
#pragma once
#include <vector>
#include <boost/asio.hpp>
#include "session.h"
/**
* Listens to a socket and dispatches sessions for each incoming request.
*/
class server {
using acceptor_t = boost::asio::ip::tcp::acceptor;
using endpoint_t = boost::asio::ip::tcp::endpoint;
using socket_t = boost::asio::ip::tcp::socket;
public:
server(boost::asio::io_service &service, const std::vector<uint16_t> &ports);
/**
* Start listening for incoming requests.
*/
void start_accept(size_t index);
/**
* Callback for when a request comes in.
*/
void handle_accept(size_t index, std::shared_ptr<session> new_session,
const boost::system::error_code &ec);
private:
/**
* Reference to the I/O service that will call our callbacks.
*/
boost::asio::io_service &service_;
/**
* List of acceptors each listening to (a different) socket.
*/
std::vector<acceptor_t> acceptors_;
};
server.cpp
#include "server.h"
#include <algorithm>
#include <functional>
#include <iterator>
#include <boost/asio.hpp>
using std::placeholders::_1;
using std::placeholders::_2;
using boost::asio::io_service;
using boost::asio::error::eof;
using boost::system::error_code;
using boost::system::system_error;
server::server(boost::asio::io_service &service,
const std::vector<uint16_t> &ports)
: service_{service} {
auto create_acceptor = [&](uint16_t port) {
acceptor_t acceptor{service};
endpoint_t endpoint{boost::asio::ip::tcp::v4(), port};
acceptor.open(endpoint.protocol());
acceptor.set_option(acceptor_t::reuse_address(false));
acceptor.bind(endpoint);
acceptor.listen();
return acceptor;
};
std::transform(ports.begin(), ports.end(), std::back_inserter(acceptors_),
create_acceptor);
for (size_t i = 0; i < acceptors_.size(); i++) {
start_accept(i);
}
}
void server::start_accept(size_t index) {
auto new_session{std::make_shared<session>(service_)};
auto handler =
std::bind(&server::handle_accept, this, index, new_session, _1);
acceptors_[index].async_accept(new_session->socket(), handler);
}
void server::handle_accept(size_t index, std::shared_ptr<session> new_session,
const boost::system::error_code &ec) {
if (ec) {
throw system_error{ec};
}
new_session->start();
start_accept(index);
}
Main
The main function creates the server for a series of ports.
For this example, the ports are set to 5000,...,5009. The main function then spawns one thread per CPU core (capped at the number of ports); each thread calls the run function of the I/O service provided by Boost ASIO. The I/O service is capable of handling such a multi-threading scenario, dispatching work among the threads that have called its run function (reference):
Multiple threads may call the run() function to set up a pool of threads from which the io_context may execute handlers. All threads that are waiting in the pool are equivalent and the io_context may choose any one of them to invoke a handler.
server_main.cpp
#include "server.h"
#include <algorithm>
#include <numeric>
#include <boost/asio.hpp>
#include <boost/thread.hpp>
int main() {
std::vector<uint16_t> ports{};
// Fill ports with range [5000,5000+n)
ports.resize(10);
std::iota(ports.begin(), ports.end(), 5000);
boost::asio::io_service service{};
server s{service, ports};
// Spawn thread group for running the I/O service
size_t thread_count = std::min(
static_cast<size_t>(boost::thread::hardware_concurrency()), ports.size());
boost::thread_group tg{};
for (size_t i = 0; i < thread_count; ++i) {
tg.create_thread([&]() { service.run(); });
}
tg.join_all();
return 0;
}
You could compile the server, for example, with g++ -O2 {session,server,server_main}.cpp -o server -lboost_thread -lpthread (the libraries are listed after the sources so the linker resolves them correctly). If you run the server with clients that send it random data, you would get output such as:
Thread 140043413878528: Received 4096 bytes on 127.0.0.1:5007 from 127.0.0.1:40856
Thread 140043405485824: Received 4096 bytes on 127.0.0.1:5000 from 127.0.0.1:42556
Thread 140043388700416: Received 4096 bytes on 127.0.0.1:5005 from 127.0.0.1:58582
Thread 140043388700416: Received 4096 bytes on 127.0.0.1:5001 from 127.0.0.1:40192
Thread 140043388700416: Received 4096 bytes on 127.0.0.1:5003 from 127.0.0.1:42508
Thread 140043397093120: Received 4096 bytes on 127.0.0.1:5008 from 127.0.0.1:37808
Thread 140043388700416: Received 4096 bytes on 127.0.0.1:5006 from 127.0.0.1:35440
Thread 140043397093120: Received 4096 bytes on 127.0.0.1:5009 from 127.0.0.1:58306
Thread 140043405485824: Received 4096 bytes on 127.0.0.1:5002 from 127.0.0.1:56300
You can see the server handling multiple ports, with work being distributed among the worker threads (not necessarily restricting each thread to a specific port).
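For completeness, here is a small synchronous test client, sketched purely as an illustration (it is not part of the original answer): it connects to each port on localhost, sends 4096 random bytes, and reads the echo back, which is one way to produce traffic like the log above.
#include <array>
#include <cstdint>
#include <random>
#include <boost/asio.hpp>

int main() {
  boost::asio::io_service service{};

  // Fill a 4 KiB buffer with random bytes.
  std::array<char, 4096> data{};
  std::mt19937 rng{std::random_device{}()};
  std::uniform_int_distribution<int> dist{0, 255};
  for (auto &byte : data) {
    byte = static_cast<char>(dist(rng));
  }

  // Send the buffer to every server port and read the echo back.
  for (uint16_t port = 5000; port < 5010; ++port) {
    boost::asio::ip::tcp::socket socket{service};
    boost::asio::ip::tcp::endpoint endpoint{
        boost::asio::ip::address_v4::loopback(), port};
    socket.connect(endpoint);
    boost::asio::write(socket, boost::asio::buffer(data));
    std::array<char, 4096> echo{};
    boost::asio::read(socket, boost::asio::buffer(echo));
  }
  return 0;
}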
Related
Imagine that you have some websocket client that downloads some data in a loop, like this:
#include <boost/asio.hpp>
#include <boost/beast.hpp>
#include "nlohmann/json.hpp"
namespace beast = boost::beast;
namespace websocket = beast::websocket;
using tcp = boost::asio::ip::tcp;
class Client {
public:
Client(boost::asio::io_context &ctx) : ws_{ctx}, ctx_{ctx} {
ws_.set_option(websocket::stream_base::timeout::suggested(boost::beast::role_type::client));
#define HOST "127.0.0.1"
#define PORT "8000"
boost::asio::connect(ws_.next_layer(), tcp::resolver{ctx_}.resolve(HOST, PORT));
ws_.handshake(HOST ":" PORT, "/api/v1/music");
#undef HOST
#undef PORT
}
~Client() {
if (ws_.is_open()) {
ws_.close(websocket::normal);
}
}
nlohmann::json NextPacket(std::size_t offset) {
nlohmann::json request;
request["offset"] = offset;
ws_.write(boost::asio::buffer(request.dump()));
beast::flat_buffer buffer;
ws_.read(buffer);
return nlohmann::json::parse(std::string_view{reinterpret_cast<const char *>(buffer.data().data()), buffer.size()});
}
private:
boost::beast::websocket::stream<boost::asio::ip::tcp::socket> ws_;
boost::asio::io_context &ctx_;
};
// ... some function
int main() {
boost::asio::io_context context;
boost::asio::executor_work_guard<boost::asio::io_context::executor_type> guard{context.get_executor()};
std::thread{[&context]() { context.run(); }}.detach();
static constexpr std::size_t kSomeVeryBigConstant{1'000'000'000};
Client client{context};
std::size_t offset{};
while (offset < kSomeVeryBigConstant) {
offset += client.NextPacket(offset)["offset"].get<std::size_t>();
// UPDATE:
userDefinedLongPauseHere();
}
}
On the server side we have ping requests at some frequency. Where should I handle ping requests? As I understand it, control_callback controls calls to the ping, pong and close functions, not incoming requests. With the read or async_read functions, I also cannot catch the ping request.
Beast responds to pings with pongs automatically, as described here: https://github.com/boostorg/beast/issues/899#issuecomment-346333014
Whenever you call read(), it can process a ping and send a pong without you knowing about that.
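If you still want to observe the pings, you can register a control callback on the stream. Here is a minimal sketch (adding it in the Client constructor above is my assumption, and the logging needs <iostream>); Beast still answers the pings itself, and the callback only fires while a read operation is in progress:
// Hypothetical addition to the Client constructor: log incoming control frames.
ws_.control_callback(
    [](websocket::frame_type kind, boost::beast::string_view payload) {
      if (kind == websocket::frame_type::ping) {
        std::cout << "ping received, payload: " << payload << std::endl;
      }
    });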
I modified the websocket++ echo server example to use multiple threads:
#include <iostream>
#include <boost/thread/thread.hpp>
#include <websocketpp/config/asio_no_tls.hpp>
#include <websocketpp/server.hpp>
typedef websocketpp::server<websocketpp::config::asio> server;
using websocketpp::lib::placeholders::_1;
using websocketpp::lib::placeholders::_2;
using websocketpp::lib::bind;
// pull out the type of messages sent by our config
typedef server::message_ptr message_ptr;
// Define a callback to handle incoming messages
void on_message(server* s, websocketpp::connection_hdl hdl, message_ptr msg)
{
try {
s->send(hdl, msg->get_payload(), msg->get_opcode());
} catch (const websocketpp::lib::error_code& e) {
std::cout << "Echo failed because: " << e << "(" << e.message() << ")" << std::endl;
}
}
int main()
{
// Create a server endpoint
server echo_server;
boost::asio::io_service io_service;
try {
// Initialize Asio
echo_server.init_asio(&io_service);
// Register our message handler
echo_server.set_message_handler(bind(&on_message, &echo_server, ::_1, ::_2));
// Listen on port 9002
echo_server.set_reuse_addr(true);
echo_server.listen(9002);
// Start the server accept loop
echo_server.start_accept();
boost::thread_group threadpool;
threadpool.create_thread(boost::bind(&boost::asio::io_service::run, &io_service));
threadpool.create_thread(boost::bind(&boost::asio::io_service::run, &io_service));
threadpool.join_all();
} catch (websocketpp::exception const& e) {
std::cout << e.what() << std::endl;
} catch (...) {
std::cout << "other exception" << std::endl;
}
}
I connect using one client which sends many messages asynchronously. Then the server crashes with:
[2016-08-31 13:05:44] [info] asio async_read_at_least error: system:125 (Operation canceled)
[2016-08-31 13:05:44] [error] handle_read_frame error: websocketpp.transport:2 (Underlying Transport Error)
terminate called after throwing an instance of 'std::bad_weak_ptr'
what(): bad_weak_ptr
Judging from the websocket++ manual on Thread Safety, what I am doing should be threadsafe:
The Asio transport provides full thread safety for endpoint. Works
with an io_service thread pool where multiple threads are calling
io_service.run();
...
All core transports guarantee that the handlers for a given connection
will be serialized. When the transport and concurrency policies
support endpoint concurrency, anything involving a connection_hdl
should be thread safe. i.e. It is safe to pass connection_hdls to
other threads, store them indefinitely, and call endpoint methods that
take them as parameters from any thread at any time.
What am I missing here?
The client I am using is based on NodeJS:
client.js
var port = 9002;
var times = 10000;
var WebSocket = require("ws");
var ws = new WebSocket("ws://localhost:" + port);
ws.on('open', function open() {
for(var i = 0; i < times; ++i) {
ws.send(i);
}
});
start via:
$ npm install --save ws
$ node client.js
I am trying to get the daytime6 server example (asynchronous UDP daytime server) from Boost working. I compile the program below using
g++ -std=c++11 -g -Wall -pedantic udp_server.cpp -o udp_server -lboost_system
I start the udp_server. I can see port number 13 (UDP) being opened using the netstat command.
However, if I try to connect to the server using netcat
nc -u localhost 13
it doesn't seem to give any reply. However, I can get the asynchronous TCP daytime server to work fine.
#include <ctime>
#include <iostream>
#include <string>
#include <boost/array.hpp>
#include <boost/bind.hpp>
#include <boost/shared_ptr.hpp>
#include <boost/asio.hpp>
using boost::asio::ip::udp;
std::string make_daytime_string()
{
using namespace std; // For time_t, time and ctime;
time_t now = time(0);
return ctime(&now);
}
class udp_server
{
public:
udp_server(boost::asio::io_service& io_service)
: socket_(io_service, udp::endpoint(udp::v4(), 13))
{
start_receive();
}
private:
void start_receive()
{
socket_.async_receive_from(
boost::asio::buffer(recv_buffer_), remote_endpoint_,
boost::bind(&udp_server::handle_receive, this,
boost::asio::placeholders::error,
boost::asio::placeholders::bytes_transferred));
}
void handle_receive(const boost::system::error_code& error,
std::size_t /*bytes_transferred*/)
{
if (!error || error == boost::asio::error::message_size)
{
boost::shared_ptr<std::string> message(
new std::string(make_daytime_string()));
socket_.async_send_to(boost::asio::buffer(*message), remote_endpoint_,
boost::bind(&udp_server::handle_send, this, message,
boost::asio::placeholders::error,
boost::asio::placeholders::bytes_transferred));
start_receive();
}
}
void handle_send(boost::shared_ptr<std::string> message,
const boost::system::error_code& error,
std::size_t bytes_transferred)
{
}
udp::socket socket_;
udp::endpoint remote_endpoint_;
boost::array<char, 1> recv_buffer_;
};
int main()
{
try
{
boost::asio::io_service io_service;
udp_server server(io_service);
io_service.run();
}
catch (std::exception& e)
{
std::cerr << e.what() << std::endl;
}
return 0;
}
The following command does not send a message:
$ nc -u localhost 13
Instead, netcat will wait, reading from stdin until end-of-file. Upon receiving end-of-file, it will send the message it reads to localhost on port 13 using UDP.
On the other hand, the following command:
$ echo 'msg' | nc -u localhost 13
writes "msg" and end-of-file to netcat's stdin, resulting in netcat sending a UDP datagram containing "msg" to localhost on port 13.
The asynchronous UDP daytime server example responds to any message it receives with the current date and time:
class udp_server
{
public:
udp_server(...)
{
start_receive();
}
private:
void start_receive()
{
socket_.async_receive_from(...,
boost::bind(&udp_server::handle_receive, ...));
}
void handle_receive(...)
{
message = make_daytime_string();
socket_.async_send_to(boost::asio::buffer(message), ...);
}
};
As the first command does not write a message, the udp_server never receives a message to which it can respond. The latter command causes a message to be written, and the udp_server responds with the date and time.
The asynchronous TCP daytime server writes a message upon accepting a connection, then closes the connection. When using TCP, netcat will attempt to connect to the destination immediately. In the case of the tcp_server, netcat will establish a TCP connection, receive the date and time, detect that the remote peer has closed the connection and exit.
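For comparison, the TCP case can be checked the same way (assuming the TCP daytime server from the tutorial is running on port 13):
$ nc localhost 13
Here netcat connects immediately, prints the date-and-time string the server writes, and exits once the server closes the connection.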
I am trying to find a way to write a socket class to connect my NS-3 simulation to an outside program. What I want to do is create packets in NS-3 and send them through this socket to an outside tool, do some simple manipulations on the packet in that tool, and then send it back to NS-3. I don't think the built-in NS-3 socket can be used for this purpose.
Has anyone come across something like this before or has any suggestions?
Your help is very much appreciated!
I'm using a TCP socket to connect to an external Python TCP socket from NS-3; here is the code:
/*
* Create a NS-3 Application that opens a TCP Socket and
* waits for incoming connections
*
*/
#include "ns3/icmpv4.h"
#include "ns3/assert.h"
#include "ns3/log.h"
#include "ns3/ipv4-address.h"
#include "ns3/socket.h"
#include "ns3/integer.h"
#include "ns3/boolean.h"
#include "ns3/inet-socket-address.h"
#include "ns3/packet.h"
#include "ns3/trace-source-accessor.h"
#include "ns3/config.h"
#include "ns3/tos-device.h"
#include "ns3/names.h"
#include "ns3/string.h"
#include "ns3/object.h"
namespace ns3 {
IOProxyServer::IOProxyServer ()
{
m_socket = 0;
}
TypeId IOProxyServer::GetTypeId (void)
{
static TypeId tid = TypeId ("ns3::IoProxyServer")
.SetParent<Application> ()
.AddConstructor<IOProxyServer> ()
.AddAttribute("RemotePortNumber",
"Remote port listening for connections",
IntegerValue(9999),
MakeIntegerAccessor(&IOProxyServer::m_remotePortNumber),
MakeIntegerChecker<int64_t>())
.AddAttribute("RemoteIp",
"Remote IP listening for connections",
StringValue("127.0.0.1"),
MakeStringAccessor(&IOProxyServer::m_remoteIp),
MakeStringChecker())
.AddAttribute("LocalPortNumber",
"Local port for incoming connections",
IntegerValue(3333),
MakeIntegerAccessor(&IOProxyServer::m_localPortNumber),
MakeIntegerChecker<int64_t>())
.AddAttribute("LocalIp",
"Local IP for incoming connections",
StringValue("127.0.0.1"),
MakeStringAccessor(&IOProxyServer::m_localIp),
MakeStringChecker());
return tid;
}
void IOProxyServer::StartApplication (void)
{
NS_LOG_FUNCTION (this);
m_socket = Socket::CreateSocket (GetNode (), TypeId::LookupByName ("ns3::TcpSocketFactory"));
NS_ASSERT_MSG (m_socket != 0, "An error has happened when trying to create the socket");
InetSocketAddress src = InetSocketAddress (Ipv4Address::GetAny(), m_localPortNumber );
InetSocketAddress dest = InetSocketAddress(Ipv4Address(m_remoteIp.c_str()), m_remotePortNumber);
int status;
status = m_socket->Bind (src);
NS_ASSERT_MSG (status != -1, "An error has happened when trying to bind to local end point");
status = m_socket->Connect(dest);
NS_ASSERT_MSG (status != -1, "An error has happened when trying to connect to remote end point");
// Configures the callbacks for the different events related with the connection
//m_socket->SetConnectCallback
m_socket->SetAcceptCallback (
MakeNullCallback<bool, Ptr<Socket>, const Address &> (),
MakeCallback (&IOProxyServer::HandleAccept, this));
m_socket->SetRecvCallback (
MakeCallback (&IOProxyServer::HandleRead, this));
m_socket->SetDataSentCallback (
MakeCallback (&IOProxyServer::HandleSend,this));
//m_socket->SetSendCallback
m_socket->SetCloseCallbacks (
MakeCallback (&IOProxyServer::HandlePeerClose, this),
MakeCallback (&IOProxyServer::HandlePeerError, this));
// If we need to configure a reception only socket or a sending only socket
// we need to call one of the following methods:
// m_socket->ShutdownSend();
// m_socket->ShutdownRecv();
}
void IOProxyServer::StopApplication (void)
{
NS_LOG_FUNCTION (this);
m_socket->Close();
}
void IOProxyServer::HandlePeerClose (Ptr<Socket> socket)
{
NS_LOG_FUNCTION (this << socket);
}
void IOProxyServer::HandlePeerError (Ptr<Socket> socket)
{
NS_LOG_FUNCTION (this << socket);
}
void IOProxyServer::HandleSend (Ptr<Socket> socket, uint32_t dataSent)
{
NS_LOG_FUNCTION (this << socket);
}
void IOProxyServer::HandleAccept (Ptr<Socket> s, const Address& from)
{
NS_LOG_FUNCTION (this << s << from);
s->SetRecvCallback (MakeCallback (&IOProxyServer::HandleRead, this));
}
void IOProxyServer::HandleRead (Ptr<Socket> socket)
{
NS_LOG_FUNCTION (this << socket);
Ptr<Packet> packet;
Address from; // filled in by RecvFrom with the sender's address
while ((packet = socket->RecvFrom (from)))
{
if (packet->GetSize () == 0)
{ //EOF
break;
}
if (InetSocketAddress::IsMatchingType (from))
{
//Do whatever you need with the incoming info
}
}
}
void IOProxyServer::SendData()
{
//Do whatever you need for creating your packet and send it using the socket
//Ptr<Packet> packet = Create<Packet>(pointer, sizeof(pointer));
//m_socket->Send(packet, 0, from);
}
IOProxyServer::~IOProxyServer ()
{
}
void IOProxyServer::DoDispose (void)
{
NS_LOG_FUNCTION (this);
m_socket = 0;
Application::DoDispose ();
}
} // namespace ns3
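One possible way to fill in the SendData() stub (a hypothetical sketch of my own, not from the original answer; it needs #include "ns3/simulator.h" and belongs inside namespace ns3 like the rest of the file) is to schedule a send and push a small raw payload through the connected socket:
// Hypothetical helper: call it from StartApplication() once the socket is connected.
void IOProxyServer::ScheduleSend (void)
{
  Simulator::Schedule (Seconds (1.0), &IOProxyServer::SendData, this);
}

void IOProxyServer::SendData (void)
{
  NS_LOG_FUNCTION (this);
  uint8_t payload[] = "hello from ns-3";
  Ptr<Packet> packet = Create<Packet> (payload, sizeof (payload));
  m_socket->Send (packet); // the socket is already connected to RemoteIp:RemotePortNumber
}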
I want to send unsolicited messages over an SSL connection. Meaning that the server sends a message not based on a request from a client, but because some event happened that the client needs to know about.
I just use the SSL server example from the Boost site and added a timer that sends 'hello' after 10 seconds. Everything works fine before the timer expires (the server echoes everything), and the 'hello' is also received, but after that the application crashes the next time text is sent to the server.
Even stranger to me is the fact that when I disable the SSL code, so I use a normal socket and do the same using telnet, it works FINE and keeps on working fine!!!
I ran into this problem for the second time now, and I really have no idea why this is happening the way it happens.
Below is the full source that I altered to demonstrate the problem. Compile it without the SSL define and use telnet, and everything works OK; define SSL and use openssl (or the SSL client example from the Boost website), and the thing crashes.
#include <cstdlib>
#include <iostream>
#include <boost/bind.hpp>
#include <boost/asio.hpp>
#include <boost/asio/ssl.hpp>
//#define SSL
typedef boost::asio::ssl::stream<boost::asio::ip::tcp::socket> ssl_socket;
class session
{
public:
session(boost::asio::io_service& io_service,
boost::asio::ssl::context& context)
#ifdef SSL
: socket_(io_service, context)
#else
: socket_(io_service)
#endif
{
}
ssl_socket::lowest_layer_type& socket()
{
return socket_.lowest_layer();
}
void start()
{
#ifdef SSL
socket_.async_handshake(boost::asio::ssl::stream_base::server,
boost::bind(&session::handle_handshake, this,
boost::asio::placeholders::error));
#else
socket_.async_read_some(boost::asio::buffer(data_, max_length),
boost::bind(&session::handle_read, this,
boost::asio::placeholders::error,
boost::asio::placeholders::bytes_transferred));
boost::shared_ptr< boost::asio::deadline_timer > timer(new boost::asio::deadline_timer( socket_.get_io_service() ));
timer->expires_from_now( boost::posix_time::seconds( 10 ) );
timer->async_wait( boost::bind( &session::SayHello, this, _1, timer ) );
#endif
}
void handle_handshake(const boost::system::error_code& error)
{
if (!error)
{
socket_.async_read_some(boost::asio::buffer(data_, max_length),
boost::bind(&session::handle_read, this,
boost::asio::placeholders::error,
boost::asio::placeholders::bytes_transferred));
boost::shared_ptr< boost::asio::deadline_timer > timer(new boost::asio::deadline_timer( socket_.get_io_service() ));
timer->expires_from_now( boost::posix_time::seconds( 10 ) );
timer->async_wait( boost::bind( &session::SayHello, this, _1, timer ) );
}
else
{
delete this;
}
}
void SayHello(const boost::system::error_code& error, boost::shared_ptr< boost::asio::deadline_timer > timer) {
std::string hello = "hello";
boost::asio::async_write(socket_,
boost::asio::buffer(hello, hello.length()),
boost::bind(&session::handle_write, this,
boost::asio::placeholders::error));
timer->expires_from_now( boost::posix_time::seconds( 10 ) );
timer->async_wait( boost::bind( &session::SayHello, this, _1, timer ) );
}
void handle_read(const boost::system::error_code& error,
size_t bytes_transferred)
{
if (!error)
{
boost::asio::async_write(socket_,
boost::asio::buffer(data_, bytes_transferred),
boost::bind(&session::handle_write, this,
boost::asio::placeholders::error));
}
else
{
std::cout << "session::handle_read() -> Delete, ErrorCode: "<< error.value() << std::endl;
delete this;
}
}
void handle_write(const boost::system::error_code& error)
{
if (!error)
{
socket_.async_read_some(boost::asio::buffer(data_, max_length),
boost::bind(&session::handle_read, this,
boost::asio::placeholders::error,
boost::asio::placeholders::bytes_transferred));
}
else
{
std::cout << "session::handle_write() -> Delete, ErrorCode: "<< error.value() << std::endl;
delete this;
}
}
private:
#ifdef SSL
ssl_socket socket_;
#else
boost::asio::ip::tcp::socket socket_;
#endif
enum { max_length = 1024 };
char data_[max_length];
};
class server
{
public:
server(boost::asio::io_service& io_service, unsigned short port)
: io_service_(io_service),
acceptor_(io_service,
boost::asio::ip::tcp::endpoint(boost::asio::ip::tcp::v4(), port)),
context_(boost::asio::ssl::context::sslv23)
{
#ifdef SSL
context_.set_options(
boost::asio::ssl::context::default_workarounds
| boost::asio::ssl::context::no_sslv2
| boost::asio::ssl::context::single_dh_use);
context_.set_password_callback(boost::bind(&server::get_password, this));
context_.use_certificate_chain_file("server.crt");
context_.use_private_key_file("server.key", boost::asio::ssl::context::pem);
context_.use_tmp_dh_file("dh512.pem");
#endif
start_accept();
}
std::string get_password() const
{
return "test";
}
void start_accept()
{
session* new_session = new session(io_service_, context_);
acceptor_.async_accept(new_session->socket(),
boost::bind(&server::handle_accept, this, new_session,
boost::asio::placeholders::error));
}
void handle_accept(session* new_session,
const boost::system::error_code& error)
{
if (!error)
{
new_session->start();
}
else
{
delete new_session;
}
start_accept();
}
private:
boost::asio::io_service& io_service_;
boost::asio::ip::tcp::acceptor acceptor_;
boost::asio::ssl::context context_;
};
int main(int argc, char* argv[])
{
try
{
boost::asio::io_service io_service;
using namespace std; // For atoi.
server s(io_service, 7777 /*atoi(argv[1])*/);
io_service.run();
}
catch (std::exception& e)
{
std::cerr << "Exception: " << e.what() << "\n";
}
return 0;
}
I use Boost 1.49 and OpenSSL 1.0.0i-fips 19 Apr 2012. I tried investigating this problem as much as possible; the last time I had this problem (a couple of months ago), I received an error number that I could trace to this error message: error: decryption failed or bad record mac.
But I have no idea what is going wrong and how to fix this, any suggestions are welcome.
The problem is multiple concurrent async reads and writes. I was able to crash this program even with raw sockets (glibc detected double free or corruption). Let's see what happens after the session starts (in parentheses I put the number of concurrently scheduled async reads and writes):
1. schedule async read (1, 0)
2. (assume that data comes) handle_read is executed, it schedules async write (0, 1)
3. (data are written) handle_write is executed, it schedules async read (1, 0)
Now, it could loop over 1.-3. indefinitely without any problem. But then the timer expires...
4. (assume that no new data come from the client, so there is still one async read scheduled) the timer expires, so SayHello is executed, it schedules async write, still no problem (1, 1)
5. (the data from SayHello are written, but still no new data come from the client) handle_write is executed, it schedules async read (2, 0)
Now we are done for. If any new data from the client comes, part of it could be read by one async read and part by another. For raw sockets, it might even seem to work (despite the possibility that there might be 2 concurrent writes scheduled, so the echo on the client side might look mixed). For SSL this can corrupt the incoming data stream, and this is probably what happens.
How to fix it:
A strand will not help in this case (the problem is not concurrent handler executions, but concurrently scheduled async reads and writes).
It is not enough if the async write handler in SayHello does nothing (there would be no concurrent reads then, but concurrent writes could still occur).
If you really want to have two different kinds of writes (echo and timer), you have to implement some kind of queue of messages to write, to avoid mixing the writes from the echo and the timer; see the sketch below.
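A minimal sketch of such a queue (my own illustration with hypothetical member and handler names, assuming the single-threaded io_service from the example): both the echo path and the timer call deliver(), so only one async_write is ever outstanding.
// Additions to the session class; requires #include <deque> and <string>.
void deliver(const std::string& msg)
{
    bool write_in_progress = !write_queue_.empty();
    write_queue_.push_back(msg);
    if (!write_in_progress)
    {
        do_write(); // nothing in flight yet, start draining the queue
    }
}
void do_write()
{
    boost::asio::async_write(socket_,
        boost::asio::buffer(write_queue_.front()),
        boost::bind(&session::handle_queued_write, this,
            boost::asio::placeholders::error));
}
void handle_queued_write(const boost::system::error_code& error)
{
    if (error)
    {
        std::cout << "session::handle_queued_write() -> Delete, ErrorCode: " << error.value() << std::endl;
        delete this;
        return;
    }
    write_queue_.pop_front();
    if (!write_queue_.empty())
    {
        do_write(); // still exactly one write in flight
    }
}
std::deque<std::string> write_queue_;
With this in place, handle_read would call deliver() with the echoed bytes and schedule its next async_read_some itself, and SayHello would simply call deliver("hello"), so reads are only ever rescheduled from the read path.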
General remark: this was a simple example, but using shared_ptr instead of delete this is a much better way of handling object lifetime with boost::asio. It prevents missed error paths from turning into memory leaks.
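For instance, the accept path could be sketched like this (mirroring the enable_shared_from_this pattern used in the multi-port echo server earlier on this page; the session class would also derive from boost::enable_shared_from_this and bind shared_from_this() into its handlers, and handle_accept would take a boost::shared_ptr<session> instead of a raw pointer):
// Requires #include <boost/shared_ptr.hpp>.
void server::start_accept()
{
    // The shared_ptr keeps the session alive as long as some handler still references it;
    // on error the pointer is simply dropped instead of calling "delete this".
    boost::shared_ptr<session> new_session(new session(io_service_, context_));
    acceptor_.async_accept(new_session->socket(),
        boost::bind(&server::handle_accept, this, new_session,
            boost::asio::placeholders::error));
}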