Overloadable boost::asio::basic_stream_socket - c++

Developing a network application, I have a Connection class that manages sending and receiving messages on the network. I'm using boost::asio.
I now want to let the Connection class handle connections both over TCP, and over local UNIX stream sockets. However, the template-design of boost confuses me. AFAICT, there's no shared base-class between local::stream_protocol::socket and ip::tcp::socket.
How would I go about creating a Connection that encapsulates the network semantics, such that other code doesn't have to deal with the details of which protocol is used?
I.e. I want to implement something like:
class Connection {
    Connection(ip::tcp::endpoint& ep);
    Connection(local::stream_protocol::endpoint& ep);
    void send(Buffer& buf);
};
How would I achieve this?

After some pondering, my current solution is to make the send and recv functions of Connection virtual, and create a template-subclass of Connection, roughly:
template <typename Protocol>
class ConnectionImpl : public Connection {
    typedef typename Protocol::socket Socket;
    typedef typename Protocol::endpoint EndPoint;
    Socket _socket;
public:
    ConnectionImpl(boost::asio::io_service& ioSvc, const EndPoint& addr)
        : Connection(ioSvc), _socket(ioSvc) {
        _socket.connect(addr);
    }
    void trySend() {
        // Initiate async send on _socket here
    }
    void tryRead() {
        // Initiate async recv on _socket here
    }
};
Is there a way to avoid the need to subclass and use of virtual functions?

AFAICT, there's no shared base-class between
local::stream_protocol::socket and ip::tcp::socket.
There is explicitly no base class for all socket objects on purpose; the documentation describes the rationale quite well:
Unsafe and error prone aspects of the BSD socket API not included. For
example, the use of int to represent all sockets lacks type safety.
The socket representation in Boost.Asio uses a distinct type for each
protocol, e.g. for TCP one would use ip::tcp::socket, and for UDP one
uses ip::udp::socket

Use boost::asio::generic::stream_protocol::socket instead. When you call async_connect()/connect(), it will extract the family and protocol from the remote endpoint and then pass them to the socket() syscall to create the correct socket.
boost::asio::generic::stream_protocol::socket socket_{io_service};

if (use_unix_socket) {
    boost::asio::local::stream_protocol::endpoint unix_endpoint{"/tmp/socketpath.sock"};
    socket_.async_connect(unix_endpoint, [](boost::system::error_code ec) {
    });
} else {
    boost::asio::ip::tcp::endpoint tcp_endpoint{...};
    socket_.async_connect(tcp_endpoint, [](boost::system::error_code ec) {
    });
}
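Building on that, here is a rough sketch of what a Connection built directly on the generic socket could look like, avoiding both virtual functions and the template subclass. The class layout and member names are illustrative assumptions, not code from the question, and local stream sockets require a POSIX platform:
#include <boost/asio.hpp>
#include <string>

class Connection {
public:
    Connection(boost::asio::io_service& ioSvc, const boost::asio::ip::tcp::endpoint& ep)
        : socket_(ioSvc) {
        // The generic endpoint is constructed from the TCP endpoint, so connect()
        // opens an AF_INET stream socket under the hood.
        socket_.connect(boost::asio::generic::stream_protocol::endpoint(ep));
    }

    Connection(boost::asio::io_service& ioSvc,
               const boost::asio::local::stream_protocol::endpoint& ep)
        : socket_(ioSvc) {
        // Same socket member, but now an AF_UNIX stream socket is opened.
        socket_.connect(boost::asio::generic::stream_protocol::endpoint(ep));
    }

    void send(const std::string& buf) {
        // Works for either protocol; the protocol was fixed when the socket was opened.
        boost::asio::write(socket_, boost::asio::buffer(buf));
    }

private:
    boost::asio::generic::stream_protocol::socket socket_;
};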
And here is the relevant code from boost::asio::basic_socket:
template <typename ConnectHandler>
BOOST_ASIO_INITFN_RESULT_TYPE(ConnectHandler,
    void (boost::system::error_code))
async_connect(const endpoint_type& peer_endpoint,
    BOOST_ASIO_MOVE_ARG(ConnectHandler) handler)
{
  // If you get an error on the following line it means that your handler does
  // not meet the documented type requirements for a ConnectHandler.
  BOOST_ASIO_CONNECT_HANDLER_CHECK(ConnectHandler, handler) type_check;

  if (!is_open())
  {
    boost::system::error_code ec;
    const protocol_type protocol = peer_endpoint.protocol();
    if (this->get_service().open(this->get_implementation(), protocol, ec))
    {
      detail::async_result_init<
        ConnectHandler, void (boost::system::error_code)> init(
          BOOST_ASIO_MOVE_CAST(ConnectHandler)(handler));

      this->get_io_service().post(
          boost::asio::detail::bind_handler(
            BOOST_ASIO_MOVE_CAST(BOOST_ASIO_HANDLER_TYPE(
              ConnectHandler, void (boost::system::error_code)))(
                init.handler), ec));

      return init.result.get();
    }
  }

  return this->get_service().async_connect(this->get_implementation(),
      peer_endpoint, BOOST_ASIO_MOVE_CAST(ConnectHandler)(handler));
}

Related

How to wait for a function to return with Boost::Asio?

Background
I'm new to using the Boost::Asio library and am having trouble getting the behaviour I want. I am trying to implement some network communication for a custom hardware solution. The communication protocol stack we are using relies heavily on Boost::Asio async methods and I don't believe it is entirely thread safe.
I have successfully implemented sending but encountered a problem when trying to set up the await for receiving. Most boost::asio examples I have found rely on socket behaviour to implement async await with socket_.async_read_some() or other similar functions. However, this doesn't work for us, as our hardware solution requires calling driver functions directly rather than utilising sockets.
The application uses an io_service that is passed into boost::asio::generic::raw_protocol::socket as well as other classes.
Example code from protocol stack using sockets
This is the example code from the protocol stack. do_receive() is called in the constructor of RawSocketLink.
void RawSocketLink::do_receive()
{
    namespace sph = std::placeholders;
    socket_.async_receive_from(
        boost::asio::buffer(receive_buffer_), receive_endpoint_,
        std::bind(&RawSocketLink::on_read, this, sph::_1, sph::_2));
}

void RawSocketLink::on_read(const boost::system::error_code& ec, std::size_t read_bytes)
{
    if (!ec) {
        // Do something with received data...
        do_receive();
    }
}
Our previous receive code without the protocol stack
Prior to implementing the stack we had been using the threading library to create separate threads for send and receive. The receive method is shown below. Mostly it relies on calling the receive_data() function from the hardware drivers and waiting for it to return. This is a blocking call, but it is required in order to return data.
void NetworkAdapter::Receive() {
    uint8_t temp_rx_buffer[2048];
    rc_t rc;
    socket_t *socket_ptr;
    receive_params_t rx_params;
    size_t rx_buffer_size;
    char str[100];

    socket_ptr = network_if[0];

    while (1) {
        rx_buffer_size = sizeof(temp_rx_buffer);

        // Wait until receive_data returns then process
        rc = receive_data(socket_ptr,
                          temp_rx_buffer,
                          &rx_buffer_size,
                          &rx_params,
                          WAIT_FOREVER);
        if (rc_error(rc)) {
            (void)fprintf(stderr, "Receive failed");
            continue;
        }

        // Do something with received packet ....
    }
    return;
}
Note that the socket_t pointer in this code is not the same thing as a TCP/UDP socket for Boost::Asio.
Current implementation of async receive
This is my current code and where I need help. I'm not sure how to use boost::asio methods to wait for receive_data to return. We are trying to replicate the behaviour of socket.async_receive_from(). The NetworkAdapter has access to the io_service.
void NetworkAdapter::do_receive() {
    rc_t rc;
    socket_t *socket_ptr;
    receive_params_t rx_params;
    size_t rx_buffer_size;

    socket_ptr = network_if[0];
    rx_buffer_size = receive_buffer_.size();

    // What do I put here to await for this to return asynchronously?
    rc = receive_data(socket_ptr, receive_buffer_.data(), &rx_buffer_size, &rx_params, ATLK_WAIT_FOREVER);

    on_read(rc, rx_buffer_size, rx_params);
}

void NetworkAdapter::on_read(const rc_t &rc, std::size_t read_bytes, const receive_params_t &rx_params) {
    if (!rc) {
        // Do something with received data...
    } else {
        LOG(ERROR) << "Packet receive failure";
    }
    do_receive();
}
Summary
How do I use boost::asio async/await functions to await a function return? In particular I want to replicate the behaviour of socket.async_receive_from() but with a function rather than a socket.
*Some function names and types have been changed due to data protection requirements.
N4045 Library Foundations for Asynchronous Operations, Revision 2
http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2014/n4045.pdf
On page 24 there is an example on how to implement an asio async API in terms of callback-based os API.
// The async version of your operation, implementing all kinds of async paradigms in
// terms of the callback paradigm.
template <class CompletionToken>
auto async_my_operation(/* any parameters needed by the sync version of your operation */, CompletionToken&& token)
{
  // If CompletionToken is a callback function object, async_my_operation returns void and the
  // callback's signature should be void(/* return type of the sync operation, */ error_code).
  // If CompletionToken is boost::asio::use_future, async_my_operation returns
  // future</* return type of the sync operation */>. If CompletionToken is ..., ...
  // You are not inventing new async paradigms, so you don't have to specialize
  // async_completion or handler_type; focus on implementing os_api below.
  async_completion<CompletionToken,
      void(/* return type of the sync operation, */ error_code) /* signature of the callback case */> completion(token);
  typedef handler_type_t<CompletionToken, void(error_code)> Handler;

  // async_my_operation initiates the async operation and returns, so completion.handler has to be
  // stored on the heap; it will be invoked later from a thread pool (e.g. threads blocked in IOCP
  // if you are using the OS API, or threads inside io_context::run() if you are using asio --
  // sockets accept an io_context during construction, so they know which io_context should run
  // completion.handler).
  unique_ptr<wait_op<Handler>> op(new wait_op<Handler>(move(completion.handler)));

  // Most OS APIs accept a void* and a void(*)(result_t, void*) as their C callback. This is type
  // erasure: the void* points to (some struct that at least contains) the C++ callback function
  // object (which can be any type), and the function pointer points to a C callback that casts the
  // void* back to a pointer to the C++ callback object and calls it.
  os_api(/* arguments, at least including: */ op.get(), &wait_callback<Handler>);

  return completion.result.get();
}

// Store the handler on the heap.
template <class Handler>
struct wait_op {
  Handler handler_;
  explicit wait_op(Handler handler) : handler_(move(handler)) {}
};

// The OS posts a message into your process's message queue; several threads block in an OS API
// (such as IOCP) or an asio API (such as io_context::run()), continuously take messages off the
// queue and call the C callback, which in turn calls your C++ callback.
template <class Handler>
void wait_callback(result_t result, void* param)
{
  unique_ptr<wait_op<Handler>> op(static_cast<wait_op<Handler>*>(param));
  op->handler_(/* turn the raw result into C++ classes before passing it to C++ code, */ error_code{});
}

// Trivial implementation; in practice you should consult the socket object for the io_context it uses.
void os_api(/* arguments needed by your operation, */ void* p_callback_data, void (*p_callback_function)(result_t, void*))
{
  std::thread([=]() {
    // ... perform the blocking work here and obtain `result` ...
    the_io_context_of_the_socket_object.post(
        [=]() { (*p_callback_function)(result, p_callback_data); });
  }).detach();
}
boost.asio has changed from async_completion and handler_type to async_result, so the above code is outdated.
Requirements on asynchronous operations - 1.75.0
https://www.boost.org/doc/libs/1_75_0/doc/html/boost_asio/reference/asynchronous_operations.html
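With the current API the same idea can be expressed more simply: run the blocking driver call on its own thread and post the completion handler back to the io_context so it runs like any other asio completion. The sketch below is illustrative only; async_wrap_blocking is a made-up name, and the BlockingOp you pass in would wrap the (renamed) receive_data() call from the question:
#include <boost/asio.hpp>
#include <thread>
#include <utility>

// Runs `op()` (a blocking call) on a private thread, then delivers its result to
// `handler` on the given io_context. The io_context must outlive the operation.
template <typename BlockingOp, typename Handler>
void async_wrap_blocking(boost::asio::io_context& io, BlockingOp op, Handler handler)
{
    std::thread([&io, op = std::move(op), handler = std::move(handler)]() mutable {
        auto result = op();  // blocks here, off the io_context threads
        boost::asio::post(io,
            [handler = std::move(handler), result = std::move(result)]() mutable {
                handler(result);  // runs inside io_context::run(), like a socket handler
            });
    }).detach();
}
NetworkAdapter::do_receive() could then pass a lambda that performs the receive_data() call and a handler that forwards to on_read(), which keeps on_read() on the same threads that run the rest of the asio handlers.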

Customising socket/close syscalls in boost::asio

I have a library which communicates with TCP and UDP sockets using boost::asio. This library is cross-platform and delegates some operations to the application using it via callbacks. In the case of sockets, the following must occur:
Library opens a socket (for an outbound connection).
Application receives a callback allowing it to customise behaviour
Library connects a socket and uses it
Application receives a callback allowing it to do any necessary cleanup
Library closes the socket
Here's how I thought I could achieve this:
class CustomizableTcpSocket {
public:
    template <typename T, typename U>
    auto async_connect(T&& endpoint, U&& handler) {
        boost::system::error_code ec;
        socket_.open(endpoint.protocol(), ec);
        native_code_.socket_did_open(socket_.native_handle());
        return socket_.async_connect(std::forward<T>(endpoint), std::forward<U>(handler));
    }

    // same for async_write_some as well
    template <typename... Args>
    auto async_read_some(Args&&... args) {
        return socket_.async_read_some(std::forward<Args>(args)...);
    }

    ~CustomizableTcpSocket() {
        if (socket_.is_open()) {
            native_code_.socket_will_close(socket_.native_handle());
        }
    }

private:
    NativeCode native_code_;
    boost::asio::ip::tcp::socket socket_;
};
What I'm finding is that asio is sometimes closing the socket (at the OS level) before my destructor fires.
Is there a way I can be notified of a socket closing before asio actually does it?
ASIO has a debugging feature called handler tracking.
You could use it to intercept socket closures which are invoked as:
BOOST_ASIO_HANDLER_OPERATION((reactor_.context(), "socket", &impl, impl.socket_, "close"));
Just #define BOOST_ASIO_HANDLER_OPERATION(...) to call whatever function you want, and in that function check whether the fifth argument is "close".
Here's an example of how to use handler tracking.
For reference: the actual close() operation is not straightforward. Better to leave that as it is.
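To make the mechanism concrete, here is a rough, untested fragment of what such a hook could look like. The function name is made up, and a real custom-tracking header (selected with BOOST_ASIO_CUSTOM_HANDLER_TRACKING) must also define the rest of the BOOST_ASIO_HANDLER_* macros that Asio expects; see the custom_tracking.hpp example that ships with Asio:
#include <cstring>

// Matches the (context, object_type, object, native_handle, op_name) argument list
// shown in the call above; templated so it accepts whatever types Asio passes in.
template <typename Context, typename Object, typename Handle>
inline void my_handler_operation(Context& /*ctx*/, const char* object_type,
                                 Object* /*impl*/, Handle /*native_handle*/,
                                 const char* op_name)
{
    if (std::strcmp(object_type, "socket") == 0 && std::strcmp(op_name, "close") == 0) {
        // The socket is about to be closed at the OS level; notify the native code here.
    }
}

#define BOOST_ASIO_HANDLER_OPERATION(args) my_handler_operation args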

Server and Client at same time with Boost-Asio

I am an ASP.NET programmer, 57 years old. Because I was the only one who had worked a little with C++ back in the beginning, my bosses asked me to serve a customer who needs a communication agent with very specific characteristics: it must run as a daemon on multiple platforms and act as both client and server at times. I do not know enough, but I have to solve the problem, and I found a chance in the Boost.Asio library.
I am new to Boost.Asio and, reading the documentation, I created a TCP server and client that exchange messages perfectly, two-way, full duplex.
I read several posts where people asked for the same things I want, but all the answers suggested full duplex as if that meant having a client and a server in the same program. And it doesn't. Full duplex refers to the ability to write and read on the same connection, and every TCP connection is full duplex by default.
I need two programs that can each accept connections initiated by the other. There will be no permanent connection between the two programs. Sometimes one of them will ask for a connection, and at other times the other will make the request; both need to be listening, accepting the connection, exchanging some messages and terminating the connection until a new request is made.
The server I wrote seems to get stuck listening on the port to see if a connection is coming in, and I cannot continue with the process to create a socket and request a connection to the other program. I need threads, but I do not know enough about them.
Is it possible?
As I said, I'm new to Boost.Asio. I tried to follow some documentation on threads and coroutines, and then I put the client code in one method and the server in another:
int main(int argc, char* argv[])
{
    try
    {
        boost::thread t1(&server_agent);
        boost::thread t2(&client_agent);

        // wait
        t1.join();
        t2.join();

        return 0;
    }
    catch (std::exception& e)
    {
        std::cerr << "Exception: " << e.what() << "\n";
    }
    return 0;
}
and two Coroutines:
void client_agent() {
    parameters param;
    param.load();
    boost::asio::io_service io_service1;
    tcp::resolver resolver(io_service1);
    char port[5];
    _itoa(param.getNrPortaServComunic(), port, 10);
    auto endpoint_iterator = resolver.resolve({ param.getIPServComunicPrincipal(), port });
    std::list<client> clients;
    client c(io_service1, endpoint_iterator, param);

    while (true)
    {
        BOOL enviada = FALSE;
        while (true) {
            if (!enviada) {
                std::cout << "sending a message\n";
                int nr = 110;
                message msg(nr, param);
                c.write(msg);
                enviada = TRUE;
            }
        }
    }
    c.close();
}
void server_agent() {
    parameters param;
    param.load();
    boost::asio::io_service io_service1;
    std::list<server> servers;
    tcp::endpoint endpoint(tcp::v4(), param.getNrPortaAgenteServ());
    servers.emplace_back(io_service1, endpoint);
    io_service1.run();
}
I used one port for the client endpoint and another port for the server endpoint. Is that correct? Required?
It starts looking like it's going to work. Each of the methods runs concurrently, but then I get a thread allocation error at io_service1.run() (last line of the server_agent method):
boost::exception_detail::clone_impl > at memory location 0x0118C61C.
Any suggestion?
You are describing a UDP client/server application. But your implementation is bound to fail. Think of an asio server or client as always running in a single thread.
The following code is just so you get an idea. I haven't tried to compile it. The client is very similar, but may need a transmit buffer; that depends on the app, obviously.
This is a shortened version, so you get the idea. In a final application you may want to add receive timeouts and the like. The same principles hold for TCP servers, with the added accept step (async_accept). Connected sockets can be stored in a shared_ptr and captured by the lambdas; they will be destroyed almost magically.
Server is basically the same, except there is no constant reading going on. If running both server and client in the same process, you can rely on run() to keep looping because of the server, but if not, you'd have to call run() for each connection; run() would exit at the end of the exchange.
using namespace boost::asio; // Or whichever way you like to shorten names

class Server
{
public:
    Server(io_service& ios) : ios_(ios), s_(ios) {}

    void Start()
    {
        // open the socket and bind it to the listening port
        Read();
    }

    void Read()
    {
        rxBuffer_.resize(1024);
        s_.async_receive_from(
            buffer(rxBuffer_),
            remoteEndpoint_,
            [this](error_code ec, size_t n)
            {
                OnReceive(ec, n); // could be virtual, if done this way
            });
    }

    void OnReceive(error_code ec, size_t n)
    {
        rxBuffer_.resize(n);
        if (ec)
        {
            // error ... stops listen loop
            return;
        }

        // grab data, put in txBuffer_
        Read();

        s_.async_send_to(
            buffer(txBuffer_),
            remoteEndpoint_,
            [this](error_code ec, size_t n)
            {
                OnTransmitDone(ec, n);
            });
    }

    void OnTransmitDone(error_code ec, size_t n)
    {
        // check for error?
        txBuffer_.clear();
    }

protected:
    io_service& ios_;
    ip::udp::socket s_;
    ip::udp::endpoint remoteEndpoint_;   // the other's address/port
    std::vector<char> rxBuffer_;         // could be any data type you like
    std::vector<char> txBuffer_;         // idem. All access is in one thread, so only
                                         // one needed for simple ask/respond ops.
};
int main()
{
    io_service ios;
    Server server(ios); // could have both server and client run on the same thread
                        // and the same io_service this way.
    server.Start();
    ios.run();
    // or std::thread ioThread([&](){ ios.run(); });
    return 0;
}
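For completeness, a minimal client sketch in the same style as the server above; the class and member names are illustrative, and endpoint values, buffer contents and error handling are left to the application:
#include <boost/asio.hpp>
#include <cstddef>
#include <vector>

using namespace boost::asio;            // same shorthand as the server above
using boost::system::error_code;

class Client
{
public:
    Client(io_service& ios, ip::udp::endpoint serverEndpoint)
        : s_(ios, ip::udp::endpoint(ip::udp::v4(), 0)),  // bind to an ephemeral port
          serverEndpoint_(serverEndpoint) {}

    void Ask(std::vector<char> request)
    {
        txBuffer_ = std::move(request);
        s_.async_send_to(
            buffer(txBuffer_), serverEndpoint_,
            [this](error_code ec, std::size_t /*n*/)
            {
                if (!ec)
                    Read();              // wait for the server's answer
            });
    }

    void Read()
    {
        rxBuffer_.resize(1024);
        s_.async_receive_from(
            buffer(rxBuffer_), remoteEndpoint_,
            [this](error_code ec, std::size_t n)
            {
                if (ec)
                    return;
                rxBuffer_.resize(n);
                // use the response, then issue the next Ask() when needed
            });
    }

private:
    ip::udp::socket s_;
    ip::udp::endpoint serverEndpoint_;   // where requests are sent
    ip::udp::endpoint remoteEndpoint_;   // filled in by async_receive_from
    std::vector<char> rxBuffer_;
    std::vector<char> txBuffer_;
};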

C++ Design: Multiple TCP clients, boost asio and observers

In my system, I have to juggle a bunch of TCP clients and I am a bit confused about how to design it [most of my experience is in C, hence the insecurity]. I am using boost ASIO for managing the connections. These are the components I have:
A TCPStream class : thin wrapper over boost asio
an IPC protocol, which implements a protocol over TCP:
basically, each message starts with a type and a length field,
so we can read the individual messages out of the stream.
Connection classes which handle the messages
Observer class which monitors connections
I am writing pseudo C++ code to be concise; I think you will get the idea.
class TCPStream {
    boost::asio::socket socket_;
public:
    template <typename F>
    void connect (F f)
    {
        socket_.connect(f);
    }

    template <typename F>
    void read (F f)
    {
        socket_.read(f);
    }
};
class IpcProtocol : public TCPStream {
public:
    template <typename F>
    void read (F f)
    {
        TCPStream::read(
            [f] (buffer, err) {
                while (msg = read_indvidual_message(buffer)) {
                    // **** this is a violation of how this pattern is
                    // supposed to work. Ideally there should be a callback
                    // for each individual message. Here the same callback
                    // is called for N no. of messages. But in our case
                    // it's the same callback every time, so this should be
                    // fine - it just avoids some function calls.
                    f(msg);
                }
            });
    }
};
Let's say I have a bunch of TCP connections and there is a handler class
for each connection. Let's name them Connection1, Connection2, ...
class Connection {
    virtual int type() = 0;
};

class Connection1 : public Connection {
    shared_ptr<IpcProtocol> ipc_;

    int type ()
    {
        return 1;
    }

    void start ()
    {
        ipc_.connect([self = shared_from_this()](){ self->connected(); });
        ipc_.read(
            [self = shared_from_this()](msg, err) {
                if (!err) {
                    self->process(msg);
                } else {
                    self->error();
                }
            });
    }

    void connected ()
    {
        observer.notify_connected(shared_from_this());
    }

    void error ()
    {
        observer.notify_error(shared_from_this());
    }
};
This pattern repeats for all connections in one way or another.
Messages are processed by the connection class itself, but it notifies an observer of other events [connect, error]. The reasons:
Restart the connection every time it disconnects.
Several components need to know when the connection is established so that they can
send their initial requests/configuration to the server.
There are things that need to be done based on the connection status of multiple connections,
e.g. if connection1 and connection2 are established, then start connection3, etc.
I added an Observer class in the middle so that the listeners don't have to reconnect directly to the connection every time it is restarted. Each time a connection breaks, the connection class is deleted and a new one is created.
class Listeners {
public:
    virtual void notify_error(shared_ptr<Connection>) = 0;
    virtual void notify_connect(shared_ptr<Connection>) = 0;
    virtual bool interested(int type) = 0;
};

class Observer {
    std::vector<Listeners *> listeners_;
public:
    void notify_connect(shared_ptr<Connection> connection)
    {
        for (auto listener : listeners_) {
            if (listener->interested(connection->type())) {
                listener->notify_connect(connection);
            }
        }
    }
};
Now a rough prototype of this works. But I was wondering whether this class design is
any good. There are multiple streaming servers which will continuously produce state and send it to my module to program that state into the h/w. This needs to be extensible, as more clients will be added in the future.
Threading
The legacy code had one thread per TCP connection and this worked fine. Here I am trying to handle multiple connections on the same thread. Still, there will be multiple threads calling io_service::run(), so the observer will run on multiple threads. I am planning to have a mutex per Listener so that listeners won't get multiple events concurrently.
HTTP implements a protocol over TCP, so the asio HTTP Server examples are a good starting point for your design, especially HTTP Server 2, HTTP Server 3 and HTTP Server 4.
Note that connection lifetime is likely to be an issue, especially since you intend to use class member functions as handlers; see the question and answers here: How to design proper release of a boost::asio socket or wrapper thereof.
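On the threading point from the question: instead of a mutex per Listener, a common asio idiom is to serialise each listener's callbacks through a strand. A minimal sketch; the StrandedListener name and the strand member are assumptions layered on top of the question's design:
#include <boost/asio.hpp>
#include <memory>

class Connection;   // from the question's design

class StrandedListener {
public:
    explicit StrandedListener(boost::asio::io_service& ios) : strand_(ios) {}

    // May be called from any thread running io_service::run(); the real work is
    // posted through the strand, so this listener never handles two events at once.
    void notify_connect(std::shared_ptr<Connection> connection) {
        strand_.post([this, connection]() { handle_connect(connection); });
    }

private:
    void handle_connect(std::shared_ptr<Connection> /*connection*/) {
        // Safe to touch this listener's state here without a mutex.
    }

    boost::asio::io_service::strand strand_;
};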

Using SSL sockets and non-SSL sockets simultaneously in Boost.Asio?

I'm in the process of converting a library to Boost.Asio (which has worked very well so far), but I've hit something of a stumbling block with regards to a design decision.
Boost.Asio provides support for SSL, but a boost::asio::ssl::stream<boost::asio::ip::tcp::socket> type must be used for the socket. My library has the option of connecting to SSL servers or connecting normally, so I've made a class with two sockets like this:
class client : public boost::enable_shared_from_this<client>
{
public:
    client(boost::asio::io_service & io_service, boost::asio::ssl::context & context)
        : socket_(io_service), secureSocket_(io_service, context) {}
private:
    boost::asio::ip::tcp::socket socket_;
    boost::asio::ssl::stream<boost::asio::ip::tcp::socket> secureSocket_;
};
And within there are a bunch of handlers that reference socket_. (For example, I have socket_.is_open() in several places, which would need to become secureSocket_.lowest_layer().is_open() for the other socket.)
Can anyone suggest the best way to go about this? I'd rather not create a separate class just for this purpose, because that would mean duplicating a lot of code.
Edit: I rephrased my original question because I misunderstood the purpose of an OpenSSL function.
I'm rather late in answering this question, but I hope this will help others. Sam's answer contains the germ of an idea, but doesn't quite go far enough in my opinion.
The idea came about from the observation that asio wraps an SSL socket in a stream. All this solution does is wrap the non-SSL socket similarly.
The desired result of having a uniform external interface between SSL and non-SSL sockets is done with three classes. One, the base, effectively defines the interface:
class Socket {
public:
    virtual boost::asio::ip::tcp::socket &getSocketForAsio() = 0;

    static Socket* create(boost::asio::io_service& iIoService, boost::asio::ssl::context *ipSslContext) {
        // Obviously this has to be in a separate source file since it makes reference to subclasses
        if (ipSslContext == nullptr) {
            return new NonSslSocket(iIoService);
        }
        return new SslSocket(iIoService, *ipSslContext);
    }

    size_t _read(void *ipData, size_t iLength) {
        return boost::asio::read(getSocketForAsio(), boost::asio::buffer(ipData, iLength));
    }
    size_t _write(const void *ipData, size_t iLength) {
        return boost::asio::write(getSocketForAsio(), boost::asio::buffer(ipData, iLength));
    }
};
Two sub-classes wrap SSL and non-SSL sockets.
typedef boost::asio::ssl::stream<boost::asio::ip::tcp::socket> SslSocket_t;

class SslSocket: public Socket, private SslSocket_t {
public:
    SslSocket(boost::asio::io_service& iIoService, boost::asio::ssl::context &iSslContext) :
            SslSocket_t(iIoService, iSslContext) {
    }
private:
    boost::asio::ip::tcp::socket &getSocketForAsio() {
        return next_layer();
    }
};
and
class NonSslSocket: public Socket, private Socket_t {
public:
    NonSslSocket(boost::asio::io_service& iIoService) :
            Socket_t(iIoService) {
    }
private:
    boost::asio::ip::tcp::socket &getSocketForAsio() {
        return next_layer();
    }
};
Every time you call an asio function use getSocketForAsio(), rather than pass a reference to the Socket object. For example:
boost::asio::async_read(pSocket->getSocketForAsio(),
    boost::asio::buffer(&buffer, sizeof(buffer)),
    boost::bind(&Connection::handleRead,
        shared_from_this(),
        boost::asio::placeholders::error,
        boost::asio::placeholders::bytes_transferred));
Notice that the Socket is stored as a pointer; I cannot think how else the polymorphism can be hidden.
The penalty (which I don't think is great) is the extra level of indirection used to obtain non-SSL sockets.
There's a couple of ways you can do this. In the past, I've done something like:
if ( sslEnabled ) {
    boost::asio::async_write( secureSocket_ );
} else {
    boost::asio::async_write( secureSocket_.lowest_layer() );
}
This can get messy pretty quickly with a lot of if/else statements. You could also create an abstract class (pseudo code, oversimplified):
class Socket
{
public:
    virtual void connect( ... );
    virtual void accept( ... );
    virtual void async_write( ... );
    virtual void async_read( ... );
private:
    boost::asio::ip::tcp::socket socket_;
};
Then create a derived class SecureSocket to operate on a secureSocket_ instead of socket_. I don't think it would be duplicating a lot of code, and it's probably cleaner than if/else whenever you need to async_read or async_write.
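Fleshing that out a little, a rough sketch of the virtual-dispatch variant could look like the following; the PlainSocket/SecureSocket names and the std::function-based handler type are illustrative choices, not an existing API:
#include <boost/asio.hpp>
#include <boost/asio/ssl.hpp>
#include <cstddef>
#include <functional>

using Handler = std::function<void(const boost::system::error_code&, std::size_t)>;

class Socket {
public:
    virtual ~Socket() = default;
    virtual void async_read(void* data, std::size_t size, Handler handler) = 0;
    virtual void async_write(const void* data, std::size_t size, Handler handler) = 0;
};

class PlainSocket : public Socket {
public:
    explicit PlainSocket(boost::asio::io_service& ios) : socket_(ios) {}
    void async_read(void* data, std::size_t size, Handler handler) override {
        boost::asio::async_read(socket_, boost::asio::buffer(data, size), handler);
    }
    void async_write(const void* data, std::size_t size, Handler handler) override {
        boost::asio::async_write(socket_, boost::asio::buffer(data, size), handler);
    }
private:
    boost::asio::ip::tcp::socket socket_;
};

class SecureSocket : public Socket {
public:
    SecureSocket(boost::asio::io_service& ios, boost::asio::ssl::context& ctx)
        : socket_(ios, ctx) {}
    void async_read(void* data, std::size_t size, Handler handler) override {
        boost::asio::async_read(socket_, boost::asio::buffer(data, size), handler);
    }
    void async_write(const void* data, std::size_t size, Handler handler) override {
        boost::asio::async_write(socket_, boost::asio::buffer(data, size), handler);
    }
private:
    boost::asio::ssl::stream<boost::asio::ip::tcp::socket> socket_;
};
Connecting and the SSL handshake still need per-type handling, but read/write call sites only ever see the Socket interface.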
The problem, of course, is that tcp::socket and the ssl "socket" don't share any common ancestor. But most functions for using the socket once it's open share exactly the same syntax. The cleanest solution is thus with templates.
template <typename SocketType>
void doStuffWithOpenSocket(SocketType& socket) {
    boost::asio::write(socket, ...);
    boost::asio::read(socket, ...);
    boost::asio::read_until(socket, ...);
    // etc...
}
This function will work with normal tcp::sockets and also with secure SSL sockets:
boost::asio::ip::tcp::socket socket_;
// socket_ opened normally ...
doStuffWithOpenSocket<boost::asio::ip::tcp::socket>(socket_); // works!
boost::asio::ssl::stream<boost::asio::ip::tcp::socket> secureSocket_;
// secureSocket_ opened normally (including handshake) ...
doStuffWithOpenSocket(secureSocket_); // also works, with (different) implicit instantiation!
// shutdown the ssl socket when done ...
For reference, the Socket_t used by NonSslSocket above would compile with something like this:
typedef boost::asio::buffered_stream<boost::asio::ip::tcp::socket> Socket_t;