Boost.Asio async_read (async_write) wrapper - C++

I'm trying to code a wrapper over a boost::asio::ip::tcp::socket.
Something like this:
class Socket {
public:
    void async_read(AsyncReadStream & s,
                    const boost::asio::MutableBufferSequence & buffers,
                    CompletionCondition completion_condition,
                    ReadHandler handler) {}
};
So I would be able to use SSL and non-SSL streams seamlessly...
The only thing is that I can't seem to find the definition of each parameter to pass them on to boost::asio::async_read (namespaces, etc.).
Any help would be appreciated! Thanks

Your main requirement seems to be "use SSL and non-SSL streams seamlessly." To do that, you can wrap the various stream types in a way that exposes the functions you need to use.
Part of how you do that is deciding how you're going to do memory management. MutableBufferSequence is not a type; it defines a set of requirements that a type must meet to be used in that context.
If you are going to use one of a smallish number of buffer approaches, you can just use them in the interface (as long as each meets the MutableBufferSequence/ConstBufferSequence requirements, as appropriate). The downside of this is that buffer management becomes part of the interface.
If you want to keep asio's buffer-management flexibility, then you could:
Template your code on the stream type in order to achieve the seamless SSL/non-SSL requirement.
Create a wrapper for the various stream types with methods templated on the buffer type, as sketched below.
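For illustration, here is a minimal sketch of that second option, assuming only the documented boost::asio free functions; the class name Socket and the simple pass-through are illustrative, not a finished design:

#include <boost/asio.hpp>

class Socket {
public:
    template <typename AsyncReadStream,
              typename MutableBufferSequence,
              typename CompletionCondition,
              typename ReadHandler>
    void async_read(AsyncReadStream & s,
                    const MutableBufferSequence & buffers,
                    CompletionCondition completion_condition,
                    ReadHandler handler)
    {
        // Forward to the free function; any type meeting the
        // AsyncReadStream requirements is accepted, so tcp::socket and
        // ssl::stream<tcp::socket> both work through this one wrapper.
        boost::asio::async_read(s, buffers, completion_condition, handler);
    }
};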
(Updated response; I shouldn't try to respond to a question like this when I have less than two minutes!)

Related

Why is there no asio::ssl::iostream? (and how to implement it)

I'm currently exploring the Asio library and have working code for regular TCP connections. I used asio::ip::tcp::iostream objects, since the stuff I want to transmit can already serialize to/deserialize from iostreams, so this was really handy and worked well for me.
I then tried to switch to SSL connections and that's when everything turned crazy. There is apparently no built-in support to get the same iostream interface that all other protocols support for a secured connection. From a design perspective this is really perplexing to me. Is there any reason why this is the case?
I am aware of the discussion in "How to create a boost ssl iostream?", which concludes with a wrapper class to provide iostream functionality using boost. Apart from the fact that, according to a comment, that implementation is flawed, it also does not give the same interface as the other protocols (a basic_socket_iostream), which also allows one to e.g. set expiration times and close the connection. (I am also using asio in the non-boost version and want to avoid adding boost as an additional dependency if possible.)
So, I guess my questions are:
What exactly would I need to implement to get a basic_socket_iostream for an SSL connection? I assume it would be a derivation of asio::basic_streambuf or asio::basic_socket_streambuf, but I somehow can't figure out how they work and how they would need to be tweaked; there's just a bunch of weird pointer movement and buffer allocation, and the documentation is quite unclear to me on what happens when, exactly, to achieve what...
Why is this not already present in the first place? It seems very unreasonable to have this one protocol behave entirely differently from every other, and thus require major refactoring to change a tcp::iostream based project to support secured connections.
> Well, the problem I have is that the ssl::stream really does neither: it doesn't give me a socket, but it also doesn't give me a stream interface that would be compatible with those available for the other protocols and, yes, in that sense it behaves very differently from the others (for no apparent reason)
I don't think the stream behaves any differently from the other protocols (see
https://www.boost.org/doc/libs/1_66_0/doc/html/boost_asio/overview/core/streams.html):
Streams, Short Reads and Short Writes
Many I/O objects in Boost.Asio are stream-oriented. This means that:
There are no message boundaries. The data being transferred is a continuous sequence of bytes.
Read or write operations may transfer fewer bytes than requested. This is referred to as a short read or short write.
Objects that provide stream-oriented I/O model one or more of the following type requirements:
SyncReadStream, where synchronous read operations are performed using a member function called read_some().
AsyncReadStream, where asynchronous read operations are performed using a member function called async_read_some().
SyncWriteStream, where synchronous write operations are performed using a member function called write_some().
AsyncWriteStream, where asynchronous write operations are performed using a member function called async_write_some().
Examples of stream-oriented I/O objects include ip::tcp::socket, ssl::stream<>, posix::stream_descriptor, windows::stream_handle, etc.
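To make the short-read point concrete, here is a minimal sketch of roughly what the boost::asio::read() composed operation does for you; it works with any type meeting the SyncReadStream requirements (tcp::socket, ssl::stream<tcp::socket>, and so on):

#include <boost/asio.hpp>
#include <cstddef>

template <typename SyncReadStream>
std::size_t read_exactly(SyncReadStream & s, char * data, std::size_t n)
{
    std::size_t total = 0;
    while (total < n)  // a short read returns control here with total < n
        total += s.read_some(boost::asio::buffer(data + total, n - total));
    return total;
}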
Perhaps the confusion is that you're comparing to the iostream interface, which is simply not the same concept (it comes from the standard library).
As to the question of how you could make an iostream-compatible stream wrapper for the SSL stream, I cannot devise an answer without consulting the documentation more and using a compiler, neither of which I have on hand at the moment.
I think there is room for improvement in the library here. If you read the ip::tcp::iostream class (i.e. basic_socket_iostream<ip::tcp>), you'll see that it has two base classes:
private detail::socket_iostream_base<ip::tcp>
public std::basic_iostream<char>
The former contains a basic_socket_streambuf<ip::tcp> (a derived class of std::streambuf and basic_socket<ip::tcp>), whose address is passed to the latter at construction-time.
For the most part, basic_socket_streambuf<ip::tcp> performs the actual socket operations via its basic_socket<ip::tcp> base class. However, there is the connect_to_endpoints() member function that jumps the abstraction and calls several low-level functions from the detail::socket_ops namespace directly on socket().native_handle(). (This seems to have been introduced in Git commit b60e92b13e.) Those functions will only work on TCP sockets, even though the class is a template for any protocol.
Until I discovered this issue, my plan to integrate SSL support as an iostream/streambuf was to provide an ssl protocol class and a basic_socket<ssl> template specialization to wrap the existing ssl::context and ssl::stream<ip::tcp::socket> classes. Something like this (won't compile):
#include <boost/asio/ip/tcp.hpp>
#include <boost/asio/basic_socket.hpp>
#include <boost/asio/ssl.hpp>

namespace boost {
namespace asio {
namespace ip {

class ssl
    : public tcp // for reuse (I'm lazy!)
{
public:
    typedef basic_socket_iostream<ssl> iostream;
    // more things as needed ...
};

} // namespace ip

template <>
class basic_socket<ip::ssl>
{
    class SslContext
    {
        ssl::context ctx;
    public:
        SslContext() : ctx(ssl::context::sslv23_client)
        {
            ctx.set_options(ssl::context::default_workarounds);
            ctx.set_default_verify_paths();
        }
        ssl::context & context() { return ctx; }
    } sslContext;

    ssl::stream<ip::tcp::socket> sslSocket;

public:
    explicit basic_socket(const executor & ex)
        : sslSocket(ex, sslContext.context())
    {}

    executor get_executor() noexcept
    {
        return sslSocket.lowest_layer().get_executor();
    }

    void connect(const ip::tcp::endpoint & endpoint_)
    {
        sslSocket.next_layer().connect(endpoint_);
        sslSocket.lowest_layer().set_option(ip::tcp::no_delay(true));
        sslSocket.set_verify_mode(ssl::verify_peer);
        sslSocket.set_verify_callback(
            ssl::rfc2818_verification("TODO: pass the domain here through the stream/streambuf somehow"));
        sslSocket.handshake(ssl::stream<ip::tcp::socket>::client);
    }

    void close()
    {
        sslSocket.shutdown();
        sslSocket.next_layer().close();
    }
};

} // namespace asio
} // namespace boost
But due to the design issue I'll have to specialize basic_socket_streambuf<ip::ssl> as well, to avoid the detail::socket_ops routines. (I should also avoid injecting the ssl protocol class into the boost::asio::ip namespace, but that's a side concern.)
Haven't spent much time on this, but it seems doable. Fixing basic_socket_streambuf<>::connect_to_endpoints() first should help greatly.

Difference between lowest_layer() and next_layer() from Boost Asio SSL Stream

The documentation doesn't seem to tell much: lowest_layer(), next_layer().
What is the difference between them and when to use each?
To answer this, the first thing to remember is that boost::asio::ssl::stream is a template class. Usually it looks like boost::asio::ssl::stream<boost::asio::ip::tcp::socket>, i.e. it is implemented on top of a boost::asio::ip::tcp::socket. That socket will be the next_layer for the boost::asio::ssl::stream. On the other side, lowest_layer will always be a basic_socket (it's described in the docs).
It's a little ambiguous, especially when you see in the headers that tcp::socket is a typedef for basic_stream_socket<tcp>, which directly inherits from basic_socket. And, in OOP terms, you could say "the next_layer IS the lowest_layer"...
But let's take another case, where you create an ssl::stream<MyOwnClass>. In this case next_layer is MyOwnClass, which should control data reads/writes. And lowest_layer will be whatever MyOwnClass declares via its lowest_layer_type typedef.
UPD: When to use each: use next_layer for reads/writes that must bypass encryption (you don't need this for a plain SSL connection, but it is required before a STARTTLS session), and use lowest_layer to control the underlying socket, as sketched below.
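For illustration, a minimal sketch of the layering in a typical TLS client; the host name, port, and option choices are illustrative only:

#include <boost/asio.hpp>
#include <boost/asio/ssl.hpp>

namespace asio = boost::asio;
using asio::ip::tcp;

int main()
{
    asio::io_context io;
    asio::ssl::context ctx(asio::ssl::context::tls_client);
    asio::ssl::stream<tcp::socket> stream(io, ctx);

    // lowest_layer(): the raw socket - connect, set options, close.
    tcp::resolver resolver(io);
    asio::connect(stream.lowest_layer(), resolver.resolve("example.com", "443"));
    stream.lowest_layer().set_option(tcp::no_delay(true));

    // next_layer(): the tcp::socket directly beneath the SSL layer -
    // plaintext I/O, e.g. a STARTTLS exchange before the handshake:
    // asio::write(stream.next_layer(), asio::buffer(starttls_command));

    stream.handshake(asio::ssl::stream_base::client);
    // From here on, read/write on `stream` itself for encrypted I/O.
}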

Custom C++ stream for custom type

I've read about custom streams for C++ but it seems that generally people inherit from std::streambuf, std::istream, and std::ostream. By inspecting those types' declarations it becomes clear that these are meant for characters:
typedef basic_streambuf<char> streambuf;
The docs confirm this:
Input stream objects can read and interpret input from sequences of characters.
Obviously, that makes sense. I'm wondering what the correct way of implementing a stream for other types would be. I do not want to allow text, but other forms of binary input/output (I have specific formats). The obvious step seems to be to inherit from the basic variants of the above (basic_streambuf, basic_istream, and basic_ostream) and use whatever type I see fit as the template parameter. I failed to find confirmation that this would be the right procedure. So, is it?
Edit for clarification: I have a class called Segment. These streams will send/receive segments and only segments over a WiFi connection as these are used in the communication protocol. Sending anything else would break the protocol. This means that the stream cannot support other types.
This is not an answer to your question in terms of inheriting from std::basic_* with non-char types. But following the comments and given your application, I am questioning the need to reimplement the whole standard stream machinery for your Segment type, when you can simply define a class with a stream operator:
class SegmentStream
{
public:
    SegmentStream& operator<< ( const Segment& s );
};
Better yet, you could clarify your code by defining methods send and recv instead of operator>> and operator<<.
Or perhaps you could explain why this would not be sufficient and why you specifically want to use standard streams?
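In case the simple class is enough, here is a fuller sketch of it; Segment's serialize()/deserialize() members, its wire_size constant, and the tcp::socket transport are all assumptions standing in for the asker's actual types:

#include <boost/asio.hpp>
#include <vector>

class SegmentStream
{
public:
    explicit SegmentStream(boost::asio::ip::tcp::socket& sock) : sock_(sock) {}

    SegmentStream& operator<<(const Segment& s)
    {
        std::vector<char> bytes = s.serialize();           // assumed Segment API
        boost::asio::write(sock_, boost::asio::buffer(bytes));
        return *this;
    }

    SegmentStream& operator>>(Segment& s)
    {
        std::vector<char> bytes(Segment::wire_size);       // assumed fixed size
        boost::asio::read(sock_, boost::asio::buffer(bytes));
        s = Segment::deserialize(bytes);                   // assumed Segment API
        return *this;
    }

private:
    boost::asio::ip::tcp::socket& sock_;
};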

How to execute a method in another thread?

I'm looking for a solution for this problem in C or C++.
edit: To clarify. This is on a Linux system. Linux-specific solutions are absolutely fine. Cross-platform is not a concern.
I have a service that runs in its own thread. This service is a class with several methods, some of which need to run in the service's own thread rather than in the caller's thread.
Currently I'm using wrapper methods that create a structure with input and output parameters, insert the structure on a queue and either return (if a "command" is asynchronous) or wait for its execution (if a "command" is synchronous).
On the thread side, the service wakes, pops a structure from the queue, figures out what to execute and calls the appropriate method.
This implementation works but adding new methods is quite cumbersome: define wrapper, structure with parameters, and handler. I was wondering if there is a more straightforward means of coding this kind of model: a class method that executes on the class's own thread, instead of in the caller's thread.
edit - kind of conclusion:
It seems that there's no de facto way to implement what I asked that doesn't involve extra coding effort.
I'll stick with what I came up with; it ensures type safety, minimizes locking, allows sync and async calls, and the overhead is fairly modest.
On the other hand, it requires a bit of extra coding, and the dispatch mechanism may become bloated as the number of methods increases. Registering the dispatch methods on construction, or having the wrappers do that work, seems to solve the issue, remove a bit of overhead and also remove some code.
My standard reference for this problem is here.
Implementing a Thread-Safe Queue using Condition Variables
As #John noted, this uses Boost.Thread.
I'd be careful about the synchronous case you described here. It's easy to get perf problems if the producer (the sending thread) waits for a result from the consumer (the service thread). What happens if you get 1000 async calls, filling up the queue with a backlog, followed by a sync call from each of your producer threads? Your system will 'play dead' until the queue backlog clears, freeing up those sync callers. Try to decouple them using async only, if you can.
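For reference, here is a minimal sketch of the condition-variable queue that article describes, using the same Boost.Thread primitives; the class name ConcurrentQueue is illustrative:

#include <boost/thread.hpp>
#include <queue>

template <typename T>
class ConcurrentQueue
{
public:
    void push(const T& item)
    {
        {
            boost::lock_guard<boost::mutex> lock(m_);
            q_.push(item);
        }
        cv_.notify_one();           // wake one waiting consumer
    }

    void wait_and_pop(T& item)
    {
        boost::unique_lock<boost::mutex> lock(m_);
        while (q_.empty())          // guard against spurious wake-ups
            cv_.wait(lock);
        item = q_.front();
        q_.pop();
    }

private:
    std::queue<T> q_;
    boost::mutex m_;
    boost::condition_variable cv_;
};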
There are several ways to achieve this, depending upon the complexity you want to accept. Complexity of the code is directly proportional to the flexibility desired. Here's a simple one (and quite well used):
Define a class corresponding to each piece of functionality your server exposes.
Each of these classes implements a function called execute and takes basic structures for input args and output args.
Inside the service, register these method classes at initialization time.
Once a request comes to the thread, it will have only two args, Input and Output, which are the base classes for the more specialized arguments required by the different method classes.
Then you write your service class as a mere delegator: it takes the incoming request and passes it on to the respective method class, based on the ID or name of the method (recorded during the initial registration).
I hope this makes sense; a very good example of this approach is XmlRpc++ (a C++ implementation of XmlRpc; you can get the source code from SourceForge).
To recap:
#include <cstdint>
#include <string>

struct Input {
    virtual ~Input () = 0;
};
Input::~Input () {}   // a pure virtual destructor still needs a definition

struct Output {
    virtual ~Output () = 0;
};
Output::~Output () {}

struct MethodInterface {
    virtual ~MethodInterface () {}
    virtual int32_t execute (Input* __input, Output* __output) = 0;
};

// Write specialized method classes taking specialized input/output classes
class MyService {
public:
    void registerMethod (std::string __method_name, MethodInterface* __method);
    // external i/f
    int32_t execute (std::string __method, Input* __input, Output* __output);
};
You will still be using the queue mechanism, but you won't need any wrappers.
IMHO, if you want to decouple method execution and thread context, you should use the Active Object Pattern (AOP).
However, you would need to use the ACE Framework, which supports many OSes, e.g. Windows, Linux, VxWorks.
You can find detailed information here.
Also, AOP is a combination of the Command, Proxy and Observer patterns; if you know the details of them, you may implement your own AOP. Hope it helps.
In addition to using Boost.Thread, I would look at boost::function and boost::bind. That said, it seems fair to have untyped (void) arguments passed to the target methods, and let those methods cast to the correct type (a typical idiom for languages like C#).
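For illustration, here is a sketch of how boost::function and boost::bind remove the per-method wrapper structs; it reuses the hypothetical ConcurrentQueue sketched earlier, and the Service interface is illustrative, not a prescribed design:

#include <boost/bind.hpp>
#include <boost/function.hpp>

typedef boost::function<void()> Task;

class Service
{
public:
    void post(const Task& t) { queue_.push(t); }  // callable from any thread

    void run()                                    // the service thread's loop
    {
        for (;;)
        {
            Task t;
            queue_.wait_and_pop(t);
            t();                                  // executes in this thread
        }
    }

private:
    ConcurrentQueue<Task> queue_;
};

// Usage: any method call becomes a queued nullary functor, with no
// per-method parameter struct (Widget here is hypothetical):
//   service.post(boost::bind(&Widget::frob, &widget, 42));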
Hey now Rajivji, I think you have it upside-down: complexity of code is inversely proportional to flexibility. The more complex your data structures and algorithms are, the more restrictions you are placing on acceptable inputs and behaviour.
To the OP: your description seems perfectly general, and is essentially the only solution, although there are different encodings of it. The simplest may be to derive a class from:
struct Xqt { virtual void xqt(){} virtual ~Xqt(){} };
and then have a thread-safe queue of pointers to Xqt. The service thread then just pops the queue into px, calls px->xqt(), and then deletes px. The most important derived class is this one:
struct Dxqt : Xqt {
    Xqt *delegate;
    Dxqt(Xqt *d) : delegate(d) {}
    void xqt() { delegate->xqt(); }
};
because "all problems in Computer Science can be solved by one more level of indirection" and in particular this class doesn't delete the delegate. This is much better than using a flag, for example, to determine if the closure object should be deleted by the server thread.

How do I build a filtered_streambuf based on basic_streambuf?

I have a project that requires me to insert a filter into a stream so that outgoing data will be modified according to the filter. After some research, it seems that what I want to do is create a filtered_streambuf like this:
template <class StreamBuf>
class filtered_streambuf : public StreamBuf
{ ... };
And then insert a filtered_streambuf<> into whichever stream I need to be filtered. My problem is that I don't know what invariants I need to maintain while filtering a stream, in order to ensure that:
Derived classes can work as expected. In particular, I may find I have filtered_streambufs built over other filtered_streambufs.
All the various stream inserters, extractors and manipulators work as expected.
The trouble is that I just can't seem to work out what the minimal interface is that I need to supply in order to guarantee that an iostream will have what it needs to work correctly.
In particular, do I need to fake the movement of the protected pointer variables, or not? Do I need a fake data buffer, or not? Can I just override the public functions, rewriting them in terms of the base streambuf, or is that too simplistic?
Boost.Iostreams may be useful to you.
From the documentation:
Boost.Iostreams has three aims:
To make it easy to create standard C++ streams and stream buffers for accessing new Sources and Sinks.
To provide a framework for defining Filters and attaching them to standard streams and stream buffers.
To provide a collection of ready-to-use Filters, Sources and Sinks.
I've barely used that library myself, so I can't comment any further.
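Still, to give a flavour of the approach, here is a minimal sketch of a Boost.Iostreams output filter; the XOR masking and the file sink are illustrative stand-ins for the real transformation and destination:

#include <boost/iostreams/concepts.hpp>
#include <boost/iostreams/device/file.hpp>
#include <boost/iostreams/filtering_stream.hpp>
#include <boost/iostreams/operations.hpp>

namespace io = boost::iostreams;

// Each outgoing byte passes through put() on its way to the next sink.
struct xor_output_filter : io::output_filter
{
    template <typename Sink>
    bool put(Sink& dest, char c)
    {
        return io::put(dest, static_cast<char>(c ^ 0x5A));
    }
};

int main()
{
    io::filtering_ostream out;
    out.push(xor_output_filter());          // filters first...
    out.push(io::file_sink("masked.bin"));  // ...device last
    out << "outgoing data is modified on the way through";
}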