Can I use Boost.Asio for HTTPS requests? - C++

Can I use boost asio for HTTPS requests? I can make GET and POST HTTP requests, but what about HTTPS? How can I handle it? Can somebody provide me a code snippet?

Yes, you can.
http://www.boost.org/doc/libs/1_47_0/doc/html/boost_asio/example/ssl/client.cpp
Simply integrate it into your HTTP request code.

Asio offers basic SSL support through OpenSSL; a code example is available as part of the documentation.
In general, HTTPS is quite similar to HTTP, except that you have to perform an SSL/TLS handshake to initialize the connection. Asio offers an implementation of this.
The actual communication is straightforward: you simply send your HTTP stream over the encrypted connection, so the communication patterns themselves stay the same.
Therefore, if the functionality offered by Asio is not flexible enough, you can also write your own encryption layer on top of Asio using OpenSSL (although I would not recommend this unless you already have a fair amount of experience with encryption).
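To make that concrete, here is a minimal sketch of a synchronous HTTPS GET along the lines of the linked example, assuming a reasonably recent Boost (older versions spell a few names differently, e.g. io_service instead of io_context); example.com and the request line are just placeholders:

    #include <boost/asio.hpp>
    #include <boost/asio/ssl.hpp>
    #include <array>
    #include <iostream>
    #include <string>

    int main() {
        namespace asio = boost::asio;
        namespace ssl = boost::asio::ssl;

        asio::io_context io;
        ssl::context ctx(ssl::context::sslv23_client);
        ctx.set_default_verify_paths();                 // trust the system's CA certificates

        // Resolve the host and connect the underlying TCP socket.
        asio::ip::tcp::resolver resolver(io);
        auto endpoints = resolver.resolve("example.com", "443");
        ssl::stream<asio::ip::tcp::socket> stream(io, ctx);
        asio::connect(stream.lowest_layer(), endpoints);

        // The extra step compared to plain HTTP: the SSL/TLS handshake.
        stream.set_verify_mode(ssl::verify_peer);
        stream.set_verify_callback(ssl::rfc2818_verification("example.com"));
        stream.handshake(ssl::stream_base::client);

        // From here on it is ordinary HTTP, just written to the encrypted stream.
        std::string request =
            "GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n";
        asio::write(stream, asio::buffer(request));

        // Read until the server closes the connection and dump the response.
        boost::system::error_code ec;
        std::array<char, 4096> buf;
        while (std::size_t n = stream.read_some(asio::buffer(buf), ec))
            std::cout.write(buf.data(), n);
    }

The only HTTPS-specific parts are the ssl::context, the ssl::stream wrapper and the handshake() call; everything after that is the same request/response exchange you already use for plain HTTP.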

Related

Easy way to "nudge" a server to keep a connection open?

Okay, so a little context:
I have an app running on an embedded system that sends a few different requests over HTTP (using libcurl in C++) at the following intervals:
5 minutes
15 minutes
1 hour
24 hours
My goal: Reduce data consumption (runs over cellular)
We have both client and server side TLS authentication, so the handshake is costly. The idea is that we use persistent connections (at least for the shorter interval files) to avoid doing the handshake every time.
Unfortunately, after much tinkering I've figured out that the server is closing the connection before the intervals pass. Maybe this is something we can extend? I'll have to talk to the server side guys.
I was under the impression that was the reason the "TCP keep-alive" packets existed, but supposedly those "check the connection" not "keep it open" like the name suggests.
My idea is this:
Have my app send a packet (as small as possible) every 2 minutes or so (however long the timeout is) to "nudge" the connection into staying open.
My questions are:
Does that make any sense?
I don't suppose there is an easy way to do this in libcurl, is there?
If so, how small could we get the request?
Is there an even easier way to do it? My only issue here is that all the connection stuff "lives" in libcurl.
Thanks!
It would be easier to give a more precise answer if you gave a little more detail on your application architecture. For example, is it a RESTful API? Is the use of HTTP absolutely mandatory? If so, what HTTP server are you using (nginx, apache, ...)? Could you consider websockets as an alternative to plain HTTP?
If you are at liberty to use something other than regular HTTP or HTTPS - and to use something other than libcurl on the client side - you would have more options.
If, on the other hand, you are constrained to both
use HTTP (rather than a raw TCP connection or websockets), and
use libcurl
then I think your task is a good bit more difficult - but maybe still possible.
One of your first challenges is that the typical timeouts for an HTTP connection are quite low (as low as a few seconds for Apache 2). If you can configure the server, you can increase this.
I was under the impression that was the reason the "TCP keep-alive" packets existed, but supposedly those "check the connection" not "keep it open" like the name suggests.
Your terminology is ambiguous here. Are you referring to TCP keep-alive packets or persistent HTTP connections? These don't necessarily have anything to do with each other. The former is an optional mechanism in TCP (which is disabled by default). The latter is an application-layer concept which is specific to HTTP - and may be used regardless of whether keep-alive packets are being used at the transport layer.
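For what it's worth, if you do decide you want transport-layer keep-alive probes as well, libcurl exposes them as options on the easy handle; the idle/interval values below are arbitrary placeholders:

    #include <curl/curl.h>

    // Enable TCP keep-alive probes on the connection used by this easy handle.
    // These stop middleboxes from silently dropping an idle connection, but they
    // do not, by themselves, prevent the HTTP server from closing it once its own
    // keep-alive timeout expires.
    void enable_tcp_keepalive(CURL *curl) {
        curl_easy_setopt(curl, CURLOPT_TCP_KEEPALIVE, 1L);   // turn probing on
        curl_easy_setopt(curl, CURLOPT_TCP_KEEPIDLE, 120L);  // idle seconds before the first probe
        curl_easy_setopt(curl, CURLOPT_TCP_KEEPINTVL, 60L);  // seconds between subsequent probes
    }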
My only issue here is that all the connection stuff "lives" in libcurl.
The problem with using libcurl is that it is first and foremost a transfer library. I don't think it is tailored for long-running, persistent TCP connections. Nonetheless, according to Daniel Stenberg (the author of libcurl), the library will automatically try to reuse existing connections where possible - as long as you re-use the same easy handle.
If so, how small could we get the request?
Assuming you are using a 'ping' endpoint on your server - one which accepts no data and returns a 204 (success but no content) response - the overhead in the application layer would be the size of the HTTP request headers plus the size of the HTTP response headers. Maybe you could get it down to 200-300 bytes, or thereabouts.
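Putting the handle-reuse and 'ping' ideas together, a rough sketch of the nudging loop might look like the following; the /ping URL and the two-minute interval are assumptions for illustration, not anything your server necessarily provides:

    #include <curl/curl.h>
    #include <chrono>
    #include <thread>

    int main() {
        curl_global_init(CURL_GLOBAL_DEFAULT);

        // Keep one easy handle alive for the whole program so libcurl can hold on
        // to the underlying TLS connection and reuse it across requests.
        CURL *curl = curl_easy_init();
        curl_easy_setopt(curl, CURLOPT_NOBODY, 1L);  // HEAD-style request: no response body

        for (;;) {
            // Hypothetical lightweight endpoint that just returns 204 No Content.
            curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/ping");
            if (curl_easy_perform(curl) != CURLE_OK) {
                // The connection was lost (or the server closed it); the next
                // curl_easy_perform() reconnects and redoes the TLS handshake
                // transparently, so there is nothing special to do here.
            }
            std::this_thread::sleep_for(std::chrono::minutes(2));
        }
        // Never reached in this sketch; a real program would call
        // curl_easy_cleanup() and curl_global_cleanup() on shutdown.
    }

Whether this is worth it depends on how the cost of the nudge requests compares to the TLS handshakes you are trying to avoid.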
Alternatives to (plain) HTTP
If you are using a RESTful API, this paradigm sort of goes against the idea of a persistent TCP connection - although I cannot think of any reason why it would not work.
You might consider websockets as an alternative, but - again - libcurl is not ideal for this. Although I know very little about websockets, I believe they would offer some advantages.
Compared to plain HTTP, websockets offer:
significantly less overhead than HTTP per message;
the connection is automatically persistent: there is no need to send extra 'keep alive' messages to keep it open;
Compared to a raw TCP connection, the benefits of websockets are that:
you don't have to open a custom port on your server;
it automatically handles the TLS/SSL stuff for you.
(Someone who knows more about websockets is welcome to correct me on some of the above points - particularly regarding TLS/SSL and keep alive messages.)
Alternatives to libcurl
An alternative to libcurl which might be useful here is the Mongoose networking library. It would provide you with a few different alternatives:
use a plain TCP connection (and a custom application layer protocol),
use a TCP connection and handle the HTTP requests yourself manually,
use websockets - which it has very good support for (both as server and client).
Mongoose also allows you to enable SSL for all of these options.

what is a good way to secure Cap'n Proto RPC network traffic?

I would like to use Cap'n Proto RPC to communicate with a server in the cloud from a desktop box in an office. Cap'n Proto doesn't provide secure network connections through a firewall. I would prefer C++ since I have other components which require it.
I see some people have been looking at nanomsg and other transports which link directly into the application, but I was wondering whether stunnel or something similar might be satisfactory.
The stunnel application, as most know, can provide HTTPS encapsulation of TCP/IP traffic under certain conditions, as per the FAQ:
The protocol is TCP, not UDP.
The protocol doesn't use multiple connections, like ftp.
The protocol doesn't depend on Out Of Band (OOB) data.
Remote site can't use an application-specific protocol, like ssltelnet, where SSL is a negotiated option, save for those protocols already supported by the protocol argument to stunnel.
It seems like Cap'n Proto RPC might satisfy these conditions. I don't think the customer will object to installing stunnel in this case. Has anyone tried this or something similar? If so, your experiences would be appreciated. If someone knows of a faster/lighter alternative it would also be helpful.
thanks!
Yes, Cap'n Proto's two-party protocol (the only one provided currently) should work great with stunnel, since it's a simple TCP-based transport. You will need to run both a stunnel client and a server, of course, but otherwise this should be straightforward to set up. You could also use SSH port forwarding or a VPN to achieve a similar result.
(Note that stunnel itself has nothing to do with HTTPS per se, but is often used to implement HTTPS because HTTP is also a simple TCP protocol and HTTPS is the same protocol except on TLS. In the Cap'n Proto case, Cap'n Proto replaces HTTP. So you're creating Cap'nProto-S, I guess.)
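To illustrate how little changes on the Cap'n Proto side, the client can simply connect to the local stunnel client instead of the remote server. The schema header, interface name, method and port below are made up for the example:

    #include <capnp/ez-rpc.h>
    #include "my_service.capnp.h"  // hypothetical generated header for your schema

    int main() {
        // stunnel in client mode is assumed to listen on 127.0.0.1:6000 and to
        // forward everything over TLS to the real server in the cloud.
        capnp::EzRpcClient client("127.0.0.1:6000");
        auto& waitScope = client.getWaitScope();

        // MyService is whatever interface your .capnp schema defines.
        MyService::Client service = client.getMain<MyService>();

        auto request = service.pingRequest();        // hypothetical method
        auto response = request.send().wait(waitScope);
        return 0;
    }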
Another option is to implement the kj::AsyncIoStream abstract interface directly based on a TLS library like OpenSSL, GnuTLS, etc. Cap'n Proto's RPC layer will allow you to provide an arbitrary implementation of kj::AsyncIoStream as its transport (via interfaces in capnp/rpc-twoparty.h). Unfortunately, many TLS libraries have pretty ugly interfaces and so this may be hard to get right. But if you do write something, please contribute it back to the project as this is something I'd like to have in the base library.
Eventually we plan to add an official crypto transport to Cap'n Proto designed to directly support multi-party introductions (something Cap'n Proto actually doesn't do yet, but which I expect will be a killer feature when it's ready). I expect this support will appear some time in 2016, but can't make any promises.

boost asio ssl simple encryption program

I am quite new to secure networking, and I am trying to make a simple networking program with Boost Asio SSL. I have read all of the documentation available, as well as lots of questions and answers, but no one has yet asked:
Can I make a simple encrypted network using Boost Asio SSL without all of the certificate and verification stuff? Just a public/private key pair on both ends, then some sort of public key exchange (handshake?), and then secure communication?
If I am wrong or mistaken about something, please do correct me. Simple examples would be much appreciated.
See here for a minimal demo client/server using Boost Asio:
Boost Asio SSL Certification on iOS
This answer adds instructions on how to create a certificate and use it:
Some OpenSSL source that mysteriously doesn't work
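To the specific question of skipping the certificate machinery: OpenSSL (and therefore Asio's SSL support) still expects at least a certificate on the server side for the handshake, but a self-signed one is enough, and the client can simply decline to verify it. A minimal sketch of the client-side context setup, with the usual caveat that disabling verification gives up protection against man-in-the-middle attacks:

    #include <boost/asio/ssl.hpp>

    namespace ssl = boost::asio::ssl;

    // Build an SSL context that encrypts the connection but does not verify the
    // peer's certificate. The server still needs a key pair and a (self-signed)
    // certificate for the handshake to succeed.
    ssl::context make_unverified_context() {
        ssl::context ctx(ssl::context::sslv23_client);
        ctx.set_verify_mode(ssl::verify_none);  // skip certificate verification
        return ctx;
    }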

websocket++ using fastcgi++'s session example

I'm brand new to C++ and know next to nothing about web protocols or websockets, so this may seem ridiculous.
I make websites that are 100% Ajax and want to incorporate websockets. Fastcgi++ is everything I could hope for as far as the Ajax demands go, but it doesn't have websockets. I chose websocket++ over libwebsockets since websocket++ is more or less a simple #include, so I assumed that I could incorporate it into fastcgi++.
I think I've figured out fastcgi++, and it looks like most of the action happens in Fastcgipp::Request and then Fastcgipp::Http::Sessions for session data (http://www.nongnu.org/fastcgipp/doc/2.1/a00005.html); however, I think I have to do the same thing with websocket++'s server::handler for handling the websocket (https://github.com/zaphoyd/websocketpp/wiki/Creating-Applications-using-WebSocket--), and now I'm lost at sea.
Enter my complete inexperience with C++: I think I have to use virtual inheritance, but I have no idea. Also, if I could even properly "subclass" both, how do I make sure that they don't run over each other?
Please show me a basic example of how websocket++ can use fastcgi++'s session management.
A WebSocket connection cannot be processed by an HTTP request/response workflow. In order to use something like fastcgi++ with both regular HTTP requests and with WebSocket requests, it would need to have some way of recognizing a WebSocket handshake and piping that off to another handler instead of processing it as HTTP. I don't see an obvious pass-through mode of that sort in its documentation, but I could be missing something.
If such a feature exists, WebSocket++ can be used in stream mode where it disables all of its network elements and just processes streams of bytes piped in from another networking library.
Some alternatives:
WebSocket++ supports HTTP pass-through. This is essentially the opposite of what is described above. WebSocket++ would be used as the networking layer. It would process incoming WebSocket connections and would pass off HTTP requests to some other subsystem.
WebSocket++ and fastcgi++ could be run on different ports or different hostnames, either in the same program or in separate programs, with client-side requests directed to the appropriate host/port (a minimal standalone server of this kind is sketched below).
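For the second alternative, a bare-bones standalone WebSocket++ echo server on its own port is only a few lines with the Asio transport config (port 9002 is arbitrary); the fastcgi++ side would keep serving the Ajax requests on its usual port:

    #include <websocketpp/config/asio_no_tls.hpp>
    #include <websocketpp/server.hpp>

    typedef websocketpp::server<websocketpp::config::asio> ws_server;

    int main() {
        ws_server server;
        server.init_asio();

        // Echo every message back to the client that sent it.
        server.set_message_handler(
            [&server](websocketpp::connection_hdl hdl, ws_server::message_ptr msg) {
                server.send(hdl, msg->get_payload(), msg->get_opcode());
            });

        server.listen(9002);    // a separate port from the fastcgi++/HTTP side
        server.start_accept();
        server.run();
    }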
Disclaimer: I am the author of WebSocket++

Advice on web services without HTTP

My company is planning on implementing a remote programming tool to configure embedded devices in the field. I assumed that these devices would have an HTTP client on them, and planned to implement some REST services for them to access. Unfortunately, I found out that they have a TCP stack but no HTTP client. One of my co-workers suggested that we try to send “soap packets” over port 80 without an HTTP client. The devices also don’t have any SOAP client. Is this possible? Would there be implications if there was a web server running on the network the devices are connected to? I’d appreciate any advice or best practices on how to implement something like this.
If your servers are serving simple files, the embedded devices really only need to send an HTTP GET request (possibly with a little extra data identifying the device, so the server can know which firmware version to send).
From there, it's pretty much a simple matter of reading the raw data coming in on the embedded device's socket -- you might only need to disregard the HTTP header on the response, or you could possibly configure your server not to send it for those requests.
You don't really need an HTTP client per se. HTTP is a very simple text-based protocol that you can implement yourself if you need to.
That said, you probably won't need to implement it yourself. If they have a TCP stack and a standard sockets library, you can probably find a simple C library (such as this one) that wraps up HTTP or SOAP functionality for you. You could then just build that library into your application.
Basic HTTP is not a particularly difficult protocol to implement by hand. It's a text- and line-based protocol, save for the payload, and the servers work quite well with "primitive, ham-fisted" clients, which is all a simple client needs to be.
If you only need a subset - which is likely - then simply write it and be done.
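As a rough illustration of how small such a hand-rolled subset can be, here is a sketch of a plain-sockets HTTP GET; the hostname, port and path are placeholders, and a real device would add timeouts and proper error handling:

    #include <cstdio>
    #include <netdb.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main() {
        // Resolve the server address (placeholder hostname).
        addrinfo hints{}, *res;
        hints.ai_family = AF_UNSPEC;
        hints.ai_socktype = SOCK_STREAM;
        if (getaddrinfo("firmware.example.com", "80", &hints, &res) != 0) return 1;

        // Open a TCP connection.
        int fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
        if (fd < 0 || connect(fd, res->ai_addr, res->ai_addrlen) != 0) return 1;
        freeaddrinfo(res);

        // A minimal, hand-written HTTP/1.1 request.
        const char request[] =
            "GET /firmware/latest HTTP/1.1\r\n"
            "Host: firmware.example.com\r\n"
            "Connection: close\r\n"
            "\r\n";
        send(fd, request, sizeof(request) - 1, 0);

        // Read the response; everything after the first blank line is the payload.
        char buf[1024];
        ssize_t n;
        while ((n = recv(fd, buf, sizeof(buf), 0)) > 0)
            fwrite(buf, 1, static_cast<size_t>(n), stdout);

        close(fd);
    }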
You can implement a trivial HTTP client over sockets (here is an example of how to do it in Ruby: http://www.tutorialspoint.com/ruby/ruby_socket_programming.htm).
It probably depends on what technology you have available on your embedded devices - if you can easily consume JSON or XML, then a web service approach using the above may work for you.