HTTPS request with Boost.Asio and OpenSSL - c++

I'm trying to read the ticker symbol at https://mtgox.com/api/0/data/ticker.php from my C++ application.
I use Boost.Asio and OpenSSL because the service requires HTTPS.
Boost version: 1.47.0
OpenSSL: 1.0.0d [8 Feb 2011] Win32
For the application, I took the example from http://www.boost.org/doc/libs/1_47_0/doc/html/boost_asio/example/ssl/client.cpp as a starting point and modified it as follows:
This is where I want to connect to:
boost::asio::ip::tcp::resolver::query query("mtgox.com", "443");
I set verification to none because the handshake fails otherwise. I'm not sure whether this is a problem with mtgox or whether this implementation is just very strict, because when I print the certificate to the screen it looks legitimate (and Chrome has no problem with it when visiting the ticker page).
socket_.set_verify_mode(boost::asio::ssl::context::verify_none);
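(For context, the "Verifying:" line in the output further down is what a verification callback typically prints; the following is only a sketch modelled on the Boost SSL client example, not the exact code from my pastebin.)

#include <iostream>
#include <boost/asio/ssl.hpp>

// Sketch: print the subject name of each certificate presented during the
// handshake. With verify_none the return value does not abort the handshake.
bool verify_certificate(bool preverified, boost::asio::ssl::verify_context& ctx)
{
    char subject_name[256];
    X509* cert = X509_STORE_CTX_get_current_cert(ctx.native_handle());
    X509_NAME_oneline(X509_get_subject_name(cert), subject_name, sizeof(subject_name));
    std::cout << "Verifying:\n" << subject_name << "\n";
    return preverified;
}

// Registered on the stream next to set_verify_mode:
// socket_.set_verify_callback(&verify_certificate);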
This is the request I send:
std::stringstream request_;
request_ << "GET /api/0/data/ticker.php HTTP/1.1\r\n";
request_ << "Host: mtgox.com\r\n";
request_ << "Accept-Encoding: *\r\n";
request_ << "\r\n";
boost::asio::async_write(socket_, boost::asio::buffer(request_.str()), boost::bind(&client::handle_write, this, boost::asio::placeholders::error, boost::asio::placeholders::bytes_transferred));
(full code: http://pastebin.com/zRTTqZVe)
I run into the following error:
Connection OK!
Verifying:
/C=IL/O=StartCom Ltd./OU=Secure Digital Certificate Signing/CN=StartCom Certification Authority
Sending request:
GET /api/0/data/ticker.php HTTP 1.1
Host: mtgox.com
Accept-Encoding: *
Sending request OK!
Read failed: An existing connection was forcibly closed by the remote host
Am I going in the right direction with this? The error message isn't really descriptive of the problem and I don't know which step I did wrong.
Update:
I used cURL to see what went wrong:
curl --trace-ascii out.txt https://mtgox.com/api/0/data/ticker.php
(full output: http://pastebin.com/Rzp0RnAK)
It fails during verification.
When I connect with the "unsafe" parameter
curl --trace-ascii out.txt -k https://mtgox.com/api/0/data/ticker.php
(full output: http://pastebin.com/JR43A7ux)
everything works fine.
Fix:
I fixed the typo in the HTTP headers, added a root certificate, and turned SSL verification back on.
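Roughly, the fix looks like this (a sketch only; "ca.pem" stands for the locally saved StartCom root certificate, which is my own file name, and the request string has to stay alive until the write completes):

#include <boost/asio.hpp>
#include <boost/asio/ssl.hpp>
#include <string>

void configure_fixed_client(boost::asio::io_service& io_service)
{
    namespace ssl = boost::asio::ssl;

    // 1. Add the root certificate to the SSL context.
    ssl::context ctx(io_service, ssl::context::sslv23);
    ctx.load_verify_file("ca.pem");

    // 2. Turn peer verification back on.
    ssl::stream<boost::asio::ip::tcp::socket> socket_(io_service, ctx);
    socket_.set_verify_mode(ssl::context::verify_peer);

    // 3. Corrected request line: "HTTP/1.1", not "HTTP 1.1". Keep the string
    //    alive (e.g. as a class member) until async_write has completed.
    std::string request_ =
        "GET /api/0/data/ticker.php HTTP/1.1\r\n"
        "Host: mtgox.com\r\n"
        "Accept-Encoding: *\r\n"
        "\r\n";

    // ... resolve, connect and async_handshake as in the original example,
    //     then async_write(socket_, boost::asio::buffer(request_), ...).
}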

In short:
You send "HTTP 1.1" instead of "HTTP/1.1". That's surely enough to make the server refuse your request. There are other differences between your request and cURL's, you might need to change those params as well - even if they seem valid to me.
Maybe OpenSSL does not have the root certificate used by the server, unlike Chrome, and that's why verification is failing.
Details:
Given a working tool and a non-working one, you should always compare what is happening. Here you have cURL's output and your request; comparing them showed a number of differences. Even with an encrypted connection, you can usually use a powerful packet sniffer like Wireshark, which decodes as much information from the packets as possible. Here it would let you see that the server is actually sending fewer packets (I expect); another possibility would have been that your client was not receiving the data sent by the server (say, because of a bug in the client).
If I understand correctly, curl only showed why you needed to disable verification, right? The certificate looks valid to me in Chrome as well, but the root certification authority is fairly obscure; curl mentions the "CA cert", i.e. the certificate of the certification authority. A root certificate is trusted because it is already present in a certificate database on the client; I think Chrome might ship a more complete database than OpenSSL (which is used by both cURL and your program).

Related

How to enable TLS in IXWebSocket for a simple client/server application

I'm attempting to build a simple client/server application in C++ using the IXWebSocket library, starting from the example code shown on this page: https://machinezone.github.io/IXWebSocket/usage/
The code works fine when using an unsecured connection (as denoted by a ws:// url), but I can't get it working at all when using a secured connection (as denoted by a wss:// url).
The website states under the "TLS Support and configuration" section that
Then, secure sockets are automatically used when connecting to a wss://* url.
Additional TLS options can be configured by passing a ix::SocketTLSOptions instance to the setTLSOptions on ix::WebSocket (or ix::WebSocketServer or ix::HttpServer)
This implies to me that simply changing the ws:// url to a wss:// url is enough to instruct the application to secure the connection, however this does not work.
When I attempt to connect using a wss:// url, the server returns the following
WebSocketServer::handleConnection() HTTP status: 400 error: Error reading HTTP request line
The website goes on to say that
Additional TLS options can be configured by passing a ix::SocketTLSOptions instance to the setTLSOptions on ix::WebSocket (or ix::WebSocketServer or ix::HttpServer)
and...
Specifying certFile and keyFile configures the certificate that will be used to communicate with TLS peers. On a client, this is only necessary for connecting to servers that require a client certificate. On a server, this is necessary for TLS support.
This implies to me that for the server to support TLS, I must provide a cert file, and a key file.
The github repo includes the script generate_certs.sh which produces a series of certificates in pem format, which should be enough to get things working. Included among them are selfsigned-client-crt.pem and selfsigned-client-key.pem, which seem like obvious candidates, however they specifically state client in the names, which suggests that they should not be used in the server application, rather they belong in the client.
The website also includes the example snippet:
webSocket.setTLSOptions({
.certFile = "path/to/cert/file.pem",
.keyFile = "path/to/key/file.pem",
.caFile = "path/to/trust/bundle/file.pem", // as a file, or in memory buffer in PEM format
.tls = true // required in server mode
});
I have attempted to populate the certFile and keyFile properties, and specified "NONE" for the caFile property as explained in the example; however, this results in the server application printing SocketServer::run() tls accept failed: error in handshake : SSL - The connection indicated an EOF to the console.
What's more, the example snippet listed above states "path/to/cert/file.pem" and "path/to/key/file.pem" but doesn't explicitly state whether those should be client, or server usage.
The example doesn't come with a complete runnable implementation, and doesn't explain clearly what is needed to make TLS work in this particular form, and I'm at a bit of a loss now.
There is an example application in the github repo, however it includes a number of different variations, all of which are far more complicated than this trivial example, and it is this trivial example that I need to get working so I can understand how to implement this further.
In my server application, I have implemented the following for the TLS options:
int port = 8443;
ix::WebSocketServer server(port);
ix::SocketTLSOptions tlsOptions;
tlsOptions.certFile = "certs/selfsigned-client-crt.pem";
tlsOptions.keyFile = "certs/selfsigned-client-key.pem";
tlsOptions.caFile = "NONE";
tlsOptions.tls = true; //Required for TLS
server.setTLSOptions(tlsOptions);
I am pretty sure that the issue is in how I've set up the key and cert files. I have used the client files here, but I also tried generating and signing a server cert and key, which also did not work.
I have even tried using the trusted key and cert for both the client and server applications, and still did not get a working TLS connection. The generate_certs.sh script generates the following files:
selfsigned-client-crt.pem, selfsigned-client-key.pem, trusted-ca-crt.pem, trusted-ca-key.pem, trusted-client-crt.pem, trusted-client-key.pem, trusted-server-crt.pem, trusted-server-key.pem, untrusted-ca-crt.pem, untrusted-ca-key.pem, untrusted-client-crt.pem, untrusted-client-key.pem
... none of which is a self-signed server cert.
What I can gather from the example page is that I need to do the following to get this working.
Generate a server cert and key
Self sign the cert
Specify the cert and key file in the tlsOptions on the server
Set the tls property in tlsOptions to true on the server
Set the caFile property in tlsOptions on the server to "NONE"
Set the url in the client to a wss:// url
But this did not work when I tried it, so there's clearly something I've missed.
All I'm aiming to do for the moment is to use self signed certs so that I can test my client and server, both running on localhost.
If anybody can steer me in the right direction, I'd be immensely grateful. I've been on this for 4 days now and I'm really lost.
Many thanks
Check this file, which does a full client + server encrypted exchange: https://github.com/machinezone/IXWebSocket/blob/master/ws/test_ws.sh
Note that there are limitations on macOS, but on Windows or Linux, using mbedtls and openssl, everything should work fine.
ps: You will need to supply the same set of certs on the client and on the server.
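Something along these lines should illustrate it, using the trusted-* files from generate_certs.sh (a sketch only: the paths, the port and the callback wiring are assumptions, not tested code):

#include <ixwebsocket/IXWebSocket.h>
#include <ixwebsocket/IXWebSocketServer.h>
#include <ixwebsocket/IXSocketTLSOptions.h>

int main()
{
    int port = 8443;

    // Server: server certificate + key, plus the CA that signed the client cert.
    ix::WebSocketServer server(port);
    ix::SocketTLSOptions serverTLS;
    serverTLS.tls = true;                                 // required in server mode
    serverTLS.certFile = "certs/trusted-server-crt.pem";
    serverTLS.keyFile  = "certs/trusted-server-key.pem";
    serverTLS.caFile   = "certs/trusted-ca-crt.pem";
    server.setTLSOptions(serverTLS);

    // ... register the connection/message callbacks here, then:
    server.listen();
    server.start();

    // Client: client certificate + key, and the same CA bundle.
    ix::WebSocket client;
    ix::SocketTLSOptions clientTLS;
    clientTLS.certFile = "certs/trusted-client-crt.pem";
    clientTLS.keyFile  = "certs/trusted-client-key.pem";
    clientTLS.caFile   = "certs/trusted-ca-crt.pem";
    client.setTLSOptions(clientTLS);
    client.setUrl("wss://localhost:8443");   // host must match the server cert's CN/SAN
    client.start();

    server.wait();   // keep the process alive; real code would stop cleanly
    return 0;
}

The point is that the server gets the server cert/key, the client gets the client cert/key, and both point caFile at the CA that signed the other side's certificate.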
https://machinezone.github.io/IXWebSocket/build/
-DUSE_TLS=1 will enable TLS support, so I do the following:
mkdir build
cd build
cmake -DUSE_TLS=1 -DUSE_WS=1 ..
and that works for me.

ShimmerCat with reverse proxy when using "the old way"

I have used ShimmerCat with sc-tool to connect to my development sites as described here, and everything has always worked like a charm, but I also wanted to follow the "old way" and configure my /etc/hosts. In this case I had a small problem: the server ran OK, and I could access my development site (let's say I used https://www.example.com:4043/), but I'm also using a reverse proxy, as described in this article and in the config file reference, which redirects to a Django app I'm using. Say this is my devlove.yaml config file:
---
shimmercat-devlove:
    domains:
        www.example.com:
            root-dir: site
            consultant: 8080
            cache-key: xxxxxxx
        api.example.com:
            port: 8080
The problem is that when I try to access a URL that requests the API, the API returns a 404 response. Let me explain through an example: I access https://www.example.com:4043/country/, and on this page a request is made to the API at /api/<country>/towns/. The API endpoint returns a 404, so it is not finding this URL, which does not happen when using Google Chrome with sc-tool. I have set both domains, www.example.com and api.example.com, in my /etc/hosts. I have been trying to solve this without any luck; is there something I'm missing? Any help will be welcome. Thanks in advance.
With a bit more data, we may be able to find the issue. In the meantime, here is a list of troubleshooting tips:
Possible issue: DNS is cached in browser, /etc/hosts is not being used (yet)
This can happen if your browser has not done a DNS lookup since before you changed your /etc/hosts file. In that case the connection goes to a host on the Internet that may not have the API endpoint you are calling.
Troubleshooting: Check ShimmerCat's log for the requests. If this is the issue, closing and opening the browser may solve the issue.
Possible issue: the host header is incorrect
ShimmerCat uses the Host header in HTTP/1.1 requests and the :authority header in HTTP/2 requests to distinguish the domains. It always discards any port number present in them. If these headers are not set, or are set to a domain other than the ones ShimmerCat is configured to listen for, the server will consider the situation so despicable that it will just close the connection.
Troubleshooting: This is not a 404 error, but a connection close (if trying to connect un-proxied, directly to the SSL port where ShimmerCat is listening), or a Socks Connection Failed (if trying to connect through ShimmerCat's built-in SOCKS5 proxy). In the former case, the server will print the message "Rejected request to Just https://some-domain-or-ip/some/path" in its log, using the actual value for the domain, or "Rejected request to Nothing" if no header was present. The second case is more complicated, because the SOCKS5 proxy kicks in before the HTTP routing algorithm.
In any case, the browser will put a red line in the network panel of the developer tools. If you are accessing the server using curl, like this:
curl -k -H host:api.incorrect-domain.com https://127.0.0.1:4043/contents/blog/data-density/
or like this:
curl -k --http2 -H host:api.incorrect-domain.com https://127.0.0.1:4043/contents/blog/data-density/
(notice the --http2 parameter in the second form), you will get a response:
curl: (56) Unexpected EOF
Extra-tip: There is a field for the network address in the browser's developer tools. Check it, it may tell you something!
Possible issue: something gets messed up when passing the request to the API back-end.
API back-ends are also sensitive to the Host header, and to additional things like authentication cookies and request parameters.
Troubleshooting: A way to diagnose things is to invoke ShimmerCat with the --show-proxied-headers command-line option. It makes ShimmerCat report the proxied headers to the log:
Issuing request with headers :authority: api.example.com
:method: GET
:path: /my/api/endpoint/path/
:scheme: https
accept: */*
user-agent: curl/7.47.0
Possible issue: there are two or more instances of ShimmerCat running
...and they are using different configurations. ShimmerCat uses port sharing among several processes to increase availability. A downside of this is that it is perfectly possible to mistakenly start ShimmerCat, forget to stop it, and start it again after changing some configuration bit. The two instances will be running at the same time, and either of them may pick up connections made to the listening port.
Troubleshooting: Shut down all instances of ShimmerCat, double-check that none are running by using the corresponding form of the ps command, and start the server with the configuration you want.

Request JSON Data from HTTPS with C++?

I'm writing a program in C++ that needs to download JSON data from an HTTPS URL. The program is based on wxWidgets. The URL is for the translation service at Glosbe.
So I've tried multiple different libraries including:
libcurl
Boost.Asio
the http functionality included in wxWidgets
wxCurl
Urdl
However, each of them either throws an error saying it can't connect, or gives me a reply that says "Moved Permanently".
When I copy and paste the URL I am testing with into a browser, it returns the JSON data perfectly.
Does anyone know the correct way to do this?
Any help would be great!
301 Moved Permanently is what the server responds when you try to access the page with HTTP instead of HTTPS. Here's a complete response I just received from the server:
HTTP/1.1 301 Moved Permanently
Server: nginx
Date: Thu, 16 Jul 2015 20:25:01 GMT
Content-Type: text/html
Content-Length: 178
Connection: keep-alive
Location: https://en.glosbe.com/a-api
It means exactly that: "The content you are looking for is really at https://en.glosbe.com/a-api." Your browser simply adheres to the HTTP protocol by following the server's hint and automatically proceeding to request https://en.glosbe.com/a-api when you try to access http://en.glosbe.com/a-api. It works seamlessly for you as a user.
You will have to read more documentation to create HTTPS requests yourself. Each of the libraries you mentioned will have a different way of supporting HTTPS (or not support it at all). For example, have a look at http://www.boost.org/doc/libs/1_58_0/doc/html/boost_asio/overview/ssl.html, especially the "Notes" section where it says that "OpenSSL is required to make use of Boost.Asio's SSL support."
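As an illustration, here is a minimal libcurl sketch (assuming libcurl was built with SSL support) that requests the https:// URL from the response above directly and follows redirects; the other libraries have their own equivalents:

#include <curl/curl.h>
#include <iostream>
#include <string>

// Append each chunk libcurl receives to a std::string.
static size_t write_cb(char* data, size_t size, size_t nmemb, void* userp)
{
    static_cast<std::string*>(userp)->append(data, size * nmemb);
    return size * nmemb;
}

int main()
{
    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURL* curl = curl_easy_init();
    if (!curl) return 1;

    std::string body;
    curl_easy_setopt(curl, CURLOPT_URL, "https://en.glosbe.com/a-api");
    curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1L);   // follow 301/302 redirects
    curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, write_cb);
    curl_easy_setopt(curl, CURLOPT_WRITEDATA, &body);

    CURLcode res = curl_easy_perform(curl);
    if (res != CURLE_OK)
        std::cerr << "curl error: " << curl_easy_strerror(res) << "\n";
    else
        std::cout << body << "\n";

    curl_easy_cleanup(curl);
    curl_global_cleanup();
    return 0;
}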

I can't get http code 404 with libcurl

When I send an HTTP request using a wrong server address, like 127.0.0.1, as the server address of a URL, libcurl returns CURLE_OK and gives me HTTP code 0. However, I get HTTP code 404 when I send the same request with IE. Does anyone know how I can get an error code rather than 0 from libcurl when sending a request like that?
libcurl returns CURLE_OK when the transfer went fine. Getting a 404 from an HTTP server is considered a fine transfer. You can make >=4xx HTTP response codes cause a libcurl error by setting the CURLOPT_FAILONERROR option.
Alternatively, and this may be the nicer way, you can extract the HTTP response code after the transfer with, for example, curl_easy_getinfo(), to see what the HTTP server thought about the resource you requested.
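A short sketch of both options (the URL is only an illustration; with CURLOPT_FAILONERROR set, a 404 makes curl_easy_perform() return CURLE_HTTP_RETURNED_ERROR):

#include <curl/curl.h>
#include <iostream>

int main()
{
    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURL* curl = curl_easy_init();
    if (!curl) return 1;

    // Illustrative URL: a running web server, but a path that does not exist.
    curl_easy_setopt(curl, CURLOPT_URL, "http://www.google.com/404");

    // Option 1: make HTTP responses >= 400 fail the transfer.
    curl_easy_setopt(curl, CURLOPT_FAILONERROR, 1L);

    CURLcode res = curl_easy_perform(curl);

    // Option 2: read the HTTP response code after the transfer.
    long http_code = 0;
    curl_easy_getinfo(curl, CURLINFO_RESPONSE_CODE, &http_code);

    std::cout << "libcurl result: " << curl_easy_strerror(res)
              << ", HTTP response code: " << http_code << std::endl;

    curl_easy_cleanup(curl);
    curl_global_cleanup();
    return 0;
}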
Try using it to visit a site that's actually running a web server, and try to retrieve a file that doesn't exist. For example, http://www.google.com/404. Your browser is almost certainly not actually getting a 404 from visiting 127.0.0.1, even if it's telling you that's what it got.

QUrl: Protocol "sftp" is unknown

I am trying to LIST my sftp server, but I’ve got a problem when sending my request:
if I print my QNetworkReply::errorString(), this is what I get:
Protocol “sftp” is unknown
This is what my QUrl looks like:
sftp://username:password@host:22
username/password is OK, and the host and port are the ones required for my server, so I don’t know what’s going on…
I tried with different schemes (http, https, ftp, sftp…) and it looks like only http and https are recognised…
Any Ideas?