How to enable TLS in IXWebSocket for a simple client/server application - C++

I'm attempting to build a simple client/server application in C++ using the IXWebSocket library, based on the example code shown on this page - https://machinezone.github.io/IXWebSocket/usage/
The code works fine when using an unsecured connection (as denoted by a ws:// url), but I can't get it working at all when using a secured connection (as denoted by a wss:// url).
The website states under the "TLS Support and configuration" section that
Then, secure sockets are automatically used when connecting to a wss://* url.
Additional TLS options can be configured by passing a ix::SocketTLSOptions instance to the setTLSOptions on ix::WebSocket (or ix::WebSocketServer or ix::HttpServer)
This implies to me that simply changing the ws:// url to a wss:// url is enough to instruct the application to secure the connection; however, this does not work.
When I attempt to connect using a wss:// url, the server returns the following
WebSocketServer::handleConnection() HTTP status: 400 error: Error reading HTTP request line
The website goes on to say that
Additional TLS options can be configured by passing a ix::SocketTLSOptions instance to the setTLSOptions on ix::WebSocket (or ix::WebSocketServer or ix::HttpServer)
and...
Specifying certFile and keyFile configures the certificate that will be used to communicate with TLS peers. On a client, this is only necessary for connecting to servers that require a client certificate. On a server, this is necessary for TLS support.
This implies to me that for the server to support TLS, I must provide a cert file, and a key file.
The GitHub repo includes the script generate_certs.sh, which produces a series of certificates in PEM format, which should be enough to get things working. Included among them are selfsigned-client-crt.pem and selfsigned-client-key.pem, which seem like obvious candidates; however, they specifically state client in their names, which suggests that they should not be used in the server application, but rather belong in the client.
The website also includes the example snippet:
webSocket.setTLSOptions({
    .certFile = "path/to/cert/file.pem",
    .keyFile = "path/to/key/file.pem",
    .caFile = "path/to/trust/bundle/file.pem", // as a file, or in memory buffer in PEM format
    .tls = true // required in server mode
});
I have attempted to populate the certFile and keyFile properties, and specified "NONE" for the caFile property as explained in the example. However, this results in the server application printing the following to the console: SocketServer::run() tls accept failed: error in handshake : SSL - The connection indicated an EOF
What's more, the example snippet listed above states "path/to/cert/file.pem" and "path/to/key/file.pem", but doesn't explicitly state whether those are for client or server usage.
The example doesn't come with a complete runnable implementation, nor does it explain clearly what is needed to make TLS work in this particular form, and I'm at a bit of a loss now.
There is an example application in the GitHub repo; however, it includes a number of different variations, all of which are far more complicated than this trivial example, and it is this trivial example that I need to get working so I can understand how to implement this further.
In my server application, I have implemented the following for the TLS options:
#include <ixwebsocket/IXWebSocketServer.h>
#include <ixwebsocket/IXSocketTLSOptions.h>

int port = 8443;
ix::WebSocketServer server(port);

ix::SocketTLSOptions tlsOptions;
tlsOptions.certFile = "certs/selfsigned-client-crt.pem"; // a *client* cert - possibly the problem?
tlsOptions.keyFile = "certs/selfsigned-client-key.pem";
tlsOptions.caFile = "NONE"; // don't verify peer certificates
tlsOptions.tls = true; // required for TLS in server mode
server.setTLSOptions(tlsOptions);
I am pretty sure that the issue is in how I've set up the key and cert files. I have used the client files here, but I also tried generating and signing a server cert and key, which also did not work.
I have even tried using the trusted key and cert for both the client and server applications, and still did not get a working TLS connection (the following files were generated by the generate_certs.sh script -
selfsigned-client-crt.pem, selfsigned-client-key.pem, trusted-ca-crt.pem, trusted-ca-key.pem, trusted-client-crt.pem, trusted-client-key.pem, trusted-server-crt.pem, trusted-server-key.pem, untrusted-ca-crt.pem, untrusted-ca-key.pem, untrusted-client-crt.pem, untrusted-client-key.pem
... none of which is a self signed server cert.
What I can gather from the example page is that I need to do the following to get this working.
Generate a server cert and key
Self sign the cert
Specify the cert and key file in the tlsOptions on the server
Set the tls property in tlsOptions to true on the server
Set the caFile property in tlsOptions on the server to "NONE"
Set the url in the client to a wss:// url
But this did not work when I tried it, so there's clearly something I've missed.
All I'm aiming to do for the moment is to use self signed certs so that I can test my client and server, both running on localhost.
If anybody can steer me in the right direction, I'd be immensely grateful. I've been on this for 4 days now and I'm really lost.
Many thanks

Check this file: https://github.com/machinezone/IXWebSocket/blob/master/ws/test_ws.sh - it does a full client + server encrypted exchange.
Note that on macOS there are limitations, but on Windows or Linux, using mbedtls and openssl, everything should work fine.
PS: You will need to supply the same set of certs on the client and on the server.
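To make that concrete, here is a minimal, untested sketch of what supplying matching certs on both sides might look like, using the trusted-* files produced by generate_certs.sh (the paths, port, and URL here are assumptions; adjust them to your setup):

// Server side: present a *server* certificate signed by the trusted CA
ix::WebSocketServer server(8443);
ix::SocketTLSOptions serverTLS;
serverTLS.certFile = "certs/trusted-server-crt.pem";
serverTLS.keyFile = "certs/trusted-server-key.pem";
serverTLS.caFile = "certs/trusted-ca-crt.pem"; // used to verify client certs, if presented
serverTLS.tls = true; // required in server mode
server.setTLSOptions(serverTLS);

// Client side: trust the same CA so the server's certificate verifies
ix::WebSocket webSocket;
ix::SocketTLSOptions clientTLS;
clientTLS.caFile = "certs/trusted-ca-crt.pem";
webSocket.setTLSOptions(clientTLS);
webSocket.setUrl("wss://localhost:8443/");

The point is that the server gets the server cert/key pair, while the client's caFile names the CA that signed it; the selfsigned-client-* files, going by their names, belong on the client side of the handshake.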

https://machinezone.github.io/IXWebSocket/build/
-DUSE_TLS=1 will enable TLS support
so I do the following:
mkdir build
cd build
cmake -DUSE_TLS=1 -DUSE_WS=1 ..
works for me

Related

C++ libCurl: how to accept an expired certificate using libCurl

I am working on a Linux embedded application which acts as a client and receives data from an https server using libCurl. My application's requirement is to accept an expired certificate and continue with connection establishment.
I could not find any such option that can be set using curl_easy_setopt. We can already ignore:
- verification of the certificate's name against the host => setting CURLOPT_SSL_VERIFYHOST to 0
- verification of the authenticity of the peer's certificate => setting CURLOPT_SSL_VERIFYPEER to FALSE
Is there any other way I can try to make it work?
You can set a callback using CURLOPT_SSL_CTX_FUNCTION (see example at https://curl.haxx.se/libcurl/c/cacertinmem.html) where you can manipulate the SSL context, clear errors, etc.
This will probably not be enough; you can experiment with setting some options of openssl itself, see https://www.openssl.org/docs/man1.1.0/ssl/SSL_CTX_set_verify.html.
I'm not sure if it is enough to set SSL_VERIFY_NONE or if you actually have to supply a validation callback function that says yes to everything.
I haven't tested this and I'm not sure it will actually work, but you can certainly try.
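For illustration, here is a minimal, untested sketch of that combination. It assumes libcurl built against OpenSSL, and the URL is a placeholder; the idea is to install a verify callback that says yes to everything, including expired certificates:

#include <curl/curl.h>
#include <openssl/ssl.h>

// Verify callback that accepts every certificate, even an expired one.
static int accept_everything(int preverify_ok, X509_STORE_CTX* ctx)
{
    (void)preverify_ok;
    (void)ctx;
    return 1; // 1 = treat the certificate as valid
}

// Called by libcurl once per connection, after it has set up the SSL context.
static CURLcode sslctx_callback(CURL* curl, void* sslctx, void* userptr)
{
    (void)curl;
    (void)userptr;
    SSL_CTX_set_verify(static_cast<SSL_CTX*>(sslctx), SSL_VERIFY_PEER, accept_everything);
    return CURLE_OK;
}

int main()
{
    CURL* curl = curl_easy_init();
    if (!curl)
        return 1;
    curl_easy_setopt(curl, CURLOPT_URL, "https://expired.example.com/"); // placeholder
    curl_easy_setopt(curl, CURLOPT_SSL_CTX_FUNCTION, sslctx_callback);
    CURLcode res = curl_easy_perform(curl);
    curl_easy_cleanup(curl);
    return res == CURLE_OK ? 0 : 1;
}

Keep in mind this throws away the protection that certificate verification provides, so it belongs only in a controlled setup like the one described in the question.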

ShimmerCat with reverse proxy when using "the old way"

I have used ShimmerCat with sc-tool to connect to my development sites as described here, and everything has always worked like a charm with it, but I also wanted to follow the "old way" of configuring my /etc/hosts. In this case I had a small problem: the server ran OK, and I could access my development site (let's say I used https://www.example.com:4043/), but I'm also using a reverse proxy as described in this article and in the config file reference. It redirects to a Django app I'm using. Let's say this is my devlove.yaml config file:
---
shimmercat-devlove:
  domains:
    www.example.com:
      root-dir: site
      consultant: 8080
      cache-key: xxxxxxx
    api.example.com:
      port: 8080
The problem is that when I try to access a URL that requests the API, a 404 response is sent from the API. Let me try to explain it through an example. I try to access https://www.example.com:4043/country/, and on this page I make a request to the API: /api/<country>/towns/. The API endpoint then returns a 404 response, so it is not finding this URL, which does not happen when using Google Chrome with sc-tool. I had set both domains, www.example.com and api.example.com, in my /etc/hosts. I have been trying to solve it, but without any luck. Is there something I'm missing? Any help will be welcome. Thanks in advance.
With a bit more of data, we may be able to find the issue. In the meantime, here is a list of troubleshooting tips:
Possible issue: DNS is cached in browser, /etc/hosts is not being used (yet)
This can happen if somehow your browser has not done a DNS lookup since before you changed your /etc/hosts file. Then the connection is going to a domain in the Internet that may not have the API endpoint that you are calling.
Troubleshooting: Check ShimmerCat's log for the requests. If this is the issue, closing and opening the browser may solve the issue.
Possible issue: the host header is incorrect
ShimmerCat uses the Host header in HTTP/1.1 requests and the :authority header in HTTP/2 requests to distinguish the domains. It always discards any port number present in them. If these headers are not set, or are set to a domain other than the ones ShimmerCat is configured to listen to, the server will consider the situation so despicable that it will just close the connection.
Troubleshooting: This is not a 404 error, but a connection close (if trying to connect un-proxied, directly to the SSL port where ShimmerCat is listening), or a Socks Connection Failed (if trying to connect through ShimmerCat's built-in SOCKS5 proxy). In the former case, the server will print the message "Rejected request to Just https://some-domain-or-ip/some/path" in its log, using the actual value for the domain, or "Rejected request to Nothing" if no header was present. The second case is more complicated, because the SOCKS5 proxy comes before the HTTP routing algorithm.
In any case, the browser will put a red line in the network panel of the developer tools. If you are accessing the server using curl, like this:
curl -k -H host:api.incorrect-domain.com https://127.0.0.1:4043/contents/blog/data-density/
or like
curl -k --http2 -H host:api.incorrect-domain.com https://127.0.0.1:4043/contents/blog/data-density/
(notice the --http2 parameter in the second form), you will get a response:
curl: (56) Unexpected EOF
Extra-tip: There is a field for the network address in the browser's developer tools. Check it, it may tell you something!
Possible issue: something gets messed up when passing the request to the API back-end.
API backends are also sensitive to the host header, and to additional things like authentication cookies and request parameters.
Troubleshooting: A way to diagnose things is invoking ShimmerCat with the --show-proxied-headers command-line option. It makes ShimmerCat report the proxied headers to the log:
Issuing request with headers :authority: api.example.com
:method: GET
:path: /my/api/endpoint/path/
:scheme: https
accept: */*
user-agent: curl/7.47.0
Possible issue: there are two instances or more of ShimmerCat running
...and they are using different configurations. ShimmerCat uses port sharing among several processes to increase availability. A downside of this is that it is perfectly possible to mistakenly start ShimmerCat, forget about stopping it, and start it again after changing some configuration bit. The two instances will be running at the same time, and either of them may pick up connections made to the listening port.
Troubleshooting: Shut down all instances of ShimmerCat, then double-check there are none running by using the corresponding form of the ps command, and start the server with the configuration you want.

Jetty Webservice - https protocol based address is not supported

I am using Jetty version 7.5.1.
My webservice works fine with a "http://..." endpoint, but when I change it to "https://..." things go wrong.
Endpoint e = Endpoint.create(webservice);
e.publish("https://localhost:" + serverPort + "/ws/mywebservice);
I get the following error message:
"https protocol based address is not supported".
I've tried using an SslChannelConnector, a SelectChannelConnector and the combination of both.
Connector connector = new SelectChannelConnector();
connector.setPort(59180);
SslContextFactory factory = new SslContextFactory();
factory.setKeyStore("keystore");
factory.setKeyStorePassword("password");
factory.setKeyManagerPassword("password");
factory.setTrustStore("keystore");
factory.setTrustStorePassword("password");
SslSelectChannelConnector sslConnector = new SslSelectChannelConnector(factory);
sslConnector.setPort(443);
sslConnector.setMaxIdleTime(30000);
server.setConnectors(new Connector[]{connector, sslConnector});
I also tried modifying the port in the publish path. But without success.
Could it be that something went wrong with the creation of my keystore file?
Even if I put in the wrong password, though, it does show a different error message, explaining that my password is wrong.
My options are running out. Any ideas?
EDIT: More information:
Servlets work fine with HTTPS now, but the webservice does not. Am I maybe publishing it the wrong way?
I found several threads on various forums with similar problems. But never found a solution. I would like to write down my solution for future victims:
The publish method only accepts the http protocol. Even if you are publishing for https, this should still be "http://...". On the other hand, you should use the port of your SSL connector.
Endpoint e = Endpoint.create(webservice);
e.publish("http://localhost:443/ws/mywebservice);
Use any other protocol and you will always get the "xxx protocol based address is not supported" exception. See source code.
Note 1: The webservice already works fine at this point. However, there is a point of discussion: the generated wsdl file (at https://localhost:443/ws/mywebservice?wsdl) will reference the http://... path. You could argue whether the wsdl file is a requirement or just documentation.
Correcting a hostname in a WSDL file is not that hard, but replacing the protocol is harder. The easiest solution is probably to just edit the wsdl file and host the file, which is not very "dynamic" of course.
Alternatively, I solved it by creating a WsdlServlet which replaces the address. On the other hand, it does feel bad to create an entire class just to fix 1 character. :)
Note 2: Another bug in this Jetty release is the authentication. It's impossible to offer the webservice without any authentication. The best you can get, after turning off all possible authentication, is that you will still have to use 'preemptive authentication' and enter a random username and password.

How to retrieve an SSL certificate in Django?

Is it possible to retrieve the client's SSL certificate from the current connection in Django?
I don't see the certificate in the request context passed from lighttpd.
My setup has lighttpd and django working in fastcgi mode.
Currently, I am forced to manually connect back to the client's IP to verify the certificate.
Is there a clever technique to avoid this? Thanks!
Update:
I added these lines to my lighttpd.conf:
ssl.verifyclient.exportcert = "enable"
setenv.add-request-header = (
"SSL_CLIENT_CERT" => env.SSL_CLIENT_CERT
)
Unfortunately, the env.SSL_CLIENT_CERT fails to dereference (does not exist?) and lighttpd fails to start.
If I replace the "env.SSL_CLIENT_CERT" with a static value like "1", it is successfully passed to django in the request.META fields.
Anything else I could try? This is lighttpd 1.4.29.
Yes, though this question is not Django specific.
Usually web servers have an option to export SSL client-side certificate data as environment variables or HTTP headers. I have done this myself with Apache (not Lighttpd).
This is how I did it:
On Apache, export the SSL certificate data to environment variables
Then, add new HTTP request headers containing these environment variables
Read the headers in Python code
http://redmine.lighttpd.net/projects/1/wiki/Docs_SSL
Looks like the option name is ssl.verifyclient.exportcert.
Though I am not sure how to do step 2 with lighttpd, as I have little experience with it.

Error using ColdFusion cfexchangeconnection to connect to Exchange server

I am getting an error when trying to connect to an Exchange server using the cfexchangeconnection tag. First some code:
<cfexchangeconnection action="open"
server="****"
username="****"
password="****"
connection="myEX"
protocol="https"
port="443">
I know it's the right server because it fails when not connecting via https. I have tried:
Following all the instructions here http://help.adobe.com/en_US/ColdFusion/9.0/Developing/WSc3ff6d0ea77859461172e0811cbec14f31-7fed.html
Prefixing the username with a domain name, adding #domain name, etc., with no luck.
The error I get is:
Access to the Exchange server denied.
Ensure that the user name and password are correct.
Any ideas?
Here's an idea - this is what I needed to do to make my cfexchange connection work. Not entirely sure if it's the same problem. I think I had a 440 error, rather than your 401 error.
I'm using:
https
webdav
forms based auth
Exchange 2007
Coldfusion 8
Windows 2003 servers
Here's the connection string that worked for me. What was keeping my connection from working was the need for the formBasedAuthenticationURL. This is a poorly documented attribute by both Adobe and Microsoft.
<cfexchangeconnection action="open"
username="first.last"
password="mypassword"
mailboxname="myAcctName"
server="my.mail.server"
protocol="https"
connection="sample"
formBasedAuthentication="true"
formBasedAuthenticationURL="https://my.mail.server/owa/auth/owaauth.dll">
<cfexchangecalendar action="get" name="mycal" connection="sample">
<cfexchangefilter name="startTime" from="#theDate#" to="#theEndDate#">
</cfexchangecalendar>
<cfexchangeConnection action="close" connection="sample">
Additional notes:
IIS and WebDAV are enabled on the target Exchange server.
The username and password you're using have the appropriate permissions for a WebDAV connection. (I'm not the Exchange admin, so I'm not sure what they are, but I think the account needs to be allowed to connect to OWA. Please correct me if I am wrong.)
Optional: (don't use if you don't have to)
IF HTTPS is required, use the appropriate argument.
IF Forms Based Authentication is on in Exchange 2007 (as was my case), you'll have to work around it using the formBasedAuthenticationURL argument.
Not sure if that's it, but I hope it is!