I'm currently writing a small HTTP server, and I would like to implement SSL.
The goal is to be able to load multiple PEM files in Boost so it can do a correct SSL handshake with a client depending on the SNI sent in the TLS handshake.
However, I don't see how I can load multiple PEM files in Boost, nor how I can tell it to use one cert or another depending on this SNI.
For example, I load the cert with:
m_context.use_certificate_file("cert.pem", boost::asio::ssl::context::pem);
m_context.use_private_key_file("server.pem",
                               boost::asio::ssl::context::pem);
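For illustration, here is a rough sketch of the direction I'm considering: keep one ssl::context per hostname and use OpenSSL's SNI callback (reached through native_handle()) to swap contexts during the handshake. The contexts map, make_context() and install_sni() are just names I made up for this sketch, not part of any real API:

#include <boost/asio/ssl.hpp>
#include <openssl/ssl.h>
#include <map>
#include <memory>
#include <string>

namespace ssl = boost::asio::ssl;

// One context (certificate + key) per hostname; filled in at startup.
std::map<std::string, std::unique_ptr<ssl::context>> contexts;

std::unique_ptr<ssl::context> make_context(const std::string& cert,
                                           const std::string& key)
{
    auto ctx = std::make_unique<ssl::context>(ssl::context::sslv23);
    ctx->use_certificate_chain_file(cert);
    ctx->use_private_key_file(key, ssl::context::pem);
    return ctx;
}

// Called by OpenSSL during the handshake once the client's SNI is known.
int sni_callback(SSL* ssl_handle, int* /*al*/, void* /*arg*/)
{
    const char* name = SSL_get_servername(ssl_handle, TLSEXT_NAMETYPE_host_name);
    if (name) {
        auto it = contexts.find(name);
        if (it != contexts.end())
            SSL_set_SSL_CTX(ssl_handle, it->second->native_handle()); // swap cert/key
    }
    return SSL_TLSEXT_ERR_OK;
}

// Install the callback on the default context used to accept connections.
void install_sni(ssl::context& default_ctx)
{
    SSL_CTX_set_tlsext_servername_callback(default_ctx.native_handle(), sni_callback);
}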
I need a snippet of code for a program I am writing with Boost Asio SSL.
I have a system of two clients that connect to each other. I require them to do mutual authentication, so that at the end of the handshake() call, both clients can be certain that the other client has the private key for the certificate it supplied.
Both clients have a context object, let's call them ctx1 and ctx2, and each client has a public certificate and a private key.
Is it possible to set up the context objects so that when I call socket.handshake() the clients will do two-way authentication? If not, what would be the correct way to go about achieving my goal?
It looks like boost just uses the OpenSSL interface.
I don't know boost much but I've implemented lots of OpenSSL internals for Perl and came to the following conclusions after reading the relevant parts of the boost source code:
To have mutual authentication with OpenSSL you have to use SSL_VERIFY_PEER on the client side and SSL_VERIFY_PEER|SSL_VERIFY_FAIL_IF_NO_PEER_CERT on the server side. If you use only SSL_VERIFY_PEER on the server side it will only send the certificate request to the client, but silently accept if the client sends no certificate back.
With boost this would probably be:
ctx.set_verify_mode(ssl::verify_peer); // client side
ctx.set_verify_mode(ssl::verify_peer|ssl::verify_fail_if_no_peer_cert); // server side
If you set verify_mode this way it will verify the certificates against the configured trusted CAs (set with ctx.load_verify_file or ctx.load_verify_path).
If you have full control over the CA which signed the certificates (i.e. it's your own CA) it might be enough for you to accept any certificate signed by this CA. But if you use a CA which also signed certificates you don't want to accept, as is typically the case with public CAs, you also need to verify the contents of the certificate. The details of how to do this depend on your protocol, but for the usual internet protocols like HTTP or SMTP this involves checking the commonName and/or subjectAltNames of the certificate. Details like wildcard handling vary between the protocols.
Boost provides rfc2818_verification to help you with HTTP-style validation, although from reading the code I think the implementation is slightly wrong (multiple wildcards accepted, IDN wildcards allowed - see RFC6125 for requirements).
I don't know of any standards for verifying client certificates. Often just any certificate signed by a specific (private) CA will be accepted; other times, certificates from a public CA matching a specific e-mail pattern are accepted. It looks like boost does not help you much in this case, so you probably have to get the OpenSSL SSL* handle with sock.native_handle() and then use OpenSSL functions to extract the certificate (SSL_get_peer_certificate) and to check its contents (various X509_* functions).
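As a rough sketch (not a complete validation policy), pulling the peer certificate off the native handle and reading its subject commonName might look like this; the helper name and buffer size are arbitrary:

#include <boost/asio/ssl.hpp>
#include <openssl/ssl.h>
#include <openssl/x509.h>
#include <openssl/objects.h>
#include <string>

// Returns the peer certificate's commonName, or "" if no certificate was sent.
template <typename SslStream>
std::string peer_common_name(SslStream& sock)
{
    X509* cert = SSL_get_peer_certificate(sock.native_handle());
    if (!cert)
        return "";

    char buf[256] = {0};
    X509_NAME_get_text_by_NID(X509_get_subject_name(cert),
                              NID_commonName, buf, sizeof(buf));
    X509_free(cert); // SSL_get_peer_certificate bumps the reference count
    return buf;
}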
At least if public CAs are involved you should also check the revocation status of the certificates. It looks like boost does not provide a direct interface to CRLs (certificate revocation lists), so you have to use ctx.native_handle() with the appropriate OpenSSL functions (X509_STORE_add_crl etc.). Using OCSP (Online Certificate Status Protocol) is way more complex, and the relevant OpenSSL functions are mostly undocumented, which means you have to read the OpenSSL source code to use them :(
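If you go the CRL route, a minimal sketch of wiring a PEM-encoded CRL into the context through the native handle could look like the following; "crl.pem" is a placeholder and error handling is kept minimal:

#include <boost/asio/ssl.hpp>
#include <openssl/ssl.h>
#include <openssl/pem.h>
#include <openssl/x509_vfy.h>
#include <stdexcept>

// Attach a PEM-encoded CRL to the context's certificate store and enable
// CRL checking for the whole chain.
void add_crl(boost::asio::ssl::context& ctx, const char* crl_file)
{
    BIO* in = BIO_new_file(crl_file, "r");
    if (!in)
        throw std::runtime_error("cannot open CRL file");

    X509_CRL* crl = PEM_read_bio_X509_CRL(in, nullptr, nullptr, nullptr);
    BIO_free(in);
    if (!crl)
        throw std::runtime_error("cannot parse CRL file");

    X509_STORE* store = SSL_CTX_get_cert_store(ctx.native_handle());
    X509_STORE_add_crl(store, crl);
    X509_STORE_set_flags(store, X509_V_FLAG_CRL_CHECK | X509_V_FLAG_CRL_CHECK_ALL);
    X509_CRL_free(crl); // the store keeps its own reference
}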
One can't force the other side to authenticate against you; it is up to the protocol, and each side authenticates only against the other side. Just follow the manual at http://www.boost.org/doc/libs/1_47_0/doc/html/boost_asio/overview/ssl.html
ssl::context ctx(ssl::context::sslv23);
ctx.set_verify_mode(ssl::verify_peer);
ctx.load_verify_file("ca.pem");
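Putting the pieces from both answers together, a hedged sketch of how each side's context might be configured for mutual authentication (file names are placeholders) could look like this:

#include <boost/asio/ssl.hpp>

namespace ssl = boost::asio::ssl;

void configure_mutual_auth(ssl::context& ctx, bool is_server)
{
    ctx.use_certificate_chain_file(is_server ? "server_cert.pem" : "client_cert.pem");
    ctx.use_private_key_file(is_server ? "server_key.pem" : "client_key.pem",
                             ssl::context::pem);
    ctx.load_verify_file("ca.pem"); // CA that signed the peer's certificate

    // The server must demand a client certificate, not merely request one.
    ctx.set_verify_mode(is_server
        ? ssl::verify_peer | ssl::verify_fail_if_no_peer_cert
        : ssl::verify_peer);
}

The server branch uses verify_fail_if_no_peer_cert so the handshake aborts if the client presents no certificate, which is the behaviour described in the first answer.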
I am trying to write a C++ WebSocket server and have browser/Chrome clients connect over WebSockets, for a multiplayer game. The WebSocket C++ library I'm using at the moment is websocketpp (websocket++). I wrote an app that allows clients to connect over ws and localhost, but when I add an IP for the address, connections don't occur at all. Now I think I have to use SSL and wss for an IP connection? I tried it and there is some connection activity, but then the handshake times out. Could I be experiencing cross-origin issues, or do I need SSL? I am new to WebSockets. Could the problem be my SSL certs I made with OpenSSL? I can post code, or if you are familiar with a C++ library to do WebSockets, what is it? Is this even a possible thing to do?
There could be multiple reasons why it won't connect over ip.
The first is port forwarding. On a local network it's not necessary, but when running a server over a remote network, port forwarding has to be done. You can just run your server and then use a simple port checker (there are many websites for this) to see if a connection can be established.
The other reason could be, as you said, SSL. If you are running your client on a web host, the host may require a connection to be made over ssl/wss for WebSockets. If your server isn't running a valid SSL certificate then this could prevent the client from connecting to your server. I know, for example, that GitHub Pages requires the server to be running wss with valid SSL certificates on the server side in order for a client connection to be established; however, if you use a custom domain name for GitHub Pages then you can disable the need for SSL.
In order to get valid SSL certificates you would need to register a domain for your IP address and then either buy certificates or use free certificates from ZeroSSL or other distributors.
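If the problem is that the server is not speaking TLS at all, a rough sketch of enabling wss in websocketpp (assuming the asio_tls config; certificate file names and the port are placeholders) might look like this:

#include <websocketpp/config/asio.hpp>
#include <websocketpp/server.hpp>

typedef websocketpp::server<websocketpp::config::asio_tls> wss_server;
typedef websocketpp::lib::shared_ptr<boost::asio::ssl::context> context_ptr;

// Called once per incoming connection to supply the TLS context.
context_ptr on_tls_init(websocketpp::connection_hdl) {
    namespace ssl = boost::asio::ssl;
    context_ptr ctx =
        websocketpp::lib::make_shared<ssl::context>(ssl::context::sslv23);
    ctx->set_options(ssl::context::default_workarounds |
                     ssl::context::no_sslv2 |
                     ssl::context::no_sslv3);
    ctx->use_certificate_chain_file("fullchain.pem");            // placeholder
    ctx->use_private_key_file("privkey.pem", ssl::context::pem); // placeholder
    return ctx;
}

int main() {
    wss_server srv;
    srv.init_asio();
    srv.set_tls_init_handler(&on_tls_init);
    srv.listen(9002);   // clients then connect with wss://host:9002
    srv.start_accept();
    srv.run();
}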
Here is a game I have written which connects to a C++ server that I'm running on my own machine with its own domain and valid SSL certificates, while the client is running on GitHub Pages with a custom domain I have registered.
It's basically multiplayer minesweeper where the objective is to locate the flags rather than avoid them.
I'm using a local server for django dev and ngrok tunnel for webhooks. I've seen other localtunnel services like serveo. Can these services see your source code? Are they forwarding your local files to the ngrok server or just handling requests on a public domain and then securely fetching from your local server?
I've read about how ngrok creates a proxy and handles requests, but I still don't understand what exactly tunneling involves
It depends.
They certainly don't copy your django code and run it on their own server and they're not going to maliciously grab files off of your machine.
They just read from a network socket, but they do vary as to how encrypted they are or aren't.
Telebit
Telebit always uses end-to-end encryption via SSL, TLS, HTTPS, or Secure Web Socket (WSS)
TLS certs happen on the clients, not the relay
Works with SSH, OpenVPN, etc - but requires a ProxyCommand / secure client
(i.e. sclient, stunnel, or openssl s_client)
Can work with other, normally-unencrypted, TCP protocols (requires a secure client)
There is a poorly documented and deprecated feature for raw TCP, which can be seen, if used.
Serveo
Serveo uses SSH port forwarding, which encrypts traffic between the local server and the relay, but not between the relay and the remote client
the origin traffic may be encrypted or unencrypted
ngrok
ngrok used to decrypt on their server, with an option to specify SSL certs manually; they may have switched to full encryption since
A deeper dive
If you want to know more about their workings, you may (or may not) find this other answer I wrote informative and digestible: https://stackoverflow.com/a/52614266/151312
I found Vortex to be a good fit.
Just download and run:
https://www.vtxhub.com/
I'm attempting to write a simple HTTP/HTTPS proxy using Boost ASIO. HTTP is working fine, but I'm having some issues with HTTPS. For the record, this is a local proxy. Anyway, here is an example of how a transaction works with my setup.
Browser asks for Google.com
I lie to the browser and tell it to go to 127.0.0.1:443
Browser socket connects to my local server on 443
I attempt to read the headers so I can do a real host lookup and open a second upstream socket so I can simply forward out the requests.
This is where things fail immediately. When I try to print out the headers of the incoming socket, it appears that they are already encrypted by the browser making the request. I thought at first that perhaps the jumbled console output was just that the headers were compressed, but after some thorough testing this is not the case.
So I'm wondering if anyone can point me in the right direction, perhaps to some reading material where I can better understand what is happening here. Why are the headers immediately encrypted before the connection to the "server" (my proxy) even completes and has a chance to communicate with the client? Is it a temp key? Do I need to ignore the initial headers and send some command back telling the client what temporary key to use or not to compress/encrypt at all? Thanks so much in advance for any help, I've been stuck on this for a while.
HTTPS passes all HTTP traffic, headers and all, over a secure SSL connection. This is by design to prevent exactly what you're trying to do which is essentially a man-in-the-middle attack. In order to succeed, you'll have to come up with a way to defeat SSL security.
One way to do this is to provide an SSL certificate that the browser will accept. There are a couple common reasons the browser complains about a certificate: (1) the certificate is not signed by an authority that the browser trusts and (2) the certificate common name (CN) does not match the URL host.
As long as you control the browser environment then (1) is easily fixed by creating your own certificate authority (CA) and installing its certificate as trusted in your operating system and/or browser. Then in your proxy you supply a certificate signed by your CA. You're basically telling the browser that it's okay to trust certificates that your proxy provides.
(2) will be more difficult because you have to supply the certificate with the correct CN before you can read the HTTP headers to determine the host the browser was trying to reach. Furthermore, unless you already know the hosts that might be requested you will have to generate (and sign) a matching certificate dynamically. Perhaps you could use a pool of IP addresses for your proxy and coordinate with your spoofing DNS service so that you know which certificate should be presented on which connection.
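To give an idea of what generating such a certificate dynamically might involve, here is a rough sketch using the OpenSSL C API; loading the CA key/cert and all error handling are omitted, and every name here is illustrative rather than part of any particular proxy:

#include <openssl/x509.h>
#include <openssl/x509v3.h>
#include <openssl/evp.h>
#include <ctime>

// Mint a short-lived certificate for `host`, signed by your own CA, so a
// browser that trusts that CA accepts the proxy's handshake. ca_cert, ca_key
// and leaf_key are assumed to be loaded elsewhere.
X509* make_cert_for_host(const char* host, X509* ca_cert, EVP_PKEY* ca_key,
                         EVP_PKEY* leaf_key)
{
    X509* cert = X509_new();
    X509_set_version(cert, 2);                                    // X509v3
    ASN1_INTEGER_set(X509_get_serialNumber(cert), (long)time(nullptr));
    X509_gmtime_adj(X509_get_notBefore(cert), 0);
    X509_gmtime_adj(X509_get_notAfter(cert), 60L * 60 * 24);      // valid 1 day
    X509_set_pubkey(cert, leaf_key);

    // Subject CN must match the host the browser asked for (modern browsers
    // also expect a matching subjectAltName extension).
    X509_NAME* name = X509_get_subject_name(cert);
    X509_NAME_add_entry_by_txt(name, "CN", MBSTRING_ASC,
                               (const unsigned char*)host, -1, -1, 0);
    X509_set_issuer_name(cert, X509_get_subject_name(ca_cert));

    X509_sign(cert, ca_key, EVP_sha256());
    return cert;
}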
Generally HTTPS proxies are not a good idea. I would discourage it because you'll really be working against the grain of browser security.
I liked this book as an SSL/TLS reference. You can use a tool like OpenSSL to create and sign your own certificates.
I have two computer systems each having an apache server. One machine is a client machine and the other is a server machine. I want both the client request and the server response to be encrypted thus making the data transfer safe.
Could someone please give pointers/steps on how I could make progress on this front?
The communication doesn't involve any GUI components meaning the communication is purely a backend one.
Both the client and the server are coded in Java. I am using Axis2 and JAX-WS for the communication.
Currently I am able to send the client request and receive the server response without SSL enabled. Now, if I enable SSL, does it mean that I should also modify the existing code accordingly, or does the current working code still hold good?
You have many options here. Since you mention SSL...
On each server generate an asymmetric key-pair (RSA 2048 is a safe choice). Then create a self-signed certificate on each server. Then copy each certificate to the other machine and mark it as trusted by the Java environment that Apache is using, ensuring that NONE OTHER are trusted. Configure SSL/TLS on each of the Apaches to use a good symmetric cipher (3DES is a safe choice, but there are newer ciphers if you want leading edge). Next ensure that all access between the Tomcat servers is via https URLs and you should be in decent shape.
An alternative is to use IPSEC to establish a static tunnel between the two servers using certificates or other trust bases.
One fairly simple option is to use stunnel, which is available via the standard package manager on most *NIX systems. You configure one stunnel as a client (and server if you wish) on one server and another as the server (and client if you wish), and then configure your Tomcat instance(s) to connect to localhost:XYZ, where XYZ is the port where stunnel is listening.
The nice part about using stunnel is that you can use it to tunnel any protocol: it is neither a Tomcat-specific nor a Java-specific technique, so you can use it for other applications in the same environment if you want.