I have a framework application which connects to different servers depending on how it is used. For HTTPS connections OpenSSL is used. My problem is that I need to know whether the server I am connecting to uses SSL or TLS, so I can create the right SSL context. Currently, if I use the wrong context, trying to establish a connection times out.
For TLS I use:
SSL_CTX *sslContext = SSL_CTX_new(TLSv1_client_method());
For SSL I use:
SSL_CTX *sslContext = SSL_CTX_new(SSLv23_client_method());
So is there a way to know which protocol a server is running before establishing a connection?
Edit: As I understand it now, it should work either way, since SSLv23_client_method() also includes the TLS protocols. So the question is: why doesn't it? What could cause a timeout with one client method but not the other?
For SSL I use:
SSL_CTX *sslContext = SSL_CTX_new(SSLv23_client_method());
TLS is just the current name for the former SSL protocol, i.e. TLS 1.0 is effectively SSL 3.1, etc. SSLv23_client_method() is actually the most compatible way to establish SSL/TLS connections and will use the best protocol available. That means it will also create TLS 1.2 connections if the server supports them. See also the documentation of SSL_CTX_new:
SSLv23_method(void), SSLv23_server_method(void), SSLv23_client_method(void)
A TLS/SSL connection established with these methods may understand the SSLv2, SSLv3, TLSv1, TLSv1.1 and TLSv1.2 protocols.
... a client will send out TLSv1 client hello messages including extensions and will indicate that it also understands TLSv1.1, TLSv1.2 and permits a fallback to SSLv3. A server will support SSLv3, TLSv1, TLSv1.1 and TLSv1.2 protocols. This is the best choice when compatibility is a concern.
Any protocols you don't want (like SSLv3) can be disabled with SSL_OP_NO_SSLv3 etc. using SSL_CTX_set_options.
Currently if I use the wrong context trying to establish a connection times out.
Then either the server or your code is broken. If a server receives a connection attempt with a protocol it does not understand, it should return an "unknown protocol" alert. Other servers simply close the connection. A timeout will usually only happen with a broken server or a middlebox, like an old F5 BIG-IP load balancer.
So is there a way to know which protocol a server is running before establishing a connection?
No. But nowadays you should presume it's "TLS 1.0 and above".
As Steffen pointed out, you use SSLv23_method and context options to realize "TLS 1.0 and above". Here's the full code. You can use it in a client or a server:
const SSL_METHOD* method = SSLv23_method();
if(method == NULL) handleFailure();

SSL_CTX* ctx = SSL_CTX_new(method);
if(ctx == NULL) handleFailure();

/* Disable SSLv2 and SSLv3, leaving TLS 1.0 and above; also disable
   TLS compression, which is exploitable via the CRIME attack. */
const long flags = SSL_OP_NO_SSLv2 | SSL_OP_NO_SSLv3 | SSL_OP_NO_COMPRESSION;
SSL_CTX_set_options(ctx, flags);
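To actually connect, you then create an SSL object from this context as usual. A minimal client-side sketch, assuming sockfd is an already-connected TCP socket:

SSL* ssl = SSL_new(ctx);
if(ssl == NULL) handleFailure();
SSL_set_fd(ssl, sockfd);            /* attach the existing TCP socket */
if(SSL_connect(ssl) != 1) handleFailure();  /* perform the handshake */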
Now, there's an implicit assumption here that's not readily apparent, and that assumption is wrong: the assumption that there is a "TLS min" and a "TLS max" version.
What happens is that there's an underlying SSL/TLS record layer that carries the protocol payloads. The TLS record layer is independent of the protocol layer and has its own version. People interpret the record layer version as the "TLS min" version and the protocol version as the "TLS max" version. Most clients, servers, sites and services use it that way.
However, the IETF does not specify it that way, and browsers don't use it that way. Because of that, we recently got the TLS Fallback Signaling Cipher Suite Value (SCSV).
The browsers are correct. Here's how it's supposed to be done (sketched in code after the list):
try TLS 1.2, use Fallback Signalling to detect downgrade attacks
if TLS 1.2 fails, then try TLS 1.1, use Fallback Signalling to detect downgrade attacks
if TLS 1.1 fails, then try TLS 1.0, use Fallback Signalling to detect downgrade attacks
Many give up after TLS 1.0 fails. Some user agents may continue with SSLv3.
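For illustration, here is a minimal sketch of that fallback dance with OpenSSL. It assumes OpenSSL 1.0.1j or later (for SSL_MODE_SEND_FALLBACK_SCSV) and a hypothetical connect_with() helper that performs the TCP connect plus SSL_connect() and returns 0 on success:

const SSL_METHOD* methods[] = { TLSv1_2_client_method(),
                                TLSv1_1_client_method(),
                                TLSv1_client_method() };
for(size_t i = 0; i < sizeof(methods)/sizeof(methods[0]); i++)
{
    SSL_CTX* ctx = SSL_CTX_new(methods[i]);
    if(ctx == NULL) continue;
    SSL* ssl = SSL_new(ctx);
    if(ssl == NULL) { SSL_CTX_free(ctx); continue; }
    if(i > 0) /* a downgraded retry, so signal it to the server */
        SSL_set_mode(ssl, SSL_MODE_SEND_FALLBACK_SCSV);
    if(connect_with(ssl) == 0) break; /* hypothetical helper: TCP connect + SSL_connect */
    SSL_free(ssl);
    SSL_CTX_free(ctx);
}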
Why has the IETF not moved to give us "TLS min" and "TLS max"? That's still a mystery. I think the effective argument given is "suppose a client wants to use TLS 1.0, 1.2 and 1.3, but not 1.1". I don't know anyone who drops a protocol version like that, so it's just a strawman to me. (This is one of those times when I wonder whether law enforcement or a national interest, like the NSA, is tampering with the standards.)
The issue was recently brought up again on the TLS Working Group. From TLS: prohibit <1.2 support on 1.3+ servers (but allow clients) (May 21, 2015):
Now might be a good time to add a (3) for TLS 1.3: have a client
specify both the least TLS version they are willing to use, and the
greatest TLS they desire to use. And MAC or derive from it so it
can't be tampered or downgraded.
You can still provide the TLS record layer version, and you can
keep it un-MAC'd so it can be tampered with to cause a disclosure or
crash :)
Effectively, that's how the versions in the record layer and client
protocol are being used. It stops those silly dances the browsers and
other user agents perform without the need for TLS Fallback SCSV.
If part of the IETF's mission is to document existing practices, then the IETF is not fulfilling its mission.
Related
We have an existing legacy C++ app which uses TCP via Berkeley C sockets.
We need to continue using Berkeley sockets in our existing environment but for a second, new environment we need to use SSL/TLS.
I've programmed with OpenSSL before, but this application code is... significant, and not easy to change.
Is there a way to achieve SSL without making code changes? (networking, proxies etc)
The answer depends on your needs. To send HTTP requests, you can implement a proxy server that forwards all incoming requests to their destination, adding SSL/TLS functionality on the way. For this approach you will most likely not find an out-of-the-box solution. I would recommend using Poco (ServerApplication, HTTPSClientSession).
For protocols that keep a permanent connection, it is more logical to set up a secure tunnel, which wraps all traffic in SSL/TLS in one direction and unwraps it in the other. For this approach an out-of-the-box solution is quite realistic: try stunnel or ghostunnel. This option can also be implemented with the same Poco library, or with low-level POSIX sockets and OpenSSL for the encryption.
Both solutions can be set up either on a secondary node (sending packets from the source server to the address of the new node) or on the local machine (sending packets to the loopback address).
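For example, a minimal client-mode stunnel configuration for the loopback variant might look like this (the service name and addresses are placeholders):

; stunnel.conf - wrap plain TCP from the legacy app in TLS
client = yes
[tls-wrap]
accept  = 127.0.0.1:8080
connect = remote.example.com:443

The legacy app then connects to 127.0.0.1:8080 with its existing plain Berkeley sockets, and stunnel forwards the traffic to remote.example.com:443 over TLS, with no code changes in the app.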
Okay, so a little context:
I have an app running on an embedded system that sends a few different requests over HTTP (using libcurl in C++) at the following intervals:
5 minutes
15 minutes
1 hour
24 hours
My goal: Reduce data consumption (runs over cellular)
We have both client and server side TLS authentication, so the handshake is costly. The idea is that we use persistent connections (at least for the shorter interval files) to avoid doing the handshake every time.
Unfortunately, after much tinkering I've figured out that the server is closing the connection before the intervals pass. Maybe this is something we can extend? I'll have to talk to the server side guys.
I was under the impression that was the reason the "TCP keep-alive" packets existed, but supposedly those "check the connection" not "keep it open" like the name suggests.
My idea is this:
Have my app send a packet (as small as possible) every 2 minutes or so (however long the timeout is) to "nudge" the connection into staying open.
My questions are:
Does that make any sense?
I don't suppose there is an easy way to do this in libcurl is there?
If so, how small could we get the request?
Is there an even easier way to do it? My only issue here is that all the connection stuff "lives" in libcurl.
Thanks!
It would be easier to give a more precise answer if you gave a little more detail on your application architecture. For example, is it a RESTful API? Is the use of HTTP absolutely mandatory? If so, what HTTP server are you using (nginx, apache, ...)? Could you consider websockets as an alternative to plain HTTP?
If you are at liberty to use something other than regular HTTP or HTTPs - and to use something other than libcurl on the client side - you would have more options.
If, on the other hand, you are constrained to both
use HTTP (rather than a raw TCP connection or websockets), and
use libcurl
then I think your task is a good bit more difficult - but maybe still possible.
One of your first challenges is that typical timeouts for an HTTP connection are quite low (as low as a few seconds for Apache 2). If you can configure the server, you can increase this.
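For example, for Apache 2 the relevant directives are the following (the values shown are only illustrative):

KeepAlive On
MaxKeepAliveRequests 100
KeepAliveTimeout 300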
I was under the impression that was the reason the "TCP keep-alive" packets existed, but supposedly those "check the connection" not "keep it open" like the name suggests.
Your terminology is ambiguous here. Are you referring to TCP keep-alive packets or persistent HTTP connections? These don't necessarily have anything to do with each other. The former is an optional mechanism in TCP (which is disabled by default). The latter is an application-layer concept which is specific to HTTP - and may be used regardless of whether keep-alive packets are being used at the transport layer.
My only issue here is that all the connection stuff "lives" in libcurl.
The problem with using libcurl is that it is first and foremost a transfer library. I don't think it is tailored for long-running, persistent TCP connections. Nonetheless, according to Daniel Stenberg (the author of libcurl), the library will automatically try to reuse existing connections where possible - as long as you re-use the same easy handle.
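A minimal sketch of that reuse, assuming a hypothetical /ping endpoint (error handling omitted for brevity):

#include <curl/curl.h>

CURL *curl = curl_easy_init();
if(curl) {
    curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/ping");
    curl_easy_perform(curl);  /* first call: TCP connect + full TLS handshake */
    /* ... later, before the server-side timeout expires ... */
    curl_easy_perform(curl);  /* reuses the cached connection if it is still open */
    curl_easy_cleanup(curl);  /* closes any cached connections */
}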
If so, how small could we get the request?
Assuming you are using a 'ping' endpoint on your server - which accepts no data and returns a 204 (success but no content) response, then the overhead - in the application layer - would be the size of the HTTP request headers + the size of the HTTP response headers. Maybe you could get it down to 200-300 bytes, or thereabouts.
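For illustration, the smallest plausible exchange against such a hypothetical /ping endpoint would look roughly like this on the wire:

GET /ping HTTP/1.1
Host: example.com

HTTP/1.1 204 No Content

Real-world traffic adds headers such as User-Agent, Date and Server on top of this, which is where the 200-300 byte estimate comes from.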
Alternatives to (plain) HTTP
If you are using a RESTful API, this paradigm sort of goes against the idea of a persistent TCP connection - although I cannot think of any reason why it would not work.
You might consider websockets as an alternative, but - again - libcurl is not ideal for this. Although I know very little about websockets, I believe they would offer some advantages.
Compared to plain HTTP, websockets offer:
significantly less overhead than HTTP per message;
the connection is automatically persistent: there is no need to send extra 'keep alive' messages to keep it open;
Compared to a raw TCP connection, the benefits of websockets are that:
you don't have to open a custom port on your server;
it automatically handles the TLS/SSL stuff for you.
(Someone who knows more about websockets is welcome to correct me on some of the above points - particularly regarding TLS/SSL and keep alive messages.)
Alternatives to libcurl
An alternative to libcurl which might be useful here is the Mongoose networking library. It would provide you with a few different alternatives:
use a plain TCP connection (and a custom application layer protocol),
use a TCP connection and handle the HTTP requests yourself manually,
use websockets - which it has very good support for (both as server and client).
Mongoose allows you to enable SSL for all of these options also.
I'm using OpenSSL in a non-web application to upgrade a socket to TLS (STARTTLS-style protocol). Everything works fine, but I would also like to know which TLS version, among the allowed ones, was actually negotiated.
Is there any way to find this information using the OpenSSL API?
Note: with OpenSSL 1.1 and later the information is likely returned by the function SSL_SESSION_get_protocol_version(), but I also need to find this information for earlier OpenSSL library versions (the code is in the wild, and performing a major update of OpenSSL for mere logging purposes is not an option).
You can use SSL_get_version():
SSL_get_version() returns the name of the protocol used for the connection ssl. It should only be called after the initial handshake has been completed. Prior to that the results returned from this function may be unreliable.
RETURN VALUES
The following strings can be returned:
SSLv2
The connection uses the SSLv2 protocol.
SSLv3
The connection uses the SSLv3 protocol.
TLSv1
The connection uses the TLSv1.0 protocol.
TLSv1.1
The connection uses the TLSv1.1 protocol.
TLSv1.2
The connection uses the TLSv1.2 protocol.
unknown
This indicates an unknown protocol version.
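A minimal usage sketch, assuming ssl is an SSL* whose handshake has already completed:

/* ssl: an SSL* on which SSL_connect()/SSL_accept() has succeeded */
const char *version = SSL_get_version(ssl);
printf("negotiated protocol: %s\n", version);  /* e.g. "TLSv1.2" */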
I am beginning to learn about information protection, starting with OpenSSL. But I read on Wikipedia that SSL has security problems that are still unsolved, and that everyone should use TLS instead. Is that true? Does it mean that SSL is now obsolete? (That is, did another means of protection appear instead of fixing SSL?)
TLS is just the newer name for the protocol formerly named SSL. If you look at the protocol level you see that TLS 1.0 is practically SSL 3.1, TLS 1.1 is SSL 3.2, etc. Versions up to and including SSL 3.0 are considered broken and should not be used any longer.
Because of this naming in practice "SSL" and "TLS" are often used to mean the same protocol group and often you find also "SSL/TLS" to refer to this protocol group. Usually only if a version number is added they refer to this version only. Libraries like OpenSSL, PolarSSL, MatrixSSL etc implement the protocol group, i.e. SSL and TLS.
To add to this naming confusion "SSL" is often used together with protocols like SMTP (send mail) or IMAP (access mail) to mean a secure connection from start while "TLS" is used in this context to mean secure connection after issuing a specific STARTTLS command. It is better to use "implicit" and "explicit" SSL/TLS instead.
SSL as a protocol is insecure, and TLS should be used instead. However, people often use the name 'SSL' to refer to SSL as well as TLS. Furthermore, it is often included in many systems as a fallback measure, so that connections can have some security, at least, if TLS isn't available. The merits of doing this are questionable, as some people feel that this allows developers to be lazy, or not even realize they're using an insecure method.
I would like to implement the server side of a licence management software. I use C++ in LINUX OS.
When the software starts it must connect to a server that checks privileges and allows/disallow running of some features.
My question is about the implementation of the communication between client and server across internet:
The server will have a static IP on the internet, so is it enough to use a simple TCP/IP socket client that connects to a TCP/IP socket server (providing IP/port)?
I am familiar with socket communication, but less so with communication across the internet, so my question is whether this is the right approach or whether I need to use a different mechanism, like an HTTP client/server or something else.
Regards
AFG
Here are some benefits to using HTTP as a transport:
easier to get right, more likely to work in production: Yes, you will probably have to add additional dependencies to deal with HTTP (client and server side), but it's still preferable to yet another homegrown protocol, which you would have to implement, maintain, keep backwards compatible, and handle multiplatform issues for (e.g. endianness), etc. In terms of implementation ease, an HTTP-based solution should be far easier in the common case (especially true if you build a REST-style service API for license checking).
More help available: HTTP as the foundation of the web is one of the most widely used technologies today. Most (all?) problems you will run into are probably publicly documented with solutions/workarounds.
Encryption 'for free': Encryption is already a solved problem (HTTPS/SSL), both with regard to transport as well as with regard to what you have to implement on your end, and it's just a matter of setting it up.
Server Authentication 'for free': HTTPS/SSL doesn't only solve encryption but also server authentication, so that the client can verify whether it's actually talking to the right service.
Guaranteed to work on the internet: HTTP/HTTPS traffic is common on the internet, so you won't run into routing problems or firewalls which are hard to traverse. This might be a problem when using your own protocol.
Flexibility out of the box: You also put less constraints on clients communicating with your server, as it's very simple to build a client in many different environments, as long as they can talk HTTP (and maybe SSL), and they know how to issue the request to your server (ie. what your service API looks like).
Easy to integrate with administrative webapp: If you want to allow users to manage their accounts associated with licenses in some way (update contact info etc.), then you might even combine the license server with that application. You can also build the license administration UI part into the same app if that's useful.
And as a last remark (this puts additional constraints on your client side HTTPS/SSL implementation): you can even use client side SSL certificates, which essentially allow authenticating the client to the server. Depending on how you use them, client side certificates are harder to manage, but they can be eg. expired, or revoked, so to some extent they actually are licenses (to connect to the server).
HTTP is not a different mechanism. It is a protocol operated over TCP/IP connections.
The internet uses the IP transport exclusively. On top of it you can use a UDP, TCP or SCTP session layer (well, UDP is not much of a session layer). TCP is the usual choice.
Sockets are an operating-system interface. They are the only interface to the network on most systems, but some systems have different interfaces. They have nothing to do with the transport itself.
IP addresses are in practice tied to network topology, so I strongly discourage hardcoding the IP address into the server. If you have to change network provider for any reason, you won't be getting the same IP address. Use DNS, it's just one gethostbyname call.
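A minimal sketch of that lookup (the hostname is a placeholder):

#include <netdb.h>
#include <string.h>
#include <netinet/in.h>

struct sockaddr_in resolve(const char *name, unsigned short port)
{
    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(port);
    struct hostent *host = gethostbyname(name); /* the one DNS lookup */
    if(host != NULL)
        memcpy(&addr.sin_addr, host->h_addr_list[0], host->h_length);
    return addr;
}

/* usage: struct sockaddr_in addr = resolve("license.example.com", 443); */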
And don't forget to authenticate the server; even with a hardcoded IP it's too easy to redirect the traffic.