I'm using OpenSSL in a non-web application to upgrade a socket to TLS (STARTTLS protocol). Everything works fine, but I would also like to know which TLS version, among the allowed ones, was actually negotiated.
Is there any way to find this information using the OpenSSL API?
Note: with OpenSSL 1.1 and later the information is likely returned by the function SSL_SESSION_get_protocol_version(), but I also need to find it for earlier OpenSSL library versions (the code is in the wild, and performing a major OpenSSL update for mere logging purposes is not an option).
You can use SSL_get_version():
SSL_get_version() returns the name of the protocol used for the connection ssl. It should only be called after the initial handshake has been completed. Prior to that the results returned from this function may be unreliable.
RETURN VALUES
The following strings can be returned:
SSLv2: The connection uses the SSLv2 protocol.
SSLv3: The connection uses the SSLv3 protocol.
TLSv1: The connection uses the TLSv1.0 protocol.
TLSv1.1: The connection uses the TLSv1.1 protocol.
TLSv1.2: The connection uses the TLSv1.2 protocol.
unknown: This indicates an unknown protocol version.
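For example, a minimal sketch (error handling elided; ssl is an assumed, already-connected SSL object). SSL_get_version() exists in old OpenSSL releases too, so it fits the "code in the wild" constraint from the question:

#include <stdio.h>
#include <openssl/ssl.h>

/* Log the negotiated protocol. Only meaningful after the initial
 * handshake has completed, e.g. after SSL_connect() returned 1. */
static void log_negotiated_version(const SSL *ssl)
{
    fprintf(stderr, "negotiated protocol: %s\n", SSL_get_version(ssl));
}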
Related
Is there any functional difference between BIO_do_connect and BIO_do_handshake?
Both are defined as the same macro:
/* BIO_s_accept() and BIO_s_connect() */
# define BIO_do_connect(b) BIO_do_handshake(b)
# define BIO_do_accept(b) BIO_do_handshake(b)
# endif /* OPENSSL_NO_SOCK */
# define BIO_do_handshake(b) BIO_ctrl(b,BIO_C_DO_STATE_MACHINE,0,NULL)
Most of the examples on writing a TLS client call BIO_do_handshake after BIO_do_connect to initiate an SSL handshake and open an SSL connection. But from what I have seen when analyzing network traffic with Wireshark, BIO_do_connect does both the TCP handshake and the TLS handshake and opens an SSL connection.
Calling BIO_do_handshake afterwards has no effect.
Is there some state that the state machine transitions to after BIO_do_connect that necessitates calling BIO_do_handshake?
Is this some holdover from previous versions of OpenSSL, and is calling both BIO_do_connect and BIO_do_handshake necessary to support backward compatibility?
These questions came up after I asked this question on SE.
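For reference, here is a minimal sketch of the pattern under discussion (hostname and error handling are placeholders). Since BIO_do_connect and BIO_do_handshake expand to the same BIO_ctrl call, the second call is a no-op once the state machine has already run to completion:

#include <cstdio>
#include <openssl/bio.h>
#include <openssl/err.h>
#include <openssl/ssl.h>

void connect_sketch()
{
    SSL_CTX* ctx = SSL_CTX_new(SSLv23_client_method());
    BIO* web = BIO_new_ssl_connect(ctx);
    BIO_set_conn_hostname(web, "example.com:443"); // placeholder host

    // Runs the whole state machine: TCP connect plus TLS handshake.
    if (BIO_do_connect(web) != 1)
        ERR_print_errors_fp(stderr);

    // Expands to the identical BIO_ctrl() call, so it is a no-op here.
    BIO_do_handshake(web);

    BIO_free_all(web);
    SSL_CTX_free(ctx);
}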
I came to you today because I've got a problem with my client+server app. I built a server and a client app which were working fine with QTcpSocket, but I thought about adding some security and moving to QSslSocket with a delayed handshake. The problem is that my client is acting really weirdly. Here is the situation:
If I use connectToHostEncrypted() in my client and call startServerEncryption() right after getting the socket in my incomingConnection slot, it works fine.
But if I delay the handshake (by doing some reads and writes on the socket) and call startServerEncryption() later, I get the error Error during SSL handshake: error:1408F10B:SSL routines:SSL3_GET_RECORD:wrong version number on the server side. I've also tried to use startClientEncryption() on the client side, but I get the same error.
The certificate is self-signed, and it is loaded and applied to the socket correctly on the server (I checked by testing the first situation with openssl s_client; the server accepts every protocol I can test with: SSLv3, TLSv1, TLSv1.1, TLSv1.2). I cannot test with SSLv2: on the client, using this protocol with QSslSocket::setProtocol(QSsl::SslV2) shows the error unsupported protocol, and openssl s_client won't connect with the -ssl2 argument; it just shows the available arguments.
I'm using Qt 5.9.1 and I've installed OpenSSL-Win32 v1.0.2L to get the DLLs. I'm also compiling with msvc2015 32-bit on Windows 7 64-bit.
I hope that you can help me (and sorry for my bad English), Nicolas.
I've just found my error: I was using a readyRead slot which called socket->readAll(), and that prevented the SSL handshake! I've also discovered that you need to call both startClientEncryption and startServerEncryption to perform an SSL handshake.
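A minimal sketch of that fix, with hypothetical slot and class names; the point is to stop consuming bytes in the plain-text readyRead slot before starting the handshake on both ends:

#include <QSslSocket>

// Server side, once the plain-text phase is over (names are hypothetical):
disconnect(socket, &QSslSocket::readyRead, this, &Server::onPlainReadyRead);
socket->startServerEncryption();

// Client side, at the same point in the exchange:
disconnect(socket, &QSslSocket::readyRead, this, &Client::onPlainReadyRead);
socket->startClientEncryption();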
I developed an SSL proxy as a man-in-the-middle between a client and a server. The handshakes between client and proxy and between proxy and server work fine. I receive a message from the client and decrypt it with the client-side SSL, then encrypt it with the server-side SSL. Everything is good except one thing: OpenSSL received the whole message in one SSL record, but sent it out in 2 SSL records.
Question: How can I force OpenSSL to send the data in 1 SSL record, given that the server is configured to use only 1 SSL record?
Wireshark capture (screenshot not included): 192.168.2.127 is the client; 192.168.0.230 is the server.
Update: I need something like this. I tried to use them, but I ran into this error:
error: ‘SSL_CTX_set_split_send_fragment’ was not declared in this scope
With SQL Server 2008 R2 SP3 and update KB3144114 (which adds TLSv1.2 support), my problem was solved. But in SQL Server 2008 R2 (with TLSv1) the problem remains the same as before, and I couldn't find any solution for it.
You must use the OpenSSL 1.1.x series to use SSL_CTX_set_split_send_fragment(); that function is not available in older versions.
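A hedged sketch, assuming OpenSSL 1.1.0 or later (TLS_method() and SSL_CTX_set_split_send_fragment() do not exist in the 1.0.x series):

#include <openssl/ssl.h>

// Keep application writes of up to 16 KB in a single TLS record.
SSL_CTX* make_ctx()
{
    SSL_CTX* ctx = SSL_CTX_new(TLS_method());
    if (ctx == nullptr)
        return nullptr;

    // Largest plaintext a single record may carry (the protocol maximum)...
    SSL_CTX_set_max_send_fragment(ctx, 16384);
    // ...and never split a write into records smaller than that.
    SSL_CTX_set_split_send_fragment(ctx, 16384);
    return ctx;
}

Note that each SSL_write() produces at least one record, so the proxy must also forward the whole message in a single SSL_write() call.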
I am searching for a client TLS connection example in C++. Ideally for Visual Studio, but honestly it can be any compiler. I found several C samples, but none of them worked. I started with this sample in C:
https://wiki.openssl.org/index.php/SSL/TLS_Client
But it fails on
res = BIO_do_connect(web);
with "system library" if I want to connect to my own node.js server (using the direct ip address) or with "bad hostname lookup" using encrypted.google.com as url.
Both with libressl and Visual Studio 2013.
Next stop: http://fm4dd.com/openssl/sslconnect.htm
Here the program runs successfully. But any attempt to write to the SSL connection at the end with:
std::string json = "{'test':'huhu'}";
char buff[1024];
sprintf(buff, "POST /test.de HTTP/1.1 \nHost: test.de\nContent-Type: application/json\nContent-Length: %d\n\n", json.length());
std::string post = buff;
int snd = SSL_write(ssl, post.data(), post.length());
snd = SSL_write(ssl, json.data(), json.length());
forces the server to close the connection (I do not see exactly what happened, as I do not know how to tell node.js to tell me more).
So I am searching for a working sample, or for how to get a TLS connection with my own certificate running in C++.
I am searching for a client TLS connection example in C++.
I think there are a couple of ports of OpenSSL to C++. They try to do the full class-wrapper thing. Search for "openssl++ class" on Google.
When I use OpenSSL in C++, I use unique pointers for cleanup. See, for example, How to properly print RSA* as string in C++?. I use them primarily to ensure cleanup; I think it's similar to the Resource Acquisition Is Initialization (RAII) pattern.
OpenSSL also provides a page for similar libraries and frameworks. See the Related Links page on the OpenSSL wiki.
But it fails on
res = BIO_do_connect(web);
with "system library" if I want to connect to my own node.js server (using the > direct ip address) or with "bad hostname lookup"
My guess here would be the name in the certificate does not match the name used in the URL to connect.
You can make the names work by adding an entry to your hosts file; effectively, this is your local DNS override. See Microsoft TCP/IP Host Name Resolution Order.
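For example, a hosts-file line like this (hypothetical name and documentation-range address) makes the hostname in the URL match the name in the certificate:

203.0.113.10 myserver.example.test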
Or, you can generate a certificate with all the required names. For that, see How to create a self-signed certificate with openssl?
forces the server to close the connection (I do not see exactly what happened, as I do not know how to tell node.js to tell me more).
"POST /test.de HTTP/1.1 \nHost: test.de\nContent-Type:
application/json\nContent-Length: %d\n\n"
Since you lack the Connection: close request header, the server is probably following RFC 7230, HTTP/1.1 Message Syntax and Routing, Section 6.1:
A server that does not support persistent connections MUST send the
"close" connection option in every response message that does not
have a 1xx (Informational) status code.
Also, that should probably be:
"POST /test.de HTTP/1.1\r\nHost: test.de\r\nContent-Type:
application/json\r\nContent-Length:%d\r\n\r\n"
\r\n is used as the line ending, not \r and not \n. A double \r\n terminates the header. You can quickly verify this by searching for "CRLF" in the standard; you will land in a discussion of the ABNF grammar.
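Putting both fixes together, a sketch of the corrected request (the path and values are the asker's placeholders); %zu matches the size_t returned by json.length(), and Connection: close addresses the RFC 7230 point above:

#include <cstdio>
#include <string>
#include <openssl/ssl.h>

// Send a well-formed HTTP/1.1 POST over an established SSL connection.
void send_post(SSL* ssl)
{
    const std::string json = "{\"test\":\"huhu\"}";
    char buff[1024];
    std::snprintf(buff, sizeof(buff),
                  "POST /test.de HTTP/1.1\r\n"
                  "Host: test.de\r\n"
                  "Content-Type: application/json\r\n"
                  "Content-Length: %zu\r\n"
                  "Connection: close\r\n"
                  "\r\n",
                  json.length());
    const std::string post = std::string(buff) + json;
    SSL_write(ssl, post.data(), static_cast<int>(post.length()));
}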
So I am searching for a working sample, or for how to get a TLS connection with my own certificate running in C++.
The trick here is creating a well-formed certificate. For that, see How to create a self-signed certificate with openssl?
Here's an updated example for LibreSSL using a pinned cert bundle: C++ libtls example on GitHub
I have a framework application which connects to different servers depending on how it is used. For HTTPS connections, OpenSSL is used. My problem is that I need to know whether the server I am connecting to uses SSL or TLS, so I can create the right SSL context. Currently, if I use the wrong context, trying to establish a connection times out.
For TLS I use:
SSL_CTX *sslContext = SSL_CTX_new(TLSv1_client_method());
For SSL I use:
SSL_CTX *sslContext = SSL_CTX_new(SSLv23_client_method());
So is there a way to know which protocol a server is running before establishing a connection?
Edit: As I understand it now, it should work either way, since SSLv23_client_method() also includes the TLS protocols. So the question is: why does it not? What could cause a timeout with one client method but not the other?
For SSL I use:
SSL_CTX *sslContext = SSL_CTX_new(SSLv23_client_method());
TLS is just the current name for the former SSL protocol, i.e. TLS 1.0 is actually SSL 3.1, etc. SSLv23_client_method is actually the most compatible way to establish SSL/TLS connections, and it will use the best protocol available. That means it will also create TLS 1.2 connections if the server supports them. See also the documentation of SSL_CTX_new:
SSLv23_method(void), SSLv23_server_method(void), SSLv23_client_method(void)
A TLS/SSL connection established with these methods may understand the SSLv2, SSLv3, TLSv1, TLSv1.1 and TLSv1.2 protocols.
... a client will send out TLSv1 client hello messages including extensions and will indicate that it also understands TLSv1.1, TLSv1.2 and permits a fallback to SSLv3. A server will support SSLv3, TLSv1, TLSv1.1 and TLSv1.2 protocols. This is the best choice when compatibility is a concern.
Any protocols you don't want (like SSL 3.0) can be disabled with SSL_OP_NO_SSLv3 etc. via SSL_CTX_set_options.
Currently if I use the wrong context trying to establish a connection times out.
Then either the server or your code is broken. If a server gets a connection with a protocol it does not understand, it should return an "unknown protocol" alert. Other servers simply close the connection. A timeout will usually only happen with a broken server or a middlebox, like an old F5 BIG-IP load balancer.
So is there a way to know which protocol a server is running before establishing a connection?
No. But you can nowadays presume it's "TLS 1.0 and above".
As Steffen pointed out, you use SSLv23_method and context options to get "TLS 1.0 and above". Here's the full code; you can use it in a client or a server:
/* Negotiate the highest mutually supported protocol version. */
const SSL_METHOD* method = SSLv23_method();
if(method == NULL) handleFailure();

SSL_CTX* ctx = SSL_CTX_new(method);
if(ctx == NULL) handleFailure();

/* Disable SSLv2/SSLv3 (and compression), leaving TLS 1.0 and above. */
const long flags = SSL_OP_NO_SSLv2 | SSL_OP_NO_SSLv3 | SSL_OP_NO_COMPRESSION;
SSL_CTX_set_options(ctx, flags);
Now, there's an implicit assumption here that's not readily apparent, and that assumption is wrong. The assumption is that there is a "TLS min" and a "TLS max" version.
What happens is that there's an underlying SSL/TLS record layer that carries the protocol payloads. The TLS record layer is independent of the protocol layer, and it has its own version. People interpret the TLS record layer version as the "TLS min" version, and the protocol version as the "TLS max" version. Most clients, servers, sites and services use it that way.
However, the IETF does not specify it that way, and browsers don't use it that way. Because of that, we recently got the TLS Fallback Signaling Cipher Suite Value (SCSV).
The browsers are correct. Here's how it's supposed to be done (a sketch follows the list):
try TLS 1.2, use Fallback Signalling to detect downgrade attacks
if TLS 1.2 fails, then try TLS 1.1, use Fallback Signalling to detect downgrade attacks
if TLS 1.1 fails, then try TLS 1.0, use Fallback Signalling to detect downgrade attacks
Many give up after TLS 1.0 fails. Some user agents may continue with SSLv3.
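A sketch of that dance with OpenSSL (assumes 1.0.1j or later for SSL_MODE_SEND_FALLBACK_SCSV; the connection plumbing is omitted):

#include <cstddef>
#include <openssl/ssl.h>

// Cap the version a little lower on each retry, and signal the downgrade
// with the Fallback SCSV so a MITM-forced fallback is detectable.
void fallback_dance()
{
    const long caps[] = {
        0,                                     // first try: TLS 1.2
        SSL_OP_NO_TLSv1_2,                     // then: TLS 1.1
        SSL_OP_NO_TLSv1_2 | SSL_OP_NO_TLSv1_1  // then: TLS 1.0
    };
    for (std::size_t i = 0; i < sizeof(caps) / sizeof(caps[0]); ++i) {
        SSL_CTX* ctx = SSL_CTX_new(SSLv23_client_method());
        SSL_CTX_set_options(ctx, SSL_OP_NO_SSLv2 | SSL_OP_NO_SSLv3 | caps[i]);
        SSL* ssl = SSL_new(ctx);
        if (i > 0)
            SSL_set_mode(ssl, SSL_MODE_SEND_FALLBACK_SCSV); // signal downgrade
        // ... attach a BIO and call SSL_connect(); break on success ...
        SSL_free(ssl);
        SSL_CTX_free(ctx);
    }
}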
Why has the IETF not moved to give us "TLS min" and "TLS max"? That's still a mystery. I think the effective argument given is "suppose a client wants to use TLS 1.0, 1.2 and 1.3, but not 1.1". I don't know anyone who drops a protocol version like that, so it's just a strawman to me. (This is one of those times when I wonder if law enforcement or a national interest, like the NSA, is tampering with standards.)
The issue was recently brought up again on the TLS Working Group. From TLS: prohibit <1.2 support on 1.3+ servers (but allow clients) (May 21, 2015):
Now might be a good time to add a (3) for TLS 1.3: have a client
specify both the least TLS version they are willing to use, and the
greatest TLS version they desire to use. And MAC it, or derive from it, so it
can't be tampered with or downgraded.
You can still provide the TLS record layer version, and you can
keep it un-MAC'd so it can be tampered with to cause a disclosure or
crash :)
Effectively, that's how the versions in the record layer and client
protocol are being used. It stops those silly dances the browsers and
other user agents perform without the need for TLS Fallback SCSV.
If part of the IETF's mission is to document existing practices, then the IETF is not fulfilling its mission.