Is there any functional difference between BIO_do_connect and BIO_do_handshake?
Both are defined as the same macro:
/* BIO_s_accept() and BIO_s_connect() */
# define BIO_do_connect(b) BIO_do_handshake(b)
# define BIO_do_accept(b) BIO_do_handshake(b)
# endif /* OPENSSL_NO_SOCK */
# define BIO_do_handshake(b) BIO_ctrl(b,BIO_C_DO_STATE_MACHINE,0,NULL)
Most examples of writing a TLS client call BIO_do_handshake after BIO_do_connect to initiate the TLS handshake and open an SSL connection. But from what I have seen analyzing network traffic in Wireshark, BIO_do_connect performs both the TCP handshake and the TLS handshake and opens the SSL connection on its own.
Calling BIO_do_handshake afterwards has no effect.
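For concreteness, here is a minimal sketch of the pattern in question (host name illustrative, error handling mostly elided). With an SSL-connect BIO chain, BIO_do_connect alone drives the whole state machine:
#include <openssl/bio.h>
#include <openssl/ssl.h>

int main(void) {
    SSL_library_init();                 /* required before OpenSSL 1.1 */
    SSL_CTX *ctx = SSL_CTX_new(SSLv23_client_method());
    BIO *bio = BIO_new_ssl_connect(ctx);
    BIO_set_conn_hostname(bio, "example.com:443");

    /* Performs the TCP connect *and* the TLS handshake: */
    if (BIO_do_connect(bio) <= 0)
        return 1;

    /* A no-op at this point: it expands to the identical
       BIO_ctrl(b, BIO_C_DO_STATE_MACHINE, 0, NULL) call, and the
       state machine has already run to completion. */
    BIO_do_handshake(bio);

    BIO_free_all(bio);
    SSL_CTX_free(ctx);
    return 0;
}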
Is there some state that the state machine transitions to after BIO_do_connect that necessitates calling BIO_do_handshake?
Is this some holdover from previous versions of OpenSSL, where calling both BIO_do_connect and BIO_do_handshake was necessary for backward compatibility?
These questions came up after I asked this question on SE.
Related
I have created a client-server application that uses TLS for communication. I have used non-blocking sockets and the generic OpenSSL library functions for establishing the TLS channel and for I/O operations, i.e. I am not using BIOs explicitly anywhere in my application. The application works normally without calling the SSL_do_handshake() method.
I am fairly new to OpenSSL, and recently came across the SSL_do_handshake() method while browsing the documentation. I understand the action performed by SSL_do_handshake(); however, it is not clear under what circumstances I need to call it...
As I understand it, SSL_accept() kicks off the TLS handshake the first time, and calling SSL_read() and SSL_write() internally renegotiates the TLS handshake whenever necessary.
If my statements above are correct, why do we need to call the SSL_do_handshake() method explicitly at all?
SSL_do_handshake needs to be invoked when the TLS handshake should be performed. When using SSL_accept (server) or SSL_connect (client), one does not need to call SSL_do_handshake explicitly, since it is already done internally. Similarly, if SSL_do_handshake is used, there is no need to call SSL_accept or SSL_connect; just set the SSL state accordingly first (i.e. client or server side).
In other words: it is just a slightly different API with a slightly different level of control. One can use it, but one does not need to.
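A brief sketch of that equivalence, assuming ctx is an already-initialized SSL_CTX* and sockfd an already-connected TCP socket:
SSL *ssl = SSL_new(ctx);
SSL_set_fd(ssl, sockfd);

/* Option 1: the dedicated client call */
SSL_connect(ssl);

/* Option 2 (equivalent): set the state explicitly, then drive the
   handshake; on a server, use SSL_set_accept_state() instead. */
SSL_set_connect_state(ssl);
SSL_do_handshake(ssl);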
I'm using OpenSSL in a non-web application to upgrade a socket to TLS (the STARTTLS pattern). Everything works fine, but I would also like to know which TLS version, among the allowed ones, was actually negotiated.
Is there any way to find this information using the OpenSSL API?
Note: with OpenSSL 1.1 and later the information is likely returned by the function SSL_SESSION_get_protocol_version(), but I also need to find this information for previous OpenSSL library versions (the code is in the wild, and performing a major OpenSSL update for mere logging purposes is not an option).
You can use SSL_get_version():
SSL_get_version() returns the name of the protocol used for the connection ssl. It should only be called after the initial handshake has been completed. Prior to that the results returned from this function may be unreliable.
RETURN VALUES
The following strings can be returned:
SSLv2: The connection uses the SSLv2 protocol.
SSLv3: The connection uses the SSLv3 protocol.
TLSv1: The connection uses the TLSv1.0 protocol.
TLSv1.1: The connection uses the TLSv1.1 protocol.
TLSv1.2: The connection uses the TLSv1.2 protocol.
unknown: This indicates an unknown protocol version.
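A minimal usage sketch, assuming ssl is an SSL* whose initial handshake has already completed:
#include <stdio.h>
#include <openssl/ssl.h>

/* ... after SSL_connect()/SSL_accept() has succeeded ... */
const char *version = SSL_get_version(ssl);
printf("Negotiated protocol: %s\n", version);  /* e.g. "TLSv1.2" */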
I'm developing a client and a threaded server in C++, but I'm facing problems with the OpenSSL/TLS integration.
So far, I've followed ThriftServer.cpp and ThriftClient.cpp, but I'm getting random errors which cause my application to crash.
Specifically, the crash happens when a client tries to call the defined Thrift interface on the server (which is already live):
/* server init with PEM public/private certificates
* and trusted certificates, socketFactory->accept(true),
* transport->open() */
myServer->start(); //running on separated thread, calling thriftserver->serve();
/* client init with PEM public/private certificates
* and trusted certificates, socketFactory->accept(true),
* transport->open() */
myClient->beginSession(); //Thrift API call - crash
The crashes are really generic: sometimes it gives me
TConnectedClient died: SSL_accept: error 0
and sometimes
TConnectedClient died: SSL_accept: parse tlsext
both ending with SIGSEGV.
I am running Debian 8.1 x64 with the latest OpenSSL 1.0.2d compiled from source with the enable-tlsext flag, plus Thrift and libevent from GitHub trunk.
I've tried my own self-signed certificates and the test certificates shipped with Thrift: in both cases it doesn't work, although both sets of certificates do work with openssl s_client and openssl s_server.
Any idea about the cause of these errors?
EDIT
I've compiled OpenSSL with thread support (the threads flag on ./configure) and now my application always triggers the error
SSL_shutdown: broken pipe
when the client tries to contact the server. Digging into more detail, openssl s_client triggers a
sslv3 alert handshake failure
using TLSv1.2 as the protocol. I've checked this other Stack Overflow question, but it didn't help, since I'm already using the latest OpenSSL snapshot.
Regarding the SSL_shutdown problem: according to this document, you are supposed to ignore the SIGPIPE signal to avoid server crashes:
SIGPIPE signal
Applications running OpenSSL over network connections may crash if SIGPIPE is not ignored. This happens when they receive a connection reset by remote peer exception, which somehow triggers a SIGPIPE signal. If not handled, this signal would kill the application.
This can be done with:
#include <csignal>
// ...
// Ignore SIGPIPE so a write to a reset connection returns an error
// (EPIPE) instead of killing the process:
signal(SIGPIPE, SIG_IGN);
I have a server running asynchronously and a client running synchronously.
The client and server do a TCP handshake, and then they do the SSL handshake. The client sends a message to the server; the server reads the message (and I can print it out correctly) and then sends back a response with boost::asio::async_write. The response leaves the server, and the read is executed on the client with boost::asio::read(), but the client never returns from the read call. Eventually the request times out and throws an exception (request timed out).
Please note that without SSL everything works correctly, but with SSL the scenario above unfolds. I have verified in Wireshark that both the TCP and SSL handshakes complete correctly. Moreover, when the client sends the first message with boost::write(), the server can decrypt and read it perfectly (boost::async_read), and it sends back the response perfectly (boost::async_write)... but the client for some reason never returns from reading!! (ONLY in the SSL case; plain TCP works correctly.)
The Boost version is 1.48 ... and I am truly puzzled how TCP can work fine while SSL does not (even though, as per the scenario above, it has nothing to do with the encryption aspect). Is there something I have to do differently in Boost than I currently have?
The issue was that the header of one of the messages I was passing in went out of scope. Declaring a header on the stack of a function and then passing it into an async send does NOT guarantee that the memory of that header remains valid until the operation completes. The header has to have a more persistent lifetime (heap allocation, member variable, etc.) in the async case.
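A minimal sketch of that lifetime bug and one way to fix it; Session, Header, Message, makeHeader and socket_ are illustrative names, not taken from the original code:
#include <boost/asio.hpp>
#include <memory>

// BROKEN: 'header' lives on this function's stack and is destroyed as soon
// as sendBroken() returns, while async_write may still be reading from it.
void Session::sendBroken(const Message& msg) {
    Header header = makeHeader(msg);
    boost::asio::async_write(socket_,
        boost::asio::buffer(&header, sizeof header),
        [](const boost::system::error_code&, std::size_t) {});
}   // <- 'header' destroyed here; the pending write reads dead stack memory

// FIXED: keep the buffer alive until the completion handler runs, e.g. by
// capturing a heap allocation in the handler (a member variable works too).
void Session::sendFixed(const Message& msg) {
    auto header = std::make_shared<Header>(makeHeader(msg));
    boost::asio::async_write(socket_,
        boost::asio::buffer(header.get(), sizeof(Header)),
        [header](const boost::system::error_code&, std::size_t) {
            // 'header' is kept alive by this capture until the write completes
        });
}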
I have a framework application which connects to different servers depending on how it is used. For HTTPS connections, OpenSSL is used. My problem is that I need to know whether the server I am connecting to is using SSL or TLS, so that I can create the right SSL context. Currently, if I use the wrong context, trying to establish a connection times out.
For TLS I use:
SSL_CTX *sslContext = SSL_CTX_new(TLSv1_client_method());
For SSL I use:
SSL_CTX *sslContext = SSL_CTX_new(SSLv23_client_method());
So is there a way to know which protocol a server is running before establishing a connection?
Edit: As I understand it now, it should work either way, since SSLv23_client_method() also includes the TLS protocols. So the question is: why does it not? What could be the reason for a timeout with one client method but not the other?
For SSL I use:
SSL_CTX *sslContext = SSL_CTX_new(SSLv23_client_method());
TLS is just the current name for the former SSL protocol, i.e. TLS1.0 is actually SSL3.1 etc. SSLv23_client_method is actually the most compatible way to establish SSL/TLS connections and will use the best protocol available. That means it will also create TLS1.2 connections if the server supports that. See also in the documentation of SSL_CTX_new:
SSLv23_method(void), SSLv23_server_method(void), SSLv23_client_method(void)
A TLS/SSL connection established with these methods may understand the SSLv2, SSLv3, TLSv1, TLSv1.1 and TLSv1.2 protocols.
... a client will send out TLSv1 client hello messages including extensions and will indicate that it also understands TLSv1.1, TLSv1.2 and permits a fallback to SSLv3. A server will support SSLv3, TLSv1, TLSv1.1 and TLSv1.2 protocols. This is the best choice when compatibility is a concern.
Any protocols you don't want (like SSLv3) can be disabled with SSL_OP_NO_SSLv3 etc. via SSL_CTX_set_options.
Currently if I use the wrong context trying to establish a connection times out.
Then either the server or your code is broken. If a server gets a connection with a protocol it does not understand, it should return an "unknown protocol" alert; other servers simply close the connection. A timeout will usually only happen with a broken server or a middlebox like an old F5 Big-IP load balancer.
So is there a way to know which protocol a server is running before establishing a connection?
No. But you should now presume it's "TLS 1.0 and above".
As Steffen pointed out, you use SSLv23_method and context options to realize "TLS 1.0 and above". Here's the full code. You can use it in a client or a server:
/* SSLv23_method negotiates the highest protocol version both sides support. */
const SSL_METHOD* method = SSLv23_method();
if (method == NULL) handleFailure();

SSL_CTX* ctx = SSL_CTX_new(method);
if (ctx == NULL) handleFailure();

/* Disable SSLv2 and SSLv3 so only TLS 1.0 and above can be negotiated,
   and disable TLS compression (CRIME mitigation). */
const long flags = SSL_OP_NO_SSLv2 | SSL_OP_NO_SSLv3 | SSL_OP_NO_COMPRESSION;
SSL_CTX_set_options(ctx, flags);
Now, there's an implicit assumption here that's not readily apparent, and that assumption is wrong: the assumption that there is a "TLS min" and a "TLS max" version.
What happens is that there's an underlying SSL/TLS record layer that carries the protocol payloads. The TLS record layer is independent of the protocol layer, and it has its own version. People interpret the TLS record layer version as the "TLS min" version and the protocol version as the "TLS max" version. Most clients, servers, sites and services use it that way.
However, the IETF does not specify it that way, and browsers don't use it that way. Because of that, we recently got the TLS Fallback Signaling Cipher Suite Value (SCSV).
The browsers are correct. Here's how it's supposed to be done (a code sketch follows the list):
try TLS 1.2, use Fallback Signalling to detect downgrade attacks
if TLS 1.2 fails, then try TLS 1.1, use Fallback Signalling to detect downgrade attacks
if TLS 1.1 fails, then try TLS 1.0, use Fallback Signalling to detect downgrade attacks
Many give up after TLS 1.0 fails. Some user agents may continue with SSLv3.
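A hedged sketch of that fallback dance, where try_handshake() is a hypothetical helper that builds a context from the given method, connects, and reports success. SSL_MODE_SEND_FALLBACK_SCSV (available since OpenSSL 1.0.1j) marks a retry as a fallback so an up-to-date server can detect and reject the downgrade:
#include <stddef.h>
#include <openssl/ssl.h>

/* Hypothetical helper: build an SSL_CTX from 'method', connect, set
   SSL_MODE_SEND_FALLBACK_SCSV on the SSL* when 'fallback' is nonzero,
   and return 1 on success. */
int try_handshake(const SSL_METHOD* method, int fallback);

void connect_with_fallback(void) {
    const SSL_METHOD* methods[] = {
        TLSv1_2_client_method(),   /* try TLS 1.2 first            */
        TLSv1_1_client_method(),   /* then fall back to TLS 1.1    */
        TLSv1_client_method()      /* then TLS 1.0; many stop here */
    };
    for (size_t i = 0; i < sizeof(methods) / sizeof(methods[0]); i++) {
        /* every retry after the first is a fallback and must signal it */
        if (try_handshake(methods[i], i > 0))
            return;
    }
}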
Why has the IETF not moved to give us "TLS min" and "TLS max"? That's still a mystery. I think the effective argument given is "suppose a client wants to use TLS 1.0, 1.2 and 1.3, but not 1.1". I don't know anyone who drops a protocol version like that, so it's just a strawman to me. (This is one of those times when I wonder if law enforcement or a national interest, like the NSA, is tampering with standards.)
The issue was recently brought up again on the TLS Working Group. From TLS: prohibit <1.2 support on 1.3+ servers (but allow clients) (May 21, 2015):
Now might be a good time to add a (3) for TLS 1.3: have a client
specify both the least TLS version they are willing to use, and the
greatest TLS version they desire to use. And MAC it or derive from it
so it can't be tampered with or downgraded.
You can still provide the TLS record layer version, and you can
keep it un-MAC'd so it can be tampered with to cause a disclosure or
crash :)
Effectively, that's how the versions in the record layer and client
protocol are being used. It stops those silly dances the browsers and
other user agents perform without the need for TLS Fallback SCSV.
If part of the IETF's mission is to document existing practices, then the IETF is not fulfilling its mission.