I am beginning to learn about information protection, starting with OpenSSL. But I read on Wikipedia that SSL has security problems that were never solved and that everyone should use TLS instead. Is that true? Does it mean SSL is now obsolete? (Because another means of protection appeared instead of SSL being fixed.)
TLS is just the newer name for the protocol formerly named SSL. If you look at the protocol level you see that TLS 1.0 is practically SSL 3.1, TLS 1.1 is SSL 3.2, etc. Versions up to and including SSL 3.0 are considered broken and should not be used any longer.
Because of this naming, in practice "SSL" and "TLS" are often used to mean the same protocol family, and you will also often find "SSL/TLS" used to refer to it. Usually they refer to a single version only when a version number is given. Libraries like OpenSSL, PolarSSL, MatrixSSL etc. implement the whole protocol family, i.e. both SSL and TLS.
To add to this naming confusion, "SSL" is often used together with protocols like SMTP (sending mail) or IMAP (accessing mail) to mean a connection that is secure from the start, while "TLS" in this context means a connection that is upgraded to secure only after issuing a specific STARTTLS command. It is clearer to speak of "implicit" and "explicit" SSL/TLS instead.
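You can see both modes with OpenSSL's s_client tool (the host name here is an illustrative placeholder; 465 and 587 are the conventional SMTP submission ports):

# implicit SSL/TLS: the connection is encrypted from the first byte
openssl s_client -connect mail.example.com:465

# explicit SSL/TLS: plain SMTP first, then upgraded via STARTTLS
openssl s_client -starttls smtp -connect mail.example.com:587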
SSL as a protocol is insecure, and TLS should be used instead. However, people often use the name "SSL" to refer to both SSL and TLS. Furthermore, SSL is often kept in systems as a fallback measure, so that connections get at least some security if TLS isn't available. The merits of doing this are questionable: some people feel it allows developers to be lazy, or to never even realize they're using an insecure method.
Related
We have an existing legacy C++ app which uses TCP via Berkeley C sockets.
We need to continue using Berkeley sockets in our existing environment but for a second, new environment we need to use SSL/TLS.
I've programmed with OpenSSL before, but this application's code is... significant, and not easy to change.
Is there a way to achieve SSL without making code changes (networking, proxies, etc.)?
The answer depends on your needs. To send HTTP requests, you can implement a proxy server that forwards all incoming requests to their destination while adding SSL/TLS on the way. For this method you will most likely not find an out-of-the-box solution; I would recommend building it with Poco (ServerApplication, HTTPSClientSession).
For protocols that keep a persistent connection, it is more logical to set up a secure tunnel that wraps outgoing packets in SSL/TLS and unwraps incoming ones. For this method it is quite possible to find an out-of-the-box solution: try stunnel or ghostunnel, as often recommended. This option can also be implemented with the same Poco library, or with low-level POSIX sockets and OpenSSL for the encryption (see the sketch below).
Both solutions can be set up on a secondary node, by pointing the source server at the new node's address, or on the local machine, by sending packets to the loopback address.
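A minimal client-side stunnel configuration along those lines might look like this (the service name, addresses and ports are illustrative assumptions; client = yes makes stunnel wrap outgoing connections in TLS):

[legacy-app]
client = yes
; the legacy app connects here in plain TCP
accept = 127.0.0.1:9000
; stunnel forwards the traffic, wrapped in TLS, to the real server
connect = remote.example.com:9443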
I would like to use Cap'n Proto RPC to communicate with a server in the cloud from a desktop box in an office. Cap'n Proto doesn't provide secure network connections through a firewall. I would prefer c++ since I have other components which require this.
I see some people have been looking at nanomsg and other transports which link directly into the application, but I was wondering whether stunnel or something similar might be satisfactory.
The stunnel application, as most know, can provide HTTPS encapsulation of TCP/IP traffic under certain conditions, as per the FAQ:
The protocol is TCP, not UDP.
The protocol doesn't use multiple connections, like ftp.
The protocol doesn't depend on Out Of Band (OOB) data.
The remote site can't use an application-specific protocol, like ssltelnet, where SSL is a negotiated option, except for those protocols already supported by the protocol argument to stunnel.
It seems like Cap'n Proto RPC might satisfy these conditions. I don't think the customer will object to installing stunnel in this case. Has anyone tried this or something similar? If so, your experiences would be appreciated. If someone knows of a faster/lighter alternative it would also be helpful.
thanks!
Yes, Cap'n Proto's two-party protocol (the only one provided currently) should work great with stunnel, since it's a simple TCP-based transport. You will need to run both a stunnel client and a server, of course, but otherwise this should be straightforward to set up. You could also use SSH port forwarding or a VPN to achieve a similar result.
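For example, an SSH tunnel forwarding a local port to the Cap'n Proto server would look like this (the user, host and port are illustrative placeholders; -N skips running a remote command):

ssh -N -L 12345:localhost:12345 user@cloud.example.com
# then point the Cap'n Proto client at localhost:12345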
(Note that stunnel itself has nothing to do with HTTPS per se, but is often used to implement HTTPS because HTTP is also a simple TCP-based protocol and HTTPS is the same protocol layered over TLS. In the Cap'n Proto case, Cap'n Proto replaces HTTP. So you're creating Cap'nProto-S, I guess.)
Another option is to implement the kj::AsyncIoStream abstract interface directly based on a TLS library like OpenSSL, GnuTLS, etc. Cap'n Proto's RPC layer will allow you to provide an arbitrary implementation of kj::AsyncIoStream as its transport (via interfaces in capnp/rpc-twoparty.h). Unfortunately, many TLS libraries have pretty ugly interfaces and so this may be hard to get right. But if you do write something, please contribute it back to the project as this is something I'd like to have in the base library.
Eventually we plan to add an official crypto transport to Cap'n Proto designed to directly support multi-party introductions (something Cap'n Proto actually doesn't do yet, but which I expect will be a killer feature when it's ready). I expect this support will appear some time in 2016, but can't make any promises.
I have a framework application which connects to different servers depending on how it is used. For https connections OpenSSL is used. My problem is that I need to know whether the server I am connecting to uses SSL or TLS, so I can create the right SSL context. Currently, if I use the wrong context, trying to establish a connection times out.
For TLS I use:
SSL_CTX *sslContext = SSL_CTX_new(TLSv1_client_method());
For SSL I use:
SSL_CTX *sslContext = SSL_CTX_new(SSLv23_client_method());
So is there a way to know which protocol a server is running before establishing a connection?
Edit: As I understand it now, it should work either way, since SSLv23_client_method() also includes the TLS protocols. So the question is: why doesn't it? What could cause a timeout with one client method but not the other?
For SSL I use:
SSL_CTX *sslContext = SSL_CTX_new(SSLv23_client_method());
TLS is just the current name for the former SSL protocol, i.e. TLS 1.0 is actually SSL 3.1 etc. SSLv23_client_method is actually the most compatible way to establish SSL/TLS connections and will use the best protocol available, which means it will also create TLS 1.2 connections if the server supports that. See also the documentation of SSL_CTX_new:
SSLv23_method(void), SSLv23_server_method(void), SSLv23_client_method(void)
A TLS/SSL connection established with these methods may understand the SSLv2, SSLv3, TLSv1, TLSv1.1 and TLSv1.2 protocols.
... a client will send out TLSv1 client hello messages including extensions and will indicate that it also understands TLSv1.1, TLSv1.2 and permits a fallback to SSLv3. A server will support SSLv3, TLSv1, TLSv1.1 and TLSv1.2 protocols. This is the best choice when compatibility is a concern.
Any protocols you don't want (like SSL 3.0) can be disabled using SSL_CTX_set_options with flags such as SSL_OP_NO_SSLv3.
Currently if I use the wrong context trying to establish a connection times out.
Then either the server or your code is broken. If a server receives a connection with a protocol version it does not understand, it should return an "unknown protocol" alert; other servers simply close the connection. A timeout usually happens only with a broken server or a middlebox like an old F5 BIG-IP load balancer.
So is there a way to know which protocol a server is running before establishing a connection?
No. But these days you should presume it's "TLS 1.0 and above".
As Steffen pointed out, you use SSLv23_method and context options to realize "TLS 1.0 and above". Here's the full code. You can use it in a client or a server:
/* negotiate the best protocol version both sides support */
const SSL_METHOD* method = SSLv23_method();
if(method == NULL) handleFailure();

SSL_CTX* ctx = SSL_CTX_new(method);
if(ctx == NULL) handleFailure();

/* forbid SSLv2 and SSLv3 so only TLS 1.0 and above remain;
   also disable TLS compression (CRIME attack) */
const long flags = SSL_OP_NO_SSLv2 | SSL_OP_NO_SSLv3 | SSL_OP_NO_COMPRESSION;
SSL_CTX_set_options(ctx, flags);
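From there a typical client handshake over an already-connected TCP socket looks like this (a sketch; sock and the error handling are assumed):

SSL* ssl = SSL_new(ctx);
if(ssl == NULL) handleFailure();

SSL_set_fd(ssl, sock);      /* attach the connected TCP socket */
if(SSL_connect(ssl) != 1)   /* perform the TLS handshake */
    handleFailure();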
Now, there's an implicit assumption here that's not readily apparent, and that assumption is wrong: the assumption that there is a "TLS min" and a "TLS max" version.
What happens is there's an underlying SSL/TLS record layer that carries the protocol payloads. The record layer is independent of the protocol layer and has its own version. People interpret the record layer version as the "TLS min" version and the protocol version as the "TLS max" version. Most clients, servers, sites and services use it that way.
However, the IETF does not specify it that way, and browsers don't use it that way. Because of that, we recently got the TLS Fallback Signaling Cipher Suite Value (SCSV).
The browsers are correct. Here's how it's supposed to be done:
try TLS 1.2, using fallback signaling to detect downgrade attacks
if TLS 1.2 fails, try TLS 1.1, again with fallback signaling
if TLS 1.1 fails, try TLS 1.0, again with fallback signaling
Many give up after TLS 1.0 fails. Some user agents may continue with SSLv3.
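A sketch of that dance with OpenSSL might look as follows (reconnect() is a hypothetical helper that returns a fresh TCP socket per attempt; SSL_MODE_SEND_FALLBACK_SCSV requires OpenSSL 1.0.1j or later):

#include <openssl/ssl.h>

/* version caps to try, newest first */
const long caps[] = {
    0,                                      /* allow up to TLS 1.2 */
    SSL_OP_NO_TLSv1_2,                      /* cap at TLS 1.1 */
    SSL_OP_NO_TLSv1_2 | SSL_OP_NO_TLSv1_1   /* cap at TLS 1.0 */
};

SSL* ssl = NULL;
for(int i = 0; i < 3; i++) {
    SSL_CTX* ctx = SSL_CTX_new(SSLv23_client_method());
    SSL_CTX_set_options(ctx, SSL_OP_NO_SSLv2 | SSL_OP_NO_SSLv3 | caps[i]);

    ssl = SSL_new(ctx);
    if(i > 0)   /* on a retry, signal the downgrade so the server can detect tampering */
        SSL_set_mode(ssl, SSL_MODE_SEND_FALLBACK_SCSV);

    SSL_set_fd(ssl, reconnect());   /* hypothetical: new TCP connection per attempt */
    if(SSL_connect(ssl) == 1)
        break;                      /* handshake succeeded at this version */

    SSL_free(ssl); ssl = NULL;
    SSL_CTX_free(ctx);
}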
Why has the IETF not moved to give us "TLS min" and "TLS max"? That's still a mystery. I think the effective argument given is "suppose a client wants to use TLS 1.0, 1.2 and 1.3, but not 1.1". I don't know anyone who drops a protocol version like that, so it's just a strawman to me. (This is one of those times when I wonder whether law enforcement or a national interest, like the NSA, is tampering with standards.)
The issue was recently brought up again on the TLS Working Group. From TLS: prohibit <1.2 support on 1.3+ servers (but allow clients) (May 21, 2015):
Now might be a good time to add a (3) for TLS 1.3: have a client specify both the least TLS version they are willing to use, and the greatest TLS version they desire to use. And MAC it or derive from it so it can't be tampered with or downgraded.
You can still provide the TLS record layer version, and you can keep it un-MAC'd so it can be tampered with to cause a disclosure or crash :)
Effectively, that's how the versions in the record layer and client protocol are being used. It stops those silly dances the browsers and other user agents perform, without the need for TLS Fallback SCSV.
If part of the IETF's mission is to document existing practices, then the IETF is not fulfilling its mission.
I would like to implement the server side of licence management software. I use C++ on Linux.
When the software starts, it must connect to a server that checks privileges and allows/disallows running of some features.
My question is about the implementation of the communication between client and server across the internet:
The server will have a static IP on the internet, so is it enough to use a simple TCP/IP client socket that connects to a TCP/IP server socket (providing IP/port)?
I am familiar with socket communication, but less so with communication across the internet, so my question is whether this is the right approach or whether I need a different mechanism, like an HTTP client/server or something else.
Regards
AFG
Here are some benefits to using HTTP as a transport:
easier to get right, more likely to work in production: Yes, you will probably have to add extra dependencies to handle HTTP (client and server side), but that is still preferable to yet another homegrown protocol, which you would have to implement, maintain, keep backwards compatible, and make work across platforms (e.g. endianness). In terms of implementation ease, an HTTP-based solution should be far easier in the common case (especially if you build a REST-style service API for license checking).
More help available: HTTP as the foundation of the web is one of the most widely used technologies today. Most (all?) problems you will run into are probably publicly documented with solutions/workarounds.
Encryption 'for free': Encryption is already a solved problem (HTTPS/SSL), both for the transport and for what you have to implement on your end; it's just a matter of setting it up.
Server Authentication 'for free': HTTPS/SSL doesn't only solve encryption but also server authentication, so that the client can verify whether it's actually talking to the right service.
Guaranteed to work on the internet: HTTP/HTTPS traffic is common on the internet, so you won't run into routing problems or firewalls which are hard to traverse. This might be a problem when using your own protocol.
Flexibility out of the box: You also put fewer constraints on clients communicating with your server, as it's very simple to build a client in many different environments, as long as they can talk HTTP (and maybe SSL) and they know how to issue the request to your server (i.e. what your service API looks like).
Easy to integrate with administrative webapp: If you want to allow users to manage their accounts associated with licenses in some way (update contact info etc.), then you might even combine the license server with that application. You can also build the license administration UI part into the same app if that's useful.
And as a last remark (this puts additional constraints on your client-side HTTPS/SSL implementation): you can even use client-side SSL certificates, which essentially allow authenticating the client to the server. Depending on how you use them, client-side certificates are harder to manage, but they can e.g. expire or be revoked, so to some extent they actually are licenses (to connect to the server).
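As a rough sketch of what the client side of such an HTTPS license check could look like with libcurl (the URL, key and status-code convention are illustrative assumptions, not a prescribed API):

#include <curl/curl.h>

/* hypothetical endpoint of the license service */
static const char* kLicenseUrl = "https://license.example.com/api/check?key=ABC123";

bool checkLicense() {
    CURL* curl = curl_easy_init();
    if(!curl) return false;

    curl_easy_setopt(curl, CURLOPT_URL, kLicenseUrl);
    /* verify the server certificate: the "server authentication for free" part */
    curl_easy_setopt(curl, CURLOPT_SSL_VERIFYPEER, 1L);
    curl_easy_setopt(curl, CURLOPT_SSL_VERIFYHOST, 2L);

    CURLcode res = curl_easy_perform(curl);
    long status = 0;
    if(res == CURLE_OK)
        curl_easy_getinfo(curl, CURLINFO_RESPONSE_CODE, &status);

    curl_easy_cleanup(curl);
    return res == CURLE_OK && status == 200;  /* assumed: 200 means license valid */
}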
HTTP is not a different mechanism. It is a protocol operated over TCP/IP connections.
The internet uses IP transport exclusively. On top of it you can use UDP, TCP or SCTP (well, UDP is not much of a session layer). TCP is the usual choice.
Sockets are an operating system interface. They are the only interface to the network on most systems, but some systems have a different interface. They have nothing to do with the transport itself.
IP addresses are in practice tied to network topology, so I strongly discourage hardcoding the server's IP address into the client. If you have to change network providers for any reason, you won't get the same IP address. Use DNS; it's just one gethostbyname call.
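For illustration, a minimal resolve-then-connect with that classic call (the hostname and port are placeholders; modern code would prefer getaddrinfo):

#include <netdb.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

int connectToServer(void) {
    struct hostent* he = gethostbyname("license.example.com");  /* DNS lookup */
    if(he == NULL) return -1;

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(443);                     /* placeholder port */
    memcpy(&addr.sin_addr, he->h_addr_list[0], he->h_length);

    int sock = socket(AF_INET, SOCK_STREAM, 0);
    if(sock < 0) return -1;
    if(connect(sock, (struct sockaddr*)&addr, sizeof(addr)) < 0) {
        close(sock);
        return -1;
    }
    return sock;
}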
And don't forget to authenticate the server; even with a hardcoded IP it's too easy to redirect the traffic.
I am using iptables string match + the libnetfilter_queue library to monitor HTTP requests and responses. But later I realized that string matching fails in the case of the https protocol, as iptables captures packets at layer 3.
Now I am reimplementing it using libpcap. So, is it possible to see what is in the header/packet using libpcap in the case of the https protocol?
HTTPS uses the SSL protocol, which encrypts the application-layer payload (the highest layer in the OSI model). As such, the answer is no: libpcap will not help you see the contents. If it were possible, it would pretty much defeat the purpose of using SSL in the first place.
No. If it were possible, HTTPS wouldn't be secure, which is its only reason for existence.
If you're watching the traffic between your machine and another machine, you may be able to decrypt the SSL traffic (after all, the browser on your machine can do so), but it's not easy. Wireshark can do it if it has the necessary key information, but the code to do that is somewhat complicated (I won't be able to help you figure it out, so you're on your own there), and it might not always be able to do the decryption.
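One concrete route to that key information: browsers such as Firefox and Chrome export per-session TLS secrets when the SSLKEYLOGFILE environment variable is set, and Wireshark can read that file (the path below is a placeholder):

export SSLKEYLOGFILE=/tmp/tlskeys.log
firefox &
# in Wireshark: Preferences -> Protocols -> TLS -> "(Pre)-Master-Secret log filename"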
If you're watching the traffic between two other machines, you'd need to get the keys from those machines (if you could do it without those keys, then, as others have noted, SSL wouldn't be very useful, as its whole purpose is to hide traffic from other people).