Using SSL when the connection to the server is not retained - C++

I have a simple question - obviously SSL adds overhead and processing time, since there is a bunch of stuff happening behind the scenes when connecting, handshaking, etc.
Once the connection is established and secure you are good to go, BUT what about architectures where you can't (or don't want to) simply retain the connection?
Imagine a client connects to the server, sends a request, gets the response and immediately disconnects.
In this type of architecture SSL might add very significant overhead, since each client request involves connecting and disconnecting.
Please explain what I am missing and what might be the alternative here?
Update from comments to make it clear:
I hope there is a clever solution for this, e.g. the session might be "remembered" so that on the next request not all of the initial work has to happen from scratch. In other words, I am hoping to find a way to optimise SSL usage when connections are not retained.
Thank you All in advance!

RFC5077 (and, prior to that, RFC4507) provides an extension for TLS "tickets" that allows a shortcut handshake between client and server. When initially connecting, the server can return a ticket that can be used for later connections.
It is possible that the client and/or server doesn't support this; in that case you fall back to the full negotiation each time.

SSL has a session feature which means that multiple connections can use the session negotiated by the first of them. The handshake when rejoining is considerably less expensive than the initial handshake (or a re-handshake that creates a new session).
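For illustration, here is a minimal client-side sketch of session reuse with OpenSSL; the same code path covers both classic session IDs and the ticket extension mentioned above. It assumes fd is an already-connected socket, and all error handling is omitted.

#include <openssl/ssl.h>

static SSL_SESSION *cached = nullptr;    // session kept from the first connection

SSL *tls_connect(SSL_CTX *ctx, int fd) {
    SSL *ssl = SSL_new(ctx);
    SSL_set_fd(ssl, fd);
    if (cached != nullptr)
        SSL_set_session(ssl, cached);    // offer the previous session for resumption
    SSL_connect(ssl);                    // abbreviated handshake if the server accepts it
    if (cached == nullptr)
        cached = SSL_get1_session(ssl);  // take a reference for the next connection
    return ssl;
}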

Related

Easy way to "nudge" a server to keep a connection open?

Okay, so a little context:
I have an app running on an embedded system that sends a few different requests over HTTP (using libcurl in C++) at the following intervals:
5 minutes
15 minutes
1 hour
24 hours
My goal: Reduce data consumption (runs over cellular)
We have both client and server side TLS authentication, so the handshake is costly. The idea is that we use persistent connections (at least for the shorter interval files) to avoid doing the handshake every time.
Unfortunately, after much tinkering I've figured out that the server is closing the connection before the intervals pass. Maybe this is something we can extend? I'll have to talk to the server side guys.
I was under the impression that was the reason the "TCP keep-alive" packets existed, but supposedly those "check the connection" not "keep it open" like the name suggests.
My idea is this:
Have my app send a packet (as small as possible) every 2 minutes or so (however long the timeout is) to "nudge" the connection into staying open.
My questions are:
Does that make any sense?
I don't suppose there is an easy way to do this in libcurl is there?
If so, how small could we get the request?
Is there an even easier way to do it? My only issue here is that all the connection stuff "lives" in libcurl.
Thanks!
It would be easier to give a more precise answer if you gave a little more detail on your application architecture. For example, is it a RESTful API? Is the use of HTTP absolutely mandatory? If so, what HTTP server are you using (nginx, apache, ...)? Could you consider websockets as an alternative to plain HTTP?
If you are at liberty to use something other than regular HTTP or HTTPs - and to use something other than libcurl on the client side - you would have more options.
If, on the other hand, you are constrained to both
use HTTP (rather than a raw TCP connection or websockets), and
use libcurl
then I think your task is a good bit more difficult - but maybe still possible.
One of your first challenges is that the typical timeouts for a HTTP connection are quite low (as low as a few seconds for Apache 2). If you can configure the server you can increase this.
I was under the impression that was the reason the "TCP keep-alive" packets existed, but supposedly those "check the connection" not "keep it open" like the name suggests.
Your terminology is ambiguous here. Are you referring to TCP keep-alive packets or persistent HTTP connections? These don't necessarily have anything to do with each other. The former is an optional mechanism in TCP (which is disabled by default). The latter is an application-layer concept which is specific to HTTP - and may be used regardless of whether keep-alive packets are being used at the transport layer.
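If you do decide you want transport-layer keep-alive probes (the former), libcurl exposes them as options on the easy handle. A small sketch, with arbitrary example intervals:

#include <curl/curl.h>

void enable_tcp_keepalive(CURL *curl) {
    curl_easy_setopt(curl, CURLOPT_TCP_KEEPALIVE, 1L);   // turn keep-alive probes on
    curl_easy_setopt(curl, CURLOPT_TCP_KEEPIDLE, 120L);  // seconds of idle time before the first probe
    curl_easy_setopt(curl, CURLOPT_TCP_KEEPINTVL, 60L);  // seconds between subsequent probes
}

Note that these probes only detect (and, in some NAT setups, help preserve) the transport-level connection; they do not stop the HTTP server from closing an idle connection on its side.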
My only issue here is that all the connection stuff "lives" in libcurl.
The problem with using libcurl is that it is first and foremost a transfer library. I don't think it is tailored for long-running, persistent TCP connections. Nonetheless, according to Daniel Stenberg (the author of libcurl), the library will automatically try to reuse existing connections where possible - as long as you re-use the same easy handle.
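As a rough sketch of that reuse (the URL is a placeholder and error checking is omitted):

#include <curl/curl.h>

int main() {
    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURL *curl = curl_easy_init();
    curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/report");  // placeholder endpoint

    for (int i = 0; i < 3; ++i) {
        curl_easy_perform(curl);   // reuses the cached connection (and TLS session) if still open
        // ... wait until the next reporting interval ...
    }

    curl_easy_cleanup(curl);       // the connection is only torn down here
    curl_global_cleanup();
    return 0;
}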
If so, how small could we get the request?
Assuming you are using a 'ping' endpoint on your server - one which accepts no data and returns a 204 (success but no content) response - the overhead, in the application layer, would be the size of the HTTP request headers plus the size of the HTTP response headers. Maybe you could get it down to 200-300 bytes, or thereabouts.
Alternatives to (plain) HTTP
If you are using a RESTful API, this paradigm sort of goes against the idea of a persistent TCP connection - although I cannot think of any reason why it would not work.
You might consider websockets as an alternative, but - again - libcurl is not ideal for this. Although I know very little about websockets, I believe they would offer some advantages.
Compared to plain HTTP, websockets offer:
significantly less overhead than HTTP per message;
the connection is automatically persistent: there is no need to send extra 'keep alive' messages to keep it open;
Compared to a raw TCP connection, the benefits of websockets are that:
you don't have to open a custom port on your server;
it automatically handles the TLS/SSL stuff for you.
(Someone who knows more about websockets is welcome to correct me on some of the above points - particularly regarding TLS/SSL and keep alive messages.)
Alternatives to libcurl
An alternative to libcurl which might be useful here is the Mongoose networking library. It would provide you with a few different alternatives:
use a plain TCP connection (and a custom application layer protocol),
use a TCP connection and handle the HTTP requests yourself manually,
use websockets - which it has very good support for (both as server and client).
Mongoose allows you to enable SSL for all of these options also.

Is a TCP socket secure, or should I always check the user?

I have a C++ app that connects to a nodeJS server through a TCP socket.
On socket 'handshake' the client authenticates itself with a UUID known to the server; the server then associates the account with this recognised UUID.
Once a TCP socket is open, the app sends requests and the server answers through the same socket.
Is it necessary to add a passphrase to every request to be sure the request comes from the client? Or is the socket supposed to be established once and remain in place?
So should I verify that the client is really the client:
Only when opening the socket?
Every time a request is made?
The UUID known to the server is normally called a token, and it can be used for your scenario. However, it should never be sent unencrypted.
What you need to make sure is the following:
An external party (not one of the 2 members of the communication) should not be able to read the token.
The client should not connect to anything but YOUR server.
This is typically accomplished using TLS. (This is what makes HTTPS secure.)
I suggest you do some research into token-based authentication/authorization and TLS/SSL.
One last piece of advice: do not implement the encryption code yourself; use a well-established library that has, as a result, had a lot of testing and has good maintenance.
No, it's not "secure". Your scheme is susceptible to, just off the top of my head, replay attacks, man-in-the-middle attacks, eavesdropping, subsequent impersonation ...
A socket isn't like an actual physical pipe or tunnel. A socket is just an agreement that data marked with a certain source and destination port pair (these are just numbers) are to be treated as belonging to a particular logical data channel. This is determined by handshake and trust. There is no verification.
What you're specifically asking is whether man-in-the-middle attacks exist. Yes, yes they do.
Will requiring that a passphrase be given in each packet fix that problem? No, it won't. It will be trivial to intercept and then replay. You're just giving the man in the middle the passphrase.
This is why people use encryption and other clever security schemes. If you're concerned about message authenticity and integrity, you'll need a basic grounding in communications security principles; providing one is out of the scope of this answer.

Increasing SSL handshaking performance

I've got a short-lived client process that talks to a server over SSL. The process is invoked frequently and only runs for a short time (typically for less than 1 second). This process is intended to be used as part of a shell script used to perform larger tasks and may be invoked pretty frequently.
The SSL handshaking it performs each time it starts up is showing up as a significant performance bottleneck in my tests and I'd like to reduce this if possible.
One thing that comes to mind is taking the session id and storing it somewhere (kind of like a cookie), and then re-using this on the next invocation, however this is making me feel uneasy as I think there would be some security concerns around doing this.
So, I've got a couple of questions,
Is this a bad idea?
Is this even possible using OpenSSL?
Are there any better ways to speed up the SSL handshaking process?
After the handshake, you can get the SSL session information from your connection with SSL_get_session(). You can then use i2d_SSL_SESSION() to serialise it into a form that can be written to disk.
When you next want to connect to the same server, you can load the session information from disk, then unserialise it with d2i_SSL_SESSION() and use SSL_set_session() to set it (prior to SSL_connect()).
The on-disk SSL session should be readable only by the user that the tool runs as, and stale sessions should be overwritten and removed frequently.
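A hedged sketch of that flow (buffer sizes, file handling and error checks are simplified; the path is a placeholder):

#include <openssl/ssl.h>
#include <cstdio>
#include <vector>

void save_session(SSL *ssl, const char *path) {
    SSL_SESSION *sess = SSL_get_session(ssl);
    int len = i2d_SSL_SESSION(sess, nullptr);       // ask for the encoded size first
    std::vector<unsigned char> buf(len);
    unsigned char *p = buf.data();
    i2d_SSL_SESSION(sess, &p);                      // serialise the session to DER
    FILE *f = std::fopen(path, "wb");               // should be readable by this user only
    std::fwrite(buf.data(), 1, buf.size(), f);
    std::fclose(f);
}

void load_session(SSL *ssl, const char *path) {
    FILE *f = std::fopen(path, "rb");
    if (!f) return;                                 // no saved session yet: do a full handshake
    unsigned char buf[16384];
    size_t n = std::fread(buf, 1, sizeof(buf), f);
    std::fclose(f);
    const unsigned char *p = buf;
    SSL_SESSION *sess = d2i_SSL_SESSION(nullptr, &p, static_cast<long>(n));
    if (sess) {
        SSL_set_session(ssl, sess);                 // must be called before SSL_connect()
        SSL_SESSION_free(sess);                     // SSL_set_session keeps its own reference
    }
}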
You should be able to use a session cache securely (which OpenSSL supports), see the documentation on SSL_CTX_set_session_cache_mode, SSL_set_session and SSL_session_reused for more information on how this is achieved.
Could you perhaps use a persistent connection, so the setup is a one-time cost?
You could abstract away the connection logic so your client code still thinks it's doing a connect/process/disconnect cycle.
Interestingly enough I encountered an issue with OpenSSL handshakes just today. The implementation of RAND_poll, on Windows, uses the Windows heap APIs as a source of random entropy.
Unfortunately, due to a "bug fix" in Windows 7 (and Server 2008), the heap enumeration APIs (which are debugging APIs, after all) can now take over a second per call once the heap is full of allocations. This means that both SSL connects and accepts can take anywhere from one second to more than a few minutes.
The ticket contains some good suggestions on how to patch OpenSSL to achieve far, FAR faster handshakes.

How do I get through proxy server environments for non-standard services?

I'm not real hip on exactly what role(s) today's proxy servers can play and I'm learning so go easy on me :-) I have a client/server system I have written using a homegrown protocol and need to enhance the client side to negotiate its way out of a proxy environment.
I have an existing client and server system written in C and C++ for speed, with a small amount of MFC in the client to handle the user interface. I have written both the server and client side of the system on Windows (the people I work for are mainly web developers using Windows for everything - not a choice), sticking to Berkeley Sockets, as it were, via wsock32 for efficiency. The clients connect to the server through a nonstandard port (using port 80 is an option to get out of some environments, even though the protocol that goes over it isn't HTTP). The TCP connection(s) stay open for the duration of the client's participation in real time conferences.
Our customer base is expanding to all kinds of networked environments. I have been able to solve a lot of problems by adding the ability to connect securely over port 443 using secure sockets, which allows the protocol to pass through a lot of environments since the internal packets can't be sniffed. But more and more of our customers are behind a proxy server environment and my direct connections don't make it through. My old school understanding of proxy servers is that they act as a proxy for external HTML content over HTTP, possibly caching popular material locally for faster access, and also allowing the IT staff to blacklist certain destination sites. Customers are complaining that my software doesn't recognize and easily navigate its way through their proxy environments, but I'm finding it difficult to decide what my "best fit" solution should be. My software doesn't tear down the connection after each client request, and on top of that packets can come from either side at any time - basically your typical custom client/server system for a specific niche.
My first reaction is "why can't they just add my server's addresses to their white list", but if there is a programmatic way I can get through without requiring their IT staff to help, it is politically better and arguably a better solution anyway. Plus maybe I'm still not understanding the role and purpose of what proxy servers and environments have grown to be these days.
My first attempt at a solution was to use WinInet, with its various proxy capabilities, to establish a connection over port 80 to my non-standard protocol server (which knows enough to recognize a simple HTTP-looking GET request and answer it with a simple HTTP response page, to get around environments that employ initial packet sniffing (DPI)). I retrieved the actual SOCKET handle behind WinInet's HINTERNET request object and had hoped to use that in place of my software's existing SOCKET connection, hopefully without needing to change much more on the client side. It initially seemed to be my solution, but on further inspection it seems that the OS gets first chance at the received data on this socket: when I get notified of events via the standard select(...) statement on the socket and query the size of the data available via ioctlsocket, the call succeeds but reports 0 bytes available, the reads don't work, and it goes downhill from there.
Can someone tell me of a client-side library (commercial is fine) that will let me get past these proxy server environments with as little user and IT staff help as possible? From what I read it has grown past SOCKS, and I figure someone has to have solved this problem before me.
Thanks for reading my long-winded question,
Ripred
If your software can make an SSL connection on port 443, then you are 99% of the way there.
Typically HTTP proxies are set up to proxy SSL-on-443 (for the purposes of HTTPS). You just need to teach your software to use the HTTP proxy. Check the HTTP RFCs for the full details, but the Cliffs Notes version is:
Connect to the HTTP proxy on the proxy port;
Send to the proxy:
CONNECT your.real.server:443 HTTP/1.1\r\n
Host: your.real.server:443\r\n
User-Agent: YourSoftware/1.234\r\n
\r\n
Then parse the proxy response, which will start with a HTTP status code, followed by HTTP headers, followed by a blank line. You'll then be talking with your destination (if the status code indicated success, anyway), and can start talking SSL.
In many corporate environments you'll have to authenticate with the proxy - this is almost always HTTP Basic Authentication, which is pretty easy - again, see the RFCs.
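To make that concrete, here is a hedged sketch of opening the tunnel from C++ with plain Berkeley-style sockets. proxy_fd is a socket already connected to the proxy, the host names mirror the example above, and real code must parse the full status line and handle Proxy-Authorization if required.

#include <string>
#include <winsock2.h>   // the question's environment; use <sys/socket.h> on POSIX

bool open_tunnel(SOCKET proxy_fd) {
    const std::string req =
        "CONNECT your.real.server:443 HTTP/1.1\r\n"
        "Host: your.real.server:443\r\n"
        "User-Agent: YourSoftware/1.234\r\n"
        "\r\n";
    send(proxy_fd, req.c_str(), static_cast<int>(req.size()), 0);

    char buf[1024];
    int n = recv(proxy_fd, buf, sizeof(buf) - 1, 0);
    if (n <= 0) return false;
    buf[n] = '\0';
    // A 200 status means the tunnel is up; start the SSL handshake on proxy_fd.
    return std::string(buf).find(" 200") != std::string::npos;
}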

TCP/IP and designing a networking application

I'm reading about ways to implement client-server in the most efficient manner, and I bumped into this link:
http://msdn.microsoft.com/en-us/library/ms740550(VS.85).aspx
saying:
"Concurrent connections should not exceed two, except in special purpose applications. Exceeding two concurrent connections results in wasted resources. A good rule is to have up to four short lived connections, or two persistent connections per destination "
I can't quite get what they mean by 2... and what do they mean by persistent?
Let's say I have a server that listens to many clients, which are supposed to do some work with the server. How can I keep just 2 connections open?
What's the best way to implement it anyway? I read a little about completion ports, but couldn't find good code examples, or at least a decent explanation.
Thanks
Did you read the last sentence:
A good rule is to have up to four short lived connections, or two persistent connections per destination.
Hard to say from the article, but by destination I think they mean client. This isn't a very good article.
A persistent connection is where a client connects to the server and then performs all its actions without ever dropping the connection. Even if the client has periods of time when it does not need the server, it maintains its connection to the server ready for when it might need it again.
A short lived connection would be one where the client connects, performs its action and then disconnects. If it needs more help from the server it would re-connect to the server and perform another single action.
As the server implementing the listening end of the connection, you can set options in the listening TCP/IP socket to limit the number of connections that will be held at the socket level and decide how many of those connections you wish to accept - this would allow you to accept 2 persistent connections or 4 short lived connections as required.
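A hedged Winsock sketch of that idea (socket creation, bind() and WSAStartup() are assumed to have happened already; a worker would decrement the counter when its connection ends):

#include <winsock2.h>

void serve(SOCKET listener) {
    listen(listener, 4);              // backlog: pending connections held at the socket level (example value)
    int active = 0;
    while (true) {
        SOCKET client = accept(listener, nullptr, nullptr);
        if (client == INVALID_SOCKET) continue;
        if (active >= 2) {            // already serving two persistent connections
            closesocket(client);      // refuse the extra one
            continue;
        }
        ++active;
        // hand 'client' off to a worker thread or an I/O completion port here
    }
}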
What they mean by "persistent" is a connection that is opened and then held open. It's a pretty common problem to determine whether it's more expensive to tie up resources with an "always on" connection, or to suffer the overhead of opening and closing a connection every time you need it.
It may be worth taking a step back, though.
If you have a server that has to listen for requests from a bunch of clients, you may have a perfect use case for a message-based architecture. If you use tightly-coupled connections like those made with TCP/IP, your clients and servers are going to have to know a lot about each other, and you're going to have to write a lot of low-level connection code.
Under a message-based architecture, your clients could place messages on a queue. The server could then monitor that queue. It could take messages off the queue, perform work, and place the responses back on the queue, where the clients could pick them up.
With such a design, the clients and servers wouldn't have to know anything about each other. As long as they could place properly-formed messages on the queue, and connect to the queue, they could be implemented in totally different languages, and run on different OS's.
Messaging-oriented-middleware like Apache ActiveMQ and Weblogic offer API's you could use from C++ to manage and use queues, and other messaging objects. ActiveMQ is open source, and Weblogic is sold by Oracle (who bought BEA). There are many other great messaging servers out there, so use these as examples, to get you started, if messaging sounds like it's worth exploring.
I think the key words are "per destination". A single TCP connection tries to ramp up to the available bandwidth, so if you allow more connections to the same destination, they have to share that same bandwidth.
This means that each transfer will be slower than it could be, and the server has to allocate more resources for a longer time - data structures for each connection.
Because establishing a TCP connection is "time consuming", it makes sense to allow a second connection to be established while you are serving the first one, so they overlap each other. For short connections the setup time can be about the same as the time spent serving the connection itself (see the poor performance example), so more connections are needed to fill all the bandwidth effectively.
(sorry I cannot post hyperlinks yet)
Here msdn.microsoft.com/en-us/library/ms738559%28VS.85%29.aspx you can see what poor performance looks like.
Here msdn.microsoft.com/en-us/magazine/cc300760.aspx is an example of a threaded server that performs reasonably well.
You can limit the number of open connections by limiting the number of accept() calls. You can limit the number of connections from the same source simply by cancelling a connection when you find out that you already have more than two connections from that location (just count them, as sketched below).
For example, SMTP works in a similar way: when there are too many connections, it returns a 4xx code and closes your connection.
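A hedged sketch of counting connections per source address (Winsock; inet_ntoa is used for brevity, and the limit of two comes from the quoted guideline):

#include <map>
#include <string>
#include <winsock2.h>

std::map<std::string, int> per_source;   // open connections per source IP

bool admit(SOCKET client) {
    sockaddr_in peer{};
    int len = sizeof(peer);
    getpeername(client, reinterpret_cast<sockaddr *>(&peer), &len);
    std::string ip = inet_ntoa(peer.sin_addr);
    if (per_source[ip] >= 2) {           // already two connections from this location
        closesocket(client);             // cancel the extra connection
        return false;
    }
    ++per_source[ip];                    // remember to decrement when the connection closes
    return true;
}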
Also see this question:
What is the best epoll/kqueue/select equivalent on Windows?