HTTP/2 over AWS ELB in TCP mode

Does anyone have experience running an HTTP/2 server behind an AWS ELB in TCP mode?
As far as I know, AWS ELB does not support HTTP/2 yet; however, in TCP mode it should pass requests through to the backend server transparently.
Can anyone share their experience?
Thank you.

Yes, a TCP listener on port 443 works to bypass ELB's HTTPS handling, but there's no way to do session stickiness, since the ELB can't read cookies on the wire.
You may also consider using h2c (HTTP/2 over cleartext).
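With the TCP-listener approach the backend has to terminate TLS and negotiate HTTP/2 itself; note that browsers only speak HTTP/2 over TLS, so h2c is mainly useful on the hop between a proxy and the backend. A minimal nginx sketch, assuming nginx runs on the backend instances (server name, certificate paths, and upstream port are illustrative):

server {
    listen 443 ssl http2;                    # the TCP-mode ELB just forwards the encrypted bytes
    server_name example.com;
    ssl_certificate     /etc/nginx/tls/fullchain.pem;
    ssl_certificate_key /etc/nginx/tls/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:8080;    # your application
    }
}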

Supposedly the new Application Load Balancer supports HTTP/2. I'm a little unclear how useful that is, however, if CloudFront doesn't support it yet:
https://aws.amazon.com/blogs/aws/new-aws-application-load-balancer/

ELB has no way of pipelining connections, so you cannot trick it into doing HTTP/2. Maybe with the new version coming out, but I'm not sure.

Related

Does an AWS Application Load Balancer always terminate HTTPS connections (or is it configurable)?

We use an Application Load Balancer behind which we have an nginx server. Our client has asked us to implement mTLS but I don't think that works if the ALB terminates TLS connections.
I know that our ALB currently swaps out the self-signed certificate of our nginx server and replaces it with its own, which is a pretty good indication that it terminates TLS connections.
If we can't change that we'd have to switch to an NLB instead.
Can an ALB be configured to work without terminating TLS connections in AWS, or is that impossible?
You are correct. ALB unfortunately does not support mTLS at this time (I really wish AWS would add that feature). And since the ALB needs to terminate the SSL connection in order to do everything it does (path-based forwarding, etc.), there is no way for them to add TCP pass-through to the ALB. You will need to switch to an NLB and handle all the SSL certificate work on your server.
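If you do move to an NLB with plain TCP listeners, nginx can terminate TLS and verify client certificates itself. A rough sketch of the relevant nginx directives (certificate paths and the client CA bundle are placeholders):

server {
    listen 443 ssl;
    server_name example.com;
    ssl_certificate        /etc/nginx/tls/server.crt;
    ssl_certificate_key    /etc/nginx/tls/server.key;
    ssl_client_certificate /etc/nginx/tls/client-ca.pem;   # CA that issued your clients' certificates
    ssl_verify_client      on;                             # reject connections without a valid client certificate

    location / {
        proxy_pass http://127.0.0.1:8080;
    }
}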

How to use HTTP2 behind AWS Network Load Balancer terminating SSL

I have the following setup:
client --> AWS NLB (terminates SSL) --> nginx --> webserver
How can I get nginx to serve content over HTTP2? Enabling it on the nginx server config just causes the browser to download a file when accessing a page.
Browsers use ALPN as part of the TLS negotiation to decide to use the HTTP/2 protocol.
Since your TLS termination happens at the NLB, the NLB would have to announce HTTP/2 support and then pass the unencrypted HTTP/2 data on to nginx.
I can't see anything to suggest that NLB supports setting ALPN, so I'm not sure this is possible. You will need to ask AWS whether it's supported, as there is nothing about it in their documentation, but that in itself probably gives you the answer you don't want.
Not sure why it’s downloading a file. Does the same thing happen if you connect directly to Nginx?
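For what it's worth, that file-download symptom is a common sign that nginx is answering with raw HTTP/2 frames on a listener the browser reaches over plain HTTP/1.1: enabling http2 on a non-TLS listener makes (older) nginx speak h2c, which browsers never use, so the binary framing is offered as a download. A hedged sketch of the difference (names illustrative):

# Likely the problematic form behind an SSL-terminating NLB: h2c on a cleartext port
server {
    listen 80 http2;
}

# What browsers expect: HTTP/2 negotiated via ALPN on a TLS listener nginx terminates itself
server {
    listen 443 ssl http2;
    ssl_certificate     /etc/nginx/tls/fullchain.pem;
    ssl_certificate_key /etc/nginx/tls/privkey.pem;
}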
The solution I ultimately arrived at was this:
client --> AWS NLB --> AWS ALB (terminates SSL) --> nginx --> webserver
The trick was to use TCP on port 443 on the NLB at creation time! The web UI does not let you add a TCP listener on 443 afterward; it forces you to pick the TLS option on 443 and select a certificate for TLS termination. The only reason I'm using an NLB is that it supports static IP association. TCP passthrough to the ALB works for my use case.
Since the ALB terminates TLS and also supports HTTP/2 this setup works.

Deploying an HTTP/2 web-server with Kubernetes on AWS

I have a Go server that is currently running with Kubernetes on AWS. The website sits behind Route 53 and an ELB that handles SSL termination.
Now I want to support HTTP/2 in my web server in order to push resources to clients, and I saw that HTTP/2 requires the web server to use HTTPS. I have a few questions about that.
HTTP/2 requires HTTPS: in my case the HTTPS logic lives in the ELB, which handles SSL termination for me, and my application receives the decrypted data as a plain HTTP request. Do I need to remove the ELB in order to enable HTTP/2 in my web server?
Is there any way to keep the ELB and still enable HTTP/2 in my web server?
In local development I use openssl to generate a certificate. If I deploy the web server, I need to get the CA certificate from AWS, store it somewhere like the Kubernetes certificate manager, and inject it into my web server at initialization. What is the recommended way to do this?
I feel like I'm missing something, so I'll appreciate any help. Thanks.
The new ELB supports HTTP/2 (https://aws.amazon.com/blogs/aws/new-aws-application-load-balancer/) but not the Push attribute (https://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-listeners.html#listener-configuration): “You can't use the server-push feature of HTTP/2”
If you want to use Push, you can use the ELB as a layer 4 TCP load balancer and enable HTTP/2 at your web server. With HAProxy it is also possible to still offload SSL/TLS in this setup (HTTP/2 behind reverse proxy), but I'm not sure whether something similar is possible with ELB (probably not). This works because, while all the major browsers require HTTPS for HTTP/2, it is not a requirement of the protocol itself, so the load balancer -> server hop can speak HTTP/2 without HTTPS (called h2c).
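Since the backend in this question is a Go server: if the ELB is switched to a plain TCP listener and the Go server terminates TLS itself, Go's net/http enables HTTP/2 automatically on TLS listeners, and server push is exposed through the http.Pusher interface. A minimal sketch under those assumptions (certificate files, paths, and the pushed asset are illustrative):

package main

import (
	"log"
	"net/http"
)

func main() {
	mux := http.NewServeMux()
	mux.Handle("/static/", http.StripPrefix("/static/", http.FileServer(http.Dir("static"))))
	mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		// Push is only available on HTTP/2 connections; over HTTP/1.1 the
		// ResponseWriter does not implement http.Pusher, so check first.
		if pusher, ok := w.(http.Pusher); ok {
			if err := pusher.Push("/static/app.css", nil); err != nil {
				log.Printf("push failed: %v", err)
			}
		}
		w.Write([]byte(`<html><head><link rel="stylesheet" href="/static/app.css"></head><body>hello</body></html>`))
	})

	// ListenAndServeTLS negotiates HTTP/2 via ALPN automatically.
	log.Fatal(http.ListenAndServeTLS(":443", "server.crt", "server.key", mux))
}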
However, I would say that HTTP/2 Push is very complicated to get right; read this excellent post by Jake Archibald of Google on it: https://jakearchibald.com/2017/h2-push-tougher-than-i-thought/. It has generally been found to help in a few cases, make no difference in most, and even degrade performance in others. Ultimately it's a bit of a letdown among HTTP/2 features, though personally I don't think it has been explored enough, so there may be some positives to come out of it yet.
So if you don't want Push, is there still a point in upgrading to HTTP/2 on the front end? Yes, in my opinion, as detailed in my answer here: HTTP2 with node.js behind nginx proxy. That also shows that there is no real need to have HTTP/2 on the backend hop from the LB to the web server, meaning you could leave the ELB as an HTTPS-offloading load balancer.
It should be noted that there are some use cases where HTTP/2 is slower:
Under heavy packet loss (i.e. a very bad Internet connection). Here the single TCP connection used by HTTP/2 and its TCP head-of-line blocking mean the connection suffers more than six individual HTTP/1 connections would. QUIC, an even newer protocol than HTTP/2 (so new it's not finalized yet, and not really available except on Google servers), addresses this.
For large packets, due to AWS's specific implementation. Interesting post on that here: https://medium.com/@ptforsberg/on-the-aws-application-load-balancer-http-2-support-fad4bc67b21a. This is really only an issue for truly large downloads, most likely from APIs, and shouldn't be an issue for most websites (and if it is, you should optimise your website, because HTTP/2 won't be able to help much anyway!). It could easily be fixed by increasing the HTTP/2 window size setting, but it looks like ELB does not allow you to set this.
There is no benefit to deploying HTTP2 on an AWS load balancer if your backend is not HTTP2 also. Technically HTTP2 does not require HTTPS, but nobody implements HTTP2 for HTTP. HTTP2 is a protocol optimization (simple viewpoint) that removes round trips in the SSL negotiation, improves pipelining, etc. If the load balancer is communicating with your backend via HTTP, there will not be any improvement. The load balancer will see a small decrease in load due to reduced round trips during HTTPS setup.
I recommend that you configure your backend services to only use HTTPS (redirect clients to HTTPS) and use an SSL certificate. Then configure HTTP2, which is not easy by the way. You can use Let's Encrypt for SSL which works very well. You can also use OpenSSL self-signed certificates (which I don't recommend). You cannot use AWS services to create SSL certificates for your backend services, only for AWS managed services (CloudFront, ALB, etc.).
You can also setup the load balancer with Layer 4 (TCP) listeners. This is what I do when I setup HTTP2 on my backend servers. Now the entire path from client to backend is using HTTP2 without double SSL encryption / decryption layers.
One of the nice features of load balancers is "SSL offloading": you enable SSL on the load balancer and only enable HTTP on your backend web servers. That works against end-to-end HTTP/2. Therefore, think through what you really want to accomplish and then design your services to meet those objectives.
Another point to consider. Since you are looking into HTTP2, at the same time remove support in your services for the older TLS versions and unsafe encryption and hashing algorithms. Dropping TLS 1.0 should be mandatory today and I recommend dropping TLS 1.1 also. Unless you really need to support ancient browsers or custom low-end hardware, TLS 1.2 should be the standard today. Your logfiles can tell you if clients are connecting via older protocols.
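In Go (the web server in question), dropping the old TLS versions while keeping automatic HTTP/2 is a small tls.Config change; a sketch with placeholder certificate files:

package main

import (
	"crypto/tls"
	"log"
	"net/http"
)

func main() {
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("ok\n"))
	})

	srv := &http.Server{
		Addr: ":443",
		// Refuse TLS 1.0/1.1 clients; HTTP/2 over TLS requires TLS 1.2+ anyway.
		TLSConfig: &tls.Config{MinVersion: tls.VersionTLS12},
	}
	log.Fatal(srv.ListenAndServeTLS("server.crt", "server.key"))
}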

How to setup an external kubernetes service in AWS using https

I would like to setup a public kubernetes service in AWS that listens on https.
I know that kubernetes services currently only support TCP and UDP, but is there a way to make this work with the current version of kubernetes and AWS ELBs?
I found this. http://blog.kubernetes.io/2015/07/strong-simple-ssl-for-kubernetes.html
Is that the best way at the moment?
HTTPS usually runs over TCP, so you can simply run your service with Type=NodePort/LoadBalancer and manage the certs in the service. This example might help [1]; nginx is listening on :443 through a NodePort for ingress traffic. See [2] for a better explanation of the example.
[1] https://github.com/kubernetes/kubernetes/blob/release-1.0/examples/https-nginx/nginx-app.yaml#L8
[2] http://kubernetes.io/v1.0/docs/user-guide/connecting-applications.html
Since 1.3, you can use annotations along with a type=LoadBalancer service:
https://github.com/kubernetes/kubernetes/issues/24978
service.beta.kubernetes.io/aws-load-balancer-ssl-cert=arn:aws:acm:us-east-1:123456789012:certificate/12345678-1234-1234-1234-123456789012
service.beta.kubernetes.io/aws-load-balancer-ssl-ports=* (or e.g. https)
The first annotation is the only one you need if all you want is to support HTTPS, on any number of ports. If you also want to support HTTP on one or more additional ports, you need to use the second annotation to specify explicitly which ports will use encryption (the others will use plain HTTP).
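Put together, a Service using those annotations could look roughly like this (the name, selector, ports, and certificate ARN are placeholders; the ARN format matches the example above):

apiVersion: v1
kind: Service
metadata:
  name: my-https-service
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:us-east-1:123456789012:certificate/12345678-1234-1234-1234-123456789012
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
  - name: https
    port: 443
    targetPort: 8080   # the pods speak plain HTTP; the ELB terminates TLS
  - name: http
    port: 80
    targetPort: 8080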
In my case I set up an ELB in AWS and installed the SSL cert on it, choosing HTTPS and HTTP as the connection types in the ELB, and that worked great. I set up the ELB with kubectl expose.

ELB for Websockets SSL

Does AWS support websockets with SSL?
Can AWS ELB be used for websockets over SSL?
What happens when an EC2 instance (machine) is added to or removed from this ELB? Especially removed: what if a machine goes down? Are the existing sockets routed to some other machine, or are the connections reset?
Can ELB be a bottleneck at any point?
Any other alternatives? Let me know.
This link might prove partially helpful for you - it would appear that you can do web sockets over SSL, but currently I'm struggling to implement it.
StackOverflow - Websocket with Tomcat 7 on AWS Elastic Beanstalk
Currently AWS ELB doesn't support websocket balancing. There is a trick to do it via SSL, but it has some limitations and depends on your app logic. If the websocket connection is used only for server-client communication, it will work. But if you have more advanced logic where clients must communicate with each other through the server, then this solution won't work; for example, one client establishes a connection for a chat room, and then other clients connect to that chat room and communicate with each other.
Then the only possible way is to use HAProxy: http://blog.haproxy.com/2012/11/07/websockets-load-balancing-with-haproxy/
But the example there only shows how to configure HAProxy with two fixed servers. So if you do not use an Amazon Auto Scaling group, the solution is fine. But if you need an ASG, adding and removing instances in the HAProxy config is another challenge.
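A rough sketch of an HAProxy configuration along those lines (cert path, addresses, and timeouts are illustrative; note that the server lines are static, which is exactly the Auto Scaling limitation mentioned above):

defaults
    mode http
    timeout connect 5s
    timeout client  30s
    timeout server  30s

frontend ft_websocket
    bind *:443 ssl crt /etc/haproxy/certs/site.pem
    default_backend bk_websocket

backend bk_websocket
    balance source                  # pin each client to the same backend server
    timeout tunnel 1h               # keep long-lived websocket connections open
    server ws1 10.0.0.11:8080 check
    server ws2 10.0.0.12:8080 check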