Deploying an HTTP/2 web-server with Kubernetes on AWS

I have a Go server that currently runs on Kubernetes on AWS. The website sits behind Route 53 and an ELB that handles the SSL termination.
Now I want to support HTTP/2 in my web-server in order to push resources to clients, and I saw that HTTP/2 requires the web-server to use HTTPS. I have a few questions regarding that.
HTTP/2 requires HTTPS - in my case the HTTPS logic lives in the ELB, which manages the SSL termination for me. My application receives the decrypted data as a plain HTTP request. Do I need to remove the ELB to enable HTTP/2 in my web-server?
Is there any way to leave the ELB there and enable HTTP/2 in my web-server?
In local development I use openssl to generate a certificate. If I deploy the web-server, I need to get the CA certificate from AWS, store it somewhere in the Kubernetes certificate manager, and inject it into my web-server at initialization. What is the recommended way to do this?
I feel like I'm missing something, so I'd appreciate any help. Thanks

The new Application Load Balancer supports HTTP/2 (https://aws.amazon.com/blogs/aws/new-aws-application-load-balancer/) but not server push (https://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-listeners.html#listener-configuration): “You can't use the server-push feature of HTTP/2”
If you want to use push, you can run the ELB as a layer 4 (TCP) load balancer and enable HTTP/2 at your webserver. With HAProxy it is also possible to still offload SSL/TLS in this setup (HTTP/2 behind a reverse proxy), but I'm not sure whether something similar is possible with ELB (probably not). This works because, while all the major browsers require HTTPS for HTTP/2, it is not a requirement of the protocol itself, so the load balancer -> server hop can use HTTP/2 without HTTPS (called h2c).
However, I would say that HTTP/2 push is very complicated to get right - read this excellent post by Jake Archibald of Google on this: https://jakearchibald.com/2017/h2-push-tougher-than-i-thought/. It has generally been found to help in a few cases, make no difference in most, and even degrade performance in others. Ultimately it's a bit of a letdown among HTTP/2's features, though personally I don't think it has been explored enough, so there may be some positives to come out of it yet.
So if you don't want push, is there still a point in upgrading to HTTP/2 on the front end? Yes, in my opinion, as detailed in my answer here: HTTP2 with node.js behind nginx proxy. That also shows there is no real need to have HTTP/2 on the backend from the LB to the webserver, meaning you could leave the ELB as an HTTPS-offloading load balancer.
It should be noted that there are some use cases where HTTP/2 is slower:
Under heavy packet loss (i.e. a very bad Internet connection). Here the single TCP connection used by HTTP/2, with its TCP head-of-line blocking, means the connection suffers more than six individual HTTP/1 connections would. QUIC, an even newer protocol than HTTP/2 (so new it's not finalized yet, so not really available except on Google servers), addresses this.
For large packets, due to AWS's specific implementation. Interesting post on that here: https://medium.com/@ptforsberg/on-the-aws-application-load-balancer-http-2-support-fad4bc67b21a. This is only really an issue for truly large downloads, most likely from APIs, and shouldn't be an issue for most websites (and if it is, you should optimise your website, because HTTP/2 won't be able to help much anyway!). It could easily be fixed by increasing the HTTP/2 window size setting, but it looks like ELB does not let you set this.

There is no benefit to deploying HTTP/2 on an AWS load balancer if your backend does not also speak HTTP/2. Technically HTTP/2 does not require HTTPS, but in practice nobody implements HTTP/2 over plain HTTP. HTTP/2 is (from a simple viewpoint) a protocol optimization that removes round trips, improves pipelining, etc. If the load balancer communicates with your backend via HTTP/1.1, there will not be any improvement on that hop. The load balancer will see a small decrease in load due to reduced round trips during HTTPS setup.
I recommend that you configure your backend services to only use HTTPS (redirect clients to HTTPS) and use an SSL certificate. Then configure HTTP2, which is not easy by the way. You can use Let's Encrypt for SSL which works very well. You can also use OpenSSL self-signed certificates (which I don't recommend). You cannot use AWS services to create SSL certificates for your backend services, only for AWS managed services (CloudFront, ALB, etc.).
You can also set up the load balancer with Layer 4 (TCP) listeners. This is what I do when I set up HTTP/2 on my backend servers. The entire path from client to backend then uses HTTP/2 without double SSL encryption/decryption layers.
One of the nice features of load balancers is "SSL offloading": you enable SSL on the load balancer and only HTTP on your backend web servers. This works against HTTP/2, though. Therefore, think through what you really want to accomplish, and then design your services to meet those objectives.
Another point to consider. Since you are looking into HTTP/2, take the opportunity to remove support in your services for older TLS versions and unsafe encryption and hashing algorithms. Dropping TLS 1.0 should be mandatory today, and I recommend dropping TLS 1.1 as well. Unless you really need to support ancient browsers or custom low-end hardware, TLS 1.2 should be the minimum today. Your logfiles can tell you whether clients are connecting via older protocols.

Related

Enforce AES-256 on AWS Elastic Beanstalk

Disclaimer: I am fully aware that AES-128 is considered secure, but we have weird governmental requirements.
We run a server that provides a websocket interface to our clients as an Elastic Beanstalk application on AWS. It has an Application Load Balancer in front of it which handles the HTTPS termination. We have a strange requirement on our system that all channels must use > 200-bit encryption.
When our clients (which are IoT devices) establish the connection, the negotiated encryption ends up being AES-128 (because all security policies in AWS accept AES-128, and the devices do too).
The only way to enforce AES-256 on the server side is to use the classic load balancer and add the ciphers ourselves. However, the classic load balancer does not support websockets.
Is there any possible way of circumventing this? Or do we need to add our own encryption on top of our channel to fulfil the requirements?
I believe the best you can do with an Application Load Balancer (ALB) is to configure it to use the ELBSecurityPolicy-FS-1-2-Res-2020-10 security policy; however, based on the table in the docs, it will still be possible to negotiate ECDHE-ECDSA-AES128-GCM-SHA256 and ECDHE-RSA-AES128-GCM-SHA256, which allow AES-128 as the encryption method.
Another option would be to put a WebSocket API Gateway in front, but the ciphers are pretty much the same, and you might need to deal with throttling in that case, which is probably not the best thing to do considering the IoT clients.
Putting CloudFront in front of the ALB is not going to cut it either, as it takes the same approach and the ciphers in its security policies are essentially the same.
The security policy of the Network Load Balancer (NLB) is actually the same as the one of the ALB.
Essentially all possible AWS services are relying on the same security policies.
Which leads us to the two final options:
trying somehow to force it on the client end, which is most likely not possible,
or replacing the ALB with a Network Load Balancer (which supports WebSockets), as suggested by @Mark B, setting up TCP listeners on it, and handling the SSL yourself server side in your EB application. The details vary based on your application platform, but you should be able to enforce stricter (AES-256) ciphers.

How to get HTTP/2 working in a Kubernetes cluster using ingress-nginx

We have a number of services behind an API gateway which is itself behind ingress-nginx. We're trying to use HTTP/2 to speed up data transfer to the front-end but all of our connections are still being done with HTTP/1.1.
The connection from client to nginx is over HTTPS, but nginx communicates with our API gateway using HTTP, and the gateway also uses HTTP to communicate with the backend services.
Do we need to use HTTPS from end-to-end to get HTTP/2 to work? If so, what's the best way to set this up re: using certificates? If not, what could be causing the connection to drop to HTTP/1.1?
We are using ingress-nginx version 0.21.0, which has nginx 1.15.6 and OpenSSL 1.1.1, which should be sufficient to support TLS 1.3/ALPN/HTTP2. Our nginx-configuration configmap has use-http2 set to true and I can see that the pod's /etc/nginx.conf has a listen ... http2; line.
Edit 10/05/2019:
Further to the comments of @Barry Pollard and @Rico, I've found out that AWS Elastic Load Balancer, which sits in front of our ingress-nginx controller, doesn't support HTTP/2. I've cut nginx out of the stack and our API Gateway is being provisioned with its own Network Load Balancer. However, we're still on HTTP/1.1. It looks like ASP.NET Core 2.2's HTTP server, Kestrel, supports HTTP/2 by default, so I'm not sure why the connection is still dropping to 1.1.
Like @BarryPollard said, you shouldn't need HTTP/2 end-to-end to establish HTTP/2 connections from your browser.
It sounds like whatever you are using as a client is dropping to HTTP/1.1. Make sure you try with one of the following:
Chrome 51
Firefox 53
Edge 12
Internet Explorer 11
Opera 38
You didn't specify what architecture is fronting your nginx. Is it connected directly to the Internet, or does it go through a cloud load balancer or a CDN? You can also test with Wireshark as described here.

Having load balancer before Akka Http multiple applications

I have multiple identical Scala Akka-HTTP applications, each installed on a dedicated server (around 10 apps), responding to HTTP requests on port 80. In front of this setup I am using a single HAProxy instance that receives all the incoming traffic and balances the workload across these 10 servers.
We would like to replace HAProxy (we suspect it is causing us latency problems) with a different load balancer. The requirement is to adopt a different third-party load balancer, or to develop a simple one in Scala that round-robins each HTTP request to the backend Akka-HTTP apps and proxies back the responses.
Is there another recommended (open source) load balancer I can use to load balance / proxy the incoming HTTP requests to the multiple apps other than HAProxy (maybe Apache httpd)?
Does it make sense to write a simple Akka-HTTP application route as the load balancer, register the backend app hosts in some configuration file, and round-robin the requests to them?
Maybe I should consider an Akka cluster for that purpose? The thing is, the applications are already standalone Akka-HTTP services with no cluster support, and going for clustering might be too much. (I would like to keep it simple.)
What is the best practice for load balancing requests to HTTP apps (especially Akka-HTTP Scala apps)? I might be missing something here.
Note - back pressure is also something we would like to have, meaning that if the servers are busy we would like to respond with a 204 or some other status code, so our clients won't get timeouts when the backend is busy.
Although Akka HTTP's performance is quite impressive, I would not use it to write a simple reverse proxy, since there are tons of others out there in the community.
I am not sure where you deploy your app, but the best (and most secure) approach is to use a LB provided by your cloud provider. Most of them have one, and it usually has a good cost-benefit ratio.
If your cloud provider does not provide one, or you are hosting your app yourself, then you should first take a close look at your HAProxy. Did you test HAProxy in isolation to see whether it really has the same latency issues? Are you sure its config is optimised for what you want? Does your HAProxy have enough resources (CPU and memory) to operate? Is your HAProxy in the same data center as your deployed app?
If you have checked all of these questions and are still seeing latency issues, then I would recommend choosing another one. There are tons out there, such as Envoy and NGINX. I really like Envoy, and I've been using it at work for a few months now without any complaints.
Hope I could help.
[]'s

HTTP2 over AWS ELB under TCP Mode

Does anyone have experience running an HTTP/2 server behind AWS ELB in TCP mode?
As far as I know, AWS ELB does not support HTTP/2 at the moment; however, in TCP mode it should pass requests through to the backend server transparently.
Does someone have experience to share?
Thank you.
Yes, a TCP listener on port 443 works to bypass ELB's HTTPS handling, but there's no way to do session stickiness, since the ELB can't read cookies on the wire.
You may also consider using h2c (HTTP/2 over cleartext).
Supposedly the new Application Load Balancer supports HTTP/2. I'm a little unclear whether it's useful, however, if CloudFront doesn't support it yet:
https://aws.amazon.com/blogs/aws/new-aws-application-load-balancer/
ELB has no way of pipelining connections, so you cannot trick it into doing HTTP/2. Maybe with the new version coming out, but I'm not sure.

Does Kinesis support HTTP (not HTTPS)?

I have tried out the Kinesis REST API over HTTPS and it works fine. But I want to use it over plain HTTP, not HTTPS. Does Kinesis support HTTP without SSL?
No, it doesn't. According to the Regions and Endpoints documentation the Kinesis endpoints only support HTTPS.
http://docs.aws.amazon.com/general/latest/gr/rande.html#ak_region
If you are in a situation where you need to communicate with an API that only supports HTTPS, but you are, for some significant reason, constrained to HTTP only, you might find that you can use a proxy that accepts unencrypted connections and originates encrypted connections to the final endpoint. On some of my legacy systems I have accomplished this with HAProxy 1.5 or higher (previous versions do not have built-in OpenSSL integration)... or Stunnel 4, which I used before HAProxy 1.5 was released. Apparently there is now a Stunnel 5.
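For illustration, a minimal HAProxy TLS-origination config might look like the following (the region, ports, and CA bundle path are assumptions for the example; the client must still sign its requests for the real endpoint host, so SigV4 signing is unaffected):

```
# Accept plain HTTP from the legacy system locally...
frontend kinesis_plain
    bind 127.0.0.1:8000
    mode http
    default_backend kinesis_tls

# ...and originate TLS to the real Kinesis endpoint.
backend kinesis_tls
    mode http
    http-request set-header Host kinesis.us-east-1.amazonaws.com
    server kinesis kinesis.us-east-1.amazonaws.com:443 ssl verify required ca-file /etc/ssl/certs/ca-certificates.crt
```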
Of course, this is only viable if the network between the legacy system and the proxy doing the SSL origination is trusted.