I have the following setup:
client --> AWS NLB (terminates SSL) --> nginx --> webserver
How can I get nginx to serve content over HTTP2? Enabling it on the nginx server config just causes the browser to download a file when accessing a page.
Browsers use ALPN as part of the TLS negotiation to decide to use the HTTP/2 protocol.
As your TLS termination is happening at the NLB, it must announce HTTP/2 support and then pass the unencrypted HTTP/2 data on to Nginx.
I can't see anything to suggest that NLB supports setting ALPN, so I'm not sure this is possible. You will need to ask AWS whether it is supported, as there is nothing about it in their documentation, but that in itself probably gives you the answer you don't want.
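If you want to see what is actually being negotiated through the NLB, a quick check from the command line (a sketch; example.com stands in for your domain) is:

# Check whether the load balancer offers h2 via ALPN; "No ALPN negotiated" means HTTP/1.1 fallback
openssl s_client -connect example.com:443 -alpn h2 < /dev/null 2>/dev/null | grep -i "alpn"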
Not sure why it’s downloading a file. Does the same thing happen if you connect directly to Nginx?
The solution I ultimately arrived at was this:
client --> AWS NLB --> AWS ALB (terminates SSL) --> nginx --> webserver
The trick was to use TCP on port 443 on the NLB at creation time. The web UI does not permit you to add a TCP listener on 443 afterward; it requires you to use the TLS option on 443 and select a cert for TLS termination. The only reason I'm using an NLB is that it supports static IP association. TCP passthrough to the ALB works for my use case.
Since the ALB terminates TLS and also supports HTTP/2, this setup works.
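For reference, a rough sketch of creating the plain TCP listener via the AWS CLI (ARNs are placeholders; the target group is assumed to already point at the ALB or its IPs):

# Create a plain TCP listener on 443 so TLS passes through to the ALB untouched
aws elbv2 create-listener \
  --load-balancer-arn <nlb-arn> \
  --protocol TCP \
  --port 443 \
  --default-actions Type=forward,TargetGroupArn=<target-group-arn>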
We use an Application Load Balancer behind which we have an nginx server. Our client has asked us to implement mTLS but I don't think that works if the ALB terminates TLS connections.
I know that our ALB currently swaps out the self-signed certificate of our nginx server and replaces it with its own, which is a pretty good indication that it terminates TLS connections.
If we can't change that we'd have to switch to an NLB instead.
Can an ALB be configured to work without terminating TLS connections in AWS, or is that impossible?
You are correct. ALB unfortunately does not support mTLS at this time (I really wish AWS would add that feature). And since the ALB needs to terminate the SSL connection in order to do all the things it does, like path-based forwarding, there is no way for them to add TCP pass-through to the ALB. You will need to switch to an NLB and handle all the SSL certificate work on your server.
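If you do move to an NLB with a TCP pass-through listener, a minimal sketch of terminating TLS and enforcing mTLS on nginx itself might look like this (paths, names, and the upstream are placeholders):

server {
    listen 443 ssl;
    server_name example.com;

    # TLS terminates here, since the NLB only passes TCP through
    ssl_certificate     /etc/nginx/ssl/server.crt;
    ssl_certificate_key /etc/nginx/ssl/server.key;

    # mTLS: require a client certificate signed by this CA
    ssl_client_certificate /etc/nginx/ssl/client-ca.crt;
    ssl_verify_client on;

    location / {
        proxy_pass http://127.0.0.1:8080;
    }
}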
According to the AWS documentation, "WebSockets and Secure WebSockets support is available natively and ready for use on an Application Load Balancer."
However, when I select Application Load Balancer in EC2, I don't have any option other than HTTP and HTTPS for the listener protocol.
I would like to use the secure WebSocket protocol (wss://), which I believe would run over TLS on port 8888.
How can I input this option?
The solution was to use HTTPS for the listener protocol, even though the browser is making requests to wss://.
For port number, configuring both the listener and environment instance to port 8888 works.
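If you are setting the listener up outside the Elastic Beanstalk console, a sketch of the equivalent ALB listener via the AWS CLI (ARNs are placeholders) would be:

# HTTPS listener on 8888; the wss:// upgrade happens over this connection
aws elbv2 create-listener \
  --load-balancer-arn <alb-arn> \
  --protocol HTTPS \
  --port 8888 \
  --certificates CertificateArn=<acm-certificate-arn> \
  --default-actions Type=forward,TargetGroupArn=<target-group-arn>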
We have a number of services behind an API gateway which is itself behind ingress-nginx. We're trying to use HTTP/2 to speed up data transfer to the front-end but all of our connections are still being done with HTTP/1.1.
The connection from client to nginx is over HTTPS, but nginx communicates with our API gateway using HTTP, and the gateway also uses HTTP to communicate with the backend services.
Do we need to use HTTPS from end-to-end to get HTTP/2 to work? If so, what's the best way to set this up re: using certificates? If not, what could be causing the connection to drop to HTTP/1.1?
We are using ingress-nginx version 0.21.0, which has nginx 1.15.6 and OpenSSL 1.1.1, so it should be sufficient to support TLS 1.3, ALPN, and HTTP/2. Our nginx-configuration ConfigMap has use-http2 set to true, and I can see that the pod's /etc/nginx/nginx.conf has a listen ... http2; line.
Edit 10/05/2019:
Further to the comments from @Barry Pollard and @Rico, I've found out that AWS Elastic Load Balancer, which sits in front of our ingress-nginx controller, doesn't support HTTP/2. I've cut nginx out of the stack and our API Gateway is being provisioned its own Network Load Balancer. However, we're still on HTTP/1.1. It looks like ASP.NET Core 2.2's HTTP server, Kestrel, uses HTTP/2 by default, so I'm not sure why the connection is still dropping to 1.1.
As @BarryPollard said, you shouldn't need HTTP/2 end-to-end to establish HTTP/2 connections in your browser.
It sounds like whatever you are using as a client is dropping to HTTP/1.1; make sure you try with one of the following:
Chrome 51
Firefox 53
Edge 12
Internet Explorer 11
Opera 38
You didn't specify what architecture is fronting your nginx. Is it connected directly to the internet, or does it go through a cloud load balancer or a CDN? You can also test with Wireshark as described here.
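A quick way to see which protocol you actually end up on (assuming curl 7.50 or newer; example.com is a placeholder for your endpoint):

# Prints "2" if the connection was negotiated as HTTP/2, "1.1" otherwise
curl -sI --http2 -o /dev/null -w '%{http_version}\n' https://example.com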
I have tried unsuccessfully to configure SSL for my project.
My AWS load balancer is configured correctly and accepts the certificate keys. I have configured the listeners to route both port 80 traffic and port 443 traffic to my port 80 on the instance.
I would imagine that no further modification is necessary on the instance (Nginx and Puma), since everything is routed to port 80 on the instance. I have seen examples where the certificate is installed on the instances, but I understand the load balancer is the SSL termination point, so this should not be necessary.
When accessing via http://www.example.com everything works fine. However, accessing via https://www.example.com times out.
I would appreciate some help with the proper high-level setup.
Edit: I have not received any response to this question. I assume it is too general?
I would appreciate confirmation that the high-level reasoning I am using is the right one: I should install the certificate on the load balancer only and configure the load balancer to accept connections on port 443, but route everything internally over port 80 to the web server instances.
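For concreteness, the listener setup I have in mind would be roughly the following via the AWS CLI (names and the certificate ARN are placeholders):

# Terminate SSL on the load balancer and forward both listeners to port 80 on the instances
aws elb create-load-balancer-listeners \
  --load-balancer-name my-load-balancer \
  --listeners "Protocol=HTTP,LoadBalancerPort=80,InstanceProtocol=HTTP,InstancePort=80" \
              "Protocol=HTTPS,LoadBalancerPort=443,InstanceProtocol=HTTP,InstancePort=80,SSLCertificateId=<certificate-arn>"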
I just stumbled over this question as I had the same problem: all requests to https://myapp.com timed out and I could not figure out why. Here, in short, is how I achieved (forced) HTTPS in a Rails app on AWS:
My app:
Rails 5 with config.force_ssl = true enabled (in production.rb), so all connections coming in over HTTP get redirected to HTTPS by the Rails app. There is no need to set up complicated nginx rules. The same app used the gem 'rack-ssl-enforcer' when it was on Rails 4.2.
Side note: AWS load balancers used to issue plain HTTP GET requests to check the health of the instances (today they also support HTTPS). Therefore an exception rule had to be defined for the SSL enforcement. In Rails 5: config.ssl_options = { redirect: { exclude: -> request { request.path =~ /health-check/ } } } (in production.rb), with a corresponding route to a controller in the Rails app.
Side note to side note: in Rails 5, the initializer new_framework_defaults.rb already defines "ssl_options". Make sure to deactivate that before using the "ssl_options" rule in production.rb.
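Putting those app-side settings together, the relevant lines in production.rb look roughly like this (a sketch; the /health-check path is whatever route your health-check controller uses):

# config/environments/production.rb
# Redirect all HTTP traffic to HTTPS, except the load balancer's health check
config.force_ssl = true
config.ssl_options = {
  redirect: { exclude: ->(request) { request.path =~ /health-check/ } }
}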
AWS:
Elastic Beanstalk set up on AWS with a valid cert on the load balancer, using two listener rules:
HTTP 80 requests on LB get directed to HTTP 80 on the instances
HTTPS 443 requests on LB get directed to HTTP 80 on the instances (here the certificate needs to be applied)
As you can see, the load balancer is the SSL termination point. All requests coming in over HTTP go through the LB and are then redirected to HTTPS by the Rails app automatically.
The thing no one tells you
With this in place, the HTTPS requests will still time out (I spent days figuring out why). In the end it was an extremely simple issue: the security group of the load balancer (in the AWS console -> EC2 -> Security Groups) only accepted requests on port 80 (HTTP). Just allow port 443 (HTTPS) as well. It should then work (at least it did for me).
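For reference, the same fix via the AWS CLI (the security group ID is a placeholder) looks roughly like:

# Allow inbound HTTPS to the load balancer's security group
aws ec2 authorize-security-group-ingress \
  --group-id <elb-security-group-id> \
  --protocol tcp \
  --port 443 \
  --cidr 0.0.0.0/0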
I don't know if you solved your problem, but for whoever may find this question, here is what I did to get it working.
I've been reading all day and found a mix of two configurations that are currently working.
Basically you need to configure nginx to redirect to HTTPS, but some of the recommended approaches end up not changing the nginx config at all.
I'm using this gist's configuration:
https://gist.github.com/petelacey/e35c98f9a35063a89fa9
On top of that configuration, I added the command to restart the nginx server from this gist:
https://gist.github.com/KeithP/f8534c04d20c2b4e4b1d
My take is that by the time the eb deploy process copies the config files, nginx has already started(?), making those changes useless. Hence the need to restart it manually; if someone has a better approach, let us know.
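For context, the redirect those gists add boils down to something like this (a sketch; the upstream address is a placeholder), plus a post-deploy hook that runs sudo service nginx restart:

server {
    listen 80;
    server_name _;

    # The ELB terminates SSL and sets X-Forwarded-Proto, so redirect anything
    # that originally arrived over plain HTTP
    if ($http_x_forwarded_proto != 'https') {
        return 301 https://$host$request_uri;
    }

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto $http_x_forwarded_proto;
        proxy_pass http://127.0.0.1:3000;
    }
}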
Michael Fehr's answer worked and should be the accepted answer. I had the same problem; adding config.force_ssl = true is what I had missed. One remark: you don't need to add the Elastic Beanstalk configuration file they say you have to add if you are using the load balancer. That can be misleading, and they do not specify it in the docs.
We've been using nginx compiled with the SPDY module for some time now, and despite it being only draft 2 of the spec, we are quite pleased with its performance.
However we now have the need to horizontally scale and have put our EC2 instances behind an Elastic Load Balancer.
Since ELB doesn't support the NPN protocol, we have set the listeners to the following:
SSL 443 -> SSL 443
We have also enabled the new proxy-protocol as described here:
http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/enable-proxy-protocol.html
Everything works completely fine with this configuration. Our app is successfully load-balanced across our instances.
However, when running http://spdycheck.org/, it reports that SPDY is not enabled. Yet if I point spdycheck at the Elastic IP of a single instance, it correctly reports SPDY as enabled.
Any help would be greatly appreciated.
Doing SSL -> SSL doesn't pass the original TCP stream through to your webserver.
AWS decrypts the traffic using the certificate and re-encrypts it, so your backend only receives the modified packets, and the NPN negotiation is lost.
The viable option is to change the listener protocols to TCP, but you will then need nginx's proxy protocol support to recover the original client information in the HTTP headers.
I'm having the same problem as well and am waiting for either AWS to enable NPN negotiation on ELBs or for nginx to add the accept-proxy patch to its module.
We just released it last night at https://www.ritani.com.
You'll need a version of nginx that supports spdy and proxy_protocol. We are on 1.6.2.
Through the AWS CLI, add and attach the proxy_protocol policy to your ELB:
http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/enable-proxy-protocol.html#enable-proxy-protocol-cli
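Roughly, that comes down to the following two commands (the load balancer and policy names are placeholders):

# Create a ProxyProtocol policy and attach it to the backend port the TCP listener forwards to
aws elb create-load-balancer-policy \
  --load-balancer-name my-elb \
  --policy-name EnableProxyProtocol \
  --policy-type-name ProxyProtocolPolicyType \
  --policy-attributes AttributeName=ProxyProtocol,AttributeValue=true

aws elb set-load-balancer-policies-for-backend-server \
  --load-balancer-name my-elb \
  --instance-port 443 \
  --policy-names EnableProxyProtocol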
Through the AWS Web UI for that ELB, remove any 443 listeners. Add a new listener as TCP 443 -> TCP 443.
In your nginx config server block:
listen 443 ssl spdy proxy_protocol;
add_header Alternate-Protocol 443:npn-spdy/3;
all the standard ssl directives...
To get OCSP stapling to work I had to use three certs. The standard approach of concatenating my.crt and my.intermediate.crt didn't work; I had to break them out as follows.
ssl_certificate /etc/nginx/ssl/my.crt;
ssl_certificate_key /etc/nginx/ssl/my.private.key;
ssl_trusted_certificate /etc/nginx/ssl/my.intermediate.crt;
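For completeness, the stapling directives themselves (not shown above) would typically sit alongside those lines; the resolver address is a placeholder for whatever DNS your instances use:

# Enable OCSP stapling; ssl_trusted_certificate above is what lets stapling verification succeed
ssl_stapling on;
ssl_stapling_verify on;
resolver 8.8.8.8 valid=300s;
resolver_timeout 5s;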
Lastly, swap any instances of $remote_addr with $proxy_protocol_addr. $remote_addr is now the ELB's address, and $proxy_protocol_addr is the remote client's IP.
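Alternatively, if you would rather keep using $remote_addr, the realip module can rewrite it from the proxy protocol header; a sketch (the CIDR is a placeholder for your ELB/VPC range):

# Trust proxy protocol information from the ELB and restore the client IP into $remote_addr
set_real_ip_from 10.0.0.0/8;
real_ip_header proxy_protocol;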