Setup: Play Framework application deployed on Amazon EC2 instances via ECS, Elastic Load Balancer in front. I want to allow only HTTPS requests for the application.
I found several ways to use HTTPS with Play, but what are the pros and cons, or which one is best practice for a (dockerized) Play app?
1. Enable HTTPS directly within Play (with -Dhttps.port or https.port in the config file).
2. Set up a front-end web server (e.g. Nginx) and let it handle the HTTP-to-HTTPS rewrite.
3. Implement a request filter in Play and redirect the requests within the application (as described here).
I'm not keen on the first option, as I would have to manage the certificates separately on each instance, but I list it for the sake of completeness.
One advantage I can see in the third approach is that the system architecture is simpler than in the second and requires less configuration. Are there any disadvantages (e.g. performance) to using the third approach?
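For reference, the rewrite in option 2 is only a few lines of Nginx configuration; a minimal sketch (example.com is a placeholder domain):

```nginx
# Redirect all plain-HTTP traffic to HTTPS with a permanent (301) redirect.
server {
    listen 80;
    server_name example.com;   # placeholder
    return 301 https://$host$request_uri;
}
```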
If you are using a load balancer then you should request a free SSL certificate from the Amazon Certificate Manager service and then attach that certificate to the load balancer.
To enable HTTP-to-HTTPS redirects, you simply need to check the X-Forwarded-Proto header that the load balancer passes to the server. If it is http, return a 301 redirect to the equivalent https URL. The article you linked covers this part.
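The check itself is tiny; here is a framework-agnostic Python sketch of the logic (the function name is mine, not Play's filter API):

```python
def redirect_location(headers, host, path):
    """Return the HTTPS URL to 301-redirect to, or None if the request
    already arrived over HTTPS according to the load balancer."""
    # The ELB sets X-Forwarded-Proto to the scheme the client used.
    proto = headers.get("x-forwarded-proto", "https").lower()
    if proto == "http":
        return f"https://{host}{path}"
    return None

# A request that hit the ELB over plain HTTP gets redirected;
# a request that was already HTTPS passes through untouched.
print(redirect_location({"x-forwarded-proto": "http"}, "example.com", "/login"))
print(redirect_location({"x-forwarded-proto": "https"}, "example.com", "/login"))
```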
My instance is a single instance, no load balancer.
I cannot seem to add a load balancer to my existing app instance.
Other recommendations regarding Elastic Load Balancer are obsolete - there seems to be no such service in AWS.
I do not need caching or edge delivery - my application is entirely transactional APIs, so probably don't need CloudFront.
I have a domain name and a name server (external to AWS). I have a certificate (generated in Certificate Manager).
How do I enable HTTPS for my Elastic Beanstalk Java application?
CloudFront is the easiest and cheapest way to add SSL termination, because AWS will handle it all for you through its integration with certificate manager.
If you add an ELB, you have to run it 24/7 and it will double the cost of a single instance server.
If you want to support SSL termination on the server itself, you'll have to set that up yourself (in your web container, such as Apache, Nginx, Tomcat, or whatever you're running). It's not easy to set up.
Even if you don't need caching, CloudFront is going to be worth it just for handling your certificate (which is as simple as selecting the certificate from a drop-down).
I ended up using CloudFront.
That created a problem: cookies were not being passed through.
I created a custom Caching Policy to allow the cookies, and in doing so, I also changed the caching TTLs to be very low. This served my purposes.
This is the first time I'm using a load balancer... I've spent quite a bit of time going through the documentation and I'm still quite confused.
I want to host my website. My website supports HTTPS only. I want to put my backend servers behind an Application Load Balancer.
I am using AWS' default VPC, I have created an ALB (myALB) and installed my SSL certificate on it. I have also created 2 EC2 instances (myBackEndServer1 & myBackEndServer2).
Questions:
1. Should the communication between the backend servers and myALB be through HTTP or HTTPS?
2. I have created an HTTPS listener on myALB; do I also need an HTTP listener on myALB? What I want is to redirect any HTTP request to HTTPS (I believe this should happen on myALB).
3. I want to use external ID login (using Facebook). I have set up Facebook login to work with HTTPS only. Does the communication between Facebook and my backend servers go through myALB? I mean, I either need HTTPS on my backend servers, or the communication with Facebook should go through myALB.
I would appreciate any general advice.
You can use both HTTP and HTTPS listeners.
Yes, you can achieve that with ALB. You can add a rule to it that says that any request that is coming to port 80 will be redirected to port 443 on a permanent basis. Check out rules for ALB.
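That redirect rule can also be created with the AWS CLI; a sketch, assuming the `elbv2 create-listener` redirect action (the load balancer ARN below is a placeholder):

```shell
# Send all HTTP (port 80) traffic to HTTPS (port 443) with a permanent redirect.
aws elbv2 create-listener \
  --load-balancer-arn arn:aws:elasticloadbalancing:region:account:loadbalancer/app/myALB/xxxx \
  --protocol HTTP --port 80 \
  --default-actions 'Type=redirect,RedirectConfig={Protocol=HTTPS,Port=443,StatusCode=HTTP_301}'
```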
If you make a request from your instances to Facebook, whether the communication is encrypted depends on Facebook, because in that case you are the client. However, if you set up a webhook, Facebook is now the client, and to communicate with you it will use your load balancer's DNS name. Due to point 2 in this list, Facebook will be forced to use TLS.
I'm not sure I fully understood your third question, but here's something you may also find useful: ALB has a feature that lets it authenticate users with Cognito. The documentation explicitly says your EC2 instances can be abstracted away from any authentication, including Facebook or Google sign-in. I've never tried it, though.
To my knowledge, Google Cloud Load Balancer does not support HTTP-to-HTTPS redirect out of the box, and it's a known issue: https://issuetracker.google.com/issues/35904733
Currently, I'm sending certain requests to GKE backend where I run Kubernetes apps and I have GCS-backed backends. I'm also using Apache in the default backend where I force HTTPS.
The problem with this approach is that if a request matches the criteria for the GKE backend, I have no way to force HTTPS. I'm considering using the Apache backend for all requests and somehow proxying some of them to the GKE backend. That way the Apache backend becomes a bottleneck, and I'm not sure it's a good solution at all.
How would you approach this problem? Thanks in advance!
It seems the only way is to send HTTP traffic to a custom backend (it can be Apache/Nginx) and force the HTTPS upgrade there.
I find this answer useful if you're using GKE backend with an Ingress.
How to force SSL for Kubernetes Ingress on GKE
To force SSL traffic from Load Balancer to GKE backend (pod), you need to expose port 443 (or similar) on the pod and configure SSL there.
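The linked answer comes down to a GKE-specific Ingress annotation that simply stops serving plain HTTP at all (no redirect; port 80 is never exposed). A sketch, with placeholder names:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress                 # placeholder name
  annotations:
    # GKE-specific: do not provision the HTTP (port 80) forwarding rule
    kubernetes.io/ingress.allow-http: "false"
spec:
  defaultBackend:
    service:
      name: my-service             # placeholder service serving TLS
      port:
        number: 443
```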
We have a web application that serves both secure and public endpoints. We are currently deploying it with elastic beanstalk.
From now on, we want to require client certificates for the secure endpoints, i.e. for some endpoints a client-certificate check is needed.
However, Elastic Load Balancer has no configuration option to assign different SSL certificates to different routes.
The only solution we found is setting up Nginx instances in front of the Application Load Balancer and checking the certificates there.
Is there a way to achieve this on AWS?
Although I have not personally used one yet, I believe the newer Application Load Balancers might be able to handle this. They support different listeners and rules depending on the request, so it's definitely worth looking into before you go the Nginx route:
https://aws.amazon.com/elasticloadbalancing/
You can test one out by going into your EC2 services panel, and create a new load balancer. Choose the Application Load Balancer type and see if you can configure it as needed.
Authenticating clients with client certificates requires all of the SSL to be handled by the instances themselves.
Load balancing such a setup requires either a Classic ELB in TCP mode (transparent, no HTTP interpretation, with SSL not configured on the balancer)... or a Network Load Balancer, which would probably be the optimal configuration since it is handled by the network infrastructure itself, and is essentially infinitely scalable with no warm-up required.
Elastic Beanstalk recently announced support for Network Load Balancer.
I have some RESTful APIs deployed on AWS, mostly on Elastic Beanstalk.
My company is gradually adopting a Microservices architecture, and, therefore, I want to start managing these APIs in a more professional and automated way. Hence, I want to adopt some kind of API Manager to provide standard functionalities such as routing and discovery.
In addition, I wish to use such API Manager to expose some of my APIs to the Internet. The manager would be exposed to the Internet through SSL only and should require some sort of authentication from external consumers before routing their requests to the internal APIs. For my use case, a simple API Key in the Authorization header of every request would suffice.
I'm currently considering two products as API managers: WSO2 and Kong. The latter is a relatively new open-source project hosted on GitHub.
In all the deployment scenarios that I am considering, the API Managers would have to be deployed on AWS EC2 instances. Moreover, they would have to be deployed on, at least, two different availability zones and behind an Elastic Load Balancer (ELB) to provide high availability to the managed APIs.
Most of my APIs adhere to the HATEOAS constraints. Therefore, many of their JSON responses contain links to other resources, which must be built dynamically based on the original request.
For instance:
If a user sent a request from the Internet through the exposed API Manager, the URL would look like:
https://apimanager.mycompany.com/accounts/123
As a result, the user should receive a JSON response containing an Account resource with a link to, let's say, a Subscription resource.
The link URL should be based on the protocol, host and port of the original request, and, therefore, would look like: https://apimanager.mycompany.com/subscriptions/789.
In order to meet the dynamic link generation requirements mentioned above, my APIs rely on the X-Forwarded-Proto, X-Forwarded-Host and X-Forwarded-Port HTTP headers. These should contain the protocol (http or https), the host name and the port used by the consumer in the original request, regardless of how many proxies the request passed through.
However, I noticed that when requests pass through ELBs, the X-Forwarded-Proto and X-Forwarded-Port headers are changed to values that refer to the last ELB the request passed through, instead of the values that were in the original request.
For instance: If the original request hits the API Manager through HTTPS, the Manager forwards the request to the internal API through HTTP; thus, when the request hits the second ELB, the ELB changes the X-Forwarded-Proto header to "http". As a result, the original "https" value of the X-Forwarded-Proto header is lost. Hence, the API is unable to build proper links with the "https" protocol in the URLs.
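To illustrate, here is a rough, framework-agnostic Python sketch of the link-building logic (the function name is mine): once the internal ELB has rewritten X-Forwarded-Proto to http, the generated base URL comes out wrong.

```python
def link_base(headers):
    """Build the public base URL for HATEOAS links from X-Forwarded-* headers."""
    proto = headers.get("x-forwarded-proto", "http")
    host = headers.get("x-forwarded-host", "localhost")
    port = headers.get("x-forwarded-port", "")
    # Omit the port when it is the default for the scheme.
    default_port = "443" if proto == "https" else "80"
    authority = host if port in ("", default_port) else f"{host}:{port}"
    return f"{proto}://{authority}"

# Headers as set for the original request vs. after the internal ELB rewrote them:
print(link_base({"x-forwarded-proto": "https",
                 "x-forwarded-host": "apimanager.mycompany.com",
                 "x-forwarded-port": "443"}))
print(link_base({"x-forwarded-proto": "http",
                 "x-forwarded-host": "apimanager.mycompany.com",
                 "x-forwarded-port": "80"}))
```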
Apparently, ELBs can't be configured to behave in any other way. I couldn't find any setting that could affect this behavior in AWS's documentation.
Moreover, there doesn't seem to be any better alternative to AWS's ELBs. If I choose to use another product like HAProxy, or do the load balancing through the API Manager itself, I would have to install it on a regular EC2 instance, and, therefore, create a single point of failure.
I'm including an informal diagram to better convey my point of view.
Furthermore, I couldn't find any relevant discussion about deployment scenarios for WSO2 or Kong that would address these matters in any way. It's not clear to me how these products should relate to AWS's ELBs.
Comments from others with similar environments will be very welcome.
Thank you.
Interesting question/challenge - I'm not aware of a way to configure an Elastic Load Balancer's X-Forwarded-* header behavior. However, you might be able to work around this by leveraging ELB's different listener types for the two supported network layers of the OSI Model:
TCP/SSL Listener without Proxy Protocol
Rather than using an HTTP listener (OSI layer 7), which makes sense for terminating SSL etc., you could just use the non intrusive TCP/SSL listener (OSI layer 4) for your internal load balancers, see Protocols:
When you use TCP (layer 4) for both front-end and back-end connections, your load balancer forwards the request to the back-end instances without modifying the headers. [...] [emphasis mine]
I haven't tried this, but would expect the X-Forwarded-* headers added by the external HTTP/HTTPS load balancer to be passed through unmodified by the internal TCP/SSL load balancer in this scenario.
TCP/SSL Listener with Proxy Protocol
Alternatively, you could also leverage the more advanced/recent Proxy Protocol Support for Your Load Balancer right away, see the introductory blog post Elastic Load Balancing adds Support for Proxy Protocol for more on this:
Until today, ELB allowed you to obtain the client's IP address only if you used HTTP(S) load balancing, which adds this information in the X-Forwarded-For headers. Since X-Forwarded-For is used in HTTP headers only, you could not obtain the client's IP address if the ELB was configured for TCP load balancing. Many of you told us that you wanted similar functionality for TCP traffic, so we added support for Proxy Protocol. It simply prepends a human readable header with the client's connection information to the TCP data sent to your server. [...] Proxy Protocol is useful when you are serving non-HTTP traffic. Alternatively, you can use it if you are sending HTTPS requests and do not want to terminate the SSL connection on the load balancer. [...]
Unlike the X-Forwarded-* headers, Proxy Protocol handling can be enabled and disabled on the load balancer. On the flip side, your backend layers might not handle Proxy Protocol automatically yet and may need to be adapted accordingly.
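For reference, the Proxy Protocol (v1) header mentioned in the quote is a single human-readable line prepended to the TCP stream. A minimal Python parser sketch (illustrative, not production code):

```python
def parse_proxy_v1(header_line):
    r"""Parse a Proxy Protocol v1 header line, e.g.
    'PROXY TCP4 198.51.100.1 203.0.113.7 35646 80\r\n'.
    Returns (family, (src_ip, src_port), (dst_ip, dst_port))."""
    parts = header_line.rstrip("\r\n").split(" ")
    if not parts or parts[0] != "PROXY":
        raise ValueError("not a Proxy Protocol v1 header")
    if parts[1] == "UNKNOWN":  # balancer could not determine the client address
        return ("UNKNOWN", None, None)
    _, family, src_ip, dst_ip, src_port, dst_port = parts
    return (family, (src_ip, int(src_port)), (dst_ip, int(dst_port)))

print(parse_proxy_v1("PROXY TCP4 198.51.100.1 203.0.113.7 35646 80\r\n"))
```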