This is the first time that I am using load balancer... I have spent quite a bit of time going through documentation and I am still quite confused.
I want to host my website. My website supports HTTPS only. I want to put my backend servers behind an Application Load Balancer.
I am using AWS' default VPC, I have created an ALB (myALB) and installed my SSL certificate on it. I have also created 2 EC2 instances (myBackEndServer1 & myBackEndServer2).
Questions:
1. Should the communication between the backend servers and myALB be through HTTP or HTTPS?
2. I have created an HTTPS listener on myALB; do I also need an HTTP listener on myALB? What I want is to redirect any HTTP request to HTTPS (I believe this should happen on myALB).
3. I want to use external ID login (using Facebook). I have set up Facebook login to work with HTTPS only. Does the communication between Facebook and my backend servers go through myALB? I mean, I either need HTTPS on my backend servers, or the communication with Facebook should go through myALB.
I would appreciate any general advice.
1. You can use both HTTP and HTTPS listeners.
2. Yes, you can achieve that with the ALB. You can add a rule that permanently redirects (HTTP 301) any request arriving on port 80 to port 443. Check out the rules for ALB.
3. If you make a request from your instances to Facebook, then whether the communication is encrypted depends on Facebook, because in that case you are the client. However, if you set up a webhook, Facebook becomes the client: to let it reach you, you give it your load balancer's DNS name, and because of point 2 in this list, Facebook will be forced to use TLS.

I'm not sure I fully understood your third question, but here's something you may also find useful: ALB can authenticate users with Amazon Cognito. The documentation explicitly says that your EC2 instances can be abstracted away from any authentication, including when it uses Facebook or Google identities. I've never tried it, though.
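The port-80-to-443 redirect rule described above can be sketched with boto3's `elbv2` API. This is only a sketch: the ARN is a placeholder, and the actual AWS call is left commented out since it needs credentials and a real load balancer.

```python
# Sketch: HTTP->HTTPS redirect listener for an ALB via boto3 (elbv2).
# The ARN below is a placeholder -- substitute your own load balancer's ARN.

ALB_ARN = "arn:aws:elasticloadbalancing:REGION:ACCOUNT:loadbalancer/app/myALB/..."

# A "redirect" default action: send every port-80 request to the same
# host/path/query on port 443 with a permanent (HTTP 301) redirect.
redirect_action = {
    "Type": "redirect",
    "RedirectConfig": {
        "Protocol": "HTTPS",
        "Port": "443",
        "Host": "#{host}",
        "Path": "/#{path}",
        "Query": "#{query}",
        "StatusCode": "HTTP_301",
    },
}

def create_http_redirect_listener(alb_arn: str):
    """Create the port-80 listener whose only job is to redirect to 443."""
    import boto3
    client = boto3.client("elbv2")
    return client.create_listener(
        LoadBalancerArn=alb_arn,
        Protocol="HTTP",
        Port=80,
        DefaultActions=[redirect_action],
    )

# create_http_redirect_listener(ALB_ARN)  # requires AWS credentials
```

The same rule can be created in a couple of clicks in the console; the `#{host}`/`#{path}`/`#{query}` placeholders preserve the original URL across the redirect.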
Related
First of all, I'm in no way an expert at security or networking, so any advice would be appreciated.
I'm developing an iOS app that communicates with an API hosted on an AWS EC2 Linux machine.
The API is deployed using **FastAPI + Docker**.
Currently, I'm able to communicate with my remote API using HTTP requests to my server's public IP address (after opening port 80 for TCP) and transfer data between the client and my server.
One of my app's features requires sending a private cookie from the client to the server.
Since having the cookie allows potential attackers to make requests on behalf of the client, I intend to transfer the cookie securely with HTTPS.
I have several questions:
Will implementing HTTPS for my server solve my security issue? Is that the right approach?
The FastAPI "Deploy with Docker" docs recommend this article for implementing TLS for the server (using Docker Swarm Mode and Traefik). Is that guide relevant for my use case?
In that article, it says to define a server name using a subdomain of a domain you own. Do I really need to own a domain to implement HTTPS? Can't I just keep using the server's IP address to communicate with it?
Thanks!
Will implementing HTTPS for my server solve my security issue? Is that the right approach?
With HTTP, all traffic between your clients and the EC2 instance is in plain text, so anyone on the network path can read or modify it, including your cookie. With HTTPS, the traffic is encrypted in transit, which protects the cookie from eavesdropping and tampering.
FastAPI "Deploy with Docker"
Sadly can't comment on the article.
Do I really need to own a domain to implement HTTPS?
Yes, in practice. Publicly trusted SSL/TLS certificates are issued for domain names after the certificate authority verifies that you control the domain, so you can't get a certificate for a domain that isn't yours, and most public CAs won't issue one for a bare IP address.
I have a Classic Load Balancer that listens on both port 80 (HTTP) and 443 (HTTPS) for my Elastic Beanstalk application. I have correctly set up an SSL certificate and everything works properly with my custom domain.
However, when I search for my domain: mydomain.com it automatically uses http instead of https. How can I allow it to use https automatically?
Edit: I am trying to deploy a django application.
New to this so let me know if I am leaving out some information.
If you're not tied to a Classic Load Balancer, I recommend switching to an Application Load Balancer: it will give you much more control over how you route requests, and supports HTTP->HTTPS redirect out of the box (doc here although you'll probably want to read the rest of that page and work through one of the tutorials to understand the context).
If you are tied to the Classic ELB then you'll have to do the redirect in your client code. I'm not familiar with Django, but it seems straightforward. The prior answer on that question shows how to do it if you have nginx in front of Django.
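If the app is Django behind an ELB that terminates TLS, the redirect can also be done with Django's built-in settings rather than custom code. A minimal `settings.py` sketch (the ELB forwards the original scheme in the `X-Forwarded-Proto` header):

```python
# settings.py fragment (sketch): redirect HTTP to HTTPS when Django sits
# behind a load balancer that terminates TLS.

# Trust the balancer's X-Forwarded-Proto header to decide whether the
# original request was HTTPS (Django itself only ever sees plain HTTP
# from the balancer).
SECURE_PROXY_SSL_HEADER = ("HTTP_X_FORWARDED_PROTO", "https")

# Permanently redirect any request Django considers non-HTTPS.
SECURE_SSL_REDIRECT = True

# Optional hardening: only send session/CSRF cookies over HTTPS.
SESSION_COOKIE_SECURE = True
CSRF_COOKIE_SECURE = True
```

Only set `SECURE_PROXY_SSL_HEADER` when a trusted proxy is actually in front of the app; otherwise a client could spoof the header and bypass the redirect.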
I'm trying to serve a Django app on AWS Fargate over HTTPS.
I connected my Fargate service to a Network Load Balancer that uses a secure TCP (TLS) listener with a certificate from ACM. I then configured a Route 53 record set with the load balancer as an alias target, which made HTTPS connections possible.
That made my HTTPS connection work, but it is too slow to use in production: requests are much slower than HTTP requests made directly against the load balancer's DNS name. It seems like I have some problem between the load balancer and the Route 53 setup, but I don't know how to figure this out.
Generally there is little difference between HTTP and HTTPS requests beyond the TLS handshake and encryption overhead. Could you post your results for HTTP vs HTTPS requests? Maybe test it with JMeter, running one Fargate service over HTTP and another over HTTPS, with the same version of the app in both places.
Once you have your results, add logging to the tasks to see how fast each request is actually processed server-side, so you'll know for sure which one is slower and where the time goes. It would be a lot easier for us to help with that information.
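For a quick comparison without JMeter, a tiny latency harness like the one below works too. It's a sketch: the URLs in the commented example are placeholders, and the fetch function is injectable so the timing logic can be exercised without a live server.

```python
# Sketch: compare mean request latency between two endpoints, e.g. the
# HTTP and HTTPS variants of the same service.
import time
from urllib.request import urlopen


def mean_latency(url, n=5, fetch=lambda u: urlopen(u).read()):
    """Return the mean wall-clock seconds over n GET requests to url."""
    total = 0.0
    for _ in range(n):
        start = time.perf_counter()
        fetch(url)
        total += time.perf_counter() - start
    return total / n


# Example against a live service (URLs are placeholders):
# print(mean_latency("http://my-nlb-dns-name/health"))
# print(mean_latency("https://api.example.com/health"))
```

Run it from the same region as the load balancer if possible, so network distance doesn't dominate the comparison.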
I need a scalable and cost effective architecture for a web design service. (multiple clients). I'm following the architecture below. I would like to know the shortcomings of it.
Background: Nuxt.js based server rendered application that is fronted by nginx reverse proxy.
The app container and the proxy containers are deployed onto AWS ECS instances. The proxy containers are registered to an ALB (application load balancer) via listeners that map from a dynamic container port to a static ELB port.
So, suppose we have two clients: www.client-1.com and www.client-2.com.
When a request is made to www.client-1.com, the request is 301 redirected (with masking) to PORT 80 of the ALB. When the request hits ALB:80 it maps to instance_ip:3322 (where 3322 is a dynamic container port) via the listener-for-client-1 that is configured. And the response is sent back to the client.
When a request is made to www.client-2.com, the request is 301 redirected (with masking) to PORT 81 of the ALB. When the request hits ALB:81 it maps to instance_ip:3855 (where 3855 is a dynamic container port) via the listener-for-client-2 that is configured.
As you can see, this model allows me to share an elb across multiple clients. This model is tested and working for me.
Do you think the 301 domain forwarding is a terrible idea? Can you recommend an affordable alternative that doesn't require an ELB per client?
What other downsides do you see ?
Thanks!
Domain masking is always a terrible idea. Problems are inevitable, particularly when the browser is expected to access a non-standard port.
But none of this is necessary. ALB supports multiple applications (customers) on a single balancer.
You can now create Application Load Balancer rules that route incoming traffic based on the domain name specified in the Host header. Requests to api.example.com can be sent to one target group, requests to mobile.example.com to another, and all others (by way of a default rule) can be sent to a third.
https://aws.amazon.com/blogs/aws/new-host-based-routing-support-for-aws-application-load-balancers/
Despite the fact that this example uses subdomains of example.com, ALB has no restriction requiring that the domains be related. You can attach multiple SSL certificates to a single ALB (up to 26, including the default) and route, by hostname, from the standard ports 80 and 443 to a unique backend target for each request's Host header -- up to 100 rules per balancer.
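The host-based routing described above can be sketched with boto3's `elbv2` API. The ARNs are placeholders; you would create one rule like this per customer hostname, each forwarding to that customer's target group.

```python
# Sketch: one host-based routing rule on an ALB listener via boto3 (elbv2).
# ARNs are placeholders -- substitute your own listener and target group.

LISTENER_ARN = "arn:aws:elasticloadbalancing:REGION:ACCOUNT:listener/app/..."
TARGET_GROUP_ARN = "arn:aws:elasticloadbalancing:REGION:ACCOUNT:targetgroup/..."

# Match requests whose Host header is www.client-1.com and forward them
# to that client's target group. Lower Priority numbers are checked first.
rule = {
    "Conditions": [
        {"Field": "host-header", "Values": ["www.client-1.com"]},
    ],
    "Actions": [
        {"Type": "forward", "TargetGroupArn": TARGET_GROUP_ARN},
    ],
    "Priority": 10,
}

def create_host_rule(listener_arn: str):
    """Register the host-header rule with the listener."""
    import boto3
    client = boto3.client("elbv2")
    return client.create_rule(ListenerArn=listener_arn, **rule)

# create_host_rule(LISTENER_ARN)  # requires AWS credentials
```

With one rule per client, every customer domain can point at the same ALB on ports 80/443, and no redirects to non-standard ports are needed.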
Setup: Play Framework application deployed on Amazon EC2 instances via ECS, Elastic Load Balancer in front. I want to allow only HTTPS requests for the application.
I found several ways to use HTTPS with Play, but what are the pros and cons, or which one is best practice for a (dockerized) Play app?
Enable HTTPS directly within Play (with -Dhttps.port or https.port in config file).
Set up a front-end web server (e.g. Nginx) and let it handle the HTTP->HTTPS rewrite (example).
Implement a request filter in Play and redirect the requests within the application (as described here).
I'm not so keen on the first option, as I would have to manage the certificates separately on each instance, but I listed it for the sake of completeness.
One advantage I can think of for the third approach is that the system architecture is simpler than in the second option and requires less configuration. Are there any disadvantages (e.g. performance) to using the third approach?
If you are using a load balancer then you should request a free SSL certificate from the Amazon Certificate Manager service and then attach that certificate to the load balancer.
To enable HTTP to HTTPS redirects you simply need to check the X-Forwarded-Proto header that the load balancer passes to the server; if it is http, return a 301 redirect to the same URL with https. The article you linked covers this part.
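The X-Forwarded-Proto check is framework-agnostic; here is the same logic sketched as generic WSGI middleware in Python (the equivalent in Play would be a request filter, as in option 3 of the question):

```python
# Sketch: HTTP->HTTPS redirect as WSGI middleware, for an app running
# behind a load balancer that terminates TLS.

def https_redirect_middleware(app):
    def middleware(environ, start_response):
        # The load balancer terminates TLS and reports the original scheme
        # in X-Forwarded-Proto; the app itself only ever sees plain HTTP.
        proto = environ.get("HTTP_X_FORWARDED_PROTO", "http")
        if proto != "https":
            host = environ.get("HTTP_HOST", "")
            path = environ.get("PATH_INFO", "/")
            query = environ.get("QUERY_STRING", "")
            location = "https://" + host + path + (("?" + query) if query else "")
            start_response("301 Moved Permanently", [("Location", location)])
            return [b""]
        # Original request was already HTTPS: pass through untouched.
        return app(environ, start_response)
    return middleware
```

As with any forwarded-header scheme, only trust X-Forwarded-Proto when the app is reachable exclusively through the load balancer; a directly reachable instance would let clients spoof the header.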