Kubernetes on AWS: Exposing multiple domain names (Ingress vs ELB)

I am experimenting with a Kubernetes cluster on AWS.
At the end of the day, I want to expose 2 URLs:
production.somesite.com
staging.somesite.com
When exposing 1 URL, things (at least in the cloud landscape) seem to be easy.
You make the Service of type LoadBalancer --> AWS provisions an ELB --> you assign an A-type alias record (e.g. whatever.somesite.com) to the ELB's DNS name, and boom, there is your service, publicly available via the hostname you like.
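For reference, the kind of Service I mean is the standard LoadBalancer type (names and ports below are placeholders):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: whatever             # placeholder service name
spec:
  type: LoadBalancer         # this is what makes AWS provision the ELB
  selector:
    app: whatever            # matches the pods backing the service
  ports:
    - port: 80               # port exposed on the ELB
      targetPort: 8080       # port the container actually listens on
```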
I assume one easy (and I guess not best-practice-wise) way of going about this is to expose 2 ELBs.
Is Ingress the (good) alternative?
If so, what is the Route53 record I should create?
For what it's worth (and in case this may be a dealbreaker for Ingress):
production.somesite.com will be publicly available
staging.somesite.com will have restricted access

Ingress is for sure one possible solution.
You need to deploy an Ingress controller in your cluster (for instance https://github.com/kubernetes/ingress-nginx), then expose it with a Service of type LoadBalancer as you did previously.
In Route53, you need to point any domain names you want served by your ingress controller to that ELB's DNS name, exactly as you did previously.
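For illustration (this is just the manual Route53 step expressed as a CloudFormation-style YAML sketch; the zone, record name, and the ELB's DNS name and canonical hosted zone ID are placeholders you would read off the ELB Kubernetes provisioned):

```yaml
ProductionAlias:
  Type: AWS::Route53::RecordSet
  Properties:
    HostedZoneName: somesite.com.        # your public hosted zone
    Name: production.somesite.com.
    Type: A
    AliasTarget:
      # both values come from the ingress controller's ELB, not from your zone
      DNSName: abcdef1234567890.us-east-1.elb.amazonaws.com
      HostedZoneId: Z35SXDOTRQ7X7K       # the ELB's canonical hosted zone ID
```

staging.somesite.com gets an identical record pointing at the same ELB; it is the ingress controller that tells the two hostnames apart.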
The last thing you need to do is create an Ingress resource for every domain you want your ingress controller to be aware of (more on this here: https://kubernetes.io/docs/concepts/services-networking/ingress/).
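A minimal sketch of one such Ingress, assuming the nginx controller above and a hypothetical backend Service named web-production:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: production
spec:
  ingressClassName: nginx              # must match the controller you deployed
  rules:
    - host: production.somesite.com    # requests for this hostname ...
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web-production   # ... are routed to this Service
                port:
                  number: 80
```

A second Ingress handles staging.somesite.com the same way; for the restricted access you mention, ingress-nginx supports annotations such as nginx.ingress.kubernetes.io/whitelist-source-range to limit which source IPs may reach it.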
That being said, if you plan to have only 2 public URLs in your cluster, I'd use 2 ELBs. An Ingress controller is another component to be maintained/monitored in your cluster, so take this into account when evaluating the tradeoffs.

Related

Remove ECS container name from record name on AWS Route 53

I have a little architecture with two services running on an EC2 cluster in AWS ECS; they're healthy and I can access them via browser through two ALBs, pointing to the frontend and backend respectively. My frontend container can be configured with its backend base URL, so I want to connect it to the backend under a proper namespace with Route 53 Service Discovery (and not by using the ALB DNS name).
My problem is that I configured the tasks with awsvpc mode and pointed them to the single port I want to expose, but the EC2 instances (and the containers, when I access them via SSH) can't resolve the short namespace; I have to add the name of the container and its port, and I can't abstract those away (I think these are the original container names, because the names in pictures 2 and 3 do not match, but they are still accessible). When I used Fargate I could reach the containers by providing only the service name and namespace, but now with EC2 I can't.
I'll attach some pictures I believe are useful (the red text is the same name throughout):
Picture 1: service discovery of the backend
Picture 2: Route 53 records
Picture 3: active containers

AWS Load Balancer Path Based Routing

I am running a microservice application on AWS ECS. Each microservice currently has its own load balancer.
There is one main public-facing service, which the rest of the services communicate with via gateways. Having each service carry its own ELB is currently too expensive. Is there some way to have only 1 ELB, for the public-facing service, that routes to the other services based on path? Is this possible without actually having the other service names in the URL? Could a reverse proxy work?
I know this is a broad question but any help would be appreciated
In your EC2 console, go to the Load Balancers section, choose a load balancer, and in the Listeners tab click the button named view/edit rules; there you set the conditions to use a single load balancer for different clusters/instances of your app. Note that for each container you need a target group defined.
You can configure the load balancer to route based on:
HTTP headers
Path, e.g. www.example.com/a or www.example.com/b
Host header (hostname)
Query strings
or even source IP.
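The same rules can also be expressed as infrastructure-as-code. A hypothetical CloudFormation-style sketch of one path-based rule (the listener and target group references are placeholders):

```yaml
ServiceARule:
  Type: AWS::ElasticLoadBalancingV2::ListenerRule
  Properties:
    ListenerArn: !Ref PublicHttpListener       # placeholder listener
    Priority: 10                               # lower numbers are evaluated first
    Conditions:
      - Field: path-pattern
        PathPatternConfig:
          Values:
            - /a/*                             # requests under /a ...
    Actions:
      - Type: forward
        TargetGroupArn: !Ref ServiceATargetGroup   # ... go to service A's target group
```

Note that path-based rules do keep a path segment in the public URL; to avoid service names in the URL entirely, route on the Host header instead, or put a path-rewriting reverse proxy behind the single load balancer.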
That's it! cheers.

ALB ingress mixed private and internet facing paths

I have a set of containerized microservices behind an ALB serving as endpoints for my API. The ALB Ingress is internet-facing and I have set up my path routing accordingly. Suddenly the need has appeared for some additional (new) containerized microservices to be private (i.e. not accessible through the internet) but still reachable from, and able to communicate with, the ones that are public (internally).
Is there a way to configure path-based routing, or to modify the Ingress with some annotation, to keep certain paths private?
If not, would a second ingress (an internal one this time) under the same ALB do the trick for what I want?
Thanks,
George
It turns out that (at least in my case) the solution is to ignore the internet-facing Ingress and let it do its thing. Internal-facing REST API paths that should not be otherwise accessible can be used through their pods' Service specification.
Implementing a Service per microservice allows internal access at its service-name:port, without the need to modify anything in the initial Ingress, which will continue to handle the internet-facing API(s).
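A minimal sketch of such an internal Service (the name and ports are placeholders); since no type is specified, it defaults to ClusterIP and is reachable only from inside the cluster:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: private-api          # placeholder; other pods reach it as private-api:8080
spec:
  # no "type" field: defaults to ClusterIP, i.e. cluster-internal only
  selector:
    app: private-api         # matches the private microservice's pods
  ports:
    - port: 8080             # the port other pods call
      targetPort: 8080       # the port the container listens on
```

The public microservices then call it at private-api:8080 (or the fully qualified private-api.<namespace>.svc.cluster.local), while the internet-facing Ingress stays untouched.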

How to configure two ECS clusters in the same VPC with Terraform?

I am trying to add a new cluster to my VPC using Terraform; this cluster will handle heavy calculation and other activities. But I have no idea how to configure redirection for requests from the internet: cluster1 must respond to cluster1.mydomain.com and cluster2 to cluster2.mydomain.com.
The general steps are as follows:
Register mydomain.com in Route53 (R53), for example.
Create two A alias records (cluster1.mydomain.com and cluster2.mydomain.com) in R53 pointing to the ALB's DNS name.
Set up two target groups (TGs) for the ALB: the first TG (TG1) for ECS service one, and TG2 for service two.
On the ALB, set up a single listener (e.g. HTTP 80) with two different rules. For example:
Rule one will be based on the Host header being equal to cluster1.mydomain.com and will forward to TG1.
Rule two will be based on the Host header being equal to cluster2.mydomain.com and will forward to TG2.
A compulsory default rule (or use rule two as the default).
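Since the question asks for Terraform: the relevant resources are aws_route53_record, aws_lb_target_group, and aws_lb_listener_rule. For illustration, a hypothetical CloudFormation-style sketch of rule one (all references are placeholders):

```yaml
Cluster1Rule:
  Type: AWS::ElasticLoadBalancingV2::ListenerRule
  Properties:
    ListenerArn: !Ref Http80Listener     # the single listener from the steps above
    Priority: 1
    Conditions:
      - Field: host-header
        HostHeaderConfig:
          Values:
            - cluster1.mydomain.com      # requests for cluster1 ...
    Actions:
      - Type: forward
        TargetGroupArn: !Ref TG1         # ... are forwarded to TG1
```

Rule two is identical, with cluster2.mydomain.com and TG2; in Terraform the same shape becomes an aws_lb_listener_rule with a host_header condition block.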

Exposing various ports behind a load balancer on Rancher/AWS

I am setting up a Rancher environment.
The Rancher server is behind a classic ELB (since ALBs are not recommended per Rancher guidelines).
I also want to make available Prometheus and Grafana services.
These are offered via Rancher catalogue and will run as container services, being exposed on Rancher host ports 3000 and 9090.
Since the Rancher server (per their recommendations) requires an ELB, I wanted to explore the options for making the two services above available using the most minimal setup possible.
If the server is available on say rancher.mydomain.com, ideally I would like to have the other two on grafana.mydomain.com and prometheus.mydomain.com.
Can I at least combine the latter two behind an ALB?
If so, how do I map them?
Do I place <my_rancher_host_public_IP>:3000 and <my_rancher_host_public_IP>:9090 behind an ALB?
You could do this a couple of (maybe more) ways:
Use an external DNS updater like the Route 53 infra catalog item. That will automatically map DNS directly to the public IP of the host that houses the services. Modify the DNS template so it prepends the service name to the domain.
Register your targets and map the ports, then set a DNS entry pointing to the ALB (sketched below).
The first way allows DNS to update in case the services shift across hosts in your environment. To leverage the second way, you would need to force containers onto specific hosts.
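For the second way, a hypothetical CloudFormation-style sketch of one of the two target groups (the VPC ID and instance ID are placeholders; the Prometheus one is identical with port 9090):

```yaml
GrafanaTargetGroup:
  Type: AWS::ElasticLoadBalancingV2::TargetGroup
  Properties:
    VpcId: vpc-0123456789abcdef0      # placeholder VPC
    Protocol: HTTP
    Port: 3000                        # Grafana's host port on the Rancher host
    TargetType: instance
    Targets:
      - Id: i-0123456789abcdef0       # the Rancher host's instance ID
        Port: 3000
```

An ALB listener rule with a host-header condition for grafana.mydomain.com then forwards to this target group (the same mechanism as in the previous answer), and Route 53 alias records for grafana.mydomain.com and prometheus.mydomain.com point at the ALB.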