What should I use as the entry point for a multi-cloud application? - amazon-web-services

I am deploying two K8S clusters, one on AWS EKS and the other on GCP GKE. Each cluster exposes an ingress, which is a load balancer for consumers to connect to. Together they work as a high-availability application: if one cloud is down, the other cloud can pick up the traffic.
Now I need to create another entry point that points to these two load balancers so that clients can consume a single endpoint. I see two options here: DNS (like Route 53) or another load balancer.
But I don't know which one is better or what the main difference between them is. I'd like to achieve:
high availability, meaning that if one region is down, clients can still use my application from the other cloud.
an active-active setup where both clusters in both clouds serve traffic, and the entry point routes requests based on rules like weighted, random, geolocation, etc.
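For context, this is roughly what I imagine the Route 53 option would look like in Terraform (the hosted zone, record name, ingress hostnames, health-check paths, and weights below are just placeholders, not my real setup):

```hcl
# Health checks against each cluster's ingress (hostnames are placeholders).
resource "aws_route53_health_check" "eks" {
  fqdn              = "eks-ingress.example.com"
  type              = "HTTPS"
  port              = 443
  resource_path     = "/healthz"
  failure_threshold = 3
  request_interval  = 30
}

resource "aws_route53_health_check" "gke" {
  fqdn              = "gke-ingress.example.com"
  type              = "HTTPS"
  port              = 443
  resource_path     = "/healthz"
  failure_threshold = 3
  request_interval  = 30
}

# Two weighted records for the same name. Route 53 stops returning a record
# whose health check fails, so the surviving cloud keeps taking traffic.
# Weighted records of the same name must share a type, so this assumes both
# ingresses are reachable via a hostname (a bare GKE IP would need A records).
resource "aws_route53_record" "app_eks" {
  zone_id         = aws_route53_zone.main.zone_id
  name            = "app.example.com"
  type            = "CNAME"
  ttl             = 60
  set_identifier  = "aws-eks"
  records         = ["eks-ingress.example.com"]
  health_check_id = aws_route53_health_check.eks.id

  weighted_routing_policy {
    weight = 50
  }
}

resource "aws_route53_record" "app_gke" {
  zone_id         = aws_route53_zone.main.zone_id
  name            = "app.example.com"
  type            = "CNAME"
  ttl             = 60
  set_identifier  = "gcp-gke"
  records         = ["gke-ingress.example.com"]
  health_check_id = aws_route53_health_check.gke.id

  weighted_routing_policy {
    weight = 50
  }
}
```

The weighted_routing_policy blocks could presumably be swapped for latency or geolocation routing policies if I go with those rules instead.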

Related

ECS Fargate cross microservice communication options

I have been looking into different ways of connecting multiple microservices, each within its own service/task, using ECS Fargate.
Normally, if all microservices are defined in the same task definition, we can just use the local IP with the corresponding ports, but this means we cannot scale individual microservices. From what I can tell, there are two 'main' ways of enabling this communication when we break these out into multiple services:
Add a load balancer to each service and use the load balancer's public IP as the single point of access from one service to another.
Questions I have on this are:
a. Do all the services that need to communicate need to sit in the same VPC and have the service's incoming rules set to the security group of the load balancer?
b. Say we have now provisioned the entire setup and need to set one of the load balancers' public DNS names in one microservice's code base. What's the best way of attaining this? I'm guessing some sort of Terraform script that 'assumes' the public DNS that will be added to it?
Making use of AWS Service Discovery, meaning we can query service to service with a simple built-up identifier.
Question I have for this is:
a. Can we still attach load balancers to the services and STILL use service discovery? Or does service discovery have an under-the-hood load balancer already configured?
Many thanks in advance for any help!
1.a All services in the same VPC and their security groups (SGs)
I assume that you are talking about the case where each service will have its own load balancer (LB). Since the LBs are public, they can be in any VPC, region, or account.
SGs are generally set up so that the incoming rules of the services allow only connections from the SG of the LB.
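For example, a minimal Terraform sketch of such a rule could look like this (the port and SG resource names are just placeholders):

```hcl
# Allow the service's container port only from the load balancer's SG.
resource "aws_security_group_rule" "service_from_lb" {
  type                     = "ingress"
  from_port                = 8080 # assumed container port
  to_port                  = 8080
  protocol                 = "tcp"
  security_group_id        = aws_security_group.service.id # SG on the service tasks
  source_security_group_id = aws_security_group.lb.id      # SG on the load balancer
}
```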
1.b DNS
Each task can have environment variables, and this is a good way to pass the DNS values. If you are talking about Terraform (TF), then TF would provision the LBs first, then create the tasks and set the env variables with the DNS values of the LBs. Thus, you would know the DNS names of the LBs, as they would have been created before your services.
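A rough Terraform sketch of that ordering (resource names, image, and the variable name are hypothetical); the task definition simply references the LB's dns_name attribute:

```hcl
resource "aws_ecs_task_definition" "consumer" {
  family                   = "consumer"
  requires_compatibilities = ["FARGATE"]
  network_mode             = "awsvpc"
  cpu                      = "256"
  memory                   = "512"

  container_definitions = jsonencode([
    {
      name  = "consumer"
      image = "example/consumer:latest" # placeholder image
      environment = [
        {
          # TF resolves the LB's DNS name before creating this task definition,
          # because the reference creates an implicit dependency on the LB.
          name  = "ORDERS_SERVICE_URL"
          value = "http://${aws_lb.orders.dns_name}"
        }
      ]
    }
  ])
}
```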
2.a Service discovery (SD)
SD is only for private communication between services. No internet is involved, so everything must be in the same VPC or in peered VPCs. So it's basically the opposite of the first solution with LBs.
I think you should also be able to use a public LB along with SD.
SD does not use an LB. Instead, when you query the DNS name of a service through SD, you get the private IP addresses of its tasks in random order. The random order approximates load balancing of connections between the tasks of a service.
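A rough Terraform sketch of wiring SD up (the namespace and service names are hypothetical); each running task gets an A record under the private DNS namespace, and the MULTIVALUE answers return the shuffled IP list described above:

```hcl
resource "aws_service_discovery_private_dns_namespace" "local" {
  name = "local" # services resolve as <service>.local inside the VPC
  vpc  = aws_vpc.main.id
}

resource "aws_service_discovery_service" "orders" {
  name = "orders"

  dns_config {
    namespace_id   = aws_service_discovery_private_dns_namespace.local.id
    routing_policy = "MULTIVALUE"

    dns_records {
      ttl  = 10
      type = "A"
    }
  }
}

# Registering the ECS service so each running task is added to the DNS records.
resource "aws_ecs_service" "orders" {
  name            = "orders"
  cluster         = aws_ecs_cluster.main.id
  task_definition = aws_ecs_task_definition.orders.arn
  desired_count   = 2
  launch_type     = "FARGATE"

  network_configuration {
    subnets         = [aws_subnet.private.id]
    security_groups = [aws_security_group.service.id]
  }

  service_registries {
    registry_arn = aws_service_discovery_service.orders.arn
  }
}
```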

Cloud locations for Kubernetes (GKE) - google-cloud-platform

Do you know where to look for information regarding setting up the distribution of containers and volumes across locations?
The following link states that Kubernetes Engine is an available product for locations:
"Products available by location"
"Deploy resources in specific zones, regions and multi-regions."
https://cloud.google.com/about/locations
Just to add more details to what @Harsh Manvar mentioned and to answer your question: the global load balancing offerings will route traffic to the backend instances in the zone closest to the client. That said, there are some scenarios where this may not occur 100% of the time.
In general, user traffic is directed to a single IP address. Points of Presence (PoPs) terminate traffic as near as possible to your users and direct load-balanced traffic to the closest healthy backend that has capacity. More information here.
'nodeSelector' basically forces a Pod to run only on nodes in that node pool. Depending on resource availability, traffic will be sent from the load balancer to the aforementioned nodes where you attached the label.
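As a small sketch using the Terraform kubernetes provider (the deployment name, image, and node-pool name are assumptions, not from the question), the built-in GKE node label keyed by node pool is one way to pin Pods like this:

```hcl
resource "kubernetes_deployment" "web" {
  metadata {
    name = "web"
  }

  spec {
    replicas = 3

    selector {
      match_labels = { app = "web" }
    }

    template {
      metadata {
        labels = { app = "web" }
      }

      spec {
        # Schedule the Pods only on nodes belonging to the chosen node pool.
        node_selector = {
          "cloud.google.com/gke-nodepool" = "europe-pool" # assumed pool name
        }

        container {
          name  = "web"
          image = "nginx:1.25"
        }
      }
    }
  }
}
```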
I hope this helps

GCP internal load balancer between two VMs (Compute instances)

Is it possible in GCP to create an internal load balancer that balances the traffic between two Compute Instances in different regions?
Two instances (running NAT on them) are in different regions (e.g. one in us-central1 and the other in asia-south1), serving something on the same ports, and the internal load balancer (e.g. with IP 170.0.0.4) accepts requests from the clients and forwards them to these VMs.
This would help in creating a highly available service (NAT in this case) that will keep working even when one VM, the service, or a whole region is down.
EDIT:
Adding some more details here:
Both VMs and the Load Balancer have internal IPs.
Both VMs and the Load Balancer are in the same VPC network.
I need a layer 7 (HTTP(S)) internal load balancer.
Internal load balancing is only regional, and since you want to have backends in different regions, it can still work, but you will have to set one up yourself.
It's not possible "out of the box".
You can have a look at the Internal Load Balancing documentation, which explains how this works and why. Here's also a table with the available load balancing options.
If you want to configure your own LB then maybe try Envoy Proxy (or Nginx, or any solution you want).
In essence, unless you set up your own load balancer, GCP doesn't have this functionality.
You could also use an external load balancer (which is risky), use it to load balance the traffic, and restrict external traffic to your instances to just a bunch of known IPs or ranges.
With only two endpoints as they stand, it is impossible to use a load balancer, because there is nothing for it to balance. You could only put both of them behind a load balancer; then it could balance their traffic. Moving both into the same region might be the only option for using the internal load balancer, but here too, the servers need to be put behind it as backends.
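For illustration, a rough Terraform sketch of that single-region setup (shown as an internal TCP/passthrough variant rather than the layer-7 one asked about; the zone, names, and ports are assumptions), with both VMs in one instance group behind the internal forwarding rule:

```hcl
resource "google_compute_instance_group" "nat" {
  name = "nat-group"
  zone = "us-central1-a" # both VMs moved into a single region/zone
  instances = [
    google_compute_instance.nat_a.self_link,
    google_compute_instance.nat_b.self_link,
  ]
}

resource "google_compute_health_check" "nat" {
  name = "nat-hc"

  tcp_health_check {
    port = 80
  }
}

resource "google_compute_region_backend_service" "nat" {
  name                  = "nat-backend"
  region                = "us-central1"
  protocol              = "TCP"
  load_balancing_scheme = "INTERNAL"
  health_checks         = [google_compute_health_check.nat.id]

  backend {
    group = google_compute_instance_group.nat.id
  }
}

# Internal IP that clients inside the VPC use as the single entry point.
resource "google_compute_forwarding_rule" "nat" {
  name                  = "nat-ilb"
  region                = "us-central1"
  load_balancing_scheme = "INTERNAL"
  backend_service       = google_compute_region_backend_service.nat.id
  all_ports             = true
  network               = google_compute_network.main.id
  subnetwork            = google_compute_subnetwork.main.id
}
```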

Networking Between Tasks in AWS ECS Fargate

I'm trying to set up a cluster with several different tasks that need to be able to communicate with each other. I have turned on Service Discovery for each task, and I see all of the Route 53 DNS entries in my private hosted zone get updated as I spin up new tasks, but for whatever reason, when I try to use the domain name of a service (wordpress.local), my other containers cannot resolve it. They are all in the same availability zone and the same subnet. I'm not totally certain what else I need to do in order to get these tasks to be able to communicate with each other, aside from setting up a target group in my load balancer, which seems unnecessary as I have Service Discovery turned on...

AWS alternative to DNS failover?

I recently started reading about and playing around with AWS. I have a particular interest in the different high-availability architectures that can be achieved using the platform. Specifically, I am looking for a reliable poor man's solution that can be implemented using the least number of servers.
So far, I am satisfied with solutions for the main HA concerns: load balancing, redundancy, auto recovery, scalability ...
The only sticking point I have is with failover solutions.
Using an ELB might seem great; however, ELB actually uses DNS balancing under the hood. See Is AWS's Elastic Load Balancer a single point of failure?. Also, from a Netflix blog post, Lessons Netflix Learned from the AWS Outage:
This is because the ELB is a two tier load balancing scheme. The first tier consists of basic DNS based round robin load balancing. This gets a client to an ELB endpoint in the cloud that is in one of the zones that your ELB is configured to use.
Now, I have learned DNS failover is not an ideal solution, as others have pointed out, mainly because of unpredictable DNS caching. See for example: Why is DNS failover not recommended?.
Other than ELBs, it seems to me that most AWS HA architectures rely on DNS failover using Route 53.
Finally, the floating IP/Elastic IP (EIP) strategy has popped up in a very small number of articles, such as Leveraging Multiple IP Addresses for Virtual IP Address Fail-over and I'm having a hard time figuring out if this is a viable solution for production systems. Also, all examples I came across implemented this using a set of active-passive instances. It seems like a waste to have a passive for every active to achieve this.
In light of this, I would like to ask you what is a faster and more reliable way to perform failover?
More specifically, please discuss how to perform failover without using DNS for the following 2 setups:
2 active-active EC2 instances in separate AZs. Active-active, because this is a budget setup where we can't afford to have an instance sitting around.
1 ELB with 2 EC2 instances in region A, 1 ELB with 2 EC2 instances in region B. Again, both regions are active and serving traffic. How do you handle the failover from 1 ELB to the other?
You'll understand ELB better by playing with it, if you are the inquisitive type, as I am.
"1" ELB provisioned in 2 availability zones is billed as 1 but deployed as 2. There are 2 IP addresses assigned, one to each balancer, and 2 A records auto-created, one for each, with very short TTLs.
Each of these 2 balancers will forward traffic to the instance in its same AZ, or you can enable cross-AZ load balancing (and you should, if you only have 1 server instance in each AZ).
These IP addresses do not change often and though it stands to reason that ELBs fail like anything else, I have maybe 30 of them and have never knowingly had a dead one on my hands, presumably because the ELB infrastructure will replace a dead instance and change the DNS without your intervention.
For 2 regions, you have little choice other than using DNS at some level. Latency-based routing from Route 53 can send people to the closest site in normal operations and route all traffic to the other site in the event of an outage of an entire region (as detected by Route 53 health checks), but with this you are somewhat more likely to encounter issues with DNS caching when an entire region is unavailable.
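As a rough Terraform sketch of that latency-based setup (the hosted zone, record name, and the per-region load balancer resources are placeholders), with evaluate_target_health shifting traffic away from a region whose targets go unhealthy:

```hcl
resource "aws_route53_record" "app_region_a" {
  zone_id        = aws_route53_zone.main.zone_id
  name           = "app.example.com"
  type           = "A"
  set_identifier = "region-a"

  latency_routing_policy {
    region = "us-east-1"
  }

  alias {
    name                   = aws_lb.region_a.dns_name
    zone_id                = aws_lb.region_a.zone_id
    evaluate_target_health = true
  }
}

resource "aws_route53_record" "app_region_b" {
  zone_id        = aws_route53_zone.main.zone_id
  name           = "app.example.com"
  type           = "A"
  set_identifier = "region-b"

  latency_routing_policy {
    region = "eu-west-1"
  }

  alias {
    name                   = aws_lb.region_b.dns_name
    zone_id                = aws_lb.region_b.zone_id
    evaluate_target_health = true
  }
}
```

In practice the two aws_lb resources would come from provider configurations aliased to their respective regions; this sketch leaves that wiring out.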
Of course, part of the active/passive dilemma in a single region using Elastic IP is easily remedied with HAProxy on both app servers. It's an HTTP request router and load balancer like ELB, but with a broader set of features. The code is so tight that you can likely run it on your app servers with negligible CPU consumption. The instance with the EIP would then balance traffic between its local app server and the peer. Across regions, HAProxy behind ELB could forward traffic to a mate in a remote region if the local region is up but, for whatever reason, the application can't serve requests from the local region. (I have used such a setup to increase the availability of external services, by bouncing the request to a remote AWS region when the direct Internet path from the local region is not working.)