EKS Load Balancer Best Practices - amazon-web-services

We are looking to switch over to Kubernetes for our deployment on AWS. One area of concern is setting up the load balancer for the frontend application.
It appears recommended to use a Service of type "LoadBalancer" in the cluster. However, I'm worried about this because there seems to be no way to specify which load balancer is used, so any redeployment of the Service would necessarily change the DNS name, resulting in downtime.
Is there a recommended, practical way to stay on the same load balancer? Or am I overthinking this, and this is acceptable for a generic SaaS application?

The approach generally taken is this:
Use nginx or Traefik (L7 load balancers) as an ingress controller; it is a static part of the architecture that rarely changes except for upgrades. The controller itself is exposed through a single Service of type LoadBalancer, so the external DNS name stays stable.
You then add Ingress rules, which bind a DNS name to a Service (say your frontend service is bound to www.example-dns.com); the frontend Service has multiple Pods behind it to which the traffic is directed.
There are multiple ways to handle load at the Pod level; a Horizontal Pod Autoscaler can be used for each service individually.
nginx and Traefik themselves run entirely within the EKS cluster boundary.
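A minimal sketch of such an Ingress rule, assuming the nginx ingress controller is installed and a hypothetical frontend Service listening on port 80:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: frontend-ingress
spec:
  ingressClassName: nginx          # assumes the nginx ingress controller is installed
  rules:
    - host: www.example-dns.com    # the DNS name bound to the service
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: frontend     # hypothetical frontend Service
                port:
                  number: 80
```

Because application redeployments only touch the Ingress rules and the Pods behind the Service, the load balancer in front of the ingress controller, and therefore its DNS name, stays the same.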

Related

Can an API Gateway point to multiple Application Load Balancers?

Having a hard time figuring out a microservices architecture.
Right now I have an ECS Cluster with two services (TodoService, CategoriesService) running in containers. Both of the services have their own Load Balancer. I'm trying to build an API Gateway where /todos would route to the Todo-app-load-balancer and /categories would route to the Categories-app-load-balancer.
First, is this a good approach to microservices? And second, question from the title.
First, is this a good approach to microservices?
Yes, there is nothing wrong with this approach.
Can an API Gateway point to multiple Application Load Balancers?
Yes, you can point each method from the API gateway to an entirely different backend resource.
In the case of an Application Load Balancer, there are multiple ways of doing this. Probably the easiest is to have a public Application Load Balancer and to create an HTTP integration for it. You have to specify the DNS name of the Application Load Balancer as the endpoint. For more information, see this support page.
Another option would be to use VPC Links, which integrate with private load balancers. While this would be recommended for production, it is a bit more complex to set up.
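As a rough CloudFormation sketch of the first option, an HTTP API with an HTTP_PROXY integration pointing at the ALB's DNS name (the ALB DNS name and resource names here are made up):

```yaml
Resources:
  HttpApi:
    Type: AWS::ApiGatewayV2::Api
    Properties:
      Name: microservices-api          # hypothetical
      ProtocolType: HTTP

  TodosIntegration:
    Type: AWS::ApiGatewayV2::Integration
    Properties:
      ApiId: !Ref HttpApi
      IntegrationType: HTTP_PROXY
      IntegrationMethod: ANY
      # DNS name of the public Todo ALB (hypothetical)
      IntegrationUri: http://todo-app-load-balancer-123456.us-east-1.elb.amazonaws.com/{proxy}
      PayloadFormatVersion: "1.0"

  TodosRoute:
    Type: AWS::ApiGatewayV2::Route
    Properties:
      ApiId: !Ref HttpApi
      RouteKey: ANY /todos/{proxy+}    # a second integration/route pair would cover /categories
      Target: !Sub integrations/${TodosIntegration}
```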
Whether it is a good or bad approach is more of an architectural decision, but I can suggest that using one ALB (Ingress) with different routing rules can solve your problem. Also, note that API Gateway only allows attaching NLB-type load balancers directly (via VPC Link); an ALB will not show up there, but there is still a workaround of adding its DNS name directly.
Direct integration is not allowed for an ALB, but you can use its DNS name manually.

Are there two levels of load balancing when using Istio Destination Rules?

As far as I understood, Istio Destination Rules can define load balancing policies to reach a subset of a service, e.g. a subset based on different versions of the service. So the Destination Rules are the first level of load balancing.
The request will eventually reach a K8s Service, which is generally implemented by kube-proxy. Kube-proxy does simple load balancing across the pods in its backend. Here is the second level of load balancing.
Is there a way to remove the second load balancer? For example, could we create many Service instances that offer the same service and can be load-balanced by Destination Rules, and then have only one pod per Service instance, so that kube-proxy does not apply load balancing?
According to the Istio documentation:
Istio’s traffic routing rules let you easily control the flow of traffic and API calls between services. Istio simplifies configuration of service-level properties like circuit breakers, timeouts, and retries, and makes it easy to set up important tasks like A/B testing, canary rollouts, and staged rollouts with percentage-based traffic splits. It also provides out-of-box failure recovery features that help make your application more robust against failures of dependent services or the network.
Istio’s traffic management model relies on the Envoy proxies that are deployed along with your services. All traffic that your mesh services send and receive (data plane traffic) is proxied through Envoy, making it easy to direct and control traffic around your mesh without making any changes to your services.
If you’re interested in the details of how the features described in this guide work, you can find out more about Istio’s traffic management implementation in the architecture overview. The rest of this guide introduces Istio’s traffic management features.
This means that the Istio service mesh communicates via the Envoy proxy, which in turn relies on Kubernetes networking.
Take as an example a VirtualService that uses the Istio ingress gateway and load-balances its traffic to two different services based on labels; each of those services can then have multiple pods.
Istio load balancing in this case works only at layer 7, routing to a specific endpoint (one of the services), and relies on Kubernetes to handle the connections and the rest, including the services' round-robin load balancing (layer 4) when there are multiple pods.
The advantage of having a single service with multiple pods is obviously easier configuration and management. With one pod per service, each service would need to be reconfigured separately and would lose its ability to scale.
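As an illustration, a minimal sketch of the two levels, assuming a hypothetical frontend Service whose pods are labelled version: v1 and version: v2:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: frontend
spec:
  host: frontend                # hypothetical Kubernetes Service
  subsets:
    - name: v1
      labels:
        version: v1
    - name: v2
      labels:
        version: v2
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: frontend
spec:
  hosts:
    - frontend
  http:
    - route:
        - destination:
            host: frontend
            subset: v1
          weight: 80            # Istio (L7) picks the subset...
        - destination:
            host: frontend
            subset: v2
          weight: 20            # ...Kubernetes still balances across the pods inside it
```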
There is a great video on YouTube which partially covers this topic:
Life of a packet through Istio by Matt Turner.
I highly recommend watching it, as it explains how Istio works at a fundamental level.

ASP.NET Core on AWS Fargate with Reverse Proxy and ALB

We are looking to migrate our .NET Core applications to AWS. For some background information: at the moment we host our applications on VMs behind IIS, which, with the .NET Core Hosting module, is very straightforward. Our applications are a combination of intranet and externally facing applications, nothing with very high traffic demand.
After some research it seems like AWS ECS Fargate is a good option. The plan is to Dockerize our applications and deploy them to ECS Fargate at this point.
My concern is mainly about the topic of reverse proxies.
For now I have an IdentityServer application successfully running on ECS Fargate behind an Application Load Balancer. The ALB does TLS termination and forwards traffic to the container running under ECS Fargate over plain HTTP. It's a very straightforward setup, but I worry I am missing something, as this really is not my field of expertise.
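For reference, the setup described above corresponds roughly to this CloudFormation sketch (resource and parameter names are hypothetical):

```yaml
Resources:
  HttpsListener:
    Type: AWS::ElasticLoadBalancingV2::Listener
    Properties:
      LoadBalancerArn: !Ref AppLoadBalancer        # ALB defined elsewhere (hypothetical)
      Port: 443
      Protocol: HTTPS                              # TLS terminates here
      Certificates:
        - CertificateArn: !Ref TlsCertificateArn   # ACM certificate (hypothetical parameter)
      DefaultActions:
        - Type: forward
          TargetGroupArn: !Ref AppTargetGroup

  AppTargetGroup:
    Type: AWS::ElasticLoadBalancingV2::TargetGroup
    Properties:
      VpcId: !Ref VpcId                            # hypothetical parameter
      Port: 80
      Protocol: HTTP                               # plain HTTP to the Fargate task
      TargetType: ip                               # required for Fargate (awsvpc) tasks
      HealthCheckPath: /health                     # hypothetical health endpoint
```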
My question is: does the above setup sound sufficient? My current headache is whether it would be worth adding Nginx (or a similar reverse proxy) to the pipeline. In that case we'd have two scenarios, as I understand it:
Keep the ALB and add another reverse proxy (say Nginx). The ALB still does TLS termination and forwards the traffic to Nginx, which in turn forwards the traffic to the container running the application itself. I am maybe not seeing the benefits of this; however, I fear I might be wrong. I feel it adds unnecessary complexity to the setup.
Skip the ALB altogether and expose Nginx (or another reverse proxy) publicly. The Nginx instance would handle TLS termination, load balancing and so on. While I can see the benefit of more control with this scenario, again the additional setup makes me think it might not be worth it, seeing as we are a small team with limited hosting experience.
So my main question is whether the original scenario sounds plausible for a production environment. Any other feedback is of course also highly appreciated.

Spring Boot - Different systems (Eureka, Zuul, Ribbon, nginx) used for what?

I have been working with Spring and now would like to learn Spring Boot and microservices. I understand what microservices are all about and how they work. While going through the docs I came across many things used to develop microservices along with Spring Boot, which left me very confused.
I have listed the systems and my questions below:
Netflix Eureka - I understand this is a service discovery platform. All services are registered with the Eureka server, and all microservices are Eureka clients. My doubt is: without an API gateway, is there any use for this service registry? This is to understand the actual use of a service registry.
Zuul API gateway - I understand Zuul can be used as an API gateway, which is basically a load balancer that calls the appropriate microservice corresponding to the request URL. Is that assumption correct? Will the API gateway interact with Eureka to get the appropriate microservice?
NGINX - I have read that NGINX can also be used as an API gateway. Is that possible? I also read somewhere else that NGINX can be used as a service registry, that is, as an alternative to Eureka! So which is right: API gateway, service registry, or both? I know nginx is a web server and that reverse proxies can be powerfully configured.
AWS API Gateway - Can this also be used as an alternative to Zuul?
Ribbon - What is Ribbon used for? I didn't understand!
AWS ALB - This can also be used for load balancing. So do we need Zuul if we have AWS ALB?
Please help
Without an API gateway, is there any use for this service registry?
Yes. For example, you can use it to locate (IP and port) all of your microservices. This comes in handy for devops-type work. For example, on one project I worked on, we used Eureka to find all instances of our microservices and ping them for their status (/health, /info).
I understand Zuul can be used as an API gateway, which is basically a load balancer that calls the appropriate microservice corresponding to the request URL. Is that assumption correct?
Yes, but it can do a lot more. Essentially, because Zuul is more of a framework/library that you turn into a microservice, you can code it to implement any kind of routing logic you can come up with. It is very powerful in that sense. For example, let's say you want to change how you route based on the time of day or any other external factor; with Zuul you can do it.
Will the API gateway interact with Eureka to get the appropriate microservice?
Yes. You configure Zuul to point to Eureka. It becomes a client of Eureka and even subscribes to Eureka for real-time updates (which instances have joined or left).
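A minimal application.yml sketch for such a Zuul gateway, assuming a local Eureka server and hypothetical service names:

```yaml
# application.yml for a Zuul gateway that is also a Eureka client
server:
  port: 8080

eureka:
  client:
    serviceUrl:
      defaultZone: http://localhost:8761/eureka/   # hypothetical Eureka server URL

zuul:
  routes:
    todos:
      path: /todos/**
      serviceId: todo-service         # resolved via Eureka, load balanced by Ribbon
    categories:
      path: /categories/**
      serviceId: categories-service   # hypothetical service names
```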
I have read that NGINX can also be used as an API gateway. I also read somewhere else that NGINX can be used as a service registry, that is, as an alternative to Eureka! So which is right: API gateway, service registry, or both?
Nginx is pretty powerful and can do API gateway-type work. But there are some major differences. AFAIK, microservices cannot dynamically register with Nginx as they can with Eureka (please correct me if I am wrong). Second, while I know Nginx is highly (very highly) configurable, I suspect its configuration abilities do not come close to Zuul's routing capabilities (since you have the whole Java language at your disposal within Zuul to code your routing logic). There may be service discovery solutions that work with Nginx; Nginx would then take care of the routing and such, but service discovery would still require a separate solution.
Can this also be used as an alternative to Zuul?
Yes, AWS API Gateway can be used as a Zuul replacement of sorts. The issue here, just like with Nginx, is service discovery. AWS API Gateway lets you apply logic to your routing, though not as open-ended as Zuul.
What is Ribbon used for?
While you can use the Ribbon library directly, for the most part consider it an internal dependency of Zuul. It helps Zuul do the simple load balancing that it does. Please note that this project is in maintenance mode and is not recommended any more.
This can also be used for load balancing. So do we need Zuul if we have AWS ALB?
You can use an ALB with ECS (Elastic Container Service) to replace Eureka/Zuul. ECS will take care of the service discovery for you and will map all instances of a particular service to a Target Group. Your ALB routing table can then route to Target Groups based on simple routing rules. The routing rules in ALB are very simple, though they are improving over time.
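For example, a path-based routing rule in CloudFormation might look roughly like this (listener and target group names are hypothetical):

```yaml
Resources:
  TodosRule:
    Type: AWS::ElasticLoadBalancingV2::ListenerRule
    Properties:
      ListenerArn: !Ref HttpListener           # hypothetical listener
      Priority: 10
      Conditions:
        - Field: path-pattern
          PathPatternConfig:
            Values:
              - /todos/*
      Actions:
        - Type: forward
          TargetGroupArn: !Ref TodoTargetGroup # ECS keeps this target group in sync
```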
Different systems that can be used to build microservices with Spring Boot:
Eureka:
Probably the first microservice to be up. Eureka is a service registry: it knows which microservices are running and on which ports. Eureka is deployed as a separate application, and we can use the @EnableEurekaServer annotation along with @SpringBootApplication to make that app a Eureka server. Now our Eureka service registry is up and running. From then on, all microservices register themselves with this Eureka server by using the @EnableDiscoveryClient annotation along with @SpringBootApplication in each deployed microservice.
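A minimal configuration sketch for the registry application itself (assuming the conventional port; pair it with the @EnableEurekaServer annotation mentioned above):

```yaml
# application.yml for the Eureka server application (a sketch)
server:
  port: 8761                      # the conventional Eureka port

eureka:
  client:
    registerWithEureka: false     # the registry should not register with itself
    fetchRegistry: false          # ...or fetch its own registry
```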
Zuul: Zuul is a load balancer, a routing application, and a reverse proxy server as well. Where before we might have used Apache for reverse proxying, for microservices we can use Zuul. The advantage is that in Zuul we can set configurations programmatically, e.g. if a request matches /customer/*, send it to this microservice. Zuul can also act as a load balancer, picking the appropriate microservice instance in a round-robin fashion. So how does Zuul know the details of the microservices? The answer is Eureka: it works along with Eureka to get microservice details. In fact, Zuul is also a Eureka client, which we mark with @EnableDiscoveryClient; that is how these two apps (Eureka and Zuul) are linked.
Ribbon:
Ribbon is used for load balancing. It is already available inside Zuul, and Zuul uses Ribbon for its load balancing. Microservices are identified by service name in the properties file. If we run two instances of one microservice on different ports, this is identified by Eureka and, together with Ribbon (inside Zuul), requests are redirected between them in a balanced way.
AWS ALB, NGINX, AWS API Gateway, etc.: There are alternatives for all the things mentioned above. AWS has its own load balancer, service discovery, API gateway, etc. And not only AWS; all cloud platforms, like Azure, have these. It depends on which one you use.
Adding a general point as well, about how these microservices communicate with each other: actual REST APIs can be called using RestTemplate or a Feign client, or message queues like RabbitMQ can be used.
Eureka can be used in conjunction with NGINX, which leads to a very powerful combination.
I am using it in an AWS EC2 environment. Previously, instead of NGINX, I was using Spring Cloud Gateway, and before that Zuul. Depending on the load, Spring Cloud Gateway was running on AWS t3.medium or t3.large instances. After moving to NGINX I am using a t3.micro (8 times less memory) instance. I am almost sure that I could do the trick with a t3.nano (16 times less memory) instance as well, but I wanted to be sure that there would be no surprises.
Below are the high-level steps you have to take in order to plug NGINX into the Eureka ecosystem. You can find more details in the NGINX With Eureka Instead of Spring Cloud Gateway or Zuul article.
Create a service which can read the configuration of all applications from Eureka and 'translate' it to NGINX configuration.
Create a cronjob entry which at a certain period reads the configuration from the above service and calls the NGINX hot reload.
NGINX then consumes the configuration produced by the service and the cronjob and works as the API gateway.

Eureka with AWS ECS

We are using Eureka with AWS ECS, a service that can scale Docker containers.
In ECS, if you leave out the host port, or specify it as '0', in your task definition, then the port is chosen automatically and reported back to the service. After the task is running, describing it should show what port(s) it bound to.
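For reference, dynamic host-port mapping in a task definition looks roughly like this (a CloudFormation sketch with hypothetical names; it applies to bridge networking on the EC2 launch type):

```yaml
# Excerpt from a CloudFormation task definition (names are hypothetical)
ContainerDefinitions:
  - Name: service-b
    Image: example/service-b:latest
    PortMappings:
      - ContainerPort: 8080       # port the app listens on inside the container
        HostPort: 0               # 0 = ECS picks a free host port (bridge networking)
```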
How can Eureka resolve what port to use if we have several EC2 instances? For example, Service A on EC2-A tries to call Service B on EC2-B. Eureka can resolve the hostname, but it cannot identify the exposed port.
Hi @Aleksandr Filichkin,
I don't think an Application Load Balancer and a service registry do the same thing.
The main difference is that traffic flows over the (application) load balancer, whereas the service registry just gives your client a healthy endpoint that it can address directly (so the network traffic does not flow through the service registry).
Cheap is a very relative term; maybe it's cheap for some, maybe it's an unnecessary overhead for others.
The issue was resolved:
https://github.com/Netflix/eureka/issues/937
The ECS agent now knows about the running port.
But I don't recommend using Eureka with ECS, because an Application Load Balancer does the same job: it works as service registry and discovery. You don't need to run an additional service (Eureka), and ALB is cheap.
There is another solution.
You can create an Application Load Balancer and a target group in which the Docker containers are launched.
Every Docker container sets its hostname to the hostname of the load balancer. If you need a pretty URL, you can use Route 53 for DNS routing.
It looks like this (see the diagrams "Service Discovery with Loadbalancer-Hostname" and "Request Flow" in the original post).
If you have two containers of the same task on different hosts, both will communicate the same load balancer hostname to Eureka.
With this solution you can use Eureka with Docker on AWS ECS without losing the advantages and flexibility of dynamic port mapping.
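A sketch of what each container's Eureka client configuration might then contain (the DNS name is hypothetical):

```yaml
# application.yml fragment for each container (DNS names are hypothetical)
eureka:
  instance:
    hostname: my-service-lb-123456.eu-west-1.elb.amazonaws.com  # the load balancer, not the host
    nonSecurePort: 80             # the port the load balancer listens on
```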