We are designing a new cluster for our application. We are required to use AWS EKS and Consul. We have the following questions:
1) Is it possible to set an AWS ALB ingress (see "Application load balancing on Amazon EKS" in the EKS docs) as a downstream from Consul so I can manage it in the rules?
In our local tests we used an nginx ingress and it worked perfectly, but on EKS the nginx ingress provisions a Classic Load Balancer, and these will be deprecated on August 15, 2022 (see "Migrate your Classic Load Balancer" in the Elastic Load Balancing docs).
Obviously we can’t create a new project with something that is going to be deprecated so soon.
2) Is ingress-gateway a replacement? Is it possible to create an ingress-gateway using the ALB ingress controller from EKS? As it stands, ingress-gateway also uses an AWS Classic Load Balancer, so we face the same deprecation problem.
3) Following this guide: "Deploy Consul on Amazon Elastic Kubernetes Service (EKS)" on HashiCorp Learn, I see that no type of ingress controller is taken into account. Does it make sense to control external access to services from Consul, or would ingress control suffice?
Thank you very much!
Any advice or documentation will be appreciated.
Cheers!
Background
Current State: I currently have an NLB that routes to an nginx server running on an EC2 instance.
Goal
I am trying to replace the nginx EC2 instance with a Fargate service that runs nginx.
I would like to keep the current NLB and set the Fargate service as the target group for the existing NLB.
Problem
According to the AWS documentation, an ECS Fargate service supports load balancing with an NLB or ALB: https://docs.aws.amazon.com/AmazonECS/latest/userguide/service-load-balancing.html
However, when I try to deploy the nginx task, the load balancing section only offers the option to select an existing ALB or create a new one.
I tried changing the task protocol to TCP and UDP; regardless of the protocol, when I try to deploy the task as a service, the only load balancer option is still Application Load Balancer.
Question
How do I load balance to a Fargate service task using an NLB? Am I missing a specific setting somewhere?
If you cannot set the Fargate service as a target group for an NLB directly, would it be reasonable to route traffic from an NLB to an ALB and then set the ALB's target group to the Fargate service?
You can absolutely use an NLB with an ECS Fargate service. I've done this before many times. My guess is you are simply encountering a bug in the AWS web UI. I've always used Terraform to deploy this sort of thing. I just checked in the ECS web UI, and on the 2nd step of creating a new ECS service I get the option of using a Network Load Balancer.
If your view doesn't look like that, try switching away from the "New ECS Experience" in the UI, which is still fairly beta and missing a lot of features.
I just went back and checked, and the new ECS UI is currently missing the option to select an NLB, so you have to continue using the old version of the UI until they fix that.
I have a running EKS cluster in AWS. Now I want to try to use my own Network Load Balancer, which was created without the AWS EKS annotations.
So my question: is it even possible to use my own NLB with EKS? If yes, how can I do it? If not, why not?
I've researched a lot and found one open-source kind for EKS named TargetGroupBinding. I provided the ARN of my target group, however the health checks are failing.
For some providers it is possible, such as Tencent Kubernetes Engine, per its official documentation:
metadata:
  name: my-service
  annotations:
    # ID of an existing load balancer
    service.kubernetes.io/tke-existed-lbid: lb-6swtxxxx
However, this is not supported for AWS: as you can see in the documentation above, there is no similar annotation for AWS load balancers.
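For completeness, the TargetGroupBinding CRD mentioned in the question (part of the AWS Load Balancer Controller) is the closest AWS equivalent; a minimal manifest might look roughly like this (name, namespace, and ARN are placeholders):

```yaml
# Binds an existing Kubernetes Service to a pre-created AWS target group.
apiVersion: elbv2.k8s.aws/v1beta1
kind: TargetGroupBinding
metadata:
  name: my-tgb                # hypothetical name
  namespace: default
spec:
  serviceRef:
    name: my-service          # existing Service to register as targets
    port: 80
  targetGroupARN: arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/my-tg/0123456789abcdef
```

If health checks fail with this approach, a common cause is a mismatch between the target group's target type (instance vs. ip) and how the pods are actually reachable.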
I was using AWS ECS Fargate to run my application and am migrating to AWS EKS. With ECS, I deployed an ALB to route requests to my service in the ECS cluster.
In Kubernetes, I read this doc https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer, and it seems that Kubernetes itself has a LoadBalancer service type, which creates an external hostname and IP address.
So my question is: do I still need to deploy an AWS ALB? If not, how can I publish this auto-generated hostname in Route53? Does it change if I redeploy the service?
Yes, you need it to create a Kubernetes Ingress using the AWS ALB Ingress Controller. The following link explains how to use an ALB as an Ingress controller in EKS: This
You don't strictly need an AWS ALB for apps in your EKS cluster, but you probably want it.
When adopting Kubernetes, it is handy to manage some infrastructure from the Kubernetes cluster in a similar way to how you manage apps. In some cases there is a tight coupling between the app and the configuration of your load balancer, so it makes sense to manage both the same way.
A Kubernetes Service of type LoadBalancer corresponds to a network load balancer (also known as L4 load balancer). There is also Kubernetes Ingress that corresponds to an application load balancer (also known as L7 load balancer).
To use an ALB or Ingress in Kubernetes, you also need to install an Ingress controller. For AWS you should install the AWS Load Balancer Controller; this controller now also provides features for network load balancers, e.g. IP mode or exposing services using an Elastic IP. Using a pre-configured IP should help with Route53.
See the EKS docs about EKS network load balancing and EKS application load balancing
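As a rough sketch (app name, service name, and paths are placeholders), an Ingress handled by the AWS Load Balancer Controller, which provisions an ALB for it, might look like:

```yaml
# The controller watches for ingressClassName: alb and creates an ALB.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app                # hypothetical name
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip   # route straight to pod IPs
spec:
  ingressClassName: alb
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-service   # hypothetical backend Service
                port:
                  number: 80
```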
As the others already mentioned: no, it is not required, but it is very helpful to use an ALB.
There are a couple of different solutions to that; my favorite is:
1) Use an Ingress controller like ingress-nginx (there are multiple Ingress controllers available for Kubernetes; a very good comparison is provided here).
2) Configure the Ingress controller's Service to use NodePort, with a port like 30080.
3) Create your own AWS ALB (with Terraform, for example) and add NodePort 30080 to the target group.
4) Create an Ingress resource to configure the Ingress controller.
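A sketch of the NodePort step, assuming the default ingress-nginx labels (selector and namespace depend on how you installed the controller):

```yaml
# Pin the ingress-nginx controller Service to a fixed NodePort so an
# externally managed ALB target group can register that port.
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  type: NodePort
  selector:
    app.kubernetes.io/name: ingress-nginx   # assumed Helm-chart labels
  ports:
    - name: http
      port: 80
      targetPort: http
      nodePort: 30080   # register this port in the ALB target group
```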
If you still have some questions, just ask them here :)
No, you don't need an ALB, and yes, you can use Route53 in an automated manner. Here's a nice article that describes the latter:
https://www.padok.fr/en/blog/external-dns-route53-eks
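For illustration, with external-dns deployed in the cluster (as in the article above), a hostname annotation on the Service is enough to get a Route53 record created and kept up to date; hostname, selector, and ports here are placeholders:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
  annotations:
    # external-dns creates/updates this record in Route53,
    # pointing at the load balancer's auto-generated hostname
    external-dns.alpha.kubernetes.io/hostname: my-app.example.com
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
```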
I am new to Kubernetes and AWS and am exploring different AWS technologies for a project. One thing I am doing as part of that is seeing how we can have routes in API Gateway connect to an EKS cluster (in a VPC).
This is what I have working:
An EKS Cluster
In the EKS cluster I have the nginx ingress controller running
I have an EC2 instance inside the VPC and verified that I can reach a service running in the cluster from that instance by using the ingress controller URL
This is what I am trying:
I tried to create an API Gateway route to access the same service using the ingress-controller url -> To achieve that, I am trying the steps here (because my cluster is in a VPC): https://docs.aws.amazon.com/apigateway/latest/developerguide/set-up-nlb-for-vpclink-using-console.html
One thing that is not clear to me is: how do I specify the ingress controller URL as a target for the NLB? The only targets I can specify are EC2 instances, but I want to direct the traffic through the ingress controller (which is a Service of type LoadBalancer in K8s).
If I am doing this the wrong way, please advise on the right way of exposing an EKS cluster in API Gateway through the nginx ingress controller. Thanks!
I have found the problem. When using the nginx ingress controller, I just had to specify the annotation that it is of type "nlb":
service.beta.kubernetes.io/aws-load-balancer-type: nlb
Once I deployed the ingress controller with this annotation, it automatically created an NLB in AWS and set the targets according to the ingress definition! I was creating a new NLB myself and then trying to point it at the ingress controller, which is not needed (nor the right way).
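In context, the annotation goes on the ingress controller's Service; a minimal sketch (selector and ports depend on your particular install):

```yaml
# Asking the in-tree AWS cloud provider for an NLB instead of a
# Classic Load Balancer when it provisions this LoadBalancer Service.
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx   # assumed controller labels
  ports:
    - name: http
      port: 80
      targetPort: http
```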
We are using a multi-container Docker environment for our project to deploy microservices (Scala) in AWS. We use AWS ECS (Elastic Container Service) to deploy and manage the application in the AWS cloud. We have placed the 5 microservices in separate task definitions and launched them using ECS.
We set up an ALB (Application Load Balancer), mapped it to ECS, and got the ALB domain (CNAME). We created new listener rules to route requests to targets based on the API path (path-based routing):
http://umojify-alb-1987551880.us-east-1.elb.amazonaws.com
Finally, we got the responses "502 Bad Gateway" and "Status code: 405". Please guide us on this issue.
Where did the issue come from, and why? Is it the ALB or the API?
API URL:
http://umojify-alb-1987551880.us-east-1.elb.amazonaws.com/save-user-rating
AWS ECS uses dynamic ports to connect to the microservice containers. Please check whether those ports are open on the container hosts (instances). I faced the same issue and had to open all the TCP ports for the ALB. See the AWS documentation for configuring the security group rules for container instances:
AWS security group rules for container instances
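As a sketch, such a rule can be expressed as a CloudFormation fragment; the resource names are hypothetical, and 32768-65535 is the ephemeral range commonly used by ECS dynamic host port mapping on recent ECS-optimized AMIs:

```yaml
# Allow the ALB's security group to reach the ephemeral port range
# that ECS uses for dynamic host port mapping on container instances.
ContainerInstanceIngress:
  Type: AWS::EC2::SecurityGroupIngress
  Properties:
    GroupId: !Ref ContainerInstanceSecurityGroup   # hypothetical resource
    IpProtocol: tcp
    FromPort: 32768
    ToPort: 65535
    SourceSecurityGroupId: !Ref AlbSecurityGroup   # hypothetical resource
```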