I have my application running in an EKS cluster. I have exposed the application using an Ingress with the ALB load balancer controller. The ALB load balancer controller was deleted recently; how can I find out when it got deleted?
If you have configured the ALB ingress controller to dump access logs to S3, that's a place to start. This guide will be a good starting point for understanding how it can be configured.
Here is an example annotation for the ALB ingress controller that you could use for searching:
alb.ingress.kubernetes.io/load-balancer-attributes: access_logs.s3.enabled=true,access_logs.s3.bucket=my-access-log-bucket,access_logs.s3.prefix=my-app
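For context, here is a minimal Ingress sketch showing where that annotation sits (the app name, scheme, and port are hypothetical; only the load-balancer-attributes value comes from the line above):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/load-balancer-attributes: access_logs.s3.enabled=true,access_logs.s3.bucket=my-access-log-bucket,access_logs.s3.prefix=my-app
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app
                port:
                  number: 80

If access logging was enabled, the timestamp of the last log object written under my-app/AWSLogs/... in the bucket can help narrow down when the load balancer stopped serving traffic.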
Related
I am running a microservice application on AWS ECS. Each microservice currently has its own load balancer.
There is one main public-facing service which the rest of the services communicate with via gateways. Having each service have its own ELB is currently too expensive. Is there some way to have only one ELB for the public-facing service that will route to the other services based on path? Is this possible without actually having the other service names in the URL? Could a reverse proxy work?
I know this is a broad question, but any help would be appreciated.
Inside your EC2 panel, go to the Load Balancers section, choose a load balancer, and then in the Listeners tab there is a button named "view/edit rules". There you can set conditions to use a single load balancer for different clusters/instances of your app. Note that for each container you need a target group defined.
You can configure the load balancer to route based on:
HTTP headers
Path, e.g. www.example.com/a or www.example.com/b
Host header (hostname)
Query strings
or even source IP.
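As an illustration, the same kind of path-based rule can be declared in CloudFormation; here is a minimal sketch, assuming the shared listener and the per-container target group already exist (all logical names are hypothetical):

OrdersRule:
  Type: AWS::ElasticLoadBalancingV2::ListenerRule
  Properties:
    ListenerArn: !Ref PublicListener          # the listener on the single shared ALB
    Priority: 10
    Conditions:
      - Field: path-pattern
        PathPatternConfig:
          Values:
            - /orders/*
    Actions:
      - Type: forward
        TargetGroupArn: !Ref OrdersTargetGroup   # one target group per service/container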
That's it! cheers.
I am having a hard time understanding the role of a load balancer when used with Ingress Nginx.
I know a load balancer distributes requests over multiple nodes.
E.g., let's say I have two nodes A and B, and they are responsible for processing requests at example.com.
So a load balancer will take requests for example.com and distribute them among the nodes with the help of a defined algorithm.
I also understand what an API gateway is.
E.g., let's say I have one order service and another payment service, so an API gateway will get the request for example.com and it will hand over the request for /orders to the order service and /payments to the payment service.
The Confusion:
Load balancer (NLB) -> API gateway -> services -> order deployment -> which is running two replicas.
Who distributes requests among those replicas for /orders?
What is the role of the load balancer in this case?
Some articles suggest creating a service of type LoadBalancer. What does that mean? What will this service do?
Also, the load balancer sits outside of the cluster (NLB -> [ k8s cluster ]), so how does it know how to distribute requests?
These could collectively be one question, I don't know.
Any kind of explanation would be appreciated.
I have gone through many articles and blogs, but none talks about the complete picture.
Update
Many of my doubts were cleared up by this article:
Within the cluster a service does load balancing among the replicas.
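For illustration, a minimal sketch of such a Service (all names are hypothetical): it selects the order deployment's pods by label and spreads requests across the replicas:

apiVersion: v1
kind: Service
metadata:
  name: order-service
spec:
  selector:
    app: order          # matches the pod labels of the order deployment
  ports:
    - port: 80          # port the Service listens on inside the cluster
      targetPort: 8080  # port the order containers serve on

So for /orders, the ingress routes to order-service, and the Service (via kube-proxy) picks one of the two replicas.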
I still have some questions:
Do I only need a load balancer to expose the ingress controller service?
What if there is some problem with the ingress controller and it restarts?
What will happen: will it get a new IP and the load balancer will point to the new one, or will the IP remain the same?
This article may help: https://aws.amazon.com/blogs/opensource/network-load-balancer-nginx-ingress-controller-eks/
Q: Do I only need a load balancer to expose the ingress controller service?
A: Mainly, yes. The load balancer's job is to expose Kubernetes services; with an ingress controller in place, the controller's service is the one that needs to be exposed this way.
Q: What if there is some problem with the ingress controller and it restarts?
A: Problems can appear if new breaking changes are applied; in that case the old controller will keep working, but the new one will fail to start, so you will have to run kubectl describe etc. to understand what is wrong.
Q: What will happen: will it get a new IP and the load balancer will point to the new one, or will the IP remain the same?
A: Why do you need the LB IPs? Use the LoadBalancer DNS name instead.
I am running AWS EKS. I am trying to install the sample Nginx app and point a subdomain to it. I have hooked AWS EKS up to an existing Rancher portal, and I am able to install my Nginx app using Rancher. It has a service file and an ingress file. Here is my Nginx helm chart: https://github.com/clayrisser/charts/tree/master/alpha/nginx
When I went through the many docs online, I saw that AWS EKS requires the AWS Load Balancer Controller, which auto-creates a load balancer of the type we specify through our ingress and service files, and we need to create an alias record pointing to the domain. How can we create an alias record if our domain is a root domain?
How can we eliminate creating LBs for each app? Is there a way to create and use only one LB for the whole cluster, so that all apps can use this LB?
Is there a way to have an IP for the ELB instead of the generated one?
Is there a better way of doing this?
How can we eliminate creating LBs for each app? Is there a way to create and use only one LB for the whole cluster, so that all apps can use this LB?
Yeah, you can install an ingress controller in your cluster with a service of type LoadBalancer. This will create a load balancer in your account. Some of the popular ones are Nginx, Traefik, Contour, etc.
The ingress resources you create can now use this ingress controller via the kubernetes.io/ingress.class annotation. Make sure your app's service type is not LoadBalancer, as that would create a new LB.
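For illustration, a minimal Ingress sketch wired to an nginx ingress controller (the host, app name, and port are hypothetical; the app's Service should be ClusterIP or NodePort):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app     # a ClusterIP Service, not LoadBalancer
                port:
                  number: 80

Every Ingress that uses this class shares the one load balancer that fronts the ingress controller.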
Is there a way to have an IP for the ELB instead of the generated one?
Yeah, some cloud providers (including AWS) allow you to specify the loadBalancerIP. In those cases, the load balancer is created with the user-specified loadBalancerIP. The Service should look something like this:
apiVersion: v1
kind: Service
spec:
  type: LoadBalancer
  loadBalancerIP: xx.xx.xx.xx
  ...
But as you're looking for a single LB, you should probably use the loadBalancerIP option with an ingress controller. For example, the nginx ingress controller provides this option. The configuration would look something like:
values:
  controller:
    service:
      loadBalancerIP: "12.234.162.41"
      ...
https://github.com/helm/charts/blob/master/stable/nginx-ingress/values.yaml#L271
I have a simple Spring Boot application deployed on Kubernetes on GCP. I wish to auto-scale the application using a custom latency threshold (response time). Stackdriver has a set of metrics for load balancers. Details of the metrics can be found in this link.
I have exposed my application on an external IP using the following command:
kubectl expose deployment springboot-app-new --type=LoadBalancer --port 80 --target-port 9000
I used this API explorer to view the metrics. The response code is 200, but the response is empty.
The metrics filter I used is metric.type = "loadbalancing.googleapis.com/https/backend_latencies"
Question
Why am I not getting anything in the response? Am I making a mistake?
I have already enabled the Stackdriver API. Are there any other settings to be made to get the response?
As mentioned in the comments, the metric you're trying to use belongs to an HTTP(S) load balancer, while the LoadBalancer service type, when used in GKE, deploys a Network Load Balancer instead.
The reason you're not able to find its metrics on the Stackdriver Monitoring page is that the link shared in the comment corresponds to the documentation for the proxy-based TCP/SSL Proxy load balancers rather than for the Network Load Balancer (a pass-through, layer 4 load balancer), which is the one already created in your cluster; for now, the Network Load Balancer won't show up on the Stackdriver Monitoring page.
However, a feature request has been created to have this functionality enabled in the Monitoring dashboard.
If you need this particular metric (loadbalancing.googleapis.com/https/backend_latencies), it might be best to expose your deployment using an Ingress instead of the LoadBalancer service type. This will automatically create an HTTP(S) load balancer, with monitoring enabled, instead of the current Network Load Balancer.
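As a rough sketch, assuming the service created by the kubectl expose command above is switched to type NodePort (GKE's default ingress controller requires NodePort backends unless container-native load balancing is used), an Ingress like this would provision an HTTP(S) load balancer:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: springboot-ingress
spec:
  defaultBackend:
    service:
      name: springboot-app-new   # the service created by kubectl expose
      port:
        number: 80

Once the HTTP(S) load balancer exists, the loadbalancing.googleapis.com/https/backend_latencies metric should start returning data for it.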
I've gotten a bit lost in the number of services in AWS and I'm having a difficult time finding the answer to what I think is probably a very simple question.
I have a Docker image that's serving a REST API over HTTP on port 80. I am currently hosting this on AWS with ECS. It's using Fargate, but I could make an EC2 cluster if need be.
The problems are:
1) I currently get a new IP address whenever I run my task; I want a consistent address to access it from. It doesn't need to be a static IP; it could be routed from DNS.
2) It's not using my hostname. I would like api.myhostname.com to go to the Docker image, while www.myhostname.com currently already goes to my CloudFront CDN serving the web application.
3) There's no SSL, and I would need this to be encrypted.
Which services should I be using to make this happen? I looked into API Gateway and didn't find a way to use an ECS task as a backend. I looked into ELB for ECS, but load balancers didn't seem to provide a way to get static IPs for the Docker containers.
Thanks.
I'll suggest a service for each of your requirements:
You want to run a Docker container: ECS using Fargate is the right solution.
You want a consistent address: use the Service Load Balancing which is integrated into ECS. [1] You can also achieve consistent addressing using Service Discovery if the price of running a load balancer is too high in your scenario. [2]
You want SSL: AWS Elastic Load Balancing integrates with AWS Certificate Manager (ACM), which allows you to create HTTPS listeners. [3]
You want to use your hostname: use AWS Route 53 and an Application Load Balancer. The load balancer automatically receives a hostname from AWS, and you can then point your custom DNS at that entry. [4]
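As an aside, a minimal CloudFormation sketch of such a Route 53 alias record, assuming the Application Load Balancer is defined in the same template (logical names are hypothetical; api.myhostname.com stands in for your real domain):

ApiDnsRecord:
  Type: AWS::Route53::RecordSet
  Properties:
    HostedZoneName: myhostname.com.
    Name: api.myhostname.com.
    Type: A
    AliasTarget:
      DNSName: !GetAtt ApiLoadBalancer.DNSName
      HostedZoneId: !GetAtt ApiLoadBalancer.CanonicalHostedZoneID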
So my advice is:
Create an ECS service which starts your Docker container as a Fargate task.
Create a certificate for your HTTPS listener in AWS Certificate Manager. ACM manages your certificates and sends you an email if they are expiring soon. [5]
Use Service Load Balancing with an Application Load Balancer to automatically register any newly created ECS tasks to a target group. Configure the load balancer to listen for incoming traffic on an HTTPS listener and route it to the target group which has your ECS tasks registered as targets.
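For illustration, a CloudFormation sketch of the ECS side of that wiring (the cluster, task definition, subnets, and target group are assumed to be defined elsewhere; names are hypothetical):

ApiService:
  Type: AWS::ECS::Service
  Properties:
    Cluster: !Ref ApiCluster
    LaunchType: FARGATE
    DesiredCount: 2
    TaskDefinition: !Ref ApiTaskDefinition
    NetworkConfiguration:
      AwsvpcConfiguration:
        Subnets:
          - !Ref PrivateSubnetA
          - !Ref PrivateSubnetB
    LoadBalancers:
      - ContainerName: api            # must match the container name in the task definition
        ContainerPort: 80
        TargetGroupArn: !Ref ApiTargetGroup

ECS then registers every new task into the target group, and the HTTPS listener on the Application Load Balancer forwards traffic to those targets.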
References
[1] https://docs.aws.amazon.com/AmazonECS/latest/developerguide/service-load-balancing.html
[2] https://docs.aws.amazon.com/AmazonECS/latest/developerguide/create-service-discovery.html
[3] https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/elb-create-https-ssl-load-balancer.html
[4] https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/using-domain-names-with-elb.html
[5] https://docs.aws.amazon.com/acm/latest/userguide/acm-overview.html