We're serving our product on AWS EKS, where the service is created with type LoadBalancer. AWS assigns the ELB address, and this is what we share with our clients.
However, when we re-deploy the service to ship changes/improvements, the ELB address changes. Since this forces us to frequently send mails to all the clients, we need a dedicated IP that is mapped to the load balancer and therefore does not change when the service is re-deployed.
Any existing AWS solution or a nice pointer to solve this situation would be helpful.
You can use an Elastic IP, as described here: How to provide elastic ip to aws eks for external service with type loadbalancer?, and here: https://docs.aws.amazon.com/es_es/eks/latest/userguide/network-load-balancing.html. Just add the annotation service.beta.kubernetes.io/aws-load-balancer-eip-allocations: eipalloc-xxxxxxxxxxxxxxxxx,eipalloc-yyyyyyyyyyyyyyyyy to the NLB:
service.beta.kubernetes.io/aws-load-balancer-eip-allocations: eipalloc-05666791973f6a240
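For context, a complete Service using that annotation might look like the following sketch (the name, selector, and ports are placeholders; supply one eipalloc ID per subnet/AZ the NLB spans):

apiVersion: v1
kind: Service
metadata:
  name: my-service                 # placeholder name
  annotations:
    # ask for an NLB instead of a Classic ELB
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
    # attach pre-allocated Elastic IPs, one allocation ID per subnet/AZ
    service.beta.kubernetes.io/aws-load-balancer-eip-allocations: eipalloc-05666791973f6a240
spec:
  type: LoadBalancer
  selector:
    app: my-app                    # placeholder selector
  ports:
    - port: 80
      targetPort: 8080

Because the Elastic IPs are allocated in your own account, they survive re-deployments of the Service.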
Another way is to use a domain name (my way). Then use the https://github.com/kubernetes-sigs/external-dns/blob/master/docs/tutorials/aws.md annotations to link your Service or Ingress with a DNS name, and configure external-dns to use your DNS provider, e.g. Route53.
For example:
---
apiVersion: v1
kind: Service
metadata:
  name: ambassador
  namespace: ambassador
  annotations:
    external-dns.alpha.kubernetes.io/hostname: 'myserver.mydomain.com'
Every time your LoadBalancer's address changes, external-dns will update the DNS record with the new value.
In order to have better control over exposed resources, you can use an Ingress controller such as the AWS Load Balancer Controller: https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.3/
With it, you'll be able to re-use the same ALB for multiple Kubernetes services using the alb.ingress.kubernetes.io/group.name annotation (see the sketch below). It will create multiple listener rules based on the Ingress configuration.
(Applicable if you're not restricted by hardcoded firewall rules or similar configurations that would require you to have static IPs, which is not recommended today.)
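As an illustration, an Ingress joining such a shared ALB group might look like this (the names, host, and group value are placeholders):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-one                                      # placeholder name
  annotations:
    # all Ingresses sharing this group.name are merged into one ALB
    alb.ingress.kubernetes.io/group.name: shared-alb # placeholder group
    alb.ingress.kubernetes.io/scheme: internet-facing
spec:
  ingressClassName: alb
  rules:
    - host: one.mydomain.com                         # placeholder host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app-one                        # placeholder Service
                port:
                  number: 80

A second Ingress carrying the same group.name annotation would be attached to the same ALB as an extra listener rule.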
Related
I was using AWS ECS Fargate to run my application and am migrating to AWS EKS. With ECS, I deployed an ALB to route requests to my service in the ECS cluster.
For Kubernetes, I read this doc: https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer. It seems that Kubernetes itself has a LoadBalancer service type, and that it creates an external hostname and IP address.
So my question is: do I need to deploy an AWS ALB? If not, how can I publish this auto-generated hostname in Route53? Does it change if I redeploy the service?
Yes, you need it if you want to create a Kubernetes Ingress using the AWS ALB Ingress Controller; the following link explains how to use an ALB as the Ingress controller in EKS: This
You don't strictly need an AWS ALB for apps in your EKS cluster, but you probably want it.
When adopting Kubernetes, it is handy to manage some infrastructure parts from the Kubernetes cluster in a similar way to how you manage apps. In some cases there is a tight coupling between the app and the configuration of your load balancer, so it makes sense to manage the infrastructure the same way.
A Kubernetes Service of type LoadBalancer corresponds to a network load balancer (also known as an L4 load balancer). There is also the Kubernetes Ingress, which corresponds to an application load balancer (also known as an L7 load balancer).
To use an ALB or Ingress in Kubernetes, you also need to install an Ingress controller. For AWS you should install the AWS Load Balancer Controller; this controller now also provides features for network load balancers, e.g. IP mode or exposing services using an Elastic IP. Using a pre-configured IP should help with Route53.
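For illustration, a Service handing its network load balancer to the AWS Load Balancer Controller in IP mode could look like this sketch (service name, selector, and ports are placeholders; annotation names per the controller's v2.x documentation):

apiVersion: v1
kind: Service
metadata:
  name: my-service                 # placeholder name
  annotations:
    # let the AWS Load Balancer Controller manage the NLB instead of the in-tree provider
    service.beta.kubernetes.io/aws-load-balancer-type: external
    # IP mode: register pod IPs directly as targets
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
    service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
spec:
  type: LoadBalancer
  selector:
    app: my-app                    # placeholder selector
  ports:
    - port: 80
      targetPort: 8080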
See the EKS docs about EKS network load balancing and EKS application load balancing
As others have already mentioned: no, it is NOT required, but it is very helpful to use an ALB.
There are a couple of different solutions for that; my favorite is:
Use an Ingress controller like ingress-nginx (there are multiple different Ingress controllers available for Kubernetes; a very good comparison is provided here)
Configure the Ingress controller's Service to use NodePort, with a port like 30080 (a values sketch follows below)
Create your own AWS ALB (with Terraform, for example) and add the NodePort 30080 to the Target Group
Create an Ingress resource to configure the Ingress controller
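As a rough sketch, the NodePort step could be expressed as helm values for the ingress-nginx chart (value paths per that chart; verify against the version you install):

# values.yaml for the ingress-nginx helm chart
controller:
  service:
    type: NodePort
    nodePorts:
      http: 30080    # the port your ALB Target Group points at
      https: 30443   # hypothetical HTTPS counterpart

The Terraform-managed ALB then forwards traffic to port 30080 on the worker nodes.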
If you still have some questions, just ask them here :)
No, you don't need an ALB, and yes, you can use Route53 in an automated manner. Here's a nice article that describes the latter:
https://www.padok.fr/en/blog/external-dns-route53-eks
I have a non-EKS AWS Kubernetes cluster with 1 master and 3 worker nodes.
I am trying to install the nginx ingress controller in order to use the cluster with a domain name, but unfortunately it does not seem to work: the nginx ingress controller service is not automatically assigned an IP, and even if I manually set an external IP, that IP does not answer on port 80.
If you are looking for a public domain, expose the nginx-ingress deployment's service as type LoadBalancer, which will create an ELB.
You can then route the domain name to the ELB alias in Route53.
The reason the External IP remains pending is that there is no load balancer in front of your cluster to provide it with an external IP, as there would be on EKS. You can achieve this by bootstrapping your cluster with the --cloud-provider option using kubeadm.
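A minimal sketch of a kubeadm config enabling the in-tree AWS cloud provider might look like this (field names per the kubeadm v1beta2 API; verify against your kubeadm version):

# kubeadm-config.yaml, passed to `kubeadm init --config kubeadm-config.yaml`
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
nodeRegistration:
  kubeletExtraArgs:
    cloud-provider: aws        # the kubelet must also know it runs on AWS
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
  extraArgs:
    cloud-provider: aws
controllerManager:
  extraArgs:
    cloud-provider: aws

Note that the nodes also need an appropriate IAM instance profile and the kubernetes.io/cluster/<cluster-name> resource tags; the tutorials below walk through that.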
You can follow these tutorials on how to successfully achieve it:
Kubernetes, Kubeadm, and the AWS Cloud Provider
Setting up the Kubernetes AWS Cloud Provider
Kubernetes: part 2 — a cluster set up on AWS with AWS cloud-provider and AWS LoadBalancer
There are a couple of different solutions for that; my favorite is:
Use an Ingress controller like ingress-nginx (there are multiple different Ingress controllers available for Kubernetes; a very good comparison is provided here)
Configure the Ingress controller's Service to use NodePort, with a port like 30080
Create your own AWS ALB (with Terraform, for example) and add the NodePort 30080 to the Target Group
Create an Ingress resource to configure the Ingress controller
The whole traffic flow would then be: client → AWS ALB → NodePort 30080 → Ingress controller → your Service and Pods.
If you still have some questions, just ask them here :)
Yes, you will have to expose the deployment as a service:
kubectl expose deployment {deploymentname} -n {namespace} --type=LoadBalancer --name={name}
Context:
I have an EKS cluster (EKS is AWS's managed Kubernetes service).
I deploy an application (JupyterHub) to this EKS cluster via Helm.
I have a VPN server.
Users of my application (JupyterHub on EKS) must connect to the VPN server first before they access the application.
I enforce this by removing the 0.0.0.0/0 "allow all" ingress rule on the elastic load balancer, and adding an ingress rule that allows traffic from the VPN server only.
The elastic load balancer referenced above is created implicitly by the JupyterHub application that gets deployed to EKS via helm.
Problem:
When I deploy changes to the running JupyterHub application in EKS, sometimes (depending on the changes) the ELB gets deleted and re-created.
This causes the security group associated with the ELB to also get re-created, along with the ingress rules.
This is not ideal because it is easy to overlook when deploying changes to JupyterHub/EKS, and a developer might forget to verify that the security group rules are still present.
Question:
Is there a more robust place where I can enforce this ingress network rule (only allow traffic from the VPN server)?
Two thoughts I had, though neither is ideal:
Use a NACL. This won't really work, because it adds a lot of overhead managing the CIDRs, since NACLs are stateless and operate at the subnet level.
I thought about adding my ingress rules to the security group associated with the EKS worker nodes instead, but this won't work due to the same problem: when you deploy an update to JupyterHub/EKS and the ELB gets replaced, an "allow all traffic" ingress rule is implicitly added to the EKS worker node security group (allowing all traffic from the ELB). This would override my ingress rule.
It sounds like you're using a LoadBalancer service for JupyterHub. A better way of handling ingress into your cluster would be to use a single ingress controller (like the nginx ingress controller), deployed via a different helm chart.
Then, deploy JupyterHub's helm chart, but use a custom value passed into the release with the --set parameter to tell it to use a ClusterIP service instead of the LoadBalancer type. This way, changes to your JupyterHub release that might re-create the ClusterIP service won't matter, as you'll be using Ingress rules for the Ingress controller to manage ingress for JupyterHub instead.
Use the ingress rule feature of the JupyterHub helm chart to configure ingress rules for your nginx ingress controller: see the docs here (a values sketch follows below).
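A hedged sketch of such values for the zero-to-jupyterhub chart (the exact value paths depend on your chart version, so check its configuration reference):

# values.yaml for the JupyterHub (zero-to-jupyterhub) helm chart
proxy:
  service:
    type: ClusterIP             # no per-app ELB; traffic arrives via the ingress controller
ingress:
  enabled: true
  hosts:
    - jupyterhub.mydomain.com   # placeholder hostname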
The LoadBalancer created by the nginx ingress controller will instead remain persistent/stable, and you can define your security group ingress rules on it separately.
Effectively you're decoupling ingress into EKS apps from your JupyterHub app by using the Ingress Controller + ingress rules pattern of access.
On the subject of ingress and LoadBalancers
With EKS/Helm and LoadBalancer services, the default is to create an internet-facing elastic load balancer.
There are some extra annotations you can add to the service definition that will instead create it as an internal-facing load balancer.
This might be preferable to you for your ingress controller (or anywhere else you want to use LoadBalancer services), as it doesn't immediately expose the app to the open internet. You mentioned you already have VPN access into your VPC network, so users can still VPN in, and then hit the LoadBalancer hostname.
I wrote up a guide a while back on installing the nginx ingress controller here. It talks about doing this with DigitalOcean Kubernetes, but it is still relevant for EKS as it's just a helm chart.
There is another post I did which talks about some extra configuration annotations you can add to your ingress controller service that automatically create the specific port-range ingress security group rules at the same time as the load balancer. (This is another option if you find that each time it gets created you have to manually update the ingress rules on the security group.) See the post on customising the Ingress Controller load balancer and port ranges for ingress here.
The config values for auto-configuring your LoadBalancer ingress source ranges and setting it to internal are:
controller.service.loadBalancerSourceRanges
service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0
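Put together as helm values for the ingress controller chart, that might look like this (the CIDR below is a placeholder for your VPN range):

# values.yaml for the ingress-nginx helm chart
controller:
  service:
    annotations:
      # create the ELB as internal-facing rather than internet-facing
      service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0
    loadBalancerSourceRanges:
      - 10.8.0.0/24   # placeholder: your VPN server/client CIDR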
Hope that helps!
After trying Kubernetes on a few KVMs with kubeadm, I'd like to set up a proper auto-scalable cluster on AWS with kops and serve a few websites with it.
The mind-blowing magic of kops create cluster ... gives me a bunch of ec2 instances, makes the k8s API available at test-cluster.example.com and even configures my local ~/.kube/config so that I can kubectl apply -f any-stuff.yaml right away. This is just great!
I'm at the point where I can send my deployments to the cluster and configure the ingress rules; all this is visible in the dashboard. However, at the moment it's not very clear how I can associate the nodes in my cluster with the domain names I've got.
In my small KVM k8s setup I simply install traefik and expose it on ports :80 and :443. Then I go to my DNS settings and add a few A records pointing to the public IP(s) of my cluster node(s). In AWS, there is a dynamic set of VMs, some of which may go down when the cluster is not under heavy load. So it feels like I need to use an external load balancer, given that my traefik helm chart service exposes two random ports instead of the fixed :80 and :443, but I'm not sure.
What are the options? What do they cost? And what should go into the DNS records if the domains are not controlled by AWS?
Configuring your service as a LoadBalancer service is not sufficient for your cluster to set up the actual load balancer; you need an ingress controller running, like the one above.
You should add the kops nginx ingress addon: https://github.com/kubernetes/kops/tree/master/addons/ingress-nginx
In this case the nginx ingress controller on AWS will find the ingress and create an AWS ELB for it. I am not sure of the cost, but it's worth it.
You can also consider NodePorts, which you can access via the nodes' public IPs and the node port (be sure to add a rule to your security group).
You can also consider the newer AWS ELB v2, or ALB, which supports HTTP/2 and WebSockets. You can use the alb-ingress-controller for this: https://github.com/coreos/alb-ingress-controller
Finally, if you want SSL (which you should), consider the kube-lego project, which will automate getting SSL certs for you: https://github.com/jetstack/kube-lego
In my case I used the nginx-ingress-controller, but I think the setup with traefik would be the same.
1) Set the traefik service type to LoadBalancer.
Kubernetes will then create an ELB for it.
2) Set a CNAME or ALIAS record in Route53 pointing to the ELB hostname.
You can use https://github.com/kubernetes-incubator/external-dns to synchronize exposed services and ingresses with Route53.
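For illustration, a minimal traefik Service of type LoadBalancer might look like this sketch (name, selector, and hostname are placeholders; the external-dns annotation is optional and only takes effect if external-dns is deployed):

apiVersion: v1
kind: Service
metadata:
  name: traefik
  annotations:
    # optional: have external-dns create the Route53 record for you
    external-dns.alpha.kubernetes.io/hostname: 'myserver.mydomain.com'
spec:
  type: LoadBalancer
  selector:
    app: traefik      # placeholder: must match your traefik pod labels
  ports:
    - name: http
      port: 80
    - name: https
      port: 443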
I have a k8s cluster deployed in AWS using kube-aws. When I deploy a service, a new ELB is created to expose the service to the internet. Can I use an ingress controller to replace the ELB, or is there any other way to expose services besides an ELB?
First, replace type: LoadBalancer with type: ClusterIP in your service definition. Then you have to configure the ingress and deploy a controller, like nginx.
If you are looking for a full example, I have one here: nginx-ingress-controller.
The ingress will expose your services using some of your workers' public IPs, usually 2 of them. Just check your ingress with kubectl get ing -o wide and create the DNS records (a minimal Ingress sketch follows below).
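As a rough sketch, such an ingress could look like this (host and service name are placeholders; clusters of that era may need the older extensions/v1beta1 API instead of networking.k8s.io/v1):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress                 # placeholder name
spec:
  rules:
    - host: app.mydomain.com       # placeholder: the DNS record you create
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-service   # the ClusterIP service mentioned above
                port:
                  number: 80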