When creating ALB rules in the AWS console, is there an ingress controller running in the background?
I'm a bit confused while learning about the ALB ingress controller. I thought it would just be API calls to the ALB service in AWS, but the instructor installs an AWS ingress controller and then writes rules to route paths to NodePort services.
What's the difference between creating the rules in the AWS console and doing it via the Kubernetes cluster?
This is the controller that's being installed.
https://github.com/kubernetes-sigs/aws-load-balancer-controller
In general, in Kubernetes, a controller is a running program that observes some Kubernetes objects and manages other objects based on that. For example, there is a controller that observes Deployments and creates ReplicaSets from them, and a controller that observes ReplicaSets and creates matching Pods. In this case, the ALB ingress controller observes Ingress objects and creates the corresponding AWS ALB rules.
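For example, a minimal Ingress like the following is all the controller needs to observe; it then calls the AWS APIs to create and configure the ALB for you. This is only a sketch: the service name and path are made up, and it assumes the controller's default alb IngressClass.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    # Documented AWS Load Balancer Controller annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: instance  # route to NodePort services via the nodes
spec:
  ingressClassName: alb
  rules:
    - http:
        paths:
          - path: /app
            pathType: Prefix
            backend:
              service:
                name: my-app   # hypothetical NodePort Service
                port:
                  number: 80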
Nothing stops you from doing the same setup by hand, or using a tool like Terraform that's a little more specialized for managing cloud resources. Using Ingress objects is a little more portable across environments (you can use a similar Ingress setup for a local minikube installation, for example). This approach would also let you use a single tool like Helm to create a Deployment, Service, and Ingress, and have the ALB automatically updated; if you delete the Helm chart, it will delete the Kubernetes objects, and the ALB will again update itself.
It's also possible that a development team would have permissions in Kubernetes to create Ingress objects, but wouldn't (directly) have permissions in AWS to create load balancers, and this setup might make sense depending on your local security and governance requirements.
Just check my diagram first.
From a high-level point of view, you have an AWS ALB which forwards the traffic to the underlying cluster. The ingress controller is then responsible for routing the traffic to the correct Kubernetes Service, and the Service forwards the traffic to one of its Pods. There are multiple different solutions for this.
My favorite solution is:
Use an ingress controller like ingress-nginx (there are multiple ingress controllers available for Kubernetes; a very good comparison is provided here)
Configure the ingress controller's Service to use NodePort as its type, with a fixed port like 30080 (see the sketch after this list)
Create your own AWS ALB, with Terraform for example, and add the NodePort 30080 to its target group
Create an Ingress resource to configure the ingress controller
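Step 2 could look roughly like this; it's only a sketch, and the selector labels depend on how you installed ingress-nginx:

apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  type: NodePort
  selector:
    app.kubernetes.io/name: ingress-nginx   # must match your controller pods
  ports:
    - name: http
      port: 80
      targetPort: 80
      nodePort: 30080   # the fixed port you register in the ALB target group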
So I hope the diagram helps you understand the purpose of an ALB and an ingress controller.
If you still have questions, just ask here.
Related
I'm pretty new to k8s and I'm trying to figure out how to expose multiple HTTP services to the Internet in a cheap manner. Currently I'm using an AWS EKS cluster with managed node groups, so the cheap way would be not to provision any kind of ELB, as it costs money. I would also like those services to be in private subnets, so that e.g. only the Ingress resource is exposed and the nodes stay private. One load balancer per service is definitely not an option, as it would break my budget.
The options I consider:
Using K8s Ingress resources (to be precise: the Istio ingress controller). The downside is that when we create an Ingress resource, AWS creates a load balancer, which I will need to pay for.
Run node groups in public subnets and create K8s Services of type NodePort, so I could reach each service via NodeIP:NodePort (the NodePort being specific to each service). The downside is that I would need to remember all the IPs and ports assigned to each service. I can live with one service, but as the number increases that becomes pretty awful to keep track of.
Lastly, if there is no other option, create one load balancer with a public IP and also create an ingress controller with Istio. Then I would reach each service via the single DNS name of the load balancer and route to services by request path.
Looking forward to any solution and inputs.
I don't think there is any magic here. Options 1 and 3 are basically one and the same (unless I am missing something). As you pointed out, option 2 is not viable. You have a number of options to go with. I don't know the Istio ingress (but I assume it will be fine); we often see customers using either the NGINX ingress or the ALB ingress.
All of these options require a Load Balancer.
I was using AWS ECS Fargate for running my application, and I am migrating to AWS EKS. With ECS, I deployed an ALB to route requests to my service in the ECS cluster.
For Kubernetes, I read this doc https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer, and it seems that Kubernetes itself has a LoadBalancer service type, which creates an external hostname and IP address.
So my question is: do I need to deploy an AWS ALB? If not, how can I publish this auto-generated hostname in Route53? Does it change if I redeploy the service?
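For reference, the doc is describing a Service like this (names are placeholders); once EKS provisions the load balancer, kubectl get service my-app shows the generated hostname in the EXTERNAL-IP column:

apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: LoadBalancer   # the AWS cloud provider creates a load balancer for this
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080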
Yes, you need it to create a Kubernetes Ingress using the AWS ALB Ingress Controller; the following link explains how to use an ALB as the ingress controller in EKS: This
You don't strictly need an AWS ALB for apps in your EKS cluster, but you probably want it.
When adopting Kubernetes, it is handy to manage some infrastructure parts from the Kubernetes cluster in a similar way to how you manage apps, and in some cases there is tight coupling between the app and the configuration of your load balancer, so it makes sense to manage the infrastructure the same way.
A Kubernetes Service of type LoadBalancer corresponds to a network load balancer (also known as an L4 load balancer). There is also the Kubernetes Ingress, which corresponds to an application load balancer (also known as an L7 load balancer).
To use an ALB or Ingress in Kubernetes, you also need to install an ingress controller. For AWS you should install the AWS Load Balancer Controller; this controller now also provides features for network load balancers, e.g. IP-mode targets or exposing services using an Elastic IP. Using a pre-configured IP should help with Route53.
See the EKS docs about EKS network load balancing and EKS application load balancing.
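As a sketch, exposing a Service through an NLB in IP mode with the AWS Load Balancer Controller looks roughly like this; the annotations are the controller's documented ones, while the service name and ports are made up:

apiVersion: v1
kind: Service
metadata:
  name: my-app
  annotations:
    # Let the AWS Load Balancer Controller (not the legacy in-tree provider) handle this Service
    service.beta.kubernetes.io/aws-load-balancer-type: external
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip   # IP mode: target the pods directly
    service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080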
As already mentioned by the others, no, it is NOT required, but it is very helpful to use an ALB.
There are a couple of different solutions for that; my favorite is:
Use an ingress controller like ingress-nginx (there are multiple ingress controllers available for Kubernetes; a very good comparison is provided here)
Configure the ingress controller's Service to use NodePort and use a port like 30080
Create your own AWS ALB, with Terraform for example, and add the NodePort 30080 to its target group
Create an Ingress resource to configure the ingress controller
If you still have some questions, just ask them here :)
No, you don't need an ALB, and yes, you can use Route53 in an automated manner. Here's a nice article that describes the latter:
https://www.padok.fr/en/blog/external-dns-route53-eks
Context:
I have an EKS cluster (EKS is AWS's managed Kubernetes service).
I deploy an application (JupyterHub) to this EKS cluster via Helm.
I have a VPN server.
Users of my application (JupyterHub on EKS) must connect to the VPN server before they can access the application.
I enforce this by removing the 0.0.0.0/0 "allow all" ingress rule on the elastic load balancer and adding an ingress rule that allows traffic from the VPN server only.
The elastic load balancer referenced above is created implicitly by the JupyterHub application that gets deployed to EKS via Helm.
Problem:
When I deploy changes to the running JupyterHub application in EKS, sometimes (depending on the changes) the ELB gets deleted and re-created.
This causes the security group associated with the ELB to also get re-created, along with the ingress rules.
This is not ideal, because it is easy to overlook when deploying changes to JupyterHub/EKS, and a developer might forget to verify that the security group rules are still present.
Question:
Is there a more robust place where I can enforce this ingress network rule (only allow traffic from the VPN server)?
Two thoughts I had, but are not ideal:
Use a NACL. This won't really work, because it adds a lot of overhead managing the CIDRs, due to the fact that a NACL is stateless and operates at the subnet level.
I thought about adding my ingress rules to the security group associated with the EKS worker nodes instead, but this won't work due to a similar problem: when you deploy an update to JupyterHub/EKS and the ELB gets replaced, an "allow all traffic" ingress rule is implicitly added to the EKS worker node security group (allowing all traffic from the ELB), which would override my ingress rule.
It sounds like you're using a LoadBalancer Service for JupyterHub. A better way of handling ingress into your cluster would be to use a single ingress controller (like the nginx ingress controller), deployed via a separate helm chart.
Then deploy JupyterHub's helm chart, but pass a custom value into the release with the --set parameter to tell it to use a ClusterIP Service instead of the LoadBalancer type. This way, changes to your JupyterHub release that might re-create the ClusterIP Service won't matter, as you'll now be using Ingress rules on the ingress controller to manage ingress for JupyterHub instead.
Use the ingress rule feature of the JupyterHub helm chart to configure ingress rules for your nginx ingress controller: see docs here
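Putting the ClusterIP override and the chart's ingress settings together, the values might look like this; I'm assuming the standard Zero to JupyterHub chart layout, so double-check the key names against the chart's docs, and the hostname is a made-up example:

# values.yaml passed to the JupyterHub helm release
proxy:
  service:
    type: ClusterIP        # stop creating a dedicated ELB for JupyterHub
ingress:
  enabled: true            # route through the nginx ingress controller instead
  hosts:
    - jupyter.example.internal   # hypothetical internal hostname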
The load balancer generated for the nginx ingress controller will instead remain persistent/stable, and you can define your security group ingress rules on it separately.
Effectively you're decoupling ingress into EKS apps from your JupyterHub app by using the Ingress Controller + ingress rules pattern of access.
On the subject of ingress and LoadBalancers
With EKS/Helm and LoadBalancer services, the default is to create an internet-facing elastic load balancer.
There are some extra annotations you can add to the Service definition that will instead create it as an internal-facing load balancer.
This might be preferable for your ingress controller (or anywhere else you want to use LoadBalancer services), as it doesn't immediately expose the app to the open internet. You mentioned you already have VPN access into your VPC network, so users can still VPN in and then hit the load balancer hostname.
I wrote up a guide a while back on installing the nginx ingress controller here. It talks about doing this with DigitalOcean Kubernetes, but it is still relevant for EKS as it's just a helm chart.
There is another post I did which talks about some extra configuration annotations you can add to your ingress controller Service that automatically create the specific port-range ingress security group rules at the same time as the load balancer. (This is another option if you find that each time the load balancer gets re-created you have to manually update the ingress rules on the security group.) See the post on customising the ingress controller load balancer and port ranges for ingress here.
The config values you want for auto-configuring your LoadBalancer ingress source ranges and setting it to internal can be set with:
controller.service.loadBalancerSourceRanges
service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0
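In a values file for the ingress-nginx chart, that combination might look like this; the VPN CIDR is a made-up example:

controller:
  service:
    annotations:
      # Create the load balancer as internal-facing rather than internet-facing
      service.beta.kubernetes.io/aws-load-balancer-internal: "0.0.0.0/0"
    # Only accept traffic originating from the VPN server's network
    loadBalancerSourceRanges:
      - "10.8.0.0/24"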
Hope that helps!
When a Kubernetes service is exposed via an Ingress object, is the load balancer "physically" deployed in the cluster, i.e. as some pod controller inside the cluster nodes, or is it just another managed service provisioned by the given cloud provider?
Are there cloud-provider-specific differences? Is the above true for both Google Kubernetes Engine and Amazon Web Services?
By default, a Kubernetes cluster has no ingress controller at all. This means that you need to deploy one yourself if you are on premises.
Some cloud providers do provide a default ingress controller in their Kubernetes offering, though, and this is the case for GKE. In their case the ingress controller is provided "as a service", but I am unsure about where exactly it is deployed.
Talking about AWS: if you deploy a cluster using kops, you're on your own (you need to deploy an ingress controller yourself), but other deployment options on AWS may include an ingress controller deployment.
I would like to make some clarifications concerning the Google ingress controller, starting from its definition:
An Ingress Controller is a daemon, deployed as a Kubernetes Pod, that watches the apiserver's /ingresses endpoint for updates to the Ingress resource. Its job is to satisfy requests for Ingresses.
First of all, if you want to understand its behaviour better, I suggest you read the official Kubernetes GitHub description of this resource.
In particular, notice that:
It is a daemon
It is deployed in a Pod
It is in the kube-system namespace
It is hidden from the customer
However, you will not be able to "see" this resource, for example by running:
kubectl get all --all-namespaces, because it is running on the master and is not shown to the customer, since it is a managed resource considered essential for the operation of the platform itself. As stated in the official documentation:
GCE/Google Kubernetes Engine deploys an ingress controller on the master
Note that the master of any Google Cloud Kubernetes cluster is not accessible to the user and is completely managed.
I will answer with respect to Google Kubernetes Engine.
Yes, every time you deploy a new Ingress resource, a load balancer is created, which you can view from the section:
GCP Console --> Network services --> Load balancing
Clicking on the respective load balancer ID gives you all the details, for example the external IP, the backend service, etc.
After trying Kubernetes on a few KVMs with kubeadm, I'd like to set up a proper auto-scalable cluster on AWS with kops and serve a few websites with it.
The mind-blowing magic of kops create cluster ... gives me a bunch of EC2 instances, makes the k8s API available at test-cluster.example.com, and even configures my local ~/.kube/config so that I can kubectl apply -f any-stuff.yaml right away. This is just great!
I'm at the point where I can send my deployments to the cluster and configure the ingress rules; all this stuff is visible in the dashboard. However, at the moment it's not very clear how I can associate the nodes in my cluster with the domain names I've got.
In my small KVM k8s setup I simply install traefik and expose it on ports :80 and :443. Then I go to my DNS settings and add a few A records, which point to the public IP(s) of my cluster node(s). In AWS, there is a dynamic set of VMs, some of which may go down when the cluster is not under heavy load. So it feels like I need to use an external load balancer, given that my traefik helm chart service exposes two random ports instead of the fixed :80 and :443, but I'm not sure.
What are the options? What do they cost? What should go into the DNS records in case the domains are not controlled by AWS?
Configuring your service as a LoadBalancer service is not sufficient for your cluster to set up the actual load balancer; you need an ingress controller running, like the one above.
You should add the kops nginx ingress addon: https://github.com/kubernetes/kops/tree/master/addons/ingress-nginx
In this case the nginx ingress controller on AWS will find the Ingress and create an AWS ELB for it. I am not sure of the cost, but it's worth it.
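Once the controller is running, each site is just a rule in an Ingress resource; the hostname and service name below are placeholders:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: websites
spec:
  ingressClassName: nginx
  rules:
    - host: blog.example.com       # placeholder domain
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: blog          # your site's ClusterIP Service
                port:
                  number: 80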
You can also consider NodePorts, which you can access via the nodes' public IPs and the node port (be sure to add a rule to your security group).
You can also consider the newer AWS ELBv2, or ALB, which supports HTTP/2 and websockets. You can use the alb-ingress-controller https://github.com/coreos/alb-ingress-controller for this.
Finally, if you want SSL (which you should), consider the kube-lego project, which will automate getting SSL certs for you: https://github.com/jetstack/kube-lego
In my case I used the nginx-ingress-controller. I think the setup with traefik will be the same.
1) Set the traefik Service type to LoadBalancer.
Kubernetes will then create an ELB for it.
2) Set a CNAME or ALIAS record in Route53 pointing to the ELB hostname.
You can use https://github.com/kubernetes-incubator/external-dns to synchronize exposed Services and Ingresses with Route53.
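A sketch of both steps combined, using external-dns to create the Route53 record automatically; the hostname is a placeholder, and external-dns must be deployed with access to your hosted zone:

apiVersion: v1
kind: Service
metadata:
  name: traefik
  annotations:
    # external-dns watches for this annotation and creates the Route53 record for you
    external-dns.alpha.kubernetes.io/hostname: www.example.com
spec:
  type: LoadBalancer   # the AWS cloud provider creates an ELB with a generated hostname
  selector:
    app: traefik       # must match your traefik pods
  ports:
    - name: http
      port: 80
      targetPort: 80
    - name: https
      port: 443
      targetPort: 443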