AWS cluster - understanding Internal Load Balancer and Ingress - amazon-web-services

I have a cluster in AWS which is set up with Topology=Private and has an internal load balancer. Now I'm trying to deploy an Nginx ingress load balancer for it, to expose the application pods to the internet.
I am trying to understand, in such a setting, what the role of my internal load balancer (which I believe is an Elastic Load Balancer) will be. Could I have this setup even without the internal load balancer? In fact, what functionality would the cluster lose without it?

It is good to have a load balancer (ELB) for HA purposes, but place the public-facing ELB in front of the nginx controller instead of behind it. You can also do custom path routing with an ALB (Layer 7). The ideal setup would be:
ELB (public, with SSL termination) --> 2 Nginx ingress controller instances (for HA, run them in different subnets) --> application pods.
Apart from the ELB, everything else can be placed in private subnets.
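As a rough sketch of that front piece, the ingress controller's Service could be exposed through a public ELB that terminates TLS using the in-tree AWS annotations; the certificate ARN, names and namespace below are placeholders, not values from the question:

    apiVersion: v1
    kind: Service
    metadata:
      name: ingress-nginx-controller
      namespace: ingress-nginx
      annotations:
        # Terminate TLS on the ELB with an ACM certificate (placeholder ARN)
        service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:us-east-1:123456789012:certificate/example
        service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
        service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
    spec:
      type: LoadBalancer          # internet-facing ELB by default
      selector:
        app.kubernetes.io/name: ingress-nginx
      ports:
        - name: http
          port: 80
          targetPort: 80
        - name: https
          port: 443
          targetPort: 80          # ELB terminates TLS, forwards plain HTTP to nginx

The internal load balancer created by the private topology can stay for in-VPC traffic; this public Service is what actually exposes the pods to the internet.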

Related

GKE Multicluster Ingress for private clusters with on-premise hardware load balancers

It appears that Istio-based or MultiCluster Ingress on GKE will only work with external load balancers. We have a requirement (due to a regulatory limitation) that allows external traffic ingress only via an on-premise hardware load balancer; this traffic must then be routed to GCP via Partner Interconnect.
With such a setup, supposing we create a static IP with an internal load balancer on GCP, will a MultiCluster Ingress work with this internal load balancer?
Alternatively, how would you design a solution if you have multiple GKE clusters that need to be load balanced behind this type of on-premise hardware LB ingress?
MultiCluster Ingress only supports Public LoadBalancers.
You can use the new Gateway API with the gke-l7-rilb-mc Gateway Class; this should deploy an internal multi-cluster L7 load balancer. You can then route external traffic through your on-prem F5 and, via Partner Interconnect, to the load balancer provisioned by the Gateway API.
Just keep in mind that the Gateway API and controller are still in preview for now; they should be GA by the end of the year: https://cloud.google.com/kubernetes-engine/docs/how-to/deploying-multi-cluster-gateways#blue-green
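A rough sketch of such a Gateway, assuming the gke-l7-rilb-mc class named above; the resource name, namespace, and API version are illustrative and depend on the Gateway API CRDs installed in your config cluster:

    apiVersion: gateway.networking.k8s.io/v1beta1
    kind: Gateway
    metadata:
      name: internal-mc-gateway
      namespace: gateway-infra
    spec:
      gatewayClassName: gke-l7-rilb-mc   # internal, multi-cluster L7 class from the answer
      listeners:
        - name: http
          protocol: HTTP
          port: 80

HTTPRoutes in the member clusters would then attach to this Gateway, and the F5 only needs to reach the internal forwarding rule it creates.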

How to route traffic to ECS Fargate instance without an Application Load Balancer

I have a Fargate instance running on port 3000. For this service, Service Discovery is enabled, and a corresponding hosted zone is created in Route 53. I have added the name servers from this hosted zone to my domain registrar's (GoDaddy) DNS settings.
I want to route all traffic from my domain to this Fargate instance. Currently, I don't see a need to add an ALB since the traffic is very light and routing is simple. So I want to know the following:
Is it possible to route my traffic from Route 53 to the Fargate instance running on port 3000 without an ALB? If yes, how can I do it?
Is an ALB required for configuring SSL, or can I do it without one?
See this article under the heading External Networking.
TL;DR is to create a VPC with a public subnet and an attached IP address via an internet gateway, and ensure your Fargate cluster/task is running in that VPC.
If you want to run SSL without a load balancer (one of whose responsibilities can be terminating SSL), you will need to terminate the SSL certificates yourself in your Fargate task.
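For illustration, a minimal CloudFormation-style sketch of the public-subnet approach; the cluster, task definition, subnet and security group references are placeholders, and the security group must allow inbound TCP 3000:

    MyFargateService:
      Type: AWS::ECS::Service
      Properties:
        Cluster: !Ref MyCluster
        LaunchType: FARGATE
        TaskDefinition: !Ref MyTaskDefinition
        DesiredCount: 1
        NetworkConfiguration:
          AwsvpcConfiguration:
            AssignPublicIp: ENABLED            # task gets a public IP, no ALB needed
            Subnets:
              - subnet-0123456789abcdef0        # public subnet (placeholder)
            SecurityGroups:
              - sg-0123456789abcdef0            # allows inbound 3000 (placeholder)

Route 53 (via service discovery or a plain A record) can then resolve your domain to that task's IP, with TLS handled inside the container.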

How to expose my app outside the cluster or VPC with my internal load balancer in a private EKS cluster

I have a question about AWS EKS.
I have an EKS cluster (private subnets) with managed worker nodes (private subnets), and I deployed an nginx deployment with three replicas and exposed it with an internal load balancer service. I can curl it and get the expected output.
Problem: how do I expose my app outside the cluster or VPC?
Thanks
You can have your EKS nodes in private subnets of the VPC, but you also need public subnets for exposing your pods/containers.
So ideally you need to create a LoadBalancer service for your nginx deployment.
The blog below helped me during my initial EKS setup; hope it helps you too:
Nginx ingress controller with NLB
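For reference, a minimal sketch of the LoadBalancer Service that approach relies on, using the well-known NLB annotation; the names and namespace are placeholders:

    apiVersion: v1
    kind: Service
    metadata:
      name: ingress-nginx-controller
      namespace: ingress-nginx
      annotations:
        service.beta.kubernetes.io/aws-load-balancer-type: nlb   # provision an NLB instead of a classic ELB
    spec:
      type: LoadBalancer
      selector:
        app.kubernetes.io/name: ingress-nginx
      ports:
        - name: http
          port: 80
          targetPort: 80
        - name: https
          port: 443
          targetPort: 443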
You can have an AWS Application Load Balancer added to your EKS cluster and have an ingress targeting your service.
https://docs.aws.amazon.com/eks/latest/userguide/alb-ingress.html
Deploy the AWS Load Balancer (ALB) Controller in your cluster
Add a new ingress pointing to your service
Remember to set alb.ingress.kubernetes.io/scheme: internet-facing since you want to expose your service to the public.
You can get the DNS name of the new ingress in the AWS Console (EC2/Load Balancers) or by describing the ingress with kubectl.
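A sketch of such an Ingress, assuming a recent AWS Load Balancer Controller and the networking.k8s.io/v1 Ingress API; the ingress and service names are placeholders:

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: nginx-ingress
      annotations:
        alb.ingress.kubernetes.io/scheme: internet-facing
        alb.ingress.kubernetes.io/target-type: ip   # route straight to pod IPs
    spec:
      ingressClassName: alb
      rules:
        - http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: nginx          # your existing internal service
                    port:
                      number: 80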

Good way to whitelist ingress traffic for JHub on EKS (AWS kubernetes)?

Context:
I have a EKS cluster (EKS is AWS' managed kubernetes service).
I deploy an application to this EKS cluster (JupyterHub) via helm.
I have a VPN server.
Users of my application (JupyterHub on EKS) must connect to the VPN server first before they access the application.
I enforce this by removing the 0.0.0.0/0 "allow all" ingress rule on the elastic load balancer, and adding an ingress rule that allows traffic from the VPN server only.
The elastic load balancer referenced above is created implicitly by the JupyterHub application that gets deployed to EKS via helm.
Problem:
When I deploy changes to the running JupyterHub application in EKS, sometimes (depending on the changes) the ELB gets deleted and re-created.
This causes the security group associated with the ELB to also get re-created, along with the ingress rules.
This is not ideal because it is easy to overlook this when deploying changes to JupyterHub/EKS, and a developer might forget to verify the security group rules are still present.
Question:
Is there a more robust place I can enforce this ingress network rule (only allow traffic from VPN server) ?
Two thoughts I had, but are not ideal:
Use a NACL. This won't really work, because it adds a lot of overhead managing the CIDRs, due to the fact that NACLs are stateless and operate at the subnet level.
I thought to add my ingress rules to the security group associated with the EKS worker nodes instead, but this won't work due to a similar problem. When you deploy an update to JupyterHub/EKS, and if the ELB gets replaced, an "allow all traffic" ingress rule is implicitly added to the EKS worker node security group (allowing all traffic from the ELB). This would override my ingress rule.
It sounds like you're using a LoadBalanced service for JupyterHub. A better way of handling ingress into your cluster would be to use a single ingress controller (like the nginx ingress controller) - deployed via a different helm chart.
Then, deploy JupyterHub's helm chart but use a custom value passed into the release with the --set parameter to tell it to use a ClusterIP service instead of LoadBalancer type. This way, changes to your JupyterHub release that might re-create the ClusterIP service won't matter - as you'll be using Ingress Rules for the Ingress Controller to manage ingress for JupyterHub instead now.
Use the ingress rule feature of the JupyterHub helm chart to configure ingress rules for your nginx ingress controller: see docs here
The LoadBalancer generated by the Nginx Ingress Controller will instead remain persistent/stable and you can define your Security Group ingress rules on that separately.
Effectively you're decoupling ingress into EKS apps from your JupyterHub app by using the Ingress Controller + ingress rules pattern of access.
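As an illustration of that decoupling, a values override for the JupyterHub (zero-to-jupyterhub) chart might look roughly like the sketch below; the exact keys depend on the chart version, and the hostname and ingress class are placeholders:

    # values.yaml override for the JupyterHub release (chart-version dependent)
    proxy:
      service:
        type: ClusterIP            # no per-release ELB is created anymore
    ingress:
      enabled: true
      ingressClassName: nginx       # served by the separately deployed ingress controller
      hosts:
        - jupyterhub.example.internal

You would pass this with --values (or equivalent --set flags) when installing or upgrading the release, so re-deploys of JupyterHub never touch the load balancer again.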
On the subject of ingress and LoadBalancers
With EKS/Helm and load-balanced services, the default is to create an internet-facing Elastic Load Balancer.
There are some extra annotations you can add to the service definition that will instead create it as an internal facing LoadBalancer.
This might be preferable to you for your ingress controller (or anywhere else you want to use LoadBalancer services), as it doesn't immediately expose the app to the open internet. You mentioned you already have VPN access into your VPC network, so users can still VPN in, and then hit the LoadBalancer hostname.
I wrote up a guide a while back on installing the nginx ingress controller here. It talks about doing this with DigitalOcean Kubernetes, but it is still relevant for EKS as it's just a helm chart.
There is another post I did which talks about some extra configuration annotations you can add to your ingress controller service that automatically create the specific port-range ingress security group rules at the same time as the load balancer. (This is another option for you if you find that each time it gets created you have to manually update the ingress rules on the security group.) See the post on customising the Ingress Controller load balancer and port ranges for ingress here.
The config values you want for auto-configuring your LoadBalancer ingress source ranges and setting it to internal can be set with:
controller.service.loadBalancerSourceRanges
service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0
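Put together as chart values for the nginx ingress controller, that might look roughly like this; the VPN client CIDR is a placeholder and the internal-annotation value varies by Kubernetes version:

    controller:
      service:
        loadBalancerSourceRanges:
          - 10.8.0.0/16          # placeholder: only VPN clients may reach the LB
        annotations:
          service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0   # make the ELB internal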
Hope that helps!

Make k8s services available via ingress on an AWS cluster created with kops

After trying kubernetes on a few KVMs with kubeadm, I'd like to setup a proper auto-scalable cluster on AWS with kops and serve a few websites with it.
The mind-blowing magic of kops create cluster ... gives me a bunch of ec2 instances, makes the k8s API available at test-cluster.example.com and even configures my local ~/.kube/config so that I can kubectl apply -f any-stuff.yaml right away. This is just great!
I'm at the point when I can send my deployments to the cluster and configure the ingress rules – all this stuff is visible in the dashboard. However, at the moment it's not very clear how I can associate the nodes in my cluster with the domain names I've got.
In my small KVM k8s setup I simply install Traefik and expose it on ports :80 and :443. Then I go to my DNS settings and add a few A records, which point to the public IP(s) of my cluster node(s). In AWS, there is a dynamic set of VMs, some of which may go down when the cluster is not under heavy load. So it feels like I need to use an external load balancer, given that my Traefik helm chart service exposes two random ports instead of the fixed :80 and :443, but I'm not sure.
What are the options? What is their cost? What should go to DNS records in case if the domains are not controlled by AWS?
Configuring your service as a LoadBalancer service is not sufficient for your cluster to set up the actual load balancer; you also need an ingress controller running, such as the nginx one below.
You should add the kops nginx ingress addon: https://github.com/kubernetes/kops/tree/master/addons/ingress-nginx
In this case the nginx ingress controller on AWS will find the ingress and create an AWS ELB for it. I am not sure of the cost, but it's worth it.
You can also consider NodePorts, which you can access via the nodes' public IPs and the node port (be sure to add a rule to your security group).
You can also consider the newer AWS ELB v2 (ALB), which supports HTTP/2 and websockets. You can use the alb-ingress-controller https://github.com/coreos/alb-ingress-controller for this.
Finally, if you want SSL (which you should), consider the kube-lego project, which will automate getting SSL certs for you: https://github.com/jetstack/kube-lego
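Once the ingress controller is running, a host-based Ingress roughly like the sketch below is what ties your domains to in-cluster services; the hostname and service name are placeholders, and the API version shown is the current networking.k8s.io/v1 form:

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: website
    spec:
      ingressClassName: nginx
      rules:
        - host: www.example.com
          http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: website        # your existing Service
                    port:
                      number: 80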
In my case I used nginx-ingress-controller. I think the setup with Traefik will be the same.
1) Set the traefik service type to LoadBalancer.
Kubernetes will then create an ELB for it.
2) Set a CNAME or ALIAS record in Route 53 pointing to the ELB hostname.
You can use https://github.com/kubernetes-incubator/external-dns to synchronize exposed services and ingresses with Route 53.
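With external-dns running, the Route 53 record can be created automatically from an annotation on the Service; a minimal sketch, where the hostname and selector are placeholders:

    apiVersion: v1
    kind: Service
    metadata:
      name: traefik
      annotations:
        external-dns.alpha.kubernetes.io/hostname: www.example.com   # record external-dns will manage
    spec:
      type: LoadBalancer
      selector:
        app: traefik
      ports:
        - name: http
          port: 80
          targetPort: 80
        - name: https
          port: 443
          targetPort: 443

If the domain is not hosted in Route 53, you can instead create the CNAME at your external DNS provider, pointing at the ELB hostname shown by kubectl get svc.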