I have installed a private EKS cluster whose attached subnets are all private. My requirement is "Private EKS with Istio installation": create multiple microservices and expose them within the VPC.
To expose them within the VPC, I expected the 'istio-ingressgateway' service to create an internal load balancer, but it is stuck showing "Pending":
istio-ingressgateway LoadBalancer 1xx.x0x.xx.2xx <pending>
My need is to install multiple microservices on different ports using "NodePort" services and expose them via the Istio Gateway.
Any help or views on this would be appreciated.
Thanks!
You have two options. You can use the ALB Ingress Controller, create an internal Ingress object, and add the annotation:
alb.ingress.kubernetes.io/scheme: "internal"
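A minimal sketch of such an internal Ingress, assuming a hypothetical backend Service named my-service on port 80 (adjust the annotations for your setup):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: internal-ingress
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: "internal"
    # "instance" target type routes to NodePort services
    alb.ingress.kubernetes.io/target-type: instance
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-service
                port:
                  number: 80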
Or create a Service of type LoadBalancer, which will create an ELB. Add these annotations to the Service:
service.beta.kubernetes.io/aws-load-balancer-internal: Used on the service to indicate that we want an internal ELB.
service.beta.kubernetes.io/aws-load-balancer-security-groups: Used to specify the security groups to be added to ELB created. This replaces all other security groups previously assigned to the ELB.
For example, a Service with those annotations might look like the following (the name, selector, ports, and security group ID are placeholders):
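apiVersion: v1
kind: Service
metadata:
  name: internal-elb-example
  annotations:
    # request an internal (VPC-only) ELB
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"
    # optional: replace the security groups attached to the ELB (placeholder ID)
    service.beta.kubernetes.io/aws-load-balancer-security-groups: "sg-0123456789abcdef0"
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP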
Also, you need these tags in the VPC subnets:
Key: kubernetes.io/role/internal-elb
Value: 1
For your case, take option 2.
We're serving our product on AWS EKS, where the service is created with type LoadBalancer. The ELB address is assigned by AWS, and this is what is shared with the clients.
However, when we re-deploy the service while making changes/improvements, the ELB address changes. Since this forces us to frequently send mails to all the clients, we need a dedicated IP mapped to the load balancer that does not change when the service is re-deployed.
Any existing AWS solution or a nice pointer to solve this situation would be helpful.
You can use Elastic IPs, as described in How to provide elastic ip to aws eks for external service with type loadbalancer? and in https://docs.aws.amazon.com/es_es/eks/latest/userguide/network-load-balancing.html, by adding the annotation service.beta.kubernetes.io/aws-load-balancer-eip-allocations: eipalloc-xxxxxxxxxxxxxxxxx,eipalloc-yyyyyyyyyyyyyyyyy to the NLB:
service.beta.kubernetes.io/aws-load-balancer-eip-allocations: eipalloc-05666791973f6a240
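A minimal sketch of an NLB Service using that annotation (the service name and selector are placeholders, and the allocation IDs must correspond one-to-one to the subnets/AZs the NLB uses):
apiVersion: v1
kind: Service
metadata:
  name: my-nlb-service
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
    # one EIP allocation ID per subnet/AZ used by the NLB
    service.beta.kubernetes.io/aws-load-balancer-eip-allocations: eipalloc-xxxxxxxxxxxxxxxxx,eipalloc-yyyyyyyyyyyyyyyyy
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP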
Another way is to use a domain name (my way). Use the annotations from https://github.com/kubernetes-sigs/external-dns/blob/master/docs/tutorials/aws.md to link your Service or Ingress to a DNS name, and configure external-dns to use your DNS provider, such as Route53.
For example:
---
apiVersion: v1
kind: Service
metadata:
  name: ambassador
  namespace: ambassador
  annotations:
    external-dns.alpha.kubernetes.io/hostname: 'myserver.mydomain.com'
Every time your LoadBalancer's address changes, the DNS record will be updated with the correct value.
In order to have better control over exposed resources, you can use an Ingress controller such as the AWS Load Balancer Controller: https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.3/
With it, you'll be able to re-use the same ALB for multiple Kubernetes services using the alb.ingress.kubernetes.io/group.name annotation. It will create multiple listener rules based on the Ingress configuration, as sketched below.
(Applicable if you're not restricted by hardcoded firewall rules or similar configurations that require static IPs, which is not recommended today.)
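A rough sketch of two Ingress resources sharing one ALB via group.name (the names, hosts, and backend services are hypothetical):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-one
  annotations:
    kubernetes.io/ingress.class: alb
    # Ingresses with the same group.name share one ALB
    alb.ingress.kubernetes.io/group.name: shared-alb
    alb.ingress.kubernetes.io/scheme: internal
spec:
  rules:
    - host: one.example.internal
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app-one-svc
                port:
                  number: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-two
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/group.name: shared-alb
    alb.ingress.kubernetes.io/scheme: internal
spec:
  rules:
    - host: two.example.internal
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app-two-svc
                port:
                  number: 80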
So, I am very new to using EKS with NLB ingress and managing my own worker nodes using nodegroup (ASG).
If I create an NLB ingress for the cluster and deploy multiple services inside the node group, how does the NLB know that it has to load balance across each service separately?
Generally, when I have not used EKS and created my own k8s cluster, I have spun up one NLB per service. I'm not sure how it would work in the case of EKS with one NLB ingress for the whole cluster and multiple services inside.
Or, do I need to create multiple NLBs somehow?
Any help would be highly appreciated
when I have not used EKS and created my own k8s cluster, I have spun up one NLB per service
AWS EKS is no different on this point. For a Network Load Balancer (NLB), i.e. at the TCP/UDP level, you use a Kubernetes Service of type: LoadBalancer. But there are options, configured via annotations on the Service. The most recent feature is IP mode. See the EKS Network Load Balancing docs for more configuration alternatives.
Example:
kind: Service
apiVersion: v1
metadata:
  name: nlb-ip-svc
  annotations:
    # route traffic directly to pod IPs
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb-ip"
spec:
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
  type: LoadBalancer
  selector:
    app: nginx
If I create an NLB ingress for the cluster and deploy multiple services inside the node group, how does the NLB know that it has to load balance across each service separately?
The load balancer targets the pods matched by the selector: in your Service.
The alternative is to use an Application Load Balancer (ALB), which works at the HTTP/HTTPS level using Kubernetes Ingress resources. The ALB requires an Ingress controller installed in the cluster, and the controller for the ALB was recently updated; see AWS Load Balancer Controller.
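For reference, a minimal ALB-backed Ingress sketch (the backend service name my-http-service is hypothetical; adjust the scheme and target-type annotations for your setup):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: alb-example
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    # "ip" target type sends traffic directly to pod IPs
    alb.ingress.kubernetes.io/target-type: ip
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-http-service
                port:
                  number: 80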
I have two Kubernetes clusters in AWS, each in its own VPC.
Cluster1 in VPC1
Cluster2 in VPC2
I want to make HTTP(S) requests from Cluster1 into Cluster2 through VPC peering. The VPC peering is set up, and I can currently ping hosts in Cluster2 from hosts in Cluster1.
How can I create a service in Cluster2 that I can connect to from Cluster1? I have experience setting up services using external ELBs and the like, but not for internal traffic in the scenario above.
You can create an internal LoadBalancer.
All you need to do is create a regular Service of type LoadBalancer and add the following annotation:
service.beta.kubernetes.io/aws-load-balancer-internal: "true"
Use an internal load balancer.
apiVersion: v1
kind: Service
metadata:
  name: cluster2-service
  namespace: test
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"
That will instruct the cloud provider controller to provision the ELB in a private subnet, which should make services behind it in the cluster reachable from the other VPC.
I am trying to expose a service to the outside world using a LoadBalancer-type service.
For that, I have followed this doc:
https://aws.amazon.com/premiumsupport/knowledge-center/eks-kubernetes-services-cluster/
My loadbalancer.yaml looks like this:
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: LoadBalancer
  selector:
    app: nginx
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
But the load balancer is not being created as expected; I am getting the following error:
Warning SyncLoadBalancerFailed 8s (x3 over 23s) service-controller Error syncing load balancer: failed to ensure load balancer: could not find any suitable subnets for creating the ELB
It seems to be caused by an issue with the subnet tags, but I already have the required tags on my subnets:
kubernetes.io/cluster/<cluster-name> : owned
kubernetes.io/role/elb : 1
But I am still getting the error "could not find any suitable subnets for creating the ELB".
By default, AWS EKS only attaches load balancers to public subnets. In order to launch one in a private subnet, you need to not only tag your subnets (which it looks like you did) but also annotate your load balancer:
service.beta.kubernetes.io/aws-load-balancer-internal: "true"
You can find more information here.
For people who may reach this question: I faced the same error, but the problem was really simple.
The tag with key kubernetes.io/cluster/<cluster-name> had the wrong cluster name, because the automation that deployed it was misconfigured.
In my case, on EKS 1.16, I needed an internet-facing NLB.
The root cause in EKS is that you haven't selected a public subnet while creating the cluster.
After creating the cluster, EKS does not currently allow you to update its subnets (see here).
To resolve the issue, I performed the steps below:
1. Created a public subnet in the same VPC as the EKS cluster.
2. Attached an IGW route in the route tables of the newly created public subnets.
3. Added the below tag to the public subnets:
kubernetes.io/cluster/<EKSClusterName> : shared
Note: Replace the EKSClusterName placeholder in the tag above with your EKS cluster name.
This resolved my issue.
To identify a cluster's subnets, the Kubernetes Cloud Controller Manager (cloud-controller-manager) and AWS Load Balancer Controller (aws-load-balancer-controller) query that cluster's subnets by using the kubernetes.io/cluster/cluster-name tag as a filter.
Choose the appropriate option for tagging your subnets:
For public and private subnets used by load balancer resources
Tag all public and private subnets that your cluster uses for load balancer resources with the following key-value pair:
Key: kubernetes.io/cluster/cluster-name Value: shared
The cluster-name value is for your Amazon EKS cluster. The shared value allows more than one cluster to use the subnet.
For private subnets used by internal load balancers
To allow Kubernetes to use your private subnets for internal load balancers, tag all private subnets in your VPC with the following key-value pair:
Key: kubernetes.io/role/internal-elb Value: 1
For public subnets used by external load balancers
To allow Kubernetes to use only tagged subnets for external load balancers, tag all public subnets in your VPC with the following key-value pair:
Key: kubernetes.io/role/elb Value: 1
Note: Use the preceding tag instead of using a public subnet in each Availability Zone.
reference: https://aws.amazon.com/premiumsupport/knowledge-center/eks-vpc-subnet-discovery/
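As an illustration, these tags can be applied from the AWS CLI (the subnet IDs and cluster name below are placeholders):
# public subnets used by external load balancers
aws ec2 create-tags --resources subnet-aaaa1111 subnet-bbbb2222 \
  --tags Key=kubernetes.io/cluster/my-cluster,Value=shared Key=kubernetes.io/role/elb,Value=1
# private subnets used by internal load balancers
aws ec2 create-tags --resources subnet-cccc3333 subnet-dddd4444 \
  --tags Key=kubernetes.io/cluster/my-cluster,Value=shared Key=kubernetes.io/role/internal-elb,Value=1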
Possibly your subnet is not a public one, i.e. reachable from the internet. This is required for your load balancer to accept traffic from the outside world. In order to make it public, you need to attach an Internet Gateway to your VPC and route internet-bound traffic through it. Check here for more documentation.
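Roughly, that can be done from the AWS CLI (the VPC, gateway, and route table IDs are placeholders):
# create an Internet Gateway and attach it to the VPC
aws ec2 create-internet-gateway
aws ec2 attach-internet-gateway --internet-gateway-id igw-0123456789abcdef0 --vpc-id vpc-0123456789abcdef0
# add a default route to the public subnet's route table via the IGW
aws ec2 create-route --route-table-id rtb-0123456789abcdef0 \
  --destination-cidr-block 0.0.0.0/0 --gateway-id igw-0123456789abcdef0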
In addition to Robert's answer, you can use the following kubectl command to annotate a service:
kubectl annotate svc <service-name> service.beta.kubernetes.io/aws-load-balancer-internal="true"
Context:
I have an EKS cluster (EKS is AWS's managed Kubernetes service).
I deploy an application (JupyterHub) to this EKS cluster via Helm.
I have a VPN server.
Users of my application (JupyterHub on EKS) must connect to the VPN server first before they access the application.
I enforce this by removing the 0.0.0.0/0 "allow all" ingress rule on the elastic load balancer, and adding an ingress rule that allows traffic from the VPN server only.
The elastic load balancer referenced above is created implicitly by the JupyterHub application that gets deployed to EKS via helm.
Problem:
When I deploy changes to the running JupyterHub application in EKS, sometimes (depending on the changes) the ELB gets deleted and re-created.
This causes the security group associated with the ELB to also get re-created, along with the ingress rules.
This is not ideal because it is easy to overlook this when deploying changes to JupyterHub/EKS, and a developer might forget to verify the security group rules are still present.
Question:
Is there a more robust place where I can enforce this ingress network rule (only allow traffic from the VPN server)?
Two thoughts I had, though neither is ideal:
Use a NACL. This won't really work, because it adds a lot of overhead managing the CIDRs, due to the fact that a NACL is stateless and operates at the subnet level.
I thought about adding my ingress rules to the security group associated with the EKS worker nodes instead, but this won't work due to the same problem: when you deploy an update to JupyterHub/EKS and the ELB gets replaced, an "allow all traffic" ingress rule is implicitly added to the EKS worker node security group (allowing all traffic from the ELB). This would override my ingress rule.
It sounds like you're using a LoadBalancer service for JupyterHub. A better way of handling ingress into your cluster would be to use a single ingress controller (like the NGINX ingress controller), deployed via a separate Helm chart.
Then deploy JupyterHub's Helm chart, but pass a custom value into the release with the --set parameter to tell it to use a ClusterIP service instead of the LoadBalancer type. This way, changes to your JupyterHub release that might re-create the ClusterIP service won't matter, as you'll be using Ingress rules for the ingress controller to manage ingress into JupyterHub instead.
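For example, something along these lines (the release name, namespace, values file, and the exact value path for the proxy service type are assumptions that depend on your chart version):
# switch the JupyterHub proxy service from LoadBalancer to ClusterIP
helm upgrade --install jupyterhub jupyterhub/jupyterhub \
  --namespace jhub \
  --values config.yaml \
  --set proxy.service.type=ClusterIP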
Use the ingress feature of the JupyterHub Helm chart to configure Ingress rules for your NGINX ingress controller; see the docs here.
The LoadBalancer generated by the NGINX ingress controller will remain persistent/stable, and you can define your security group ingress rules on it separately.
Effectively, you're decoupling ingress into your EKS apps from the JupyterHub app itself by using the ingress controller + Ingress rules pattern of access.
On the subject of ingress and LoadBalancers
With EKS/Helm and load-balanced services, the default is to create an internet-facing elastic load balancer.
There are some extra annotations you can add to the service definition that will instead create it as an internal-facing LoadBalancer.
This might be preferable to you for your ingress controller (or anywhere else you want to use LoadBalancer services), as it doesn't immediately expose the app to the open internet. You mentioned you already have VPN access into your VPC network, so users can still VPN in, and then hit the LoadBalancer hostname.
I wrote up a guide a while back on installing the NGINX ingress controller here. It talks about doing this with DigitalOcean Kubernetes, but it is still relevant for EKS as it's just a Helm chart.
There is another post I did that covers some extra configuration annotations you can add to your ingress controller service, which automatically create the specific port-range ingress security group rules at the same time as the load balancer. (This is another option for you if you find that each time it gets created you have to manually update the ingress rules on the security group.) See the post on customising the ingress controller load balancer and port ranges for ingress here.
The config values you want for auto-configuring your LoadBalancer ingress source ranges and setting it to internal can be set with the following (sketched below):
controller.service.loadBalancerSourceRanges
service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0
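A rough Helm values sketch for the NGINX ingress controller chart (the VPN CIDR is a placeholder and key paths can vary between chart versions; "true" is the newer equivalent of the 0.0.0.0/0 value for the internal annotation):
controller:
  service:
    annotations:
      # create an internal (VPC-only) load balancer
      service.beta.kubernetes.io/aws-load-balancer-internal: "true"
    # only allow traffic from the VPN server's address range (placeholder CIDR)
    loadBalancerSourceRanges:
      - 10.0.100.0/24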
Hope that helps!