I am trying to expose a service to the outside world using a LoadBalancer-type Service.
For that, I have followed this doc:
https://aws.amazon.com/premiumsupport/knowledge-center/eks-kubernetes-services-cluster/
My loadbalancer.yaml looks like this:
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: LoadBalancer
  selector:
    app: nginx
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
But the load balancer is not being created as expected; I am getting the following error:
Warning SyncLoadBalancerFailed 8s (x3 over 23s) service-controller Error syncing load balancer: failed to ensure load balancer: could not find any suitable subnets for creating the ELB
It seems to be caused by an issue with the subnet tags, but I already have the required tags on my subnets:
kubernetes.io/cluster/<cluster-name>: owned
kubernetes.io/role/elb: 1
Still, I keep getting the error "could not find any suitable subnets for creating the ELB".
By default, AWS EKS only attaches load balancers to public subnets. In order to launch one in a private subnet you need to not only tag your subnets (which it looks like you did) but also annotate your load balancer:
service.beta.kubernetes.io/aws-load-balancer-internal: "true"
You can find more information here.
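For reference, a minimal sketch of a Service carrying that annotation (it reuses the nginx example from the question):

apiVersion: v1
kind: Service
metadata:
  name: nginx-service
  annotations:
    # Ask the AWS cloud provider for an internal ELB instead of an internet-facing one
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  selector:
    app: nginx
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80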
For people who may reach this question: I faced the same error, but the problem was really simple.
The tag with key kubernetes.io/cluster/<cluster-name> had the wrong cluster name, because the automation that deployed it was misconfigured.
In EKS 1.16, I needed an internet-facing NLB.
The root cause in EKS is that you haven't selected a public subnet while creating the cluster.
After creating the cluster, EKS does not currently allow you to update its subnets.
To resolve the issue, I performed the steps below:
1. Created a public subnet in the same VPC as the EKS cluster.
2. Added a route to an internet gateway (IGW) in the route tables of the newly created public subnets.
3. Added the below tag to the public subnets:
kubernetes.io/cluster/<EKSClusterName>: shared
Note: Replace the <EKSClusterName> placeholder with the name of your EKS cluster.
This resolved my issue.
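If it helps, here is a sketch of the tagging step with the AWS CLI; the subnet ID is a placeholder, and my-cluster stands in for your EKS cluster name:

# Tag the new public subnet so the cluster can discover it for load balancers
aws ec2 create-tags \
  --resources subnet-0123456789abcdef0 \
  --tags Key=kubernetes.io/cluster/my-cluster,Value=shared \
         Key=kubernetes.io/role/elb,Value=1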
To identify a cluster's subnets, the Kubernetes Cloud Controller Manager (cloud-controller-manager) and the AWS Load Balancer Controller (aws-load-balancer-controller) query that cluster's subnets by using the following tag as a filter:
Key: kubernetes.io/cluster/cluster-name, Value: shared or owned
Choose the appropriate option for tagging your subnets:
For public and private subnets used by load balancer resources
Tag all public and private subnets that your cluster uses for load balancer resources with the following key-value pair:
Key: kubernetes.io/cluster/cluster-name Value: shared
Replace cluster-name with the name of your Amazon EKS cluster. The shared value allows more than one cluster to use the subnet.
For private subnets used by internal load balancers
To allow Kubernetes to use your private subnets for internal load balancers, tag all private subnets in your VPC with the following key-value pair:
Key: kubernetes.io/role/internal-elb Value: 1
For public subnets used by external load balancers
To allow Kubernetes to use only tagged subnets for external load balancers, tag all public subnets in your VPC with the following key-value pair:
Key: kubernetes.io/role/elb Value: 1
Note: Use the preceding tag instead of using a public subnet in each Availability Zone.
reference: https://aws.amazon.com/premiumsupport/knowledge-center/eks-vpc-subnet-discovery/
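As a quick check (a sketch, assuming the AWS CLI is configured for the right account and region, with my-cluster as a placeholder), you can list which subnets currently carry these tags:

# Subnets tagged for internet-facing load balancers
aws ec2 describe-subnets \
  --filters "Name=tag:kubernetes.io/role/elb,Values=1" \
  --query "Subnets[].SubnetId"

# Subnets tagged as belonging to the cluster
aws ec2 describe-subnets \
  --filters "Name=tag:kubernetes.io/cluster/my-cluster,Values=shared,owned" \
  --query "Subnets[].SubnetId"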
Possibly your subnet is not a public one, i.e. accessible from the internet. This is required for your LoadBalancer to accept traffic from the outside world. In order to make it public, you need to attach an Internet Gateway to your VPC. Check here for more documentation.
In addition to Robert's answer, you can use the following kubectl command to annotate a service:
kubectl annotate svc <service-name> service.beta.kubernetes.io/aws-load-balancer-internal="true"
Related: if you explicitly specify subnet IDs in your ingress.yaml (via the alb.ingress.kubernetes.io/subnets annotation), the AWS Load Balancer Controller uses those subnets directly; otherwise it has to discover subnets through the tags, and without them the subnet is not found.
P.S. You have recently applied aws-nuke to that account.
I wanted to share a situation that I've come across, which is puzzling me.
I found a Load Balancer created by the AWS Load Balancer Controller linked to one public subnet and one private subnet.
My subnets are adequately tagged to be automatically discovered by load balancers and ingress controllers, as described in the documentation and clarified even further in this question:
Private subnets:
kubernetes.io/cluster/<cluster_name>: shared
kubernetes.io/role/internal-elb: 1
Public subnets:
kubernetes.io/cluster/<cluster_name>: shared
kubernetes.io/role/elb: 1
My Load Balancer service in Kubernetes has the necessary annotations to state that it's a public-facing load balancer:
service.beta.kubernetes.io/aws-load-balancer-attributes: access_logs.s3.enabled=true,access_logs.s3.bucket=<bucket_name>,access_logs.s3.prefix=<prefix>
service.beta.kubernetes.io/aws-load-balancer-eip-allocations: <eip_allocation_id_1>, <eip_allocation_id_2>
service.beta.kubernetes.io/aws-load-balancer-ip-address-type: ipv4
service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: instance
service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
service.beta.kubernetes.io/aws-load-balancer-ssl-cert: <certificate_id>
service.beta.kubernetes.io/aws-load-balancer-ssl-ports: '443'
service.beta.kubernetes.io/aws-load-balancer-type: external
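For context, a trimmed sketch of how annotations like these sit on the Service (the name, selector, and ports here are placeholders, and the EIP, S3 access-log, and certificate annotations from the list above are omitted for brevity):

apiVersion: v1
kind: Service
metadata:
  name: public-gateway            # placeholder name
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: external
    service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: instance
    service.beta.kubernetes.io/aws-load-balancer-ip-address-type: ipv4
spec:
  type: LoadBalancer
  selector:
    app: my-app                   # placeholder selector
  ports:
    - name: https
      port: 443
      targetPort: 8443            # placeholder target port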
The manual workaround was to delete the Elastic Load Balancer and the LoadBalancer service in Kubernetes and then re-create the service, which launched a new Elastic Load Balancer, this time with the right subnets attached.
I have installed a private EKS cluster where the attached subnets are private. My requirement is "private EKS with an Istio installation", creating multiple microservices and exposing them within the VPC.
For exposing them within the VPC, I expected the istio-ingressgateway to create an internal ALB, but it is showing "Pending":
istio-ingressgateway LoadBalancer 1xx.x0x.xx.2xx <pending>
My need is to install multiple microservices on different ports using NodePort and expose them via an Istio Gateway.
Any help or views on this would be appreciated.
Thanks!
You have two options. You can use the ALB Ingress Controller and create an internal Ingress object with the annotation:
alb.ingress.kubernetes.io/scheme: "internal"
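A minimal sketch of such an internal Ingress (it assumes the ALB Ingress Controller / AWS Load Balancer Controller is installed; the backend service name and port are placeholders):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: internal-ingress
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internal
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-service   # placeholder backend service
                port:
                  number: 80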
Or create a LoadBalancer Service, which will create an ELB, and add these annotations to the Service:
service.beta.kubernetes.io/aws-load-balancer-internal: Used on the service to indicate that we want an internal ELB.
service.beta.kubernetes.io/aws-load-balancer-security-groups: Used to specify the security groups to be added to ELB created. This replaces all other security groups previously assigned to the ELB.
For example, a minimal sketch of such a Service (the name, selector, ports, and security group ID below are placeholders):
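apiVersion: v1
kind: Service
metadata:
  name: istio-ingressgateway      # placeholder name
  annotations:
    # Provision an internal ELB instead of an internet-facing one
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"
    # Replace the security groups assigned to the ELB (placeholder ID)
    service.beta.kubernetes.io/aws-load-balancer-security-groups: sg-0123456789abcdef0
spec:
  type: LoadBalancer
  selector:
    app: istio-ingressgateway     # placeholder selector
  ports:
    - name: http2
      port: 80
      targetPort: 8080            # placeholder target port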
Also, you need these tags in the VPC subnets:
Key: kubernetes.io/role/internal-elb
Value: 1
For your case, take option 2.
I have two Kubernetes clusters in AWS, each in its own VPC.
Cluster1 in VPC1
Cluster2 in VPC2
I want to make HTTP(S) requests from Cluster1 into Cluster2 through a VPC peering. The peering is set up, and I can currently ping hosts in Cluster2 from hosts in Cluster1.
How can I create a service in Cluster2 that I can connect to from Cluster1? I have experience setting up services using external ELBs and the like, but not for internal traffic as in the scenario above.
You can create an internal LoadBalancer.
All you need to do is create a regular Service of type LoadBalancer and annotate it with the following annotation:
service.beta.kubernetes.io/aws-load-balancer-internal: "true"
Use an internal load balancer:
apiVersion: v1
kind: Service
metadata:
  name: cluster2-service
  namespace: test
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  selector:
    app: my-app          # placeholder: match your pods' labels
  ports:
    - port: 80
      targetPort: 8080   # placeholder: your container port
That will instruct the AWS cloud provider to provision the ELB on a private subnet, which should make the services behind it reachable from the other VPC over the peering.
I created a VPC and an Elastic Beanstalk app. Here is how the Elastic Beanstalk app is configured:
VPC: set to the VPC I created
Load balancer visibility: I set this to public
Load balancer subnets: two public subnets in the VPC
Public IP address: I did not assign a public IP address for the instance
Instance subnets: two private subnets in the VPC
Route table for both public subnets:
Route table for both private subnets:
Security group for the load balancer:
Please let me know if you need any more information! Thank you!
Based on the comments: the issue was caused by an incorrect security group on the ALB. Adjusting the group to allow internet traffic solved the issue.
The security group should allow inbound internet traffic on the load balancer's listener ports (typically TCP 80 and 443 from 0.0.0.0/0).
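A sketch of adding those rules with the AWS CLI (the security group ID is a placeholder):

# Allow HTTP and HTTPS from anywhere to the load balancer's security group
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 80 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 443 --cidr 0.0.0.0/0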