I am using a yaml config to create a network load balancer in AWS using kubectl.
The load balancer is created successfully and the target groups are attached correctly.
As part of the settings I have passed the annotations required for AWS, but not all of them are applied when I look at the load balancer in the AWS console.
The name is not set and the access logs are not enabled; I get a load balancer with a random alphanumeric name.
apiVersion: v1
kind: Service
metadata:
  name: test-nlb-service
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-name: test-nlb # not set
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
    service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
    service.beta.kubernetes.io/aws-load-balancer-ssl-negotiation-policy: ELBSecurityPolicy-2016-08
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:eu-central-1:***********:certificate/*********************
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "tcp,http"
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: 443,8883
    service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
    service.beta.kubernetes.io/aws-load-balancer-additional-resource-tags: "environment=dev,app=test, name=test-nlb-dev"
    service.beta.kubernetes.io/aws-load-balancer-access-log-enabled: "true" # not set
    service.beta.kubernetes.io/aws-load-balancer-access-log-emit-interval: "15" # not set
    service.beta.kubernetes.io/aws-load-balancer-access-log-s3-bucket-name: "random-bucket-name" # not set
    service.beta.kubernetes.io/aws-load-balancer-access-log-s3-bucket-prefix: "random-bucket-name/dev/test-nlb-dev" # not set
  labels:
    app: test
spec:
  ports:
    - name: mqtt
      protocol: TCP
      port: 443
      targetPort: 8080
    - name: websocket
      protocol: TCP
      port: 8883
      targetPort: 1883
  type: LoadBalancer
  selector:
    app: test
Can anyone point out what the issue could be here? I am using kubectl v1.19 and Kubernetes v1.19.
I think this is a version problem.
I assume you are running the in-tree cloud controller and not an external one (see here).
The annotation service.beta.kubernetes.io/aws-load-balancer-name is not present even in the master branch of kubernetes.
That does not explain why the other annotations do not work, though. In fact,
here you can see which annotations are supported by Kubernetes 1.19.12, and the other annotations you mention as not working are listed in the sources.
You might find more information in the controller-manager logs.
My suggestion is to disable the in-tree cloud controller in the controller manager and run the standalone version.
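For reference, the standalone AWS Load Balancer Controller does support naming the NLB and enabling access logs, but it uses its own set of annotations. A rough sketch of how the same Service could be annotated under that controller (annotation names are taken from the AWS Load Balancer Controller docs, so verify them against the version you install; the bucket and prefix values are just the ones from your manifest):

apiVersion: v1
kind: Service
metadata:
  name: test-nlb-service
  annotations:
    # these are processed by the AWS Load Balancer Controller, not the in-tree provider
    service.beta.kubernetes.io/aws-load-balancer-type: external
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: instance
    service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
    service.beta.kubernetes.io/aws-load-balancer-name: test-nlb
    # access logging is configured through load balancer attributes
    service.beta.kubernetes.io/aws-load-balancer-attributes: access_logs.s3.enabled=true,access_logs.s3.bucket=random-bucket-name,access_logs.s3.prefix=dev/test-nlb-dev
spec:
  type: LoadBalancer
  selector:
    app: test
  ports:
    - name: mqtt
      protocol: TCP
      port: 443
      targetPort: 8080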
Related
I deployed the Kong ingress controller on an AWS EKS cluster with the Fargate option.
I am unable to access our application over the internet on the HTTP port.
I keep getting ERR_CONNECTION_TIMED_OUT in the browser.
I followed the Kong deployment steps given at
https://github.com/Kong/kubernetes-ingress-controller/blob/master/docs/deployment/eks.md
The kong-proxy service is created without issue, yet its EXTERNAL-IP is still showing as pending.
We are able to access our application on the internal network (by logging on to a running pod) via the kong-proxy CLUSTER-IP without any problem using curl.
An NLB is also created automatically in the AWS console when we create the kong-proxy service. We are using its DNS name to try to connect from the internet.
Kindly help me understand what the problem could be.
My kong-proxy YAML is:
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
  name: kong-proxy
  namespace: kong
spec:
  externalTrafficPolicy: Local
  ports:
  - name: proxy
    port: 80
    protocol: TCP
    targetPort: 80
  - name: proxy-ssl
    port: 443
    protocol: TCP
    targetPort: 443
  selector:
    app: ingress-kong
  type: LoadBalancer
I don't think it's supported yet, as per https://github.com/aws/containers-roadmap/issues/617
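That roadmap issue covers NLBs with instance targets, which can't work here because Fargate pods don't run on EC2 worker nodes you can register. If you install the standalone AWS Load Balancer Controller, an NLB in IP target mode can reach Fargate pods directly; a minimal sketch of the kong-proxy Service under that assumption (annotation names are from the AWS Load Balancer Controller docs, so double-check them for your controller version):

apiVersion: v1
kind: Service
metadata:
  name: kong-proxy
  namespace: kong
  annotations:
    # requires the AWS Load Balancer Controller; IP targets are what work with Fargate
    service.beta.kubernetes.io/aws-load-balancer-type: external
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
    service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
spec:
  type: LoadBalancer
  selector:
    app: ingress-kong
  ports:
    - name: proxy
      port: 80
      protocol: TCP
      targetPort: 80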
I have a k8s cluster deployed on AWS.
I created a LoadBalancer service with the annotation:
service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
Now I see that k8s created a new ELB attached to a security group with an inbound rule for port 443 open to 0.0.0.0/0.
I looked for an additional annotation that manages the allowed source IPs (predefined IPs instead of 0.0.0.0/0) and couldn't find one.
Do you know if there is an option to manage this as part of the annotations?
Make use of loadBalancerSourceRanges in the LoadBalancer Service resource, as described here.
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  ports:
    - port: 8765
      targetPort: 9376
  selector:
    app: example
  type: LoadBalancer
  loadBalancerSourceRanges:
    - 10.0.0.0/8
Update:
In the case of nginx-ingress you can use the nginx.ingress.kubernetes.io/whitelist-source-range annotation.
For more info check this.
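For illustration, a minimal Ingress sketch with that annotation (the host, service name and CIDRs here are placeholders):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp
  annotations:
    # only clients from these CIDRs are allowed; everyone else gets a 403
    nginx.ingress.kubernetes.io/whitelist-source-range: "10.0.0.0/8,192.168.0.0/16"
spec:
  ingressClassName: nginx
  rules:
    - host: myapp.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: myapp
                port:
                  number: 8765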
I’m trying to configure SSL for an AWS Load Balancer for my AWS EKS cluster. The load balancer is proxying to a Traefik instance running on my cluster. This works fine over HTTP.
Then I created my certificate in AWS Certificate Manager, copied the ARN, and followed this part of the documentation: Services - Kubernetes
But the certificate is not linked to the listeners on the AWS load balancer. I can't find further documentation or a working example on the web. Can anyone point me to one?
The LoadBalancer configuration looks like this:
apiVersion: v1
kind: Service
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"traefik-ingress-service","namespace":"kube-system"},"spec":{"ports":[{"name":"web","port":80,"targetPort":80},{"name":"admin","port":8080,"targetPort":8080},{"name":"secure","port":443,"targetPort":443}],"selector":{"k8s-app":"traefik-ingress-lb"},"type":"LoadBalancer"}}
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:eu-north-1:000000000:certificate/e386a77d-26d9-4608-826b-b2b3a5d1ec47
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
  creationTimestamp: 2019-01-14T14:33:17Z
  name: traefik-ingress-service
  namespace: kube-system
  resourceVersion: "10172130"
  selfLink: /api/v1/namespaces/kube-system/services/traefik-ingress-service
  uid: e386a77d-26d9-4608-826b-b2b3a5d1ec47
spec:
  clusterIP: 10.100.115.166
  externalTrafficPolicy: Cluster
  ports:
  - name: web
    port: 80
    protocol: TCP
    targetPort: 80
  - name: admin
    port: 8080
    protocol: TCP
    targetPort: 8080
  - name: secure
    port: 443
    protocol: TCP
    targetPort: 80
  selector:
    k8s-app: traefik-ingress-lb
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
    - hostname: e386a77d-26d9-4608-826b-b2b3a5d1ec47.eu-north-1.elb.amazonaws.com
Kind Regards and looking forward to your answers.
I had a similar issue, since I'm using EKS v1.14 (with nginx-ingress-controller) and a Network Load Balancer; according to Kubernetes, this is only possible since Kubernetes v1.15 - GitHub Issue. And as of 10 March 2020, Amazon EKS supports Kubernetes version 1.15.
So if it's still relevant, read more about it here - How do I terminate HTTPS traffic on Amazon EKS workloads with ACM?.
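The pattern in that article is to terminate TLS on the load balancer with the ACM certificate and send plain traffic to the ingress controller. A rough sketch of the relevant Service annotations, assuming a standard ingress-nginx install (the certificate ARN, namespace and selector below are placeholders to adjust):

apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
    # TLS terminates at the load balancer using the ACM certificate
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:eu-north-1:111111111111:certificate/your-cert-id
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: 80
    - name: https
      port: 443
      protocol: TCP
      targetPort: 80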
I ran into the same problem and discovered that the issue was that the certificate type that I chose (ECDSA 384-bit) wasn't compatible with the Classic Load Balancer (but was supported by the new Application Load Balancer). When I switched to an RSA certificate it worked correctly.
I want to have my own Kubernetes playground within AWS, which currently involves 2 EC2 instances and an Elastic Load Balancer.
I use Traefik as my ingress controller, which has easily allowed me to set up automatic subdomains and TLS for some of the deployments (deployment.k8s.mydomain.com).
I love this, but as a student the load balancer is just too much. I'm having to kill the cluster when I'm not using it, but ideally I want this up full time.
Is there a way to keep my setup (the cool domain/TLS stuff) but drop the need for an ELB?
If you want to drop the use of a LoadBalancer, you still have another option: expose the ingress controller via a Service with externalIPs, or via a NodePort Service (a NodePort variant is sketched after the example and note below).
kind: Service
apiVersion: v1
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app: ingress-nginx
spec:
  selector:
    app: ingress-nginx
  ports:
  - name: http
    port: 80
    targetPort: http
  - name: https
    port: 443
    targetPort: http
  externalIPs:
  - 80.11.12.10
You can then create a DNS record (e.g. an A record for deployment.k8s.mydomain.com) pointing to the external IP of your cluster node. Additionally, you should ensure that the local firewall rules on your node allow access to the open port.
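And the NodePort variant mentioned above, as a minimal sketch (the nodePort values are arbitrary examples in the default 30000-32767 range); you would then point DNS at the node's public IP and open those ports in the firewall/security group:

kind: Service
apiVersion: v1
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app: ingress-nginx
spec:
  type: NodePort
  selector:
    app: ingress-nginx
  ports:
  - name: http
    port: 80
    targetPort: http
    nodePort: 30080 # example value
  - name: https
    port: 443
    targetPort: https
    nodePort: 30443 # example value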
Route 53 DNS load balancing? I'm sure there must be a way: https://www.virtualtothecore.com/load-balancing-services-with-aws-route53-dns-health-checks/
I am trying to deploy a service in Kubernetes exposed through a Network Load Balancer. I am aware this is an alpha feature at the moment, but I am running some tests. I have a deployment definition that is working fine as is. My service definition without the NLB annotation looks something like this and is working fine:
kind: Service
apiVersion: v1
metadata:
  name: service1
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
    service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0
spec:
  type: LoadBalancer
  selector:
    app: some-app
  ports:
    - port: 80
      protocol: TCP
However, when I switch to NLB, even when the load balancer is created and configured "correctly", the target in the AWS target group always appears unhealthy and I cannot access the service via HTTP. This is the service definition:
kind: Service
apiVersion: v1
metadata:
  name: service1
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
    service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
spec:
  type: LoadBalancer
  selector:
    app: some-app
  ports:
    - port: 80
      protocol: TCP
  externalTrafficPolicy: Local
It seems there was a rule missing in the k8s nodes' security group, since the NLB forwards the client IP.
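In other words, because the NLB preserves the client source IP, the worker-node security group has to allow the clients (and the NLB health checks) to reach the node ports. With the in-tree provider the allowed client CIDRs can be declared on the Service itself via loadBalancerSourceRanges, which the controller translates into security group rules; a hedged sketch on top of the definition above:

kind: Service
apiVersion: v1
metadata:
  name: service1
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local
  selector:
    app: some-app
  ports:
    - port: 80
      protocol: TCP
  # CIDRs allowed to reach the service; the cloud provider opens the matching
  # node ports on the worker-node security group for these ranges
  loadBalancerSourceRanges:
    - 0.0.0.0/0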
I don't think NLB is the problem.
externalTrafficPolicy: Local
is not supported by kops on AWS, and there are issues with some other K8s distros that run on AWS, due to some AWS limitation.
Try changing it to
externalTrafficPolicy: Cluster
There's an issue with the source IP being that of the load balancer instead of the true external client; it can be worked around by using the proxy protocol annotation on the service plus adding some configuration to the ingress controller.
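A hedged sketch of that workaround with ingress-nginx (the Service and ConfigMap names here assume a standard ingress-nginx deployment, so adjust them to your install):

apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
    # ask the load balancer to prepend the PROXY protocol header
    service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
spec:
  type: LoadBalancer
  externalTrafficPolicy: Cluster
  selector:
    app: ingress-nginx
  ports:
    - port: 80
      protocol: TCP
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
data:
  # make nginx parse the PROXY protocol header so the real client IP is restored
  use-proxy-protocol: "true"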
However, there is a second issue: while you can technically hack your way around it and force it to work, it's usually not worth the bother.
externalTrafficPolicy: Local
creates a NodePort /healthz endpoint so the LB sends traffic to the subset of nodes with service endpoints instead of to all worker nodes. It's broken on initial provisioning and the reconciliation loop is broken as well.
https://github.com/kubernetes/kubernetes/issues/80579
^describes the problem in more depth.
https://github.com/kubernetes/kubernetes/issues/61486
^describes a workaround to force it to work using a kops hook
but honestly, you should just stick to
externalTrafficPolicy: Cluster as it's always more stable.
There was a bug in the NLB security groups implementation. It's fixed in 1.11.7, 1.12.5, and probably the next 1.13 patch.
https://github.com/kubernetes/kubernetes/pull/68422