I am trying to deploy a Kubernetes service exposed through an AWS Network Load Balancer (NLB). I am aware this is an alpha feature at the moment, but I am running some tests. I have a deployment definition that works fine as is. My service definition without the NLB annotation looks something like this and works fine:
kind: Service
apiVersion: v1
metadata:
  name: service1
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
    service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0
spec:
  type: LoadBalancer
  selector:
    app: some-app
  ports:
    - port: 80
      protocol: TCP
However, when I switch to NLB, even though the load balancer is created and apparently configured correctly, the targets in the AWS target group always appear unhealthy and I cannot access the service via HTTP. This is the service definition:
kind: Service
apiVersion: v1
metadata:
  name: service1
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
    service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
spec:
  type: LoadBalancer
  selector:
    app: some-app
  ports:
    - port: 80
      protocol: TCP
  externalTrafficPolicy: Local
It seems a rule was missing in the Kubernetes nodes' security group: since the NLB forwards the client IP, the nodes have to allow that traffic (and the NLB health checks) directly rather than only traffic from the load balancer's security group.
I don't think NLB is the problem.
externalTrafficPolicy: Local
is not supported by kops on AWS, and some other Kubernetes distributions that run on AWS hit similar issues, due to an AWS limitation.
Try changing it to
externalTrafficPolicy: Cluster
There's an issue where the source IP seen by your pods is that of the load balancer instead of the real external client; it can be worked around by using the proxy protocol annotation on the service plus some configuration on the ingress controller, as sketched below.
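A minimal sketch of that workaround, assuming the service fronts an nginx-ingress controller (the ConfigMap name and namespace are assumptions that depend on how the controller was installed; the service is the one from the question):

kind: Service
apiVersion: v1
metadata:
  name: service1
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
    # enable PROXY protocol on all load balancer ports so the client IP is preserved
    service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
spec:
  type: LoadBalancer
  externalTrafficPolicy: Cluster
  selector:
    app: some-app
  ports:
    - port: 80
      protocol: TCP
---
# nginx-ingress must be told to expect PROXY protocol (ConfigMap name/namespace assumed)
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
data:
  use-proxy-protocol: "true"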
However, there is a second issue: while you can technically hack your way around it and force it to work, it's usually not worth the bother.
externalTrafficPolicy: Local
creates a NodePort /healthz endpoint so that the load balancer only sends traffic to the subset of nodes that have local endpoints for the service, instead of to all worker nodes. It's broken on initial provisioning, and the reconciliation loop is broken as well.
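For reference, with the Local policy Kubernetes allocates a dedicated health-check node port on the service, and kube-proxy answers /healthz on it only on nodes that have a local endpoint. A sketch of what shows up in the service spec (the port number is made up):

spec:
  type: LoadBalancer
  externalTrafficPolicy: Local
  # auto-allocated; kube-proxy serves /healthz on this port and reports healthy
  # only on nodes that have a local endpoint for the service
  healthCheckNodePort: 32456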
https://github.com/kubernetes/kubernetes/issues/80579
^describes the problem in more depth.
https://github.com/kubernetes/kubernetes/issues/61486
^describes a workaround to force it to work using a kops hook
but honestly, you should just stick to
externalTrafficPolicy: Cluster, as it's generally more stable.
There was a bug in the NLB security groups implementation. It's fixed in Kubernetes 1.11.7 and 1.12.5, and probably the next 1.13 patch.
https://github.com/kubernetes/kubernetes/pull/68422
I just set up a private EKS cluster with ExternalDNS. A service is exposed on a Fargate instance and is accessible via https://IP. The service is furthermore annotated with
external-dns.alpha.kubernetes.io/internal-hostname: duplicate-clearing-dev.aws.ui.loc
A DNS entry is therefore created by ExternalDNS (the Bitnami chart). Yet it routes to all the IP addresses I have running in my EKS cluster instead of only the one IP the service is running on, and I don't know why.
A similar setup with an Ingress worked just fine, where the DNS entry routed to a load balancer.
So my question is whether I am missing some kind of selector to route the DNS entry to the single correct IP.
My service looks like this
apiVersion: v1
kind: Service
metadata:
  name: "service-duplicate-clearing"
  namespace: "duplicate-clearing"
  annotations:
    external-dns.alpha.kubernetes.io/internal-hostname: duplicate-clearing-dev.aws.ui.loc
spec:
  ports:
    - port: 443
      targetPort: 80
      protocol: TCP
  type: NodePort
  selector:
    app: duplicate-clearing
Thanks in advance,
Eric
What I was missing was the following line in the spec:
externalTrafficPolicy: Local
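For completeness, a sketch of the service above with that line added (everything except the externalTrafficPolicy line is from the original manifest; with Local, ExternalDNS should only publish the node(s) actually running the pod):

apiVersion: v1
kind: Service
metadata:
  name: "service-duplicate-clearing"
  namespace: "duplicate-clearing"
  annotations:
    external-dns.alpha.kubernetes.io/internal-hostname: duplicate-clearing-dev.aws.ui.loc
spec:
  externalTrafficPolicy: Local   # limits the published/targeted nodes to those with a local endpoint
  ports:
    - port: 443
      targetPort: 80
      protocol: TCP
  type: NodePort
  selector:
    app: duplicate-clearing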
I deployed the Kong ingress controller on an AWS EKS cluster with the Fargate option.
I am unable to access our application over the internet on the HTTP port.
I keep getting ERR_CONNECTION_TIMED_OUT in the browser.
I followed the Kong deployment steps given at
https://github.com/Kong/kubernetes-ingress-controller/blob/master/docs/deployment/eks.md
The kong-proxy service is created without issue, yet its EXTERNAL-IP is still showing as pending.
We are able to access our application on the internal network (by logging on to a running pod) via the kong-proxy CLUSTER-IP without any problem using curl.
An NLB is also created automatically in the AWS console when we create the kong-proxy service; we are using its DNS name to try to connect from the internet.
Kindly help me understand what could be the problem.
My kong-proxy YAML is:
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
  name: kong-proxy
  namespace: kong
spec:
  externalTrafficPolicy: Local
  ports:
    - name: proxy
      port: 80
      protocol: TCP
      targetPort: 80
    - name: proxy-ssl
      port: 443
      protocol: TCP
      targetPort: 443
  selector:
    app: ingress-kong
  type: LoadBalancer
I don't think this is supported yet, as per https://github.com/aws/containers-roadmap/issues/617
I have a Kubernetes cluster deployed on AWS.
I created a LoadBalancer service with the annotation:
service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
Now I see that Kubernetes created a new ELB attached to a security group with a rule opening port 443 to 0.0.0.0/0.
I looked for an additional annotation that manages the allowed source IPs (predefined IPs instead of 0.0.0.0/0) and couldn't find one.
Do you know if there's an option to manage this via annotations?
Make use of loadBalancerSourceRanges in the LoadBalancer service resource, as described here.
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  ports:
    - port: 8765
      targetPort: 9376
  selector:
    app: example
  type: LoadBalancer
  loadBalancerSourceRanges:
    - 10.0.0.0/8
Update:
In the case of nginx-ingress, you can use the nginx.ingress.kubernetes.io/whitelist-source-range annotation, for example as shown below.
For more info check this.
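A minimal sketch using the networking.k8s.io/v1 Ingress API (the host, service name, port, and CIDRs are placeholders):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp
  annotations:
    # only clients from these CIDRs are allowed through nginx-ingress
    nginx.ingress.kubernetes.io/whitelist-source-range: "10.0.0.0/8,192.168.0.0/16"
spec:
  rules:
    - host: myapp.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: myapp
                port:
                  number: 8765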
I created a service of type LoadBalancer and also configured an SSL certificate for it. Everything works fine, but it does not redirect my HTTP calls to HTTPS unless I manually type https:// before my domain.
Here is my svc.yml
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "True"
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http"
    dns.alpha.kubernetes.io/external: test.example.com
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: xxxxxxxxxxxxxx
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"
  labels:
    app: nginx
spec:
  type: LoadBalancer
  loadBalancerIP:
  ports:
    - port: 80
      name: http
      targetPort: 80
    - port: 443
      name: https
      targetPort: 80
  selector:
    app: nginx
I believe the Kubernetes Service object does not have redirection functionality. It is designed to provide a stable IP (the clusterIP) in front of pods, which have ephemeral IPs, and it gives pods service-discovery functionality in the cluster.
A Kubernetes Service is an abstraction which defines a logical set of Pods and a policy by which to access them - sometimes called a micro-service. The set of Pods targeted by a Service is (usually) determined by a Label Selector (see below for why you might want a Service without a selector).
As an example, consider an image-processing backend which is running with 3 replicas. Those replicas are fungible - frontends do not care which backend they use. While the actual Pods that compose the backend set may change, the frontend clients should not need to be aware of that or keep track of the list of backends themselves. The Service abstraction enables this decoupling.
k8s service
Redirection should happen at the Ingress level (L7) or at the cloud provider's load balancer (L4).
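For example, if an nginx ingress controller sits behind that ELB (an assumption; the host and service name are taken from the question), an L7 redirect could look roughly like the sketch below. Since TLS terminates at the ELB, force-ssl-redirect is the relevant option rather than the default ssl-redirect, and the controller must receive the X-Forwarded-Proto header from the load balancer.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-redirect
  annotations:
    # TLS terminates at the ELB, so force the redirect even though the Ingress has no tls: section
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
spec:
  rules:
    - host: test.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: nginx-svc
                port:
                  number: 80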
I want to have my own Kubernetes playground within AWS, which currently involves 2 EC2 instances and an Elastic Load Balancer.
I use Traefik as my ingress controller which has easily allowed me to set up automatic subdomains and TLS to some of the deployments (deployment.k8s.mydomain.com).
I love this, but as a student the load balancer is just too expensive. I'm having to kill the cluster when I'm not using it, but ideally I want it up full time.
Is there a way to keep my setup (the cool domain/TLS stuff) but drop the need for an ELB?
If you want to drop the use of a LoadBalancer, you still have another option: expose the ingress controller via a Service that uses externalIPs, or via a NodePort Service.
kind: Service
apiVersion: v1
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app: ingress-nginx
spec:
  selector:
    app: ingress-nginx
  ports:
    - name: http
      port: 80
      targetPort: http
    - name: https
      port: 443
      targetPort: https
  externalIPs:
    - 80.11.12.10
You can then create a DNS record for deployment.k8s.mydomain.com (an A record, since it points at an IP) pointing to the external IP of your cluster node. Additionally, ensure that the local firewall rules on your node allow access to the open ports. A NodePort variant is sketched below.
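If you'd rather not pin externalIPs, a NodePort variant of the same service is an alternative sketch (the nodePort values are placeholders and must fall within the cluster's NodePort range, 30000-32767 by default):

kind: Service
apiVersion: v1
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app: ingress-nginx
spec:
  type: NodePort
  selector:
    app: ingress-nginx
  ports:
    - name: http
      port: 80
      targetPort: http
      nodePort: 30080   # placeholder
    - name: https
      port: 443
      targetPort: https
      nodePort: 30443   # placeholder

You would then point DNS at the node's public IP and either use ports 30080/30443 directly or put something in front that maps 80/443 to those node ports.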
Route 53 DNS load balancing? I'm sure there must be a way: https://www.virtualtothecore.com/load-balancing-services-with-aws-route53-dns-health-checks/