I have a k8s cluster deployed on AWS.
I created a LoadBalancer Service with the following annotation:
service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
Now I see that k8s created a new ELB attached to a security group with a rule for port 443 open to 0.0.0.0/0.
I looked for an additional annotation that manages the source IPs (predefined IPs instead of 0.0.0.0/0) but couldn't find one.
Do you know if there is an option to manage this through annotations?
Make use of loadBalancerSourceRanges in the LoadBalancer Service resource, as described here.
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  ports:
  - port: 8765
    targetPort: 9376
  selector:
    app: example
  type: LoadBalancer
  loadBalancerSourceRanges:
  - 10.0.0.0/8
Update:
In the case of nginx-ingress, you can use the nginx.ingress.kubernetes.io/whitelist-source-range annotation.
For more info check this.
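For example, a minimal Ingress sketch using that annotation (the host is a placeholder, the backend reuses the myapp Service from the example above, and on older clusters the apiVersion would be extensions/v1beta1):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp
  annotations:
    # Only clients from these CIDRs may reach the backend; others receive 403.
    nginx.ingress.kubernetes.io/whitelist-source-range: "10.0.0.0/8"
spec:
  ingressClassName: nginx  # assumes the NGINX ingress class is named "nginx"
  rules:
  - host: myapp.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: myapp
            port:
              number: 8765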
Related
I am using a yaml config to create a network load balancer in AWS using kubectl.
The load balancer is created successfully and the target groups are attached correctly.
As part of the settings, I have passed the annotations required for AWS, but not all of the annotations are applied when I look at the load balancer in the AWS console.
The name is not set and the load balancer access logs are not enabled; I get a load balancer with a random alphanumeric name.
apiVersion: v1
kind: Service
metadata:
  name: test-nlb-service
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-name: test-nlb # not set
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
    service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
    service.beta.kubernetes.io/aws-load-balancer-ssl-negotiation-policy: ELBSecurityPolicy-2016-08
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:eu-central-1:***********:certificate/*********************
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "tcp,http"
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: 443,8883
    service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
    service.beta.kubernetes.io/aws-load-balancer-additional-resource-tags: "environment=dev,app=test, name=test-nlb-dev"
    service.beta.kubernetes.io/aws-load-balancer-access-log-enabled: "true" # not set
    service.beta.kubernetes.io/aws-load-balancer-access-log-emit-interval: "15" # not set
    service.beta.kubernetes.io/aws-load-balancer-access-log-s3-bucket-name: "random-bucket-name" # not set
    service.beta.kubernetes.io/aws-load-balancer-access-log-s3-bucket-prefix: "random-bucket-name/dev/test-nlb-dev" # not set
  labels:
    app: test
spec:
  ports:
    - name: mqtt
      protocol: TCP
      port: 443
      targetPort: 8080
    - name: websocket
      protocol: TCP
      port: 8883
      targetPort: 1883
  type: LoadBalancer
  selector:
    app: test
Can anyone point out what the issue could be here? I am using kubectl v1.19 and Kubernetes v1.19.
I think this is a version problem.
I assume you are running the in-tree cloud controller and not an external one (see here).
The annotation service.beta.kubernetes.io/aws-load-balancer-name is not present even in the master branch of Kubernetes.
That does not explain why the other annotations do not work, though. In fact,
here you can see which annotations are supported by Kubernetes 1.19.12, and the other ones you mention as not working are listed in the sources.
You might find more information in the controller-manager logs.
My suggestion is to disable the in-tree cloud controller in the controller manager and run the standalone version.
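For illustration, if you do run the standalone AWS Load Balancer Controller, a minimal Service sketch might look like the following. The controller must already be installed, and these annotations (including aws-load-balancer-name, aws-load-balancer-nlb-target-type, and aws-load-balancer-scheme) are only handled by recent controller versions, so verify them against the controller documentation for your version:

apiVersion: v1
kind: Service
metadata:
  name: test-nlb-service
  annotations:
    # "external" hands this Service to the AWS Load Balancer Controller
    # instead of the in-tree AWS cloud provider.
    service.beta.kubernetes.io/aws-load-balancer-type: external
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: instance
    service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
    # Custom load balancer name; not available in the in-tree implementation.
    service.beta.kubernetes.io/aws-load-balancer-name: test-nlb
spec:
  type: LoadBalancer
  selector:
    app: test
  ports:
  - name: mqtt
    protocol: TCP
    port: 443
    targetPort: 8080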
I just set up a private EKS cluster with ExternalDNS. A Service is exposed on a Fargate instance and accessible via https://IP. The Service is furthermore annotated with
external-dns.alpha.kubernetes.io/internal-hostname: duplicate-clearing-dev.aws.ui.loc
Thus a DNS entry is created by ExternalDNS (Bitnami). Yet it routes to all IP addresses I have running in my EKS cluster instead of the one IP the Service is running on, and I don't know why.
A similar setup with Ingress worked just fine, where the DNS entry routed to a load balancer.
So my question is whether I am missing some kind of selector to route the DNS entry to the single correct IP.
My Service looks like this:
apiVersion: v1
kind: Service
metadata:
  name: "service-duplicate-clearing"
  namespace: "duplicate-clearing"
  annotations:
    external-dns.alpha.kubernetes.io/internal-hostname: duplicate-clearing-dev.aws.ui.loc
spec:
  ports:
    - port: 443
      targetPort: 80
      protocol: TCP
  type: NodePort
  selector:
    app: duplicate-clearing
Thanks in advance,
Eric
What I was missing was the following field in the spec:
externalTrafficPolicy: Local
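For context, here is the Service from the question with that field added (a sketch assembled from the manifest above):

apiVersion: v1
kind: Service
metadata:
  name: "service-duplicate-clearing"
  namespace: "duplicate-clearing"
  annotations:
    external-dns.alpha.kubernetes.io/internal-hostname: duplicate-clearing-dev.aws.ui.loc
spec:
  ports:
    - port: 443
      targetPort: 80
      protocol: TCP
  type: NodePort
  selector:
    app: duplicate-clearing
  # With Local, only nodes that run a ready endpoint of this Service accept the
  # traffic, so ExternalDNS publishes just those node IPs instead of every node.
  externalTrafficPolicy: Local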
I've set up a simple GKE cluster hooked to GCP Traffic Director, following the Traffic Director setup with automatic Envoy injection tutorial.
The next step: how do I map external traffic into the Traffic Director backend service, which is internal only?
Basically, my goal is to have an external load balancer with an IP address that takes outside traffic and routes it to the Traffic Director service mesh to split traffic between different Network Endpoint Groups.
I tried the following:
Create an external load balancer manually in Network Services -> Load Balancing. However, the list of backends does not include the Traffic Director backend service, so I can't create a load balancer that has an external IP and redirects to the internal service mesh.
Install the NGINX ingress controller chart and create an Ingress via YAML that maps to the k8s cluster Service. This creates an external load balancer, but it goes directly to the Service instead of through Traffic Director.
Ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-nginx-ingress
  annotations:
    nginx.ingress.kubernetes.io/proxy-connect-timeout: "60"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "60"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "60"
    nginx.ingress.kubernetes.io/send-timeout: "60"
    nginx.ingress.kubernetes.io/proxy-body-size: 1M
    nginx.ingress.kubernetes.io/use-regex: "true"
spec:
  rules:
  - host: my-host-name.hostname.com
    http:
      paths:
      - path: "/"
        backend:
          serviceName: service-test
          servicePort: 80
Service:
apiVersion: v1
kind: Service
metadata:
  name: service-test
  annotations:
    cloud.google.com/neg: '{"exposed_ports":{"80":{"name": "service-test-neg"}}}'
spec:
  ports:
  - port: 80
    name: service-test
    protocol: TCP
    targetPort: 8000
  selector:
    run: app1
  type: ClusterIP
Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    run: app1
  name: app1
spec:
  replicas: 1
  selector:
    matchLabels:
      run: app1
  template:
    metadata:
      labels:
        run: app1
    spec:
      containers:
      - image: gcr.io/kubernetes-e2e-test-images/serve-hostname-amd64:1.1
        name: app1
        command:
        - /bin/sh
        - -c
        - /serve_hostname -http=true -udp=false -port=8000
        ports:
        - protocol: TCP
          containerPort: 8000
The Deployment and Service above are taken directly from the tutorial.
There is a concept in the official documentation for Handling ingress traffic using a second-level gateway at the edge of your mesh, but it is only conceptual and does not explain how to actually do it.
How do I map external traffic using an external load balancer into a GCP Traffic Director-managed service mesh for advanced traffic configuration into GKE?
Traffic Director is not an endpoint to point to for routing. It is the "control plane" of your service mesh.
So you would configure your routing rules from GCP, and Traffic Director would configure your sidecars as expected. But eventually your Load Balancer should point to an Instance Group or Network Endpoint Group, not to Traffic Director.
EDIT
Traffic Director is not the one getting configured, but the one configuring. It configures the Envoy sidecars. These are L7 proxies, so the URL mapping happens on the proxies.
The endpoint group will be a group of pod IP addresses. Since the pod ranges of the cluster have been added to the subnetwork as alias IP ranges, the VPC can take any IP address from that range, group them, and make them a backend for an HTTP load balancer on GCP.
Basically, Traffic Director is like Istio, but with the control plane offloaded to GCP.
I want to have my own Kubernetes playground within AWS, which currently involves 2 EC2 instances and an Elastic Load Balancer.
I use Traefik as my ingress controller which has easily allowed me to set up automatic subdomains and TLS to some of the deployments (deployment.k8s.mydomain.com).
I love this, but as a student the load balancer is just too expensive. I'm having to kill the cluster when I'm not using it, but ideally I want this up full time.
Is there a way to keep my setup (the cool domain/TLS stuff) but drop the need for an ELB?
If you want to drop the use of a LoadBalancer, you still have another option: expose the ingress controller via a Service using externalIPs or a NodePort.
kind: Service
apiVersion: v1
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app: ingress-nginx
spec:
  selector:
    app: ingress-nginx
  ports:
  - name: http
    port: 80
    targetPort: http
  - name: https
    port: 443
    targetPort: https
  externalIPs:
  - 80.11.12.10
You can then create a DNS record (e.g. deployment.k8s.mydomain.com) pointing to the external IP of your cluster node. Additionally, make sure that the firewall rules on your node allow access to the open ports.
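If you prefer the NodePort route mentioned above instead of externalIPs, a possible sketch looks like this; the nodePort values are arbitrary examples in the default 30000-32767 range, and clients would have to reach the node on those high ports unless you forward 80/443 to them on the host:

kind: Service
apiVersion: v1
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app: ingress-nginx
spec:
  type: NodePort
  selector:
    app: ingress-nginx
  ports:
  - name: http
    port: 80
    targetPort: http
    nodePort: 30080   # example value; must be in the cluster's NodePort range
  - name: https
    port: 443
    targetPort: https
    nodePort: 30443   # example value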
Route 53 DNS load balancing? I'm sure there must be a way: https://www.virtualtothecore.com/load-balancing-services-with-aws-route53-dns-health-checks/
I am trying to deploy a service in Kubernetes available through a Network Load Balancer. I am aware this is an alpha feature at the moment, but I am running some tests. I have a Deployment definition that is working fine as is. My Service definition without the NLB annotation looks something like this and works fine:
kind: Service
apiVersion: v1
metadata:
  name: service1
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
    service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0
spec:
  type: LoadBalancer
  selector:
    app: some-app
  ports:
  - port: 80
    protocol: TCP
However, when I switch to NLB, even when the load balancer is created and configured "correctly", the target in the AWS target group always appears unhealthy and I cannot access the service via HTTP. This is the service definition:
kind: Service
apiVersion: v1
metadata:
  name: service1
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
    service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
spec:
  type: LoadBalancer
  selector:
    app: some-app
  ports:
  - port: 80
    protocol: TCP
  externalTrafficPolicy: Local
It seems there was a rule missing in the k8s nodes' security group, since the NLB forwards the client IP.
I don't think NLB is the problem.
externalTrafficPolicy: Local
is not supported by kops on AWS, and there are issues with some other K8s distros that run on AWS, due to some AWS limitation.
Try changing it to
externalTrafficPolicy: Cluster
There's an issue with the source IP being that of the load balancer instead of the true external client; it can be worked around by using the proxy protocol annotation on the Service plus adding some configuration to the ingress controller.
However, there is a second issue: while you can technically hack your way around it and force it to work, it's usually not worth the bother.
externalTrafficPolicy: Local
creates a NodePort /healthz endpoint so the LB sends traffic only to the subset of nodes with service endpoints instead of all worker nodes. It's broken on initial provisioning, and the reconciliation loop is broken as well.
https://github.com/kubernetes/kubernetes/issues/80579
^describes the problem in more depth.
https://github.com/kubernetes/kubernetes/issues/61486
^describes a workaround to force it to work using a kops hook
But honestly, you should just stick to
externalTrafficPolicy: Cluster as it's always more stable.
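For illustration, a sketch of such a Service combining externalTrafficPolicy: Cluster with the proxy protocol annotation mentioned earlier (the ingress controller behind it must also be configured to accept proxy protocol, which is not shown here; names are placeholders reused from the question):

kind: Service
apiVersion: v1
metadata:
  name: service1
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
    # Ask the NLB to send proxy protocol v2 so the real client IP is preserved;
    # the ingress controller must be configured to parse proxy protocol too.
    service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
spec:
  type: LoadBalancer
  selector:
    app: some-app
  ports:
  - port: 80
    protocol: TCP
  # Cluster (the default) routes through kube-proxy on any node; it avoids the
  # health-check problems described above at the cost of an extra network hop.
  externalTrafficPolicy: Cluster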
There was a bug in the NLB security groups implementation. It's fixed in 1.11.7, 1.12.5, and probably the next 1.13 patch.
https://github.com/kubernetes/kubernetes/pull/68422