I'm having a hard time getting this working with an NLB using the ingress controller:
https://kubernetes.github.io/ingress-nginx/deploy/#network-load-balancer-nlb
Even the subnets are not taking effect here; my configuration is not being passed to the API call that creates the NLB:
kind: Service
apiVersion: v1
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
    service.beta.kubernetes.io/aws-load-balancer-eip-allocations: "eipalloc-07e3afcd4b7b5d644,eipalloc-0d9cb0154be5ab55d,eipalloc-0e4e5ec3df81aa3ea"
    service.beta.kubernetes.io/aws-load-balancer-subnets: "subnet-061f4a497621a7179,subnet-001c2e5df9cc93960"
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
  ports:
    - name: http
      port: 80
      targetPort: http
    - name: https
      port: 443
      targetPort: https
The number of EIP allocations must match the number of subnets in the subnets annotation:
service.beta.kubernetes.io/aws-load-balancer-eip-allocations: eipalloc-xyz, eipalloc-zzz
service.beta.kubernetes.io/aws-load-balancer-subnets: subnet-xxxx, mySubnet
You have 3 allocations but only 2 subnets.
In addition, the annotation
service.beta.kubernetes.io/aws-load-balancer-scheme: "internet-facing"
is missing.
By default this will use the "internal" scheme.
Since you are allocating Elastic IP addresses, I assume you want "internet-facing".
Also, you are using annotations that are meant for the "AWS Load Balancer Controller", while your service is handled by the "AWS cloud provider load balancer controller":
The external value for aws-load-balancer-type is what causes the AWS Load Balancer Controller, rather than the AWS cloud provider load balancer controller, to create the Network Load Balancer. (docs)
You are using service.beta.kubernetes.io/aws-load-balancer-type: nlb, which means that none of the links provided earlier in this answer pertain to your load balancer. The nlb type is handled by the "AWS cloud provider load balancer controller", not the "AWS Load Balancer Controller".
For the "AWS cloud provider load balancer controller", the only documentation reference is this.
So, as it turned out, these annotations are only supported since Kubernetes 1.16, which is "coming soon" on AWS.
The currently supported version is 1.15, which simply ignores those annotations...
Considering that you are using AWS-specific annotations here (service.beta.kubernetes.io/aws-load-balancer-eip-allocations), I assume this is exactly the reason why it does not work in your case.
As a workaround, I would advise:
Create a custom post-deployment script that re-configures the newly-created LoadBalancer after each Kubernetes Service update.
Switch to something more conventional, like an ELB with your container and Auto Scaling groups (that's what we did).
Set up your own Kubernetes controller (a super-hard thingie, which will become completely obsolete and basically a waste of time as soon as 1.16 is officially out). See this how-to.
Wait...
Official statement:
https://docs.aws.amazon.com/eks/latest/userguide/update-cluster.html#1-16-prequisites
Full list of annotations (once they are actually "supported", of course):
https://github.com/kubernetes/kubernetes/blob/v1.16.0/staging/src/k8s.io/legacy-cloud-providers/aws/aws.go#L208-L211
Stay tuned! :(
Related
I'd like to create an nginx ingress controller with an internal AWS NLB. The requirement is to fix the IP address of the NLB endpoint. For example, currently the NLB DNS name of the nginx ingress service is abc.elb.eu-central-1.amazonaws.com, which resolves to the IP address 192.168.1.10. If I delete and re-create the nginx ingress controller, the NLB DNS name must be the same as before.
Having a look at the Kubernetes service annotations, I did not see any way to re-use an existing NLB. However, I found the annotation service.beta.kubernetes.io/aws-load-balancer-private-ipv4-addresses in this link. As far as I understand, it allows me to set the IP address of the NLB, but it does not work as I expected: every time I re-create the nginx controller, the IP address is different. Below is the K8s service YAML file.
# Source: ingress-nginx/templates/controller-service.yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
    service.beta.kubernetes.io/aws-load-balancer-private-ipv4-addresses: "10.136.103.251"
    service.beta.kubernetes.io/aws-load-balancer-subnets: "subnet-00df069133b22"
  labels:
    helm.sh/chart: ingress-nginx-3.23.0
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.44.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local
I know this requirement is weird; is it possible to do that?
If your Kubernetes cluster runs in a VPC with more than one subnet (which is probably the case), you must provide a private IP address for each subnet.
I installed the AWS Load Balancer Controller with its Helm chart, then I installed the nginx ingress controller with this Helm chart:
helm install nginx-ingress ingress-nginx/ingress-nginx --namespace nginx-ingress -f internal-ingress-values.yaml
Here is the content of internal-ingress-values.yaml:
controller:
  ingressClass: nginx
  service:
    enableHttp: false
    enableHttps: true
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-type: external
      service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
      service.beta.kubernetes.io/aws-load-balancer-scheme: internal
      service.beta.kubernetes.io/aws-load-balancer-private-ipv4-addresses: 10.136.103.251, 10.136.104.251
      service.beta.kubernetes.io/aws-load-balancer-subnets: subnet-00a1a7f9949aa0ba1, subnet-12ea9f1df24aa332c
  ingressClassResource:
    enabled: true
    default: true
According to the documentation, the service.beta.kubernetes.io/aws-load-balancer-private-ipv4-addresses annotation's length and order must match the subnets annotation.
So you must provide the IP addresses and subnets in the same order (don't mismatch them).
If you take my example above, you must make sure that:
10.136.103.251 is included in subnet-00a1a7f9949aa0ba1
10.136.104.251 is included in subnet-12ea9f1df24aa332c
It's a good idea to tag your subnets according to the documentation:

Key: kubernetes.io/cluster/my-cluster-name
Value: shared

Key: kubernetes.io/role/internal-elb
Value: 1

I tested this on K8s 1.20 and it works for my project.
Don't provide "ingressClassResource" if you're on K8s <= 1.17.
The only LBs that will be managed (at least as of the current version, 2.3, of the AWS LB Controller) are the "nlb-ip" and "external" types. This is specified at:
https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.3/guide/service/annotations/#legacy-cloud-provider
The annotation service.beta.kubernetes.io/aws-load-balancer-type is used to determine which controller reconciles the service. If the annotation value is nlb-ip or external, the legacy cloud provider ignores the service resource (provided it has the correct patch) so that the AWS Load Balancer Controller can take over. For all other values of the annotation, the legacy cloud provider will handle the service. Note that this annotation should be specified during service creation and not edited later.
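In practice, a minimal Service that the AWS Load Balancer Controller (rather than the legacy cloud provider) will reconcile looks something like this sketch (the name, selector, and ports are placeholders):

apiVersion: v1
kind: Service
metadata:
  name: my-service
  annotations:
    # "external" hands this Service to the AWS Load Balancer Controller;
    # the legacy cloud provider will ignore it
    service.beta.kubernetes.io/aws-load-balancer-type: external
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080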
I have a k8s cluster deployed on AWS.
I created a load balancer service with the annotation:
service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
Now I see that k8s created a new ELB attached to an SG with a rule opening 443 to 0.0.0.0/0.
I tried to take a look and see if there's an additional annotation that manages source IPs (predefined IPs instead of 0.0.0.0/0) and couldn't find one.
Do you know if there's any option to manage this as part of the annotations?
Make use of loadBalancerSourceRanges in the LoadBalancer Service resource, as described here.
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  ports:
    - port: 8765
      targetPort: 9376
  selector:
    app: example
  type: LoadBalancer
  loadBalancerSourceRanges:
    - 10.0.0.0/8
Update:
In the case of nginx-ingress you can use the nginx.ingress.kubernetes.io/whitelist-source-range annotation.
For more info check this.
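For example, here is a sketch of an Ingress restricted to private ranges (the host, service name, and exact CIDRs are placeholders):

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: myapp
  annotations:
    # comma-separated CIDRs allowed to reach this Ingress
    nginx.ingress.kubernetes.io/whitelist-source-range: "10.0.0.0/8,172.16.0.0/12"
spec:
  rules:
    - host: myapp.example.com
      http:
        paths:
          - backend:
              serviceName: myapp
              servicePort: 8765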
I want to have my own Kubernetes playground within AWS, which currently involves 2 EC2 instances and an Elastic Load Balancer.
I use Traefik as my ingress controller, which has easily allowed me to set up automatic subdomains and TLS for some of the deployments (deployment.k8s.mydomain.com).
I love this, but as a student the load balancer is just too much. I'm having to kill the cluster when I'm not using it, but ideally I want this up full time.
Is there a way to keep my setup (the cool domain/TLS stuff) but drop the need for an ELB?
If you want to drop the use of a LoadBalancer, you still have another option: exposing the Ingress Controller via a Service with externalIPs, or via a NodePort.
kind: Service
apiVersion: v1
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app: ingress-nginx
spec:
  selector:
    app: ingress-nginx
  ports:
    - name: http
      port: 80
      targetPort: http
    - name: https
      port: 443
      targetPort: https
  externalIPs:
    - 80.11.12.10
You can then create a DNS record for deployment.k8s.mydomain.com pointing to the external IP of your cluster node (an A record, since a CNAME can only point to another name). Additionally, you should ensure that the local firewall rules on your node allow access to the open ports.
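If you'd rather not manage externalIPs, the NodePort variant is a minimal sketch of the same idea (the nodePort values are placeholders and must fall in the cluster's NodePort range; you would then point DNS at a node and reach the service on those ports):

kind: Service
apiVersion: v1
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app: ingress-nginx
spec:
  type: NodePort
  selector:
    app: ingress-nginx
  ports:
    - name: http
      port: 80
      targetPort: http
      nodePort: 30080  # placeholder
    - name: https
      port: 443
      targetPort: https
      nodePort: 30443  # placeholder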
Route53 DNS load balancing? I'm sure there must be a way: https://www.virtualtothecore.com/load-balancing-services-with-aws-route53-dns-health-checks/
I am trying to deploy a service in Kubernetes that is available through a network load balancer. I am aware this is an alpha feature at the moment, but I am running some tests. I have a deployment definition that is working fine as-is. My service definition without the NLB annotation looks something like this and is working fine:
kind: Service
apiVersion: v1
metadata:
  name: service1
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
    service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0
spec:
  type: LoadBalancer
  selector:
    app: some-app
  ports:
    - port: 80
      protocol: TCP
However, when I switch to NLB, even when the load balancer is created and configured "correctly", the target in the AWS target group always appears unhealthy and I cannot access the service via HTTP. This is the service definition:
kind: Service
apiVersion: v1
metadata:
  name: service1
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
    service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
spec:
  type: LoadBalancer
  selector:
    app: some-app
  ports:
    - port: 80
      protocol: TCP
  externalTrafficPolicy: Local
It seems a rule was missing in the k8s nodes' security group, since the NLB forwards the client IP.
I don't think NLB is the problem.
externalTrafficPolicy: Local
is not supported by kops on AWS, and there are issues with some other K8s distros that run on AWS, due to some AWS limitation.
Try changing it to
externalTrafficPolicy: Cluster
There's an issue with the source IP being that of the load balancer instead of the true external client; it can be worked around by using the proxy protocol annotation on the service, plus some configuration on the ingress controller.
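That workaround, in sketch form (assuming ingress-nginx; the Service and ConfigMap names depend on your install):

apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  annotations:
    # ask the AWS cloud provider to enable PROXY protocol on all LB backends
    service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx
  ports:
    - name: http
      port: 80
      targetPort: http
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller  # must match the ConfigMap the controller reads
data:
  # tell nginx to parse the PROXY protocol header for the real client IP
  use-proxy-protocol: "true"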
However, there is a second issue that, while you can technically hack your way around it and force it to work, is usually not worth bothering with.
externalTrafficPolicy: Local
creates a NodePort /healthz endpoint, so the LB sends traffic to the subset of nodes that have service endpoints instead of to all worker nodes. It's broken on initial provisioning, and the reconciliation loop is broken as well.
https://github.com/kubernetes/kubernetes/issues/80579
describes the problem in more depth.
https://github.com/kubernetes/kubernetes/issues/61486
describes a workaround to force it to work using a kops hook.
But honestly, you should just stick to
externalTrafficPolicy: Cluster
as it's always more stable.
There was a bug in the NLB security groups implementation. It's fixed in 1.11.7, 1.12.5, and probably the next 1.13 patch.
https://github.com/kubernetes/kubernetes/pull/68422
I have a question about Kubernetes ingress.
I want to use ingress with my Amazon account and/or a private cloud and want to assign an external IP.
It is possible to assign an external IP to Services:
Services documentation - chapter external IP
but I cannot find a way to do that for Ingress: Ingress documentation.
My question is directed especially at the Kubernetes team.
A similar question was asked by Simon in this topic: How to force SSL for Kubernetes Ingress on GKE 2
but he asked about GKE, while I am interested in private cloud and AWS.
Thank you in advance.
[UPDATE]
Guys, I found that my question may have already been answered in this topic.
Actually, the answer that #anigosa put there is specific to GCloud.
His solution won't work in a private cloud or in the AWS cloud. In my opinion, the reason is that he uses type: LoadBalancer (which cannot be used in a private cloud) and the loadBalancerIP property, which only works on GCloud (on AWS it causes the error: "Failed to create load balancer for service default/nginx-ingress-svc: LoadBalancerIP cannot be specified for AWS ELB").
Looking at this issue, it seems you can define an annotation on your service and map it to an existing elastic IP.
Something like this:
apiVersion: v1
kind: Service
metadata:
  name: my-service
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-eip-allocations: <>
spec:
  type: LoadBalancer
  selector:
    app: MyApp
  ports:
    - protocol: TCP
      port: 80
      targetPort: 9376
Please note this will create an ELB for this service, not an ingress.
As an ingress is simply one service (= one ELB) handling requests for many other services, it should be possible to do something similar for an ingress, but I couldn't find any docs for it.
There are two main ways you can do this. One is using a static IP annotation, as shown in Omer's answer (which is cloud-specific, and normally relies on the external IP being set up beforehand); the other is using an ingress controller (which is generally cloud-agnostic).
The ingress controller will obtain an external IP on its service and then pass that to your ingress, which will then use that IP as its own.
Traffic will then come into the cluster via the controller's service, and the controller will route it to your ingress.
Here's an example of the ingress:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    kubernetes.io/ingress.class: my-ingress-class
spec:
  tls:
    - hosts:
        - ssl.somehost.com
  rules:
    - host: ssl.somehost.com
      http:
        paths:
          - backend:
              serviceName: backend-service
              servicePort: 8080
The line
kubernetes.io/ingress.class: my-ingress-class
tells the cluster we want only an ingress controller that handles this "class" of ingress traffic. You can have multiple ingress controllers in the cluster, each declaring that it handles a different class of ingress traffic, so when you install an ingress controller, you also need to declare which ingress class you want it to handle (see the sketch below).
Caveat: if you do not declare the ingress class on an ingress resource, ALL the ingress controllers in the cluster will attempt to route traffic to the ingress.
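For ingress-nginx, for instance, the class the controller handles is declared with a flag on the controller container. Here's a minimal sketch (the Deployment name and labels are placeholders; the class name matches the annotation above):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-ingress-controller
spec:
  selector:
    matchLabels:
      app: my-ingress-controller
  template:
    metadata:
      labels:
        app: my-ingress-controller
    spec:
      containers:
        - name: controller
          image: k8s.gcr.io/ingress-nginx/controller:v0.44.0
          args:
            - /nginx-ingress-controller
            # handle only Ingress resources annotated with this class
            - --ingress-class=my-ingress-class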
Now, if you want the external IP to be private, you can do that via the controller. For AWS and GCP there are annotations, added to the ingress controller's load balancer Service, that tell the cloud provider you want an internal-only IP.
For AWS:
service.beta.kubernetes.io/aws-load-balancer-internal: "true"
For GCP:
networking.gke.io/load-balancer-type: "Internal"
or (< Kubernetes 1.17)
cloud.google.com/load-balancer-type: "Internal"
Your ingress will inherit the IP obtained by the ingress controller's load balancer.
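As a sketch, the ingress controller's Service with the AWS internal annotation might look like this (the name and selector are placeholders; on GCP you would swap in the annotation above):

apiVersion: v1
kind: Service
metadata:
  name: my-ingress-controller
  annotations:
    # ask the AWS cloud provider for an internal load balancer
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  selector:
    app: my-ingress-controller
  ports:
    - name: http
      port: 80
      targetPort: 80
    - name: https
      port: 443
      targetPort: 443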