k8s service annotations not working on AWS LB - amazon-web-services

I am running a cluster in EKS, with Kubernetes 1.21.5.
I know that by default Kubernetes has a Cloud Controller Manager that can be used to create load balancers, and in AWS it creates a Classic Load Balancer by default.
I realize CLBs are going away and I should use an NLB or ALB and install the AWS Load Balancer Controller instead, but I want to work out why my annotations don't work.
What I am trying to do is set up a TLS listener using an ACM certificate, because by default everything is set up as TCP.
Here are my annotations:
# service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:<region>:<account>:certificate/<id>
# service.beta.kubernetes.io/aws-load-balancer-backend-protocol: ssl
# service.beta.kubernetes.io/aws-load-balancer-ssl-ports: <port>
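For context, these annotations sit on a Service of type LoadBalancer, roughly like this (the service name and ports are placeholders):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service  # placeholder name
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:<region>:<account>:certificate/<id>
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: ssl
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"  # placeholder port
spec:
  type: LoadBalancer
  ports:
  - port: 443
    targetPort: 8443  # placeholder backend port
```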
I have followed the k8s docs here which specify which annotations to use https://kubernetes.io/docs/concepts/services-networking/service/#ssl-support-on-aws
And I have checked in the k8s code that these annotations are present
https://github.com/kubernetes/kubernetes/blob/v1.21.5/staging/src/k8s.io/legacy-cloud-providers/aws/aws.go#L162
https://github.com/kubernetes/kubernetes/blob/v1.21.5/staging/src/k8s.io/legacy-cloud-providers/aws/aws.go#L167
https://github.com/kubernetes/kubernetes/blob/v1.21.5/staging/src/k8s.io/legacy-cloud-providers/aws/aws.go#L181
When I create my Service with these annotations, its external IP stays in a pending state.
Can anyone tell me why it won't work, or give me any insight?
What I have been doing is manually configuring the LB after it's created, but I want to get away from doing that.

@congbaoguier
Thanks for your advice to look at the logs; I was being a complete dummy. After enabling logging on the control plane I was able to see that there was an issue with my ACM ARN. Weirdly, I have no idea where I got that ARN from; when I checked it in ACM it was WRONG. DOH.
After updating my ARN it now works, so thanks for the push to use my brain again :P

Related

k8s Service annotations for AWS NLB ALPN

I'm facing an issue with the Service annotation that enables an ALPN policy on an AWS load balancer.
I'm testing an application in production, managed by EKS. I need to enable a Network Load Balancer (NLB) on AWS to manage some ingress rules (TLS cert and so on...).
Among the available annotations is:
service.beta.kubernetes.io/aws-load-balancer-alpn-policy: HTTP2Preferred
I think I need this to enable ALPN in the TLS handshake.
The issue is that it is not applied to my load balancer (the other annotations work); I can confirm this in the AWS dashboard or by running curl -s -vv https://my.example.com. To enable this ALPN policy I must apply the change manually, e.g. through the dashboard.
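For reference, the relevant part of the Service would look something like this (the name is a placeholder and the other annotations are omitted):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app  # placeholder name
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
    # the annotation in question; it appears to be honored only by the
    # AWS Load Balancer Controller, not the legacy in-tree provider
    service.beta.kubernetes.io/aws-load-balancer-alpn-policy: "HTTP2Preferred"
spec:
  type: LoadBalancer
```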
What am I missing? I wonder if this annotation is only available with the AWS Load Balancer Controller and not for the base Service implementation for NLBs.
EDIT: I found some GitHub issues requesting this feature for the legacy mode, i.e. without a third-party controller; here is a comment that summarizes them all. Since it seems to be an unavailable feature (for now), how can I achieve the same configuration using, for example, Terraform? Do I need to create the NLB first and then attach it to my Service?

K8s service annotations getting reset after edit

I have a Service of type LoadBalancer in my EKS cluster. Currently AWS has configured a Classic Load Balancer for this service, which is open to the internet. Now I have to change this to a Network Load Balancer which is not open to the internet, i.e. whose scheme is internal.
In order to do that, I tried adding the annotations marked below:
annotations:
service.beta.kubernetes.io/aws-load-balancer-internal: "true"
service.beta.kubernetes.io/aws-load-balancer-scheme: internal
service.beta.kubernetes.io/aws-load-balancer-type: nlb
But after I run kubectl edit svc, add these annotations, and save, the Service definition gets reset to the previous version and the newly added annotations are removed. I do see a Network LB getting created in AWS, but the Classic LB still remains and is operational.
I also tried manually deleting the Classic LB, but it gets re-created after some time.
I'd appreciate any help or insights on this issue.

How to do DNS pointing to aws eks

I am running AWS EKS. I am trying to install the sample Nginx app and point a subdomain to it. I have hooked AWS EKS up to an existing Rancher portal, and I am able to install my Nginx app using Rancher. It has a service file and an ingress file. Here is my Nginx Helm chart: https://github.com/clayrisser/charts/tree/master/alpha/nginx
Going through the many docs online, I have seen that AWS EKS requires the AWS Load Balancer Controller, which auto-creates a load balancer of the type we specify through our ingress and service files, and we then need to create an alias record pointing at the domain. How can we create an alias record if our domain is a root domain?
How can we avoid creating LBs for each app? Is there a way to create and use only one LB for the whole cluster, which all apps can use?
Is there a way to have an IP for the ELB instead of the generated one?
Is there a better way of doing this?
How can we avoid creating LBs for each app? Is there a way to create and use only one LB for the whole cluster, which all apps can use?
Yes, you can install an ingress controller in your cluster with service type LoadBalancer. This will create a load balancer in your account. Some of the popular ones are Nginx, Traefik, Contour, etc.
The Ingress resources you create can then use this ingress controller via the kubernetes.io/ingress.class annotation. Make sure your app's Service type is not LoadBalancer, as that would create a new LB.
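A minimal sketch of such an Ingress (the name, host, and backend service are hypothetical, and nginx is assumed as the installed controller class):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app  # hypothetical name
  annotations:
    # route through the shared ingress controller instead of a dedicated LB
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: app.example.com  # hypothetical subdomain
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-app   # hypothetical backend Service (type ClusterIP)
            port:
              number: 80
```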
Is there a way to have an IP for the ELB instead of the generated one?
Yes, some cloud providers allow you to specify the loadBalancerIP; in those cases the load balancer is created with the user-specified address. (On AWS this is limited: a Classic ELB only gets a generated DNS name, though an NLB can be associated with Elastic IPs.) The Service would look something like this:
apiVersion: v1
kind: Service
spec:
  type: LoadBalancer
  loadBalancerIP: xx.xx.xx.xx
  ...
But since you're looking for a single LB, you should probably use the loadBalancerIP option with an ingress controller. For example, the Nginx ingress controller provides this option. The configuration would look something like:
controller:
  service:
    loadBalancerIP: "12.234.162.41"
https://github.com/helm/charts/blob/master/stable/nginx-ingress/values.yaml#L271

How to add SSL certificates using AWS console on a ALB spun up by alb-ingress-controller AWS?

I have a requirement where I need to maintain multiple public domains pointing to our server, so I use the alb-ingress-controller, which spins up an ALB and also allows me to pass up to 25 cert ARNs in the certificateArns annotation.
However, if I add any new SSL certificate to the ALB spun up by the alb-ingress-controller via the AWS console, the K8s controller's reconciler removes the certificate that I manually added.
The ideal outcome for me would be to have the ALB spun up by the alb-ingress-controller, but still be able to add SSL certs to this ALB via the AWS Console/API.
Does anyone know how to make this work?
I tried working on the alb-ingress-controller project itself, but it's a big one and I'm on a timeline :-)
I hope to get help from the community.
I solved the problem using kubectl annotate, building a dynamic patch that always appends all of the certificates; there is no other way.
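In manifest form, that means the Ingress annotation must always carry the complete list of certificate ARNs, since the controller reconciles the ALB against it (the ingress name and ARNs below are placeholders):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress  # placeholder name
  annotations:
    # comma-separated list; every certificate (including ones added later)
    # must appear here or the reconciler will strip it from the ALB
    alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:<region>:<account>:certificate/<id-1>,arn:aws:acm:<region>:<account>:certificate/<id-2>
```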

Kubernetes ELB service: How to disable TLS 1.0 and 1.1?

I am running Kubernetes on AWS, and exposing services using a Service with type: LoadBalancer, which provisions an ELB. Is there any way to control the ELB cipher configuration with annotations on this service? I need to disable TLS 1.0 and 1.1.
I am aware that I can do this by hand, but I would like for Kubernetes to do this for me, otherwise I'll have to remember to do it again the next time a new ELB is provisioned (Kubernetes upgrade, config change, etc).
If I understood you correctly, you would like to adjust security policies directly from the Service YAML file.
From what I can see, here you can find a list of all the annotations that are supported at the moment.
There is one called aws-load-balancer-ssl-negotiation-policy; it looks like exactly the one you are looking for.
// ServiceAnnotationLoadBalancerSSLNegotiationPolicy is the annotation used on
// the service to specify a SSL negotiation settings for the HTTPS/SSL listeners
// of your load balancer. Defaults to AWS's default
const ServiceAnnotationLoadBalancerSSLNegotiationPolicy = "service.beta.kubernetes.io/aws-load-balancer-ssl-negotiation-policy"
The link to that file is listed in the official K8s documentation.
Additionally, there is a predefined policy, ELBSecurityPolicy-TLS-1-2-2017-01, that uses only TLS v1.2 (with 1.0 and 1.1 disabled).
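Put together, a Service using that policy might look like this (the name and ports are placeholders):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service  # placeholder name
  annotations:
    # predefined AWS policy that disables TLS 1.0 and 1.1
    service.beta.kubernetes.io/aws-load-balancer-ssl-negotiation-policy: "ELBSecurityPolicy-TLS-1-2-2017-01"
spec:
  type: LoadBalancer
  ports:
  - port: 443
    targetPort: 8443  # placeholder backend port
```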
Hope that helps.
With the ALB Ingress Controller, you can use, for example, an annotation like:
alb.ingress.kubernetes.io/ssl-policy: ELBSecurityPolicy-TLS-1-2-2017-01
(Note that the ELBSecurityPolicy-TLS-1-1-2017-01 policy would still allow TLS 1.1.)
ALB Ingress Controller SSL Policy Annotations
Edit the Security policy on the HTTPS listener on the Load Balancer.