k8s Service annotations for AWS NLB ALPN

I'm facing an issue with the Service annotation that enables an ALPN policy on an AWS load balancer.
I'm running an application in production on EKS, and I need a Network Load Balancer (NLB) on AWS to handle some ingress concerns (TLS certificate and so on).
Among the available annotations is:
service.beta.kubernetes.io/aws-load-balancer-alpn-policy: HTTP2Preferred
I believe I need this to enable ALPN in the TLS handshake.
The issue is that it is not applied to my load balancer (other annotations work); I can confirm this in the AWS console or by running curl -s -vv https://my.example.com. To enable the ALPN policy I have to apply the change manually, e.g. through the console.
What am I missing? I wonder whether that annotation is only honored by the AWS Load Balancer Controller and not by the in-tree controller that provisions the NLB for a plain Service.
EDIT: I found some GitHub issues requesting this feature for the legacy mode, without a third-party controller; here is a comment that summarizes them. Since the feature seems to be unavailable (for now), how can I achieve the same configuration with Terraform, for example? Do I need to create the NLB first and then attach it to my Service?
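For reference, a minimal Service manifest of the kind discussed here might look like the following; the name, ports, selector and certificate ARN are placeholders rather than values from my actual setup:
apiVersion: v1
kind: Service
metadata:
  name: my-app                  # placeholder name
  annotations:
    # ask for an NLB instead of a Classic ELB
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
    # TLS termination at the listener (placeholder ARN)
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:<region>:<account>:certificate/<id>"
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
    # the annotation in question; it appears to be honored by the AWS Load Balancer
    # Controller but ignored by the legacy in-tree provider
    service.beta.kubernetes.io/aws-load-balancer-alpn-policy: "HTTP2Preferred"
spec:
  type: LoadBalancer
  ports:
    - port: 443
      targetPort: 8443          # placeholder backend port
  selector:
    app: my-app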

Related

k8s service annotations not working on AWS LB

I am running a cluster in EKS, with k8s 1.21.5.
I know that by default k8s has a Cloud Controller Manager which can create load balancers, and in AWS it will create a Classic LB by default.
I realize CLBs are going away and that I should use an NLB or ALB and install the AWS Load Balancer Controller instead, but I want to work out why my annotations don't work.
What I am trying to do is set up a TLS listener using an ACM certificate, because by default everything is set up as plain TCP.
Here are my annotations:
# service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:<region>:<account>:certificate/<id>
# service.beta.kubernetes.io/aws-load-balancer-backend-protocol: ssl
# service.beta.kubernetes.io/aws-load-balancer-ssl-ports: <port>
I have followed the k8s docs here, which specify which annotations to use: https://kubernetes.io/docs/concepts/services-networking/service/#ssl-support-on-aws
And I have checked in the k8s code that these annotations are present:
https://github.com/kubernetes/kubernetes/blob/v1.21.5/staging/src/k8s.io/legacy-cloud-providers/aws/aws.go#L162
https://github.com/kubernetes/kubernetes/blob/v1.21.5/staging/src/k8s.io/legacy-cloud-providers/aws/aws.go#L167
https://github.com/kubernetes/kubernetes/blob/v1.21.5/staging/src/k8s.io/legacy-cloud-providers/aws/aws.go#L181
When I create my Service with these annotations, the Service in k8s stays pending.
Can anyone tell me why it won't work, or give me any insight?
What I have been doing is manually configuring the LB after it's created, but I want to get away from doing that.
@congbaoguier
Thanks for your advice to look at the logs; I was being a complete dummy. After enabling logging on the control plane I was able to see that there was an issue with my ACM ARN. Weirdly, I have no idea where I got that ARN from; when I checked it in ACM it was wrong, d'oh.
Updating the ARN, it now works, so thanks for the push to use my brain again :P
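For completeness, a minimal Service using these legacy in-tree annotations might look roughly like this; the ports and selector are placeholders, and the ARN must of course point at a valid certificate in ACM in the cluster's region:
apiVersion: v1
kind: Service
metadata:
  name: my-service                                   # placeholder name
  annotations:
    # ARN of a valid ACM certificate in the same region as the load balancer
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:<region>:<account>:certificate/<id>"
    # the backend pods terminate TLS themselves
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "ssl"
    # only this port gets the TLS listener; any others stay plain TCP
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
spec:
  type: LoadBalancer
  ports:
    - port: 443
      targetPort: 8443                               # placeholder backend port
  selector:
    app: my-app                                      # placeholder selector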

Kubernetes ELB service: How to disable TLS 1.0 and 1.1?

I am running Kubernetes on AWS, and exposing services using a Service with type: LoadBalancer, which provisions an ELB. Is there any way to control the ELB cipher configuration with annotations on this service? I need to disable TLS 1.0 and 1.1.
I am aware that I can do this by hand, but I would like for Kubernetes to do this for me, otherwise I'll have to remember to do it again the next time a new ELB is provisioned (Kubernetes upgrade, config change, etc).
If I understood you right, you would like to adjust the security policy directly from the Service YAML file.
From what I see, here you can find a list of all the annotations that are supported at the moment.
There is one called "aws-load-balancer-ssl-negotiation-policy", which looks like exactly what you are looking for.
// ServiceAnnotationLoadBalancerSSLNegotiationPolicy is the annotation used on
// the service to specify a SSL negotiation settings for the HTTPS/SSL listeners
// of your load balancer. Defaults to AWS's default
const ServiceAnnotationLoadBalancerSSLNegotiationPolicy = "service.beta.kubernetes.io/aws-load-balancer-ssl-negotiation-policy"
The link to that file is listed in the official Kubernetes documentation.
Additionally, there is a predefined policy, ELBSecurityPolicy-TLS-1-2-2017-01, that allows only TLS v1.2 (with 1.0 and 1.1 disabled).
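As a sketch only (the certificate ARN, ports and selector are placeholders), the annotation would sit on the Service roughly like this:
apiVersion: v1
kind: Service
metadata:
  name: my-service                # placeholder name
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:<region>:<account>:certificate/<id>"
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
    # predefined AWS policy that permits only TLS 1.2
    service.beta.kubernetes.io/aws-load-balancer-ssl-negotiation-policy: "ELBSecurityPolicy-TLS-1-2-2017-01"
spec:
  type: LoadBalancer
  ports:
    - port: 443
      targetPort: 8080            # placeholder backend port
  selector:
    app: my-app                   # placeholder selector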
Hope that helps.
You can, for example, use annotations like:
alb.ingress.kubernetes.io/ssl-policy: ELBSecurityPolicy-TLS-1-2-2017-01
ALB Ingress Controller SSL Policy Annotations
Edit the security policy on the HTTPS listener of the load balancer.

Load balancing gRPC requests using one of AWS Load Balancers

I'm trying to work out whether I could use one of the (A/E/N)LBs to load balance gRPC traffic. A simple round robin would suffice in our case.
I've read that ALB doesn't fully support HTTP/2 and therefore can't be used with gRPC; specifically, the lack of support for sending HTTP/2 traffic downstream and for trailer headers was mentioned. Is that still true?
I couldn't find any definitive answers with regard to NLBs or "classic" ELBs. Any hints?
As of October 29, 2020, Application Load Balancers now support HTTP/2 and gRPC load balancing. From the announcement:
To use the feature on your ALB, choose HTTPS as your listener protocol, gRPC as the protocol version for your target group and register instance or IP as targets for the configured target group. ALB provides rich content based routing features that will let you inspect gRPC calls and route them to the appropriate target group based on the service and method requested. Within a target group, ALB will use gRPC specific health checks to determine availability of targets and provide gRPC specific access logs to monitor your traffic.
The support for gRPC and end-to-end HTTP/2 is available for existing and new Application Load Balancers at no extra charge in all AWS Regions. To learn more, please refer to the blog post, demo, and the ALB documentation.
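If the ALB is provisioned from Kubernetes via the AWS Load Balancer Controller rather than by hand, a rough sketch of an Ingress for a gRPC backend could look like this; the hostname, Service name, ports and certificate ARN are placeholders, and the backend-protocol-version annotation assumes a controller version recent enough to support it:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: grpc-ingress                                  # placeholder name
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
    # gRPC requires an HTTPS listener on the ALB
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS": 443}]'
    alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:<region>:<account>:certificate/<id>
    # have the target group speak gRPC to the backends
    alb.ingress.kubernetes.io/backend-protocol-version: GRPC
spec:
  ingressClassName: alb
  rules:
    - host: grpc.example.com                          # placeholder hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-grpc-service                 # placeholder Service
                port:
                  number: 50051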
Using gRPC on AWS had some major challenges. Without full HTTP/2 support on the AWS Application Load Balancer, you have to spin up and manage your own load balancers. Neither NLB nor ELB is a viable alternative on AWS, due to issues with traffic to and from the same host, dynamic port mappings, SSL termination complications, and sub-optimal client- and server-side round-robining of TCP connections.
gRPC demonstrated performance improvements; however, it would take considerable infrastructure effort to adopt, whether by using load balancers such as Nginx or Envoy, or by setting up a service mesh with something like Istio. Another possibility would be to make use of thick-client load balancing, though this would also require additional service-discovery infrastructure such as Consul or ZooKeeper.
AWS recently announced a new service called AWS App Mesh. AWS App Mesh supports HTTP/2 and gRPC services.
Services that use gRPC can now model and manage their inter-service communications using AWS App Mesh.
Reference:
https://aws.amazon.com/about-aws/whats-new/2019/11/aws-app-mesh-now-supports-http2-and-grpc-services/
https://aws.amazon.com/app-mesh/
https://docs.aws.amazon.com/app-mesh/latest/userguide/what-is-app-mesh.html

How to get latency details of load balancer in stackdriver?

I have a simple Spring Boot application deployed on Kubernetes on GCP. I wish to autoscale the application on a custom latency threshold (response time). Stackdriver has a set of metrics for load balancers; details of the metrics can be found in this link.
I have exposed my application on an external IP using the following command:
kubectl expose deployment springboot-app-new --type=LoadBalancer --port 80 --target-port 9000
I used this API explorer to view the metrics. The response code is 200, but the response is empty.
The metrics filter I used is metric.type = "loadbalancing.googleapis.com/https/backend_latencies"
Question
Why am I not getting anything in the response? Am I making any mistake?
I have already enabled the Stackdriver API. Are there any other settings to be made to get a response?
As mentioned in the comments, the metric you're trying to use belongs to an HTTP(S) load balancer, and the type LoadBalancer, when used in GKE, deploys a Network Load Balancer instead.
The reason you're not able to find its metrics in the Stackdriver Monitoring page is that the link shared in the comment corresponds to the TCP/SSL proxy load balancer (layer 7) documentation rather than the Network Load Balancer (layer 4), which is the one already created in your cluster, and for now the Network Load Balancer won't show up in the Stackdriver Monitoring page.
However, a feature request has been created to have this functionality enabled in the Monitoring dashboard.
If you need to have this particular metric (loadbalancing.googleapis.com/https/backend_latencies), it might be best to expose your deployment using an Ingress instead of using the LoadBalancer type. This will automatically create an HTTP(S) load balancer with the monitoring enabled instead of the current Network Load Balancer.
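A minimal sketch of that approach, assuming the existing springboot-app-new Deployment and a NodePort Service in front of it (the names are illustrative): the default GKE Ingress class provisions an external HTTP(S) load balancer, whose loadbalancing.googleapis.com/https/* metrics should then appear.
apiVersion: v1
kind: Service
metadata:
  name: springboot-app-svc            # placeholder name
spec:
  type: NodePort                      # GKE Ingress backends are typically NodePort (or NEG-backed)
  ports:
    - port: 80
      targetPort: 9000
  selector:
    app: springboot-app-new           # assumes the Deployment's pod label
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: springboot-app-ingress        # placeholder name
spec:
  defaultBackend:
    service:
      name: springboot-app-svc
      port:
        number: 80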

How to enable IAP for services with canary deployment

I am trying to do a canary deployment in GKE. I need to enable IAP for all the deployments involved.
I can build the canary using either Istio or nginx-ingress for my use case, but I can't figure out how to enable IAP for either of them. I provisioned a GLB (global HTTP(S) load balancer) and tried to add the ingresses as backends in both cases. That failed, as I expected, because health checks and related pieces didn't work.
You need an HTTPS load balancer to be able to enable IAP. You can click on this link, which provides step-by-step instructions on how to enable IAP within GKE. I would also highly suggest reading the "Before you begin" section, as you will need the prerequisites mentioned there to enable IAP.
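As a rough sketch of what the GKE side of that setup usually involves (the OAuth client Secret, Service name, ports and selector are placeholders; the prerequisites and OAuth client creation are covered in the linked guide), IAP is turned on per backend through a BackendConfig attached to the Service:
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: iap-backend-config                    # placeholder name
spec:
  iap:
    enabled: true
    oauthclientCredentials:
      # Kubernetes Secret holding the OAuth client_id / client_secret
      secretName: my-iap-oauth-secret         # placeholder
---
apiVersion: v1
kind: Service
metadata:
  name: my-canary-service                     # placeholder name
  annotations:
    # attach the BackendConfig so the HTTPS load balancer backend gets IAP
    cloud.google.com/backend-config: '{"default": "iap-backend-config"}'
spec:
  type: NodePort
  ports:
    - port: 80
      targetPort: 8080                        # placeholder backend port
  selector:
    app: my-app                               # placeholder selector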