AWS Kubernetes: Selecting SSL Certificate on AWS Load Balancer

I'm trying to configure SSL for an AWS Load Balancer for my AWS EKS cluster. The load balancer proxies to a Traefik instance running in my cluster, and this works fine over HTTP.
I then created a certificate in AWS Certificate Manager, copied the ARN, and followed this part of the documentation: Services - Kubernetes
But the certificate is not linked to the listeners on the AWS Load Balancer. I can't find further documentation or a working example on the web. Can anyone point me to one?
The LoadBalancer configuration looks like this:
apiVersion: v1
kind: Service
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"traefik-ingress-service","namespace":"kube-system"},"spec":{"ports":[{"name":"web","port":80,"targetPort":80},{"name":"admin","port":8080,"targetPort":8080},{"name":"secure","port":443,"targetPort":443}],"selector":{"k8s-app":"traefik-ingress-lb"},"type":"LoadBalancer"}}
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:eu-north-1:000000000:certificate/e386a77d-26d9-4608-826b-b2b3a5d1ec47
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
  creationTimestamp: 2019-01-14T14:33:17Z
  name: traefik-ingress-service
  namespace: kube-system
  resourceVersion: "10172130"
  selfLink: /api/v1/namespaces/kube-system/services/traefik-ingress-service
  uid: e386a77d-26d9-4608-826b-b2b3a5d1ec47
spec:
  clusterIP: 10.100.115.166
  externalTrafficPolicy: Cluster
  ports:
  - name: web
    port: 80
    protocol: TCP
    targetPort: 80
  - name: admin
    port: 8080
    protocol: TCP
    targetPort: 8080
  - name: secure
    port: 443
    protocol: TCP
    targetPort: 80
  selector:
    k8s-app: traefik-ingress-lb
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
    - hostname: e386a77d-26d9-4608-826b-b2b3a5d1ec47.eu-north-1.elb.amazonaws.com
Kind regards, and looking forward to your answers.

I had a similar issue, since I'm using EKS v1.14 (with nginx-ingress-controller) and a Network Load Balancer. According to Kubernetes, this is possible since Kubernetes v1.15 (see the GitHub issue), and as of 10 March 2020, Amazon EKS supports Kubernetes version 1.15.
So if it's still relevant, read more about it here: How do I terminate HTTPS traffic on Amazon EKS workloads with ACM?
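For reference, a minimal sketch of how that termination looks on a Service, assuming Kubernetes 1.15+ with the in-tree AWS provider (my-service, my-app, and the certificate ARN are placeholders):

apiVersion: v1
kind: Service
metadata:
  name: my-service
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
    # Placeholder ARN; substitute your own ACM certificate.
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:REGION:ACCOUNT_ID:certificate/CERT_ID
    # Terminate TLS on port 443 only; speak plain TCP to the pods.
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
  - name: https
    port: 443
    protocol: TCP
    targetPort: 80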

I ran into the same problem and discovered that the issue was the certificate type I chose (ECDSA 384-bit), which isn't compatible with the Classic Load Balancer (though it is supported by the newer Application Load Balancer). When I switched to an RSA certificate, it worked correctly.

Related

AWS EKS with Fargate profile using Kong ingress - unable to expose port 80 to the public

I deployed the Kong ingress controller on an AWS EKS cluster with the Fargate option.
I am unable to access our application over the internet on the HTTP port; I keep getting ERR_CONNECTION_TIMED_OUT in the browser.
I followed the Kong deployment steps given at
https://github.com/Kong/kubernetes-ingress-controller/blob/master/docs/deployment/eks.md
The kong-proxy service is created without issue, yet its EXTERNAL-IP is still showing as pending.
We are able to access our application on the internal network (by logging on to a running pod) via the kong-proxy CLUSTER-IP without any problem using curl.
An NLB was also created automatically in the AWS console when we created the kong-proxy service; we are using its DNS name to try to connect from the internet.
Kindly help me understand what could be the problem.
My kong-proxy YAML is:
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
  name: kong-proxy
  namespace: kong
spec:
  externalTrafficPolicy: Local
  ports:
  - name: proxy
    port: 80
    protocol: TCP
    targetPort: 80
  - name: proxy-ssl
    port: 443
    protocol: TCP
    targetPort: 443
  selector:
    app: ingress-kong
  type: LoadBalancer
I don't think this is supported yet, as per https://github.com/aws/containers-roadmap/issues/617

Kubectl: Add healthcheck using TCP targeting HTTPS, do not terminate SSL on ELB for AWS

I am working on a Kubernetes application in which we are running an nginx server. We are currently facing two issues.
The first is related to healthchecks. I would like to add healthchecks that check the container on port 443 using TCP, but Kubernetes is somehow doing the check over SSL, causing AWS to mark the containers as out of service.
Secondly, SSL traffic is being terminated at the ELB, while the ELB still talks to the container on port 443. I have already added a self-signed certificate to nginx inside the container. We redirect from HTTP to HTTPS internally, so anything on port 80 is of no use to us. What am I doing wrong?
service.yaml:
apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/name: service-name
    app.kubernetes.io/instance: service-name-instance
    app.kubernetes.io/version: "1.0.0"
    app.kubernetes.io/component: backend
    app.kubernetes.io/managed-by: kubectl
  annotations:
    # Note that the backend talks over HTTP.
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: https
    # TODO: Fill in with the ARN of your certificate.
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: OUR ARN
    # Only run SSL on the port named "https" below.
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "TCP"
    service.beta.kubernetes.io/do-loadbalancer-redirect-http-to-https: "true"
    service.beta.kubernetes.io/do-loadbalancer-tls-ports: "443"
  name: service-name
spec:
  selector:
    app: service-name
  type: LoadBalancer
  ports:
  - name: http
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 443
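One thing that stands out: the do-loadbalancer-* annotations are DigitalOcean-specific and have no effect on AWS, and aws-load-balancer-ssl-ports expects a port name or number rather than "TCP". If the goal is for nginx inside the pod to terminate TLS itself, a minimal passthrough sketch (assuming a Classic ELB with the in-tree AWS provider) drops the ACM annotation entirely so the 443 listener stays plain TCP:

apiVersion: v1
kind: Service
metadata:
  name: service-name
  annotations:
    # Plain TCP end to end: the ELB neither terminates nor re-encrypts TLS,
    # and its health check is typically a plain TCP connect rather than an HTTPS probe.
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
spec:
  type: LoadBalancer
  selector:
    app: service-name
  ports:
  - name: https
    port: 443
    targetPort: 443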

How to forward traffic from a domain in Route53 to a pod using nginx ingress?

I deployed Grafana using Helm and it is now running in a pod. I can access it if I proxy port 3000 to my laptop.
I'm trying to point a domain, grafana.something.com, to that pod so I can access it externally.
I have a domain in Route53 that I can attach to a load balancer (Application Load Balancer, Network Load Balancer, or Classic Load Balancer). That load balancer can forward traffic from port 80 to port 80 on a group of nodes (let's leave port 443 for later).
I'm really struggling with setting this up. I'm sure something is missing, but I don't know what.
A basic diagram would look like this, I imagine:
Internet
↓↓
Domain in route53 (grafana.something.com)
↓↓
Loadbalancer 80 to 80 (Application Load Balancer, Network Load Balancer, Classic Load Balancer)
I guess that LB would forward traffic on port 80 to the ingress controllers below (created when Grafana was deployed using Helm)
↓↓
Group of EKS worker nodes
↓↓
Ingress resource ?????
↓↓
Ingress Controllers - Created when Grafana was deployed using Helm in namespace test.
kubectl get svc grafana -n test
NAME      TYPE        CLUSTER-IP   PORT(S)
grafana   ClusterIP   10.x.x.x     80/TCP
apiVersion: v1
kind: Service
metadata:
  creationTimestamp:
  labels:
    app: grafana
    chart: grafana-
    heritage: Tiller
    release: grafana-release
  name: grafana
  namespace: test
  resourceVersion: "xxxx"
  selfLink:
  uid:
spec:
  clusterIP: 10.x.x.x
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 3000
  selector:
    app: grafana
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
↓↓
The Grafana pod is listening on port 3000. I can access it successfully after proxying port 3000 to my laptop.
Given that you don't seem to have an ingress controller installed: if you have the AWS cloud provider configured in your K8s cluster, you can follow this guide to install the nginx ingress controller using Helm.
By the end of the guide you should have a load balancer created for your ingress controller. Point your Route53 record to it and create an Ingress that uses your grafana service. Example:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/app-root: /
    nginx.ingress.kubernetes.io/enable-access-log: "true"
  name: grafana-ingress
  namespace: test
spec:
  rules:
  - host: grafana.something.com
    http:
      paths:
      - backend:
          serviceName: grafana
          servicePort: 80
        path: /
The final traffic path would be:
Route53 -> ELB -> Ingress -> Service -> Pods
Adding two important suggestions here.
1) Following improvements to the Ingress API in Kubernetes 1.18, a new ingressClassName field has been added to the Ingress spec; it references the IngressClass that should be used to implement the Ingress.
Please consider switching to the ingressClassName field instead of the kubernetes.io/ingress.class annotation:
kind: Ingress
apiVersion: extensions/v1beta1
metadata:
  name: grafana-ingress
  namespace: test
spec:
  ingressClassName: nginx # <-- Here
  rules:
  - host: grafana.something.com
    http:
      paths:
      - path: /
        backend:
          serviceName: grafana
          servicePort: 80
2) Consider using External-DNS for the integration between external DNS servers (check this example on AWS Route53) and Kubernetes Ingresses / Services.
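For illustration, assuming External-DNS is deployed with access to the Route53 zone, a single annotation on the ingress controller's LoadBalancer Service is enough for the record to be created and kept in sync; for Ingresses, External-DNS picks up the spec.rules host without any annotation. A minimal sketch (the Service name and selector are hypothetical):

apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-controller  # hypothetical name of your controller's Service
  annotations:
    # External-DNS creates/updates a Route53 record pointing at this load balancer.
    external-dns.alpha.kubernetes.io/hostname: grafana.something.com
spec:
  type: LoadBalancer
  selector:
    app: nginx-ingress  # hypothetical selector
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 80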

My kubernetes AWS NLB integration is not working

I am trying to deploy a service in Kubernetes exposed through a network load balancer. I am aware this is an alpha feature at the moment, but I am running some tests. I have a deployment definition that works fine as is. My service definition, without the NLB annotation, looks something like this and works fine:
kind: Service
apiVersion: v1
metadata:
  name: service1
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
    service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0
spec:
  type: LoadBalancer
  selector:
    app: some-app
  ports:
  - port: 80
    protocol: TCP
However, when I switch to an NLB, even though the load balancer is created and configured "correctly", the target in the AWS target group always appears unhealthy and I cannot access the service via HTTP. This is the service definition:
kind: Service
apiVersion: v1
metadata:
  name: service1
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
    service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
spec:
  type: LoadBalancer
  selector:
    app: some-app
  ports:
  - port: 80
    protocol: TCP
  externalTrafficPolicy: Local
It seems a rule was missing in the k8s nodes' security group, since the NLB forwards the client IP.
I don't think the NLB is the problem.
externalTrafficPolicy: Local
is not supported by kops on AWS, and there are issues with some other K8s distros that run on AWS, due to an AWS limitation.
Try changing it to
externalTrafficPolicy: Cluster
There's an issue with the source IP being that of the load balancer instead of the true external client; it can be worked around by using the proxy protocol annotation on the service plus some configuration on the ingress controller.
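A sketch of that workaround, assuming the in-tree AWS provider honors the proxy-protocol annotation for your load balancer type and that the controller is kubernetes/ingress-nginx (the ConfigMap name and namespace are hypothetical and must match your deployment):

apiVersion: v1
kind: Service
metadata:
  name: service1
  annotations:
    # "*" is the only value AWS accepts; enables PROXY protocol toward the backends.
    service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
spec:
  type: LoadBalancer
  selector:
    app: some-app
  ports:
  - port: 80
    protocol: TCP
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration  # must match the controller's --configmap flag
  namespace: ingress-nginx
data:
  # Tell nginx to trust the PROXY protocol header for the real client IP.
  use-proxy-protocol: "true"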
However, there is a second issue: while you can technically hack your way around it and force it to work, it's usually not worth the bother.
externalTrafficPolicy: Local
creates a NodePort /healthz endpoint so the LB sends traffic to the subset of nodes that have service endpoints instead of all worker nodes. It's broken on initial provisioning, and the reconciliation loop is broken as well.
https://github.com/kubernetes/kubernetes/issues/80579 describes the problem in more depth.
https://github.com/kubernetes/kubernetes/issues/61486 describes a workaround to force it to work using a kops hook.
But honestly, you should just stick with externalTrafficPolicy: Cluster, as it's always more stable.
There was a bug in the NLB security groups implementation. It's fixed in 1.11.7, 1.12.5, and probably the next 1.13 patch.
https://github.com/kubernetes/kubernetes/pull/68422

Multiple IPs for a single container

I'm not sure if this is the preferred/correct way of setting up Kubernetes, but I have two websites, "x.com" and "y.com", each with its own separate IP. Currently they run off separate EC2 instances, but I'm in the process of moving our architecture to Docker/Kubernetes on AWS. What I'd like to do is have a single nginx container that hands off requests to the appropriate backend services. However, I'm currently stuck on trying to figure out how to point two IPs at the same container.
My k8s setup is like so:
Replication controller/pod for x.com
Replication controller/pod for y.com
Service for x.com
Service for y.com
Replication controller for nginx, specifying a single replica
Service for nginx specifying that I need port 80 and 443.
Is there a way for me to specify that I want two IPs pointing to the single nginx container, or is there a preferred k8s way of solving this problem?
You could use ingress routing for this:
http://kubernetes.io/docs/user-guide/ingress/
http://blog.kubernetes.io/2016/03/Kubernetes-1.2-and-simplifying-advanced-networking-with-Ingress.html
https://github.com/nginxinc/kubernetes-ingress
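As a sketch of that ingress approach with host-based routing (API version matching this era of Kubernetes; service-x and service-y are hypothetical stand-ins for your x.com and y.com Services):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: two-sites
spec:
  rules:
  - host: x.com
    http:
      paths:
      - path: /
        backend:
          serviceName: service-x
          servicePort: 80
  - host: y.com
    http:
      paths:
      - path: /
        backend:
          serviceName: service-y
          servicePort: 80

With this, one ingress controller (and therefore one load balancer IP/hostname) serves both domains, routing by Host header.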
Alternatively, without using Ingress, you could set up Services of type LoadBalancer and then create CNAMEs to the ELBs matching the domains you want to route.
I ended up using two separate services/load balancers that point to the same nginx pod but use different ports.
apiVersion: v1
kind: Service
metadata:
  name: nginx-service1
spec:
  type: LoadBalancer
  ports:
  - name: http
    port: 80
    targetPort: 880
    protocol: TCP
  - name: https
    port: 443
    targetPort: 8443
    protocol: TCP
  selector:
    app: nginx
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service2
spec:
  type: LoadBalancer
  ports:
  - name: http
    port: 80
    targetPort: 980
    protocol: TCP
  - name: https
    port: 443
    targetPort: 9443
    protocol: TCP
  selector:
    app: nginx