How to keep the LoadBalancer (ALB) even after we delete the Ingress manifest in AWS EKS?

When we launch the EKS cluster with the manifest below, it creates an ALB. We have a default ALB that we are using; let's call it EKS-ALB. The hosted zone routes traffic to this EKS-ALB. We gave it the tags ingress.k8s.aws/resource: LoadBalancer, ingress.k8s.aws/stack: test-alb, and elbv2.k8s.aws/cluster: EKS. But when we delete the manifest, the default ALB is deleted as well, and we have to reconfigure the hosted zone with the new ALB created on the next deployment. Is there any way to stop the ingress controller from deleting the ALB, so that it only deletes the listeners and target group?
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test-nginx-rule
  namespace: test
  annotations:
    alb.ingress.kubernetes.io/group.name: test-alb
    alb.ingress.kubernetes.io/scheme: internal
    alb.ingress.kubernetes.io/target-type: instance
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS": 443}]'
    alb.ingress.kubernetes.io/ssl-redirect: '443'
    alb.ingress.kubernetes.io/healthcheck-port: traffic-port
    alb.ingress.kubernetes.io/healthcheck-path: /index.html
    alb.ingress.kubernetes.io/success-codes: 200-399
    alb.ingress.kubernetes.io/security-groups: eks-test-alb-sg
spec:
  ingressClassName: alb
  rules:
    - host: test.eks.abc.com
      http:
        paths:
          - pathType: Prefix
            path: "/"
            backend:
              service:
                name: test-svc
                port:
                  number: 5005
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-dep
  namespace: test
  labels:
    app: test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test
  template:
    metadata:
      labels:
        app: test
    spec:
      containers:
        - name: test
          image: Imagepath
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 5005
          resources:
            requests:
              memory: "256Mi"
              cpu: "500m"
---
apiVersion: v1
kind: Service
metadata:
  name: test-svc
  namespace: test
  labels:
    app: test
spec:
  type: NodePort
  ports:
    - port: 5005
      targetPort: 80
      protocol: TCP
  selector:
    app: test
---
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: test-scaler
  namespace: test
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: test-dep
  minReplicas: 1
  maxReplicas: 5
  targetCPUUtilizationPercentage: 60
---

To keep the existing ALB from being deleted when the group.name annotation is enabled, the following conditions need to be met:
The Ingresses sharing the ALB must carry the same three annotations:
alb.ingress.kubernetes.io/group.name: test-alb
alb.ingress.kubernetes.io/scheme: internal
alb.ingress.kubernetes.io/target-type: instance
Then create a dummy Ingress with the same group name using the manifest below.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test-dummy-rule
  namespace: test
  annotations:
    alb.ingress.kubernetes.io/group.name: test-alb
    alb.ingress.kubernetes.io/scheme: internal
    alb.ingress.kubernetes.io/target-type: instance
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS": 443}]'
    alb.ingress.kubernetes.io/ssl-redirect: '443'
    alb.ingress.kubernetes.io/healthcheck-port: traffic-port
    alb.ingress.kubernetes.io/healthcheck-path: /index.html
    alb.ingress.kubernetes.io/success-codes: 200-399
    alb.ingress.kubernetes.io/security-groups: eks-test-alb-sg
spec:
  ingressClassName: alb
  rules:
    - host: dummy.eks.abc.com
      http:
        paths:
          - pathType: Prefix
            path: "/"
            backend:
              service:
                name: test-svc
                port:
                  number: 5005
After deploying the above manifest, an Ingress is created on the same ALB, and the listener gets a rule for the host dummy.eks.abc.com (which, with the ssl-redirect annotation, redirects to 443). It is a create-and-forget manifest: once this Ingress exists, the ALB remains even after we delete all the running deployments and services (everything except the dummy manifest above).
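A variation on the same idea, in case you would rather the placeholder Ingress not reference a real Service: the controller's fixed-response action can serve as the backend, so the dummy keeps the ALB alive even when test-svc is gone. This is a minimal sketch only; the Ingress name, host, and action name are placeholders, and the action JSON follows the same capitalized form used by the redirect action elsewhere in this page:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test-keeper-rule           # hypothetical name
  namespace: test
  annotations:
    alb.ingress.kubernetes.io/group.name: test-alb
    alb.ingress.kubernetes.io/scheme: internal
    alb.ingress.kubernetes.io/target-type: instance
    alb.ingress.kubernetes.io/security-groups: eks-test-alb-sg
    # Fixed-response action, so no Service has to exist behind this rule
    alb.ingress.kubernetes.io/actions.keeper-response: '{"Type": "fixed-response", "FixedResponseConfig": {"ContentType": "text/plain", "StatusCode": "404", "MessageBody": "keeper"}}'
spec:
  ingressClassName: alb
  rules:
    - host: dummy.eks.abc.com
      http:
        paths:
          - pathType: Prefix
            path: "/"
            backend:
              service:
                name: keeper-response
                port:
                  name: use-annotation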

Related

AWS EKS configure HTTPS listener

I want to secure my web service running on Kubernetes (EKS). It is running on port 80, and I want to run it on port 443.
When I apply the YAML file (for the Service and Ingress), the AWS console still shows it listening on port 80 (and not on 443).
How can I make it work? Thanks for your time!
This is my YAML file:
#SERVICE LOGGER
apiVersion: v1
kind: Service
metadata:
  name: load-balancer-api-logger
  namespace: servicename-core-ns
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:eu-central-1:786543355018:certificate/acdff29d4-7a32-42f1-8f11-1d4f495a5c77
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
    service.beta.kubernetes.io/force-ssl-redirect: "true"
spec:
  selector:
    app: api-logger
  type: NodePort
  ports:
    - protocol: TCP
      port: 443
      targetPort: 5000
  selector:
    app.kubernetes.io/name: api-logger
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-articor
  namespace: servicename-core-ns
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: instance
    alb.ingress.kubernetes.io/healthcheck-path: "/healthcheckep"
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS": 443}]'
    alb.ingress.kubernetes.io/ssl-redirect: '443'
spec:
  rules:
    - host: logger.domainname.com
      http:
        paths:
          - path: "/"
            pathType: Prefix
            backend:
              service:
                name: load-balancer-api-logger
                port:
                  number: 80
Please note that if I manually set the ALB to work with HTTPS, it works fine. What I'm trying to achieve here is to configure it via the YAML file.
You should configure all ALB settings on the Ingress object. The following spec also avoids repeating the default values set by the controller:
apiVersion: v1
kind: Service
metadata:
  name: load-balancer-api-logger
  namespace: servicename-core-ns
spec:
  selector:
    app: api-logger
  type: NodePort
  ports:
    - protocol: TCP
      port: 443
      targetPort: 5000
  selector:
    app.kubernetes.io/name: api-logger
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-articor
  namespace: servicename-core-ns
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/healthcheck-path: "/healthcheckep"
    alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:eu-central-1:786543355018:certificate/acdff29d4-7a32-42f1-8f11-1d4f495a5c77
spec:
  rules:
    - host: logger.domainname.com
      http:
        paths:
          - path: "/"
            pathType: Prefix
            backend:
              service:
                name: load-balancer-api-logger
                port:
                  number: 443
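If you also want the ALB to keep listening on port 80 and redirect HTTP to HTTPS, the listen-ports and ssl-redirect annotations from the question can be kept alongside the certificate annotation. A sketch of just the Ingress metadata under that assumption (the spec stays the same as above):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-articor
  namespace: servicename-core-ns
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/healthcheck-path: "/healthcheckep"
    alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:eu-central-1:786543355018:certificate/acdff29d4-7a32-42f1-8f11-1d4f495a5c77
    # extra: open both listeners and let the controller redirect HTTP to HTTPS
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS": 443}]'
    alb.ingress.kubernetes.io/ssl-redirect: '443'
# spec: unchanged from the manifest above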

AWS Route53 external-dns points only one A record to the (E)LB of the (nginx) ingress controller

I have created two Ingresses, one for Grafana and one for my app. When external-dns writes them into the Route53 hosted zone as A records, only one of them (the MyApp DNS) gets the (E)LB alias (DNS); the second A record gets the internal IP as its address in the Route53 A record.
The big questions:
Is there a way to set them all to the same alias, or at least to the same ELB?
Why doesn't it do so by default?
Using:
  terraform:
    helm_release:
      - nginx-controller-bitnami
      - external-dns-bitnami
      - prometheus community
grafana-ingress:
apiVersion: v1
items:
  - apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      annotations:
        kubernetes.io/ingress.class: nginx
        meta.helm.sh/release-name: kube-prometheus-stack
        meta.helm.sh/release-namespace: prometheus
      generation: 1
      labels:
        app.kubernetes.io/instance: kube-prometheus-stack
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/name: grafana
        app.kubernetes.io/version: 8.3.3
        helm.sh/chart: grafana-6.20.5
      name: kube-prometheus-stack-grafana
      namespace: prometheus
      resourceVersion: "2419"
    spec:
      rules:
        - host: grafana.dns.io
          http:
            paths:
              - backend:
                  service:
                    name: kube-prometheus-stack-grafana
                    port:
                      number: 80
                path: /
                pathType: Prefix
    status:
      loadBalancer:
        ingress:
          - ip: 10.0.1.19
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
my-app-ingress
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress
  annotations:
    ingress.kubernetes.io/ssl-redirect: /
    nginx.ingress.kubernetes.io/rewrite-target: /$1
    kubernetes.io/ingress.class: nginx
spec:
  rules:
    - host: "some.website.io"
      http:
        paths:
          - pathType: Prefix
            path: "/"
            backend:
              service:
                name: "{{ .Release.Name }}-app"
                port:
                  number: 80
status:
  loadBalancer:
    ingress:
      - ip: 10.0.1.19
nginx-controller-service
apiVersion: v1
kind: Service
metadata:
  annotations:
    external-dns.alpha.kubernetes.io/hostname: website.my-app.io, some.grafana.io
    meta.helm.sh/release-name: nginx-ingress-controller
    meta.helm.sh/release-namespace: default
  finalizers:
    - service.kubernetes.io/load-balancer-cleanup
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: nginx-ingress-controller
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: nginx-ingress-controller
    helm.sh/chart: nginx-ingress-controller-9.1.4
  name: nginx-ingress-controller
  namespace: default
  resourceVersion: "997"
spec:
  clusterIP: 172.30.0.140
  clusterIPs:
    - 172.30.0.140
  externalTrafficPolicy: Cluster
  ipFamilies:
    - IPv4
  ipFamilyPolicy: SingleStack
  ports:
    - name: http
      nodePort: 30337
      port: 80
      protocol: TCP
      targetPort: http
    - name: https
      nodePort: 31512
      port: 443
      protocol: TCP
      targetPort: https
  selector:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: nginx-ingress-controller
    app.kubernetes.io/name: nginx-ingress-controller
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
      - hostname: someElbDns.eu-west-1.elb.amazonaws.com
Since your ingress controller Service is of type LoadBalancer, creating it provisions an NLB for you, with your nginx controller pods as the targets in its target groups.
A request passes from the NLB to the ingress pods and then on to Grafana or your app.
I would try to use the same host (website.io) and the same nginx ingress class so both ingresses get assigned the same NLB address:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  namespace: default
  name: grafana-ingress
  annotations:
    nginx.ingress.kubernetes.io/force-ssl-redirect: "false"
    kubernetes.io/ingress.class: "nginx"
spec:
  tls:
    - hosts:
        - grafana.website.io
      secretName: tls-secret-website-io
  rules:
    - host: website.io
      http:
        paths:
          - path: /
            backend:
              serviceName: grafana-service
              servicePort: 80
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  namespace: default
  name: awebsite-ingress
  annotations:
    nginx.ingress.kubernetes.io/force-ssl-redirect: "false"
    kubernetes.io/ingress.class: "nginx"
spec:
  tls:
    - hosts:
        - subdomain.website.io
      secretName: tls-secret-website-io
  rules:
    - host: website.io
      http:
        paths:
          - path: /(/|$)(.*)
            backend:
              serviceName: website-service
              servicePort: 80
---
kind: Service
apiVersion: v1
metadata:
  name: nginx-controller
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: nginx-controller
    app.kubernetes.io/part-of: nginx-controller
  annotations:
    # by default the type is elb (classic load balancer).
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
spec:
  # this setting is to make sure the source IP address is preserved.
  externalTrafficPolicy: Local
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: nginx-controller
    app.kubernetes.io/part-of: nginx-controller
  ports:
    - name: http
      port: 80
      targetPort: http
    - name: https
      port: 443
      targetPort: https
The last step is to create a CNAME entry in Route53 from your domain to the load balancer DNS name.
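Alternatively, since external-dns is already running and watches Services, it can create those records itself when the hostnames are listed in the hostname annotation on the controller Service (the question's Service already does this). A minimal sketch with placeholder hostnames; the rest of the Service spec stays as shown above:

apiVersion: v1
kind: Service
metadata:
  name: nginx-controller
  namespace: ingress-nginx
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
    # external-dns creates a record for each listed name, pointing at this Service's NLB hostname
    external-dns.alpha.kubernetes.io/hostname: grafana.website.io, subdomain.website.io
spec:
  type: LoadBalancer
  # ports and selector unchanged from the controller Service above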

Dynamically set External dns for EKS fargate ingress alb using external-dns.alpha.kubernetes.io

I am trying to set up external DNS from an EKS manifest file.
I created an EKS cluster and created three Fargate profiles: default, kube-system, and dev.
The CoreDNS pods are up and running.
I then installed the AWS Load Balancer Controller by following this doc:
https://docs.aws.amazon.com/eks/latest/userguide/aws-load-balancer-controller.html
The load balancer controller came up in kube-system.
I then installed the external-dns deployment using the following manifest file.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: external-dns
  namespace: kube-system
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::xxxxxxxxx:role/eks-externaldnscontrollerRole-XST756O4A65B
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: external-dns
rules:
  - apiGroups: [""]
    resources: ["services"]
    verbs: ["get","watch","list"]
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get","watch","list"]
  - apiGroups: ["extensions"]
    resources: ["ingresses"]
    verbs: ["get","watch","list"]
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["list"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: external-dns-viewer
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: external-dns
subjects:
  - kind: ServiceAccount
    name: external-dns
    namespace: kube-system
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: external-dns
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: external-dns
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: external-dns
    spec:
      serviceAccountName: external-dns
      containers:
        - name: external-dns
          image: bitnami/external-dns:0.7.1
          args:
            - --source=service
            - --source=ingress
            #- --domain-filter=xxxxxxxxxx.com # will make ExternalDNS see only the hosted zones matching provided domain, omit to process all available hosted zones
            - --provider=aws
            #- --policy=upsert-only # would prevent ExternalDNS from deleting any records, omit to enable full synchronization
            - --aws-zone-type=public # only look at public hosted zones (valid values are public, private or no value for both)
            - --registry=txt
            - --txt-owner-id=my-identifier
      #securityContext:
      #  fsGroup: 65534
I used both the kube-system and dev namespaces for external-dns; both came up fine.
I then deployed the application and Ingress manifest files, again using both namespaces, kube-system and dev.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app1-nginx-deployment
  labels:
    app: app1-nginx
  namespace: kube-system
spec:
  replicas: 2
  selector:
    matchLabels:
      app: app1-nginx
  template:
    metadata:
      labels:
        app: app1-nginx
    spec:
      containers:
        - name: app1-nginx
          image: kube-nginxapp1:1.0.0
          ports:
            - containerPort: 80
          resources:
            requests:
              memory: "128Mi"
              cpu: "500m"
            limits:
              memory: "500Mi"
              cpu: "1000m"
---
apiVersion: v1
kind: Service
metadata:
  name: app1-nginx-nodeport-service
  labels:
    app: app1-nginx
  namespace: kube-system
  annotations:
    #Important Note: Need to add health check path annotations in service level if we are planning to use multiple targets in a load balancer
    alb.ingress.kubernetes.io/healthcheck-path: /app1/index.html
spec:
  type: NodePort
  selector:
    app: app1-nginx
  ports:
    - port: 80
      targetPort: 80
---
# Annotations Reference: https://kubernetes-sigs.github.io/aws-alb-ingress-controller/guide/ingress/annotation/
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-usermgmt-restapp-service
  labels:
    app: usermgmt-restapp
  namespace: kube-system
  annotations:
    # Ingress Core Settings
    alb.ingress.kubernetes.io/scheme: internet-facing
    kubernetes.io/ingress.class: alb
    # Health Check Settings
    alb.ingress.kubernetes.io/healthcheck-protocol: HTTP
    alb.ingress.kubernetes.io/healthcheck-port: traffic-port
    #Important Note: Need to add health check path annotations in service level if we are planning to use multiple targets in a load balancer
    #alb.ingress.kubernetes.io/healthcheck-path: /usermgmt/health-status
    alb.ingress.kubernetes.io/healthcheck-interval-seconds: '15'
    alb.ingress.kubernetes.io/healthcheck-timeout-seconds: '5'
    alb.ingress.kubernetes.io/success-codes: '200'
    alb.ingress.kubernetes.io/healthy-threshold-count: '2'
    alb.ingress.kubernetes.io/unhealthy-threshold-count: '2'
    ## SSL Settings
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS":443}, {"HTTP":80}]'
    alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:us-east-1:0xxxxxxxxxx:certificate/0xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
    #alb.ingress.kubernetes.io/ssl-policy: ELBSecurityPolicy-TLS-1-1-2017-01 #Optional (Picks default if not used)
    # SSL Redirect Setting
    alb.ingress.kubernetes.io/actions.ssl-redirect: '{"Type": "redirect", "RedirectConfig": { "Protocol": "HTTPS", "Port": "443", "StatusCode": "HTTP_301"}}'
    # External DNS - For creating a Record Set in Route53
    external-dns.alpha.kubernetes.io/hostname: palb.xxxxxxx.com
    # For Fargate
    alb.ingress.kubernetes.io/target-type: ip
spec:
  rules:
    - http:
        paths:
          - path: /* # SSL Redirect Setting
            pathType: ImplementationSpecific
            backend:
              service:
                name: ssl-redirect
                port:
                  name: use-annotation
          - path: /*
            pathType: ImplementationSpecific
            backend:
              service:
                name: app1-nginx-nodeport-service
                port:
                  number: 80
All the pods came up fine, but it is not dynamically registering the DNS alias for the ALB.
Can you please guide me on what I am doing wrong?
First, check that the ingress itself works. Check the AWS load balancers and load balancer target groups; the target group targets should be healthy.
If you run kubectl get ingress, the output should also show the DNS name of the load balancer that was created.
Use curl to check that this URL works.
The annotation external-dns.alpha.kubernetes.io/hostname: palb.xxxxxxx.com does not work for Ingresses; it is only valid for Services. But you don't need it: just specify the host field in the Ingress spec.rules. In your example there is no such property, so add it.
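For example, the rules of the Ingress above could be adjusted like this, keeping the same backends (palb.xxxxxxx.com stands in for the real hostname, as in the question):

spec:
  rules:
    - host: palb.xxxxxxx.com   # external-dns reads this host and creates the Route53 record
      http:
        paths:
          - path: /*
            pathType: ImplementationSpecific
            backend:
              service:
                name: ssl-redirect
                port:
                  name: use-annotation
          - path: /*
            pathType: ImplementationSpecific
            backend:
              service:
                name: app1-nginx-nodeport-service
                port:
                  number: 80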

Which part of my AWS EKS setup isn't running correctly?

I have set up an EKS cluster and configured it to run six or so different microservices in their own pods. I am using an ALB as the ingress to those pods and have noticed that the connection to the pods sometimes times out. I am struggling to determine exactly what the cause is.
The pods work as expected for the first X requests, but once they have been idle for a while and I then make a new request, the connection times out. Could this be due to the ALB, or am I missing something in Kubernetes?
One of the deployments looks like this:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: challengepasswordlookupservice
  labels:
    app: challengepasswordlookupservice
spec:
  replicas: 1
  selector:
    matchLabels:
      app: challengepasswordlookupservice
  template:
    metadata:
      labels:
        app: challengepasswordlookupservice
    spec:
      containers:
        - name: challengepasswordlookupservice
          image: *withheld*
          ports:
            - containerPort: 80
---
kind: Service
apiVersion: v1
metadata:
  name: challengepasswordlookupservice
spec:
  type: NodePort
  selector:
    app: challengepasswordlookupservice
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
And the ingress:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: golf-high-service-ingress
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/actions.ssl-redirect: '{"Type": "redirect", "RedirectConfig":{ "Protocol": "HTTPS", "Port": "443", "StatusCode": "HTTP_301"}}'
    alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:eu-west-2:206106816545:certificate/8d91b886-9d57-4fcf-b016-04959cf4d97d
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS":443}]'
    alb.ingress.kubernetes.io/load-balancer-attributes: idle_timeout.timeout_seconds=60
spec:
  rules:
    - host: "*withheld*"
      http:
        paths:
          - path: "/*"
            pathType: Prefix
            backend:
              service:
                name: ssl-redirect
                port:
                  name: use-annotation
          - path: "/*"
            pathType: Prefix
            backend:
              service:
                name: challengepasswordlookupservice
                port:
                  number: 80
Try increasing idle_timeout.timeout_seconds=60 in the annotations to something more relevant, maybe 1800, and try it out.
I believe you'll want the load-balancer-attributes annotation set to idle_timeout.timeout_seconds=1800.
https://github.com/kubernetes-sigs/aws-load-balancer-controller/blob/8c2fbfd478a77f983afde2bb5a3ce2f06957e75d/internal/alb/lb/attributes.go#L20
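A sketch of the adjusted annotation on the Ingress above; 1800 seconds is just an example value, and the other annotations stay as they were:

metadata:
  annotations:
    # raise the ALB idle timeout from the 60 seconds in the question to 30 minutes
    alb.ingress.kubernetes.io/load-balancer-attributes: idle_timeout.timeout_seconds=1800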
The issue was with the ALB. On AWS, an ALB requires at least one subnet in each of two different Availability Zones. One of the attached subnets did not have a route to an internet gateway, so connections were not established when requests were routed through that subnet.
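If the controller keeps choosing an unsuitable subnet, the ALB can also be pinned to specific subnets with the subnets annotation. A minimal sketch; the subnet IDs below are placeholders:

metadata:
  annotations:
    # placeholder IDs: pick public subnets in two AZs, each with a route to an internet gateway
    alb.ingress.kubernetes.io/subnets: subnet-0aaaaaaaaaaaaaaaa, subnet-0bbbbbbbbbbbbbbbb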

Unable to create ingress and load balancer in k8s

I have created a LoadBalancer Service using the following configuration:
apiVersion: v1
kind: Service
metadata:
  name: dealer-service-elb
  namespace: staging
  labels:
    service: dealer-service-elb
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
    service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0
spec:
  type: LoadBalancer
  selector:
    service: dealer-service
  ports:
    - name: 'http'
      port: 80
      targetPort: 3000
      protocol: 'TCP'
and my AWS ALB Ingress looks like this:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: dealer-service-ingress
  labels:
    app: dealer-service
  namespace: staging
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internal
    alb.ingress.kubernetes.io/healthcheck-path: /dealers-service/v1/health
    alb.ingress.kubernetes.io/healthcheck-protocol: HTTP
spec:
  rules:
    - http:
        paths:
          - path: /*
            backend:
              serviceName: dealer-service
              servicePort: 80
But when I checked my ingress with kubectl, I did not find an ingress URL; the ADDRESS column is empty:
NAME                     HOSTS   ADDRESS   PORTS   AGE
dealer-service-ingress   *                 80      34m