Expose AWS EKS Service Through CloudFront on Top of AWS ALB Ingress

I want to expose our Kubernetes service through an ALB and CloudFront.
We have created a CloudFront distribution and added the load balancer as an origin with a custom origin header X-Custom-Header: cloudfront-header.
In the load balancer, we have manually created a rule:
HTTP Header X-Custom-Header is cloudfront-header: Forward to kubernetes-service-target-group : 1 (100%)
This setup works and exposes our Kubernetes service through CloudFront on top of the ALB, but we want to manage the configuration through the ingress.yaml file, because whenever the configuration changes, the manually added rules get removed.
[Screenshot: CloudFront configuration]
[Screenshot: AWS ALB configuration]
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:ap-southeast-1:xxxx:certificate/daxxx-xxx-43c8-ada-cb5c97a1366b
    alb.ingress.kubernetes.io/group.name: domain-web
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS": 443}]'
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/ssl-redirect: "443"
    alb.ingress.kubernetes.io/target-type: ip
  finalizers:
  - group.ingress.k8s.aws/domain-web
  labels:
    app.kubernetes.io/instance: domain-web
    app.kubernetes.io/name: domain-web
  name: domain-web
spec:
  ingressClassName: alb
  rules:
  - host: market.domain.com
    http:
      paths:
      - backend:
          service:
            name: marketing
            port:
              number: 3000
        pathType: ImplementationSpecific
  tls:
  - hosts:
    - market.domain.com
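The manual listener rule can likely be recreated declaratively: the AWS Load Balancer Controller supports custom routing conditions via the alb.ingress.kubernetes.io/conditions.{conditions-name} annotation, where the name must match the backend service name referenced in the rules. A minimal sketch (abridged to the relevant parts, untested against this exact setup) that should re-apply the X-Custom-Header match on every reconcile:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: domain-web
  annotations:
    alb.ingress.kubernetes.io/group.name: domain-web
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
    # Only forward requests carrying the CloudFront custom origin header.
    # The annotation suffix ("marketing") must match the backend service name below.
    alb.ingress.kubernetes.io/conditions.marketing: >
      [{"field":"http-header","httpHeaderConfig":{"httpHeaderName":"X-Custom-Header","values":["cloudfront-header"]}}]
spec:
  ingressClassName: alb
  rules:
  - host: market.domain.com
    http:
      paths:
      - backend:
          service:
            name: marketing
            port:
              number: 3000
        pathType: ImplementationSpecific

With this in place, requests that reach the ALB without the header fall through to the listener's default action instead of the service target group, which mirrors the manually created rule.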

Related

How to set an ingress with ClusterIP in AWS-EKS

I am new to AWS EKS and I want to know how I can set up an ingress and enable TLS (with a free service such as Let's Encrypt).
I have deployed an EKS cluster and I have the following sample nginx manifest.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service-loadbalancer
spec:
  type: LoadBalancer  # <------ can't I use a ClusterIP and still have an LB provisioned?
  selector:
    app: nginx
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
---
# 05-ALB-Ingress-Basic.yml
# Annotations Reference: https://kubernetes-sigs.github.io/aws-alb-ingress-controller/guide/ingress/annotation/
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-usermgmt-restapp-service
  labels:
    app: usermgmt-restapp
  annotations:
    # Ingress Core Settings
    kubernetes.io/ingress.class: "alb"
    alb.ingress.kubernetes.io/scheme: internet-facing
    # Health Check Settings
    alb.ingress.kubernetes.io/healthcheck-protocol: HTTP
    alb.ingress.kubernetes.io/healthcheck-port: traffic-port
    alb.ingress.kubernetes.io/healthcheck-path: /usermgmt/health-status
    alb.ingress.kubernetes.io/healthcheck-interval-seconds: '15'
    alb.ingress.kubernetes.io/healthcheck-timeout-seconds: '5'
    alb.ingress.kubernetes.io/success-codes: '200'
    alb.ingress.kubernetes.io/healthy-threshold-count: '2'
    alb.ingress.kubernetes.io/unhealthy-threshold-count: '2'
spec:
  rules:
    - http:
        paths:
          - path: /*
            pathType: Prefix
            backend:
              service:
                name: nginx-service-loadbalancer
                port:
                  number: 80
When it creates the LoadBalancer type service, it goes ahead and creates a Classic Load Balancer.
My questions are:
How can I provision (automatically) a Layer 7 Application Load Balancer instead of the Classic Load Balancer?
Instead of using a LoadBalancer type service, can I use a ClusterIP service, point my ingress at it, and still have a load balancer provisioned automatically?
Thank you!
How can I provision (automatically) a Layer 7 Application Load Balancer and not the Classic Load Balancer?
By using an Ingress resource and specifying kubernetes.io/ingress.class: "alb".
Instead of using a LoadBalancer type service, can I use a ClusterIP service and use my ingress to point to that and still create an automatic load balancer?
Yes. When you use an ALB Ingress resource with the annotation alb.ingress.kubernetes.io/target-type: ip, you can use a ClusterIP service. In that case, don't create both a LoadBalancer-type service and an Ingress resource at the same time.
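For example, a minimal sketch (service and ingress names are placeholders) pairing a ClusterIP service with an ALB Ingress in IP mode:

apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: ClusterIP          # no LoadBalancer service needed
  selector:
    app: nginx
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress
  annotations:
    kubernetes.io/ingress.class: "alb"
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip   # targets pod IPs directly, so ClusterIP works
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: nginx-service
                port:
                  number: 80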

ALB for Argo CD on kubernetes using AWS Load Balancer Controller (HTTP2)

I have an EKS Kubernetes cluster with the AWS Load Balancer Controller and Argo CD installed. I'm creating an Application Load Balancer based on the Argo CD documentation here.
Basically, I'm creating a NodePort service that receives traffic from the load balancer, and an Ingress that will create the load balancer (using the AWS Load Balancer Controller).
The ingress code is this one:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    alb.ingress.kubernetes.io/backend-protocol: HTTPS
    alb.ingress.kubernetes.io/backend-protocol-version: HTTP2
    # Use this annotation (which must match a service name) to route traffic to HTTP2 backends.
    alb.ingress.kubernetes.io/conditions.argogrpc: |
      [{"field":"http-header","httpHeaderConfig":{"httpHeaderName": "Content-Type", "values":["application/grpc"]}}]
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS":443}]'
    # ALB annotations
    kubernetes.io/ingress.class: 'alb'
    alb.ingress.kubernetes.io/scheme: 'internet-facing'
    alb.ingress.kubernetes.io/target-type: 'instance'
    alb.ingress.kubernetes.io/load-balancer-name: 'test-argocd'
    alb.ingress.kubernetes.io/certificate-arn: 'arn:aws:acm:us-east-1:1234567:certificate/longcertcode'
    alb.ingress.kubernetes.io/load-balancer-attributes: routing.http2.enabled=true
    # Health Check
    alb.ingress.kubernetes.io/healthcheck-protocol: HTTPS
    alb.ingress.kubernetes.io/healthcheck-port: traffic-port
  name: argocd
  namespace: argocd
spec:
  rules:
  - host: argocd.argoproj.io
    http:
      paths:
      - path: /
        backend:
          service:
            name: argogrpc
            port:
              number: 443
        pathType: ImplementationSpecific
  tls:
  - hosts:
    - argocd.argoproj.io
  defaultBackend:
    service:
      name: argogrpc
      port:
        number: 443
And that creates a Load Balancer as expected.
I'm creating the service with this:
apiVersion: v1
kind: Service
metadata:
  annotations:
    alb.ingress.kubernetes.io/backend-protocol-version: HTTP2
  labels:
    app: argogrpc
  name: argogrpc
  namespace: argocd
spec:
  ports:
  - name: "443"
    port: 443
    protocol: TCP
    targetPort: 8080
  selector:
    app.kubernetes.io/name: argocd-server
  sessionAffinity: None
  type: NodePort
The issue here is that the health check is failing on the target group (screenshot omitted).
If I change the backend protocol version to GRPC:
apiVersion: v1
kind: Service
metadata:
  annotations:
    alb.ingress.kubernetes.io/backend-protocol-version: GRPC
  labels:
    app: argogrpc
  name: argogrpc
  namespace: argocd
spec:
  ports:
  - name: "443"
    port: 443
    protocol: TCP
    targetPort: 8080
  selector:
    app.kubernetes.io/name: argocd-server
  sessionAffinity: None
  type: NodePort
Then the health check passes, but I get a 464 error in Chrome (screenshot omitted).
The AWS documentation describes this error, but it doesn't clarify why I'm getting it.
So the question is: how do I create an Application Load Balancer for my Argo CD using the AWS Load Balancer Controller that actually works? According to the documentation, it should work in both cases.
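No answer is recorded here, but the Argo CD ALB documentation suggests one likely fix (a sketch, not verified against this exact cluster): route only gRPC traffic to the GRPC-annotated argogrpc service and send everything else to the regular argocd-server service, so browser requests never hit the gRPC target group (which is what produces the 464, an HTTP-version mismatch between client and target group). Assuming argocd-server is also exposed as a NodePort service, the rules would look like:

spec:
  rules:
    - host: argocd.argoproj.io
      http:
        paths:
          # Matched by the conditions.argogrpc header rule (Content-Type: application/grpc).
          - path: /
            backend:
              service:
                name: argogrpc        # service annotated with backend-protocol-version: GRPC
                port:
                  number: 443
            pathType: Prefix
          # Everything else (the browser UI) goes to the plain HTTPS backend.
          - path: /
            backend:
              service:
                name: argocd-server
                port:
                  number: 443
            pathType: Prefix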

Sonarqube with alb ingress controller gives 504 time out error

I have deployed the SonarQube Helm chart in EKS (https://github.com/SonarSource/helm-chart-sonarqube/tree/master/charts/sonarqube). In order to access SonarQube from outside the EKS cluster, I'm using the ALB ingress controller instead of the NGINX ingress controller. I don't understand where it breaks: when I call my host, for example "myhost.com", I get a 504 timeout, even though the app is running fine inside the pod (I tested it). I don't understand why it isn't forwarding the traffic to the server.
My ingress file example:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-k8s-sonarqube-sonarqube-internal
  annotations:
    # annotations: https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.3/guide/ingress/annotations/
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/group.name: {{ .Values.sonarqube.ingress.groupname }}
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS":443}]'
spec:
  rules:
  - host: {{ .Values.sonarqube.ingress.host_public }}
    http:
      paths:
      - pathType: Prefix
        path: /
        backend:
          service:
            name: app-k8s-sonarqube-sonarqube
            port:
              number: 9000
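No accepted answer is included for this one, but with target-type: ip a 504 from the ALB usually means the load balancer cannot reach the pod (check that the node/pod security group allows traffic from the ALB's security group on port 9000) or the target never becomes healthy. One hedged suggestion is to point the health check at an endpoint that reliably returns 200; SonarQube exposes /api/system/status, so annotations like the following (illustrative, untested) could be added to the Ingress:

  annotations:
    alb.ingress.kubernetes.io/healthcheck-protocol: HTTP
    alb.ingress.kubernetes.io/healthcheck-port: traffic-port
    alb.ingress.kubernetes.io/healthcheck-path: /api/system/status
    alb.ingress.kubernetes.io/success-codes: '200'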

AWS certificate resolver for Traefik Ingress Controller in K3S Kubernetes Cluster with existing AWS HTTPS Load Balancer

I have an AWS K3S Kubernetes Cluster
I have an AWS Load Balancer
I have a registered domain
I have a registered AWS Certificate
I created a CNAME record for my domain pointing to the AWS Load Balancer DNS name
I installed the Traefik Ingress Controller on the AWS K3S Kubernetes Cluster
I deployed the "usermgmt" and "whoami" services to the AWS K3S Kubernetes Cluster
I created a Traefik Ingress with paths to "usermgmt" and "whoami"
The question is:
How do I connect my AWS Load Balancer, which is hosted on my domain, to my services on K3S, using the Traefik Ingress Controller?
Or, in other words:
How do I adapt the "traefik service" or "traefik deployment" described below to use an AWS certificate resolver for my registered domain?
Or, failing that, is there any example of how to use an AWS Load Balancer, AWS Target Group, and AWS Security Group (created with Terraform files) in combination with the Traefik Ingress Controller and Traefik Ingress Routes, deployed to a K3S Kubernetes Cluster and resolved with an AWS Certificate?
I currently can't connect to my services through the AWS Load Balancer. The following errors are returned:
404 Page Not Found
502 Bad Gateway
Here are examples of the URLs I try:
https://keycloak.skycomposer.net/usermgmt
https://keycloak.skycomposer.net/whoami
I set up corresponding Ingress Routes for the "usermgmt" and "whoami" Kubernetes services.
Here is some more information:
I created the K3S Kubernetes Cluster in AWS with a Load Balancer.
These are my terraform files:
https://github.com/skyglass/user-management/tree/master/terraform
The K3S cluster is deployed to an EC2 instance (see the "userdata.tpl" script).
I disabled the Traefik Ingress Controller deployment so that I could deploy it later.
I found an example of how to install Traefik on a K3S Kubernetes cluster here:
https://github.com/sleighzy/k3s-traefik-v2-kubernetes-crd
Unfortunately, this example uses a "godaddy" certificate resolver, but my domain is registered with AWS Route 53 and I use AWS Certificate Manager.
Here are the files for the "traefik service" and "traefik deployment" that I am trying to adapt:
traefik-service:
---
apiVersion: v1
kind: Service
metadata:
  name: traefik
  namespace: kube-system
spec:
  # The targetPort entries are required as the Traefik container is listening on ports > 1024
  # so that the container can be run as a non-root user and can bind to these ports.
  # Traefik is still accessed over 80 and 443 on the host, but the service routes the traffic
  # to ports 8080 and 8443 on the container.
  ports:
  - protocol: TCP
    name: web
    port: 80
    targetPort: 8080
  - protocol: TCP
    name: websecure
    port: 443
    targetPort: 8443
  - protocol: TCP
    name: admin
    port: 8080
    targetPort: 9080
  selector:
    app: traefik
  # Set externalTrafficPolicy to Local so that all external traffic intended for
  # the Traefik pod goes directly to that local node. If the default of Cluster is
  # used instead then the client source IP address is lost, and may hop between nodes.
  externalTrafficPolicy: Local
  type: LoadBalancer
traefik-deployment:
---
apiVersion: v1
kind: ServiceAccount
metadata:
  namespace: kube-system
  name: traefik-ingress-controller
---
kind: Deployment
apiVersion: apps/v1
metadata:
  namespace: kube-system
  name: traefik
  labels:
    app: traefik
spec:
  replicas: 1
  selector:
    matchLabels:
      app: traefik
  template:
    metadata:
      labels:
        app: traefik
    spec:
      serviceAccountName: traefik-ingress-controller
      containers:
        - name: traefik
          image: traefik:v2.4
          args:
            - --api.dashboard=true
            - --ping=true
            - --accesslog
            - --entrypoints.traefik.address=:9080
            - --entrypoints.web.address=:8080
            - --entrypoints.websecure.address=:8443
            # Uncomment the below lines to redirect http requests to https.
            # This specifies the port :443 and not the https entrypoint name for the
            # redirect as the service is listening on port 443 and directing traffic
            # to the 8443 target port. If the entrypoint name "websecure" was used,
            # instead of "to=:443", then the browser would be redirected to port 8443.
            - --entrypoints.web.http.redirections.entrypoint.to=:443
            - --entrypoints.web.http.redirections.entrypoint.scheme=https
            - --providers.kubernetescrd
            - --providers.kubernetesingress
            - --certificatesresolvers.myresolver.acme.tlschallenge=true
            - --certificatesresolvers.myresolver.acme.email=postmaster@example.com
            - --certificatesresolvers.myresolver.acme.storage=/etc/traefik/certs/acme.json
            # Please note that this is the staging Let's Encrypt server.
            # Once you get things working, you should remove that whole line altogether.
            # - --certificatesresolvers.godaddy.acme.caserver=https://acme-staging-v02.api.letsencrypt.org/directory
            - --log
            - --log.level=INFO
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /ping
              port: 9080
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            timeoutSeconds: 3
          resources:
            limits:
              memory: '100Mi'
              cpu: '1000m'
          ports:
            # The Traefik container is listening on ports > 1024 so the container
            # can be run as a non-root user and can bind to these ports.
            - name: web
              containerPort: 8080
            - name: websecure
              containerPort: 8443
            - name: admin
              containerPort: 9080
          volumeMounts:
            - name: certificates
              mountPath: /etc/traefik/certs
      # volumes:
      #   - name: certificates
      #     persistentVolumeClaim:
      #       claimName: traefik-certs-pvc
      volumes:
        - name: certificates
          hostPath:
            path: "/Users/dddd/git/aws/letsencrypt:/etc/traefik/certs"
See other files here: https://github.com/sleighzy/k3s-traefik-v2-kubernetes-crd
Ideally, there should be a solution like this:
apiVersion: v1
kind: Service
metadata:
  name: traefik-proxy
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:REGION:ACCOUNTID:certificate/CERT-ID"
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http"
spec:
  type: LoadBalancer
  selector:
    app: traefik-proxy
    tier: proxy
  ports:
    - port: 443
      targetPort: 80
In this solution, I would just provide my AWS Certificate ARN, and the Traefik Ingress Controller would do everything else.
A similar solution is described in this article:
https://www.ronaldjamesgroup.com/blog/getting-started-with-traefik
But unfortunately this solution doesn't work for me either; I tried it without any success. The following errors are returned:
404 Page Not Found
502 Bad Gateway
when I try the Ingress Route paths for my domain:
https://keycloak.skycomposer.net/usermgmt
https://keycloak.skycomposer.net/whoami
After trying several options, I finally found the solution:
https://github.com/skyglass-examples/aws-k3s-traefik
I created the AWS Load Balancer and K3S cluster with Terraform
I created the Traefik Ingress Controller Kubernetes manifest files
I created Kubernetes manifest files for 2 services
I registered the AWS Load Balancer DNS name for my domain
I created an AWS Certificate for my domain
I used the AWS Certificate ARN for the Traefik Ingress Controller and the AWS HTTPS Load Balancer
Here are my Traefik Ingress Controller manifest files:
traefik-deployment.yaml:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: traefik-ingress-controller
  namespace: kube-system
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: traefik-proxy
  namespace: kube-system
  labels:
    app: traefik-proxy
    tier: proxy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: traefik-proxy
      tier: proxy
  template:
    metadata:
      labels:
        app: traefik-proxy
        tier: proxy
    spec:
      serviceAccountName: traefik-ingress-controller
      terminationGracePeriodSeconds: 60
      containers:
        - image: traefik:v1.2.0-rc1-alpine
          name: traefik-proxy
          ports:
            - containerPort: 80
              hostPort: 80
              name: traefik-proxy
            - containerPort: 8080
              name: traefik-ui
          args:
            - --web
            - --kubernetes
traefik-service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: traefik-proxy
  namespace: kube-system
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:us-west-1:dddddddddd"
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http"
    service.beta.kubernetes.io/aws-load-balancer-internal: "0.0.0.0/0"
    service.beta.kubernetes.io/aws-load-balancer-type: "alb"
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local
  selector:
    app: traefik-proxy
    tier: proxy
  ports:
    - port: 443
      targetPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: traefik-web-ui
  namespace: kube-system
spec:
  selector:
    app: traefik-proxy
    tier: proxy
  ports:
    - port: 80
      targetPort: 8080
traefik-ingress.yaml:
apiVersion: networking.k8s.io/v1beta1
kind: IngressClass
metadata:
  name: traefik-lb
spec:
  controller: traefik.io/ingress-controller
---
apiVersion: "networking.k8s.io/v1beta1"
kind: "Ingress"
metadata:
  name: "traefik-usermgmt-ingress"
spec:
  ingressClassName: "traefik-lb"
  rules:
  - host: "keycloak.skycomposer.net"
    http:
      paths:
      - path: "/usermgmt"
        backend:
          serviceName: "usermgmt"
          servicePort: 80
---
apiVersion: "networking.k8s.io/v1beta1"
kind: "Ingress"
metadata:
  name: "traefik-whoami-ingress"
spec:
  ingressClassName: "traefik-lb"
  rules:
  - host: "keycloak.skycomposer.net"
    http:
      paths:
      - path: "/whoami"
        backend:
          serviceName: "whoami"
          servicePort: 80
See the full code here:
https://github.com/skyglass-examples/aws-k3s-traefik
The code includes:
terraform files for the AWS Load Balancer and the K3S Kubernetes Cluster
source code for one of the Docker containers, which I deployed to K3S
kubernetes manifest files for the Traefik Ingress Controller, two Kubernetes services, and the Traefik Ingress, which exposes these services over a secured HTTPS connection on the registered domain
Replace the AWS Certificate ARN with the corresponding ARN of your certificate
Replace "skycomposer.net" with your own domain name (see more details in the Readme file: https://github.com/skyglass-examples/aws-k3s-traefik)

In AWS EKS, how can I define ingress to use one ALB for multiple subdomain URLs, each with their own certificate?

I have multiple services that need to be exposed to the internet, but I'd like to use a single ALB for them.
I am using the latest AWS Load Balancer Controller, and I've been reading the documentation here (https://kubernetes-sigs.github.io/aws-load-balancer-controller/guide/ingress/annotations/#traffic-routing), but I haven't found a clear explanation on how to achieve this.
Here's the setup:
I have service-a.example.com and service-b.example.com. They each have their own certificates within AWS Certificate Manager.
Within Kubernetes, each has its own Service object, defined as follows (each unique):
apiVersion: v1
kind: Service
metadata:
  name: svc-a-service
  annotations:
    alb.ingress.kubernetes.io/healthcheck-protocol: HTTP
    alb.ingress.kubernetes.io/healthcheck-port: traffic-port
    alb.ingress.kubernetes.io/healthy-threshold-count: '5'
    alb.ingress.kubernetes.io/unhealthy-threshold-count: '2'
    alb.ingress.kubernetes.io/healthcheck-path: /index.html
    alb.ingress.kubernetes.io/healthcheck-interval-seconds: '30'
    alb.ingress.kubernetes.io/healthcheck-timeout-seconds: '5'
    alb.ingress.kubernetes.io/success-codes: '200'
    alb.ingress.kubernetes.io/tags: Environment=Test,App=ServiceA
spec:
  selector:
    app: service-a
  ports:
    - port: 80
      targetPort: 80
  type: NodePort
And each service has its own Ingress object, defined as follows (again, unique to each and with the correct certificates specified for each service):
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: svc-a-ingress
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/group.name: services
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/security-groups: sg-01234567898765432
    alb.ingress.kubernetes.io/ip-address-type: ipv4
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS": 443}]'
    alb.ingress.kubernetes.io/actions.ssl-redirect: '{"Type": "redirect", "RedirectConfig": { "Protocol": "HTTPS", "Port": "443", "StatusCode": "HTTP_301"}}'
    alb.ingress.kubernetes.io/actions.response-503: >
      {"type":"fixed-response","fixedResponseConfig":{"contentType":"text/plain","statusCode":"503","messageBody":"Unknown Host"}}
    alb.ingress.kubernetes.io/target-type: instance
    alb.ingress.kubernetes.io/load-balancer-attributes: routing.http2.enabled=true,idle_timeout.timeout_seconds=600
    alb.ingress.kubernetes.io/tags: Environment=Test
    alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:us-east-2:555555555555:certificate/33333333-2222-4444-AAAA-EEEEEEEEEEEE
    alb.ingress.kubernetes.io/ssl-policy: ELBSecurityPolicy-2016-08
spec:
  rules:
    - http:
        paths:
          - path: /*
            backend:
              serviceName: ssl-redirect
              servicePort: use-annotation
          - path: /*
            backend:
              serviceName: svc-a-service
              servicePort: 80
          - path: /*
            backend:
              serviceName: response-503
              servicePort: use-annotation
The HTTP to HTTPS redirection works as expected.
However, there is no differentiation between my two apps, so the load balancer has no way to know that traffic destined for service-a.example.com and service-b.example.com should be routed to two different target groups.
In the HTTPS:443 listener rules in the console, it shows:
IF Path is /* THEN Forward to ServiceATargetGroup
IF Path is /* THEN Return fixed 503
IF Path is /* THEN Forward to ServiceBTargetGroup
IF Path is /* THEN Return fixed 503
IF Request otherwise not routed THEN Return fixed 404
So the important question here is:
How should the ingress be defined to force traffic destined for service-a.example.com to ServiceATargetGroup, and traffic destined for service-b.example.com to ServiceBTargetGroup?
And secondarily, I need the "otherwise not routed" case to return a 503 instead of a 404. I was expecting this rule to appear only once (to be merged), yet it is created for each ingress. How should my YAML be structured to achieve this?
I eventually figured this out, so for anyone else stumbling onto this post, here's how I resolved it:
The trick was not relying on merging between the Ingress objects. The controller can handle a certain degree of merging, but there isn't really a one-to-one relationship between Services as target groups and Ingresses as ALBs, so you have to be very cautious and aware of what's in each Ingress object.
Once I combined all of my ingress definitions into a single object, I was able to get it working exactly as I wanted with the following YAML:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: svc-ingress
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/group.name: services
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/security-groups: sg-01234567898765432
    alb.ingress.kubernetes.io/ip-address-type: ipv4
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS": 443}]'
    alb.ingress.kubernetes.io/actions.ssl-redirect: '{"Type": "redirect", "RedirectConfig": { "Protocol": "HTTPS", "Port": "443", "StatusCode": "HTTP_301"}}'
    alb.ingress.kubernetes.io/actions.response-503: >
      {"type":"fixed-response","fixedResponseConfig":{"contentType":"text/plain","statusCode":"503","messageBody":"Unknown Host"}}
    alb.ingress.kubernetes.io/actions.svc-a-host: >
      {"type":"forward","forwardConfig":{"targetGroups":[{"serviceName":"svc-a-service","servicePort":80,"weight":100}]}}
    alb.ingress.kubernetes.io/conditions.svc-a-host: >
      [{"field":"host-header","hostHeaderConfig":{"values":["svc-a.example.com"]}}]
    alb.ingress.kubernetes.io/actions.svc-b-host: >
      {"type":"forward","forwardConfig":{"targetGroups":[{"serviceName":"svc-b-service","servicePort":80,"weight":100}]}}
    alb.ingress.kubernetes.io/conditions.svc-b-host: >
      [{"field":"host-header","hostHeaderConfig":{"values":["svc-b.example.com"]}}]
    alb.ingress.kubernetes.io/target-type: instance
    alb.ingress.kubernetes.io/load-balancer-attributes: routing.http2.enabled=true,idle_timeout.timeout_seconds=600
    alb.ingress.kubernetes.io/tags: Environment=Test
    alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:us-east-2:555555555555:certificate/33333333-2222-4444-AAAA-EEEEEEEEEEEE,arn:aws:acm:us-east-2:555555555555:certificate/44444444-3333-5555-BBBB-FFFFFFFFFFFF
    alb.ingress.kubernetes.io/ssl-policy: ELBSecurityPolicy-2016-08
spec:
  backend:
    serviceName: response-503
    servicePort: use-annotation
  rules:
    - http:
        paths:
          - backend:
              serviceName: ssl-redirect
              servicePort: use-annotation
          - backend:
              serviceName: svc-a-host
              servicePort: use-annotation
          - backend:
              serviceName: svc-b-host
              servicePort: use-annotation
Default Action:
Set by specifying the serviceName and servicePort directly under spec:
spec:
  backend:
    serviceName: response-503
    servicePort: use-annotation
Routing:
Because I'm using subdomains and paths won't work for me, I simply omitted the path and instead relied on hostname as a condition.
metadata:
  annotations:
    alb.ingress.kubernetes.io/actions.svc-a-host: >
      {"type":"forward","forwardConfig":{"targetGroups":[{"serviceName":"svc-a-service","servicePort":80,"weight":100}]}}
    alb.ingress.kubernetes.io/conditions.svc-a-host: >
      [{"field":"host-header","hostHeaderConfig":{"values":["svc-a.example.com"]}}]
End Result:
The ALB rules were configured precisely how I wanted them:
default action is a 503 fixed response
all http traffic is redirected to https
traffic is directed to TargetGroups based on the host header
AWS EKS now has a notion of IngressGroups, so multiple Ingress resources can share one Application Load Balancer. See Application load balancing on Amazon EKS.
To share an Application Load Balancer across multiple Ingress resources using IngressGroups, add the following annotation to each Kubernetes Ingress resource specification:
alb.ingress.kubernetes.io/group.name: <my-group>
The group name must be:
63 characters or less in length.
Consist of lower case alphanumeric characters, -, and ., and must start and end with an alphanumeric character.
The controller will automatically merge ingress rules for all Ingresses in the same Ingress group and support them with a single ALB. Most annotations defined on an Ingress only apply to the paths defined by that Ingress. By default, Ingress resources don't belong to any Ingress group.
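For example, a minimal sketch (names are placeholders) of two Ingresses that join the same group and therefore share a single ALB, each contributing its own host rule:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: svc-a-ingress
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/group.name: services   # same group => same ALB
spec:
  rules:
  - host: svc-a.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: svc-a-service
            port:
              number: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: svc-b-ingress
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/group.name: services   # joins the same ALB
spec:
  rules:
  - host: svc-b.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: svc-b-service
            port:
              number: 80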