I have deployed the SonarQube Helm chart on EKS (https://github.com/SonarSource/helm-chart-sonarqube/tree/master/charts/sonarqube). To access SonarQube from outside the EKS cluster, I'm using the ALB ingress controller instead of the NGINX ingress controller. I don't understand where it breaks. When I call my host, for example "myhost.com", I get a 504 timeout, even though the app is running fine inside the pod (I tested it). I don't understand why it isn't forwarding the traffic to the server.
My ingress file example:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: app-k8s-sonarqube-sonarqube-internal
annotations:
# annotations: https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.3/guide/ingress/annotations/
kubernetes.io/ingress.class: alb
alb.ingress.kubernetes.io/scheme: internet-facing
alb.ingress.kubernetes.io/target-type: ip
alb.ingress.kubernetes.io/group.name: {{ .Values.sonarqube.ingress.groupname }}
alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS":443}]'
spec:
rules:
- host: {{ .Values.sonarqube.ingress.host_public }}
http:
paths:
- pathType: Prefix
path: /
backend:
service:
name: app-k8s-sonarqube-sonarqube
port:
number: 9000
Related
We want to expose our Kubernetes service through an ALB and CloudFront.
We have created a distribution and added a load balancer origin with the custom header X-Custom-Header: cloudfront-header.
In the load balancer, we have manually created a rule:
HTTP Header X-Custom-Header is cloudfront-header: Forward to kubernetes-service-target-group : 1 (100%)
The above setup works and exposes our Kubernetes service through CloudFront on top of the ALB, but we want to manage this configuration through the ingress.yaml file, because the manually added rules get removed whenever the configuration changes (a sketch of one way to express this is included after the manifest below).
cloudfront configuration
AWS ALB configuration
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:ap-southeast-1:xxxx:certificate/daxxx-xxx-43c8-ada-cb5c97a1366b
alb.ingress.kubernetes.io/group.name: domain-web
alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS": 443}]'
alb.ingress.kubernetes.io/scheme: internet-facing
alb.ingress.kubernetes.io/ssl-redirect: "443"
alb.ingress.kubernetes.io/target-type: ip
finalizers:
- group.ingress.k8s.aws/domain-web
labels:
app.kubernetes.io/instance: domain-web
app.kubernetes.io/name: domain-web
name: domain-web
spec:
ingressClassName: alb
rules:
- host: market.domain.com
http:
paths:
- backend:
service:
name: marketing
port:
number: 3000
pathType: ImplementationSpecific
tls:
- hosts:
- market.domain.com
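For reference, the AWS Load Balancer Controller can attach such header rules directly from the Ingress through its conditions annotations (the same mechanism used in the Argo CD example further down). A minimal sketch, assuming the annotation name must match the backend service name (marketing here) and that the header should gate the existing host rule:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: domain-web
  annotations:
    alb.ingress.kubernetes.io/group.name: domain-web
    # Extra condition applied to every rule whose backend is the "marketing" service:
    # only forward requests carrying the CloudFront custom header.
    alb.ingress.kubernetes.io/conditions.marketing: >
      [{"field":"http-header","httpHeaderConfig":{"httpHeaderName":"X-Custom-Header","values":["cloudfront-header"]}}]
spec:
  ingressClassName: alb
  rules:
    - host: market.domain.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: marketing
                port:
                  number: 3000
With this, the controller itself creates the listener rule (host header plus X-Custom-Header), so it is not lost when the Ingress is reconciled.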
I have an EKS Kubernetes cluster with the AWS Load Balancer Controller and Argo CD installed. I'm creating an Application Load Balancer based on the Argo CD documentation here.
Basically, I'm creating a NodePort service that receives traffic from the load balancer, and an ingress that will create the load balancer (using AWS Load Balancer Controller).
The ingress code is this one:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
alb.ingress.kubernetes.io/backend-protocol: HTTPS
alb.ingress.kubernetes.io/backend-protocol-version: HTTP2
# Use this annotation (which must match a service name) to route traffic to HTTP2 backends.
alb.ingress.kubernetes.io/conditions.argogrpc: |
[{"field":"http-header","httpHeaderConfig":{"httpHeaderName": "Content-Type", "values":["application/grpc"]}}]
alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS":443}]'
# ALB annotations
kubernetes.io/ingress.class: 'alb'
alb.ingress.kubernetes.io/scheme: 'internet-facing'
alb.ingress.kubernetes.io/target-type: 'instance'
alb.ingress.kubernetes.io/load-balancer-name: 'test-argocd'
alb.ingress.kubernetes.io/certificate-arn: 'arn:aws:acm:us-east-1:1234567:certificate/longcertcode'
alb.ingress.kubernetes.io/load-balancer-attributes: routing.http2.enabled=true
# Health Check
alb.ingress.kubernetes.io/healthcheck-protocol: HTTPS
alb.ingress.kubernetes.io/healthcheck-port: traffic-port
name: argocd
namespace: argocd
spec:
rules:
- host: argocd.argoproj.io
http:
paths:
- path: /
backend:
service:
name: argogrpc
port:
number: 443
pathType: ImplementationSpecific
tls:
- hosts:
- argocd.argoproj.io
defaultBackend:
service:
name: argogrpc
port:
number: 443
And that creates a Load Balancer as expected.
I'm creating the service with this:
apiVersion: v1
kind: Service
metadata:
annotations:
alb.ingress.kubernetes.io/backend-protocol-version: HTTP2
labels:
app: argogrpc
name: argogrpc
namespace: argocd
spec:
ports:
- name: "443"
port: 443
protocol: TCP
targetPort: 8080
selector:
app.kubernetes.io/name: argocd-server
sessionAffinity: None
type: NodePort
The issue here is that the health check is failing on the Target Group:
If I change the backend protocol version to GRPC:
apiVersion: v1
kind: Service
metadata:
annotations:
alb.ingress.kubernetes.io/backend-protocol-version: GRPC
labels:
app: argogrpc
name: argogrpc
namespace: argocd
spec:
ports:
- name: "443"
port: 443
protocol: TCP
targetPort: 8080
selector:
app.kubernetes.io/name: argocd-server
sessionAffinity: None
type: NodePort
Then the health check passes, but I get a 464 error in Chrome:
This is what the AWS documentation says about this error, but it doesn't clarify why I'm getting it:
So the question is: how do I create an Application Load Balancer for my Argo CD using the AWS Load Balancer Controller that actually works? According to the documentation, it should work in both cases.
I have an AWS K3S Kubernetes cluster
I have an AWS Load Balancer
I have a registered domain
I have a registered AWS Certificate
I created a CNAME record for my domain pointing to the AWS Load Balancer DNS name
I installed the Traefik Ingress Controller on the AWS K3S Kubernetes cluster
I deployed the "usermgmt" and "whoami" services to the AWS K3S Kubernetes cluster
I created a Traefik Ingress with paths to "usermgmt" and "whoami"
The question is:
How do I connect my AWS Load Balancer, which is hosted on my domain, to my services on K3S, using the Traefik Ingress Controller?
Or in other words:
How do I adapt the "traefik service" or "traefik deployment", described below, to use an AWS certificate resolver for my registered domain?
Or, any example of how to use
an AWS Load Balancer, AWS Target Group, and AWS Security Group, created with Terraform files,
in combination with the Traefik Ingress Controller and Traefik Ingress Routes, deployed to a K3S Kubernetes cluster and resolved with an AWS Certificate.
I currently can't connect to my services through AWS Load Balancer.
The following errors are returned:
404 Page Not Found
502 Bad Gateway
Here are examples of the URLs I try:
https://keycloak.skycomposer.net/usermgmt
https://keycloak.skycomposer.net/whoami
I set up corresponding Ingress Routes for the "usermgmt" and "whoami" Kubernetes services.
Here is some more information:
I created a K3S Kubernetes cluster in AWS with a Load Balancer
These are my terraform files:
https://github.com/skyglass/user-management/tree/master/terraform
The K3S cluster is deployed to an EC2 instance (see the "userdata.tpl" script)
I disabled the default Traefik Ingress Controller deployment, so I could deploy it myself later.
I found an example of how to install Traefik on a K3S Kubernetes cluster here:
https://github.com/sleighzy/k3s-traefik-v2-kubernetes-crd
Unfortunately, this example uses the "godaddy" certificate resolver, but my domain is registered with AWS Route 53 and I use AWS Certificate Manager.
Here are the files for the "traefik service" and "traefik deployment", which I am trying to adapt:
traefik-service:
---
apiVersion: v1
kind: Service
metadata:
name: traefik
namespace: kube-system
spec:
# The targetPort entries are required as the Traefik container is listening on ports > 1024
# so that the container can be run as a non-root user and they can bind to these ports.
# Traefik is still accessed over 80 and 443 on the host, but the service routes the traffic
# to ports 8080 and 8443 on the container.
ports:
- protocol: TCP
name: web
port: 80
targetPort: 8080
- protocol: TCP
name: websecure
port: 443
targetPort: 8443
- protocol: TCP
name: admin
port: 8080
targetPort: 9080
selector:
app: traefik
# Set externalTrafficPolicy to Local so that all external traffic intended for
# the Traefik pod goes directly to that local node. If the default of Cluster is
# used instead then the client source IP address is lost, and may hop between nodes.
externalTrafficPolicy: Local
type: LoadBalancer
traefik-deployment:
---
apiVersion: v1
kind: ServiceAccount
metadata:
namespace: kube-system
name: traefik-ingress-controller
---
kind: Deployment
apiVersion: apps/v1
metadata:
namespace: kube-system
name: traefik
labels:
app: traefik
spec:
replicas: 1
selector:
matchLabels:
app: traefik
template:
metadata:
labels:
app: traefik
spec:
serviceAccountName: traefik-ingress-controller
containers:
- name: traefik
image: traefik:v2.4
args:
- --api.dashboard=true
- --ping=true
- --accesslog
- --entrypoints.traefik.address=:9080
- --entrypoints.web.address=:8080
- --entrypoints.websecure.address=:8443
# Uncomment the below lines to redirect http requests to https.
# This specifies the port :443 and not the https entrypoint name for the
# redirect as the service is listening on port 443 and directing traffic
# to the 8443 target port. If the entrypoint name "websecure" was used,
# instead of "to=:443", then the browser would be redirected to port 8443.
- --entrypoints.web.http.redirections.entrypoint.to=:443
- --entrypoints.web.http.redirections.entrypoint.scheme=https
- --providers.kubernetescrd
- --providers.kubernetesingress
- --certificatesresolvers.myresolver.acme.tlschallenge=true
- --certificatesresolvers.myresolver.acme.email=postmaster#example.com
- --certificatesresolvers.myresolver.acme.storage=/etc/traefik/certs/acme.json
# Please note that this is the staging Let's Encrypt server.
# Once you get things working, you should remove that whole line altogether.
# - --certificatesresolvers.godaddy.acme.caserver=https://acme-staging-v02.api.letsencrypt.org/directory
- --log
- --log.level=INFO
livenessProbe:
failureThreshold: 3
httpGet:
path: /ping
port: 9080
scheme: HTTP
initialDelaySeconds: 10
periodSeconds: 10
timeoutSeconds: 3
resources:
limits:
memory: '100Mi'
cpu: '1000m'
ports:
# The Traefik container is listening on ports > 1024 so the container
# can be run as a non-root user and they can bind to these ports.
- name: web
containerPort: 8080
- name: websecure
containerPort: 8443
- name: admin
containerPort: 9080
volumeMounts:
- name: certificates
mountPath: /etc/traefik/certs
# volumes:
# - name: certificates
# persistentVolumeClaim:
# claimName: traefik-certs-pvc
volumes:
- name: certificates
hostPath:
path: "/Users/dddd/git/aws/letsencrypt:/etc/traefik/certs"
See other files here: https://github.com/sleighzy/k3s-traefik-v2-kubernetes-crd
Ideally, there should be a solution like this:
apiVersion: v1
kind: Service
metadata:
name: traefik-proxy
annotations:
service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:REGION:ACCOUNTID:certificate/CERT-ID"
service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http"
spec:
type: LoadBalancer
selector:
app: traefik-proxy
tier: proxy
ports:
- port: 443
targetPort: 80
In this solution, I would just provide my AWS Certificate ARN, and the Traefik ingress controller would do everything else.
A similar solution is described in this article:
https://www.ronaldjamesgroup.com/blog/getting-started-with-traefik
But, unfortunately, this solution doesn't work for me either; I tried it without any success.
The following errors are returned:
404 Page Not Found
502 Bad Gateway
when I try Ingress Route Paths for my domain:
https://keycloak.skycomposer.net/usermgmt
https://keycloak.skycomposer.net/whoami
After trying several options, I finally found the solution:
https://github.com/skyglass-examples/aws-k3s-traefik
I created the AWS Load Balancer and K3S cluster with Terraform
I created the Traefik Ingress Controller Kubernetes manifest files
I created Kubernetes manifest files for 2 services
I registered the AWS Load Balancer DNS name for my domain
I created an AWS Certificate for my domain
I used the AWS Certificate ARN for the Traefik Ingress Controller and the AWS HTTPS Load Balancer
Here are my Traefik Ingress Controller manifest files:
traefik-deployment.yaml:
apiVersion: v1
kind: ServiceAccount
metadata:
name: traefik-ingress-controller
namespace: kube-system
---
kind: Deployment
apiVersion: apps/v1
metadata:
name: traefik-proxy
namespace: kube-system
labels:
app: traefik-proxy
tier: proxy
spec:
replicas: 1
selector:
matchLabels:
app: traefik-proxy
tier: proxy
template:
metadata:
labels:
app: traefik-proxy
tier: proxy
spec:
serviceAccountName: traefik-ingress-controller
terminationGracePeriodSeconds: 60
containers:
- image: traefik:v1.2.0-rc1-alpine
name: traefik-proxy
ports:
- containerPort: 80
hostPort: 80
name: traefik-proxy
- containerPort: 8080
name: traefik-ui
args:
- --web
- --kubernetes
traefik-service.yaml:
apiVersion: v1
kind: Service
metadata:
name: traefik-proxy
namespace: kube-system
annotations:
service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:us-west-1:dddddddddd"
service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http"
service.beta.kubernetes.io/aws-load-balancer-internal: "0.0.0.0/0"
service.beta.kubernetes.io/aws-load-balancer-type: "alb"
spec:
type: LoadBalancer
externalTrafficPolicy: Local
selector:
app: traefik-proxy
tier: proxy
ports:
- port: 443
targetPort: 80
---
apiVersion: v1
kind: Service
metadata:
name: traefik-web-ui
namespace: kube-system
spec:
selector:
app: traefik-proxy
tier: proxy
ports:
- port: 80
targetPort: 8080
traefik-ingress.yaml:
apiVersion: networking.k8s.io/v1beta1
kind: IngressClass
metadata:
name: traefik-lb
spec:
controller: traefik.io/ingress-controller
---
apiVersion: "networking.k8s.io/v1beta1"
kind: "Ingress"
metadata:
name: "traefik-usermgmt-ingress"
spec:
ingressClassName: "traefik-lb"
rules:
- host: "keycloak.skycomposer.net"
http:
paths:
- path: "/usermgmt"
backend:
serviceName: "usermgmt"
servicePort: 80
---
apiVersion: "networking.k8s.io/v1beta1"
kind: "Ingress"
metadata:
name: "traefik-whoami-ingress"
spec:
ingressClassName: "traefik-lb"
rules:
- host: "keycloak.skycomposer.net"
http:
paths:
- path: "/whoami"
backend:
serviceName: "whoami"
servicePort: 80
See the full code here:
https://github.com/skyglass-examples/aws-k3s-traefik
The code includes:
terraform files for AWS Load Balancer and K3S Kubernetes Cluster
source code for one of the docker containers, which I deployed to K3S
Kubernetes manifest files for the Traefik Ingress Controller, 2 Kubernetes Services, and the Traefik Ingress, which exposes these services over a secured HTTPS connection on the registered domain.
Replace the AWS Certificate ARN with the corresponding ARN of your certificate
Replace "skycomposer.net" with your domain name (see more details in the Readme file: https://github.com/skyglass-examples/aws-k3s-traefik)
I have multiple services that need to be exposed to the internet, but I'd like to use a single ALB for them.
I am using the latest AWS Load Balancer Controller, and I've been reading the documentation here (https://kubernetes-sigs.github.io/aws-load-balancer-controller/guide/ingress/annotations/#traffic-routing), but I haven't found a clear explanation on how to achieve this.
Here's the setup:
I have service-a.example.com and service-b.example.com. They each have their own certificates within AWS Certificate Manager.
Within Kubernetes, each has its own service object defined as follows (each unique):
apiVersion: v1
kind: Service
metadata:
name: svc-a-service
annotations:
alb.ingress.kubernetes.io/healthcheck-protocol: HTTP
alb.ingress.kubernetes.io/healthcheck-port: traffic-port
alb.ingress.kubernetes.io/healthy-threshold-count: '5'
alb.ingress.kubernetes.io/unhealthy-threshold-count: '2'
alb.ingress.kubernetes.io/healthcheck-path: /index.html
alb.ingress.kubernetes.io/healthcheck-interval-seconds: '30'
alb.ingress.kubernetes.io/healthcheck-timeout-seconds: '5'
alb.ingress.kubernetes.io/success-codes: '200'
alb.ingress.kubernetes.io/tags: Environment=Test,App=ServiceA
spec:
selector:
app: service-a
ports:
- port: 80
targetPort: 80
type: NodePort
And each service has its own Ingress object defined as follows (again, unique to each and with the correct certificates specified for each service):
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: svc-a-ingress
annotations:
kubernetes.io/ingress.class: alb
alb.ingress.kubernetes.io/group.name: services
alb.ingress.kubernetes.io/scheme: internet-facing
alb.ingress.kubernetes.io/security-groups: sg-01234567898765432
alb.ingress.kubernetes.io/ip-address-type: ipv4
alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS": 443}]'
alb.ingress.kubernetes.io/actions.ssl-redirect: '{"Type": "redirect", "RedirectConfig": { "Protocol": "HTTPS", "Port": "443", "StatusCode": "HTTP_301"}}'
alb.ingress.kubernetes.io/actions.response-503: >
{"type":"fixed-response","fixedResponseConfig":{"contentType":"text/plain","statusCode":"503","messageBody":"Unknown Host"}}
alb.ingress.kubernetes.io/target-type: instance
alb.ingress.kubernetes.io/load-balancer-attributes: routing.http2.enabled=true,idle_timeout.timeout_seconds=600
alb.ingress.kubernetes.io/tags: Environment=Test
alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:us-east-2:555555555555:certificate/33333333-2222-4444-AAAA-EEEEEEEEEEEE
alb.ingress.kubernetes.io/ssl-policy: ELBSecurityPolicy-2016-08
spec:
rules:
- http:
paths:
- path: /*
backend:
serviceName: ssl-redirect
servicePort: use-annotation
- path: /*
backend:
serviceName: svc-a-service
servicePort: 80
- path: /*
backend:
serviceName: response-503
servicePort: use-annotation
The HTTP to HTTPS redirection works as expected.
However, there is no differentiation between my two apps, so the load balancer has no way of knowing that traffic destined for service-a.example.com and service-b.example.com should be routed to two different target groups.
In the HTTPS:443 listener rules in the console, it shows:
IF Path is /* THEN Forward to ServiceATargetGroup
IF Path is /* THEN Return fixed 503
IF Path is /* THEN Forward to ServiceBTargetGroup
IF Path is /* THEN Return fixed 503
IF Request otherwise not routed THEN Return fixed 404
So the important question here is:
How should the ingress be defined to force traffic destined for service-a.example.com to ServiceATargetGroup - and traffic destined for service-b.example.com to ServiceBTargetGroup?
And secondly, I need the "otherwise not routed" case to return a 503 instead of a 404. I was expecting this to appear only once in the rules (to be merged), yet it is created for each ingress. How should my YAML be structured to achieve this?
I eventually figured this out, so for anyone else stumbling onto this post, here's how I resolved it:
The trick was not relying on merging between the Ingress objects. Yes, the controller can handle a certain degree of merging, but there isn't really a one-to-one relationship between Services (as target groups) and Ingresses (as ALBs), so you have to be very cautious and aware of what's in each Ingress object.
Once I combined all of my ingress into a single object definition, I was able to get it working exactly as I wanted with the following YAML:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: svc-ingress
annotations:
kubernetes.io/ingress.class: alb
alb.ingress.kubernetes.io/group.name: services
alb.ingress.kubernetes.io/scheme: internet-facing
alb.ingress.kubernetes.io/security-groups: sg-01234567898765432
alb.ingress.kubernetes.io/ip-address-type: ipv4
alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS": 443}]'
alb.ingress.kubernetes.io/actions.ssl-redirect: '{"Type": "redirect", "RedirectConfig": { "Protocol": "HTTPS", "Port": "443", "StatusCode": "HTTP_301"}}'
alb.ingress.kubernetes.io/actions.response-503: >
{"type":"fixed-response","fixedResponseConfig":{"contentType":"text/plain","statusCode":"503","messageBody":"Unknown Host"}}
alb.ingress.kubernetes.io/actions.svc-a-host: >
{"type":"forward","forwardConfig":{"targetGroups":[{"serviceName":"svc-a-service","servicePort":80,"weight":100}]}}
alb.ingress.kubernetes.io/conditions.svc-a-host: >
[{"field":"host-header","hostHeaderConfig":{"values":["svc-a.example.com"]}}]
alb.ingress.kubernetes.io/actions.svc-b-host: >
{"type":"forward","forwardConfig":{"targetGroups":[{"serviceName":"svc-b-service","servicePort":80,"weight":100}]}}
alb.ingress.kubernetes.io/conditions.svc-b-host: >
[{"field":"host-header","hostHeaderConfig":{"values":["svc-b.example.com"]}}]
alb.ingress.kubernetes.io/target-type: instance
alb.ingress.kubernetes.io/load-balancer-attributes: routing.http2.enabled=true,idle_timeout.timeout_seconds=600
alb.ingress.kubernetes.io/tags: Environment=Test
alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:us-east-2:555555555555:certificate/33333333-2222-4444-AAAA-EEEEEEEEEEEE,arn:aws:acm:us-east-2:555555555555:certificate/44444444-3333-5555-BBBB-FFFFFFFFFFFF
alb.ingress.kubernetes.io/ssl-policy: ELBSecurityPolicy-2016-08
spec:
backend:
serviceName: response-503
servicePort: use-annotation
rules:
- http:
paths:
- backend:
serviceName: ssl-redirect
servicePort: use-annotation
- backend:
serviceName: svc-a-host
servicePort: use-annotation
- backend:
serviceName: svc-b-host
servicePort: use-annotation
Default Action:
Set by specifying the serviceName and servicePort directly under spec:
spec:
backend:
serviceName: response-503
servicePort: use-annotation
Routing:
Because I'm using subdomains and paths won't work for me, I simply omitted the path and instead relied on hostname as a condition.
metadata:
alb.ingress.kubernetes.io/actions.svc-a-host: >
{"type":"forward","forwardConfig":{"targetGroups":[{"serviceName":"svc-a-service","servicePort":80,"weight":100}]}}
alb.ingress.kubernetes.io/conditions.svc-a-host: >
[{"field":"host-header","hostHeaderConfig":{"values":["svc-a.example.com"]}}]
End Result:
The ALB rules were configured precisely how I wanted them:
default action is a 503 fixed response
all http traffic is redirected to https
traffic is directed to TargetGroups based on the host header
AWS EKS (via the AWS Load Balancer Controller) now has a notion of IngressGroups, so multiple Ingresses can share one ALB. See Application load balancing on Amazon EKS.
To share an application load balancer across multiple ingress resources using IngressGroups
To join an Ingress to an Ingress group, add the following annotation to a Kubernetes Ingress resource specification.
alb.ingress.kubernetes.io/group.name: <my-group>
The group name must be:
63 characters or less in length.
Consist of lower case alphanumeric characters, -, and ., and must start and end with an alphanumeric character.
The controller will automatically merge ingress rules for all Ingresses in the same Ingress group and support them with a single ALB. Most annotations defined on an Ingress only apply to the paths defined by that Ingress. By default, Ingress resources don't belong to any Ingress group.
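As a minimal sketch (assuming the v2 controller and the networking.k8s.io/v1 Ingress API, and reusing the service names from the answer above), two Ingresses that join the same group and therefore share a single ALB, with routing split by host, could look like this:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: svc-a-ingress
  annotations:
    alb.ingress.kubernetes.io/group.name: services
    alb.ingress.kubernetes.io/scheme: internet-facing
spec:
  ingressClassName: alb
  rules:
    - host: svc-a.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: svc-a-service
                port:
                  number: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: svc-b-ingress
  annotations:
    alb.ingress.kubernetes.io/group.name: services
    alb.ingress.kubernetes.io/scheme: internet-facing
spec:
  ingressClassName: alb
  rules:
    - host: svc-b.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: svc-b-service
                port:
                  number: 80
The controller merges the rules of both Ingresses into one ALB, keyed by the shared group.name.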
I am very new to Kubernetes and I am trying to figure out how to set up an HTTP -> HTTPS redirect for my Kubernetes cluster. I have searched and tried many different annotations, and I am not sure whether I am applying them correctly. I have pasted my files below and would be happy to share more if necessary.
I have tried adding these lines to the annotations section:
nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
nginx.ingress.kubernetes.io/ssl-redirect: "true"
I have also tried to implement this workaround, but have not had success.
Redirect workaround
I appreciate the help!
service.yaml
kind: Service
apiVersion: v1
metadata:
name: loadbalancer-ingress
annotations:
{{- if .Values.loadbalancer.cert }}
service.beta.kubernetes.io/aws-load-balancer-ssl-cert: {{ .Values.loadbalancer.cert | quote }}
service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "{{- range .Values.loadbalancer.ports -}}{{- if .ssl -}}{{ .name }},{{- end -}}{{- end -}}"
{{- end }}
service.beta.kubernetes.io/aws-load-balancer-backend-protocol: {{ .Values.loadbalancer.backend_protocol | quote }}
service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "60"
spec:
type: LoadBalancer
selector:
pod: {{ .Chart.Name }}
ports:
{{- range .Values.loadbalancer.ports }}
- name: {{ .name }}
port: {{ .port }}
targetPort: {{ .targetPort }}
{{- end }}
configmap.yaml
---
kind: ConfigMap
apiVersion: v1
metadata:
name: {{ .Chart.Name }}-nginx-configuration
data:
use-proxy-protocol: "false"
use-forwarded-headers: "true"
server-tokens: "false"
---
kind: ConfigMap
apiVersion: v1
metadata:
name: {{ .Chart.Name }}-tcp-services
---
kind: ConfigMap
apiVersion: v1
metadata:
name: {{ .Chart.Name }}-udp-services
values.yaml
loadbalancer:
backend_protocol: http
cert: MY_AWS_CERT
ports:
- name: http
port: 80
targetPort: 80
ssl: false
- name: https
port: 443
targetPort: 80
ssl: true
You need to have pods and a ClusterIP service in front of those pods, and then in the Ingress resource you can refer to that service. An ingress controller such as nginx will then receive the traffic from clients outside the Kubernetes cluster and forward it to the pods behind the service. The ingress controller itself needs to be exposed outside the cluster via a LoadBalancer-type service.
Referring to the docs here, an Ingress resource will look like this:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: test-ingress
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /
nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
nginx.ingress.kubernetes.io/ssl-redirect: "true"
spec:
rules:
- http:
paths:
- path: /testpath
backend:
serviceName: test
servicePort: 80
HTTPS for backend
If you want HTTPS as the backend traffic, replace http with https in values.yaml (see the sketch at the end of this section).
Explanation:
service.yaml has:
service.beta.kubernetes.io/aws-load-balancer-backend-protocol: {{ .Values.loadbalancer.backend_protocol | quote }}
It means that aws-load-balancer-backend-protocol is declared in values.yaml
loadbalancer:
backend_protocol: http
According to the AWS section of the Kubernetes documentation:
service.beta.kubernetes.io/aws-load-balancer-backend-protocol: Used on the service to specify the protocol spoken by the backend (pod) behind a listener. If http (default) or https, an HTTPS listener that terminates the connection and parses headers is created. If set to ssl or tcp, a “raw” SSL listener is used. If set to http and aws-load-balancer-ssl-cert is not used then a HTTP listener is used.
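Concretely, the change described above amounts to a one-line edit in values.yaml (a sketch; everything else stays as posted):
loadbalancer:
  backend_protocol: https   # was http; with aws-load-balancer-ssl-cert set, the ELB still terminates TLS and now speaks HTTPS to the backend pods
  cert: MY_AWS_CERT
  # ports: unchanged from the values.yaml above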
HTTPS for frontend
If your question is about HTTPS for the frontend, then set up an ingress controller.
As per the Kubernetes Ingress documentation:
You must have an ingress controller to satisfy an Ingress. Only creating an Ingress resource has no effect.
You may need to deploy an Ingress controller such as ingress-nginx.
The AWS section of the NGINX ingress controller installation guide is here:
In AWS we use a Network load balancer (NLB) to expose the NGINX Ingress controller behind a Service of Type=LoadBalancer.
Network Load Balancer (NLB)
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-0.32.0/deploy/static/provider/aws/deploy.yaml
TLS termination in AWS Load Balancer (ELB): in some scenarios it is required to terminate TLS in the Load Balancer and not in the ingress controller.
For this purpose we provide a template: deploy-tls-termination.yaml
wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-0.32.0/deploy/static/provider/aws/deploy-tls-termination.yaml
Edit the file and change the following (a sketch of the relevant excerpts follows this list):
VPC CIDR in use for the Kubernetes cluster: proxy-real-ip-cidr: XXX.XXX.XXX/XX
AWS Certificate Manager (ACM) ID
arn:aws:acm:us-west-2:XXXXXXXX:certificate/XXXXXX-XXXXXXX-XXXXXXX-XXXXXXXX
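As a rough, hedged sketch of where those two values live in deploy-tls-termination.yaml (resource names may differ between controller versions, so check the downloaded file), the relevant excerpts look roughly like this:
# ConfigMap excerpt: tell nginx which source CIDR carries the real client IPs
kind: ConfigMap
data:
  use-forwarded-headers: "true"
  proxy-real-ip-cidr: "XXX.XXX.XXX/XX"   # replace with your cluster's VPC CIDR
---
# Service excerpt: point the ELB listener at your ACM certificate
kind: Service
metadata:
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http"
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:us-west-2:XXXXXXXX:certificate/XXXXXX-XXXXXXX-XXXXXXX-XXXXXXXX"   # replace with your ACM ARN
spec:
  type: LoadBalancer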
Deploy the manifest:
kubectl apply -f deploy-tls-termination.yaml
Adding the following annotations to the Ingress will do the job.
This will redirect the following:
HTTP => HTTPS
non-www => WWW
annotations:
nginx.ingress.kubernetes.io/backend-protocol: HTTPS
nginx.ingress.kubernetes.io/configuration-snippet: |-
proxy_ssl_server_name on;
proxy_ssl_name $host;
nginx.ingress.kubernetes.io/from-to-www-redirect: 'true'