Multiple Istio Request Authentication Policies

According to the Istio security doc: "Request authentication policies can specify more than one JWT if each uses a unique location. When more than one policy matches a workload, Istio combines all rules as if they were specified as a single policy. This behavior is useful to program workloads to accept JWT from different providers. However, requests with more than one valid JWT are not supported because the output principal of such requests is undefined."
Does this mean I can have multiple unique jwtRules (issuer, jwksUri) in different policy YAMLs, and the receiving workload will accept these different JWTs, but each request must contain only one particular JWT?
Thanks!

Here is our approach for this scenario, allowing more than one issuer per workload.
Example with two types of JWT (a SiteMinder-based issuer and a gateway issuer), referenced via:
$.Values.jwt.siteminder.issuer
$.Values.jwt.gateway.issuer
Then the definition in the RequestAuthentication:
apiVersion: security.istio.io/v1beta1
kind: RequestAuthentication
metadata:
  name: {{ $.Values.appDomain }}-app-bff
  labels:
    app.kubernetes.io/part-of: {{ $.Values.appDomain }}
spec:
  selector:
    matchLabels:
      app: {{ $.Values.appDomain }}-app-bff-exp
  jwtRules:
    - forwardOriginalToken: true
      fromHeaders:
        - name: Authorization
          prefix: 'Bearer '
      issuer: {{ $.Values.jwt.siteminder.issuer }}
      jwksUri: {{ $.Values.jwt.siteminder.jwksUri }}
    - forwardOriginalToken: true
      fromHeaders:
        - name: Authorization
          prefix: 'Bearer '
      issuer: {{ $.Values.jwt.gateway.issuer }}
      jwksUri: {{ $.Values.jwt.gateway.jwksUri }}
---
# Similar in the Authorization app BFF
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: {{ $.Values.appDomain }}-app-bff
  labels:
    app.kubernetes.io/part-of: {{ $.Values.appDomain }}
spec:
  selector:
    matchLabels:
      app: {{ $.Values.appDomain }}-app-bff-exp
  action: ALLOW
  rules:
    - when:
        - key: request.auth.claims[iss]
          values:
            - {{ $.Values.jwt.siteminder.issuer }}
            - {{ $.Values.jwt.gateway.issuer }}
    - when:
        - key: request.auth.claims[scope]
          values:
            - "life-object:write"
            - "life-object:read"
Hope this helps anyone trying to apply multi-issuer validation in a RequestAuthentication or multiple rules in an AuthorizationPolicy.
Regards
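A note on the AuthorizationPolicy semantics above, since this trips people up: entries under rules are ORed, while the conditions inside a single rule are ANDed (and the values listed under one key are ORed). As written, a request is allowed if it matches either the issuer condition or the scope condition. If the intent is to require both a known issuer and one of the scopes, a single rule combining the two when conditions is the closer fit — a minimal sketch, reusing the same values as above:
rules:
  - when:
      # issuer must be one of the two known issuers
      - key: request.auth.claims[iss]
        values:
          - {{ $.Values.jwt.siteminder.issuer }}
          - {{ $.Values.jwt.gateway.issuer }}
      # AND the scope claim must carry one of the allowed scopes
      - key: request.auth.claims[scope]
        values:
          - "life-object:write"
          - "life-object:read"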

Related

Envoy based header to metadata filtering regex not working

My use case is to remove query parameters from the path so the Envoy/Istio filter can rate limit on the basis of just the API path.
I am using the configuration below; it does filter the route, but the path still contains the query parameters instead of being truncated.
The ratelimiter service, for its part, does not detect any specific configuration for the descriptor ("PATH", "/foo?param=value") and therefore uses the default for key "PATH".
Any idea why the truncating regex is not working? Thanks
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: {{ template "name" . }}-httpfilter
  namespace: istio-system
spec:
  workloadSelector:
    labels:
      istio: ingressgateway
  configPatches:
    - applyTo: HTTP_FILTER
      match:
        context: GATEWAY
        listener:
          filterChain:
            filter:
              name: "envoy.filters.network.http_connection_manager"
              subFilter:
                name: "envoy.filters.http.router"
      patch:
        operation: INSERT_BEFORE
        value:
          name: envoy.filters.http.header_to_metadata
          typed_config:
            "@type": type.googleapis.com/envoy.extensions.filters.http.header_to_metadata.v3.Config
            request_rules:
              - header: ':path'
                on_header_present:
                  # use an arbitrary name for the namespace;
                  # it is used later to extract the descriptor value
                  metadata_namespace: qry-filter
                  # use an arbitrary key for the metadata;
                  # it is used later to extract the descriptor value
                  key: uri
                  regex_value_rewrite:
                    pattern:
                      # regex matcher that truncates parameters from the path
                      regex: '^(\/[\/\d\w-]+)\??.*$'
                    substitution: '\\1'
    - applyTo: HTTP_FILTER
      match:
        context: GATEWAY
        listener:
          filterChain:
            filter:
              name: 'envoy.filters.network.http_connection_manager'
              subFilter:
                name: 'envoy.filters.http.router'
      patch:
        operation: INSERT_BEFORE
        value:
          name: envoy.filters.http.ratelimit
          typed_config:
            '@type': type.googleapis.com/envoy.extensions.filters.http.ratelimit.v3.RateLimit
            # ensure the domain matches the domain used in the ratelimit service config
            domain: {{ template "fullname" . }}-ratelimit
            failure_mode_deny: true
            rate_limit_service:
              grpc_service:
                envoy_grpc:
                  # must match load_assignment.cluster_name in the CLUSTER patch
                  cluster_name: rate_limit_cluster
                timeout: 10s
              transport_api_version: V3
    - applyTo: CLUSTER
      match:
        cluster:
          # kubernetes dns of your ratelimit service
          service: ratelimit.{{ .Values.openapi.destinationSuffix }}
      patch:
        operation: ADD
        value:
          name: rate_limit_cluster
          type: STRICT_DNS
          connect_timeout: 10s
          lb_policy: ROUND_ROBIN
          http2_protocol_options: {}
          load_assignment:
            # arbitrary name
            cluster_name: rate_limit_cluster
            endpoints:
              - lb_endpoints:
                  - endpoint:
                      address:
                        socket_address:
                          # kubernetes dns of your ratelimit service
                          address: ratelimit.{{ .Values.openapi.destinationSuffix }}
                          port_value: 8081
---
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: {{ template "name" . }}-virtualhost
  namespace: istio-system
spec:
  workloadSelector:
    labels:
      istio: ingressgateway
  configPatches:
    - applyTo: VIRTUAL_HOST
      match:
        context: GATEWAY
        routeConfiguration:
          vhost:
            name: ""
            route:
              action: ANY
      patch:
        operation: MERGE
        value:
          rate_limits:
            - actions: # any actions in here
                - dynamic_metadata:
                    descriptor_key: PATH
                    metadata_key:
                      key: qry-filter
                      path:
                        - key: uri
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: ratelimit-config
data:
  config.yaml: |
    domain: {{ template "fullname" . }}-ratelimit
    descriptors:
      - key: PATH
        rate_limit:
          unit: minute
          requests_per_unit: 10
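As a side note on the ratelimit service config itself: once the :path value really is truncated, per-path limits can be expressed in config.yaml by giving the PATH descriptor explicit values — a hedged sketch in the same layout as above (the /foo path and the limits are illustrative, not taken from the original setup):
domain: {{ template "fullname" . }}-ratelimit
descriptors:
  # tighter limit for one known API path (illustrative value)
  - key: PATH
    value: /foo
    rate_limit:
      unit: minute
      requests_per_unit: 5
  # fallback for any other PATH descriptor value
  - key: PATH
    rate_limit:
      unit: minute
      requests_per_unit: 10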

argocd login redirect loop for SSO

I'm using Argo CD with JumpCloud for SSO. For some reason, it periodically gets into a weird state where clicking the login button redirects back to the login page, returns a not-found error, or enters an infinite redirect loop. It's intermittent and can sometimes (but not always) be fixed by deleting cookies, clearing the cache, and/or restarting or deleting the argocd-server pods.
charts/argocd/values.yaml
environment: ""
argo-cd:
  redis-ha:
    enabled: true
  controller:
    replicas: 1
    metrics:
      enabled: true
      serviceMonitor:
        enabled: true
        namespace: kube-system
        additionalLabels:
          release: kube-prometheus-stack
    nodeSelector:
      eks.amazonaws.com/capacityType: ON_DEMAND
    args:
      appResyncPeriod: 20
    resources:
      requests:
        cpu: 1000m
        memory: 768Mi
  server:
    metrics:
      enabled: true
      serviceMonitor:
        enabled: true
        namespace: kube-system
        additionalLabels:
          release: kube-prometheus-stack
    nodeSelector:
      eks.amazonaws.com/capacityType: ON_DEMAND
    resources:
      requests:
        cpu: 50m
        memory: 128Mi
    rbacConfigCreate: false
    config:
      dex.config: |
        connectors:
          - type: ldap
            name: jumpcloud.com
            id: ad
            config:
              # Ldap server address
              host: ldap.jumpcloud.com:636
              insecureNoSSL: false
              insecureSkipVerify: true
              # Variable name stores ldap bindDN in argocd-secret
              bindDN: "uid=dexservice,ou=Users,o=myorganizationid,dc=jumpcloud,dc=com"
              # Variable name stores ldap bind password in argocd-secret
              bindPW: "mysecurepassword"
              usernamePrompt: Jumpcloud Username
              userSearch:
                baseDN: ou=Users,o=myorganizationid,dc=jumpcloud,dc=com
                username: uid
                idAttr: entryDN
                emailAttr: mail
                nameAttr: cn
              groupSearch:
                baseDN: ou=Users,o=myorganizationid,dc=jumpcloud,dc=com
                filter: "(|(cn=K8S-Admin)(cn=K8S-ReadOnly))"
                userMatchers:
                  - userAttr: entryDN
                    groupAttr: member
                nameAttr: cn
      repositories: |
        - url: git@github.com:myorgname/myreponame
          sshPrivateKeySecret:
            name: ssh-private-key-secret
            key: id_ed25519
    service:
      type: NodePort
  dex:
    nodeSelector:
      eks.amazonaws.com/capacityType: ON_DEMAND
  redis:
    nodeSelector:
      eks.amazonaws.com/capacityType: ON_DEMAND
  repoServer:
    replicas: 2
    nodeSelector:
      eks.amazonaws.com/capacityType: ON_DEMAND
  applicationSet:
    replicaCount: 2
  configs:
    params:
      server.insecure: true
charts/argocd/templates/argocd-rbac-cm.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-rbac-cm
  labels:
    helm.sh/chart: {{ (index .Chart.Dependencies 0).Version }}
    app.kubernetes.io/component: server
    app.kubernetes.io/instance: argocd
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: argocd-rbac-cm
    app.kubernetes.io/part-of: argocd
    argocd.argoproj.io/instance: argocd
data:
  policy.default: role:none
  scopes: '[groups, email]'
  policy.csv: |
    p, role:none, *, *, */*, deny
    g, K8S-Admin, role:admin
    g, K8S-ReadOnly, "role:admin"
charts/argocd/templates/ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: {{ .Chart.Name }}
  annotations:
    "kubernetes.io/ingress.class": "alb"
    alb.ingress.kubernetes.io/group.name: {{ .Values.environment }}
    "alb.ingress.kubernetes.io/scheme": "internal"
    "alb.ingress.kubernetes.io/target-type": "instance"
    "alb.ingress.kubernetes.io/listen-ports": '[{"HTTP":80},{"HTTPS":443}]'
    "alb.ingress.kubernetes.io/backend-protocol": HTTP
    "alb.ingress.kubernetes.io/healthcheck-path": "/"
    "alb.ingress.kubernetes.io/actions.ssl-redirect": '{"Type": "redirect", "RedirectConfig": { "Protocol": "HTTPS", "Port": "443", "StatusCode": "HTTP_301"}}'
    "external-dns.alpha.kubernetes.io/hostname": argocd.{{ .Values.hostName }}
spec:
  rules:
    - host: argocd.{{ .Values.hostName }}
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: ssl-redirect
                port:
                  name: use-annotation
          - path: /
            pathType: Prefix
            backend:
              service:
                name: argocd-server
                port:
                  number: 80
We're currently experiencing similar (weird) issues with our Argo CD stack, which uses Dex as an LDAP connector. The Dex logs show successful authentication, yet when returning to Argo CD it intermittently seems to switch back to an OIDC-based login method and denies access with varying error messages completely unrelated to LDAP.
Recent communication on this topic in Argo's Slack channel showed that this is not a known issue, but it might have to do with running Argo CD in HA mode, i.e. using two or more argocd-server instances, which was exactly our setup.
I've scaled argocd-server down to one instance for the time being. So far, the issue hasn't resurfaced.
This might not be an exact answer, but it will hopefully provide a helpful hint as to where to look for a possible solution.
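If the chart values above are the source of truth for the deployment, the scale-down can be captured there instead of patching the Deployment by hand — a minimal sketch, assuming the wrapped argo-cd dependency chart exposes server.replicas (the community chart does; adjust to your chart version):
argo-cd:
  server:
    # run a single argocd-server instance while debugging the SSO redirect loop
    replicas: 1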

GCP GKE Ingress Health Checks

I have a deployment and service running in GKE using Deployment Manager. Everything about my service works correctly except that the ingress I am creating reports the service in a perpetually unhealthy state.
To be clear, everything about the deployment works except the healthcheck (and as a consequence, the ingress). This was working previously (circa late 2019), and apparently about a year ago GKE added some additional requirements for healthchecks on ingress target services and I have been unable to make sense of them.
I have put an explicit health check on the service, and it reports healthy, but the ingress does not recognize it. The service is using a NodePort but also has containerPort 80 open on the deployment, and it does respond with HTTP 200 to requests on :80 locally, but clearly that is not helping in the deployed service.
The cluster itself is a nearly identical copy of the Deployment Manager example.
Here is the deployment:
- name: {{ DEPLOYMENT }}
  type: {{ CLUSTER_TYPE }}:{{ DEPLOYMENT_COLLECTION }}
  metadata:
    dependsOn:
      - {{ properties['clusterType'] }}
  properties:
    apiVersion: apps/v1
    kind: Deployment
    namespace: {{ properties['namespace'] | default('default') }}
    metadata:
      name: {{ DEPLOYMENT }}
      labels:
        app: {{ APP }}
        tier: resters
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: {{ APP }}
          tier: resters
      template:
        metadata:
          labels:
            app: {{ APP }}
            tier: resters
        spec:
          containers:
            - name: rester
              image: {{ IMAGE }}
              resources:
                requests:
                  cpu: 100m
                  memory: 250Mi
              ports:
                - containerPort: 80
              env:
                - name: GCP_PROJECT
                  value: {{ PROJECT }}
                - name: SERVICE_NAME
                  value: {{ APP }}
                - name: MODE
                  value: rest
                - name: REDIS_ADDR
                  value: {{ properties['memorystoreAddr'] }}
... the service:
- name: {{ SERVICE }}
  type: {{ CLUSTER_TYPE }}:{{ SERVICE_COLLECTION }}
  metadata:
    dependsOn:
      - {{ properties['clusterType'] }}
      - {{ APP }}-cluster-nodeport-firewall-rule
      - {{ DEPLOYMENT }}
  properties:
    apiVersion: v1
    kind: Service
    namespace: {{ properties['namespace'] | default('default') }}
    metadata:
      name: {{ SERVICE }}
      labels:
        app: {{ APP }}
        tier: resters
    spec:
      type: NodePort
      ports:
        - nodePort: {{ NODE_PORT }}
          port: {{ CONTAINER_PORT }}
          targetPort: {{ CONTAINER_PORT }}
          protocol: TCP
      selector:
        app: {{ APP }}
        tier: resters
... the explicit healthcheck:
- name: {{ SERVICE }}-healthcheck
  type: compute.v1.healthCheck
  metadata:
    dependsOn:
      - {{ SERVICE }}
  properties:
    name: {{ SERVICE }}-healthcheck
    type: HTTP
    httpHealthCheck:
      port: {{ NODE_PORT }}
      requestPath: /healthz
      proxyHeader: NONE
    checkIntervalSec: 10
    healthyThreshold: 2
    unhealthyThreshold: 3
    timeoutSec: 5
... the firewall rules:
- name: {{ CLUSTER_NAME }}-nodeport-firewall-rule
  type: compute.v1.firewall
  properties:
    name: {{ CLUSTER_NAME }}-nodeport-firewall-rule
    network: projects/{{ PROJECT }}/global/networks/default
    sourceRanges:
      - 130.211.0.0/22
      - 35.191.0.0/16
    targetTags:
      - {{ CLUSTER_NAME }}-node
    allowed:
      - IPProtocol: TCP
        ports:
          - 30000-32767
          - 80
You could try defining a readinessProbe on the container in your Deployment.
This is also what the ingress uses to create its health checks (note that these health check probes come from outside of GKE).
In my experience, these readiness probes work pretty well to get the ingress health checks passing.
To do this, you create something like the following. This is a TCP probe; I have seen better performance with TCP probes.
readinessProbe:
  tcpSocket:
    port: 80
  initialDelaySeconds: 10
  periodSeconds: 10
This probe will check port 80, which is the one I see used by the pod in this service, and it will also help configure the ingress health check for a better result.
Here is some helpful documentation on how to create the TCP readiness probes which the ingress health check can be based on.
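Since the explicit health check above already uses requestPath: /healthz, an HTTP readiness probe against that same path is another option the ingress can pick up — a minimal sketch, assuming the container actually serves /healthz on port 80:
readinessProbe:
  httpGet:
    # assumes the app exposes a health endpoint at /healthz on port 80
    path: /healthz
    port: 80
  initialDelaySeconds: 10
  periodSeconds: 10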

Django/Google Kubernetes Intermittent 111: Connection refused to upstream services

I've done quite a bit of searching and cannot seem to find anyone that shows a resolution to this problem.
I'm getting intermittent 111 Connection refused errors on my kubernetes clusters. It seems that about 90% of my requests succeed and the other 10% fail. If you "refresh" the page, a previously failed request will then succeed. I have 2 different Kubernetes clusters with the same exact setup both showing the errors.
This looks to be very close to what I am experiencing. I did install my setup onto a new cluster, but the same problem persisted:
Kubernetes ClusterIP intermittent 502 connection refused
Setup
Kubernetes Cluster Version: 1.18.12-gke.1206
Django Version: 3.1.4
Helm to manage kubernetes charts
Cluster Setup
Kubernetes nginx ingress controller that serves web traffic into the cluster:
https://kubernetes.github.io/ingress-nginx/deploy/#gce-gke
From there I have 2 Ingresses defined that route traffic based on the referrer url.
Stage Ingress
Prod Ingress
Ingress
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: potr-tms-ingress-{{ .Values.environment }}
  namespace: {{ .Values.environment }}
  labels:
    app: potr-tms-{{ .Values.environment }}
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/from-to-www-redirect: "true"
    # this line below doesn't seem to have an effect
    # nginx.ingress.kubernetes.io/service-upstream: "true"
    nginx.ingress.kubernetes.io/proxy-body-size: "100M"
    cert-manager.io/cluster-issuer: "letsencrypt-{{ .Values.environment }}"
spec:
  rules:
    - host: {{ .Values.ingress_host }}
      http:
        paths:
          - path: /
            backend:
              serviceName: potr-tms-service-{{ .Values.environment }}
              servicePort: 8000
  tls:
    - hosts:
        - {{ .Values.ingress_host }}
        - www.{{ .Values.ingress_host }}
      secretName: potr-tms-{{ .Values.environment }}-tls
These ingresses route to 2 services that I have defined for prod and stage:
Service
apiVersion: v1
kind: Service
metadata:
  name: potr-tms-service-{{ .Values.environment }}
  namespace: {{ .Values.environment }}
  labels:
    app: potr-tms-{{ .Values.environment }}
spec:
  type: ClusterIP
  ports:
    - name: potr-tms-service-{{ .Values.environment }}
      port: 8000
      protocol: TCP
      targetPort: 8000
  selector:
    app: potr-tms-{{ .Values.environment }}
These 2 services route to deployments that I have for both prod and stage:
Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: potr-tms-deployment-{{ .Values.environment }}
  namespace: {{ .Values.environment }}
  labels:
    app: potr-tms-{{ .Values.environment }}
spec:
  replicas: {{ .Values.deployment_replicas }}
  selector:
    matchLabels:
      app: potr-tms-{{ .Values.environment }}
  strategy:
    type: RollingUpdate
  template:
    metadata:
      annotations:
        rollme: {{ randAlphaNum 5 | quote }}
      labels:
        app: potr-tms-{{ .Values.environment }}
    spec:
      containers:
        - command: ["gunicorn", "--bind", ":8000", "config.wsgi"]
          # - command: ["python", "manage.py", "runserver", "0.0.0.0:8000"]
          envFrom:
            - secretRef:
                name: potr-tms-secrets-{{ .Values.environment }}
          image: gcr.io/potrtms/potr-tms-{{ .Values.environment }}:latest
          name: potr-tms-{{ .Values.environment }}
          ports:
            - containerPort: 8000
          resources:
            requests:
              cpu: 200m
              memory: 512Mi
      restartPolicy: Always
      serviceAccountName: "potr-tms-service-account-{{ .Values.environment }}"
status: {}
Error
This is the error that I'm seeing inside of my ingress controller logs:
This seems pretty clear: if my deployment pods were failing or showing errors they would be "unavailable" and the service would not be able to route requests to them. To try to debug this I increased my deployment resources and replica counts. The amount of web traffic to this app is pretty low, though, ~10 users.
What I've Tried
I tried using a completely different ingress controller, https://github.com/kubernetes/ingress-nginx
Increasing deployment resources / replica counts (seems to have no effect)
Installing my whole setup on a brand new cluster (same results)
Restarting the ingress controller / deleting and reinstalling it
Potentially this could be a Gunicorn problem. To test, I tried starting my pods with python manage.py runserver; the problem remained.
Update
Raising the pod counts seems to have helped a little bit.
deployment replicas: 15
cpu request: 200m
memory request: 512Mi
Some requests still fail, though.
Did you find a solution to this? I am seeing something very similar on a minikube setup.
In my case, I believe I also see the nginx controller restarting after the 502. The 502 is intermittent; frequently the first access fails, then a reload works.
The best idea I've found so far is to increase the Nginx timeout parameter, but I have not tried that yet. Still searching through all the options.
I was not able to figure out why these connection errors happen, but I did find a workaround that seems to solve the problem for our users.
Inside your ingress config, add the annotation
nginx.ingress.kubernetes.io/proxy-next-upstream-tries: "10"
I set it to 10 just to make sure it retried as I was fairly confident our services were working. You could probably get away with 2 or 3.
Here's my full ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: potr-tms-ingress-{{ .Values.environment }}
  namespace: {{ .Values.environment }}
  labels:
    app: potr-tms-{{ .Values.environment }}
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/from-to-www-redirect: "true"
    # nginx.ingress.kubernetes.io/service-upstream: "true"
    nginx.ingress.kubernetes.io/proxy-body-size: "100M"
    nginx.ingress.kubernetes.io/client-body-buffer-size: "100m"
    nginx.ingress.kubernetes.io/proxy-max-temp-file-size: "1024m"
    nginx.ingress.kubernetes.io/proxy-next-upstream-tries: "10"
    cert-manager.io/cluster-issuer: "letsencrypt-{{ .Values.environment }}"
spec:
  rules:
    - host: {{ .Values.ingress_host }}
      http:
        paths:
          - path: /
            backend:
              serviceName: potr-tms-service-{{ .Values.environment }}
              servicePort: 8000
  tls:
    - hosts:
        - {{ .Values.ingress_host }}
        - www.{{ .Values.ingress_host }}
      secretName: potr-tms-{{ .Values.environment }}-tls
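One related knob worth knowing about, in case the retries alone still let some failures through: ingress-nginx's proxy-next-upstream annotation controls which upstream results count as retryable. A hedged sketch of how it could sit next to the annotation above (the value shown is illustrative; check the annotation reference for your controller version):
annotations:
  # retry on connection errors, timeouts and 502 responses from the upstream pod
  nginx.ingress.kubernetes.io/proxy-next-upstream: "error timeout http_502"
  nginx.ingress.kubernetes.io/proxy-next-upstream-tries: "10"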

Kubernetes SSL Redirect Ingress

I am very new to Kubernetes and I am trying to figure out how to set up an HTTP -> HTTPS redirect for my Kubernetes cluster. I have searched and tried many different annotations, and I am not sure whether I am applying them correctly. I have pasted my files below and would be happy to share more if necessary.
I have tried adding these lines to the annotations section:
nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
nginx.ingress.kubernetes.io/ssl-redirect: "true"
I have also tried to implement this workaround, but have not had success.
Redirect workaround
I appreciate the help!
service.yaml
kind: Service
apiVersion: v1
metadata:
  name: loadbalancer-ingress
  annotations:
    {{- if .Values.loadbalancer.cert }}
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: {{ .Values.loadbalancer.cert | quote }}
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "{{- range .Values.loadbalancer.ports -}}{{- if .ssl -}}{{ .name }},{{- end -}}{{- end -}}"
    {{- end }}
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: {{ .Values.loadbalancer.backend_protocol | quote }}
    service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "60"
spec:
  type: LoadBalancer
  selector:
    pod: {{ .Chart.Name }}
  ports:
    {{- range .Values.loadbalancer.ports }}
    - name: {{ .name }}
      port: {{ .port }}
      targetPort: {{ .targetPort }}
    {{- end }}
configmap.yaml
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: {{ .Chart.Name }}-nginx-configuration
data:
  use-proxy-protocol: "false"
  use-forwarded-headers: "true"
  server-tokens: "false"
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: {{ .Chart.Name }}-tcp-services
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: {{ .Chart.Name }}-udp-services
values.yaml
loadbalancer:
  backend_protocol: http
  cert: MY_AWS_CERT
  ports:
    - name: http
      port: 80
      targetPort: 80
      ssl: false
    - name: https
      port: 443
      targetPort: 80
      ssl: true
You need pods and a ClusterIP service in front of those pods; the Ingress resource then refers to that service. An ingress controller such as nginx receives the traffic from the client outside the Kubernetes cluster and forwards it to the pods behind the service. The ingress controller itself needs to be exposed outside the cluster via a LoadBalancer-type service.
Referring to the docs here, an Ingress resource will look like:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: test-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
spec:
  rules:
    - http:
        paths:
          - path: /testpath
            backend:
              serviceName: test
              servicePort: 80
HTTPS for backend
If you want HTTPS for the backend traffic, replace http with https in values.yaml.
Explanation:
service.yaml has:
service.beta.kubernetes.io/aws-load-balancer-backend-protocol: {{ .Values.loadbalancer.backend_protocol | quote }}
which means that aws-load-balancer-backend-protocol is driven by values.yaml:
loadbalancer:
  backend_protocol: http
According to the AWS section of the Kubernetes documentation:
service.beta.kubernetes.io/aws-load-balancer-backend-protocol: Used on the service to specify the protocol spoken by the backend (pod) behind a listener. If http (default) or https, an HTTPS listener that terminates the connection and parses headers is created. If set to ssl or tcp, a “raw” SSL listener is used. If set to http and aws-load-balancer-ssl-cert is not used then a HTTP listener is used.
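For example, under the values layout from the question, the HTTPS-backend variant would look like this (a sketch only; the remaining keys stay as in the original values.yaml):
loadbalancer:
  # switch the backend listener protocol from http to https
  backend_protocol: https
  cert: MY_AWS_CERT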
HTTPS for frontend
If your question is about HTTPS for the frontend, then set up an ingress controller.
As per the Kubernetes Ingress documentation:
You must have an ingress controller to satisfy an Ingress. Only creating an Ingress resource has no effect.
You may need to deploy an Ingress controller such as ingress-nginx.
The AWS part of the NGINX ingress controller installation guide is here:
In AWS we use a Network Load Balancer (NLB) to expose the NGINX Ingress controller behind a Service of Type=LoadBalancer.
Network Load Balancer (NLB)
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-0.32.0/deploy/static/provider/aws/deploy.yaml
TLS termination in AWS Load Balancer (ELB): In some scenarios it is required to terminate TLS in the Load Balancer and not in the ingress controller.
For this purpose we provide a template: deploy-tls-termination.yaml
wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-0.32.0/deploy/static/provider/aws/deploy-tls-termination.yaml
Edit the file and change:
VPC CIDR in use for the Kubernetes cluster: proxy-real-ip-cidr: XXX.XXX.XXX/XX
AWS Certificate Manager (ACM) ID
arn:aws:acm:us-west-2:XXXXXXXX:certificate/XXXXXX-XXXXXXX-XXXXXXX-XXXXXXXX
Deploy the manifest:
kubectl apply -f deploy-tls-termination.yaml
Adding the following annotations to the Ingress will do the job.
This will redirect the following:
HTTP => HTTPS
non-www => www
annotations:
  nginx.ingress.kubernetes.io/backend-protocol: HTTPS
  nginx.ingress.kubernetes.io/configuration-snippet: |-
    proxy_ssl_server_name on;
    proxy_ssl_name $host;
  nginx.ingress.kubernetes.io/from-to-www-redirect: 'true'
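Pulling these pieces together, here is a minimal sketch of an Ingress carrying the redirect annotations from this thread alongside the ssl-redirect ones mentioned earlier; test-ingress, example.com, the test service, and port 80 are placeholder names borrowed from the docs example above, not part of the original setup:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: test-ingress
  annotations:
    # force HTTP -> HTTPS at the ingress controller
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
    # redirect non-www -> www (requires a host in the rule)
    nginx.ingress.kubernetes.io/from-to-www-redirect: "true"
spec:
  rules:
    - host: example.com
      http:
        paths:
          - path: /
            backend:
              serviceName: test
              servicePort: 80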