How to configure NGINX in AWS

I have set up a Kubernetes cluster on AWS EC2 instances using kops, with one master and two worker nodes.
I deployed two services and they are reachable in the browser via the LoadBalancer service type.
Now I have installed NGINX, but I cannot reach my service through the load balancer IP; it returns a 504 GATEWAY_TIME_OUT error. I googled it with no success. Where am I going wrong? Here is my sample code... [AWS free account]
deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: $APP_NAME
name: $APP_NAME
namespace: $NAMESPACE
spec:
replicas: 1
selector:
matchLabels:
app: $APP_NAME
template:
metadata:
labels:
app: $APP_NAME
spec:
imagePullSecrets:
- name: $IMG_PULL_SECRET
containers:
- image: $IMAGE_REG/$APP_NAME:$IMAGE_TAG
name: $APP_NAME
imagePullPolicy: Always
ports:
- containerPort: ${CONTAINER_PORT}
protocol: TCP
env:
- name: spring.cloud.config.uri
value: 'http://config-server-service'
service.yml
apiVersion: v1
kind: Service
metadata:
labels:
app: $APP_NAME
name: $APP_NAME
namespace: $NAMESPACE
spec:
type: $SERVICE_TYPE
#type: $SERVICE_TYPE
ports:
- port: 80
targetPort: ${CONTAINER_PORT}
protocol: TCP
selector:
app: $APP_NAME
ingress.yml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: ${APP_NAME}
namespace: $NAMESPACE
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /
nginx.ingress.kubernetes.io/affinity: cookie
nginx.ingress.kubernetes.io/session-cookie-name: JSESSIONID
nginx.ingress.kubernetes.io/ssl-passthrough: "false"
nginx.ingress.kubernetes.io/ssl-redirect: "false"
kubernetes.io/ingress.class: "nginx"
kubernetes.io/ingress.allow-http: "true"
# kubernetes.io/ingress.global-static-ip-name: "my-gateway"
spec:
rules:
- http:
paths:
- path: /${APP_NAME}
backend:
serviceName: ${APP_NAME}
servicePort: 80
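A 504 from the NGINX ingress usually means the controller cannot reach any pod behind the Service. A minimal set of checks, assuming kubectl is pointed at the cluster and the $APP_NAME / $NAMESPACE placeholders are substituted as in the manifests above:

# Does the Ingress have an address and the expected backend?
kubectl get ingress $APP_NAME -n $NAMESPACE -o wide
# An empty ENDPOINTS column means the Service selector or targetPort does not match the pods.
kubectl get endpoints $APP_NAME -n $NAMESPACE
# Are the pods running and listening on $CONTAINER_PORT?
kubectl get pods -n $NAMESPACE -l app=$APP_NAME -o wide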

Related

ALB Ingress returns fixed 503 response and does not point to my services

Hello, I'm new to Kubernetes. I just want to ask why my ALB Ingress is not pointing to my services when accessed through the URL; it just returns a fixed 503 response.
This is my ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: "shortenurl-ingress"
namespace: "shortenurl-ingress"
labels:
app: shortenurl-ingress
annotations:
alb.ingress.kubernetes.io/backend-protocol: HTTP
alb.ingress.kubernetes.io/scheme: internet-facing
alb.ingress.kubernetes.io/load-balancer-name: shortenurl-ingress
alb.ingress.kubernetes.io/security-groups: eks-cluster-sg-shortener-eks-895573458
alb.ingress.kubernetes.io/target-type: instance
alb.ingress.kubernetes.io/inbound-cidrs: 0.0.0.0/0
alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS":443}]'
external-dns.alpha.kubernetes.io/hostname: fuadariqoh.com
spec:
ingressClassName: alb
rules:
- host: fuadariqoh.com
http:
paths:
- path: /api
pathType: Prefix
backend:
service:
name: shortenerapp
port:
number: 31674
This is my service.yaml
apiVersion: v1
kind: Service
metadata:
name: shortenerapp
labels:
app: shortenerapp
spec:
type: NodePort
selector:
app: shortenurl-app
ports:
- port: 31674
targetPort: 5001
protocol: TCP
This is my deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: shortenurl-app
labels:
app: shortenurl-app
spec:
replicas: 3
selector:
matchLabels:
app: shortenurl-app
template:
metadata:
labels:
app: shortenurl-app
spec:
containers:
- name: shortenurl-app
image: imageUri
ports:
- containerPort: 5001
This is my load balancer on AWS (screenshot).
I can't figure out what I am doing wrong; any help is appreciated. Thank you.
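A fixed 503 from an ALB generally means its target group has no healthy (or no registered) targets. Two hedged checks, assuming the AWS CLI and kubectl are configured for this cluster (the target group ARN below is a placeholder):

# Does the Service select any pods? Note the Ingress sits in the shortenurl-ingress
# namespace while the Service manifest does not set a namespace.
kubectl get endpoints shortenerapp -o wide
# Inspect target registration and health for the ALB's target group.
aws elbv2 describe-target-health --target-group-arn <target-group-arn>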

Ingress Nginx giving 503 on DigitalOcean

I have a simple configuration with a deployment, a service, and an ingress.
app-depl.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: app-depl
spec:
replicas: 1
selector:
matchLabels:
app: app
template:
metadata:
labels:
app: app
spec:
containers:
- name: app
image: keysoutsourcedocker/app
---
apiVersion: v1
kind: Service
metadata:
name: app-srv
spec:
selector:
app: app
ports:
- name: app
protocol: TCP
port: 3000
targetPort: 3000
ingress-srv.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: ingress-service
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/use-regex: 'true'
spec:
rules:
- host: www.keyssoft.xyz
http:
paths:
- path: /?(.*)
pathType: Prefix
backend:
service:
name: app-srv
port:
number: 3000
And my DNS config goes like this (screenshot).
But when I try to access www.keyssoft.xyz I get a 503 error (screenshot).
I've been on this for the past 14 hours. Any help would be appreciated.
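One way to separate a DNS problem from an ingress or backend problem (a sketch; the load balancer IP is a placeholder, and the controller deployment name assumes a standard ingress-nginx install):

# Bypass DNS and talk to the load balancer directly with the expected Host header.
curl -v -H "Host: www.keyssoft.xyz" http://<load-balancer-ip>/
# A 503 from ingress-nginx usually means the Service has no endpoints.
kubectl get endpoints app-srv
kubectl logs -n ingress-nginx deploy/ingress-nginx-controller --tail=50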

Zonal Network Endpoint Group unhealthy after provisioning Ingress with managed cert via GKE

I am seeing "Zonal network endpoint group unhealthy" after configuring an Ingress with a managed certificate in GCP via:
# kubernetes/backstage.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: backstage
namespace: backstage
spec:
replicas: 1
selector:
matchLabels:
app: backstage
template:
metadata:
labels:
app: backstage
spec:
containers:
- name: backstage
image: australia-southeast1-docker.pkg.dev/acme-dev-tooling/acme-docker/backstage:prd-v.0.35
imagePullPolicy: IfNotPresent
ports:
- name: http
containerPort: 7007
envFrom:
- secretRef:
name: postgres-secrets
- secretRef:
name: backstage-secrets
---
apiVersion: v1
kind: Service
metadata:
name: backstage
namespace: backstage
annotations:
cloud.google.com/backend-config: '{"default": "backstage-ingress-backendconfig"}'
spec:
selector:
app: backstage
ports:
- name: http
protocol: TCP
port: 80
type: NodePort
---
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
name: backstage-ingress-backendconfig
spec:
healthCheck:
checkIntervalSec: 15
type: HTTP
requestPath: /
---
apiVersion: networking.gke.io/v1
kind: ManagedCertificate
metadata:
name: tools-managed-cert-backstage
namespace: backstage
spec:
domains:
- tools.backstage.acme-uat.com
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: backstage-ingress
namespace: backstage
annotations:
kubernetes.io/ingress.global-static-ip-name: "tools-backstage-external-ip"
networking.gke.io/managed-certificates: tools-managed-cert-backstage
kubernetes.io/ingress.class: "gce"
spec:
defaultBackend:
service:
name: backstage
port:
number: 80
---
apiVersion: v1
kind: Namespace
metadata:
name: backstage
GCP provisions an L7 HTTPS load balancer that cannot reach the GKE cluster because the zonal network endpoint group is reported unhealthy.
The ingress reads:
All backends are in an UNHEALTHY state.
Is there something I am missing? Does the GKE ingress configure the firewall? I've looked at the rules; there are rules for 130.211.0.0/22 and 35.191.0.0/16, which are the health-check source ranges.
logs/compute.googleapis.com%2Fhealthchecks yields no probe results, despite logging being enabled.
Any help would be much appreciated.
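To double-check that a firewall rule really admits those health-check source ranges, something like this can help (a sketch; the rule name is cluster-specific and is a placeholder below):

# List firewall rules and look for one created by GKE for health checks.
gcloud compute firewall-rules list --filter="name~gke"
# Inspect the matching rule and confirm 130.211.0.0/22 and 35.191.0.0/16 appear in sourceRanges.
gcloud compute firewall-rules describe <gke-health-check-rule-name>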
UPDATE: the above was fixed per the comments; the following still isn't working.
apiVersion: v1
kind: Service
metadata:
name: argocd-server
namespace: argocd
labels:
app.kubernetes.io/component: server
app.kubernetes.io/instance: argocd
app.kubernetes.io/managed-by: pulumi
app.kubernetes.io/name: argocd-server
app.kubernetes.io/part-of: argocd
argocd.argoproj.io/instance: argocd
helm.sh/chart: argo-cd-3.29.5
annotations:
cloud.google.com/backend-config: '{"default": "argocd-ingress-backendconfig"}'
cloud.google.com/neg: '{"ingress": true}'
cloud.google.com/neg-status: >-
{"network_endpoint_groups":{"80":"k8s1-20a3d3ad-argocd-argocd-server-80-c2ec22fa"},"zones":["australia-southeast1-a"]}
kubectl.kubernetes.io/last-applied-configuration: >
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/backend-config":"{\"default\":
\"argocd-ingress-backendconfig\"}","cloud.google.com/neg":"{\"ingress\":
true}"},"labels":{"app.kubernetes.io/component":"server","app.kubernetes.io/instance":"argocd","app.kubernetes.io/managed-by":"pulumi","app.kubernetes.io/name":"argocd-server","app.kubernetes.io/part-of":"argocd","argocd.argoproj.io/instance":"argocd","helm.sh/chart":"argo-cd-3.29.5"},"name":"argocd-server","namespace":"argocd"},"spec":{"ports":[{"name":"http","port":80,"protocol":"TCP","targetPort":8080},{"name":"https","port":443,"protocol":"TCP","targetPort":"http"}],"selector":{"app.kubernetes.io/instance":"argocd","app.kubernetes.io/name":"argocd-server"},"type":"ClusterIP"}}
status:
loadBalancer: {}
spec:
ports:
- name: http
protocol: TCP
port: 80
targetPort: 8080
- name: https
protocol: TCP
port: 443
targetPort: http
selector:
app.kubernetes.io/instance: argocd
app.kubernetes.io/name: argocd-server
clusterIP: 10.184.10.20
clusterIPs:
- 10.184.10.20
type: ClusterIP
sessionAffinity: None
ipFamilies:
- IPv4
ipFamilyPolicy: SingleStack
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: argocd-server
namespace: argocd
uid: fee5f91c-b431-4b8c-ab10-64daa02ec729
resourceVersion: '108355'
generation: 3
creationTimestamp: '2022-01-20T00:06:05Z'
labels:
app.kubernetes.io/component: server
app.kubernetes.io/instance: argocd
app.kubernetes.io/managed-by: pulumi
app.kubernetes.io/name: argocd-server
app.kubernetes.io/part-of: argocd
app.kubernetes.io/version: v2.2.2
helm.sh/chart: argo-cd-3.30.1
annotations:
deployment.kubernetes.io/revision: '3'
kubectl.kubernetes.io/last-applied-configuration: >
{"apiVersion":"apps/v1","kind":"Deployment","metadata":{"labels":{"app.kubernetes.io/component":"server","app.kubernetes.io/instance":"argocd","app.kubernetes.io/managed-by":"pulumi","app.kubernetes.io/name":"argocd-server","app.kubernetes.io/part-of":"argocd","app.kubernetes.io/version":"v2.2.2","helm.sh/chart":"argo-cd-3.30.1"},"name":"argocd-server","namespace":"argocd"},"spec":{"replicas":1,"revisionHistoryLimit":5,"selector":{"matchLabels":{"app.kubernetes.io/instance":"argocd","app.kubernetes.io/name":"argocd-server"}},"template":{"metadata":{"labels":{"app.kubernetes.io/component":"server","app.kubernetes.io/instance":"argocd","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/name":"argocd-server","app.kubernetes.io/part-of":"argocd","app.kubernetes.io/version":"v2.2.2","helm.sh/chart":"argo-cd-3.30.1"}},"spec":{"containers":[{"command":["argocd-server","--staticassets","/shared/app","--repo-server","argocd-repo-server:8081","--dex-server","http://argocd-dex-server:5556","--logformat","text","--loglevel","info","--redis","argocd-redis:6379"],"image":"quay.io/argoproj/argocd:v2.2.2","imagePullPolicy":"IfNotPresent","livenessProbe":{"failureThreshold":3,"httpGet":{"path":"/healthz","port":8080},"initialDelaySeconds":10,"periodSeconds":10,"successThreshold":1,"timeoutSeconds":1},"name":"server","ports":[{"containerPort":8080,"name":"server","protocol":"TCP"}],"readinessProbe":{"failureThreshold":3,"httpGet":{"path":"/healthz","port":8080},"initialDelaySeconds":10,"periodSeconds":10,"successThreshold":1,"timeoutSeconds":1},"resources":{},"volumeMounts":[{"mountPath":"/app/config/ssh","name":"ssh-known-hosts"},{"mountPath":"/app/config/server/tls","name":"argocd-repo-server-tls"},{"mountPath":"/home/argocd","name":"plugins-home"},{"mountPath":"/tmp","name":"tmp-dir"}]}],"serviceAccountName":"argocd-server","volumes":[{"emptyDir":{},"name":"static-files"},{"emptyDir":{},"name":"tmp-dir"},{"configMap":{"name":"argocd-ssh-known-hosts-cm"},"name":"ssh-known-hosts"},{"name":"argocd-repo-server-tls","secret":{"items":[{"key":"tls.crt","path":"tls.crt"},{"key":"tls.key","path":"tls.key"},{"key":"ca.crt","path":"ca.crt"}],"optional":true,"secretName":"argocd-repo-server-tls"}},{"emptyDir":{},"name":"plugins-home"}]}}}}
spec:
replicas: 1
selector:
matchLabels:
app.kubernetes.io/instance: argocd
app.kubernetes.io/name: argocd-server
template:
metadata:
creationTimestamp: null
labels:
app.kubernetes.io/component: server
app.kubernetes.io/instance: argocd
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: argocd-server
app.kubernetes.io/part-of: argocd
app.kubernetes.io/version: v2.2.2
helm.sh/chart: argo-cd-3.30.1
annotations:
kubectl.kubernetes.io/restartedAt: '2022-01-20T15:44:27+11:00'
spec:
volumes:
- name: static-files
emptyDir: {}
- name: tmp-dir
emptyDir: {}
- name: ssh-known-hosts
configMap:
name: argocd-ssh-known-hosts-cm
defaultMode: 420
- name: argocd-repo-server-tls
secret:
secretName: argocd-repo-server-tls
items:
- key: tls.crt
path: tls.crt
- key: tls.key
path: tls.key
- key: ca.crt
path: ca.crt
defaultMode: 420
optional: true
- name: plugins-home
emptyDir: {}
containers:
- name: server
image: quay.io/argoproj/argocd:v2.2.2
command:
- argocd-server
- '--staticassets'
- /shared/app
- '--repo-server'
- argocd-repo-server:8081
- '--dex-server'
- http://argocd-dex-server:5556
- '--logformat'
- text
- '--loglevel'
- info
- '--redis'
- argocd-redis:6379
ports:
- name: server
containerPort: 8080
protocol: TCP
resources: {}
volumeMounts:
- name: ssh-known-hosts
mountPath: /app/config/ssh
- name: argocd-repo-server-tls
mountPath: /app/config/server/tls
- name: plugins-home
mountPath: /home/argocd
- name: tmp-dir
mountPath: /tmp
livenessProbe:
httpGet:
path: /
port: 8080
scheme: HTTP
initialDelaySeconds: 10
timeoutSeconds: 1
periodSeconds: 10
successThreshold: 1
failureThreshold: 3
readinessProbe:
httpGet:
path: /
port: 8080
scheme: HTTP
initialDelaySeconds: 10
timeoutSeconds: 1
periodSeconds: 10
successThreshold: 1
failureThreshold: 3
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
imagePullPolicy: IfNotPresent
restartPolicy: Always
terminationGracePeriodSeconds: 30
dnsPolicy: ClusterFirst
serviceAccountName: argocd-server
serviceAccount: argocd-server
securityContext: {}
schedulerName: default-scheduler
strategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 25%
maxSurge: 25%
revisionHistoryLimit: 5
progressDeadlineSeconds: 600
Cheers
# Here is a workaround for Google Cloud with ArgoCD v2.5.2
# cloudflare-key.yaml
---
apiVersion: v1
kind: Secret
metadata:
name: cloudflare-key
namespace: cert-manager
type: Opaque
stringData:
key: xxxxxxxxxxxxxxxx
---
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
name: letsencrypt-staging
spec:
acme:
email: zia@mydomain.com
server: https://acme-staging-v02.api.letsencrypt.org/directory
privateKeySecretRef:
name: letsencrypt-staging
solvers:
- selector: {}
dns01:
cloudflare:
email: zia@mydomain.com
apiKeySecretRef:
name: cloudflare-key
key: key
---
apiVersion: v1
kind: Service
metadata:
labels:
app.kubernetes.io/name: argocd-server
app.kubernetes.io/part-of: argocd
app.kubernetes.io/component: server
annotations:
cloud.google.com/neg: '{"ingress": true, "exposed_ports": {"8080":{}}}'
beta.cloud.google.com/backend-config: '{"default": "argocd-backend-config"}'
name: argocd-server
spec:
ports:
- name: http8080
protocol: TCP
port: 8080
targetPort: 8080
- name: http
protocol: TCP
port: 80
targetPort: 8080
- name: https
protocol: TCP
port: 443
targetPort: 8080
selector:
app.kubernetes.io/name: argocd-server
---
# backendconfig.yaml
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
name: argocd-backend-config
namespace: argocd
spec:
healthCheck:
checkIntervalSec: 30
timeoutSec: 10
healthyThreshold: 1
unhealthyThreshold: 5
type: HTTP
requestPath: /healthz
port: 8080
---
# FrontendConfig.yaml
apiVersion: networking.gke.io/v1beta1
kind: FrontendConfig
metadata:
name: argocd-frontend-config
namespace: argocd
spec:
redirectToHttps:
enabled: true
---
# ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: argocd-server-ingress
namespace: argocd
annotations:
kubernetes.io/ingress.class: gce
cert-manager.io/cluster-issuer: letsencrypt-staging
kubernetes.io/tls-acme: "true"
kubernetes.io/ingress.global-static-ip-name: "argocd-dev"
networking.gke.io/v1beta1.FrontendConfig: argocd-frontend-config
spec:
rules:
- host: argocd-dev.mydomain.com
http:
paths:
- backend:
service:
name: argocd-server
port:
name: http
path: "/"
pathType: Prefix
tls:
- hosts:
- argocd-dev.mydomain.com
secretName: argocd-secret #don't change, this is provided by ArgoCD
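To confirm the BackendConfig health check above is actually being used, the health of the generated backend can be inspected from the GCP side (a sketch; the backend service name is generated by GKE and is a placeholder here):

# Find the backend service the GKE ingress created for argocd-server.
gcloud compute backend-services list
# Ask the load balancer for the health of its network endpoint group backends.
gcloud compute backend-services get-health <generated-backend-service-name> --global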

AWS NLB sticky sessions on EKS

I'm trying to apply NLB sticky sessions in an EKS environment.
There are 2 worker nodes (EC2) attached to the NLB target group, and each node runs 2 nginx pods.
I want to connect to the same pod from my local system for testing.
But it looks like I am connected to a different pod on every attempt when using curl.
This is my test YAML file and test command.
apiVersion: apps/v1
kind: ReplicaSet
metadata:
name: udptest
labels:
app: nginx
spec:
replicas: 2
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: container
image: nginx
ports:
- containerPort: 80
nodeSelector:
zone: a
---
apiVersion: apps/v1
kind: ReplicaSet
metadata:
name: udptest2
labels:
app: nginx
spec:
replicas: 2
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: container
image: nginx
ports:
- containerPort: 80
nodeSelector:
zone: c
---
apiVersion: v1
kind: Service
metadata:
name: nginx-nlb
annotations:
service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
spec:
selector:
app: nginx
ports:
- protocol: TCP
port: 80
targetPort: 80
type: LoadBalancer
#!/bin/bash
number=0
while :
do
  if [ $number -gt 2 ]; then
    break
  fi
  curl -L -k -s -o /dev/null -w "%{http_code}\n" <nlb dns name>
  number=$((number+1))   # increment the counter so the loop terminates
done
How can I reach the same pod on every attempt via the NLB's sticky sessions?
As far as I understand, the ClientIP value for sessionAffinity is not supported when the service type is LoadBalancer.
You can use the NGINX ingress controller and implement the affinity there.
https://kubernetes.github.io/ingress-nginx/examples/affinity/cookie/
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: test-ingress
annotations:
kubernetes.io/ingress.class: "nginx"
nginx.ingress.kubernetes.io/affinity: "cookie"
nginx.ingress.kubernetes.io/session-cookie-name: "test-cookie"
nginx.ingress.kubernetes.io/session-cookie-expires: "172800"
nginx.ingress.kubernetes.io/session-cookie-max-age: "172800"
nginx.ingress.kubernetes.io/ssl-redirect: "false"
nginx.ingress.kubernetes.io/affinity-mode: persistent
nginx.ingress.kubernetes.io/session-cookie-hash: sha1
spec:
rules:
- host: example.com
http:
paths:
- path: /
backend:
serviceName: service
servicePort: port
A good article: https://zhimin-wen.medium.com/sticky-sessions-in-kubernetes-56eb0e8f257d
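A quick way to verify cookie affinity once an ingress like the one above is in place (a sketch; example.com stands in for the real host): replay the cookie jar so the affinity cookie set on the first request is sent back on the following ones, which should keep hitting the same pod.

for i in 1 2 3; do
  curl -s -o /dev/null -c cookies.txt -b cookies.txt \
       -w "%{http_code}\n" http://example.com/
done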
You need to enable it:
annotations:
service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
service.beta.kubernetes.io/aws-load-balancer-target-group-attributes: stickiness.enabled=true,stickiness.type=source_ip
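For reference, a hedged sketch of attaching those annotations to the nginx-nlb Service from the question; the target-group-attributes annotation is handled by the AWS Load Balancer Controller, so this assumes that controller manages the NLB:

kubectl annotate service nginx-nlb \
  'service.beta.kubernetes.io/aws-load-balancer-type=nlb' \
  'service.beta.kubernetes.io/aws-load-balancer-target-group-attributes=stickiness.enabled=true,stickiness.type=source_ip' \
  --overwrite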

TLS doesn't work with LoadBalancer backed Service in Kubernetes

I am trying to expose an application in my cluster by creating a Service of type LoadBalancer. The reason is that I want this app to have a separate channel for communication. I have a kops cluster. I want to use AWS's Network Load Balancer so that it gets a static IP. When I create the Service with port 80 mapped to the port the app is running on, everything works, but when I try to add port 443 it just times out.
Here is the configuration that works:
apiVersion: v1
kind: Service
metadata:
name: abc
annotations:
service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
labels:
app: abc
spec:
externalTrafficPolicy: Local
ports:
- name: http
port: 80
protocol: TCP
targetPort: 9050
selector:
app: abc
type: LoadBalancer
As soon as I add TLS support to the config file and deploy it, the connection to the load balancer times out. How do I add TLS support to the load balancer?
I want to do it through the Service and not through an Ingress.
This is the configuration that doesn't work for me; when I paste the link in the browser, it times out.
kind: Service
apiVersion: v1
metadata:
name: abc
annotations:
service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"
service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "60"
service.beta.kubernetes.io/aws-load-balancer-ssl-cert: xxxxx
labels:
app: abc
spec:
externalTrafficPolicy: Local
ports:
- name: http
port: 443
protocol: TCP
targetPort: 9050
selector:
app: abc
type: LoadBalancer
You can use TLS/SSL termination at the load balancer:
apiVersion: v1
kind: Service
metadata:
name: test-service
annotations:
# Note that the backend talks over HTTP.
service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
# TODO: Fill in with the ARN of your certificate.
service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:{region}:{user id}:certificate/{id}
# Only run SSL on the port named "https" below.
service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"
spec:
selector:
app: test-pod
ports:
- name: http
port: 80
targetPort: 8080
- name: https
port: 443
targetPort: 8080
type: LoadBalancer
You can add the TLS certificate in AWS Certificate Manager and pass the certificate's ARN to the Kubernetes Service.
That way the HTTPS connection is terminated at the load balancer and the backend only speaks plain HTTP.
You can also check this out: https://aws.amazon.com/premiumsupport/knowledge-center/terminate-https-traffic-eks-acm/
https://github.com/kubernetes/kubernetes/issues/73297
EDIT 1:
service.beta.kubernetes.io/aws-load-balancer-type: nlb
If it does not work, try adding this annotation as well, depending on your load balancer type.
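To find the certificate ARN to paste into the aws-load-balancer-ssl-cert annotation, the certificate can be looked up in ACM (a sketch; the region is a placeholder):

# List ACM certificates and copy the ARN of the one matching your domain.
aws acm list-certificates --region <region> \
  --query 'CertificateSummaryList[*].[DomainName,CertificateArn]' --output table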
You can now deploy an ingress using an NLB with SSL termination (HTTPS at the NLB, HTTP to the service).
I finally found a solution that worked for me. You can try deploying the following ingress.yaml (make sure to update your cert ARN in the controller Service annotations):
---
# Source: nginx-ingress/templates/controller-serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
namespace: ingress-nginx
labels:
app: nginx-ingress
chart: nginx-ingress-1.38.0
heritage: Helm
release: nginx-ingress
name: nginx-ingress
---
# Source: nginx-ingress/templates/default-backend-serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
namespace: ingress-nginx
labels:
app: nginx-ingress
chart: nginx-ingress-1.38.0
heritage: Helm
release: nginx-ingress
name: nginx-ingress-backend
---
# Source: nginx-ingress/templates/controller-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
namespace: ingress-nginx
labels:
app: nginx-ingress
chart: nginx-ingress-1.38.0
component: "controller"
heritage: Helm
release: nginx-ingress
name: nginx-ingress-controller
data:
server-snippet: |
listen 8000;
if ( $server_port = 80 ) {
return 308 https://$host$request_uri;
}
ssl-redirect: "false"
---
# Source: nginx-ingress/templates/clusterrole.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
namespace: ingress-nginx
labels:
app: nginx-ingress
chart: nginx-ingress-1.38.0
heritage: Helm
release: nginx-ingress
name: nginx-ingress
rules:
- apiGroups:
- ""
resources:
- configmaps
- endpoints
- nodes
- pods
- secrets
verbs:
- list
- watch
- apiGroups:
- ""
resources:
- nodes
verbs:
- get
- apiGroups:
- ""
resources:
- services
verbs:
- get
- list
- update
- watch
- apiGroups:
- extensions
- "networking.k8s.io" # k8s 1.14+
resources:
- ingresses
verbs:
- get
- list
- watch
- apiGroups:
- ""
resources:
- events
verbs:
- create
- patch
- apiGroups:
- extensions
- "networking.k8s.io" # k8s 1.14+
resources:
- ingresses/status
verbs:
- update
---
# Source: nginx-ingress/templates/clusterrolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
namespace: ingress-nginx
labels:
app: nginx-ingress
chart: nginx-ingress-1.38.0
heritage: Helm
release: nginx-ingress
name: nginx-ingress
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: nginx-ingress
subjects:
- kind: ServiceAccount
name: nginx-ingress
namespace: ingress-nginx
---
# Source: nginx-ingress/templates/controller-role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
namespace: ingress-nginx
labels:
app: nginx-ingress
chart: nginx-ingress-1.38.0
heritage: Helm
release: nginx-ingress
name: nginx-ingress
rules:
- apiGroups:
- ""
resources:
- namespaces
verbs:
- get
- apiGroups:
- ""
resources:
- configmaps
- pods
- secrets
- endpoints
verbs:
- get
- list
- watch
- apiGroups:
- ""
resources:
- services
verbs:
- get
- list
- update
- watch
- apiGroups:
- extensions
- "networking.k8s.io" # k8s 1.14+
resources:
- ingresses
verbs:
- get
- list
- watch
- apiGroups:
- extensions
- "networking.k8s.io" # k8s 1.14+
resources:
- ingresses/status
verbs:
- update
- apiGroups:
- ""
resources:
- configmaps
resourceNames:
- ingress-controller-leader-nginx
verbs:
- get
- update
- apiGroups:
- ""
resources:
- configmaps
verbs:
- create
- apiGroups:
- ""
resources:
- endpoints
verbs:
- create
- get
- update
- apiGroups:
- ""
resources:
- events
verbs:
- create
- patch
---
# Source: nginx-ingress/templates/controller-rolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
namespace: ingress-nginx
labels:
app: nginx-ingress
chart: nginx-ingress-1.38.0
heritage: Helm
release: nginx-ingress
name: nginx-ingress
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: nginx-ingress
subjects:
- kind: ServiceAccount
name: nginx-ingress
namespace: ingress-nginx
---
# Source: nginx-ingress/templates/controller-service.yaml
apiVersion: v1
kind: Service
metadata:
namespace: ingress-nginx
annotations:
service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "tcp"
service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"
service.beta.kubernetes.io/aws-load-balancer-internal: "true"
service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:##REPLACE WITH YOUR CERT ARN"
service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
labels:
app: nginx-ingress
chart: nginx-ingress-1.38.0
component: "controller"
heritage: Helm
release: nginx-ingress
name: nginx-ingress-controller
spec:
ports:
- name: http
port: 80
protocol: TCP
targetPort: http
- name: https
port: 443
protocol: TCP
targetPort: special
selector:
app: nginx-ingress
release: nginx-ingress
app.kubernetes.io/component: controller
type: "LoadBalancer"
---
# Source: nginx-ingress/templates/default-backend-service.yaml
apiVersion: v1
kind: Service
metadata:
namespace: ingress-nginx
labels:
app: nginx-ingress
chart: nginx-ingress-1.38.0
component: "default-backend"
heritage: Helm
release: nginx-ingress
name: nginx-ingress-default-backend
spec:
ports:
- name: http
port: 80
protocol: TCP
targetPort: http
selector:
app: nginx-ingress
release: nginx-ingress
app.kubernetes.io/component: default-backend
type: "ClusterIP"
---
# Source: nginx-ingress/templates/controller-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
namespace: ingress-nginx
labels:
app: nginx-ingress
chart: nginx-ingress-1.38.0
heritage: Helm
release: nginx-ingress
app.kubernetes.io/component: controller
name: nginx-ingress-controller
annotations:
{}
spec:
selector:
matchLabels:
app: nginx-ingress
release: nginx-ingress
replicas: 1
revisionHistoryLimit: 10
strategy:
{}
minReadySeconds: 0
template:
metadata:
labels:
app: nginx-ingress
release: nginx-ingress
component: "controller"
app.kubernetes.io/component: controller
spec:
dnsPolicy: ClusterFirst
containers:
- name: nginx-ingress-controller
image: "quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.32.0"
imagePullPolicy: "IfNotPresent"
args:
- /nginx-ingress-controller
- --default-backend-service=$(POD_NAMESPACE)/nginx-ingress-default-backend
- --publish-service=$(POD_NAMESPACE)/nginx-ingress-controller
- --election-id=ingress-controller-leader
- --ingress-class=nginx
- --configmap=$(POD_NAMESPACE)/nginx-ingress-controller
- --annotations-prefix=nginx.ingress.kubernetes.io
securityContext:
capabilities:
drop:
- ALL
add:
- NET_BIND_SERVICE
runAsUser: 101
allowPrivilegeEscalation: true
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
livenessProbe:
httpGet:
path: /healthz
port: 10254
scheme: HTTP
initialDelaySeconds: 10
periodSeconds: 10
timeoutSeconds: 1
successThreshold: 1
failureThreshold: 3
ports:
- name: http
containerPort: 80
protocol: TCP
- name: https
containerPort: 443
protocol: TCP
- name: special
containerPort: 8000
protocol: TCP
readinessProbe:
httpGet:
path: /healthz
port: 10254
scheme: HTTP
initialDelaySeconds: 10
periodSeconds: 10
timeoutSeconds: 1
successThreshold: 1
failureThreshold: 3
resources:
{}
hostNetwork: false
serviceAccountName: nginx-ingress
terminationGracePeriodSeconds: 60
---
# Source: nginx-ingress/templates/default-backend-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
namespace: ingress-nginx
labels:
app: nginx-ingress
chart: nginx-ingress-1.38.0
heritage: Helm
release: nginx-ingress
app.kubernetes.io/component: default-backend
name: nginx-ingress-default-backend
spec:
selector:
matchLabels:
app: nginx-ingress
release: nginx-ingress
replicas: 1
revisionHistoryLimit: 10
template:
metadata:
labels:
app: nginx-ingress
release: nginx-ingress
app.kubernetes.io/component: default-backend
spec:
containers:
- name: nginx-ingress-default-backend
image: "k8s.gcr.io/defaultbackend-amd64:1.5"
imagePullPolicy: "IfNotPresent"
args:
securityContext:
runAsUser: 65534
livenessProbe:
httpGet:
path: /healthz
port: 8080
scheme: HTTP
initialDelaySeconds: 30
periodSeconds: 10
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 3
readinessProbe:
httpGet:
path: /healthz
port: 8080
scheme: HTTP
initialDelaySeconds: 0
periodSeconds: 5
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 6
ports:
- name: http
containerPort: 8080
protocol: TCP
resources:
{}
serviceAccountName: nginx-ingress-backend
terminationGracePeriodSeconds: 60
Your annotation refers to a port named https:
service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"
but your port is named http; change it to https:
spec:
externalTrafficPolicy: Local
ports:
- name: https
port: 443
protocol: TCP
targetPort: 9050
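After renaming the port, a hedged way to confirm the NLB picked up the TLS listener and certificate (the load balancer ARN is a placeholder):

# The EXTERNAL-IP / hostname column should show the NLB's DNS name.
kubectl get service abc -o wide
# Confirm a 443 listener with the ACM certificate exists on the NLB.
aws elbv2 describe-listeners --load-balancer-arn <nlb-arn>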