Due to security requirements, I need to install Istio offline on an AWS EKS cluster. I could not find any clear documentation or recommendations for offline installation, so I used "istioctl manifest generate > generated-manifest.yml" to save the installation files, replaced the three images (proxy, pilot, prometheus) with copies hosted in AWS ECR, and applied the file with kubectl. I should mention that "istioctl manifest apply" times out for some reason. I also created the "istio-system" namespace manually before installation.
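For reference, a rough sketch of what I ran (the registry and tag below are placeholders; as far as I can tell, istioctl manifest generate also accepts --set hub= and --set tag= overrides, which would avoid editing the images in the manifest by hand):
# create the namespace first, then render and apply the manifest offline
kubectl create namespace istio-system
istioctl manifest generate \
  --set hub=777777777777.dkr.ecr.eu-west-2.amazonaws.com \
  --set tag=1.6.8 > generated-manifest.yml
kubectl apply -f generated-manifest.yml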
I used these two annotations to provision an internal Network Load Balancer:
service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
service.beta.kubernetes.io/aws-load-balancer-internal: "true"
The installation creates the CRDs, the pods, and the services. Here are my questions, in case someone can point me in the right direction:
Where can I find some proper documentation for offline installation?
There is no istio-egressgateway with the default profile. Is the egress gateway unnecessary if I only need to expose an application?
The NLB is created with 4 listeners (80, 443, 15443, 15021), but I can connect only to 15021; the others refuse the connection. Accordingly, only the 15021 target group is healthy and the other 3 are not.
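For what it's worth, a sketch of how the open ports can be inspected on the gateway itself (the label-based pod lookup is an assumption on my side; istioctl proxy-config is documented):
# find the ingress gateway pod via its labels
POD=$(kubectl -n istio-system get pod -l istio=ingressgateway -o jsonpath='{.items[0].metadata.name}')
# list the listeners Envoy has actually opened inside the pod
istioctl proxy-config listeners "$POD.istio-system"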
Below is some of the relevant configuration:
kubectl describe service/istio-ingressgateway -n istio-system
------
Name: istio-ingressgateway
Namespace: istio-system
Labels: app=istio-ingressgateway
istio=ingressgateway
release=istio
Annotations: service.beta.kubernetes.io/aws-load-balancer-internal: true
service.beta.kubernetes.io/aws-load-balancer-type: nlb
Selector: app=istio-ingressgateway,istio=ingressgateway
Type: LoadBalancer
IP Family Policy: SingleStack
IP Families: IPv4
IP: 172.20.212.217
IPs: 172.20.212.217
LoadBalancer Ingress: aaaa-bbb.elb.eu-west-2.amazonaws.com
Port: status-port 15021/TCP
TargetPort: 15021/TCP
NodePort: status-port 32094/TCP
Endpoints: 10.0.35.41:15021
Port: http2 80/TCP
TargetPort: 80/TCP
NodePort: http2 30890/TCP
Endpoints: 10.0.35.41:80
Port: https 443/TCP
TargetPort: 443/TCP
NodePort: https 32039/TCP
Endpoints: 10.0.35.41:443
Port: tls 15443/TCP
TargetPort: 15443/TCP
NodePort: tls 30690/TCP
Endpoints: 10.0.35.41:15443
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
sh-4.2$ kubectl describe service/istiod -n istio-system
Name: istiod
Namespace: istio-system
Labels: app=istiod
istio=pilot
istio.io/rev=default
release=istio
Annotations: <none>
Selector: app=istiod,istio=pilot
Type: ClusterIP
IP Family Policy: SingleStack
IP Families: IPv4
IP: 172.20.25.159
IPs: 172.20.25.159
Port: grpc-xds 15010/TCP
TargetPort: 15010/TCP
Endpoints: 10.0.43.252:15010
Port: https-dns 15012/TCP
TargetPort: 15012/TCP
Endpoints: 10.0.43.252:15012
Port: https-webhook 443/TCP
TargetPort: 15017/TCP
Endpoints: 10.0.43.252:15017
Port: http-monitoring 15014/TCP
TargetPort: 15014/TCP
Endpoints: 10.0.43.252:15014
Port: dns-tls 853/TCP
TargetPort: 15053/TCP
Endpoints: 10.0.43.252:15053
Session Affinity: None
Events: <none>
kind: CustomResourceDefinition
apiVersion: apiextensions.k8s.io/v1beta1
metadata:
name: adapters.config.istio.io
labels:
app: mixer
package: adapter
istio: mixer-adapter
chart: istio
heritage: Tiller
release: istio
annotations:
"helm.sh/resource-policy": keep
spec:
group: config.istio.io
names:
kind: adapter
plural: adapters
singular: adapter
categories:
- istio-io
- policy-istio-io
scope: Namespaced
subresources:
status: {}
versions:
- name: v1alpha2
served: true
storage: true
---
kind: CustomResourceDefinition
apiVersion: apiextensions.k8s.io/v1beta1
metadata:
name: templates.config.istio.io
labels:
app: mixer
package: template
istio: mixer-template
chart: istio
heritage: Tiller
release: istio
annotations:
"helm.sh/resource-policy": keep
spec:
group: config.istio.io
names:
kind: template
plural: templates
singular: template
categories:
- istio-io
- policy-istio-io
scope: Namespaced
subresources:
status: {}
versions:
- name: v1alpha2
served: true
storage: true
---
# Cni component is disabled.
# EgressGateways istio-egressgateway component is disabled.
# Resources for IngressGateways component
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
labels:
app: istio-ingressgateway
istio: ingressgateway
release: istio
name: istio-ingressgateway
namespace: istio-system
spec:
maxReplicas: 5
metrics:
- resource:
name: cpu
targetAverageUtilization: 80
type: Resource
minReplicas: 1
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: istio-ingressgateway
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: istio-ingressgateway
istio: ingressgateway
release: istio
name: istio-ingressgateway
namespace: istio-system
spec:
selector:
matchLabels:
app: istio-ingressgateway
istio: ingressgateway
strategy:
rollingUpdate:
maxSurge: 100%
maxUnavailable: 25%
template:
metadata:
annotations:
sidecar.istio.io/inject: "false"
labels:
app: istio-ingressgateway
chart: gateways
heritage: Tiller
istio: ingressgateway
release: istio
service.istio.io/canonical-name: istio-ingressgateway
service.istio.io/canonical-revision: latest
spec:
affinity:
nodeAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- preference:
matchExpressions:
- key: beta.kubernetes.io/arch
operator: In
values:
- amd64
weight: 2
- preference:
matchExpressions:
- key: beta.kubernetes.io/arch
operator: In
values:
- ppc64le
weight: 2
- preference:
matchExpressions:
- key: beta.kubernetes.io/arch
operator: In
values:
- s390x
weight: 2
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: beta.kubernetes.io/arch
operator: In
values:
- amd64
- ppc64le
- s390x
containers:
- args:
- proxy
- router
- --domain
- $(POD_NAMESPACE).svc.cluster.local
- --proxyLogLevel=warning
- --proxyComponentLogLevel=misc:error
- --log_output_level=default:info
- --serviceCluster
- istio-ingressgateway
- --trust-domain=cluster.local
env:
- name: JWT_POLICY
value: third-party-jwt
- name: PILOT_CERT_PROVIDER
value: istiod
- name: CA_ADDR
value: istiod.istio-system.svc:15012
- name: NODE_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: spec.nodeName
- name: POD_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
- name: INSTANCE_IP
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: status.podIP
- name: HOST_IP
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: status.hostIP
- name: SERVICE_ACCOUNT
valueFrom:
fieldRef:
fieldPath: spec.serviceAccountName
- name: CANONICAL_SERVICE
valueFrom:
fieldRef:
fieldPath: metadata.labels['service.istio.io/canonical-name']
- name: CANONICAL_REVISION
valueFrom:
fieldRef:
fieldPath: metadata.labels['service.istio.io/canonical-revision']
- name: ISTIO_META_WORKLOAD_NAME
value: istio-ingressgateway
- name: ISTIO_META_OWNER
value: kubernetes://apis/apps/v1/namespaces/istio-system/deployments/istio-ingressgateway
- name: ISTIO_META_MESH_ID
value: cluster.local
- name: ISTIO_META_ROUTER_MODE
value: sni-dnat
- name: ISTIO_META_CLUSTER_ID
value: Kubernetes
image: 777777777777.dkr.ecr.eu-west-2.amazonaws.com/istio-proxyv2:1.6.8
name: istio-proxy
ports:
- containerPort: 15021
- containerPort: 8080
- containerPort: 8443
- containerPort: 15443
- containerPort: 15011
- containerPort: 15012
- containerPort: 8060
- containerPort: 853
- containerPort: 15090
name: http-envoy-prom
protocol: TCP
readinessProbe:
failureThreshold: 30
httpGet:
path: /healthz/ready
port: 15021
scheme: HTTP
initialDelaySeconds: 1
periodSeconds: 2
successThreshold: 1
timeoutSeconds: 1
resources:
limits:
cpu: 2000m
memory: 1024Mi
requests:
cpu: 100m
memory: 128Mi
volumeMounts:
- mountPath: /etc/istio/proxy
name: istio-envoy
- mountPath: /etc/istio/config
name: config-volume
- mountPath: /var/run/secrets/istio
name: istiod-ca-cert
- mountPath: /var/run/secrets/tokens
name: istio-token
readOnly: true
- mountPath: /var/run/ingress_gateway
name: ingressgatewaysdsudspath
- mountPath: /etc/istio/pod
name: podinfo
- mountPath: /etc/istio/ingressgateway-certs
name: ingressgateway-certs
readOnly: true
- mountPath: /etc/istio/ingressgateway-ca-certs
name: ingressgateway-ca-certs
readOnly: true
serviceAccountName: istio-ingressgateway-service-account
volumes:
- configMap:
name: istio-ca-root-cert
name: istiod-ca-cert
- downwardAPI:
items:
- fieldRef:
fieldPath: metadata.labels
path: labels
- fieldRef:
fieldPath: metadata.annotations
path: annotations
name: podinfo
- emptyDir: {}
name: istio-envoy
- emptyDir: {}
name: ingressgatewaysdsudspath
- name: istio-token
projected:
sources:
- serviceAccountToken:
audience: istio-ca
expirationSeconds: 43200
path: istio-token
- configMap:
name: istio
optional: true
name: config-volume
- name: ingressgateway-certs
secret:
optional: true
secretName: istio-ingressgateway-certs
- name: ingressgateway-ca-certs
secret:
optional: true
secretName: istio-ingressgateway-ca-certs
---
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
name: istio-ingressgateway
namespace: istio-system
labels:
app: istio-ingressgateway
istio: ingressgateway
release: istio
spec:
minAvailable: 1
selector:
matchLabels:
app: istio-ingressgateway
istio: ingressgateway
release: istio
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: istio-ingressgateway-sds
namespace: istio-system
labels:
release: istio
rules:
- apiGroups: [""]
resources: ["secrets"]
verbs: ["get", "watch", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: istio-ingressgateway-sds
namespace: istio-system
labels:
release: istio
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: istio-ingressgateway-sds
subjects:
- kind: ServiceAccount
name: istio-ingressgateway-service-account
---
apiVersion: v1
kind: Service
metadata:
annotations:
service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
service.beta.kubernetes.io/aws-load-balancer-internal: "true"
labels:
app: istio-ingressgateway
istio: ingressgateway
release: istio
name: istio-ingressgateway
namespace: istio-system
spec:
ports:
- name: status-port
port: 15021
targetPort: 15021
- name: http2
port: 80
targetPort: 80
- name: https
port: 443
targetPort: 443
- name: tls
port: 15443
targetPort: 15443
selector:
app: istio-ingressgateway
istio: ingressgateway
type: LoadBalancer
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: istio-ingressgateway-service-account
namespace: istio-system
labels:
app: istio-ingressgateway
istio: ingressgateway
release: istio
---
# IstiodRemote component is disabled.
# Resources for Pilot component
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
name: istiod
namespace: istio-system
labels:
app: istiod
release: istio
istio.io/rev: default
spec:
maxReplicas: 5
minReplicas: 1
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: istiod
metrics:
- type: Resource
resource:
name: cpu
targetAverageUtilization: 80
---
I have an AWS NLB in front of an NGINX ingress controller, plus an ingress rule that routes traffic between an API and an SPA. The controller works perfectly over HTTP, but over HTTPS I'm getting a 400 Bad Request: "The plain HTTP request was sent to HTTPS port".
If I understand it correctly, after TLS has been terminated at the NLB, the decrypted request is being forwarded to the controller's HTTPS port rather than its HTTP port, but I'm struggling to find where.
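The symptom is reproducible from the command line (the hostname is a placeholder for the NLB DNS name):
curl -vk https://my-nlb-xxxx.elb.us-east-2.amazonaws.com/
# HTTP/1.1 400 Bad Request
# "The plain HTTP request was sent to HTTPS port"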
ingress controller.yaml
apiVersion: v1
kind: Namespace
metadata:
name: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
---
# Source: ingress-nginx/templates/controller-serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
labels:
helm.sh/chart: ingress-nginx-2.0.3
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/version: 0.32.0
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: controller
name: ingress-nginx
namespace: ingress-nginx
---
# Source: ingress-nginx/templates/controller-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
labels:
helm.sh/chart: ingress-nginx-2.0.3
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/version: 0.32.0
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: controller
name: ingress-nginx-controller
namespace: ingress-nginx
data:
---
# Source: ingress-nginx/templates/clusterrole.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
labels:
helm.sh/chart: ingress-nginx-2.0.3
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/version: 0.32.0
app.kubernetes.io/managed-by: Helm
name: ingress-nginx
namespace: ingress-nginx
rules:
- apiGroups:
- ''
resources:
- configmaps
- endpoints
- nodes
- pods
- secrets
verbs:
- list
- watch
- apiGroups:
- ''
resources:
- nodes
verbs:
- get
- apiGroups:
- ''
resources:
- services
verbs:
- get
- list
- update
- watch
- apiGroups:
- extensions
- networking.k8s.io # k8s 1.14+
resources:
- ingresses
verbs:
- get
- list
- watch
- apiGroups:
- ''
resources:
- events
verbs:
- create
- patch
- apiGroups:
- extensions
- networking.k8s.io # k8s 1.14+
resources:
- ingresses/status
verbs:
- update
- apiGroups:
- networking.k8s.io # k8s 1.14+
resources:
- ingressclasses
verbs:
- get
- list
- watch
---
# Source: ingress-nginx/templates/clusterrolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
labels:
helm.sh/chart: ingress-nginx-2.0.3
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/version: 0.32.0
app.kubernetes.io/managed-by: Helm
name: ingress-nginx
namespace: ingress-nginx
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: ingress-nginx
subjects:
- kind: ServiceAccount
name: ingress-nginx
namespace: ingress-nginx
---
# Source: ingress-nginx/templates/controller-role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
labels:
helm.sh/chart: ingress-nginx-2.0.3
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/version: 0.32.0
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: controller
name: ingress-nginx
namespace: ingress-nginx
rules:
- apiGroups:
- ''
resources:
- namespaces
verbs:
- get
- apiGroups:
- ''
resources:
- configmaps
- pods
- secrets
- endpoints
verbs:
- get
- list
- watch
- apiGroups:
- ''
resources:
- services
verbs:
- get
- list
- update
- watch
- apiGroups:
- extensions
- networking.k8s.io # k8s 1.14+
resources:
- ingresses
verbs:
- get
- list
- watch
- apiGroups:
- extensions
- networking.k8s.io # k8s 1.14+
resources:
- ingresses/status
verbs:
- update
- apiGroups:
- networking.k8s.io # k8s 1.14+
resources:
- ingressclasses
verbs:
- get
- list
- watch
- apiGroups:
- ''
resources:
- configmaps
resourceNames:
- ingress-controller-leader-nginx
verbs:
- get
- update
- apiGroups:
- ''
resources:
- configmaps
verbs:
- create
- apiGroups:
- ''
resources:
- endpoints
verbs:
- create
- get
- update
- apiGroups:
- ''
resources:
- events
verbs:
- create
- patch
---
# Source: ingress-nginx/templates/controller-rolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
labels:
helm.sh/chart: ingress-nginx-2.0.3
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/version: 0.32.0
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: controller
name: ingress-nginx
namespace: ingress-nginx
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: ingress-nginx
subjects:
- kind: ServiceAccount
name: ingress-nginx
namespace: ingress-nginx
---
# Source: ingress-nginx/templates/controller-service-webhook.yaml
apiVersion: v1
kind: Service
metadata:
labels:
helm.sh/chart: ingress-nginx-2.0.3
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/version: 0.32.0
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: controller
name: ingress-nginx-controller-admission
namespace: ingress-nginx
spec:
type: ClusterIP
ports:
- name: https-webhook
port: 443
targetPort: webhook
selector:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/component: controller
---
# Source: ingress-nginx/templates/controller-service.yaml
apiVersion: v1
kind: Service
metadata:
annotations:
service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: '60'
service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: 'true'
service.beta.kubernetes.io/aws-load-balancer-type: nlb
service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"
service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:us-east-2:XXX:certificate/XXXXX
labels:
helm.sh/chart: ingress-nginx-2.0.3
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/version: 0.32.0
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: controller
name: ingress-nginx-controller
namespace: ingress-nginx
spec:
type: LoadBalancer
externalTrafficPolicy: Local
ports:
- name: http
port: 80
protocol: TCP
targetPort: http
- name: https
port: 443
protocol: TCP
targetPort: https
selector:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/component: controller
---
# Source: ingress-nginx/templates/controller-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
helm.sh/chart: ingress-nginx-2.0.3
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/version: 0.32.0
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: controller
name: ingress-nginx-controller
namespace: ingress-nginx
spec:
selector:
matchLabels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/component: controller
revisionHistoryLimit: 10
minReadySeconds: 0
template:
metadata:
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/component: controller
spec:
dnsPolicy: ClusterFirst
containers:
- name: controller
image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.32.0
imagePullPolicy: IfNotPresent
lifecycle:
preStop:
exec:
command:
- /wait-shutdown
args:
- /nginx-ingress-controller
- --publish-service=ingress-nginx/ingress-nginx-controller
- --election-id=ingress-controller-leader
- --ingress-class=nginx
- --configmap=ingress-nginx/ingress-nginx-controller
- --validating-webhook=:8443
- --validating-webhook-certificate=/usr/local/certificates/cert
- --validating-webhook-key=/usr/local/certificates/key
securityContext:
capabilities:
drop:
- ALL
add:
- NET_BIND_SERVICE
runAsUser: 101
allowPrivilegeEscalation: true
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
livenessProbe:
httpGet:
path: /healthz
port: 10254
scheme: HTTP
initialDelaySeconds: 10
periodSeconds: 10
timeoutSeconds: 1
successThreshold: 1
failureThreshold: 3
readinessProbe:
httpGet:
path: /healthz
port: 10254
scheme: HTTP
initialDelaySeconds: 10
periodSeconds: 10
timeoutSeconds: 1
successThreshold: 1
failureThreshold: 3
ports:
- name: http
containerPort: 80
protocol: TCP
- name: https
containerPort: 443
protocol: TCP
- name: webhook
containerPort: 8443
protocol: TCP
volumeMounts:
- name: webhook-cert
mountPath: /usr/local/certificates/
readOnly: true
resources:
requests:
cpu: 100m
memory: 90Mi
serviceAccountName: ingress-nginx
terminationGracePeriodSeconds: 300
volumes:
- name: webhook-cert
secret:
secretName: ingress-nginx-admission
---
# Source: ingress-nginx/templates/admission-webhooks/validating-webhook.yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
labels:
helm.sh/chart: ingress-nginx-2.0.3
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/version: 0.32.0
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: admission-webhook
name: ingress-nginx-admission
namespace: ingress-nginx
webhooks:
- name: validate.nginx.ingress.kubernetes.io
rules:
- apiGroups:
- extensions
- networking.k8s.io
apiVersions:
- v1beta1
operations:
- CREATE
- UPDATE
resources:
- ingresses
failurePolicy: Fail
clientConfig:
service:
namespace: ingress-nginx
name: ingress-nginx-controller-admission
path: /extensions/v1beta1/ingresses
sideEffects: None
admissionReviewVersions: ["v1", "v1beta1"]
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/clusterrole.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: ingress-nginx-admission
annotations:
helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
labels:
helm.sh/chart: ingress-nginx-2.0.3
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/version: 0.32.0
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: admission-webhook
namespace: ingress-nginx
rules:
- apiGroups:
- admissionregistration.k8s.io
resources:
- validatingwebhookconfigurations
verbs:
- get
- update
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/clusterrolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: ingress-nginx-admission
annotations:
helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
labels:
helm.sh/chart: ingress-nginx-2.0.3
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/version: 0.32.0
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: admission-webhook
namespace: ingress-nginx
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: ingress-nginx-admission
subjects:
- kind: ServiceAccount
name: ingress-nginx-admission
namespace: ingress-nginx
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/job-createSecret.yaml
apiVersion: batch/v1
kind: Job
metadata:
name: ingress-nginx-admission-create
annotations:
helm.sh/hook: pre-install,pre-upgrade
helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
labels:
helm.sh/chart: ingress-nginx-2.0.3
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/version: 0.32.0
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: admission-webhook
namespace: ingress-nginx
spec:
template:
metadata:
name: ingress-nginx-admission-create
labels:
helm.sh/chart: ingress-nginx-2.0.3
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/version: 0.32.0
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: admission-webhook
spec:
containers:
- name: create
image: jettech/kube-webhook-certgen:v1.2.0
imagePullPolicy: IfNotPresent
args:
- create
- --host=ingress-nginx-controller-admission,ingress-nginx-controller-admission.ingress-nginx.svc
- --namespace=ingress-nginx
- --secret-name=ingress-nginx-admission
restartPolicy: OnFailure
serviceAccountName: ingress-nginx-admission
securityContext:
runAsNonRoot: true
runAsUser: 2000
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/job-patchWebhook.yaml
apiVersion: batch/v1
kind: Job
metadata:
name: ingress-nginx-admission-patch
annotations:
helm.sh/hook: post-install,post-upgrade
helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
labels:
helm.sh/chart: ingress-nginx-2.0.3
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/version: 0.32.0
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: admission-webhook
namespace: ingress-nginx
spec:
template:
metadata:
name: ingress-nginx-admission-patch
labels:
helm.sh/chart: ingress-nginx-2.0.3
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/version: 0.32.0
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: admission-webhook
spec:
containers:
- name: patch
image: jettech/kube-webhook-certgen:v1.2.0
imagePullPolicy:
args:
- patch
- --webhook-name=ingress-nginx-admission
- --namespace=ingress-nginx
- --patch-mutating=false
- --secret-name=ingress-nginx-admission
- --patch-failure-policy=Fail
restartPolicy: OnFailure
serviceAccountName: ingress-nginx-admission
securityContext:
runAsNonRoot: true
runAsUser: 2000
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: ingress-nginx-admission
annotations:
helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
labels:
helm.sh/chart: ingress-nginx-2.0.3
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/version: 0.32.0
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: admission-webhook
namespace: ingress-nginx
rules:
- apiGroups:
- ''
resources:
- secrets
verbs:
- get
- create
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/rolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: ingress-nginx-admission
annotations:
helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
labels:
helm.sh/chart: ingress-nginx-2.0.3
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/version: 0.32.0
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: admission-webhook
namespace: ingress-nginx
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: ingress-nginx-admission
subjects:
- kind: ServiceAccount
name: ingress-nginx-admission
namespace: ingress-nginx
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: ingress-nginx-admission
annotations:
helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
labels:
helm.sh/chart: ingress-nginx-2.0.3
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/version: 0.32.0
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: admission-webhook
namespace: ingress-nginx
ingress-rules.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: ingress-rules
namespace: ingress-nginx
annotations:
nginx.ingress.kubernetes.io/rewrite-target: "/$1"
spec:
# tls:
# - hosts:
# - mysite.com
# secretName: secret-name
rules:
# - host: mysite.com
- http:
paths:
- path: /(.*)
pathType: Prefix
backend:
service:
name: frontend
port:
number: 80
- path: /(api/v0(?:/|$).*)
pathType: Prefix
backend:
service:
name: backend
port:
number: 80
frontend-svc.yaml
apiVersion: v1
kind: Service
metadata:
name: frontend
namespace: ingress-nginx
# annotations:
# service.beta.kubernetes.io/aws-load-balancer-type: 'nlb'
spec:
selector:
app: my-performance
tier: frontend
ports:
- protocol: TCP
port: 80
targetPort: 80
type: LoadBalancer
backend-svc.yaml
apiVersion: v1
kind: Service
metadata:
name: backend
namespace: ingress-nginx
# annotations:
# service.beta.kubernetes.io/aws-load-balancer-type: 'nlb'
spec:
selector:
app: my-performance
tier: backend
ports:
- protocol: TCP
name: "http"
port: 80
targetPort: 8080
type: LoadBalancer
I do have deployments behind these, but since the services work fine both independently and together over HTTP, I've ruled them out as the problem here.
Any advice is greatly appreciated!
I just lost an entire day troubleshooting this, and it turned out to be the ports configuration in the Service created by Helm: the targetPort of the https entry needs to be 80 instead of the named-port reference "https".
Before:
ports:
- name: http
port: 80
protocol: TCP
targetPort: http
- name: https
port: 443
protocol: TCP
targetPort: https
After:
ports:
- name: http
port: 80
protocol: TCP
targetPort: http
- name: https
port: 443
protocol: TCP
targetPort: 80
Here's how your service would look:
apiVersion: v1
kind: Service
metadata:
annotations:
service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: '60'
service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: 'true'
service.beta.kubernetes.io/aws-load-balancer-type: nlb
service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"
service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:us-east-2:XXX:certificate/XXXXX
labels:
helm.sh/chart: ingress-nginx-2.0.3
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/version: 0.32.0
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: controller
name: ingress-nginx-controller
namespace: ingress-nginx
spec:
type: LoadBalancer
externalTrafficPolicy: Local
ports:
- name: http
port: 80
protocol: TCP
targetPort: http
- name: https
port: 443
protocol: TCP
targetPort: 80
selector:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/component: controller
Alternatively, add this to the Helm values:
values={
"controller": {
...
"service": {
...
"targetPorts": {"https": "80"}
}
}
}
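The same override can also be passed on the Helm command line (release and chart names here are just examples):
helm upgrade --install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx \
  --set controller.service.targetPorts.https=80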
I am seeing "Zonal network endpoint group unhealthy" after configuring an ingress with a managed certificate in GCP via:
# kubernetes/backstage.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: backstage
namespace: backstage
spec:
replicas: 1
selector:
matchLabels:
app: backstage
template:
metadata:
labels:
app: backstage
spec:
containers:
- name: backstage
image: australia-southeast1-docker.pkg.dev/acme-dev-tooling/acme-docker/backstage:prd-v.0.35
imagePullPolicy: IfNotPresent
ports:
- name: http
containerPort: 7007
envFrom:
- secretRef:
name: postgres-secrets
- secretRef:
name: backstage-secrets
---
apiVersion: v1
kind: Service
metadata:
name: backstage
namespace: backstage
annotations:
cloud.google.com/backend-config: '{"default": "backstage-ingress-backendconfig"}'
spec:
selector:
app: backstage
ports:
- name: http
protocol: TCP
port: 80
type: NodePort
---
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
name: backstage-ingress-backendconfig
spec:
healthCheck:
checkIntervalSec: 15
type: HTTP
requestPath: /
---
apiVersion: networking.gke.io/v1
kind: ManagedCertificate
metadata:
name: tools-managed-cert-backstage
namespace: backstage
spec:
domains:
- tools.backstage.acme-uat.com
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: backstage-ingress
namespace: backstage
annotations:
kubernetes.io/ingress.global-static-ip-name: "tools-backstage-external-ip"
networking.gke.io/managed-certificates: tools-managed-cert-backstage
kubernetes.io/ingress.class: "gce"
spec:
defaultBackend:
service:
name: backstage
port:
number: 80
---
apiVersion: v1
kind: Namespace
metadata:
name: backstage
GCP provisions an L7 HTTPS load balancer that cannot reach the GKE cluster because the zonal network endpoint groups report as unhealthy.
The ingress reads:
All Backends are in an UNHEALTHY state.
Is there something I am missing? Does the GKE ingress configure the firewall? I've looked at the rules; there are rules for 130.211.0.0/22 and 35.191.0.0/16, which are the health-check source ranges.
logs/compute.googleapis.com%2Fhealthchecks yields no probe results, despite logging being enabled.
Any help would be much appreciated.
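For reference, a sketch of where this can be inspected (resource names match the manifests above):
# backend health as reported on the ingress ("All Backends are in an UNHEALTHY state" shows up here)
kubectl describe ingress backstage-ingress -n backstage
# firewall rules and health checks that should cover 130.211.0.0/22 and 35.191.0.0/16
gcloud compute firewall-rules list
gcloud compute health-checks list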
UPDATE: the above is fixed per the comments; the following still isn't working.
kind: Service
metadata:
name: argocd-server
namespace: argocd
labels:
app.kubernetes.io/component: server
app.kubernetes.io/instance: argocd
app.kubernetes.io/managed-by: pulumi
app.kubernetes.io/name: argocd-server
app.kubernetes.io/part-of: argocd
argocd.argoproj.io/instance: argocd
helm.sh/chart: argo-cd-3.29.5
annotations:
cloud.google.com/backend-config: '{"default": "argocd-ingress-backendconfig"}'
cloud.google.com/neg: '{"ingress": true}'
cloud.google.com/neg-status: >-
{"network_endpoint_groups":{"80":"k8s1-20a3d3ad-argocd-argocd-server-80-c2ec22fa"},"zones":["australia-southeast1-a"]}
kubectl.kubernetes.io/last-applied-configuration: >
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/backend-config":"{\"default\":
\"argocd-ingress-backendconfig\"}","cloud.google.com/neg":"{\"ingress\":
true}"},"labels":{"app.kubernetes.io/component":"server","app.kubernetes.io/instance":"argocd","app.kubernetes.io/managed-by":"pulumi","app.kubernetes.io/name":"argocd-server","app.kubernetes.io/part-of":"argocd","argocd.argoproj.io/instance":"argocd","helm.sh/chart":"argo-cd-3.29.5"},"name":"argocd-server","namespace":"argocd"},"spec":{"ports":[{"name":"http","port":80,"protocol":"TCP","targetPort":8080},{"name":"https","port":443,"protocol":"TCP","targetPort":"http"}],"selector":{"app.kubernetes.io/instance":"argocd","app.kubernetes.io/name":"argocd-server"},"type":"ClusterIP"}}
status:
loadBalancer: {}
spec:
ports:
- name: http
protocol: TCP
port: 80
targetPort: 8080
- name: https
protocol: TCP
port: 443
targetPort: http
selector:
app.kubernetes.io/instance: argocd
app.kubernetes.io/name: argocd-server
clusterIP: 10.184.10.20
clusterIPs:
- 10.184.10.20
type: ClusterIP
sessionAffinity: None
ipFamilies:
- IPv4
ipFamilyPolicy: SingleStack
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: argocd-server
namespace: argocd
uid: fee5f91c-b431-4b8c-ab10-64daa02ec729
resourceVersion: '108355'
generation: 3
creationTimestamp: '2022-01-20T00:06:05Z'
labels:
app.kubernetes.io/component: server
app.kubernetes.io/instance: argocd
app.kubernetes.io/managed-by: pulumi
app.kubernetes.io/name: argocd-server
app.kubernetes.io/part-of: argocd
app.kubernetes.io/version: v2.2.2
helm.sh/chart: argo-cd-3.30.1
annotations:
deployment.kubernetes.io/revision: '3'
kubectl.kubernetes.io/last-applied-configuration: >
{"apiVersion":"apps/v1","kind":"Deployment","metadata":{"labels":{"app.kubernetes.io/component":"server","app.kubernetes.io/instance":"argocd","app.kubernetes.io/managed-by":"pulumi","app.kubernetes.io/name":"argocd-server","app.kubernetes.io/part-of":"argocd","app.kubernetes.io/version":"v2.2.2","helm.sh/chart":"argo-cd-3.30.1"},"name":"argocd-server","namespace":"argocd"},"spec":{"replicas":1,"revisionHistoryLimit":5,"selector":{"matchLabels":{"app.kubernetes.io/instance":"argocd","app.kubernetes.io/name":"argocd-server"}},"template":{"metadata":{"labels":{"app.kubernetes.io/component":"server","app.kubernetes.io/instance":"argocd","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/name":"argocd-server","app.kubernetes.io/part-of":"argocd","app.kubernetes.io/version":"v2.2.2","helm.sh/chart":"argo-cd-3.30.1"}},"spec":{"containers":[{"command":["argocd-server","--staticassets","/shared/app","--repo-server","argocd-repo-server:8081","--dex-server","http://argocd-dex-server:5556","--logformat","text","--loglevel","info","--redis","argocd-redis:6379"],"image":"quay.io/argoproj/argocd:v2.2.2","imagePullPolicy":"IfNotPresent","livenessProbe":{"failureThreshold":3,"httpGet":{"path":"/healthz","port":8080},"initialDelaySeconds":10,"periodSeconds":10,"successThreshold":1,"timeoutSeconds":1},"name":"server","ports":[{"containerPort":8080,"name":"server","protocol":"TCP"}],"readinessProbe":{"failureThreshold":3,"httpGet":{"path":"/healthz","port":8080},"initialDelaySeconds":10,"periodSeconds":10,"successThreshold":1,"timeoutSeconds":1},"resources":{},"volumeMounts":[{"mountPath":"/app/config/ssh","name":"ssh-known-hosts"},{"mountPath":"/app/config/server/tls","name":"argocd-repo-server-tls"},{"mountPath":"/home/argocd","name":"plugins-home"},{"mountPath":"/tmp","name":"tmp-dir"}]}],"serviceAccountName":"argocd-server","volumes":[{"emptyDir":{},"name":"static-files"},{"emptyDir":{},"name":"tmp-dir"},{"configMap":{"name":"argocd-ssh-known-hosts-cm"},"name":"ssh-known-hosts"},{"name":"argocd-repo-server-tls","secret":{"items":[{"key":"tls.crt","path":"tls.crt"},{"key":"tls.key","path":"tls.key"},{"key":"ca.crt","path":"ca.crt"}],"optional":true,"secretName":"argocd-repo-server-tls"}},{"emptyDir":{},"name":"plugins-home"}]}}}}
spec:
replicas: 1
selector:
matchLabels:
app.kubernetes.io/instance: argocd
app.kubernetes.io/name: argocd-server
template:
metadata:
creationTimestamp: null
labels:
app.kubernetes.io/component: server
app.kubernetes.io/instance: argocd
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: argocd-server
app.kubernetes.io/part-of: argocd
app.kubernetes.io/version: v2.2.2
helm.sh/chart: argo-cd-3.30.1
annotations:
kubectl.kubernetes.io/restartedAt: '2022-01-20T15:44:27+11:00'
spec:
volumes:
- name: static-files
emptyDir: {}
- name: tmp-dir
emptyDir: {}
- name: ssh-known-hosts
configMap:
name: argocd-ssh-known-hosts-cm
defaultMode: 420
- name: argocd-repo-server-tls
secret:
secretName: argocd-repo-server-tls
items:
- key: tls.crt
path: tls.crt
- key: tls.key
path: tls.key
- key: ca.crt
path: ca.crt
defaultMode: 420
optional: true
- name: plugins-home
emptyDir: {}
containers:
- name: server
image: quay.io/argoproj/argocd:v2.2.2
command:
- argocd-server
- '--staticassets'
- /shared/app
- '--repo-server'
- argocd-repo-server:8081
- '--dex-server'
- http://argocd-dex-server:5556
- '--logformat'
- text
- '--loglevel'
- info
- '--redis'
- argocd-redis:6379
ports:
- name: server
containerPort: 8080
protocol: TCP
resources: {}
volumeMounts:
- name: ssh-known-hosts
mountPath: /app/config/ssh
- name: argocd-repo-server-tls
mountPath: /app/config/server/tls
- name: plugins-home
mountPath: /home/argocd
- name: tmp-dir
mountPath: /tmp
livenessProbe:
httpGet:
path: /
port: 8080
scheme: HTTP
initialDelaySeconds: 10
timeoutSeconds: 1
periodSeconds: 10
successThreshold: 1
failureThreshold: 3
readinessProbe:
httpGet:
path: /
port: 8080
scheme: HTTP
initialDelaySeconds: 10
timeoutSeconds: 1
periodSeconds: 10
successThreshold: 1
failureThreshold: 3
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
imagePullPolicy: IfNotPresent
restartPolicy: Always
terminationGracePeriodSeconds: 30
dnsPolicy: ClusterFirst
serviceAccountName: argocd-server
serviceAccount: argocd-server
securityContext: {}
schedulerName: default-scheduler
strategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 25%
maxSurge: 25%
revisionHistoryLimit: 5
progressDeadlineSeconds: 600
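For reference, the zonal NEG named in the neg-status annotation above can be inspected directly (NEG name and zone are taken from that annotation):
gcloud compute network-endpoint-groups list
gcloud compute network-endpoint-groups list-network-endpoints \
  k8s1-20a3d3ad-argocd-argocd-server-80-c2ec22fa --zone australia-southeast1-a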
Cheers
# Here is a workaround for Google Cloud with ArgoCD v2.5.2
# cloudflare-key.yaml
---
apiVersion: v1
kind: Secret
metadata:
name: cloudflare-key
namespace: cert-manager
type: Opaque
stringData:
key: xxxxxxxxxxxxxxxx
---
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
name: letsencrypt-staging
spec:
acme:
email: zia@mydomain.com
server: https://acme-staging-v02.api.letsencrypt.org/directory
privateKeySecretRef:
name: letsencrypt-staging
solvers:
- selector: {}
dns01:
cloudflare:
email: zia@mydomain.com
apiKeySecretRef:
name: cloudflare-key
key: key
---
apiVersion: v1
kind: Service
metadata:
labels:
app.kubernetes.io/name: argocd-server
app.kubernetes.io/part-of: argocd
app.kubernetes.io/component: server
annotations:
cloud.google.com/neg: '{"ingress": true, "exposed_ports": {"8080":{}}}'
beta.cloud.google.com/backend-config: '{"default": "argocd-backend-config"}'
name: argocd-server
spec:
ports:
- name: http8080
protocol: TCP
port: 8080
targetPort: 8080
- name: http
protocol: TCP
port: 80
targetPort: 8080
- name: https
protocol: TCP
port: 443
targetPort: 8080
selector:
app.kubernetes.io/name: argocd-server
---
#backendconfig.yaml
kind: BackendConfig
metadata:
name: argocd-backend-config
namespace: argocd
spec:
healthCheck:
checkIntervalSec: 30
timeoutSec: 10
healthyThreshold: 1
unhealthyThreshold: 5
type: HTTP
requestPath: /healthz
port: 8080
---
# FrontendConfig.yaml
apiVersion: networking.gke.io/v1beta1
kind: FrontendConfig
metadata:
name: argocd-frontend-config
namespace: argocd
spec:
redirectToHttps:
enabled: true
---
# ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: argocd-server-ingress
namespace: argocd
annotations:
kubernetes.io/ingress.class: gce
cert-manager.io/cluster-issuer: letsencrypt-staging
kubernetes.io/tls-acme: "true"
kubernetes.io/ingress.global-static-ip-name: "argocd-dev"
networking.gke.io/v1beta1.FrontendConfig: argocd-frontend-config
spec:
rules:
- host: argocd-dev.mydomain.com
http:
paths:
- backend:
service:
name: argocd-server
port:
name: http
path: "/"
pathType: Prefix
tls:
- hosts:
- argocd-dev.mydomain.com
secretName: argocd-secret #don't change, this is provided by ArgoCD
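After applying the above, a quick way to confirm the NEG and backend wiring (a sketch; names match the manifests above):
# the cloud.google.com/neg-status annotation should list the created NEG
kubectl -n argocd get svc argocd-server -o yaml
# backend health appears on the ingress once the /healthz check passes
kubectl -n argocd describe ingress argocd-server-ingress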
I have set up a k8s cluster on AWS EC2 instances with 1 master and 2 worker nodes using kops.
I deployed 2 services, and both are reachable in the browser through their LoadBalancer Services.
Now I have installed NGINX, but I cannot reach my services through the load balancer IP; it returns a 504 GATEWAY_TIME_OUT. I googled it with no success. Where am I going wrong? Here is my sample code... [AWS free account]
deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: $APP_NAME
name: $APP_NAME
namespace: $NAMESPACE
spec:
replicas: 1
selector:
matchLabels:
app: $APP_NAME
template:
metadata:
labels:
app: $APP_NAME
spec:
imagePullSecrets:
- name: $IMG_PULL_SECRET
containers:
- image: $IMAGE_REG/$APP_NAME:$IMAGE_TAG
name: $APP_NAME
imagePullPolicy: Always
ports:
- containerPort: ${CONTAINER_PORT}
protocol: TCP
env:
- name: spring.cloud.config.uri
value: 'http://config-server-service'
service.yml
apiVersion: v1
kind: Service
metadata:
labels:
app: $APP_NAME
name: $APP_NAME
namespace: $NAMESPACE
spec:
type: $SERVICE_TYPE
#type: $SERVICE_TYPE
ports:
- port: 80
targetPort: ${CONTAINER_PORT}
protocol: TCP
selector:
app: $APP_NAME
ingress.yml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: ${APP_NAME}
namespace: $NAMESPACE
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /
nginx.ingress.kubernetes.io/affinity: cookie
nginx.ingress.kubernetes.io/session-cookie-name: JSESSIONID
nginx.ingress.kubernetes.io/ssl-passthrough: "false"
nginx.ingress.kubernetes.io/ssl-redirect: "false"
kubernetes.io/ingress.class: "nginx"
kubernetes.io/ingress.allow-http: "true"
# kubernetes.io/ingress.global-static-ip-name: "my-gateway"
spec:
rules:
- http:
paths:
- path: /${APP_NAME}
backend:
serviceName: ${APP_NAME}
servicePort: 80
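A sketch of checks that should narrow this down (the controller label is an assumption; the variable names match the templates above):
# a 504 usually means the controller reached no upstream in time, so confirm the Service has endpoints
kubectl get endpoints $APP_NAME -n $NAMESPACE
# and watch the controller logs for "upstream timed out" while reproducing the request
kubectl logs -n ingress-nginx -l app.kubernetes.io/name=ingress-nginx --tail=50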
I am trying to expose an application in my cluster by creating a Service of type LoadBalancer, because I want this app to have a separate channel for communication. I have a kops cluster and want to use an AWS Network Load Balancer so that it gets a static IP. When I create the Service with port 80 mapped to the port the app is running on, everything works, but when I try to add port 443 it just times out.
Here is the configuration that works:
apiVersion: v1
kind: Service
metadata:
name: abc
annotations:
service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
labels:
app: abc
spec:
externalTrafficPolicy: Local
ports:
- name: http
port: 80
protocol: TCP
targetPort: 9050
selector:
app: abc
type: LoadBalancer
As soon as I add TLS support to the config file and deploy it, the connection to the load balancer times out. How do I add TLS support to the load balancer?
I want to do it through the Service and not through an Ingress.
This is the configuration that doesn't work for me; when I open the URL in the browser, it times out.
kind: Service
apiVersion: v1
metadata:
name: abc
annotations:
service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"
service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "60"
service.beta.kubernetes.io/aws-load-balancer-ssl-cert: xxxxx
labels:
app: abc
spec:
externalTrafficPolicy: Local
ports:
- name: http
port: 443
protocol: TCP
targetPort: 9050
selector:
app: abc
type: LoadBalancer
You can use TLS/SSL termination on the load balancer:
apiVersion: v1
kind: Service
metadata:
name: test-service
annotations:
# Note that the backend talks over HTTP.
service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
# TODO: Fill in with the ARN of your certificate.
service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:{region}:{user id}:certificate/{id}
# Only run SSL on the port named "https" below.
service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"
spec:
selector:
app: test-pod
ports:
- name: http
port: 80
targetPort: 8080
- name: https
port: 443
targetPort: 8080
type: LoadBalancer
You can add the TLS certificate in AWS Certificate Manager and pass the certificate's ARN to the Kubernetes Service.
That way the HTTPS connection is terminated at the load balancer and the backend only speaks plain HTTP.
You can also check these out: https://aws.amazon.com/premiumsupport/knowledge-center/terminate-https-traffic-eks-acm/
https://github.com/kubernetes/kubernetes/issues/73297
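To verify where TLS actually terminates, you can inspect the certificate the load balancer presents (the hostname is a placeholder):
# the ACM certificate's subject/issuer should be returned once ssl-ports matches a named port
openssl s_client -connect abc.elb.amazonaws.com:443 </dev/null 2>/dev/null | openssl x509 -noout -subject -issuer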
EDIT 1:
service.beta.kubernetes.io/aws-load-balancer-type: nlb
If that does not work, try adding the annotation above, adjusted to your load balancer type.
You can now deploy an ingress using an NLB with SSL termination (HTTPS at the NLB, HTTP to the service).
I finally found a solution that worked for me; you can try deploying the following ingress.yaml (make sure to update the certificate ARN in the controller Service annotations):
---
# Source: nginx-ingress/templates/controller-serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
namespace: ingress-nginx
labels:
app: nginx-ingress
chart: nginx-ingress-1.38.0
heritage: Helm
release: nginx-ingress
name: nginx-ingress
---
# Source: nginx-ingress/templates/default-backend-serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
namespace: ingress-nginx
labels:
app: nginx-ingress
chart: nginx-ingress-1.38.0
heritage: Helm
release: nginx-ingress
name: nginx-ingress-backend
---
# Source: nginx-ingress/templates/controller-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
namespace: ingress-nginx
labels:
app: nginx-ingress
chart: nginx-ingress-1.38.0
component: "controller"
heritage: Helm
release: nginx-ingress
name: nginx-ingress-controller
data:
server-snippet: |
listen 8000;
if ( $server_port = 80 ) {
return 308 https://$host$request_uri;
}
ssl-redirect: "false"
---
# Source: nginx-ingress/templates/clusterrole.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
namespace: ingress-nginx
labels:
app: nginx-ingress
chart: nginx-ingress-1.38.0
heritage: Helm
release: nginx-ingress
name: nginx-ingress
rules:
- apiGroups:
- ""
resources:
- configmaps
- endpoints
- nodes
- pods
- secrets
verbs:
- list
- watch
- apiGroups:
- ""
resources:
- nodes
verbs:
- get
- apiGroups:
- ""
resources:
- services
verbs:
- get
- list
- update
- watch
- apiGroups:
- extensions
- "networking.k8s.io" # k8s 1.14+
resources:
- ingresses
verbs:
- get
- list
- watch
- apiGroups:
- ""
resources:
- events
verbs:
- create
- patch
- apiGroups:
- extensions
- "networking.k8s.io" # k8s 1.14+
resources:
- ingresses/status
verbs:
- update
---
# Source: nginx-ingress/templates/clusterrolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
namespace: ingress-nginx
labels:
app: nginx-ingress
chart: nginx-ingress-1.38.0
heritage: Helm
release: nginx-ingress
name: nginx-ingress
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: nginx-ingress
subjects:
- kind: ServiceAccount
name: nginx-ingress
namespace: ingress-nginx
---
# Source: nginx-ingress/templates/controller-role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
namespace: ingress-nginx
labels:
app: nginx-ingress
chart: nginx-ingress-1.38.0
heritage: Helm
release: nginx-ingress
name: nginx-ingress
rules:
- apiGroups:
- ""
resources:
- namespaces
verbs:
- get
- apiGroups:
- ""
resources:
- configmaps
- pods
- secrets
- endpoints
verbs:
- get
- list
- watch
- apiGroups:
- ""
resources:
- services
verbs:
- get
- list
- update
- watch
- apiGroups:
- extensions
- "networking.k8s.io" # k8s 1.14+
resources:
- ingresses
verbs:
- get
- list
- watch
- apiGroups:
- extensions
- "networking.k8s.io" # k8s 1.14+
resources:
- ingresses/status
verbs:
- update
- apiGroups:
- ""
resources:
- configmaps
resourceNames:
- ingress-controller-leader-nginx
verbs:
- get
- update
- apiGroups:
- ""
resources:
- configmaps
verbs:
- create
- apiGroups:
- ""
resources:
- endpoints
verbs:
- create
- get
- update
- apiGroups:
- ""
resources:
- events
verbs:
- create
- patch
---
# Source: nginx-ingress/templates/controller-rolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
namespace: ingress-nginx
labels:
app: nginx-ingress
chart: nginx-ingress-1.38.0
heritage: Helm
release: nginx-ingress
name: nginx-ingress
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: nginx-ingress
subjects:
- kind: ServiceAccount
name: nginx-ingress
namespace: ingress-nginx
---
# Source: nginx-ingress/templates/controller-service.yaml
apiVersion: v1
kind: Service
metadata:
namespace: ingress-nginx
annotations:
service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "tcp"
service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"
service.beta.kubernetes.io/aws-load-balancer-internal: "true"
service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:##REPLACE WITH YOUR CERT ARN"
service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
labels:
app: nginx-ingress
chart: nginx-ingress-1.38.0
component: "controller"
heritage: Helm
release: nginx-ingress
name: nginx-ingress-controller
spec:
ports:
- name: http
port: 80
protocol: TCP
targetPort: http
- name: https
port: 443
protocol: TCP
targetPort: special
selector:
app: nginx-ingress
release: nginx-ingress
app.kubernetes.io/component: controller
type: "LoadBalancer"
---
# Source: nginx-ingress/templates/default-backend-service.yaml
apiVersion: v1
kind: Service
metadata:
namespace: ingress-nginx
labels:
app: nginx-ingress
chart: nginx-ingress-1.38.0
component: "default-backend"
heritage: Helm
release: nginx-ingress
name: nginx-ingress-default-backend
spec:
ports:
- name: http
port: 80
protocol: TCP
targetPort: http
selector:
app: nginx-ingress
release: nginx-ingress
app.kubernetes.io/component: default-backend
type: "ClusterIP"
---
# Source: nginx-ingress/templates/controller-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
namespace: ingress-nginx
labels:
app: nginx-ingress
chart: nginx-ingress-1.38.0
heritage: Helm
release: nginx-ingress
app.kubernetes.io/component: controller
name: nginx-ingress-controller
annotations:
{}
spec:
selector:
matchLabels:
app: nginx-ingress
release: nginx-ingress
replicas: 1
revisionHistoryLimit: 10
strategy:
{}
minReadySeconds: 0
template:
metadata:
labels:
app: nginx-ingress
release: nginx-ingress
component: "controller"
app.kubernetes.io/component: controller
spec:
dnsPolicy: ClusterFirst
containers:
- name: nginx-ingress-controller
image: "quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.32.0"
imagePullPolicy: "IfNotPresent"
args:
- /nginx-ingress-controller
- --default-backend-service=$(POD_NAMESPACE)/nginx-ingress-default-backend
- --publish-service=$(POD_NAMESPACE)/nginx-ingress-controller
- --election-id=ingress-controller-leader
- --ingress-class=nginx
- --configmap=$(POD_NAMESPACE)/nginx-ingress-controller
- --annotations-prefix=nginx.ingress.kubernetes.io
securityContext:
capabilities:
drop:
- ALL
add:
- NET_BIND_SERVICE
runAsUser: 101
allowPrivilegeEscalation: true
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
livenessProbe:
httpGet:
path: /healthz
port: 10254
scheme: HTTP
initialDelaySeconds: 10
periodSeconds: 10
timeoutSeconds: 1
successThreshold: 1
failureThreshold: 3
ports:
- name: http
containerPort: 80
protocol: TCP
- name: https
containerPort: 443
protocol: TCP
- name: special
containerPort: 8000
protocol: TCP
readinessProbe:
httpGet:
path: /healthz
port: 10254
scheme: HTTP
initialDelaySeconds: 10
periodSeconds: 10
timeoutSeconds: 1
successThreshold: 1
failureThreshold: 3
resources:
{}
hostNetwork: false
serviceAccountName: nginx-ingress
terminationGracePeriodSeconds: 60
---
# Source: nginx-ingress/templates/default-backend-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
namespace: ingress-nginx
labels:
app: nginx-ingress
chart: nginx-ingress-1.38.0
heritage: Helm
release: nginx-ingress
app.kubernetes.io/component: default-backend
name: nginx-ingress-default-backend
spec:
selector:
matchLabels:
app: nginx-ingress
release: nginx-ingress
replicas: 1
revisionHistoryLimit: 10
template:
metadata:
labels:
app: nginx-ingress
release: nginx-ingress
app.kubernetes.io/component: default-backend
spec:
containers:
- name: nginx-ingress-default-backend
image: "k8s.gcr.io/defaultbackend-amd64:1.5"
imagePullPolicy: "IfNotPresent"
args:
securityContext:
runAsUser: 65534
livenessProbe:
httpGet:
path: /healthz
port: 8080
scheme: HTTP
initialDelaySeconds: 30
periodSeconds: 10
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 3
readinessProbe:
httpGet:
path: /healthz
port: 8080
scheme: HTTP
initialDelaySeconds: 0
periodSeconds: 5
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 6
ports:
- name: http
containerPort: 8080
protocol: TCP
resources:
{}
serviceAccountName: nginx-ingress-backend
terminationGracePeriodSeconds: 60
Your annotation refers to the port named https:
service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"
but your port is named http. Change it to https:
spec:
externalTrafficPolicy: Local
ports:
- name: https
port: 443
protocol: TCP
targetPort: 9050