TLS doesn't work with a LoadBalancer-backed Service in Kubernetes (amazon-web-services)

I am trying to expose an application in my cluster by creating a Service of type LoadBalancer, because I want this app to have a separate channel for communication. I have a kOps cluster. I want to use AWS's Network Load Balancer so that it gets a static IP. When I create the Service with port 80 mapped to the port the app is running on, everything works, but when I try to add port 443 it just times out.
Here is the configuration that works:
apiVersion: v1
kind: Service
metadata:
  name: abc
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
  labels:
    app: abc
spec:
  externalTrafficPolicy: Local
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 9050
  selector:
    app: abc
  type: LoadBalancer
As soon as I add TLS support to the config file and deploy it, the connection to the load balancer times out. How do I add TLS support to the load balancer?
I want to do it through the Service, not through an Ingress.
This is the configuration that doesn't work for me; when I paste the link in the browser, it times out.
kind: Service
apiVersion: v1
metadata:
  name: abc
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"
    service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "60"
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: xxxxx
  labels:
    app: abc
spec:
  externalTrafficPolicy: Local
  ports:
  - name: http
    port: 443
    protocol: TCP
    targetPort: 9050
  selector:
    app: abc
  type: LoadBalancer
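For anyone debugging a similar timeout, the listeners and target-group health of the provisioned NLB can be inspected with the AWS CLI. A minimal sketch (not from the original post; the ARN placeholders are hypothetical):
# Find the NLB that Kubernetes created, then check its listeners and target health
aws elbv2 describe-load-balancers --query 'LoadBalancers[?Type==`network`].[LoadBalancerName,LoadBalancerArn]'
aws elbv2 describe-listeners --load-balancer-arn <nlb-arn>
aws elbv2 describe-target-health --target-group-arn <target-group-arn>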

You can use TLS/SSL termination at the load balancer:
apiVersion: v1
kind: Service
metadata:
  name: test-service
  annotations:
    # Note that the backend talks over HTTP.
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
    # TODO: Fill in with the ARN of your certificate.
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:{region}:{user id}:certificate/{id}
    # Only run SSL on the port named "https" below.
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"
spec:
  selector:
    app: test-pod
  ports:
  - name: http
    port: 80
    targetPort: 8080
  - name: https
    port: 443
    targetPort: 8080
  type: LoadBalancer
You can add the TLS certificate in AWS Certificate Manager and pass the certificate's ARN to the Kubernetes Service.
That way the HTTPS connection is terminated at the load balancer and the backend only needs to speak plain HTTP.
You can also check these out: https://aws.amazon.com/premiumsupport/knowledge-center/terminate-https-traffic-eks-acm/
https://github.com/kubernetes/kubernetes/issues/73297
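If you do not have a certificate ARN yet, one way to obtain one is through the AWS CLI; a sketch (not part of the original answer, the domain is a placeholder):
# Request a public certificate in ACM, then list issued certificates to copy the ARN
aws acm request-certificate --domain-name example.com --validation-method DNS
aws acm list-certificates --certificate-statuses ISSUED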
EDIT 1:
If it still does not work, try adding this annotation, adjusted to your load balancer type:
service.beta.kubernetes.io/aws-load-balancer-type: nlb

You can now deploy an ingress controller using an NLB with SSL termination (HTTPS at the NLB, plain HTTP to the service).
I finally found a solution that worked for me; you can try deploying the following ingress.yaml (make sure to update your certificate ARN in the controller Service annotations):
---
# Source: nginx-ingress/templates/controller-serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
namespace: ingress-nginx
labels:
app: nginx-ingress
chart: nginx-ingress-1.38.0
heritage: Helm
release: nginx-ingress
name: nginx-ingress
---
# Source: nginx-ingress/templates/default-backend-serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
namespace: ingress-nginx
labels:
app: nginx-ingress
chart: nginx-ingress-1.38.0
heritage: Helm
release: nginx-ingress
name: nginx-ingress-backend
---
# Source: nginx-ingress/templates/controller-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: ingress-nginx
  labels:
    app: nginx-ingress
    chart: nginx-ingress-1.38.0
    component: "controller"
    heritage: Helm
    release: nginx-ingress
  name: nginx-ingress-controller
data:
  server-snippet: |
    listen 8000;
    if ( $server_port = 80 ) {
      return 308 https://$host$request_uri;
    }
  ssl-redirect: "false"
---
# Source: nginx-ingress/templates/clusterrole.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
namespace: ingress-nginx
labels:
app: nginx-ingress
chart: nginx-ingress-1.38.0
heritage: Helm
release: nginx-ingress
name: nginx-ingress
rules:
- apiGroups:
- ""
resources:
- configmaps
- endpoints
- nodes
- pods
- secrets
verbs:
- list
- watch
- apiGroups:
- ""
resources:
- nodes
verbs:
- get
- apiGroups:
- ""
resources:
- services
verbs:
- get
- list
- update
- watch
- apiGroups:
- extensions
- "networking.k8s.io" # k8s 1.14+
resources:
- ingresses
verbs:
- get
- list
- watch
- apiGroups:
- ""
resources:
- events
verbs:
- create
- patch
- apiGroups:
- extensions
- "networking.k8s.io" # k8s 1.14+
resources:
- ingresses/status
verbs:
- update
---
# Source: nginx-ingress/templates/clusterrolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
namespace: ingress-nginx
labels:
app: nginx-ingress
chart: nginx-ingress-1.38.0
heritage: Helm
release: nginx-ingress
name: nginx-ingress
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: nginx-ingress
subjects:
- kind: ServiceAccount
name: nginx-ingress
namespace: ingress-nginx
---
# Source: nginx-ingress/templates/controller-role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
namespace: ingress-nginx
labels:
app: nginx-ingress
chart: nginx-ingress-1.38.0
heritage: Helm
release: nginx-ingress
name: nginx-ingress
rules:
- apiGroups:
- ""
resources:
- namespaces
verbs:
- get
- apiGroups:
- ""
resources:
- configmaps
- pods
- secrets
- endpoints
verbs:
- get
- list
- watch
- apiGroups:
- ""
resources:
- services
verbs:
- get
- list
- update
- watch
- apiGroups:
- extensions
- "networking.k8s.io" # k8s 1.14+
resources:
- ingresses
verbs:
- get
- list
- watch
- apiGroups:
- extensions
- "networking.k8s.io" # k8s 1.14+
resources:
- ingresses/status
verbs:
- update
- apiGroups:
- ""
resources:
- configmaps
resourceNames:
- ingress-controller-leader-nginx
verbs:
- get
- update
- apiGroups:
- ""
resources:
- configmaps
verbs:
- create
- apiGroups:
- ""
resources:
- endpoints
verbs:
- create
- get
- update
- apiGroups:
- ""
resources:
- events
verbs:
- create
- patch
---
# Source: nginx-ingress/templates/controller-rolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
namespace: ingress-nginx
labels:
app: nginx-ingress
chart: nginx-ingress-1.38.0
heritage: Helm
release: nginx-ingress
name: nginx-ingress
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: nginx-ingress
subjects:
- kind: ServiceAccount
name: nginx-ingress
namespace: ingress-nginx
---
# Source: nginx-ingress/templates/controller-service.yaml
apiVersion: v1
kind: Service
metadata:
  namespace: ingress-nginx
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "tcp"
    service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:##REPLACE WITH YOUR CERT ARN"
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
  labels:
    app: nginx-ingress
    chart: nginx-ingress-1.38.0
    component: "controller"
    heritage: Helm
    release: nginx-ingress
  name: nginx-ingress-controller
spec:
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: http
  - name: https
    port: 443
    protocol: TCP
    targetPort: special
  selector:
    app: nginx-ingress
    release: nginx-ingress
    app.kubernetes.io/component: controller
  type: "LoadBalancer"
---
# Source: nginx-ingress/templates/default-backend-service.yaml
apiVersion: v1
kind: Service
metadata:
namespace: ingress-nginx
labels:
app: nginx-ingress
chart: nginx-ingress-1.38.0
component: "default-backend"
heritage: Helm
release: nginx-ingress
name: nginx-ingress-default-backend
spec:
ports:
- name: http
port: 80
protocol: TCP
targetPort: http
selector:
app: nginx-ingress
release: nginx-ingress
app.kubernetes.io/component: default-backend
type: "ClusterIP"
---
# Source: nginx-ingress/templates/controller-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
namespace: ingress-nginx
labels:
app: nginx-ingress
chart: nginx-ingress-1.38.0
heritage: Helm
release: nginx-ingress
app.kubernetes.io/component: controller
name: nginx-ingress-controller
annotations:
{}
spec:
selector:
matchLabels:
app: nginx-ingress
release: nginx-ingress
replicas: 1
revisionHistoryLimit: 10
strategy:
{}
minReadySeconds: 0
template:
metadata:
labels:
app: nginx-ingress
release: nginx-ingress
component: "controller"
app.kubernetes.io/component: controller
spec:
dnsPolicy: ClusterFirst
containers:
- name: nginx-ingress-controller
image: "quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.32.0"
imagePullPolicy: "IfNotPresent"
args:
- /nginx-ingress-controller
- --default-backend-service=$(POD_NAMESPACE)/nginx-ingress-default-backend
- --publish-service=$(POD_NAMESPACE)/nginx-ingress-controller
- --election-id=ingress-controller-leader
- --ingress-class=nginx
- --configmap=$(POD_NAMESPACE)/nginx-ingress-controller
- --annotations-prefix=nginx.ingress.kubernetes.io
securityContext:
capabilities:
drop:
- ALL
add:
- NET_BIND_SERVICE
runAsUser: 101
allowPrivilegeEscalation: true
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
livenessProbe:
httpGet:
path: /healthz
port: 10254
scheme: HTTP
initialDelaySeconds: 10
periodSeconds: 10
timeoutSeconds: 1
successThreshold: 1
failureThreshold: 3
ports:
- name: http
containerPort: 80
protocol: TCP
- name: https
containerPort: 443
protocol: TCP
- name: special
containerPort: 8000
protocol: TCP
readinessProbe:
httpGet:
path: /healthz
port: 10254
scheme: HTTP
initialDelaySeconds: 10
periodSeconds: 10
timeoutSeconds: 1
successThreshold: 1
failureThreshold: 3
resources:
{}
hostNetwork: false
serviceAccountName: nginx-ingress
terminationGracePeriodSeconds: 60
---
# Source: nginx-ingress/templates/default-backend-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
namespace: ingress-nginx
labels:
app: nginx-ingress
chart: nginx-ingress-1.38.0
heritage: Helm
release: nginx-ingress
app.kubernetes.io/component: default-backend
name: nginx-ingress-default-backend
spec:
selector:
matchLabels:
app: nginx-ingress
release: nginx-ingress
replicas: 1
revisionHistoryLimit: 10
template:
metadata:
labels:
app: nginx-ingress
release: nginx-ingress
app.kubernetes.io/component: default-backend
spec:
containers:
- name: nginx-ingress-default-backend
image: "k8s.gcr.io/defaultbackend-amd64:1.5"
imagePullPolicy: "IfNotPresent"
args:
securityContext:
runAsUser: 65534
livenessProbe:
httpGet:
path: /healthz
port: 8080
scheme: HTTP
initialDelaySeconds: 30
periodSeconds: 10
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 3
readinessProbe:
httpGet:
path: /healthz
port: 8080
scheme: HTTP
initialDelaySeconds: 0
periodSeconds: 5
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 6
ports:
- name: http
containerPort: 8080
protocol: TCP
resources:
{}
serviceAccountName: nginx-ingress-backend
terminationGracePeriodSeconds: 60
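Once the controller above is running, an Ingress such as the following routes traffic to your backend while TLS stays terminated at the NLB. This is only a sketch; the host, service name, and port are placeholders and not part of the original answer:
apiVersion: networking.k8s.io/v1beta1   # matches the 0.32.0 controller used above; use networking.k8s.io/v1 on newer clusters
kind: Ingress
metadata:
  name: abc
  namespace: default
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: abc
          servicePort: 9050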

Your annotation refers to an https port:
service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"
but your port is named http; change it to https:
spec:
  externalTrafficPolicy: Local
  ports:
  - name: https
    port: 443
    protocol: TCP
    targetPort: 9050
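Putting the answers together, the full Service could look like the sketch below (the certificate ARN is a placeholder):
kind: Service
apiVersion: v1
metadata:
  name: abc
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:<region>:<account-id>:certificate/<id>
  labels:
    app: abc
spec:
  externalTrafficPolicy: Local
  ports:
  - name: https       # must match the value given in aws-load-balancer-ssl-ports
    port: 443
    protocol: TCP
    targetPort: 9050
  selector:
    app: abc
  type: LoadBalancer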

Related

Issues installing Istio offline (on AWS EKS)

Due to security requirements, I need to install Istio offline on an AWS EKS cluster. I could not find any clear documentation or recommendations for an offline installation, so I used "istioctl manifest generate > generated-manifest.yml" to save the installation files, replaced the three images (proxy, pilot, prometheus) with my images hosted in AWS ECR, and applied the file with kubectl. I should mention that "istioctl manifest apply" times out for some reason. I also created the "istio-system" namespace manually before the installation.
I have used these 2 annotations for an internal Network Load Balancer:
service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
service.beta.kubernetes.io/aws-load-balancer-internal: "true"
The installation creates the CRDs, the pods, and the services. Here are my questions, if someone can point me in the right direction:
Where can I find some proper documentation for offline installation?
There is no istio-egressgateway with the default profile. Is the egress gateway not needed if I need to expose an application?
The NLB is created with 4 listeners (80, 443, 15443, 15021) but I can connect only to 15021; the others refuse the connection. Hence only the 15021 target group is healthy, and the other 3 are not.
Below is some of the relevant output and configuration:
kubectl describe service/istio-ingressgateway -n istio-system
------
Name: istio-ingressgateway
Namespace: istio-system
Labels: app=istio-ingressgateway
istio=ingressgateway
release=istio
Annotations: service.beta.kubernetes.io/aws-load-balancer-internal: true
service.beta.kubernetes.io/aws-load-balancer-type: nlb
Selector: app=istio-ingressgateway,istio=ingressgateway
Type: LoadBalancer
IP Family Policy: SingleStack
IP Families: IPv4
IP: 172.20.212.217
IPs: 172.20.212.217
LoadBalancer Ingress: aaaa-bbb.elb.eu-west-2.amazonaws.com
Port: status-port 15021/TCP
TargetPort: 15021/TCP
NodePort: status-port 32094/TCP
Endpoints: 10.0.35.41:15021
Port: http2 80/TCP
TargetPort: 80/TCP
NodePort: http2 30890/TCP
Endpoints: 10.0.35.41:80
Port: https 443/TCP
TargetPort: 443/TCP
NodePort: https 32039/TCP
Endpoints: 10.0.35.41:443
Port: tls 15443/TCP
TargetPort: 15443/TCP
NodePort: tls 30690/TCP
Endpoints: 10.0.35.41:15443
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
sh-4.2$ kubectl describe service/istiod -n istio-system
Name: istiod
Namespace: istio-system
Labels: app=istiod
istio=pilot
istio.io/rev=default
release=istio
Annotations: <none>
Selector: app=istiod,istio=pilot
Type: ClusterIP
IP Family Policy: SingleStack
IP Families: IPv4
IP: 172.20.25.159
IPs: 172.20.25.159
Port: grpc-xds 15010/TCP
TargetPort: 15010/TCP
Endpoints: 10.0.43.252:15010
Port: https-dns 15012/TCP
TargetPort: 15012/TCP
Endpoints: 10.0.43.252:15012
Port: https-webhook 443/TCP
TargetPort: 15017/TCP
Endpoints: 10.0.43.252:15017
Port: http-monitoring 15014/TCP
TargetPort: 15014/TCP
Endpoints: 10.0.43.252:15014
Port: dns-tls 853/TCP
TargetPort: 15053/TCP
Endpoints: 10.0.43.252:15053
Session Affinity: None
Events: <none>
kind: CustomResourceDefinition
apiVersion: apiextensions.k8s.io/v1beta1
metadata:
name: adapters.config.istio.io
labels:
app: mixer
package: adapter
istio: mixer-adapter
chart: istio
heritage: Tiller
release: istio
annotations:
"helm.sh/resource-policy": keep
spec:
group: config.istio.io
names:
kind: adapter
plural: adapters
singular: adapter
categories:
- istio-io
- policy-istio-io
scope: Namespaced
subresources:
status: {}
versions:
- name: v1alpha2
served: true
storage: true
---
kind: CustomResourceDefinition
apiVersion: apiextensions.k8s.io/v1beta1
metadata:
name: templates.config.istio.io
labels:
app: mixer
package: template
istio: mixer-template
chart: istio
heritage: Tiller
release: istio
annotations:
"helm.sh/resource-policy": keep
spec:
group: config.istio.io
names:
kind: template
plural: templates
singular: template
categories:
- istio-io
- policy-istio-io
scope: Namespaced
subresources:
status: {}
versions:
- name: v1alpha2
served: true
storage: true
---
# Cni component is disabled.
# EgressGateways istio-egressgateway component is disabled.
# Resources for IngressGateways component
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
labels:
app: istio-ingressgateway
istio: ingressgateway
release: istio
name: istio-ingressgateway
namespace: istio-system
spec:
maxReplicas: 5
metrics:
- resource:
name: cpu
targetAverageUtilization: 80
type: Resource
minReplicas: 1
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: istio-ingressgateway
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: istio-ingressgateway
istio: ingressgateway
release: istio
name: istio-ingressgateway
namespace: istio-system
spec:
selector:
matchLabels:
app: istio-ingressgateway
istio: ingressgateway
strategy:
rollingUpdate:
maxSurge: 100%
maxUnavailable: 25%
template:
metadata:
annotations:
sidecar.istio.io/inject: "false"
labels:
app: istio-ingressgateway
chart: gateways
heritage: Tiller
istio: ingressgateway
release: istio
service.istio.io/canonical-name: istio-ingressgateway
service.istio.io/canonical-revision: latest
spec:
affinity:
nodeAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- preference:
matchExpressions:
- key: beta.kubernetes.io/arch
operator: In
values:
- amd64
weight: 2
- preference:
matchExpressions:
- key: beta.kubernetes.io/arch
operator: In
values:
- ppc64le
weight: 2
- preference:
matchExpressions:
- key: beta.kubernetes.io/arch
operator: In
values:
- s390x
weight: 2
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: beta.kubernetes.io/arch
operator: In
values:
- amd64
- ppc64le
- s390x
containers:
- args:
- proxy
- router
- --domain
- $(POD_NAMESPACE).svc.cluster.local
- --proxyLogLevel=warning
- --proxyComponentLogLevel=misc:error
- --log_output_level=default:info
- --serviceCluster
- istio-ingressgateway
- --trust-domain=cluster.local
env:
- name: JWT_POLICY
value: third-party-jwt
- name: PILOT_CERT_PROVIDER
value: istiod
- name: CA_ADDR
value: istiod.istio-system.svc:15012
- name: NODE_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: spec.nodeName
- name: POD_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
- name: INSTANCE_IP
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: status.podIP
- name: HOST_IP
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: status.hostIP
- name: SERVICE_ACCOUNT
valueFrom:
fieldRef:
fieldPath: spec.serviceAccountName
- name: CANONICAL_SERVICE
valueFrom:
fieldRef:
fieldPath: metadata.labels['service.istio.io/canonical-name']
- name: CANONICAL_REVISION
valueFrom:
fieldRef:
fieldPath: metadata.labels['service.istio.io/canonical-revision']
- name: ISTIO_META_WORKLOAD_NAME
value: istio-ingressgateway
- name: ISTIO_META_OWNER
value: kubernetes://apis/apps/v1/namespaces/istio-system/deployments/istio-ingressgateway
- name: ISTIO_META_MESH_ID
value: cluster.local
- name: ISTIO_META_ROUTER_MODE
value: sni-dnat
- name: ISTIO_META_CLUSTER_ID
value: Kubernetes
image: 777777777777.dkr.ecr.eu-west-2.amazonaws.com/istio-proxyv2:1.6.8
name: istio-proxy
ports:
- containerPort: 15021
- containerPort: 8080
- containerPort: 8443
- containerPort: 15443
- containerPort: 15011
- containerPort: 15012
- containerPort: 8060
- containerPort: 853
- containerPort: 15090
name: http-envoy-prom
protocol: TCP
readinessProbe:
failureThreshold: 30
httpGet:
path: /healthz/ready
port: 15021
scheme: HTTP
initialDelaySeconds: 1
periodSeconds: 2
successThreshold: 1
timeoutSeconds: 1
resources:
limits:
cpu: 2000m
memory: 1024Mi
requests:
cpu: 100m
memory: 128Mi
volumeMounts:
- mountPath: /etc/istio/proxy
name: istio-envoy
- mountPath: /etc/istio/config
name: config-volume
- mountPath: /var/run/secrets/istio
name: istiod-ca-cert
- mountPath: /var/run/secrets/tokens
name: istio-token
readOnly: true
- mountPath: /var/run/ingress_gateway
name: ingressgatewaysdsudspath
- mountPath: /etc/istio/pod
name: podinfo
- mountPath: /etc/istio/ingressgateway-certs
name: ingressgateway-certs
readOnly: true
- mountPath: /etc/istio/ingressgateway-ca-certs
name: ingressgateway-ca-certs
readOnly: true
serviceAccountName: istio-ingressgateway-service-account
volumes:
- configMap:
name: istio-ca-root-cert
name: istiod-ca-cert
- downwardAPI:
items:
- fieldRef:
fieldPath: metadata.labels
path: labels
- fieldRef:
fieldPath: metadata.annotations
path: annotations
name: podinfo
- emptyDir: {}
name: istio-envoy
- emptyDir: {}
name: ingressgatewaysdsudspath
- name: istio-token
projected:
sources:
- serviceAccountToken:
audience: istio-ca
expirationSeconds: 43200
path: istio-token
- configMap:
name: istio
optional: true
name: config-volume
- name: ingressgateway-certs
secret:
optional: true
secretName: istio-ingressgateway-certs
- name: ingressgateway-ca-certs
secret:
optional: true
secretName: istio-ingressgateway-ca-certs
---
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
name: istio-ingressgateway
namespace: istio-system
labels:
app: istio-ingressgateway
istio: ingressgateway
release: istio
spec:
minAvailable: 1
selector:
matchLabels:
app: istio-ingressgateway
istio: ingressgateway
release: istio
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: istio-ingressgateway-sds
namespace: istio-system
labels:
release: istio
rules:
- apiGroups: [""]
resources: ["secrets"]
verbs: ["get", "watch", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: istio-ingressgateway-sds
namespace: istio-system
labels:
release: istio
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: istio-ingressgateway-sds
subjects:
- kind: ServiceAccount
name: istio-ingressgateway-service-account
---
apiVersion: v1
kind: Service
metadata:
annotations:
service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
service.beta.kubernetes.io/aws-load-balancer-internal: "true"
labels:
app: istio-ingressgateway
istio: ingressgateway
release: istio
name: istio-ingressgateway
namespace: istio-system
spec:
ports:
- name: status-port
port: 15021
targetPort: 15021
- name: http2
port: 80
targetPort: 80
- name: https
port: 443
targetPort: 443
- name: tls
port: 15443
targetPort: 15443
selector:
app: istio-ingressgateway
istio: ingressgateway
type: LoadBalancer
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: istio-ingressgateway-service-account
namespace: istio-system
labels:
app: istio-ingressgateway
istio: ingressgateway
release: istio
---
# IstiodRemote component is disabled.
# Resources for Pilot component
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
name: istiod
namespace: istio-system
labels:
app: istiod
release: istio
istio.io/rev: default
spec:
maxReplicas: 5
minReplicas: 1
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: istiod
metrics:
- type: Resource
resource:
name: cpu
targetAverageUtilization: 80
---

Zonal Network Endpoint Group unhealthy after provisioning Ingress with managed cert via GKE

I am seeing "Zonal network endpoint group unhealthy" after configuring an Ingress with a managed certificate in GCP via the following manifests:
# kubernetes/backstage.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: backstage
namespace: backstage
spec:
replicas: 1
selector:
matchLabels:
app: backstage
template:
metadata:
labels:
app: backstage
spec:
containers:
- name: backstage
image: australia-southeast1-docker.pkg.dev/acme-dev-tooling/acme-docker/backstage:prd-v.0.35
imagePullPolicy: IfNotPresent
ports:
- name: http
containerPort: 7007
envFrom:
- secretRef:
name: postgres-secrets
- secretRef:
name: backstage-secrets
---
apiVersion: v1
kind: Service
metadata:
name: backstage
namespace: backstage
annotations:
cloud.google.com/backend-config: '{"default": "backstage-ingress-backendconfig"}'
spec:
selector:
app: backstage
ports:
- name: http
protocol: TCP
port: 80
type: NodePort
---
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
name: backstage-ingress-backendconfig
spec:
healthCheck:
checkIntervalSec: 15
type: HTTP
requestPath: /
---
apiVersion: networking.gke.io/v1
kind: ManagedCertificate
metadata:
name: tools-managed-cert-backstage
namespace: backstage
spec:
domains:
- tools.backstage.acme-uat.com
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: backstage-ingress
namespace: backstage
annotations:
kubernetes.io/ingress.global-static-ip-name: "tools-backstage-external-ip"
networking.gke.io/managed-certificates: tools-managed-cert-backstage
kubernetes.io/ingress.class: "gce"
spec:
defaultBackend:
service:
name: backstage
port:
number: 80
---
apiVersion: v1
kind: Namespace
metadata:
name: backstage
GCP provisions an L7 HTTPS load balancer that cannot reach the GKE cluster because the zonal network endpoint group is unhealthy.
The ingress reads:
All backends are in an UNHEALTHY state.
Is there something I am missing? Does the GKE ingress configure the firewall? I've looked at the rules; there are rules for 130.211.0.0/22 and 35.191.0.0/16, which are the health-check source ranges.
logs/compute.googleapis.com%2Fhealthchecks yields no probe results, despite logging being enabled.
Any help would be much appreciated.
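One way to double-check the health-check firewall rules mentioned above is with gcloud; a sketch, not from the original question:
# List ingress rules and look for Google's health-check source ranges
gcloud compute firewall-rules list \
  --format="table(name,network,sourceRanges.list():label=SRC_RANGES,allowed[].map().firewall_rule().list():label=ALLOW)" \
  | grep -E '130\.211\.0\.0/22|35\.191\.0\.0/16'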
UPDATE: the above is fixed per the comments; the following isn't working:
kind: Service
metadata:
name: argocd-server
namespace: argocd
labels:
app.kubernetes.io/component: server
app.kubernetes.io/instance: argocd
app.kubernetes.io/managed-by: pulumi
app.kubernetes.io/name: argocd-server
app.kubernetes.io/part-of: argocd
argocd.argoproj.io/instance: argocd
helm.sh/chart: argo-cd-3.29.5
annotations:
cloud.google.com/backend-config: '{"default": "argocd-ingress-backendconfig"}'
cloud.google.com/neg: '{"ingress": true}'
cloud.google.com/neg-status: >-
{"network_endpoint_groups":{"80":"k8s1-20a3d3ad-argocd-argocd-server-80-c2ec22fa"},"zones":["australia-southeast1-a"]}
kubectl.kubernetes.io/last-applied-configuration: >
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/backend-config":"{\"default\":
\"argocd-ingress-backendconfig\"}","cloud.google.com/neg":"{\"ingress\":
true}"},"labels":{"app.kubernetes.io/component":"server","app.kubernetes.io/instance":"argocd","app.kubernetes.io/managed-by":"pulumi","app.kubernetes.io/name":"argocd-server","app.kubernetes.io/part-of":"argocd","argocd.argoproj.io/instance":"argocd","helm.sh/chart":"argo-cd-3.29.5"},"name":"argocd-server","namespace":"argocd"},"spec":{"ports":[{"name":"http","port":80,"protocol":"TCP","targetPort":8080},{"name":"https","port":443,"protocol":"TCP","targetPort":"http"}],"selector":{"app.kubernetes.io/instance":"argocd","app.kubernetes.io/name":"argocd-server"},"type":"ClusterIP"}}
status:
loadBalancer: {}
spec:
ports:
- name: http
protocol: TCP
port: 80
targetPort: 8080
- name: https
protocol: TCP
port: 443
targetPort: http
selector:
app.kubernetes.io/instance: argocd
app.kubernetes.io/name: argocd-server
clusterIP: 10.184.10.20
clusterIPs:
- 10.184.10.20
type: ClusterIP
sessionAffinity: None
ipFamilies:
- IPv4
ipFamilyPolicy: SingleStack
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: argocd-server
namespace: argocd
uid: fee5f91c-b431-4b8c-ab10-64daa02ec729
resourceVersion: '108355'
generation: 3
creationTimestamp: '2022-01-20T00:06:05Z'
labels:
app.kubernetes.io/component: server
app.kubernetes.io/instance: argocd
app.kubernetes.io/managed-by: pulumi
app.kubernetes.io/name: argocd-server
app.kubernetes.io/part-of: argocd
app.kubernetes.io/version: v2.2.2
helm.sh/chart: argo-cd-3.30.1
annotations:
deployment.kubernetes.io/revision: '3'
kubectl.kubernetes.io/last-applied-configuration: >
{"apiVersion":"apps/v1","kind":"Deployment","metadata":{"labels":{"app.kubernetes.io/component":"server","app.kubernetes.io/instance":"argocd","app.kubernetes.io/managed-by":"pulumi","app.kubernetes.io/name":"argocd-server","app.kubernetes.io/part-of":"argocd","app.kubernetes.io/version":"v2.2.2","helm.sh/chart":"argo-cd-3.30.1"},"name":"argocd-server","namespace":"argocd"},"spec":{"replicas":1,"revisionHistoryLimit":5,"selector":{"matchLabels":{"app.kubernetes.io/instance":"argocd","app.kubernetes.io/name":"argocd-server"}},"template":{"metadata":{"labels":{"app.kubernetes.io/component":"server","app.kubernetes.io/instance":"argocd","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/name":"argocd-server","app.kubernetes.io/part-of":"argocd","app.kubernetes.io/version":"v2.2.2","helm.sh/chart":"argo-cd-3.30.1"}},"spec":{"containers":[{"command":["argocd-server","--staticassets","/shared/app","--repo-server","argocd-repo-server:8081","--dex-server","http://argocd-dex-server:5556","--logformat","text","--loglevel","info","--redis","argocd-redis:6379"],"image":"quay.io/argoproj/argocd:v2.2.2","imagePullPolicy":"IfNotPresent","livenessProbe":{"failureThreshold":3,"httpGet":{"path":"/healthz","port":8080},"initialDelaySeconds":10,"periodSeconds":10,"successThreshold":1,"timeoutSeconds":1},"name":"server","ports":[{"containerPort":8080,"name":"server","protocol":"TCP"}],"readinessProbe":{"failureThreshold":3,"httpGet":{"path":"/healthz","port":8080},"initialDelaySeconds":10,"periodSeconds":10,"successThreshold":1,"timeoutSeconds":1},"resources":{},"volumeMounts":[{"mountPath":"/app/config/ssh","name":"ssh-known-hosts"},{"mountPath":"/app/config/server/tls","name":"argocd-repo-server-tls"},{"mountPath":"/home/argocd","name":"plugins-home"},{"mountPath":"/tmp","name":"tmp-dir"}]}],"serviceAccountName":"argocd-server","volumes":[{"emptyDir":{},"name":"static-files"},{"emptyDir":{},"name":"tmp-dir"},{"configMap":{"name":"argocd-ssh-known-hosts-cm"},"name":"ssh-known-hosts"},{"name":"argocd-repo-server-tls","secret":{"items":[{"key":"tls.crt","path":"tls.crt"},{"key":"tls.key","path":"tls.key"},{"key":"ca.crt","path":"ca.crt"}],"optional":true,"secretName":"argocd-repo-server-tls"}},{"emptyDir":{},"name":"plugins-home"}]}}}}
spec:
replicas: 1
selector:
matchLabels:
app.kubernetes.io/instance: argocd
app.kubernetes.io/name: argocd-server
template:
metadata:
creationTimestamp: null
labels:
app.kubernetes.io/component: server
app.kubernetes.io/instance: argocd
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: argocd-server
app.kubernetes.io/part-of: argocd
app.kubernetes.io/version: v2.2.2
helm.sh/chart: argo-cd-3.30.1
annotations:
kubectl.kubernetes.io/restartedAt: '2022-01-20T15:44:27+11:00'
spec:
volumes:
- name: static-files
emptyDir: {}
- name: tmp-dir
emptyDir: {}
- name: ssh-known-hosts
configMap:
name: argocd-ssh-known-hosts-cm
defaultMode: 420
- name: argocd-repo-server-tls
secret:
secretName: argocd-repo-server-tls
items:
- key: tls.crt
path: tls.crt
- key: tls.key
path: tls.key
- key: ca.crt
path: ca.crt
defaultMode: 420
optional: true
- name: plugins-home
emptyDir: {}
containers:
- name: server
image: quay.io/argoproj/argocd:v2.2.2
command:
- argocd-server
- '--staticassets'
- /shared/app
- '--repo-server'
- argocd-repo-server:8081
- '--dex-server'
- http://argocd-dex-server:5556
- '--logformat'
- text
- '--loglevel'
- info
- '--redis'
- argocd-redis:6379
ports:
- name: server
containerPort: 8080
protocol: TCP
resources: {}
volumeMounts:
- name: ssh-known-hosts
mountPath: /app/config/ssh
- name: argocd-repo-server-tls
mountPath: /app/config/server/tls
- name: plugins-home
mountPath: /home/argocd
- name: tmp-dir
mountPath: /tmp
livenessProbe:
httpGet:
path: /
port: 8080
scheme: HTTP
initialDelaySeconds: 10
timeoutSeconds: 1
periodSeconds: 10
successThreshold: 1
failureThreshold: 3
readinessProbe:
httpGet:
path: /
port: 8080
scheme: HTTP
initialDelaySeconds: 10
timeoutSeconds: 1
periodSeconds: 10
successThreshold: 1
failureThreshold: 3
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
imagePullPolicy: IfNotPresent
restartPolicy: Always
terminationGracePeriodSeconds: 30
dnsPolicy: ClusterFirst
serviceAccountName: argocd-server
serviceAccount: argocd-server
securityContext: {}
schedulerName: default-scheduler
strategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 25%
maxSurge: 25%
revisionHistoryLimit: 5
progressDeadlineSeconds: 600
Cheers
# Here is a workaround for Google Cloud with ArgoCD v2.5.2
# cloudflare-key.yaml
---
apiVersion: v1
kind: Secret
metadata:
name: cloudflare-key
namespace: cert-manager
type: Opaque
stringData:
key: xxxxxxxxxxxxxxxx
---
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
name: letsencrypt-staging
spec:
acme:
email: zia#mydomain.com
server: https://acme-staging-v02.api.letsencrypt.org/directory
privateKeySecretRef:
name: letsencrypt-staging
solvers:
- selector: {}
dns01:
cloudflare:
email: zia#mydomain.com
apiKeySecretRef:
name: cloudflare-key
key: key
---
apiVersion: v1
kind: Service
metadata:
labels:
app.kubernetes.io/name: argocd-server
app.kubernetes.io/part-of: argocd
app.kubernetes.io/component: server
annotations:
cloud.google.com/neg: '{"ingress": true, "exposed_ports": {"8080":{}}}'
beta.cloud.google.com/backend-config: '{"default": "argocd-backend-config"}'
name: argocd-server
spec:
ports:
- name: http8080
protocol: TCP
port: 8080
targetPort: 8080
- name: http
protocol: TCP
port: 80
targetPort: 8080
- name: https
protocol: TCP
port: 443
targetPort: 8080
selector:
app.kubernetes.io/name: argocd-server
---
# backendconfig.yaml
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
name: argocd-backend-config
namespace: argocd
spec:
healthCheck:
checkIntervalSec: 30
timeoutSec: 10
healthyThreshold: 1
unhealthyThreshold: 5
type: HTTP
requestPath: /healthz
port: 8080
---
# FrontendConfig.yaml
apiVersion: networking.gke.io/v1beta1
kind: FrontendConfig
metadata:
name: argocd-frontend-config
namespace: argocd
spec:
redirectToHttps:
enabled: true
---
# ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: argocd-server-ingress
namespace: argocd
annotations:
kubernetes.io/ingress.class: gce
cert-manager.io/cluster-issuer: letsencrypt-staging
kubernetes.io/tls-acme: "true"
kubernetes.io/ingress.global-static-ip-name: "argocd-dev"
networking.gke.io/v1beta1.FrontendConfig: argocd-frontend-config
spec:
rules:
- host: argocd-dev.mydomain.com
http:
paths:
- backend:
service:
name: argocd-server
port:
name: http
path: "/"
pathType: Prefix
tls:
- hosts:
- argocd-dev.mydomain.com
secretName: argocd-secret #don't change, this is provided by ArgoCD
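To verify the pieces above after applying them, something like this can help (a sketch; the resource names follow the manifests above and assume cert-manager is installed):
# Is the ClusterIssuer ready and has the certificate been issued?
kubectl get clusterissuer letsencrypt-staging
kubectl -n argocd get certificate,certificaterequest
# Did the GCE load balancer pick up the backend config and mark the NEG healthy?
kubectl -n argocd describe ingress argocd-server-ingress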

Creating sidecar Metricbeat with AWS EKS Fargate

I'm trying to create a deployment on AWS EKS with my application and Metricbeat as a sidecar, so I have the following YAML:
---
apiVersion: v1
kind: ConfigMap
metadata:
name: metricbeat-modules
namespace: testframework
labels:
k8s-app: metricbeat
data:
kubernetes.yml: |-
- module: kubernetes
metricsets:
- node
- system
- pod
- container
- volume
period: 10s
host: ${NODE_NAME}
hosts: [ "https://${NODE_IP}:10250" ]
bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
ssl.verification_mode: "none"
---
apiVersion: v1
kind: ConfigMap
metadata:
name: metricbeat-config
namespace: testframework
labels:
k8s-app: metricbeat
data:
metricbeat.yml: |-
processors:
- add_cloud_metadata:
- add_tags:
tags: ["EKSCORP_DEV"]
target: "cluster_test"
metricbeat.config.modules:
path: ${path.config}/modules.d/*.yml
reload.enabled: false
output.elasticsearch:
index: "metricbeat-k8s-%{[agent.version]}-%{+yyyy.MM.dd}"
setup.template.name: "metricbeat-k8s"
setup.template.pattern: "metricbeat-k8s-*"
setup.ilm.enabled: false
cloud.id: ${ELASTIC_CLOUD_ID}
cloud.auth: ${ELASTIC_CLOUD_AUTH}
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: testframework-initializr-deploy
namespace: testframework
spec:
replicas: 1
selector:
matchLabels:
app: testframework-initializr
template:
metadata:
labels:
app: testframework-initializr
annotations:
co.elastic.logs/enabled: 'true'
co.elastic.logs/json.keys_under_root: 'true'
co.elastic.logs/json.add_error_key: 'true'
co.elastic.logs/json.message_key: 'message'
spec:
containers:
- name: testframework-initializr
image: XXXXX.dkr.ecr.us-east-1.amazonaws.com/testframework-initializr
ports:
- containerPort: 8080
livenessProbe:
httpGet:
path: /health/liveness
port: 8080
initialDelaySeconds: 300
periodSeconds: 10
timeoutSeconds: 60
failureThreshold: 5
readinessProbe:
httpGet:
port: 8080
path: /health
initialDelaySeconds: 300
periodSeconds: 10
timeoutSeconds: 10
failureThreshold: 3
- name: metricbeat-sidecar
image: docker.elastic.co/beats/metricbeat:7.12.0
args: [
"-c", "/etc/metricbeat.yml",
"-e",
"-system.hostfs=/hostfs"
]
env:
- name: ELASTIC_CLOUD_ID
value: xxxx
- name: ELASTIC_CLOUD_AUTH
value: xxxx
- name: NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
- name: NODE_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
securityContext:
runAsUser: 0
volumeMounts:
- name: config
mountPath: /etc/metricbeat.yml
readOnly: true
subPath: metricbeat.yml
- name: modules
mountPath: /usr/share/metricbeat/modules.d
readOnly: true
volumes:
- name: config
configMap:
defaultMode: 0640
name: metricbeat-config
- name: modules
configMap:
defaultMode: 0640
name: metricbeat-modules
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
name: prom-admin
rules:
- apiGroups: [""]
resources: ["pods", "nodes"]
verbs: ["get", "watch", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: prom-rbac
subjects:
- kind: ServiceAccount
name: default
namespace: testframework
roleRef:
kind: ClusterRole
name: prom-admin
apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: Service
metadata:
name: testframework-initializr-service
namespace: testframework
spec:
type: NodePort
ports:
- port: 80
targetPort: 8080
selector:
app: testframework-initializr
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: testframework-initializr-ingress
annotations:
kubernetes.io/ingress.class: alb
alb.ingress.kubernetes.io/scheme: internal
alb.ingress.kubernetes.io/target-type: ip
spec:
rules:
- host: dev-initializr.test.net
http:
paths:
- backend:
serviceName: testframework-initializr-service
servicePort: 80
Well, after the pod started on AWS EKS, I got the following error in the Metricbeat container:
INFO module/wrapper.go:259 Error fetching data for metricset kubernetes.system: error doing HTTP request to fetch 'system' Metricset data: error making http request: Get "https://IP_FROM_FARGATE_HERE:10250/stats/summary": dial tcp IP_FROM_FARGATE_HERE:10250: connect: connection refused
I tried to use "NODE_NAME" instead of "NODE_IP", but I got "no such host". Any idea how I can fix this?

How to configure NGINX in AWS

I have set up a k8s cluster on AWS EC2 instances with 1 master and 2 worker nodes using kops.
I deployed 2 services, and they are reachable in the browser through the LoadBalancer service type.
Now I have installed NGINX, but through the LB IP I am not able to hit my service; it gives a 504 GATEWAY_TIME_OUT. I googled it but had no success. Where am I going wrong? Here is my sample code... [AWS FREE ACCOUNT]
deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: $APP_NAME
name: $APP_NAME
namespace: $NAMESPACE
spec:
replicas: 1
selector:
matchLabels:
app: $APP_NAME
template:
metadata:
labels:
app: $APP_NAME
spec:
imagePullSecrets:
- name: $IMG_PULL_SECRET
containers:
- image: $IMAGE_REG/$APP_NAME:$IMAGE_TAG
name: $APP_NAME
imagePullPolicy: Always
ports:
- containerPort: ${CONTAINER_PORT}
protocol: TCP
env:
- name: spring.cloud.config.uri
value: 'http://config-server-service'
service.yml
apiVersion: v1
kind: Service
metadata:
labels:
app: $APP_NAME
name: $APP_NAME
namespace: $NAMESPACE
spec:
type: $SERVICE_TYPE
#type: $SERVICE_TYPE
ports:
- port: 80
targetPort: ${CONTAINER_PORT}
protocol: TCP
selector:
app: $APP_NAME
ingress.yml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: ${APP_NAME}
namespace: $NAMESPACE
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /
nginx.ingress.kubernetes.io/affinity: cookie
nginx.ingress.kubernetes.io/session-cookie-name: JSESSIONID
nginx.ingress.kubernetes.io/ssl-passthrough: "false"
nginx.ingress.kubernetes.io/ssl-redirect: "false"
kubernetes.io/ingress.class: "nginx"
kubernetes.io/ingress.allow-http: "true"
# kubernetes.io/ingress.global-static-ip-name: "my-gateway"
spec:
rules:
- http:
paths:
- path: /${APP_NAME}
backend:
serviceName: ${APP_NAME}
servicePort: 80
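A 504 from the NGINX ingress usually means the controller cannot reach the backend pods, so it is worth checking the Service endpoints and the controller logs. A sketch (not from the original post; substitute real values for the placeholders, and note the controller label depends on how NGINX was installed):
# Does the Service have endpoints, i.e. does its selector match running pods?
kubectl get endpoints $APP_NAME -n $NAMESPACE
# Is targetPort really the port the container listens on?
kubectl describe svc $APP_NAME -n $NAMESPACE
# What does the ingress controller log when the request times out?
kubectl logs -n ingress-nginx -l app.kubernetes.io/name=ingress-nginx --tail=100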

istio: ingress with grpc and http

I have a service listening on two ports; one is http, the other is grpc.
I would like to set up an ingress that can route to both these ports, with the same host.
The load balancer would redirect to the http port if http/1.1 is used, and to the grpc port if h2 is used.
Is there a way to do that with istio ?
I made a hello-world example demonstrating what I am trying to achieve:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: hello-world
namespace: dev
spec:
replicas: 1
template:
metadata:
annotations:
alpha.istio.io/sidecar: injected
pod.beta.kubernetes.io/init-containers: '[{"args":["-p","15001","-u","1337","-i","172.20.0.0/16"],"image":"docker.io/istio/init:0.1","imagePullPolicy":"Always","name":"init","securityContext":{"capabilities":{"add":["NET_ADMIN"]}}}]'
labels:
app: hello-world
spec:
containers:
- name: grpc-server
image: aguilbau/hello-world-grpc:latest
ports:
- name: grpc
containerPort: 50051
- name: http-server
image: nginx:1.7.9
ports:
- name: http
containerPort: 80
- name: istio-proxy
args:
- proxy
- sidecar
- -v
- "2"
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
image: docker.io/istio/proxy:0.1
imagePullPolicy: Always
resources: {}
securityContext:
runAsUser: 1337
---
apiVersion: v1
kind: Service
metadata:
name: hello-world
namespace: dev
spec:
ports:
- name: grpc
port: 50051
- name: http
port: 80
selector:
app: hello-world
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: hello-world-http
namespace: dev
annotations:
kubernetes.io/ingress.class: "istio"
spec:
rules:
- host: hello-world
http:
paths:
- backend:
serviceName: hello-world
servicePort: 80
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: hello-world-grpc
namespace: dev
annotations:
kubernetes.io/ingress.class: "istio"
spec:
rules:
- host: hello-world
http:
paths:
- backend:
serviceName: hello-world
servicePort: 50051
---
I'm a bit late to the party, but for those of you stumbling on this post, I think you can do this with very little difficulty. I'm going to assume you have istio installed on a kubernetes cluster and are happy using the default istio-ingressgateway:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: hello-world
namespace: dev
spec:
replicas: 1
template:
metadata:
annotations:
alpha.istio.io/sidecar: injected
pod.beta.kubernetes.io/init-containers: '[{"args":["-p","15001","-u","1337","-i","172.20.0.0/16"],"image":"docker.io/istio/init:0.1","imagePullPolicy":"Always","name":"init","securityContext":{"capabilities":{"add":["NET_ADMIN"]}}}]'
labels:
app: hello-world
spec:
containers:
- name: grpc-server
image: aguilbau/hello-world-grpc:latest
ports:
- name: grpc
containerPort: 50051
- name: http-server
image: nginx:1.7.9
ports:
- name: http
containerPort: 80
- name: istio-proxy
args:
- proxy
- sidecar
- -v
- "2"
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
image: docker.io/istio/proxy:0.1
imagePullPolicy: Always
resources: {}
securityContext:
runAsUser: 1337
---
apiVersion: v1
kind: Service
metadata:
name: hello-world
namespace: dev
spec:
ports:
- name: grpc
port: 50051
- name: http
port: 80
selector:
app: hello-world
---
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: hello-world-istio-gate
namespace: dev
spec:
selector:
istio: ingressgateway
servers:
- port:
number: 80
name: http
protocol: HTTP
hosts:
- "*"
- port:
number: 50051
name: grpc
protocol: GRPC
hosts:
- "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: hello-world-istio-vsvc
namespace: dev
spec:
hosts:
- "*"
gateways:
- hello-world-istio-gate
http:
- match:
- port: 80
route:
- destination:
host: hello-world
port:
number: 80
tcp:
- match:
- port: 50051
route:
- destination:
host: hello-world
port:
number: 50051
The above configuration omits your two Ingresses, and instead includes:
Your deployment
Your service
An istio gateway
An istio virtualservice
There is an important extra piece not shown, and I alluded to it earlier when talking about using the default ingressgateway. The following line, found in the "hello-world-istio-gate" Gateway, gives a clue:
istio: ingressgateway
This refers to a pod in the 'istio-system' namespace that is usually installed by default called 'istio-ingressgateway' - and this pod is exposed by a service also called 'istio-ingressgateway.' You will need to open up ports on the 'istio-ingressgateway' service.
As an example, I edited my (default) ingressgateway and added a port opening for HTTP and GRPC. The result is the following (edited for length) yaml code:
dampersand#kubetest1:~/k8s$ kubectl get service istio-ingressgateway -n istio-system -o yaml
apiVersion: v1
kind: Service
metadata:
<omitted for length>
labels:
app: istio-ingressgateway
chart: gateways-1.0.3
heritage: Tiller
istio: ingressgateway
release: istio
name: istio-ingressgateway
namespace: istio-system
<omitted for length>
ports:
- name: http2
nodePort: 31380
port: 80
protocol: TCP
targetPort: 80
<omitted for length>
- name: grpc
nodePort: 30000
port: 50051
protocol: TCP
targetPort: 50051
selector:
app: istio-ingressgateway
istio: ingressgateway
type: NodePort
The easiest way to make the above change (for testing purposes) is to use:
kubectl edit svc -n istio-system istio-ingressgateway
For production purposes, it's probably better to edit your helm chart or your istio.yaml file or whatever you initially used to set up the ingressgateway.
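If you prefer something scriptable over an interactive edit, the same change can be made with a JSON patch; a sketch, not part of the original answer:
# Append a gRPC port to the default ingress gateway Service (same change as the kubectl edit above)
kubectl -n istio-system patch svc istio-ingressgateway --type=json \
  -p='[{"op":"add","path":"/spec/ports/-","value":{"name":"grpc","port":50051,"protocol":"TCP","targetPort":50051}}]'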
As a quick aside, note that my test cluster has istio-ingressgateway set up as a NodePort, so what the above yaml file says is that my cluster is port forwarding 31380 -> 80 and 30000 -> 50051. You may (probably) have istio-ingressgateway set up as a LoadBalancer, which will be different... so plan accordingly.
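To sanity-check those mappings you can hit both NodePorts directly; a sketch that assumes a NodePort setup like mine, that grpcurl is installed, and that the gRPC server has reflection enabled (none of which the original post guarantees):
# $NODE_IP is the address of any cluster node
curl -s -o /dev/null -w '%{http_code}\n' http://$NODE_IP:31380/   # HTTP/1.1 -> nginx on port 80
grpcurl -plaintext $NODE_IP:30000 list                            # gRPC -> hello-world on port 50051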
Finally, the following blog post is some REALLY excellent background reading for the tools I've outlined in this post! https://blog.jayway.com/2018/10/22/understanding-istio-ingress-gateway-in-kubernetes/
You may be able to do something like that if you move the grpc-server and http-server containers into different pods with unique labels (i.e., different versions of the service, so to speak) and then, using Istio route rules behind the Ingress, split the traffic. A rule matching the header Upgrade: h2 could send traffic to the grpc version, and a default rule would send the rest of the traffic to the http one.
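With newer Istio APIs the same idea can be expressed as a header match in a VirtualService. The sketch below assumes the two containers have been split into separately-labelled deployments exposed as hello-world-http and hello-world-grpc (hypothetical names), and reuses the gateway from the earlier answer:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: hello-world-split
  namespace: dev
spec:
  hosts:
  - "*"
  gateways:
  - hello-world-istio-gate
  http:
  - match:
    - headers:
        upgrade:
          exact: h2        # exact value depends on how the client negotiates HTTP/2 (h2c upgrades send "h2c")
    route:
    - destination:
        host: hello-world-grpc
        port:
          number: 50051
  - route:                 # default: everything else goes to the plain HTTP server
    - destination:
        host: hello-world-http
        port:
          number: 80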