I have set up a number of containers on k8s; each pod runs one container. There is a reverse proxy pod that calls a service in a runtime container. I have set up two runtime pods, v1 and v2. My goal is to use Istio to route all traffic from the reverse proxy pod to the runtime pod v1.
I have configured Istio, and the screenshot below gives an idea of the environment.
[![enter image description here][1]][1]
My k8s yaml looks like this:
#Assumes create-docker-store-secret.sh used to create dockerlogin secret
#Assumes create-secrets.sh used to create key file, sam admin, and cfgsvc secrets
apiVersion: storage.k8s.io/v1beta1
# Create StorageClass with gidallocate=true to allow non-root user access to mount
# This is used by PostgreSQL container
kind: StorageClass
metadata:
name: ibmc-file-bronze-gid
labels:
kubernetes.io/cluster-service: "true"
provisioner: ibm.io/ibmc-file
parameters:
type: "Endurance"
iopsPerGB: "2"
sizeRange: "[1-12000]Gi"
mountOptions: nfsvers=4.1,hard
billingType: "hourly"
reclaimPolicy: "Delete"
classVersion: "2"
gidAllocate: "true"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: ldaplib
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 50M
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: ldapslapd
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 50M
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: ldapsecauthority
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 50M
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: postgresqldata
spec:
storageClassName: ibmc-file-bronze-gid
accessModes:
- ReadWriteMany
resources:
requests:
storage: 50M
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: isamconfig
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 50M
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: openldap
labels:
app: openldap
spec:
selector:
matchLabels:
app: openldap
replicas: 1
template:
metadata:
labels:
app: openldap
spec:
volumes:
- name: ldaplib
persistentVolumeClaim:
claimName: ldaplib
- name: ldapslapd
persistentVolumeClaim:
claimName: ldapslapd
- name: ldapsecauthority
persistentVolumeClaim:
claimName: ldapsecauthority
- name: openldap-keys
secret:
secretName: openldap-keys
containers:
- name: openldap
image: ibmcom/isam-openldap:9.0.7.0
ports:
- containerPort: 636
env:
- name: LDAP_DOMAIN
value: ibm.com
- name: LDAP_ADMIN_PASSWORD
value: Passw0rd
- name: LDAP_CONFIG_PASSWORD
value: Passw0rd
volumeMounts:
- mountPath: /var/lib/ldap
name: ldaplib
- mountPath: /etc/ldap/slapd.d
name: ldapslapd
- mountPath: /var/lib/ldap.secAuthority
name: ldapsecauthority
- mountPath: /container/service/slapd/assets/certs
name: openldap-keys
# This line is needed when running on Kubernetes 1.9.4 or above
args: [ "--copy-service"]
# useful for debugging startup issues - can run bash, then exec to the container and poke around
# command: [ "/bin/bash"]
# args: [ "-c", "while /bin/true ; do sleep 5; done" ]
# Just this line to get debug output from openldap startup
# args: [ "--loglevel" , "trace","--copy-service"]
---
# for external service access, see https://console.bluemix.net/docs/containers/cs_apps.html#cs_apps_public_nodeport
apiVersion: v1
kind: Service
metadata:
name: openldap
labels:
app: openldap
spec:
ports:
- port: 636
name: ldaps
protocol: TCP
selector:
app: openldap
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: postgresql
labels:
app: postgresql
spec:
selector:
matchLabels:
app: postgresql
replicas: 1
template:
metadata:
labels:
app: postgresql
spec:
securityContext:
runAsNonRoot: true
runAsUser: 70
fsGroup: 0
volumes:
- name: postgresqldata
persistentVolumeClaim:
claimName: postgresqldata
- name: postgresql-keys
secret:
secretName: postgresql-keys
containers:
- name: postgresql
image: ibmcom/isam-postgresql:9.0.7.0
ports:
- containerPort: 5432
env:
- name: POSTGRES_USER
value: postgres
- name: POSTGRES_PASSWORD
value: Passw0rd
- name: POSTGRES_DB
value: isam
- name: POSTGRES_SSL_KEYDB
value: /var/local/server.pem
- name: PGDATA
value: /var/lib/postgresql/data/db-files/
volumeMounts:
- mountPath: /var/lib/postgresql/data
name: postgresqldata
- mountPath: /var/local
name: postgresql-keys
# useful for debugging startup issues - can run bash, then exec to the container and poke around
# command: [ "/bin/bash"]
# args: [ "-c", "while /bin/true ; do sleep 5; done" ]
---
# for external service access, see https://console.bluemix.net/docs/containers/cs_apps.html#cs_apps_public_nodeport
apiVersion: v1
kind: Service
metadata:
name: postgresql
spec:
ports:
- port: 5432
name: postgresql
protocol: TCP
selector:
app: postgresql
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: isamconfig
labels:
app: isamconfig
spec:
selector:
matchLabels:
app: isamconfig
replicas: 1
template:
metadata:
labels:
app: isamconfig
spec:
securityContext:
runAsNonRoot: true
runAsUser: 6000
volumes:
- name: isamconfig
persistentVolumeClaim:
claimName: isamconfig
- name: isamconfig-logs
emptyDir: {}
containers:
- name: isamconfig
image: ibmcom/isam:9.0.7.1_IF4
volumeMounts:
- mountPath: /var/shared
name: isamconfig
- mountPath: /var/application.logs
name: isamconfig-logs
env:
- name: SERVICE
value: config
- name: CONTAINER_TIMEZONE
value: Europe/London
- name: ADMIN_PWD
valueFrom:
secretKeyRef:
name: samadmin
key: adminpw
readinessProbe:
tcpSocket:
port: 9443
initialDelaySeconds: 5
periodSeconds: 10
livenessProbe:
tcpSocket:
port: 9443
initialDelaySeconds: 120
periodSeconds: 20
# command: [ "/sbin/bootstrap.sh" ]
imagePullSecrets:
- name: dockerlogin
---
# for external service access, see https://console.bluemix.net/docs/containers/cs_apps.html#cs_apps_public_nodeport
apiVersion: v1
kind: Service
metadata:
name: isamconfig
spec:
# To make the LMI internet facing, make it a NodePort
type: NodePort
ports:
- port: 9443
name: isamconfig
protocol: TCP
# make this one statically allocated
nodePort: 30442
selector:
app: isamconfig
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: isamwrprp1
labels:
app: isamwrprp1
spec:
selector:
matchLabels:
app: isamwrprp1
replicas: 1
template:
metadata:
labels:
app: isamwrprp1
spec:
securityContext:
runAsNonRoot: true
runAsUser: 6000
volumes:
- name: isamconfig
emptyDir: {}
- name: isamwrprp1-logs
emptyDir: {}
containers:
- name: isamwrprp1
image: ibmcom/isam:9.0.7.1_IF4
ports:
- containerPort: 443
volumeMounts:
- mountPath: /var/shared
name: isamconfig
- mountPath: /var/application.logs
name: isamwrprp1-logs
env:
- name: SERVICE
value: webseal
- name: INSTANCE
value: rp1
- name: CONTAINER_TIMEZONE
value: Europe/London
- name: AUTO_RELOAD_FREQUENCY
value: "5"
- name: CONFIG_SERVICE_URL
value: https://isamconfig:9443/shared_volume
- name: CONFIG_SERVICE_USER_NAME
value: cfgsvc
- name: CONFIG_SERVICE_USER_PWD
valueFrom:
secretKeyRef:
name: configreader
key: cfgsvcpw
livenessProbe:
exec:
command:
- /sbin/health_check.sh
- livenessProbe
initialDelaySeconds: 10
periodSeconds: 10
timeoutSeconds: 2
readinessProbe:
exec:
command:
- /sbin/health_check.sh
initialDelaySeconds: 10
periodSeconds: 10
timeoutSeconds: 2
imagePullSecrets:
- name: dockerlogin
---
# for external service access, see https://console.bluemix.net/docs/containers/cs_apps.html#cs_apps_public_nodeport
apiVersion: v1
kind: Service
metadata:
name: isamwrprp1
spec:
type: NodePort
sessionAffinity: ClientIP
ports:
- port: 443
name: isamwrprp1
protocol: TCP
nodePort: 30443
selector:
app: isamwrprp1
---
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: isamwrpmobile
labels:
app: isamwrpmobile
spec:
selector:
matchLabels:
app: isamwrpmobile
replicas: 1
template:
metadata:
labels:
app: isamwrpmobile
spec:
securityContext:
runAsNonRoot: true
runAsUser: 6000
volumes:
- name: isamconfig
emptyDir: {}
- name: isamwrpmobile-logs
emptyDir: {}
containers:
- name: isamwrpmobile
image: ibmcom/isam:9.0.7.1_IF4
ports:
- containerPort: 443
volumeMounts:
- mountPath: /var/shared
name: isamconfig
- mountPath: /var/application.logs
name: isamwrpmobile-logs
env:
- name: SERVICE
value: webseal
- name: INSTANCE
value: mobile
- name: CONTAINER_TIMEZONE
value: Europe/London
- name: AUTO_RELOAD_FREQUENCY
value: "5"
- name: CONFIG_SERVICE_URL
value: https://isamconfig:9443/shared_volume
- name: CONFIG_SERVICE_USER_NAME
value: cfgsvc
- name: CONFIG_SERVICE_USER_PWD
valueFrom:
secretKeyRef:
name: configreader
key: cfgsvcpw
livenessProbe:
exec:
command:
- /sbin/health_check.sh
- livenessProbe
initialDelaySeconds: 10
periodSeconds: 10
timeoutSeconds: 2
readinessProbe:
exec:
command:
- /sbin/health_check.sh
initialDelaySeconds: 10
periodSeconds: 10
timeoutSeconds: 2
imagePullSecrets:
- name: dockerlogin
---
# for external service access, see https://console.bluemix.net/docs/containers/cs_apps.html#cs_apps_public_nodeport
apiVersion: v1
kind: Service
metadata:
name: isamwrpmobile
spec:
type: NodePort
sessionAffinity: ClientIP
ports:
- port: 443
name: isamwrpmobile
protocol: TCP
nodePort: 30444
selector:
app: isamwrpmobile
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: isamruntime-v1
labels:
app: isamruntime
spec:
selector:
matchLabels:
app: isamruntime
version: v1
replicas: 1
template:
metadata:
labels:
app: isamruntime
version: v1
spec:
securityContext:
runAsNonRoot: true
runAsUser: 6000
volumes:
- name: isamconfig
emptyDir: {}
- name: isamruntime-logs
emptyDir: {}
containers:
- name: isamruntime
image: ibmcom/isam:9.0.7.1_IF4
ports:
- containerPort: 443
volumeMounts:
- mountPath: /var/shared
name: isamconfig
- mountPath: /var/application.logs
name: isamruntime-logs
env:
- name: SERVICE
value: runtime
- name: CONTAINER_TIMEZONE
value: Europe/London
- name: AUTO_RELOAD_FREQUENCY
value: "5"
- name: CONFIG_SERVICE_URL
value: https://isamconfig:9443/shared_volume
- name: CONFIG_SERVICE_USER_NAME
value: cfgsvc
- name: CONFIG_SERVICE_USER_PWD
valueFrom:
secretKeyRef:
name: configreader
key: cfgsvcpw
livenessProbe:
exec:
command:
- /sbin/health_check.sh
- livenessProbe
initialDelaySeconds: 10
periodSeconds: 10
timeoutSeconds: 2
readinessProbe:
exec:
command:
- /sbin/health_check.sh
initialDelaySeconds: 10
periodSeconds: 10
timeoutSeconds: 2
imagePullSecrets:
- name: dockerlogin
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: isamruntime-v2
labels:
app: isamruntime
spec:
selector:
matchLabels:
app: isamruntime
version: v2
replicas: 1
template:
metadata:
labels:
app: isamruntime
version: v2
spec:
securityContext:
runAsNonRoot: true
runAsUser: 6000
volumes:
- name: isamconfig
emptyDir: {}
- name: isamruntime-logs
emptyDir: {}
containers:
- name: isamruntime
image: ibmcom/isam:9.0.7.1_IF4
ports:
- containerPort: 443
volumeMounts:
- mountPath: /var/shared
name: isamconfig
- mountPath: /var/application.logs
name: isamruntime-logs
env:
- name: SERVICE
value: runtime
- name: CONTAINER_TIMEZONE
value: Europe/London
- name: AUTO_RELOAD_FREQUENCY
value: "5"
- name: CONFIG_SERVICE_URL
value: https://isamconfig:9443/shared_volume
- name: CONFIG_SERVICE_USER_NAME
value: cfgsvc
- name: CONFIG_SERVICE_USER_PWD
valueFrom:
secretKeyRef:
name: configreader
key: cfgsvcpw
livenessProbe:
exec:
command:
- /sbin/health_check.sh
- livenessProbe
initialDelaySeconds: 10
periodSeconds: 10
timeoutSeconds: 2
readinessProbe:
exec:
command:
- /sbin/health_check.sh
initialDelaySeconds: 10
periodSeconds: 10
timeoutSeconds: 2
imagePullSecrets:
- name: dockerlogin
---
apiVersion: v1
kind: Service
metadata:
name: isamruntime
spec:
ports:
- port: 443
name: isamruntime
protocol: TCP
selector:
app: isamruntime
---
My gateway YAML file looks like this:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: isamruntime-gateway
spec:
selector:
istio: ingressgateway
servers:
- port:
number: 443
name: https
protocol: HTTPS
hosts:
- "*"
tls:
mode: SIMPLE
serverCertificate: /tmp/tls.crt
privateKey: /tmp/tls.key
---
and my routing YAML file looks like this:
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: isamruntime
spec:
hosts:
- isamruntime
gateways:
- isamruntime-gateway
http:
- route:
- destination:
host: isamruntime
subset: v1
port:
number: 443
weight: 100
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
name: isamruntime
spec:
host: isamruntime
subsets:
- name: v1
labels:
version: v1
- name: v2
labels:
version: v2
The flow goes from the Postman tool -> ingress IP address -> container that runs the reverse proxy -> runtime container.
My goal is to ensure that only the container in the runtime v1 pod gets the traffic. However, the traffic gets routed to both v1 and v2.
What is my mistake? Can someone help me?
Regards
Pranam
I tried the following, but it didn't work. The traffic still gets routed to both v1 and v2.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: isamruntime
spec:
hosts:
- isamruntime
gateways:
- isamruntime-gateway
http:
- route:
- destination:
host: isamruntime
subset: v1
port:
number: 443
weight: 100
- destination:
host: isamruntime
subset: v2
port:
number: 443
weight: 0
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
name: isamruntime-v1
spec:
host: isamruntime
subsets:
- name: v1
labels:
version: v1
- name: v2
labels:
version: v2
---
I tried changing my VirtualService to look like this:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: isamruntime
spec:
hosts:
- isamruntime.com
gateways:
- isamruntime-gateway
http:
- route:
- destination:
host: isamruntime
subset: v1
port:
number: 443
weight: 100
- destination:
host: isamruntime
subset: v2
port:
number: 443
weight: 0
---
I then used curl as shown below
pranam#UNKNOWN kubernetes % curl -k -v -H "host: isamruntime.com" https://169.50.228.2:30443
* Trying 169.50.228.2...
* TCP_NODELAY set
* Connected to 169.50.228.2 (169.50.228.2) port 30443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
* CAfile: /etc/ssl/cert.pem
CApath: none
* TLSv1.2 (OUT), TLS handshake, Client hello (1):
* TLSv1.2 (IN), TLS handshake, Server hello (2):
* TLSv1.2 (IN), TLS handshake, Certificate (11):
* TLSv1.2 (IN), TLS handshake, Request CERT (13):
* TLSv1.2 (IN), TLS handshake, Server finished (14):
* TLSv1.2 (OUT), TLS handshake, Certificate (11):
* TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
* TLSv1.2 (OUT), TLS change cipher, Change cipher spec (1):
* TLSv1.2 (OUT), TLS handshake, Finished (20):
* TLSv1.2 (IN), TLS change cipher, Change cipher spec (1):
* TLSv1.2 (IN), TLS handshake, Finished (20):
* SSL connection using TLSv1.2 / AES128-GCM-SHA256
* ALPN, server did not agree to a protocol
* Server certificate:
* subject: C=US; O=Policy Director; CN=isamconfig
* start date: Feb 18 15:33:30 2018 GMT
* expire date: Feb 14 15:33:30 2038 GMT
* issuer: C=US; O=Policy Director; CN=isamconfig
* SSL certificate verify result: self signed certificate (18), continuing anyway.
> GET / HTTP/1.1
> Host: isamruntime.com
> User-Agent: curl/7.64.1
> Accept: */*
>
< HTTP/1.1 200 OK
< content-length: 13104
< content-type: text/html
< date: Fri, 10 Jul 2020 13:45:28 GMT
< p3p: CP="NON CUR OTPi OUR NOR UNI"
< server: WebSEAL/9.0.7.1
< x-frame-options: DENY
< x-content-type-options: nosniff
< cache-control: no-store
< x-xss-protection: 1
< content-security-policy: frame-ancestors 'none'
< strict-transport-security: max-age=31536000; includeSubDomains
< pragma: no-cache
< Set-Cookie: PD-S-SESSION-ID=1_2_0_cGgEZiwrYKP0QtvDtZDa4l7-iPb6M3ZsW4I+aeUhn9HuAfAd; Path=/; Secure; HttpOnly
<
<!DOCTYPE html>
<!-- Copyright (C) 2015 IBM Corporation -->
<!-- Copyright (C) 2000 Tivoli Systems, Inc. -->
<!-- Copyright (C) 1999 IBM Corporation -->
<!-- Copyright (C) 1998 Dascom, Inc. -->
<!-- All Rights Reserved. -->
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1">
<title>LoginPage</title>
<style>
The curl command returns the login page of the reverse proxy, which is expected. My runtime service sits behind the reverse proxy, and the reverse proxy calls it. I saw somewhere in the documentation that the mesh gateway can be used, but that didn't help my cause either.
I ran another curl command that actually triggers a call to the reverse proxy, and the reverse proxy calls the runtime:
curl -k -v -H "host: isamruntime.com" https://169.50.228.2:30443/mga/sps/oauth/oauth20/token
* Trying 169.50.228.2...
* TCP_NODELAY set
* Connected to 169.50.228.2 (169.50.228.2) port 30443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
* CAfile: /etc/ssl/cert.pem
CApath: none
* TLSv1.2 (OUT), TLS handshake, Client hello (1):
* TLSv1.2 (IN), TLS handshake, Server hello (2):
* TLSv1.2 (IN), TLS handshake, Certificate (11):
* TLSv1.2 (IN), TLS handshake, Request CERT (13):
* TLSv1.2 (IN), TLS handshake, Server finished (14):
* TLSv1.2 (OUT), TLS handshake, Certificate (11):
* TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
* TLSv1.2 (OUT), TLS change cipher, Change cipher spec (1):
* TLSv1.2 (OUT), TLS handshake, Finished (20):
* TLSv1.2 (IN), TLS change cipher, Change cipher spec (1):
* TLSv1.2 (IN), TLS handshake, Finished (20):
* SSL connection using TLSv1.2 / AES128-GCM-SHA256
* ALPN, server did not agree to a protocol
* Server certificate:
* subject: C=US; O=Policy Director; CN=isamconfig
* start date: Feb 18 15:33:30 2018 GMT
* expire date: Feb 14 15:33:30 2038 GMT
* issuer: C=US; O=Policy Director; CN=isamconfig
* SSL certificate verify result: self signed certificate (18), continuing anyway.
> GET /mga/sps/oauth/oauth20/token HTTP/1.1
> Host: isamruntime.com
> User-Agent: curl/7.64.1
> Accept: */*
>
< HTTP/1.1 400 Bad Request
< content-language: en-US
< content-type: application/json;charset=UTF-8
< date: Fri, 10 Jul 2020 13:56:32 GMT
< p3p: CP="NON CUR OTPi OUR NOR UNI"
< transfer-encoding: chunked
< x-frame-options: SAMEORIGIN
< cache-control: no-store, no-cache=set-cookie
< expires: Thu, 01 Dec 1994 16:00:00 GMT
< strict-transport-security: max-age=31536000; includeSubDomains
< pragma: no-cache
< Set-Cookie: AMWEBJCT!%2Fmga!JSESSIONID=00004EKuX3PlcIBBhcwGnKf50ac:9e48435e-a71f-4b8a-8fb6-ef95c5f36c51; Path=/; Secure; HttpOnly
< Set-Cookie: PD_STATEFUL_c728ed2e-159a-11e8-b9c9-0242ac120004=%2Fmga; Path=/
< Set-Cookie: PD-S-SESSION-ID=1_2_0_6kSM-YBjsgCZnwNGOCOvjA+C9KBhYXlKkyuWUKpZ7RnCKVcy; Path=/; Secure; HttpOnly
<
* Connection #0 to host 169.50.228.2 left intact
{"error_description":"FBTOAU232E The client MUST use the HTTP POST method when making access token requests.","error":"invalid_request"}* Closing connection 0
The error is expected, as that endpoint only allows HTTP POST.
[1]: https://i.stack.imgur.com/dOMnD.png
the traffic gets routed to both v1 and v2
This most likely means Istio is not handling the traffic at all, and the plain Kubernetes Service is doing simple round-robin.
I think you are seeing the exact situation covered in the Debugging Istio: How to Fix a Broken Service Mesh (Cloud Next '19) session.
It's a really useful session for seeing the power of istioctl and for debugging unexpected behaviours, but long story short: for your case, you need to adjust the Service definition.
---
apiVersion: v1
kind: Service
metadata:
name: isamruntime
spec:
ports:
- port: 443
name: http-isamruntime # Add prefix of http
protocol: TCP
selector:
app: isamruntime
Ref: https://istio.io/latest/docs/reference/config/networking/virtual-service/#VirtualService
NOTE: The above http- prefix assumes you are terminating TLS before hitting the Service. Depending on your use case, you may need to adjust the VirtualService as well.
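For illustration, a mesh-internal HTTP VirtualService (no gateways field, so it binds to the built-in mesh gateway, since the reverse proxy calls the runtime from inside the cluster) might look roughly like the sketch below. This is only a sketch under the assumption that the sidecar really sees plain HTTP on that port after the rename; if the runtime traffic stays TLS end to end, HTTP routing will not apply.

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: isamruntime
spec:
  hosts:
  - isamruntime          # in-mesh host; no gateways list means the built-in mesh gateway
  http:
  - route:
    - destination:
        host: isamruntime
        subset: v1       # subset defined in the DestinationRule above
        port:
          number: 443
      weight: 100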
I got the flow working. I did not need a gateway, since my traffic goes from the reverse proxy -> runtime; both are inside the k8s cluster, so this is east-west traffic. My Service needed a tcp- port-name prefix and my VirtualService needed a tcp route. The YAML files are given below. Thank you all for guiding me in the right direction.
My Service YAML:
---
apiVersion: v1
kind: Service
metadata:
name: isamruntime
spec:
ports:
- port: 443
name: tcp-isamruntime # Add prefix of tcp to match traffic type
protocol: TCP
selector:
app: isamruntime
My VirtualService and DestinationRule YAML:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: isamruntime
spec:
hosts:
- isamruntime
tcp:
- match:
- port: 443
route:
- destination:
host: isamruntime.default.svc.cluster.local
port:
number: 443
subset: v1
weight: 0
- destination:
host: isamruntime.default.svc.cluster.local
port:
number: 443
subset: v2
weight: 100
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
name: isamruntime
spec:
host: isamruntime.default.svc.cluster.local
subsets:
- name: v1
labels:
version: v1
- name: v2
labels:
version: v2
---
Thanks all
@jt97, thanks for looking at the question. I tried your suggestions using this:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: isamruntime
spec:
hosts:
- isamruntime
gateways:
- isamruntime-gateway
http:
- route:
- destination:
host: isamruntime
subset: v1
port:
number: 443
weight: 100
- destination:
host: isamruntime
subset: v2
port:
number: 443
weight: 0
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
name: isamruntime-v1
spec:
host: isamruntime
subsets:
- name: v1
labels:
version: v1
# - name: v2
# labels:
# version: v2
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
name: isamruntime-v2
spec:
host: isamruntime
subsets:
- name: v2
labels:
version: v2
# - name: v2
# labels:
# version: v2
But it did not work.
Does it have to do with the hostname? Does the host need to include the namespace, like isamruntime.default.svc.cluster.local, or should my containers be running in a non-default namespace?
Regards
Pranam
Related
I would like to use a Google Managed Certificate on GKE.
I have a GKE cluster (1.22) with the external-dns helm chart configured against a CloudDNS zone, then I tried:
$ gcloud compute ssl-certificates create managed-cert \
--description "managed-cert" \
--domains "<hostname>" \
--global
$ kubectl create ns test
$ cat <<EOF | kubectl apply -f -
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: my-mc-deployment
namespace: test
spec:
selector:
matchLabels:
app: products
department: sales
replicas: 2
template:
metadata:
labels:
app: products
department: sales
spec:
containers:
- name: hello
image: "gcr.io/google-samples/hello-app:2.0"
env:
- name: "PORT"
value: "50001"
---
apiVersion: v1
kind: Service
metadata:
name: my-mc-service
namespace: test
spec:
type: NodePort
selector:
app: products
department: sales
ports:
- name: my-first-port
protocol: TCP
port: 60001
targetPort: 50001
---
apiVersion: networking.gke.io/v1
kind: ManagedCertificate
metadata:
name: managed-cert
namespace: test
spec:
domains:
- <hostname>
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: my-psc-ingress
namespace: test
annotations:
networking.gke.io/managed-certificates: "managed-cert"
ingress.gcp.kubernetes.io/pre-shared-cert: "managed-cert"
kubernetes.io/ingress.class: "gce"
spec:
rules:
- host: "<hostname>"
http:
paths:
- path: "/"
pathType: "ImplementationSpecific"
backend:
service:
name: "my-mc-service"
port:
number: 60001
EOF
The DNS zone is correctly updated and I am able to browse http://<hostname>.
However, if I run:
$ curl -v https://<hostname>
* Trying 34.120.218.42:443...
* Connected to <hostname> (34.120.218.42) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* CAfile: /etc/pki/tls/certs/ca-bundle.crt
* CApath: none
* TLSv1.0 (OUT), TLS header, Certificate Status (22):
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* TLSv1.0 (IN), TLS header, Unknown (21):
* TLSv1.3 (IN), TLS alert, handshake failure (552):
* error:0A000410:SSL routines::sslv3 alert handshake failure
* Closing connection 0
curl: (35) error:0A000410:SSL routines::sslv3 alert handshake failure
$ gcloud compute ssl-certificates list
NAME TYPE CREATION_TIMESTAMP EXPIRE_TIME MANAGED_STATUS
managed-cert MANAGED 2022-06-30T00:27:25.708-07:00 PROVISIONING
<hostname>: PROVISIONING
mcrt-fe44e023-3234-42cc-b009-67f57dcdc5ef MANAGED 2022-06-30T00:27:52.707-07:00 PROVISIONING
<hostname>: PROVISIONING
I do not understand why it is creating a new managed certificate (mcrt-fe44e023-3234-42cc-b009-67f57dcdc5ef) even though I am specifying mine.
Any ideas?
Thanks
After a bit of experimentation, I understand what is going on.
The code above works; it just takes around 20 minutes for the certificate to be created and propagated.
Regarding the double certificates: it is not required to create an ssl-certificates object, as the ManagedCertificate custom resource will create one for you (the mcrt-* entry).
To recap, a working example:
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: my-mc-deployment
namespace: test
spec:
selector:
matchLabels:
app: products
department: sales
replicas: 2
template:
metadata:
labels:
app: products
department: sales
spec:
containers:
- name: hello
image: "gcr.io/google-samples/hello-app:2.0"
env:
- name: "PORT"
value: "50001"
---
apiVersion: v1
kind: Service
metadata:
name: my-mc-service
namespace: test
spec:
type: NodePort
selector:
app: products
department: sales
ports:
- name: my-first-port
protocol: TCP
port: 60001
targetPort: 50001
---
apiVersion: networking.gke.io/v1
kind: ManagedCertificate
metadata:
name: managed-cert
namespace: test
spec:
domains:
- <hostname>
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: my-psc-ingress
namespace: test
annotations:
networking.gke.io/managed-certificates: "managed-cert"
kubernetes.io/ingress.class: "gce"
spec:
rules:
- host: "<hostname>"
http:
paths:
- path: "/"
pathType: "ImplementationSpecific"
backend:
service:
name: "my-mc-service"
port:
number: 60001
I work with an LDAP application that uses 2636/2389 as its LDAPS/LDAP ports, and I was setting up SSL termination for the application so I could reach the main UI and the other parts.
I created a certificate (Let's Encrypt) and put it in a secret.
I created a Gateway and VirtualServices with DestinationRules to access the application.
I am able to reach the application at https://<host>:7171/main/login.
I am also able to reach the other pages configured in the Gateway and VirtualService files.
But when I try to reach the LDAPS part of the application, either through curl -v ldaps://<host>:636 or using ldapsearch, it does not connect, and my intuition is that the handshake is failing.
The idea is to terminate the SSL (LDAPS) connection at the gateway and then route the query to the backend application.
Gateway and Virtual Service Configuration
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
name: fid-gateway
spec:
selector:
istio: ingressgateway
servers:
- port:
number: 636
name: tls
protocol: TLS
tls:
mode: SIMPLE
credentialName: tr-istio-ltbdx
maxProtocolVersion: TLSV1_3
minProtocolVersion: TLSV1_2
hosts:
- host
- port:
number: 443
name: https
protocol: HTTPS
tls:
mode: SIMPLE
credentialName: tr-istio-ltbdx
hosts:
- host
- port:
number: 8090
name: https-api
protocol: HTTPS
# maxProtocolVersion: TLSV1_3
# minProtocolVersion: TLSV1_2
tls:
mode: SIMPLE
credentialName: tr-istio-ltbdx
hosts:
- host
- port:
number: 7171
name: https-cp
protocol: HTTPS
# maxProtocolVersion: TLSV1_3
# minProtocolVersion: TLSV1_2
tls:
mode: SIMPLE
credentialName: tr-istio-ltbdx
hosts:
- host
- port:
number: 9101
name: https-admin
protocol: HTTPS
tls:
mode: SIMPLE
credentialName: tr-istio-ltbdx
# maxProtocolVersion: TLSV1_3
# minProtocolVersion: TLSV1_2
hosts:
- host
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
name: fid
spec:
hosts:
- host
gateways:
- fid-gateway
tcp:
- match:
- port: 636
route:
- destination:
host: fid
port:
number: 2389
http:
- match:
- port: 443
route:
- destination:
host: fid
port:
number: 8089
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
name: fid-cp
spec:
hosts:
- host
gateways:
- fid-gateway
http:
- match:
- port: 7171
route:
- destination:
host: fid-cp
port:
number: 7171
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
name: fid-api
spec:
hosts:
- fid.opcl.net
gateways:
- fid-gateway
http:
- match:
- port: 8090
route:
- destination:
host: fid-cp
port:
number: 8090
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
name: fid-admin
spec:
hosts:
- fid.opcl.net
gateways:
- fid-gateway
http:
- match:
- port: 9101
route:
- destination:
host: fid
port:
number: 9100
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
name: fid-dest-rule
spec:
host: fid-cp
trafficPolicy:
loadBalancer:
simple: ROUND_ROBIN
portLevelSettings:
- port:
number: 7171
tls:
mode: SIMPLE
- port:
number: 8090
tls:
mode: SIMPLE
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
name: fid-dest-rule-2
spec:
host: fid-cp
trafficPolicy:
loadBalancer:
simple: ROUND_ROBIN
portLevelSettings:
- port:
number: 2636
tls:
mode: SIMPLE
tls:
mode: SIMPLE
---
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
name: fid
namespace: istio-ssl-termination
spec:
selector:
matchLabels:
app.kubernetes.io/core-name: fid
mtls:
mode: PERMISSIVE
portLevelMtls:
2636:
mode: PERMISSIVE
I have also opened the ports on the ingress load balancer.
Below is the application YAML:
apiVersion: v1
kind: Secret
metadata:
name: fidrootcreds
type: Opaque
data:
username: Y249RGlyZWN0b3J5IE1hbmFnZXI=
password: c2VjcmV0MTIzNA==
---
apiVersion: v1
data:
ZK: "external"
ZK_CONN_STR: "zookeeper.istio-ssl-termination.svc.cluster.local:2181" # this should match the service name from zk deployment
ZK_CLUSTER: "fid"
LICENSE: ""
kind: ConfigMap
metadata:
labels:
role: fid
name: fid-environment-variables
---
apiVersion: v1
kind: Service
metadata:
name: fid
labels:
app: fid
spec:
ports:
- port: 9100
name: admin-http
- port: 2389
name: ldap
- port: 2636
name: ldaps
selector:
app: fid
type: ClusterIP
---
apiVersion: v1
kind: Service
metadata:
name: fid-cp
labels:
app: fid-cp
spec:
ports:
- port: 7070
name: cp-http
- port: 7171
name: cp-https
- port: 8089
name: http
- port: 8090
name: https
selector:
statefulset.kubernetes.io/pod-name: fid-0
type: ClusterIP
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: fid
labels:
kubernetes.io/os: linux
spec:
selector:
matchLabels:
app: fid # has to match .spec.template.metadata.labels
serviceName: "fid"
replicas: 1
updateStrategy:
type: RollingUpdate
template:
metadata:
labels:
app: fid # has to match .spec.selector.matchLabels
annotations:
traffic.sidecar.istio.io/excludeOutboundPorts: 443,8090,7171
traffic.sidecar.istio.io/excludeInboundPorts: 7171,8090,2636
spec:
terminationGracePeriodSeconds: 120
securityContext:
fsGroup: 1000
initContainers:
- name: sysctl
image: busybox
imagePullPolicy: IfNotPresent
command:
[
"/bin/sh",
"-c",
"sysctl -w vm.max_map_count=262144 && set -e && ulimit -n 65536",
]
securityContext:
privileged: true
containers:
- name: fid
image: image
imagePullPolicy: Always
lifecycle:
postStart:
exec:
command:
[
"/bin/sh",
"-c",
"echo Hello from the fid postStart handler > /opt/radiantone/vds/lifecycle.txt",
]
preStop:
exec:
# command: ["/opt/radiantone/vds/bin/advanced/cluster.sh","detach"]
command: ["/opt/radiantone/vds/bin/stopVDSServer.sh"]
ports:
- containerPort: 2181
name: zk-client
- containerPort: 7070
name: cp-http
- containerPort: 7171
name: cp-https
- containerPort: 9100
name: admin-http
- containerPort: 9101
name: admin-https
- containerPort: 2389
name: ldap
- containerPort: 2636
name: ldaps
- containerPort: 8089
name: http
- containerPort: 8090
name: https
readinessProbe:
tcpSocket:
port: 2389
initialDelaySeconds: 120
periodSeconds: 30
failureThreshold: 5
successThreshold: 1
livenessProbe:
tcpSocket:
port: 9100
initialDelaySeconds: 120
periodSeconds: 30
failureThreshold: 5
successThreshold: 1
envFrom:
- configMapRef:
name: fid-environment-variables
env:
- name: ZK_PASSWORD
valueFrom:
secretKeyRef:
name: fidrootcreds
key: password
volumeMounts:
- name: r1-pvc
mountPath: /opt/radiantone/vds
resources:
limits:
cpu: "4"
memory: 8Gi
requests:
cpu: "2"
memory: 4Gi
command: ["/bin/sh", "-c"]
args:
[
"if [ $HOSTNAME != fid-0 ]; then export CLUSTER=join; fi;./run.sh fg",
]
nodeSelector:
kubernetes.io/os: linux
volumes:
- name: r1-pvc
emptyDir: {}
The curl command returns the following:
curl -v -k ldaps://<host>:636
* Trying 34.218.138.57:636...
* Connected to fid.opcl.net (34.218.138.57) port 636 (#0)
* LDAP local: LDAP Vendor = Microsoft Corporation. ; LDAP Version = 510
* LDAP local: ldaps://fid.opcl.net:636/
* LDAP local: trying to establish encrypted connection
* LDAP local: bind via ldap_win_bind Server Down
* Closing connection 0
curl: (38) LDAP local: bind via ldap_win_bind Server Down
Am I missing anything? Any help is appreciated
I have a GKE cluster with Istio deployed in it. I have added the cluster's node-pool instance groups to the backend of a GCP HTTP(S) LB.
To perform health checks on the backends, I have created the following health check:
name: gke-http-hc
path: /healthz/ready (istio-ingressgateway readinessProbe path)
port: 30302 (for this the target port is 15021, which is the status port of istio-ingressgateway)
Protocol: HTTP
I can see that the health checks are all successful, but if I try to access my application with my app URL, I get a 404 error.
However, if I apply a TCP-type health check and access the application with the app URL, I get the desired 200 OK response.
The TCP health check has the following config:
name: gke-tcp-hc
Protocol: TCP
Port: 31397 (for this the target port is 80)
Why does my app behave differently for HTTP and TCP health checks? Is there any other configuration I need to do to make the HTTP health check (which queries istio-ingressgateway's status port) work?
Following are my k8s manifests for istio-ingressgateway:
Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: istio-ingressgateway
install.operator.istio.io/owning-resource: unknown
install.operator.istio.io/owning-resource-namespace: istio-system
istio: ingressgateway
istio.io/rev: default
operator.istio.io/component: IngressGateways
operator.istio.io/managed: Reconcile
operator.istio.io/version: 1.9.5
release: istio
name: istio-ingressgateway
namespace: istio-system
spec:
progressDeadlineSeconds: 600
replicas: 3
revisionHistoryLimit: 10
selector:
matchLabels:
app: istio-ingressgateway
istio: ingressgateway
strategy:
rollingUpdate:
maxSurge: 100%
maxUnavailable: 25%
type: RollingUpdate
template:
metadata:
annotations:
prometheus.io/path: /stats/prometheus
prometheus.io/port: "15020"
prometheus.io/scrape: "true"
sidecar.istio.io/inject: "false"
creationTimestamp: null
labels:
app: istio-ingressgateway
chart: gateways
heritage: Tiller
install.operator.istio.io/owning-resource: unknown
istio: ingressgateway
istio.io/rev: default
operator.istio.io/component: IngressGateways
release: istio
service.istio.io/canonical-name: istio-ingressgateway
service.istio.io/canonical-revision: latest
sidecar.istio.io/inject: "false"
spec:
affinity:
nodeAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- preference:
matchExpressions:
- key: kubernetes.io/arch
operator: In
values:
- amd64
weight: 2
- preference:
matchExpressions:
- key: kubernetes.io/arch
operator: In
values:
- ppc64le
weight: 2
- preference:
matchExpressions:
- key: kubernetes.io/arch
operator: In
values:
- s390x
weight: 2
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/arch
operator: In
values:
- amd64
- ppc64le
- s390x
containers:
- args:
- proxy
- router
- --domain
- $(POD_NAMESPACE).svc.cluster.local
- --proxyLogLevel=warning
- --proxyComponentLogLevel=misc:error
- --log_output_level=default:info
- --serviceCluster
- istio-ingressgateway
env:
- name: JWT_POLICY
value: third-party-jwt
- name: PILOT_CERT_PROVIDER
value: istiod
- name: CA_ADDR
value: istiod.istio-system.svc:15012
- name: NODE_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: spec.nodeName
- name: POD_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
- name: INSTANCE_IP
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: status.podIP
- name: HOST_IP
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: status.hostIP
- name: SERVICE_ACCOUNT
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: spec.serviceAccountName
- name: CANONICAL_SERVICE
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.labels['service.istio.io/canonical-name']
- name: CANONICAL_REVISION
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.labels['service.istio.io/canonical-revision']
- name: ISTIO_META_WORKLOAD_NAME
value: istio-ingressgateway
- name: ISTIO_META_OWNER
value: kubernetes://apis/apps/v1/namespaces/istio-system/deployments/istio-ingressgateway
- name: ISTIO_META_UNPRIVILEGED_POD
value: "true"
- name: ISTIO_META_ROUTER_MODE
value: standard
- name: ISTIO_META_CLUSTER_ID
value: Kubernetes
image: docker.io/istio/proxyv2:1.9.5
imagePullPolicy: IfNotPresent
name: istio-proxy
ports:
- containerPort: 15021
protocol: TCP
- containerPort: 8080
protocol: TCP
- containerPort: 8443
protocol: TCP
- containerPort: 15012
protocol: TCP
- containerPort: 15443
protocol: TCP
- containerPort: 15090
name: http-envoy-prom
protocol: TCP
readinessProbe:
failureThreshold: 30
httpGet:
path: /healthz/ready
port: 15021
scheme: HTTP
initialDelaySeconds: 1
periodSeconds: 2
successThreshold: 1
timeoutSeconds: 1
resources:
limits:
cpu: "2"
memory: 1Gi
requests:
cpu: 100m
memory: 128Mi
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop:
- ALL
privileged: false
readOnlyRootFilesystem: true
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /etc/istio/proxy
name: istio-envoy
- mountPath: /etc/istio/config
name: config-volume
- mountPath: /var/run/secrets/istio
name: istiod-ca-cert
- mountPath: /var/run/secrets/tokens
name: istio-token
readOnly: true
- mountPath: /var/lib/istio/data
name: istio-data
- mountPath: /etc/istio/pod
name: podinfo
- mountPath: /etc/istio/ingressgateway-certs
name: ingressgateway-certs
readOnly: true
- mountPath: /etc/istio/ingressgateway-ca-certs
name: ingressgateway-ca-certs
readOnly: true
dnsPolicy: ClusterFirst
restartPolicy: Always
schedulerName: default-scheduler
securityContext:
fsGroup: 1337
runAsGroup: 1337
runAsNonRoot: true
runAsUser: 1337
serviceAccount: istio-ingressgateway-service-account
serviceAccountName: istio-ingressgateway-service-account
terminationGracePeriodSeconds: 30
volumes:
- configMap:
defaultMode: 420
name: istio-ca-root-cert
name: istiod-ca-cert
- downwardAPI:
defaultMode: 420
items:
- fieldRef:
apiVersion: v1
fieldPath: metadata.labels
path: labels
- fieldRef:
apiVersion: v1
fieldPath: metadata.annotations
path: annotations
- path: cpu-limit
resourceFieldRef:
containerName: istio-proxy
divisor: 1m
resource: limits.cpu
- path: cpu-request
resourceFieldRef:
containerName: istio-proxy
divisor: 1m
resource: requests.cpu
name: podinfo
- emptyDir: {}
name: istio-envoy
- emptyDir: {}
name: istio-data
- name: istio-token
projected:
defaultMode: 420
sources:
- serviceAccountToken:
audience: istio-ca
expirationSeconds: 43200
path: istio-token
- configMap:
defaultMode: 420
name: istio
optional: true
name: config-volume
- name: ingressgateway-certs
secret:
defaultMode: 420
optional: true
secretName: istio-ingressgateway-certs
- name: ingressgateway-ca-certs
secret:
defaultMode: 420
optional: true
secretName: istio-ingressgateway-ca-certs
Service:
apiVersion: v1
kind: Service
metadata:
labels:
app: istio-ingressgateway
install.operator.istio.io/owning-resource: unknown
install.operator.istio.io/owning-resource-namespace: istio-system
istio: ingressgateway
istio.io/rev: default
operator.istio.io/component: IngressGateways
operator.istio.io/managed: Reconcile
operator.istio.io/version: 1.9.5
release: istio
name: istio-ingressgateway
namespace: istio-system
spec:
clusterIP: 10.30.192.198
externalTrafficPolicy: Cluster
ports:
- name: status-port
nodePort: 30302
port: 15021
protocol: TCP
targetPort: 15021
- name: http2
nodePort: 31397
port: 80
protocol: TCP
targetPort: 8080
- name: https
nodePort: 32343
port: 443
protocol: TCP
targetPort: 8443
- name: tcp-istiod
nodePort: 30255
port: 15012
protocol: TCP
targetPort: 15012
- name: tls
nodePort: 30490
port: 15443
protocol: TCP
targetPort: 15443
selector:
app: istio-ingressgateway
istio: ingressgateway
sessionAffinity: None
type: NodePort
Here are my app manifests:
Deployment:
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: app-st-sc5ght
release: app-st-sc5ght
heritage: Helm
chart: app-chart
name: app-st-sc5ght
namespace: app-st
spec:
replicas: 5
selector:
matchLabels:
app: app-st-sc5ght
release: app-st-sc5ght
heritage: Helm
strategy:
rollingUpdate:
maxSurge: 25%
maxUnavailable: 0
type: RollingUpdate
template:
metadata:
labels:
app: app1
release: app-st-sc5ght
heritage: Helm
spec:
imagePullSecrets:
- name: registry-key
volumes:
- name: app-config
configMap:
name: app-st-config
containers:
- image: reg.org.jp/app:1.0.1
imagePullPolicy: Always
name: app
resources:
requests:
memory: "64Mi"
cpu: 0.2
limits:
memory: "256Mi"
cpu: 0.5
env:
- name: STDOUT_STACKDRIVER_LOG
value: '1'
ports:
- containerPort: 9000
protocol: TCP
volumeMounts:
- name: app-config
mountPath: /app_config
readOnly: true
livenessProbe:
httpGet:
path: /status
port: 9000
initialDelaySeconds: 11
periodSeconds: 7
readinessProbe:
httpGet:
path: /status
port: 9000
initialDelaySeconds: 3
periodSeconds: 5
Service:
---
apiVersion: v1
kind: Service
metadata:
name: app-st-sc5ght
namespace: app-st
labels:
app: app-st-sc5ght
release: app-st-sc5ght
heritage: Helm
spec:
type: NodePort
ports:
- port: 9000
nodePort: 32098
targetPort: 9000
protocol: TCP
name: app-web
selector:
app: app-st-sc5ght
DestinationRule:
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
name: app-st-sc5ght
namespace: app-st
labels:
app: app-st-sc5ght
release: app-st-sc5ght
heritage: Helm
spec:
host: app-st-sc5ght.app-st.svc.cluster.local
subsets:
- name: stable
labels:
track: stable
version: stable
- name: rollout
labels:
track: rollout
version: rollout
Gateway:
---
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: app-st-sc5ght
namespace: app-st
labels:
app: app-st-sc5ght
release: app-st-sc5ght
heritage: Helm
track: stable
spec:
selector:
istio: ingressgateway
servers:
- port:
number: 80
name: app-st-sc5ght
protocol: HTTP
hosts:
- st.app.org
VirtualService:
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: app-st-sc5ght
namespace: app-st
labels:
app: app-st-sc5ght
release: app-st-sc5ght
heritage: Helm
track: stable
spec:
gateways:
- app-st-sc5ght
hosts:
- st.app.org
http:
- match:
- uri:
prefix: /status
headers:
request:
add:
endpoint: status
response:
add:
endpoint: status
version: 1.0.1
route:
- destination:
port:
number: 9000
host: app-st-sc5ght.app-st.svc.cluster.local
subset: stable
weight: 100
- destination:
port:
number: 9000
host: app-st-sc5ght.app-st.svc.cluster.local
subset: rollout
weight: 0
- match:
- uri:
prefix: /public/list/v4/
rewrite:
uri: /list/v4/
headers:
request:
add:
endpoint: list
response:
add:
endpoint: list
route:
- destination:
port:
number: 9000
host: app-st-sc5ght.app-st.svc.cluster.local
subset: stable
weight: 100
- destination:
port:
number: 9000
host: app-st-sc5ght.app-st.svc.cluster.local
subset: rollout
weight: 0
- match:
- uri:
prefix: /
headers:
request:
add:
endpoint: home
response:
add:
endpoint: home
route:
- destination:
port:
number: 9000
host: app-st-sc5ght.app-st.svc.cluster.local
subset: stable
weight: 100
- destination:
port:
number: 9000
host: app-st-sc5ght.app-st.svc.cluster.local
subset: rollout
weight: 0
Using this config, I am getting "no healthy upstream" HTTP 503 errors. If I just remove the subset, everything works perfectly fine.
Source: ccgf-helm-umbrella-chart/charts/ccgf-cdlg-app/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
labels:
app: ccgf-cdlg-app
service: ccgf-cdlg-app
name: ccgf-cdlg-app
namespace: cdlg-edc-devci
spec:
selector:
app: ccgf-cdlg-app
ports:
- name: http
protocol: TCP
port: 80
targetPort: 80
---
Source: ccgf-helm-umbrella-chart/charts/ccgf-cdlg-app/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: ccgf-cdlg-app-production
namespace: cdlg-edc-devci
labels:
app: ccgf-cdlg-app
version: production
spec:
replicas: 1
selector:
matchLabels:
app: ccgf-cdlg-app
version: production
template:
metadata:
labels:
app: ccgf-cdlg-app
version: production
spec:
containers:
- image: edc-ccgf-ui-app:1.37
imagePullPolicy: Always
name: ccgf-cdlg-app
ports:
- name: ccgf-cdlg-app
containerPort: 80
readinessProbe:
httpGet:
path: /
port: 80
initialDelaySeconds: 20
periodSeconds: 20
imagePullSecrets:
- name: spinnakerrepoaccess
Source: ccgf-helm-umbrella-chart/charts/ccgf-cdlg-app/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: ccgf-cdlg-app-canary
namespace: cdlg-edc-devci
labels:
app: ccgf-cdlg-app
version: canary
spec:
replicas: 1
selector:
matchLabels:
app: ccgf-cdlg-app
version: canary
template:
metadata:
labels:
app: ccgf-cdlg-app
version: canary
spec:
containers:
- image: edc-ccgf-ui-app:1.38
imagePullPolicy: Always
name: ccgf-cdlg-app
ports:
- name: ccgf-cdlg-app
containerPort: 80
readinessProbe:
httpGet:
path: /
port: 80
initialDelaySeconds: 20
periodSeconds: 20
imagePullSecrets:
- name: spinnakerrepoaccess
# VirtualService
kind: VirtualService
apiVersion: networking.istio.io/v1alpha3
metadata:
name: ccgf-cdlg-app
namespace: cdlg-edc-devci
spec:
hosts:
- '*'
gateways:
- ccgf-gateway
http:
- match:
- uri:
prefix: /cdlg-edc-devci/frontend
rewrite:
uri: /
route:
- destination:
host: ccgf-cdlg-app.cdlg-edc-devci.svc.cluster.local
subset: production
retries:
attempts: 3
perTryTimeout: 2s
retryOn: 'gateway-error,connect-failure,refused-stream'
weight: 50
- destination:
host: ccgf-cdlg-app.cdlg-edc-devci.svc.cluster.local
subset: canary
retries:
attempts: 3
perTryTimeout: 2s
retryOn: 'gateway-error,connect-failure,refused-stream'
weight: 50
- match:
- uri:
prefix: /static
rewrite:
uri: /static
route:
- destination:
host: ccgf-cdlg-app.cdlg-edc-devci.svc.cluster.local
retries:
attempts: 3
perTryTimeout: 2s
retryOn: 'gateway-error,connect-failure,refused-stream'
# DestinationRule
kind: DestinationRule
apiVersion: networking.istio.io/v1alpha3
metadata:
name: ccgf-cdlg-app
namespace: cdlg-edc-devci
spec:
host: ccgf-cdlg-app
subsets:
- labels:
version: canary
name: canary
- labels:
version: production
name: production
Source: ccgf-helm-umbrella-chart/charts/ccgf-gateway/templates/gateway.yaml
kind: Gateway
apiVersion: networking.istio.io/v1alpha3
metadata:
name: ccgf-gateway
namespace: namespace
spec:
servers:
- hosts:
- '*'
port:
name: http
number: 80
protocol: HTTP
selector:
release: istio-custom-ingress-gateways
I made a reproduction based on your YAMLs and everything works just fine; the only difference is that I used the basic Istio ingress gateway instead of your custom one.
For a start, could you please change the host in the DestinationRule and check whether it works then?
It should be
ccgf-cdlg-app.cdlg-edc-devci.svc.cluster.local instead of ccgf-cdlg-app
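Applied to your DestinationRule, that change would look roughly like this (only the host line differs from what you posted):

kind: DestinationRule
apiVersion: networking.istio.io/v1alpha3
metadata:
  name: ccgf-cdlg-app
  namespace: cdlg-edc-devci
spec:
  host: ccgf-cdlg-app.cdlg-edc-devci.svc.cluster.local
  subsets:
  - labels:
      version: canary
    name: canary
  - labels:
      version: production
    name: production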
Did you enable istio injection in your cdlg-edc-devci namespace?
You can check it with kubectl get namespace -L istio-injection
It should be
NAME STATUS AGE ISTIO-INJECTION
cdlg-edc-devci Active 37m enabled
And here are the reproduction YAMLs:
kubectl create namespace cdlg-edc-devci
kubectl label namespace cdlg-edc-devci istio-injection=enabled
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx1
namespace: cdlg-edc-devci
spec:
selector:
matchLabels:
app: ccgf-cdlg-app
version: production
replicas: 1
template:
metadata:
labels:
app: ccgf-cdlg-app
version: production
spec:
containers:
- name: nginx1
image: nginx
ports:
- name: http-dep1
containerPort: 80
lifecycle:
postStart:
exec:
command: ["/bin/sh", "-c", "echo Hello nginx1 > /usr/share/nginx/html/index.html"]
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx2
namespace: cdlg-edc-devci
spec:
selector:
matchLabels:
app: ccgf-cdlg-app
version: canary
replicas: 1
template:
metadata:
labels:
app: ccgf-cdlg-app
version: canary
spec:
containers:
- name: nginx2
image: nginx
ports:
- name: http-dep2
containerPort: 80
lifecycle:
postStart:
exec:
command: ["/bin/sh", "-c", "echo Hello nginx2 > /usr/share/nginx/html/index.html"]
apiVersion: v1
kind: Service
metadata:
name: nginx
namespace: cdlg-edc-devci
labels:
app: ccgf-cdlg-app
spec:
ports:
- name: http-svc
port: 80
protocol: TCP
selector:
app: ccgf-cdlg-app
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: nginxvirt
namespace: cdlg-edc-devci
spec:
gateways:
- ccgf-gateway
hosts:
- '*'
http:
- name: production
match:
- uri:
prefix: /cdlg-edc-devci/frontend
rewrite:
uri: /
route:
- destination:
host: nginx.cdlg-edc-devci.svc.cluster.local
port:
number: 80
subset: can
weight: 50
- destination:
host: nginx.cdlg-edc-devci.svc.cluster.local
port:
number: 80
subset: prod
weight: 50
- name: canary
match:
- uri:
prefix: /s
rewrite:
uri: /
route:
- destination:
host: nginx.cdlg-edc-devci.svc.cluster.local
port:
number: 80
subset: can
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
name: nginxdest
namespace: cdlg-edc-devci
spec:
host: nginx.cdlg-edc-devci.svc.cluster.local
subsets:
- name: prod
labels:
version: production
- name: can
labels:
version: canary
kind: Gateway
apiVersion: networking.istio.io/v1alpha3
metadata:
name: ccgf-gateway
namespace: namespace
spec:
servers:
- hosts:
- '*'
port:
name: http
number: 80
protocol: HTTP
apiVersion: v1
kind: Pod
metadata:
name: ubu
spec:
containers:
- name: ubu
image: ubuntu
command: ["/bin/sh"]
args: ["-c", "apt-get update && apt-get install curl -y && sleep 3000"]
Some results from the ubuntu pod:
curl -v external_istio-ingress_gateway_ip/cdlg-edc-devci/frontend
HTTP/1.1 200 OK
Hello nginx2
HTTP/1.1 200 OK
Hello nginx1
curl -v external_istio-ingress_gateway_ip/s
HTTP/1.1 200 OK
Hello nginx2
I hope this answers your question. Let me know if you have any more questions.
I have a service listening on two ports; one is http, the other is grpc.
I would like to set up an ingress that can route to both of these ports, with the same host.
The load balancer would redirect to the http port if HTTP/1.1 is used, and to the grpc port if h2 is used.
Is there a way to do that with Istio?
I made a hello world demonstrating what I am trying to achieve:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: hello-world
namespace: dev
spec:
replicas: 1
template:
metadata:
annotations:
alpha.istio.io/sidecar: injected
pod.beta.kubernetes.io/init-containers: '[{"args":["-p","15001","-u","1337","-i","172.20.0.0/16"],"image":"docker.io/istio/init:0.1","imagePullPolicy":"Always","name":"init","securityContext":{"capabilities":{"add":["NET_ADMIN"]}}}]'
labels:
app: hello-world
spec:
containers:
- name: grpc-server
image: aguilbau/hello-world-grpc:latest
ports:
- name: grpc
containerPort: 50051
- name: http-server
image: nginx:1.7.9
ports:
- name: http
containerPort: 80
- name: istio-proxy
args:
- proxy
- sidecar
- -v
- "2"
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
image: docker.io/istio/proxy:0.1
imagePullPolicy: Always
resources: {}
securityContext:
runAsUser: 1337
---
apiVersion: v1
kind: Service
metadata:
name: hello-world
namespace: dev
spec:
ports:
- name: grpc
port: 50051
- name: http
port: 80
selector:
app: hello-world
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: hello-world-http
namespace: dev
annotations:
kubernetes.io/ingress.class: "istio"
spec:
rules:
- host: hello-world
http:
paths:
- backend:
serviceName: hello-world
servicePort: 80
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: hello-world-grpc
namespace: dev
annotations:
kubernetes.io/ingress.class: "istio"
spec:
rules:
- host: hello-world
http:
paths:
- backend:
serviceName: hello-world
servicePort: 50051
---
I'm a bit late to the party, but for those of you stumbling on this post, I think you can do this with very little difficulty. I'm going to assume you have Istio installed on a Kubernetes cluster and are happy using the default istio-ingressgateway:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: hello-world
namespace: dev
spec:
replicas: 1
template:
metadata:
annotations:
alpha.istio.io/sidecar: injected
pod.beta.kubernetes.io/init-containers: '[{"args":["-p","15001","-u","1337","-i","172.20.0.0/16"],"image":"docker.io/istio/init:0.1","imagePullPolicy":"Always","name":"init","securityContext":{"capabilities":{"add":["NET_ADMIN"]}}}]'
labels:
app: hello-world
spec:
containers:
- name: grpc-server
image: aguilbau/hello-world-grpc:latest
ports:
- name: grpc
containerPort: 50051
- name: http-server
image: nginx:1.7.9
ports:
- name: http
containerPort: 80
- name: istio-proxy
args:
- proxy
- sidecar
- -v
- "2"
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
image: docker.io/istio/proxy:0.1
imagePullPolicy: Always
resources: {}
securityContext:
runAsUser: 1337
---
apiVersion: v1
kind: Service
metadata:
name: hello-world
namespace: dev
spec:
ports:
- name: grpc
port: 50051
- name: http
port: 80
selector:
app: hello-world
---
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: hello-world-istio-gate
namespace: dev
spec:
selector:
istio: ingressgateway
servers:
- port:
number: 80
name: http
protocol: HTTP
hosts:
- "*"
- port:
number: 50051
name: grpc
protocol: GRPC
hosts:
- "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: hello-world-istio-vsvc
namespace: dev
spec:
hosts:
- "*"
gateways:
- hello-world-istio-gate
http:
- match:
- port: 80
route:
- destination:
host: hello-world
port:
number: 80
tcp:
- match:
- port: 50051
route:
- destination:
host: hello-world
port:
number: 50051
The above configuration omits your two Ingresses, and instead includes:
Your deployment
Your service
An istio gateway
An istio virtualservice
There is an important extra piece not shown, and I alluded to it earlier when talking about using the default ingressgateway. The following line, found in the "hello-world-istio-gate" Gateway, gives a clue:
istio: ingressgateway
This refers to a pod in the 'istio-system' namespace that is usually installed by default called 'istio-ingressgateway' - and this pod is exposed by a service also called 'istio-ingressgateway.' You will need to open up ports on the 'istio-ingressgateway' service.
As an example, I edited my (default) ingressgateway and added a port opening for HTTP and gRPC. The result is the following (edited for length) YAML:
dampersand#kubetest1:~/k8s$ kubectl get service istio-ingressgateway -n istio-system -o yaml
apiVersion: v1
kind: Service
metadata:
<omitted for length>
labels:
app: istio-ingressgateway
chart: gateways-1.0.3
heritage: Tiller
istio: ingressgateway
release: istio
name: istio-ingressgateway
namespace: istio-system
<omitted for length>
ports:
- name: http2
nodePort: 31380
port: 80
protocol: TCP
targetPort: 80
<omitted for length>
- name: grpc
nodePort: 30000
port: 50051
protocol: TCP
targetPort: 50051
selector:
app: istio-ingressgateway
istio: ingressgateway
type: NodePort
The easiest way to make the above change (for testing purposes) is to use:
kubectl edit svc -n istio-system istio-ingressgateway
For production purposes, it's probably better to edit your helm chart or your istio.yaml file or whatever you initially used to set up the ingressgateway.
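If you happen to be on a newer Istio installed with istioctl, a rough sketch of the same change using the IstioOperator API could look like the following (port numbers taken from the example above; treat it as illustrative rather than a drop-in file):

apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  components:
    ingressGateways:
    - name: istio-ingressgateway
      enabled: true
      k8s:
        service:
          ports:
          # overriding the port list replaces the defaults, so re-declare any default ports you still need
          - name: http2
            port: 80
            targetPort: 80
            nodePort: 31380
          - name: grpc
            port: 50051
            targetPort: 50051
            nodePort: 30000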
As a quick aside, note that my test cluster has istio-ingressgateway set up as a NodePort, so what the above yaml file says is that my cluster is port forwarding 31380 -> 80 and 30000 -> 50051. You may (probably) have istio-ingressgateway set up as a LoadBalancer, which will be different... so plan accordingly.
Finally, the following blog post is some REALLY excellent background reading for the tools I've outlined in this post! https://blog.jayway.com/2018/10/22/understanding-istio-ingress-gateway-in-kubernetes/
You may be able to do something like that if you move the grpc-server and http-server containers into different pods with unique labels (i.e., different versions of the service, so to speak) and then, using Istio route rules behind the Ingress, split the traffic. A rule that matches on the Upgrade: h2 header could send traffic to the grpc version, and a default rule would send the rest of the traffic to the http one.
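As an illustration of that idea with the VirtualService API, a header-based split could look roughly like the sketch below. Instead of the Upgrade header it matches on the content-type header that gRPC requests carry (application/grpc), which tends to be a more reliable discriminator; the grpc and http subsets are hypothetical and would need a DestinationRule keyed on the per-pod version labels described above.

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: hello-world-split
  namespace: dev
spec:
  hosts:
  - hello-world
  http:
  - match:
    - headers:
        content-type:
          prefix: application/grpc   # gRPC requests always send this content type
    route:
    - destination:
        host: hello-world
        subset: grpc                 # hypothetical subset selecting the grpc-server pods
  - route:
    - destination:
        host: hello-world
        subset: http                 # default route: everything else goes to the http pods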