I have an nginx container that handles html content & traffic routing via a VirtualService.
I have a separate maintenance nginx container that I want to display when I'm doing maintenance; on those occasions I want all traffic routed to this maintenance container rather than the normal one mentioned above. I don't really want to tweak/patch the original traffic routes, so I'm looking for some form of override traffic routing rule.
From what I have read, the order of rules is based on the creation date so that didn't really help me.
So if anyone has any ideas how I can force all traffic to be routed to a specific "maintenance" service I would really appreciate your thoughts.
I would recommend setting a version label and working with that.
First create a DestinationRule to define your different versions and how they are identified (by labels).
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: nginx-versions
spec:
  host: nginx.default.svc.cluster.local
  subsets:
  - name: maintenance
    labels:
      version: maintenance
  - name: v1
    labels:
      version: v1
Next, set up the route in your VirtualService to point to v1.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: nginx-route
spec:
  hosts:
  - example.com
  gateways:
  - mygateway
  http:
  - name: nginx-route
    match:
    - uri:
        prefix: "/nginx"
    route:
    - destination:
        host: nginx.default.svc.cluster.local
        subset: v1
Now you need one Service and two Deployments.
The selector in the Service needs to match the pods of both Deployments. In a plain Kubernetes setup this would mean that traffic is load-balanced across the workloads of both Deployments, but because of Istio and the version subsets, traffic is only sent to the currently configured version.
The Deployment with the maintenance version needs to be labeled with version: maintenance and the regular one with version: v1.
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  selector:
    app: nginx
  ports:
  - port: 80 # port assumed; adjust to your nginx container
    targetPort: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-maintenance
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
      version: maintenance
  template:
    metadata:
      labels:
        app: nginx
        version: maintenance
    spec:
      containers:
      - image: nginx-maintenance
        [...]
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-v1
spec:
  replicas: 5
  selector:
    matchLabels:
      app: nginx
      version: v1
  template:
    metadata:
      labels:
        app: nginx
        version: v1
    spec:
      containers:
      - image: nginx-v1
        [...]
If you want traffic to be routed to the maintenance version, just change the subset statement in the VirtualService and reapply it, as shown below.
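For example, the VirtualService from above would then look like this (only the subset changes):
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: nginx-route
spec:
  hosts:
  - example.com
  gateways:
  - mygateway
  http:
  - name: nginx-route
    match:
    - uri:
        prefix: "/nginx"
    route:
    - destination:
        host: nginx.default.svc.cluster.local
        subset: maintenance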
If you want in-cluster traffic to always be sent to your v1 version for some reason, you need another VirtualService that uses the mesh gateway. Otherwise, cluster-internal traffic will be split between all workloads (v1 and maintenance).
Alternatively, you could add the mesh gateway and the cluster-internal host to the VirtualService from above (see the sketch after the next example), but then cluster-internal traffic will always behave like external traffic.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: nginx-route-in-cluster
spec:
  hosts:
  - nginx.default.svc.cluster.local
  gateways:
  - mesh
  http:
  - name: nginx-route
    match:
    - uri:
        prefix: "/nginx"
    route:
    - destination:
        host: nginx.default.svc.cluster.local
        subset: v1
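For completeness, a sketch of the combined alternative mentioned above (adding the mesh gateway and the internal host to the first VirtualService instead of creating a second one):
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: nginx-route
spec:
  hosts:
  - example.com
  - nginx.default.svc.cluster.local
  gateways:
  - mygateway
  - mesh
  http:
  - name: nginx-route
    match:
    - uri:
        prefix: "/nginx"
    route:
    - destination:
        host: nginx.default.svc.cluster.local
        subset: v1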
Furthermore, you could add more versions and test updates by sending only a portion of your traffic to the new version, for example with weighted routes as sketched below.
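A sketch of such a weighted split; the v2 subset is hypothetical and would also need to be defined in the DestinationRule:
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: nginx-route
spec:
  hosts:
  - example.com
  gateways:
  - mygateway
  http:
  - name: nginx-route
    match:
    - uri:
        prefix: "/nginx"
    route:
    - destination:
        host: nginx.default.svc.cluster.local
        subset: v1
      weight: 90
    - destination:
        host: nginx.default.svc.cluster.local
        subset: v2 # hypothetical new version subset
      weight: 10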
To get a better understanding and some more ideas about versioning with Istio, please refer to this article (it's actually quite old, but the concept is still relevant).
I'm trying to understand the Istio configuration model, but the more I read the more confused I get, especially around the hosts and host fields. The examples all use the same short name and I'm not sure whether they mean the virtual service name, the Kubernetes service hostname, or the DNS service address.
Assuming I have the following configuration:
My Kubernetes project namespace is called poc-my-ns.
Inside poc-my-ns I have my pods (both version 1 and 2), a Kubernetes route and a Kubernetes service.
The service hostname is poc-my-ns.svc.cluster.local and the route is https://poc-my-ns.orgdevcloudapps911.myorg.org.
Everything is up and running and the service selector gets all pods from all versions, as it should. (The Istio virtual service is supposed to do the final selection by version.)
The intended Istio configuration looks like this:
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: poc-my-dr
spec:
  host: poc-my-ns.svc.cluster.local # ???
  subsets:
  - name: v1
    labels:
      version: 1.0
  - name: v2
    labels:
      version: 2.0
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: poc-my-vs
spec:
  hosts:
  - poc-my-ns.svc.cluster.local # ???
  http:
  - route:
    - destination:
        host: poc-my-dr # ???
        subset: v1
      weight: 70
    - destination:
        host: poc-my-dr # ???
        subset: v2
      weight: 30
My questions are:
Does the destination rule spec/host refer to the Kubernetes service hostname?
Does the virtual service spec/hosts refer to the Kubernetes service hostname, is it the route https://poc-my-ns.orgdevcloudapps911.myorg.org, or something else?
Does the virtual service spec/http/route/destination/host refer to the destination rule name, should it point to the Kubernetes service hostname, or should it be the virtual service metadata/name?
I would really appreciate clarification.
The VirtualService and DestinationRule basically configure the envoy proxies of the istio mesh. The VirtualService defines where to route the traffic, and the DestinationRule defines what to additionally do with the traffic.
For the VS, the spec.hosts list can contain kubernetes-internal and external hosts.
Say you want to define how to route traffic for api.example.com coming from outside the kubernetes cluster through the istio-ingressgateway my-gateway into the mesh. It should be routed to the rating app in the store namespace, so the VS would look like this:
spec:
  hosts:
  - api.example.com # external host
  gateways:
  - my-gateway # the ingress-gateway
  http:
  - [...]
    route:
    - destination:
        host: rating.store.svc.cluster.local # kubernetes service
If you want to define how cluster/mesh-internal traffic is routed, you put rating.store.svc.cluster.local in the spec.hosts list, define the mesh gateway (or leave it out like you did, because mesh is the default), and route it to the rating.store.svc.cluster.local service. You also add a DR where you define subsets and route all mesh-internal traffic to subset v1.
# VS
[...]
spec:
  hosts:
  - rating.store.svc.cluster.local # cluster internal host
  gateways:
  - mesh # mesh internal gateway (default when omitted)
  http:
  - [...]
    route:
    - destination:
        host: rating.store.svc.cluster.local # cluster internal host
        subset: v1 # defined in destinationrule below
---
[...]
spec:
  host: rating.store.svc.cluster.local # cluster internal host
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
But it could also be that you want to route traffic to a cluster-external destination. In that case, destination.host would be an external FQDN, like in this example from the docs:
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: external-svc-wikipedia
spec:
  hosts:
  - wikipedia.org
  location: MESH_EXTERNAL
  ports:
  - number: 80
    name: example-http
    protocol: HTTP
  resolution: DNS
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: my-wiki-rule
spec:
  hosts:
  - wikipedia.org
  http:
  - timeout: 5s
    route:
    - destination:
        host: wikipedia.org
Think about it as "I want to route traffic from HOST_FROM to HOST_TO", where
HOST_FROM is spec.host and spec.hosts
HOST_TO is destination.host
and both can be inside the kubernetes cluster or outside.
So to answer all your questions:
It depends: if you route from/to something cluster-internal you'll use a kubernetes service fqdn; for cluster-external traffic you'll use the external target fqdn.
I highly recommend reading through the docs of VirtualService and DestinationRule where you can see several examples with explanations.
Does anyone know how to do IP whitelisting properly with an Istio AuthorizationPolicy? I was able to follow https://istio.io/latest/docs/tasks/security/authorization/authz-ingress/ to set up whitelisting on the gateway. However, is there a way to do this on a specific workload with a selector, like this:
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: app-ip-whitelisting
  namespace: foo
spec:
  selector:
    matchLabels:
      app: app1
  rules:
  - from:
    - source:
        IpBlocks:
        - xx.xx.xx.xx
I was not able to get this to work. I am using Istio 1.6.8.
I'm running Istio 1.5.6 and the following is working (whitelisting): only IP addresses in ipBlocks are allowed to reach the specified workload; other IPs get response code 403. I find the term ipBlocks confusing: it is not blocking anything. If you want to block certain IPs (blacklisting) you'll need to use notIpBlocks.
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: peke-echo-v1-ipblock
  namespace: peke-echo-v1
spec:
  selector:
    matchLabels:
      app: peke-echo-v1
      version: v1
  rules:
  - from:
    - source:
        ipBlocks:
        - 173.18.180.128
        - 173.18.191.159
        - 173.20.58.39
Note that ipBlocks is lower camelCase (your policy has IpBlocks).
Sometimes it takes a while before the policy is effective.
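For the blacklisting case mentioned above, a minimal sketch (name and address are placeholders): an ALLOW rule with notIpBlocks matches everyone except the listed addresses, so those addresses fall through to the implicit deny and get a 403.
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: peke-echo-v1-ipblacklist # placeholder name
  namespace: peke-echo-v1
spec:
  selector:
    matchLabels:
      app: peke-echo-v1
      version: v1
  action: ALLOW
  rules:
  - from:
    - source:
        notIpBlocks:
        - 203.0.113.7 # placeholder address to block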
Right now I have 3 services: A, B and C. They all run in the same namespace. I'm using an EnvoyFilter to transcode the HTTP requests to gRPC calls.
Now I want to add security for those calls, but I want each service to still allow internal communication.
So I only want to check external requests for authentication.
Right now I have the following RequestAuthentication:
apiVersion: security.istio.io/v1beta1
kind: RequestAuthentication
metadata:
  name: jwt-authentication
spec:
  selector:
    matchLabels:
      sup.security: jwt-authentication
  jwtRules:
  - issuer: "http://keycloak-http/auth/realms/supporters"
    jwksUri: "http://keycloak-http/auth/realms/supporters/protocol/openid-connect/certs"
Then I added the following AuthorizationPolicy:
apiVersion: "security.istio.io/v1beta1"
kind: "AuthorizationPolicy"
metadata:
  name: "auth-policy-deny-default"
spec:
  selector:
    matchLabels:
      sup.security: jwt-authentication
  action: DENY
  rules: []
How do I configure istio in a way that it allows intercommunication without checking for authentication?
The recommended approach in Istio is not to think from the perspective of what you want to deny, but of what you want to allow, and then deny everything else.
To deny everything else create a catch-all deny rule as shown below:
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: deny-all
  namespace: YOUR_NAMESPACE
spec: {}
Now you need to decide in which cases you want to allow requests. In your case, that would be:
All authenticated requests from within the cluster, achieved with principals: ["*"].
All authenticated requests with a valid JWT token, achieved with requestPrincipals: ["*"].
Putting those together gives the policy below:
apiVersion: "security.istio.io/v1beta1"
kind: "AuthorizationPolicy"
metadata:
  name: "allow-all-in-cluster-and-authenticated"
  namespace: YOUR_NAMESPACE
spec:
  rules:
  - from:
    - source:
        principals: ["*"]
    - source:
        requestPrincipals: ["*"]
The principals field only has a value if a workload can identify itself via a certificate (it must have the Istio proxy) during PeerAuthentication, and the requestPrincipals field is extracted from the JWT token during RequestAuthentication.
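For reference, a minimal sketch of a namespace-wide PeerAuthentication enforcing mTLS (the namespace is a placeholder), so that in-cluster workloads present the certificate identity that principals: ["*"] matches:
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: YOUR_NAMESPACE # placeholder, same namespace as the policies above
spec:
  mtls:
    mode: STRICT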
Please let me know if it doesn't work or there are tweaks needed :)
I am new to istio and was trying to set it up.
I have a question though: is istio only meant for traffic coming into the kube cluster via ingress, or can it also be used for communication between services running inside the same kube cluster?
Sorry if it is a noob question, but I am unable to find this anywhere else. Any pointers would be greatly appreciated.
Here is what I have:
1. Two different versions of a service deployed on the istio mesh:
kubectl get pods -n turbo -l component=rhea
NAME READY STATUS RESTARTS AGE
rhea-api-istio-1-58b957dd4b-cdn54 2/2 Running 0 46h
rhea-api-istio-2-5787d4ffd4-bfwwk 2/2 Running 0 46h
2. Another service deployed on the istio mesh:
kubectl get pods -n saudagar | grep readonly
saudagar-readonly-7d75c5c7d6-zvhz9 2/2 Running 0 5d
I have a kube service defined like:
apiVersion: v1
kind: Service
metadata:
  name: rhea
  labels:
    component: rhea
  namespace: turbo
spec:
  selector:
    component: rhea
  ports:
  - port: 80
    targetPort: 3000
    protocol: TCP
Destination rules:
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: rhea
spec:
  host: rhea
  subsets:
  - name: v1
    labels:
      app: rhea-api-istio-1
  - name: v2
    labels:
      app: rhea-api-istio-2
A virtual service like:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: rhea
  namespace: turbo
spec:
  hosts:
  - rhea
  http:
  - route:
    - destination:
        host: rhea
        subset: v1
What I am trying to test is circuit breaking between rhea and saudagar, and traffic routing across the 2 versions of the service.
I want to test this from inside the same kube cluster, but I am not able to achieve it. If I want to access the rhea service from the saudagar service, what endpoint should I use so that I can see the traffic routing policy applied?
Istio can be used for controlling ingress traffic (from outside into the cluster), for controlling in-cluster traffic (between services inside the cluster) and for controlling egress traffic (from the services inside the cluster to services outside the cluster).
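For your test, a sketch under the assumptions from your manifests: call rhea from saudagar via its cluster-internal service name (rhea.turbo or rhea.turbo.svc.cluster.local), so the request goes through the sidecars and picks up the VirtualService routing; for circuit breaking you could extend the rhea DestinationRule with a trafficPolicy, roughly like this (field names can differ slightly between Istio versions):
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: rhea
  namespace: turbo
spec:
  host: rhea
  trafficPolicy:
    connectionPool:
      http:
        http1MaxPendingRequests: 1
        maxRequestsPerConnection: 1
    outlierDetection:
      consecutive5xxErrors: 3 # older Istio releases call this consecutiveErrors
      interval: 30s
      baseEjectionTime: 30s
      maxEjectionPercent: 100
  subsets:
  - name: v1
    labels:
      app: rhea-api-istio-1
  - name: v2
    labels:
      app: rhea-api-istio-2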
I have a dockerized Flask application running on Kubernetes in Google Cloud Platform with Identity-Aware Proxy enabled. I can run a "Hello World" website, but when I try to use the signed JWT headers, problems occur.
In my browser I am presented with
There was a problem with your request. Error code 9
My app is like this example and I use gunicorn to run it. It seems the trouble happens on the first line:
jwt = request.headers.get('x-goog-iap-jwt-assertion')
but that just makes no sense to me. I can return a string before that line but not after. Any suggestions?
Details on the current kubernetes cluster
apiVersion: apps/v1
kind: Deployment
metadata:
  name: internal-tools-app
spec:
  selector:
    matchLabels:
      app: internal-tools
  template:
    metadata:
      labels:
        app: internal-tools
    spec:
      containers:
      - name: internal-web-app
        image: <<MY_IMAGE>>
---
apiVersion: cloud.google.com/v1beta1
kind: BackendConfig
metadata:
  name: internal-tools-backend-config
  namespace: default
spec:
  iap:
    enabled: true
    oauthclientCredentials:
      secretName: internal-tools-oauth
---
apiVersion: v1
kind: Service
metadata:
  name: internal-tools-service
  annotations:
    beta.cloud.google.com/backend-config: '{"default": "internal-tools-backend-config"}'
spec:
  type: NodePort
  selector:
    app: internal-tools
  ports:
  - name: it-first-port
    protocol: TCP
    port: 8080
    targetPort: 8080
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.global-static-ip-name: internal-tools-ip
    ingress.gcp.kubernetes.io/pre-shared-cert: "letsencrypt-internal-tools"
  name: internal-tools-ingress
spec:
  rules:
  - host: <<MY_DOMAIN>>
    http:
      paths:
      - backend:
          serviceName: internal-tools-service
          servicePort: it-first-port
EDIT
Further investigation shows
ImportError: Error loading shared library libssl.so.45: No such file or directory (needed by /usr/local/lib/python3.6/site-packages/cryptography/hazmat/bindings/_openssl.abi3.so)
when running the following
jwt.decode(
    iap_jwt, key,
    algorithms=['ES256'],
    audience=expected_audience)
I just fixed this error code tonight by deleting and recreating my frontend and Google-managed cert objects in the GCP console. It seems to happen when I decommission and repurpose a cluster and deploy my app on a brand new cluster with the same static IP address.
I got this answer from the Google Cloud Team bug tracker:
The Error code 9 occurs when multiple requests for re-authentication occur simultaneously (in particular, often caused by browsers reloading multiple windows/tabs at once). This flow currently requires for a temporary cookie flow to succeed first, and this cookie is unique to that flow. However if one flow starts before the previous one finishes, for example with multiple simultaneous refreshes in the same browser, this will cause the error you saw, and cause users to face that auth page.
You can try the options below to overcome the issue:
restart the browser
clear cookies
implement better session handling (session handlers)
https://issuetracker.google.com/issues/155005454