Create a custom sidecar and inject it in the Istio service mesh

I am new to the world of sidecars and Istio and have been reading about this for around a week, but I still can't find a definitive answer.
First of all, is it possible to inject a custom sidecar using Istio? The functionality I want to achieve is: in the request headers I will receive two JWT tokens, one for the issuer (nonce) and one for the sender (pop). I need to verify that both tokens are valid; if they are, I allow access to my microservice, otherwise I reject the request straight away.
To achieve this I have created a sidecar, and now I want to deploy it using Istio, but I can't find a way to do it.
What I am able to achieve is the automatic sidecar injection that happens as soon as I install my containers. Where I am stuck is getting my custom sidecar injected by Istio.
Let me know if anyone can point me in the right direction. Thank you.

You can do this in Istio with a custom sidecar injection template.
For example, to override the default istio-proxy sidecar template, just create an overlay for the template in the IstioOperator:
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  name: istio
spec:
  values:
    sidecarInjectorWebhook:
      templates:
        custom: |
          spec:
            containers:
            - name: istio-proxy
              env:
              - name: GREETING
                value: hello-world
With that, the proxy gets the extra GREETING environment variable. You can also inject a second/different custom sidecar. First add it to the IstioOperator manifest:
[...]
  values:
    sidecarInjectorWebhook:
      templates:
        custom: |
          spec:
            containers:
            - name: my-custom-container
              image: my-custom-image
Now you can control which sidecar gets injected by annotating the pod with the inject.istio.io/templates annotation.
To replace the default istio-proxy sidecar, set it to inject.istio.io/templates=custom.
To inject it as a secondary sidecar, set it to inject.istio.io/templates=sidecar,custom, where sidecar is the default Istio sidecar template.
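As a minimal sketch of the replacement case, the annotation sits in the pod template of the workload; the deployment name, labels, and image below are placeholders, not values from the question:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                          # placeholder name
spec:
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
      annotations:
        inject.istio.io/templates: custom   # only the custom template, replaces istio-proxy
    spec:
      containers:
      - name: my-app
        image: my-app-image             # placeholder image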
You can check the istio-sidecar-injector configmap in the istio-system namespace to inspect the default config and also to verify your changes.
For more on that, see the docs.
Note that the feature is currently still experimental.
Edit
Here is an example deployment that would utilize a second sidecar. Note the annotation in the pod template:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
      annotations:
        inject.istio.io/templates: sidecar,custom # for custom sidecar
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80

Related

What is spec/selector label in an Istio Gateway configuration?

I am new to Istio Gateway and my goal is to create an Ingress Gateway for a service deployed on K8s.
I am a bit confused by the Gateway example in the official documentation: https://istio.io/latest/docs/concepts/traffic-management/#gateway-example.
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: ext-host-gwy
spec:
  selector:
    app: my-gateway-controller
  servers:
  - port:
      number: 443
      name: https
      protocol: HTTPS
    hosts:
    - ext-host.example.com
    tls:
      mode: SIMPLE
      credentialName: ext-host-cert
In this example, what is app: my-gateway-controller under spec/selector? Is there additional configuration/deployment needed for this my-gateway-controller?
I tried searching "my-gateway-controller" in the rest of the document, but didn't find further explanation.
It is usually the label of the Istio ingress gateway pod that needs to be given in the selector, since external traffic enters through the ingress gateway pod, unless the name of the ingress gateway was changed during the Istio installation.
Specify it as below in the Gateway definition, which will route the traffic to the application:
spec:
  selector:
    istio: ingressgateway
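To actually route traffic to the application, such a Gateway is typically paired with a VirtualService. A minimal sketch could look like the following; the gateway name, host, and service name are placeholders, not values from the question:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: my-gateway                     # placeholder name
spec:
  selector:
    istio: ingressgateway              # label of the default Istio ingress gateway pod
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "app.example.com"                # placeholder host
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: my-app-route                   # placeholder name
spec:
  hosts:
  - "app.example.com"
  gateways:
  - my-gateway
  http:
  - route:
    - destination:
        host: my-app.default.svc.cluster.local   # placeholder service
        port:
          number: 8080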
Istio can be installed with different options. There are different profiles that can be used for testing, for default scenarios, and for custom setups. One option is to configure an ingress gateway (but you could also have none and use a different, non-Istio ingress controller).
Depending on your setup you can have no ingress gateway, the default ingress gateway, or a custom gateway.
The default gateway has the label istio: ingressgateway. You can find it in most of the examples/getting-started docs, e.g. in how to set up a secure ingress.
There the Gateway looks like this:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: mygateway
spec:
  selector:
    istio: ingressgateway # use istio default ingress gateway
[...]
The other option would be to set up a second ingress gateway that might have a different name. You can, for example, use the IstioOperator manifest to configure this.
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  components:
    ingressGateways:
    - enabled: true
      name: my-gateway-controller
That ingress gateway pod will get a label of app: my-gateway-controller. This label can then be used as in the example you posted. So you can check all the ingress gateway pods you have and choose the label you need.
If you went with the default setup, you probably have the default ingress gateway and can simply change the selector to istio: ingressgateway.
To start with, I would recommend sticking to the tasks section for configuring your setup, because it uses the default Istio setup most people have. If you need more details or something special, you can always check the docs pages.

Override all existing traffic routes

I have an nginx container that serves HTML content, with traffic routing via a VirtualService.
I have a separate maintenance nginx container that I want to display when I'm doing maintenance; on those occasions I want all traffic routed to this maintenance container rather than the normal one described in the first paragraph. I don't really want to tweak/patch the original traffic routes, so I'm looking for some form of override traffic routing rule.
From what I have read, the order of rules is based on the creation date, so that didn't really help me.
So if anyone has any ideas on how I can force all traffic to be routed to a specific "maintenance" service, I would really appreciate your thoughts.
I would recommend setting a version label and working with that.
First create a DestinationRule to define your different versions and how they are identified (by labels).
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: nginx-versions
spec:
  host: nginx.default.svc.cluster.local
  subsets:
  - name: maintenance
    labels:
      version: maintenance
  - name: v1
    labels:
      version: v1
Next, set up your route in the VirtualService to point to v1.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: nginx-route
spec:
  hosts:
  - example.com
  gateways:
  - mygateway
  http:
  - name: nginx-route
    match:
    - uri:
        prefix: "/nginx"
    route:
    - destination:
        host: nginx.default.svc.cluster.local
        subset: v1
Now you need one Service and the two Deployments.
The selector in the Service needs to match both Deployments. In a plain Kubernetes setup this would mean traffic gets routed across all workloads of both Deployments, but because of Istio and the version setup, traffic is only sent to the currently configured version.
The Deployment with the maintenance version needs to be labeled with version: maintenance and the actual version needs to be labeled with version: v1.
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-maintenance
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
      version: maintenance
  template:
    metadata:
      labels:
        app: nginx
        version: maintenance
    spec:
      containers:
      - image: nginx-maintenance
        [...]
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-v1
spec:
  replicas: 5
  selector:
    matchLabels:
      app: nginx
      version: v1
  template:
    metadata:
      labels:
        app: nginx
        version: v1
    spec:
      containers:
      - image: nginx-v1
        [...]
If you want the traffic to be routed to the maintenance version, just change the subset statement in the VirtualService and reapply it.
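As a sketch, the route section of the VirtualService above would then read (only the subset changes):
    route:
    - destination:
        host: nginx.default.svc.cluster.local
        subset: maintenance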
If you want in-cluster traffic to always be sent to your v1 version for some reason, you need another VirtualService that uses the mesh gateway. Otherwise, cluster-internal traffic will be divided between all workloads (v1 and maintenance).
Alternatively, you could add the mesh gateway and the host to the VirtualService from above, but then cluster-internal traffic will always behave like external traffic.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: nginx-route-in-cluster
spec:
  hosts:
  - nginx.default.svc.cluster.local
  gateways:
  - mesh
  http:
  - name: nginx-route
    match:
    - uri:
        prefix: "/nginx"
    route:
    - destination:
        host: nginx.default.svc.cluster.local
        subset: v1
Furthermore, you could even use more versions and test updates by sending only a portion of your traffic to the new version.
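As a rough sketch of such weighted routing, assuming a hypothetical v2 subset were added to the DestinationRule above, the route section could look like this:
    route:
    - destination:
        host: nginx.default.svc.cluster.local
        subset: v1
      weight: 90
    - destination:
        host: nginx.default.svc.cluster.local
        subset: v2      # hypothetical new version, not defined in the example above
      weight: 10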
To get a better understanding and some more ideas about versioning with Istio, please refer to this article (it's actually quite old, but the concept is still relevant).

Istio Authorization Policy IP whitelisting

Does anyone know how to do IP whitelisting properly with an Istio AuthorizationPolicy? I was able to follow https://istio.io/latest/docs/tasks/security/authorization/authz-ingress/ to set up whitelisting on the gateway. However, is there a way to do this on a specific workload with a selector, like this:
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: app-ip-whitelisting
  namespace: foo
spec:
  selector:
    matchLabels:
      app: app1
  rules:
  - from:
    - source:
        IpBlocks:
        - xx.xx.xx.xx
I was not able to get this to work, and I am using Istio 1.6.8.
I'm running Istio 1.5.6 and the following is working (whitelisting): only IP addresses in ipBlocks are allowed to reach the specified workload, other IPs get response code 403. I find the term ipBlocks confusing: it is not blocking anything. If you want to block certain IPs (blacklisting), you'll need to use notIpBlocks instead (see the sketch at the end of this answer).
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: peke-echo-v1-ipblock
  namespace: peke-echo-v1
spec:
  selector:
    matchLabels:
      app: peke-echo-v1
      version: v1
  rules:
  - from:
    - source:
        ipBlocks:
        - 173.18.180.128
        - 173.18.191.159
        - 173.20.58.39
Note that ipBlocks is in lower camel case.
Sometimes it takes a while before the policy becomes effective.
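For the blacklisting case mentioned above, a minimal sketch could look like this (the policy name and IP are placeholders, the selector reuses the labels from the question): sources whose IP is not in the list match the allow rule, so the listed IPs are effectively rejected.
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: app-ip-blacklisting      # placeholder name
  namespace: foo
spec:
  selector:
    matchLabels:
      app: app1
  rules:
  - from:
    - source:
        notIpBlocks:
        - 10.0.0.1               # placeholder IP to block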

GKE with Identity Aware Proxy returns Error code 9

I have a dockerized Flask application running on Kubernetes in Google Cloud Platform with Identity-Aware Proxy enabled. I can run a "Hello World" website, but when I try to use signed JWT headers, problems occur.
In my browser I am presented with
There was a problem with your request. Error code 9
My app is like this example and I use gunicorn to run it. It seems that the trouble happens in the first line
jwt = request.headers.get('x-goog-iap-jwt-assertion')
but that just makes no sense to me. I can return a string before that line but not after. Any suggestions?
Details on the current Kubernetes cluster:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: internal-tools-app
spec:
  selector:
    matchLabels:
      app: internal-tools
  template:
    metadata:
      labels:
        app: internal-tools
    spec:
      containers:
      - name: internal-web-app
        image: <<MY_IMAGE>>
---
apiVersion: cloud.google.com/v1beta1
kind: BackendConfig
metadata:
  name: internal-tools-backend-config
  namespace: default
spec:
  iap:
    enabled: true
    oauthclientCredentials:
      secretName: internal-tools-oauth
---
apiVersion: v1
kind: Service
metadata:
  name: internal-tools-service
  annotations:
    beta.cloud.google.com/backend-config: '{"default": "internal-tools-backend-config"}'
spec:
  type: NodePort
  selector:
    app: internal-tools
  ports:
  - name: it-first-port
    protocol: TCP
    port: 8080
    targetPort: 8080
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.global-static-ip-name: internal-tools-ip
    ingress.gcp.kubernetes.io/pre-shared-cert: "letsencrypt-internal-tools"
  name: internal-tools-ingress
spec:
  rules:
  - host: <<MY_DOMAIN>>
    http:
      paths:
      - backend:
          serviceName: internal-tools-service
          servicePort: it-first-port
EDIT
Further investigations show
ImportError: Error loading shared library libssl.so.45: No such file or directory (needed by /usr/local/lib/python3.6/site-packages/cryptography/hazmat/bindings/_openssl.abi3.so)
when running the following
jwt.decode(
    iap_jwt, key,
    algorithms=['ES256'],
    audience=expected_audience)
I just fixed this error code tonight by deleting and recreating my frontend and Google-managed cert objects in the GCP console. It seems to happen when a cluster is decommissioned and repurposed and the app is deployed on a brand new cluster with the same static IP address.
I got this answer from the Google Cloud Team bug tracker:
The Error code 9 occurs when multiple requests for re-authentication occur simultaneously (in particular, often caused by browsers reloading multiple windows/tabs at once). This flow currently requires for a temporary cookie flow to succeed first, and this cookie is unique to that flow. However if one flow starts before the previous one finishes, for example with multiple simultaneous refreshes in the same browser, this will cause the error you saw, and cause users to face that auth page.
You can try the options below to overcome the issue:
- restart the browser
- clear cookies
- implement better session handling (session handlers)
https://issuetracker.google.com/issues/155005454

How to integrate Kubernetes with existing AWS ALB?

I want to use an existing AWS ALB for my Kubernetes setup, i.e. I don't want the alb-ingress-controller to create or update any existing AWS resources such as target groups, roles, etc.
How can I make the ALB communicate with the Kubernetes cluster, passing requests to the existing services and getting the responses back to the ALB to display in the front end?
I tried this, but it creates a new ALB for a new Ingress resource. I want to use the existing one.
You basically have to open a node port on the instances where the Kubernetes Pods are running and then let the ALB point to those instances. There are two ways of configuring this: either via Pods or via Services.
To configure it via a Service you need to specify .spec.ports[].nodePort. In the default setup the port needs to be in the range 30000-32767. This port gets opened on every node and will be redirected to the specified Pods (which might be on any other node). This has the downside of an extra hop, which can also cost money in a multi-AZ setup. An example Service could look like this:
---
apiVersion: v1
kind: Service
metadata:
  name: my-frontend
  labels:
    app: my-frontend
spec:
  type: NodePort
  selector:
    app: my-frontend
  ports:
  - port: 8080
    nodePort: 30082
To configure it via a Pod you need to specify .spec.containers[].ports[].hostPort. This can be any port number, but it has to be free on the node where the Pod gets scheduled. This means that there can only be one Pod per node and it might conflict with ports from other applications. This has the downside that not all instances will be healthy from an ALB point-of-view, since only nodes with that Pod accept traffic. You could add a sidecar container which registers the current node on the ALB, but this would mean additional complexity. An example could look like this:
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-frontend
  labels:
    app: my-frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-frontend
  template:
    metadata:
      name: my-frontend
      labels:
        app: my-frontend
    spec:
      containers:
      - name: nginx
        image: "nginx"
        ports:
        - containerPort: 80
          hostPort: 8080