Istio Authorization Policy IP whitelisting

Does anyone know how to do IP whitelisting properly with an Istio authorization policy? I was able to follow https://istio.io/latest/docs/tasks/security/authorization/authz-ingress/ to set up whitelisting on the gateway. However, is there a way to do this on a specific workload with a selector? Like this:
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: app-ip-whitelisting
  namespace: foo
spec:
  selector:
    matchLabels:
      app: app1
  rules:
  - from:
    - source:
        IpBlocks:
        - xx.xx.xx.xx
I was not able to get this to work, and I am using Istio 1.6.8.

I'm running Istio 1.5.6 and the following is working (whitelisting): only IP addresses in ipBlocks are allowed to reach the specified workload; other IPs get response code 403. I find the term ipBlocks confusing: it is not blocking anything. If you want to block certain IPs (blacklisting), you'll need to use notIpBlocks.
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: peke-echo-v1-ipblock
  namespace: peke-echo-v1
spec:
  selector:
    matchLabels:
      app: peke-echo-v1
      version: v1
  rules:
  - from:
    - source:
        ipBlocks:
        - 173.18.180.128
        - 173.18.191.159
        - 173.20.58.39
Note that ipBlocks is in lower camel case (the IpBlocks in your example will not be recognized).
Sometimes it takes a while before the policy becomes effective.
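For the blacklisting case, a minimal sketch of the notIpBlocks variant, reusing the workload labels from above (the policy name is made up). With the default ALLOW action, sources in the notIpBlocks list match no allow rule and are therefore denied, while everyone else is allowed:
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: peke-echo-v1-ipblacklist # hypothetical name
  namespace: peke-echo-v1
spec:
  selector:
    matchLabels:
      app: peke-echo-v1
      version: v1
  rules:
  - from:
    - source:
        notIpBlocks: # sources in this list match no ALLOW rule and get 403
        - 173.20.58.39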

Related

Istio virtual service spec host and destination rule host

I'm trying to understand the Istio configuration model, but the more I read the more I get confused, especially around the hosts and host fields. In the examples, they all use the same short name and I'm not sure whether they mean the virtual service name, the Kubernetes service hostname or the DNS service address.
Assuming I have the following configuration:
My Kubernetes project namespace is called poc-my-ns.
Inside poc-my-ns I have my pods (both version 1 and 2), a Kubernetes route and a Kubernetes service.
The service hostname is poc-my-ns.svc.cluster.local and the route is https://poc-my-ns.orgdevcloudapps911.myorg.org.
Everything is up and running, and the service selector gets all pods from all versions as it should. (The Istio virtual service is supposed to do the final selection by version.)
The intended Istio configuration looks like this:
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: poc-my-dr
spec:
  host: poc-my-ns.svc.cluster.local # ???
  subsets:
  - name: v1
    labels:
      version: 1.0
  - name: v2
    labels:
      version: 2.0
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: poc-my-vs
spec:
  hosts:
  - poc-my-ns.svc.cluster.local # ???
  http:
  - route:
    - destination:
        host: poc-my-dr # ???
        subset: v1
      weight: 70
    - destination:
        host: poc-my-dr # ???
        subset: v2
      weight: 30
My questions are:
Does the destination rule spec/host refer to the Kubernetes service hostname?
Does the virtual service spec/hosts refer to the Kubernetes service hostname, is it the route https://poc-my-ns.orgdevcloudapps911.myorg.org, or something else?
Does the virtual service spec/http/route/destination/host refer to the destination rule name, is it supposed to point to the Kubernetes service hostname, or should it be the virtual service metadata/name?
I will really appreciate clarifications.
The VirtualService and DestinationRule basically configure the envoy-proxy of the Istio mesh. The VirtualService defines where to route the traffic to, and the DestinationRule defines what to additionally do with the traffic.
For the VS, the spec.hosts list can contain Kubernetes-internal and external hosts.
Say you want to define how to route traffic for api.example.com coming from outside the Kubernetes cluster through the istio-ingressgateway my-gateway into the mesh. It should be routed to the rating app in the store namespace, so the VS would look like this:
spec:
  hosts:
  - api.example.com # external host
  gateways:
  - my-gateway # the ingress-gateway
  http:
  - [...]
    route:
    - destination:
        host: rating.store.svc.cluster.local # kubernetes service
If you want to define how cluster/mesh-internal traffic is routed, you set rating.store.svc.cluster.local in the spec.hosts list, define the mesh gateway (or leave it out like you did, because mesh is the default), and route it to the rating.store.svc.cluster.local service. You also add a DR where you define subsets and route all mesh-internal traffic to subset v1.
# VS
[...]
spec:
  hosts:
  - rating.store.svc.cluster.local # cluster internal host
  gateways:
  - mesh # mesh internal gateway (default when omitted)
  http:
  - [...]
    route:
    - destination:
        host: rating.store.svc.cluster.local # cluster internal host
        subset: v1 # defined in the destinationrule below
---
# DR
[...]
spec:
  host: rating.store.svc.cluster.local # cluster internal host
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
But it could also be that you want to route traffic to a cluster-external destination. In that case, destination.host would be an external FQDN, as in this example from the docs:
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: external-svc-wikipedia
spec:
  hosts:
  - wikipedia.org
  location: MESH_EXTERNAL
  ports:
  - number: 80
    name: example-http
    protocol: HTTP
  resolution: DNS
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: my-wiki-rule
spec:
  hosts:
  - wikipedia.org
  http:
  - timeout: 5s
    route:
    - destination:
        host: wikipedia.org
Think of it as "I want to route traffic from HOST_FROM to HOST_TO", where
HOST_FROM is spec.hosts (in the VirtualService) and spec.host (in the DestinationRule),
HOST_TO is destination.host,
and both can be inside or outside the Kubernetes cluster.
So to answer all your questions:
It depends: if you want to route cluster-internal traffic, you'll use a Kubernetes service FQDN; for cluster-external traffic, you'll use the external target FQDN.
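Applied to the configuration from the question, a minimal sketch would therefore look like this (keeping the hostname poc-my-ns.svc.cluster.local exactly as given in the question). Note that destination.host is the service FQDN again, not the DestinationRule's metadata/name: a DestinationRule is matched to traffic via its own spec.host, never referenced by name:
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: poc-my-dr
spec:
  host: poc-my-ns.svc.cluster.local # the Kubernetes service FQDN
  subsets:
  - name: v1
    labels:
      version: 1.0
  - name: v2
    labels:
      version: 2.0
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: poc-my-vs
spec:
  hosts:
  - poc-my-ns.svc.cluster.local # the service FQDN, for mesh-internal traffic
  http:
  - route:
    - destination:
        host: poc-my-ns.svc.cluster.local # the service FQDN, NOT the DestinationRule name
        subset: v1
      weight: 70
    - destination:
        host: poc-my-ns.svc.cluster.local
        subset: v2
      weight: 30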
I highly recommend reading through the docs of VirtualService and DestinationRule where you can see several examples with explanations.

What is spec/selector label in an Istio Gateway configuration?

I am new to the Istio Gateway and my goal is to create an ingress gateway for a service deployed on K8s.
I am a bit confused by the Gateway example in the official documentation: https://istio.io/latest/docs/concepts/traffic-management/#gateway-example.
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: ext-host-gwy
spec:
  selector:
    app: my-gateway-controller
  servers:
  - port:
      number: 443
      name: https
      protocol: HTTPS
    hosts:
    - ext-host.example.com
    tls:
      mode: SIMPLE
      credentialName: ext-host-cert
In this example, what is app: my-gateway-controller under spec/selector? Is additional configuration/deployment needed for this my-gateway-controller?
I tried searching for "my-gateway-controller" in the rest of the document, but didn't find further explanation.
It's usually the Istio ingress gateway pod's label that needs to be given in the selector, as external traffic will enter through the ingress gateway pod, unless the name of the ingress gateway was changed during the Istio installation.
Specify it as below in the Gateway definition, which will route the traffic to the application:
spec:
  selector:
    istio: ingressgateway
Istio can be installed with different options. There are different profiles that can be used for testing, for default scenarios and for custom setups. One option is to configure an ingress-controller (but you could also have none and use a different, non-Istio ingress-controller).
Depending on your setup you can have no ingress-gateway, the default ingress-gateway or a custom gateway.
The default gateway has a label called istio: ingressgateway. You can find it in most of the example/getting-started docs, e.g. in how to set up a secure ingress.
Here the Gateway looks like this:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: mygateway
spec:
  selector:
    istio: ingressgateway # use istio default ingress gateway
  [...]
The other option would be to set up a second ingress-gateway that might have a different name. You can, for example, use the IstioOperator manifest to configure this.
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  components:
    ingressGateways:
    - enabled: true
      name: my-gateway-controller
That ingress-gateway pod will get a label of app: my-gateway-controller. This label can then be used as it has been in the example you posted. So you can check all the ingress-gateway pods you have and choose the label you need.
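For example, assuming your gateway pods run in the istio-system namespace, you can list them together with their labels:
kubectl get pods -n istio-system --show-labels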
If you went with the default setup, you probably have the default ingress-gateway and can simply change the selector to istio: ingressgateway.
For a start I would recommend sticking with the tasks section for configuring your setup, because it uses the default Istio setup most people have. If you need more details or something special, you can always check the docs pages.

Override all existing traffic routes

I have an nginx container that handles HTML content & traffic routing via a VirtualService.
I have a separate maintenance nginx container I want to display when I'm doing maintenance, and on those occasions I want all traffic to be routed to this maintenance container rather than the normal one described in the first paragraph. I don't really want to tweak/patch the original traffic routes, so I'm looking for a way to have some form of override traffic-routing rule.
From what I have read, the order of rules is based on the creation date, so that didn't really help me.
So if anyone has any ideas how I can force all traffic to be routed to a specific "maintenance" service, I would really appreciate your thoughts.
I would recommend setting a version label and working with that.
First, create a DestinationRule to define your different versions and how they are identified (by labels).
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: nginx-versions
spec:
  host: nginx.default.svc.cluster.local
  subsets:
  - name: maintenance
    labels:
      version: maintenance
  - name: v1
    labels:
      version: v1
Next, set up your route in the VirtualService to point to v1.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: nginx-route
spec:
  hosts:
  - example.com
  gateways:
  - mygateway
  http:
  - name: nginx-route
    match:
    - uri:
        prefix: "/nginx"
    route:
    - destination:
        host: nginx.default.svc.cluster.local
        subset: v1
Now you need one Service and the two Deployments.
The selector in the Service needs to match both Deployments. In a plain Kubernetes setup this would mean that traffic is routed between all workloads of both Deployments. But because of Istio and the version subsets, traffic is only sent to the currently configured version.
The Deployment with the maintenance version needs to be labeled with version: maintenance and the actual version needs to be labeled with version: v1.
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-maintenance
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
        version: maintenance
    spec:
      containers:
      - image: nginx-maintenance
        [...]
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-v1
spec:
  replicas: 5
  template:
    metadata:
      labels:
        app: nginx
        version: v1
    spec:
      containers:
      - image: nginx-v1
        [...]
If you want traffic to be routed to the maintenance version, just change the subset statement in the VirtualService and reapply it.
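For example, during the maintenance window the route section of the nginx-route VirtualService above would change to:
    route:
    - destination:
        host: nginx.default.svc.cluster.local
        subset: maintenance # switched from v1 for the maintenance window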
If you want in-cluster traffic to always be sent to your v1 version for some reason, you need another VirtualService that uses the mesh gateway. Otherwise cluster-internal traffic will be divided between all workloads (v1 and maintenance).
Alternatively you could add the mesh gateway and the cluster-internal host to the VirtualService from above, but then cluster-internal traffic will always behave like external traffic.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: nginx-route-in-cluster
spec:
  hosts:
  - nginx.default.svc.cluster.local
  gateways:
  - mesh
  http:
  - name: nginx-route
    match:
    - uri:
        prefix: "/nginx"
    route:
    - destination:
        host: nginx.default.svc.cluster.local
        subset: v1
Furthermore, you could even use more versions and test updates by sending only a portion of your traffic to the new version.
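As a sketch of that idea, assuming a hypothetical v2 subset had been added to the nginx-versions DestinationRule, weighted destinations could split the traffic 90/10:
    route:
    - destination:
        host: nginx.default.svc.cluster.local
        subset: v1
      weight: 90
    - destination:
        host: nginx.default.svc.cluster.local
        subset: v2 # hypothetical new version under test
      weight: 10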
To get a better understanding and some more ideas about versioning using Istio, please refer to this article (it's actually quite old, but the concept is still relevant).

Istio AuthorizationPolicy only for external requests

Right now I have 3 services: A, B and C. They all run in the same namespace. I'm making use of an EnvoyFilter to transcode the HTTP requests to gRPC calls.
Now I want to add security for those calls, but I want each service to allow internal communication as well.
So I only want to check external requests for authentication.
Right now I have the following RequestAuthentication:
apiVersion: security.istio.io/v1beta1
kind: RequestAuthentication
metadata:
  name: jwt-authentication
spec:
  selector:
    matchLabels:
      sup.security: jwt-authentication
  jwtRules:
  - issuer: "http://keycloak-http/auth/realms/supporters"
    jwksUri: "http://keycloak-http/auth/realms/supporters/protocol/openid-connect/certs"
Then I added the following AuthorizationPolicy:
apiVersion: "security.istio.io/v1beta1"
kind: "AuthorizationPolicy"
metadata:
  name: "auth-policy-deny-default"
spec:
  selector:
    matchLabels:
      sup.security: jwt-authentication
  action: DENY
  rules: []
How do I configure Istio in a way that allows intercommunication without checking for authentication?
The recommended approach in Istio is not to think from the perspective of what you want to deny, but of what you want to allow, and then deny everything else.
To deny everything else, create a catch-all deny rule as shown below:
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: deny-all
  namespace: YOUR_NAMESPACE
spec: {}
Now you need to decide in which cases you want to allow requests. In your case, that would be:
All authenticated requests from within the cluster, achieved with principals: ["*"].
All authenticated requests with a valid JWT token, achieved with requestPrincipals: ["*"].
Putting those together gives the policy below:
apiVersion: "security.istio.io/v1beta1"
kind: "AuthorizationPolicy"
metadata:
  name: "allow-all-in-cluster-and-authenticated"
  namespace: YOUR_NAMESPACE
spec:
  rules:
  - from:
    - source:
        principals: ["*"]
    - source:
        requestPrincipals: ["*"]
The field principals only has a value if a workload can identify itself via a certificate (it must have the Istio proxy) during PeerAuthentication. The field requestPrincipals is extracted from the JWT token during RequestAuthentication.
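For reference, a minimal sketch of a namespace-wide PeerAuthentication that enforces mTLS, so that in-cluster workloads actually present the certificate the principals field relies on (YOUR_NAMESPACE is the same placeholder as above):
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: YOUR_NAMESPACE
spec:
  mtls:
    mode: STRICT # workloads must present a mesh certificate, which populates source.principals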
Please let me know if it doesn't work or there are tweaks needed :)

Istio allow only specific IP CIDR and deny rest

I have a requirement wherein I would like to allow certain CIDR ranges to access my service; everything else should be denied.
I have tried the Istio IP whitelisting/blacklisting as mentioned in the official Istio documentation.
For example, 10.0.0.2/16 should be allowed and the rest should be denied. This doesn't seem to work:
apiVersion: config.istio.io/v1alpha2
kind: handler
metadata:
  name: whitelistip
spec:
  compiledAdapter: listchecker
  params:
    # providerUrl: ordinarily black and white lists are maintained
    # externally and fetched asynchronously using the providerUrl.
    overrides: ["10.0.0.2/16"] # overrides provide a static list
    blacklist: true
    entryType: IP_ADDRESSES
---
apiVersion: config.istio.io/v1alpha2
kind: instance
metadata:
  name: sourceip
spec:
So, basically, Istio 1.5.0 was released a few days ago, and if we check the Istio docs, white/black lists are now deprecated:
Denials and White/Black Listing (Deprecated)
But there is actually good news, because there is a new example for authorization on the ingress gateway which should answer your question:
I am not able to get the real client IP, hence not able to block/allow using an authorization policy or IP-based whitelisting.
Based on this new example, which I tested myself, if you want to see your source IP you have to change the istio-ingressgateway externalTrafficPolicy from Cluster to Local:
Update the ingress gateway to set externalTrafficPolicy: Local to preserve the original client source IP on the ingress gateway using the following command:
kubectl patch svc istio-ingressgateway -n istio-system -p '{"spec":{"externalTrafficPolicy":"Local"}}'
And the allow example:
The following example creates the authorization policy ingress-policy for the Istio ingress gateway. The policy sets the action field to ALLOW so that the IP addresses specified in ipBlocks can access the ingress gateway. IP addresses not in the list will be denied. ipBlocks supports both single IP addresses and CIDR notation.
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: ingress-policy
  namespace: istio-system
spec:
  selector:
    matchLabels:
      app: istio-ingressgateway
  action: ALLOW
  rules:
  - from:
    - source:
        ipBlocks: ["1.2.3.4", "5.6.7.0/24", "$CLIENT_IP"]
I hope this answers your question. Let me know if you have any more questions.
Another solution in Istio 1.5:
Configure external traffic:
kubectl patch svc istio-ingressgateway -n istio-system -p '{"spec":{"externalTrafficPolicy":"Local"}}'
Or, in a Helm installation:
--set gateways.istio-ingressgateway.externalTrafficPolicy=Local
And then you can use it in any namespace like this:
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: whitelist
  namespace: foo
spec:
  action: ALLOW
  rules:
  - from:
    - source:
        namespaces: ["istio-system"]
    when:
    - key: request.headers[X-Envoy-External-Address]
      values: ["1.2.3.4/24"] # IP to allow
  selector:
    matchLabels:
      app: httpbin
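Either way, a quick sanity check (assuming the httpbin service is exposed through the ingress gateway and $INGRESS_HOST points at its external address) is to compare status codes from an allowed and a non-allowed source:
curl -s -o /dev/null -w "%{http_code}\n" http://$INGRESS_HOST/
# expect 200 from a whitelisted source IP and 403 from anything else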