I have a Kubernetes app with the Istio sidecar set up. Is it possible to configure Istio mTLS for a subset of APIs and simple TLS for the others?
As I mentioned in the comments, you should be able to do that with DestinationRules, since the TLS settings mode lets you change the TLS/mTLS behaviour per host.
Take a look at the examples below from the documentation:
For example, the following rule configures a client to use mutual TLS for connections to an upstream database cluster.
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: db-mtls
spec:
  host: mydbserver.prod.svc.cluster.local
  trafficPolicy:
    tls:
      mode: MUTUAL
      clientCertificate: /etc/certs/myclientcert.pem
      privateKey: /etc/certs/client_private_key.pem
      caCertificates: /etc/certs/rootcacerts.pem
The following rule configures a client to use TLS when talking to an external service whose domain matches *.foo.com.
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: tls-foo
spec:
  host: "*.foo.com"
  trafficPolicy:
    tls:
      mode: SIMPLE
The following rule configures a client to use Istio mutual TLS when talking to the ratings service.
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: ratings-istio-mtls
spec:
  host: ratings.prod.svc.cluster.local
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL
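Applied to your case, a minimal sketch (the host names myapp-internal.default.svc.cluster.local and external-api.example.com are placeholders, not from your setup): one DestinationRule keeps Istio mutual TLS for an in-mesh API, while another uses simple TLS for a host that only terminates plain TLS.
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: myapp-internal-mtls
spec:
  host: myapp-internal.default.svc.cluster.local
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: external-api-simple-tls
spec:
  host: external-api.example.com
  trafficPolicy:
    tls:
      mode: SIMPLE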
I have an nginx container that serves HTML content, with traffic routing handled via a VirtualService.
I have a separate maintenance nginx container I want to display (when I'm doing maintenance), and on this occasion I want all traffic to be routed to this maintenance container rather than the normal one described in the first paragraph. I don't really want to have to tweak/patch the original traffic routes, so I'm looking for some form of overriding traffic-routing rule.
From what I have read, the order of rules is based on the creation date, so that didn't really help me.
So if anyone has any ideas on how I can force all traffic to be routed to a specific "maintenance" service, I would really appreciate your thoughts.
I would recommend setting a version label and working with that.
First, create a DestinationRule to define your different versions and how they are identified (by labels).
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: nginx-versions
spec:
  host: nginx.default.svc.cluster.local
  subsets:
  - name: maintenance
    labels:
      version: maintenance
  - name: v1
    labels:
      version: v1
Next, set up your route in the VirtualService to point to v1.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: nginx-route
spec:
  hosts:
  - example.com
  gateways:
  - mygateway
  http:
  - name: nginx-route
    match:
    - uri:
        prefix: "/nginx"
    route:
    - destination:
        host: nginx.default.svc.cluster.local
        subset: v1
Now you need one Service and the two Deployments.
The selector in the Service needs to match the pods of both Deployments. In a plain Kubernetes setup this would mean that traffic is balanced across the workloads of both Deployments, but because of Istio and the version subsets, traffic is only sent to the currently configured version.
The Deployment with the maintenance version needs to be labeled with version: maintenance and the actual version needs to be labeled with version: v1.
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  selector:
    app: nginx
  ports:
  - name: http
    port: 80
    targetPort: 80  # assumed container port; adjust to your nginx image
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-maintenance
spec:
  replicas: 2
  selector:         # required in apps/v1; must match the template labels
    matchLabels:
      app: nginx
      version: maintenance
  template:
    metadata:
      labels:
        app: nginx
        version: maintenance
    spec:
      containers:
      - image: nginx-maintenance
        [...]
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-v1
spec:
  replicas: 5
  selector:         # required in apps/v1; must match the template labels
    matchLabels:
      app: nginx
      version: v1
  template:
    metadata:
      labels:
        app: nginx
        version: v1
    spec:
      containers:
      - image: nginx-v1
        [...]
If you want traffic to be routed to the maintenance version, just change the subset in the VirtualService to maintenance and reapply it.
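For example, the reapplied VirtualService would be identical to the one above, with only the subset changed:
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: nginx-route
spec:
  hosts:
  - example.com
  gateways:
  - mygateway
  http:
  - name: nginx-route
    match:
    - uri:
        prefix: "/nginx"
    route:
    - destination:
        host: nginx.default.svc.cluster.local
        subset: maintenance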
If for some reason you want in-cluster traffic to always be sent to your v1 version, you need another VirtualService that uses the mesh gateway. Otherwise, cluster-internal traffic will be divided between all workloads (v1 and maintenance).
Alternatively, you could add the mesh gateway and the host to the VirtualService above, but then cluster-internal traffic will always behave like external traffic.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: nginx-route-in-cluster
spec:
  hosts:
  - nginx.default.svc.cluster.local
  gateways:
  - mesh
  http:
  - name: nginx-route
    match:
    - uri:
        prefix: "/nginx"
    route:
    - destination:
        host: nginx.default.svc.cluster.local
        subset: v1
Furthermore, you could even use more versions and test updates by sending only a portion of your traffic to the new version, as sketched below.
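A sketch of such a weighted split, based on the VirtualService above (the v2 subset and the 90/10 split are made-up examples; v2 would have to be defined in the DestinationRule just like v1):
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: nginx-route
spec:
  hosts:
  - example.com
  gateways:
  - mygateway
  http:
  - name: nginx-route
    match:
    - uri:
        prefix: "/nginx"
    route:
    - destination:
        host: nginx.default.svc.cluster.local
        subset: v1
      weight: 90
    - destination:
        host: nginx.default.svc.cluster.local
        subset: v2
      weight: 10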
To get a better understanding and some more ideas about versioning with Istio, please refer to this article (it's actually quite old, but the concept is still relevant).
Does anyone know how to do IP whitelisting properly with an Istio AuthorizationPolicy? I was able to follow https://istio.io/latest/docs/tasks/security/authorization/authz-ingress/ to set up whitelisting on the gateway. However, is there a way to do this on a specific workload with a selector, like this:
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: app-ip-whitelisting
  namespace: foo
spec:
  selector:
    matchLabels:
      app: app1
  rules:
  - from:
    - source:
        IpBlocks:
        - xx.xx.xx.xx
I was not able to get this to work, and I am using Istio 1.6.8.
I'm running Istio 1.5.6 and the following is working (whitelisting): only IP addresses in ipBlocks are allowed to call the specified workload; other IPs get response code 403. I find the term ipBlocks confusing: it is not blocking anything. If you want to block certain IPs (blacklisting), you'll need to use notIpBlocks, as sketched after the example below.
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: peke-echo-v1-ipblock
  namespace: peke-echo-v1
spec:
  selector:
    matchLabels:
      app: peke-echo-v1
      version: v1
  rules:
  - from:
    - source:
        ipBlocks:
        - 173.18.180.128
        - 173.18.191.159
        - 173.20.58.39
Note that ipBlocks is written in lower camelCase (your policy uses IpBlocks).
Sometimes it takes a while before the policy becomes effective.
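For the blacklisting case mentioned above, a minimal sketch using notIpBlocks (the policy name and the IP are made-up examples):
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: peke-echo-v1-ipblacklist
  namespace: peke-echo-v1
spec:
  selector:
    matchLabels:
      app: peke-echo-v1
      version: v1
  rules:
  - from:
    - source:
        notIpBlocks:
        - 203.0.113.7
This is still an ALLOW policy: requests from any source except the listed IPs match the rule and are allowed, while requests from the listed IPs match no ALLOW rule and are therefore rejected.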
Right now I have 3 services: A, B and C. They are all running in the same namespace. I'm making use of an EnvoyFilter to transcode the HTTP requests into gRPC calls.
Now I want to add security for those calls, but I want each service to still allow internal communication.
So I only want to check external requests for authentication.
Right now I have the following RequestAuthentication:
apiVersion: security.istio.io/v1beta1
kind: RequestAuthentication
metadata:
  name: jwt-authentication
spec:
  selector:
    matchLabels:
      sup.security: jwt-authentication
  jwtRules:
  - issuer: "http://keycloak-http/auth/realms/supporters"
    jwksUri: "http://keycloak-http/auth/realms/supporters/protocol/openid-connect/certs"
Then I added the following AuthorizationPolicy:
apiVersion: "security.istio.io/v1beta1"
kind: "AuthorizationPolicy"
metadata:
  name: "auth-policy-deny-default"
spec:
  selector:
    matchLabels:
      sup.security: jwt-authentication
  action: DENY
  rules: []
How do I configure istio in a way that it allows intercommunication without checking for authentication?
The recommended approach in Istio is not to think from the perspective of what you want to deny, but of what you want to allow, and then deny everything else.
To deny everything else create a catch-all deny rule as shown below:
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: deny-all
  namespace: YOUR_NAMESPACE
spec:
  {}
Now you need to decide in which cases you want to allow requests. In your case, that would be:
All authenticated requests from within the cluster, achieved with principals: ["*"].
All authenticated requests with a valid JWT token, achieved with requestPrincipals: ["*"].
Putting those together gives the policy below:
apiVersion: "security.istio.io/v1beta1"
kind: "AuthorizationPolicy"
metadata:
  name: "allow-all-in-cluster-and-authenticated"
  namespace: YOUR_NAMESPACE
spec:
  rules:
  - from:
    - source:
        principals: ["*"]
    - source:
        requestPrincipals: ["*"]
The field principals only has a value if the workload can identify itself via a certificate during peer authentication (it must have the Istio proxy). The field requestPrincipals is extracted from the JWT token during RequestAuthentication.
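For the principals field to be populated, the workloads have to talk mutual TLS to each other. If that is not already the case, a namespace-wide PeerAuthentication along these lines enables it (a sketch; YOUR_NAMESPACE is a placeholder, and PERMISSIVE also populates principals as long as the clients are in the mesh):
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: YOUR_NAMESPACE
spec:
  mtls:
    mode: STRICT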
Please let me know if it doesn't work or there are tweaks needed :)
I created a NATS cluster without injecting the Istio sidecar.
apiVersion: nats.io/v1alpha2
kind: NatsCluster
metadata:
  name: nats
spec:
  size: 2
  pod:
    annotations:
      sidecar.istio.io/inject: "false"
  version: "2.0.0"
Now I have an application with an Istio sidecar that connects to the NATS cluster above, but it seems Istio severs the connection. The NATS client in my application closes, and the NATS server reports: "Client parser ERROR, state=0 ..."
Is the reason that there is no mTLS between the NATS cluster and the sidecar? How can I fix this issue?
For Istio 1.8:
The NATS and NATS Streaming YAML can be found at
https://github.com/nats-io/nats-operator
https://github.com/nats-io/nats-streaming-operator
If you don't connect via a NodePort from outside the Kubernetes cluster, you can just use the default Istio settings and inject the sidecar into the NATS pods. It works.
But if you want to connect to NATS via a NodePort from outside, you need to disable mTLS.
My setup uses the default mTLS, with the sidecar injected into the NATS and NATS Streaming pods.
NATS only accepts and sends plain-text traffic.
Add the following PeerAuthentication resources:
apiVersion: "security.istio.io/v1beta1"
kind: "PeerAuthentication"
metadata:
  name: "nats"
spec:
  selector:
    matchLabels:
      app: nats
  mtls:
    mode: DISABLE
---
apiVersion: "security.istio.io/v1beta1"
kind: "PeerAuthentication"
metadata:
  name: "nats-streaming"
spec:
  selector:
    matchLabels:
      app: nats-streaming
  mtls:
    mode: DISABLE
Add the following DestinationRules:
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: nats
spec:
  host: "nats-server.acm.svc.cluster.local"
  trafficPolicy:
    tls:
      mode: DISABLE
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: nats-server-nodeport
spec:
  host: "nats-server-nodeport.acm.svc.cluster.local"
  trafficPolicy:
    tls:
      mode: DISABLE
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: nats-server-mgmt
spec:
  host: "nats-server-mgmt.acm.svc.cluster.local"
  trafficPolicy:
    tls:
      mode: DISABLE
I am new to Istio and am trying to set it up.
I have a question though: is Istio only meant for traffic coming into the kube cluster via the ingress, or can it also be used for communication between services running inside the same kube cluster?
Sorry if it is a noob question, but I was unable to find this anywhere else. Any pointers would be greatly appreciated.
Here is what I have:
1. Two different versions of a service deployed on the Istio mesh:
kubectl get pods -n turbo -l component=rhea
NAME READY STATUS RESTARTS AGE
rhea-api-istio-1-58b957dd4b-cdn54 2/2 Running 0 46h
rhea-api-istio-2-5787d4ffd4-bfwwk 2/2 Running 0 46h
2. Another service deployed on the Istio mesh:
kubectl get pods -n saudagar | grep readonly
saudagar-readonly-7d75c5c7d6-zvhz9 2/2 Running 0 5d
I have a kube service defined like:
apiVersion: v1
kind: Service
metadata:
  name: rhea
  labels:
    component: rhea
  namespace: turbo
spec:
  selector:
    component: rhea
  ports:
  - port: 80
    targetPort: 3000
    protocol: TCP
Destination rules:
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: rhea
spec:
  host: rhea
  subsets:
  - name: v1
    labels:
      app: rhea-api-istio-1
  - name: v2
    labels:
      app: rhea-api-istio-2
A virtual service like:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: rhea
  namespace: turbo
spec:
  hosts:
  - rhea
  http:
  - route:
    - destination:
        host: rhea
        subset: v1
What I am trying to test is circuit breaking between rhea and saudagar, and traffic routing across the 2 versions of the service.
I want to test this from inside the same kube cluster, but I am not able to achieve it. If I want to access the rhea service from the saudagar service, what endpoint should I use so that the traffic routing policy is applied?
Istio can be used for controlling ingress traffic (from outside into the cluster), for controlling in-cluster traffic (between services inside the cluster) and for controlling egress traffic (from the services inside the cluster to services outside the cluster).
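Since the question also mentions circuit breaking: in-cluster calls go through the sidecars as well, so calling rhea by its Kubernetes service name (rhea, or rhea.turbo.svc.cluster.local from another namespace such as saudagar) is enough for the VirtualService and DestinationRule to be applied. A hypothetical sketch of circuit-breaker settings added to the rhea DestinationRule (the limits are example values, not recommendations; older Istio releases use consecutiveErrors instead of consecutive5xxErrors):
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: rhea
spec:
  host: rhea
  trafficPolicy:
    connectionPool:
      tcp:
        maxConnections: 1
      http:
        http1MaxPendingRequests: 1
        maxRequestsPerConnection: 1
    outlierDetection:
      consecutive5xxErrors: 1
      interval: 1s
      baseEjectionTime: 3m
      maxEjectionPercent: 100
  subsets:
  - name: v1
    labels:
      app: rhea-api-istio-1
  - name: v2
    labels:
      app: rhea-api-istio-2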