Istio VirtualService delegate with mesh gateway

I am implementing some routing logic for a set of internal services where a delegate VirtualService looks like a good solution:
https://istio.io/latest/docs/reference/config/networking/virtual-service/#Delegate
I created a test setup similar to the one in the documentation, with only one difference: in my case the "root" VirtualService is bound to the "mesh" gateway and the "host" is an internal service name. Is this supposed to work, or does delegation only work with non-mesh gateways?
This is the root VirtualService (the idea is that all requests are sent to worker-pool.default.svc.cluster.local and depending on some HTTP headers they are then forwarded to other VirtualServices):
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: worker-pool
spec:
  hosts:
  - worker-pool.default.svc.cluster.local
  http:
  - name: "route 1"
    match:
    - headers:
        customer-id:
          exact: alice
    delegate:
      name: worker-for-alice
  - name: "route 2"
    match:
    - headers:
        customer-id:
          exact: bob
    delegate:
      name: worker-for-bob
And here is one of the delegate VirtualServices, together with its Service (only showing one; both look the same):
apiVersion: v1
kind: Service
metadata:
  name: worker-for-alice
  labels:
    app: worker-for-alice
    service: worker-for-alice
spec:
  ...
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: worker-for-alice
spec:
  http:
  - route:
    - destination:
        host: worker-for-alice

I guess there are some points to consider:
As far as I know, delegation only works when the root VirtualService is bound to a gateway, so you need something like this:
spec:
  gateways:
  - mesh
  hosts:
  - worker-pool.default.svc.cluster.local
  http:
  ...
It is always a good idea to use namespaces. Once the resources are defined in namespaces, add a "namespace" field to the delegate definition:
delegate:
  name: worker-for-bob
  namespace: <some-namespace>
Last but not least, it may be necessary to set the environment variable PILOT_ENABLE_VIRTUAL_SERVICE_DELEGATE to "true" in the istiod configuration:
kubectl edit deployment istiod -n istio-system
and then add this environment variable under spec.template.spec.containers[].env, roughly as sketched below.
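A minimal sketch of the relevant fragment of the istiod Deployment after that edit (the container name "discovery" is the default in standard installs; verify it against your own deployment):
spec:
  template:
    spec:
      containers:
      - name: discovery   # default istiod container name; check your install
        env:
        - name: PILOT_ENABLE_VIRTUAL_SERVICE_DELEGATE
          value: "true"
Alternatively, kubectl set env deployment/istiod -n istio-system PILOT_ENABLE_VIRTUAL_SERVICE_DELEGATE=true applies the same change without opening the editor.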

Override Istio retry back off interval

I am trying to override Istio's default retry back off interval with the help of an EnvoyFilter.
I have three services, each calling its successor. Service-2 has retries enabled with a VirtualService.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  labels:
    app: service-2
  name: service-2-vs
  namespace: retry-test
spec:
  hosts:
  - service-2
  http:
  - route:
    - destination:
        host: service-2
    retries:
      attempts: 5
      retryOn: 5xx
The retries are working, but when I apply an EnvoyFilter to override the default retry back off interval I see no effect.
I used the following EnvoyFilter to override the back off intervals.
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: service-2-envoy-config
  namespace: retry-test
spec:
  workloadSelector:
    labels:
      app: service-2
  configPatches:
  - applyTo: HTTP_ROUTE
    match:
      context: SIDECAR_OUTBOUND
      routeConfiguration:
        vhost:
          name: "service-2.retry-test.svc.cluster.local:5002"
    patch:
      operation: MERGE
      value:
        route:
          retry_policy:
            retry_back_off:
              base_interval: "1s"
              max_interval: "5s"
I also tried configuring the EnvoyFilter for Service-1 since this would be the service sending requests to Service-2, but this didn't work either.
When checking with Kiali I can see that the EnvoyFilter is applied to the correct service, and when looking at the Envoy config of the workload I can see that the following was applied:
"route": {
"cluster": "outbound|5002||service-2.retry-test.svc.cluster.local",
"max_grpc_timeout": "0s",
"retry_policy": {
"host_selection_retry_max_attempts": "5",
"num_retries": 5,
"retry_back_off": {
"base_interval": "1s",
"max_interval": "5s"
},
...
}
}
Can someone help me to figure out how to apply the right EnvoyFilter to override the default back off interval?
Posting this in case anyone runs into the same problem and finds this question.
When using an EnvoyFilter, you need to apply the filter to the service that sends the requests, with the vhost being the service the requests are sent to.
In this case that means the EnvoyFilter is applied to service-1, with the vhost of the routeConfiguration being service-2.
The corresponding YAML looks like this:
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: service-1-envoy-config
  namespace: retry-test
spec:
  workloadSelector:
    labels:
      app: service-1
  configPatches:
  - applyTo: HTTP_ROUTE
    match:
      context: SIDECAR_OUTBOUND
      routeConfiguration:
        vhost:
          name: "service-2.retry-test.svc.cluster.local:5002"
    patch:
      operation: MERGE
      value:
        route:
          retry_policy:
            retry_back_off:
              base_interval: "1s"
              max_interval: "5s"
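One way (not from the original answer) to confirm that the merged retry_policy landed on the caller's outbound route rather than on service-2's own sidecar is to dump the routes of the service-1 sidecar, the same way the question inspected the Envoy config; the pod name below is a placeholder:
istioctl proxy-config routes <service-1-pod-name> -n retry-test -o json > ./routes.json
The outbound|5002||service-2.retry-test.svc.cluster.local route in that dump should then show the retry_back_off values from the EnvoyFilter.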

How to make the ext_authz envoy filter work on the istio cluster?

I am trying to add an ext_authz filter to the Istio ingress gateway for request authentication, but when I add this filter to the cluster it does not seem to be added to the Envoy configuration, i.e. it is not working.
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: authn-filter
  namespace: istio-system
spec:
  filters:
  - insertPosition:
      index: FIRST
    listenerMatch:
      portNumber: 433
      listenerType: GATEWAY
      listenerProtocol: HTTP
    filterType: HTTP
    filterName: "envoy.ext_authz"
    filterConfig:
      http_service:
        server-uri:
          uri: http://auth.default.svc.cluster.local:8080
          cluster: outbound|8080||auth.default.svc.cluster.local
          timeout: 2s
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: auth-virtualservice
spec:
  hosts:
  - "*"
  gateways:
  - gateway.default.svc.cluster.local
  http:
  - match:
    - uri:
        prefix: "/auth"
    route:
    - destination:
        host: auth.default.svc.cluster.local
I figured it out; the problem was caused by the old version of Istio deployed on the cluster.
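For reference (not part of the original answer): newer Istio releases replaced the filters/listenerMatch fields used above with configPatches. A rough sketch of an equivalent filter in the newer syntax, assuming the same auth service and a current Istio/Envoy (field names should be double-checked against your versions):
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: authn-filter
  namespace: istio-system
spec:
  workloadSelector:
    labels:
      istio: ingressgateway
  configPatches:
  - applyTo: HTTP_FILTER
    match:
      context: GATEWAY
      listener:
        filterChain:
          filter:
            name: envoy.filters.network.http_connection_manager
            subFilter:
              name: envoy.filters.http.router
    patch:
      # insert ext_authz before the router filter in the gateway's HTTP filter chain
      operation: INSERT_BEFORE
      value:
        name: envoy.filters.http.ext_authz
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.http.ext_authz.v3.ExtAuthz
          http_service:
            server_uri:
              uri: http://auth.default.svc.cluster.local:8080
              cluster: outbound|8080||auth.default.svc.cluster.local
              timeout: 2s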

Istio Virtual Service - can I route traffic based on the calling service

Say I've got three services, ServiceA, ServiceB, and ServiceC. ServiceA and ServiceB both call ServiceC. I want to deploy a new version of ServiceC, but only want to send it traffic from ServiceB for testing. Is there a Route configuration that takes "calling service" into account?
Based on the Istio documentation, you can do it with a VirtualService, or with a VirtualService and a DestinationRule, using labels (example here).
Deployments with app and version labels: We recommend adding an explicit app label and version label to deployments. Add the labels to the deployment specification of pods deployed using the Kubernetes Deployment. The app and version labels add contextual information to the metrics and telemetry Istio collects.
The app label: Each deployment specification should have a distinct app label with a meaningful value. The app label is used to add contextual information in distributed tracing.
The version label: This label indicates the version of the application corresponding to the particular deployment.
Each routing rule is associated with one or more service versions (see glossary in beginning of document). Weights associated with the version determine the proportion of traffic it receives. For example, the following rule will route 25% of traffic for the “reviews” service to instances with the “v2” tag and the remaining traffic (i.e., 75%) to “v1”.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews-route
spec:
  hosts:
  - reviews.prod.svc.cluster.local
  http:
  - route:
    - destination:
        host: reviews.prod.svc.cluster.local
        subset: v2
      weight: 25
    - destination:
        host: reviews.prod.svc.cluster.local
        subset: v1
      weight: 75
And the associated DestinationRule
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: reviews-destination
spec:
  host: reviews.prod.svc.cluster.local
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
OR
Traffic can also be split across two entirely different services without having to define new subsets. For example, the following rule forwards 25% of traffic to dev.reviews.com and 75% to reviews.com:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews-route-two-domains
spec:
  hosts:
  - reviews.com
  http:
  - route:
    - destination:
        host: dev.reviews.com
      weight: 25
    - destination:
        host: reviews.com
      weight: 75
EDIT
So actually you would have to add version labels to your ServiceC v1 and v2 deployments, define matching subsets in a DestinationRule (as above), and the VirtualService would look like this:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews-route-two-domains
spec:
  hosts:
  - serviceC
  http:
  - match:
    - sourceLabels:
        svc: A
    route:
    - destination:
        host: serviceC
        subset: v1
  - match:
    - sourceLabels:
        svc: B
    route:
    - destination:
        host: serviceC
        subset: v2
Check my other answer where I used sourceLabels here.
Let me know if you have any more questions.

Istio allow incoming traffic to a service only from a particular namespace

We want Istio to allow incoming traffic to a service only from a particular namespace. How can we do this with Istio? We are running Istio 1.1.3.
Update:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-app-ingress
  namespace: test-ns
spec:
  podSelector:
    matchLabels:
      app: testapp
  ingress:
  - ports:
    - protocol: TCP
      port: 80
    from:
    - podSelector:
        matchLabels:
          istio: ingress
This did not work; I am able to access the service from other namespaces as well. Next I tried:
apiVersion: "rbac.istio.io/v1alpha1"
kind: ServiceRole
metadata:
name: external-api-caller
namespace: test-ns
spec:
rules:
- services: ["testapp"]
methods: ["*"]
constraints:
- key: "destination.labels[version]"
values: ["v1", "v2"]
---
apiVersion: "rbac.istio.io/v1alpha1"
kind: ServiceRoleBinding
metadata:
name: external-api-caller
namespace: test-ns
spec:
subjects:
- properties:
source.namespace: "default"
roleRef:
kind: ServiceRole
name: "external-api-caller"
I am still able to access the service from all namespaces, where I expected it to be allowed only from the "default" namespace.
I'm not sure if this is possible for a particular namespace, but it will work with labels.
You can create a network policy in Istio; this is nicely explained in Traffic Routing in Kubernetes via Istio and Envoy Proxy.
...
ingress:
- from:
  - podSelector:
      matchLabels:
        zone: trusted
...
In the example, only pods with the label zone: trusted will be allowed to make incoming connections to the pod.
You can read about Using Network Policy with Istio.
I would also recommend reading Security Concepts in Istio as well as Denials and White/Black Listing.
Hope this helps you.
Using a k8s NetworkPolicy: yes, it is possible. The example posted in the question does not allow traffic from a different namespace. In the ingress rule you have to use a namespaceSelector, which specifies the namespace from which you want to allow traffic. In the example below, the namespace with the label 'ns-group: prod-ns' will be allowed to access the pod with the label 'app: testapp' on port 80 over TCP:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-app-ingress
  namespace: test-ns
spec:
  podSelector:
    matchLabels:
      app: testapp
  ingress:
  - ports:
    - protocol: TCP
      port: 80
    from:
    - namespaceSelector:
        matchLabels:
          ns-group: prod-ns
Using an Istio whitelisting policy: you can go through the whitelisting policy examples and the attribute vocabulary.
Below is an example:
apiVersion: config.istio.io/v1alpha2
kind: handler
metadata:
  name: whitelist-namespace
spec:
  compiledAdapter: listchecker
  params:
    overrides: ["prod-ns"]
    blacklist: false
---
apiVersion: config.istio.io/v1alpha2
kind: instance
metadata:
  name: source-ns-instance
spec:
  compiledTemplate: listentry
  params:
    value: source.namespace
---
apiVersion: config.istio.io/v1alpha2
kind: rule
metadata:
  name: rule-policy-1
spec:
  match: destination.labels["app"] == "testapp" && destination.namespace == "test-ns"
  actions:
  - handler: whitelist-namespace
    instances: [ source-ns-instance ]

create VirtualService for kiali, tracing, grafana

I am trying to expose Kiali on my default gateway. I have other services working for apps in the default namespace, but I have not been able to route traffic to anything in the istio-system namespace.
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - '*'
    tls:
      httpsRedirect: true
  - port:
      number: 443
      name: https
      protocol: HTTPS
    hosts:
    - '*'
    tls:
      mode: SIMPLE
      privateKey: /etc/istio/ingressgateway-certs/tls.key
      serverCertificate: /etc/istio/ingressgateway-certs/tls.crt
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: kiali
  namespace: default
spec:
  hosts:
  - kiali.dev.example.com
  gateways:
  - gateway
  http:
  - route:
    - destination:
        host: kiali.istio-system.svc.cluster.local
        port:
          number: 20001
The problem was that I had mTLS enabled and Kiali does not have a sidecar, so it cannot participate in mTLS. The solution was to add a DestinationRule disabling mTLS for it.
apiVersion: 'networking.istio.io/v1alpha3'
kind: DestinationRule
metadata:
  name: kiali
  namespace: istio-system
spec:
  host: kiali.istio-system.svc.cluster.local
  trafficPolicy:
    tls:
      mode: DISABLE
You should define an ingress gateway and make sure that the hosts in the gateway match the hosts in the virtual service. Also specify the port of the destination. See the Control Ingress Traffic task.
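A minimal sketch of that advice, reusing the hostname from the question (adjust names to your setup): the host list in the Gateway and in the VirtualService line up, and the destination port is set explicitly.
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - kiali.dev.example.com        # must match the VirtualService hosts
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: kiali
spec:
  hosts:
  - kiali.dev.example.com          # same host as in the Gateway
  gateways:
  - gateway
  http:
  - route:
    - destination:
        host: kiali.istio-system.svc.cluster.local
        port:
          number: 20001            # destination port specified explicitly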
For me this worked!
I ran
istioctl proxy-config routes istio-ingressgateway-866d7949c6-68tt4 -n istio-system -o json > ./routes.json
to get the dump of all the routes. The kiali route got corrupted for some reason. I deleted the virtual service and created it again, that fixed it.
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: kiali
  namespace: istio-system
spec:
  gateways:
  - istio-system/my-gateway
  hosts:
  - 'mydomain.com'
  http:
  - match:
    - uri:
        prefix: /kiali/
    route:
    - destination:
        host: kiali.istio-system.svc.cluster.local
        port:
          number: 20001
      weight: 100
---
apiVersion: 'networking.istio.io/v1alpha3'
kind: DestinationRule
metadata:
  name: kiali
  namespace: istio-system
spec:
  host: kiali.istio-system.svc.cluster.local
  trafficPolicy:
    tls:
      mode: SIMPLE
---
Note: the hosts needed to be set explicitly; '*' didn't work for some reason.