Unable to connect to application on EKS via Istio with deny-all GlobalNetworkPolicy

My setup: Running EKS 1.14 with Calico and Istio 1.5.
Trying to get the sample bookinfo to run with specific NetworkPolicies.
I've applied a GlobalNetworkPolicy that denies all traffic:
apiVersion: projectcalico.org/v3
kind: GlobalNetworkPolicy
metadata:
  name: default-deny
spec:
  selector: all()
  types:
  - Ingress
  - Egress
I also added a GlobalNetworkPolicy to allow istio-system-to-namespace and intra-namespace traffic:
apiVersion: projectcalico.org/v3
kind: GlobalNetworkPolicy
metadata:
  name: allow-istio-system-to-ns
spec:
  selector: all()
  namespaceSelector: istio-injection == 'enabled'
  types:
  - Ingress
  - Egress
  ingress:
  - action: Allow
    source:
      selector: all()
      namespaceSelector: projectcalico.org/namespace == 'istio-system'
  - action: Allow
    source:
      selector: all()
      namespaceSelector: istio-injection == 'enabled'
  egress:
  - action: Allow
and a NetworkPolicy allowing all ingress and egress on istio-system
apiVersion: projectcalico.org/v3
kind: NetworkPolicy
metadata:
  name: allow-istio-system-all
  namespace: istio-system
spec:
  selector: all()
  types:
  - Ingress
  - Egress
  ingress:
  - action: Allow
  egress:
  - action: Allow
And a NetworkPolicy to allow traffic to the ingress-gateway on ports 80 and 443. I know this one is redundant, but I was hoping to cut down the previous one to only necessary ingress.
apiVersion: projectcalico.org/v3
kind: NetworkPolicy
metadata:
  name: allow-istio-ingress-on-80-443
  namespace: istio-system
spec:
  selector: app == 'istio-ingressgateway'
  ingress:
  - action: Allow
    protocol: TCP
    destination:
      ports:
      - 80
      - 443
Also some other, unrelated NetworkPolicies (access to kube-dns, metrics-server).
Deploying bookinfo works both with and without the policies, and the same goes for deploying the Gateway.
The connection between the components works (I can exec into one pod and connect to the others).
But when I try to access the productpage via the gateway with the policies I get nothing back, instead of the HTML I get without the policies.
Am I missing some traffic that should be allowed? Should I have policies for the master nodes or for the loadbalancer somewhere else?
Edit: If I allow all ingress into istio-system and into the namespace, it works. So I think I'm just missing some other ingress from the namespace, is there a way to limit it to just the loadbalancer?

First of all, there is a typo in your allow-istio-system-to-ns YAML:
namespaceSelector: projectcalico.org/namespace == 'istio-system
There should be another ' at the end of the line.
Secondly, this could be caused by the changes to policy and Mixer in Istio version 1.5.
According to Istio documentation:
The mixer policy is deprecated in Istio 1.5 and not recommended for production usage.
Rate limiting: Consider using Envoy native rate limiting instead of mixer rate limiting. Istio will add support for native rate limiting API through the Istio extensions API.
Control headers and routing: Consider using Envoy ext_authz filter, lua filter, or write a filter using the Envoy-wasm sandbox.
Denials and White/Black Listing: Please use the Authorization Policy for enforcing access control to a workload.
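For reference, a minimal AuthorizationPolicy that denies certain requests to a workload could look like the sketch below (the name, namespace, label and method here are hypothetical, adapted from the Istio authorization examples):
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: deny-get-requests   # hypothetical name
  namespace: default        # hypothetical namespace
spec:
  selector:
    matchLabels:
      app: productpage      # hypothetical workload label
  action: DENY
  rules:
  - to:
    - operation:
        methods: ["GET"]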
There is a guide in the Istio documentation that shows how to turn the deprecated features back on:
For an existing Istio mesh
Check the status of policy enforcement for your mesh.
$ kubectl -n istio-system get cm istio -o jsonpath="{@.data.mesh}" | grep disablePolicyChecks
disablePolicyChecks: true
If policy enforcement is enabled (disablePolicyChecks is false), no further action is needed.
Update the istio configuration to enable policy checks.
Execute the following command from the root Istio directory:
$ istioctl manifest apply --set values.global.disablePolicyChecks=false --set values.pilot.policy.enabled=true
configuration "istio" replaced
Validate that policy enforcement is now enabled.
$ kubectl -n istio-system get cm istio -o jsonpath="{@.data.mesh}" | grep disablePolicyChecks
disablePolicyChecks: false
Note that the Calico documentation says its Istio integration requires one of the following Istio versions:
Istio v1.0, v1.1, v1.2, or v1.3
Hope it helps.

For some reason it seems to work when I replace projectcalico.org/namespace == 'istio-system' with a label from istio-system (e.g. istio-injection=disabled).
apiVersion: projectcalico.org/v3
kind: GlobalNetworkPolicy
metadata:
  name: allow-istio-system-to-ns
spec:
  selector: all()
  namespaceSelector: istio-injection == 'enabled'
  types:
  - Ingress
  - Egress
  ingress:
  - action: Allow
    source:
      selector: all()
      namespaceSelector: istio-injection == 'disabled'
  - action: Allow
    source:
      selector: all()
      namespaceSelector: istio-injection == 'enabled'
  egress:
  - action: Allow

Related

What is spec/selector label in an Istio Gateway configuration?

I am new to Istio Gateway and my goal is to create an Ingress Gateway for a service deployed on K8s.
I am a bit confused with the Gateway example in the official document: https://istio.io/latest/docs/concepts/traffic-management/#gateway-example.
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: ext-host-gwy
spec:
  selector:
    app: my-gateway-controller
  servers:
  - port:
      number: 443
      name: https
      protocol: HTTPS
    hosts:
    - ext-host.example.com
    tls:
      mode: SIMPLE
      credentialName: ext-host-cert
In this example, what is app: my-gateway-controller under spec/selector? Is there additional configuration/deployment needed for this my-gateway-controller?
I tried searching "my-gateway-controller" in the rest of the document, but didn't find further explanation.
It's usually the Istio ingress gateway pod's label that needs to be given in the selector, since external traffic enters the mesh through the ingress gateway pod (unless the name of the ingress gateway was changed during Istio installation).
Specify the selector as below in the Gateway definition, which will route the traffic to your application.
spec:
  selector:
    istio: ingressgateway
Istio can be installed with different options. There are different profiles that can be used for testing, for default scenarios, and for custom setups. One option is to configure an ingress-controller (but you could also have none and use a different, non-Istio ingress-controller).
Depending on your setup you can either have no ingress-gateway, the default ingress-gateway or a custom gateway.
The default gateway has a label called istio: ingressgateway. You can find it in most of the example/getting-started docs, e.g. in how to set up a secure ingress.
There, the Gateway looks like this:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: mygateway
spec:
  selector:
    istio: ingressgateway # use istio default ingress gateway
[...]
The other option would be to set up a second ingress-gateway that might have a different name. You can, for example, use the IstioOperator manifest to configure this.
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  components:
    ingressGateways:
    - enabled: true
      name: my-gateway-controller
That ingress-gateway pod will get a label of app: my-gateway-controller. This label can then be used as it is in the example you posted. So you can check all the ingress-gateway pods you have and choose the label you need.
If you went with the default setup, you probably have the default ingress-gateway and can simply change the selector to istio: ingressgateway.
For a start I would recommend sticking with the tasks section for configuring your setup, because it uses the default Istio setup most people have. If you need more details or something special, you can always check the docs pages.

Linkerd route-based metrics do not work with GKE default ingress controller

I have a microservice running in GKE. I am trying to befriend the default GKE GCE-ingress with Linkerd so that I can observe route-based metrics with Linkerd. The documentation says that the GCE ingress should be meshed with ingress mode enabled, i.e. with the linkerd.io/inject: ingress annotation rather than the default enabled. I tried the following Ingress resource, however route-based metrics are not coming through. I checked with the linkerd tap command and rt_route is not getting set.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
  namespace: emojivoto
  annotations:
    ingress.kubernetes.io/custom-request-headers: "l5d-dst-override: web-svc.emojivoto.svc.cluster.local:80"
    ingress.gcp.kubernetes.io/pre-shared-cert: "managed-cert-name"
    kubernetes.io/ingress.global-static-ip-name: "static-ip-name"
    linkerd.io/inject: ingress
spec:
  ingressClassName: gce
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web-svc
            port:
              number: 80
I suspect that the linkerd.io/inject: ingress annotation should be added to the ingress controller; however, since it is managed by GKE, I do not know how I can add it.
The linkerd.io/inject: ingress annotation should be added to your deployment(s), or to one or more namespaces for automatic injection.
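As a sketch (the Deployment name and image here are hypothetical), the annotation goes on the pod template metadata of the Deployment, not on the Ingress resource:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web               # hypothetical name
  namespace: emojivoto
spec:
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
      annotations:
        linkerd.io/inject: ingress   # or "enabled" for plain workload injection
    spec:
      containers:
      - name: web
        image: web:latest            # hypothetical image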

Istio Authorization Policy IP whitelisting

Does anyone know how to do IP whitelisting properly with an Istio Authorization Policy? I was able to follow https://istio.io/latest/docs/tasks/security/authorization/authz-ingress/ to set up whitelisting on the gateway. However, is there a way to do this on a specific workload with a selector, like this:
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: app-ip-whitelisting
  namespace: foo
spec:
  selector:
    matchLabels:
      app: app1
  rules:
  - from:
    - source:
        IpBlocks:
        - xx.xx.xx.xx
I was not able to get this to work, and I am using Istio 1.6.8.
I'm running Istio 1.5.6 and the following is working (whitelisting): only IP addresses in ipBlocks are allowed to execute for the specified workload; other IPs get response code 403. I find the term ipBlocks confusing: it is not blocking anything. If you want to block certain IPs (blacklisting), you'll need to use notIpBlocks.
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: peke-echo-v1-ipblock
  namespace: peke-echo-v1
spec:
  selector:
    matchLabels:
      app: peke-echo-v1
      version: v1
  rules:
  - from:
    - source:
        ipBlocks:
        - 173.18.180.128
        - 173.18.191.159
        - 173.20.58.39
Note that ipBlocks is in lower camel case.
Sometimes it takes a while before the policy becomes effective.

Istio allow only specific IP CIDR and deny rest

I have a requirement wherein I would like to allow certain CIDR ranges to access my service; everything else should be denied.
I have tried the Istio IP whitelisting/blacklisting as mentioned in the official Istio documentation.
For example, 10.0.0.2/16 should be allowed and the rest should be denied, but this doesn't seem to work.
apiVersion: config.istio.io/v1alpha2
kind: handler
metadata:
  name: whitelistip
spec:
  compiledAdapter: listchecker
  params:
    # providerUrl: ordinarily black and white lists are maintained
    # externally and fetched asynchronously using the providerUrl.
    overrides: ["10.0.0.2/16"] # overrides provide a static list
    blacklist: true
    entryType: IP_ADDRESSES
---
apiVersion: config.istio.io/v1alpha2
kind: instance
metadata:
  name: sourceip
spec:
So basically Istio 1.5.0 was released a few days ago, and if we check the Istio docs, white/black listing is now deprecated.
Denials and White/Black Listing (Deprecated)
But there is actually good news: there is a new example for authorization on the ingress gateway which should answer your question.
I am not able to get the real client IP, hence not able to block/allow using an authorization policy or IP-based whitelisting.
Based on this new example, which I tested myself: if you want to see your source IP, you have to change the istio-ingressgateway externalTrafficPolicy from Cluster to Local.
Update the ingress gateway to set externalTrafficPolicy: Local to preserve the original client source IP on the ingress gateway, using the following command:
kubectl patch svc istio-ingressgateway -n istio-system -p '{"spec":{"externalTrafficPolicy":"Local"}}'
And the allow example
The following example creates the authorization policy, ingress-policy, for the Istio ingress gateway. The following policy sets the action field to ALLOW to allow the IP addresses specified in the ipBlocks to access the ingress gateway. IP addresses not in the list will be denied. The ipBlocks supports both single IP address and CIDR notation.
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: ingress-policy
  namespace: istio-system
spec:
  selector:
    matchLabels:
      app: istio-ingressgateway
  action: ALLOW
  rules:
  - from:
    - source:
        ipBlocks: ["1.2.3.4", "5.6.7.0/24", "$CLIENT_IP"]
I hope this answers your question. Let me know if you have any more questions.
Another solution in Istio 1.5:
Configure external traffic:
kubectl patch svc istio-ingressgateway -n istio-system -p '{"spec":{"externalTrafficPolicy":"Local"}}'
Or in a Helm installation:
--set gateways.istio-ingressgateway.externalTrafficPolicy=Local
And then you can use it in any namespace like this:
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: whitelist
  namespace: foo
spec:
  action: ALLOW
  rules:
  - from:
    - source:
        namespaces: ["istio-system"]
    when:
    - key: request.headers[X-Envoy-External-Address]
      values: ["1.2.3.4/24"] # IP to allow
  selector:
    matchLabels:
      app: httpbin

ISTIO service mesh for internal kube services

I am new to Istio and was trying to set it up.
I have a question though: is Istio only meant for traffic coming into the kube cluster via ingress, or can it also be used for communication between services running inside the same kube cluster?
Sorry if it is a noob question, but I am unable to find this anywhere else. Any pointer would be greatly appreciated.
Here is what I have:
Two different versions of a service deployed on the Istio mesh:
kubectl get pods -n turbo -l component=rhea
NAME READY STATUS RESTARTS AGE
rhea-api-istio-1-58b957dd4b-cdn54 2/2 Running 0 46h
rhea-api-istio-2-5787d4ffd4-bfwwk 2/2 Running 0 46h
Another service deployed on the Istio mesh:
kubectl get pods -n saudagar | grep readonly
saudagar-readonly-7d75c5c7d6-zvhz9 2/2 Running 0 5d
I have a kube service defined like:
apiVersion: v1
kind: Service
metadata:
  name: rhea
  labels:
    component: rhea
  namespace: turbo
spec:
  selector:
    component: rhea
  ports:
  - port: 80
    targetPort: 3000
    protocol: TCP
Destination rules:
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: rhea
spec:
  host: rhea
  subsets:
  - name: v1
    labels:
      app: rhea-api-istio-1
  - name: v2
    labels:
      app: rhea-api-istio-2
A virtual service like:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: rhea
  namespace: turbo
spec:
  hosts:
  - rhea
  http:
  - route:
    - destination:
        host: rhea
        subset: v1
What I am trying to test is circuit breaking between rhea and saudagar, and traffic routing over the two versions of the service.
I want to test this from inside the same kube cluster, but I am not able to achieve this. If I want to access the rhea service from the saudagar service, what endpoint should I use so that I can see the traffic routing policy applied?
Istio can be used for controlling ingress traffic (from outside into the cluster), for controlling in-cluster traffic (between services inside the cluster) and for controlling egress traffic (from the services inside the cluster to services outside the cluster).
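For in-cluster traffic you call the normal Kubernetes service DNS name, e.g. rhea.turbo.svc.cluster.local from a pod in the saudagar namespace; as long as both pods have sidecars injected, the VirtualService and DestinationRule policies are applied. For circuit breaking, a trafficPolicy can be added to the existing rhea DestinationRule; the sketch below uses illustrative limits, not recommendations:
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: rhea
  namespace: turbo
spec:
  host: rhea
  trafficPolicy:
    connectionPool:
      tcp:
        maxConnections: 1          # illustrative limit
      http:
        http1MaxPendingRequests: 1 # illustrative limit
    outlierDetection:
      consecutiveErrors: 3         # older field name; newer Istio uses consecutive5xxErrors
      interval: 10s
      baseEjectionTime: 30s
  subsets:
  - name: v1
    labels:
      app: rhea-api-istio-1
  - name: v2
    labels:
      app: rhea-api-istio-2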