I have a requirement wherein I would like to allow certain CIDR ranges to access my service; everything else should be denied.
I have tried the Istio IP whitelisting/blacklisting as described in the official Istio documentation.
For example, 10.0.0.2/16 should be allowed and the rest denied. This doesn't seem to work.
apiVersion: config.istio.io/v1alpha2
kind: handler
metadata:
  name: whitelistip
spec:
  compiledAdapter: listchecker
  params:
    # providerUrl: ordinarily black and white lists are maintained
    # externally and fetched asynchronously using the providerUrl.
    overrides: ["10.0.0.2/16"] # overrides provide a static list
    blacklist: true
    entryType: IP_ADDRESSES
---
apiVersion: config.istio.io/v1alpha2
kind: instance
metadata:
  name: sourceip
spec:
  # the spec was cut off in the original post; in the official (now
  # deprecated) listchecker example it continues like this:
  compiledTemplate: listentry
  params:
    value: source.ip | ip("0.0.0.0")
Istio 1.5.0 was released a few days ago, and if you check the Istio docs, white/black listing is now deprecated:
Denials and White/Black Listing (Deprecated)
But there is actually good news, because there is a new example for authorization on the ingress gateway which should answer your question.
I am not able to get the real client IP, hence I am not able to block/allow using an authorization policy or IP-based whitelisting.
Based on this new example, which I tested myself, if you want to see your source IP you have to change the istio-ingressgateway externalTrafficPolicy from Cluster to Local.
Update the ingress gateway to set externalTrafficPolicy: Local to preserve the original client source IP on the ingress gateway, using the following command:
kubectl patch svc istio-ingressgateway -n istio-system -p '{"spec":{"externalTrafficPolicy":"Local"}}'
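Once patched, you can check which client IP the gateway actually sees by tailing its access logs while sending a request (a quick sanity check; this assumes the default app=istio-ingressgateway label in istio-system):
kubectl logs -n istio-system -l app=istio-ingressgateway --tail=20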
And here is the allow example:
The following example creates the authorization policy ingress-policy for the Istio ingress gateway. The policy sets the action field to ALLOW so that the IP addresses specified in ipBlocks can access the ingress gateway; IP addresses not in the list will be denied. ipBlocks supports both single IP addresses and CIDR notation.
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: ingress-policy
  namespace: istio-system
spec:
  selector:
    matchLabels:
      app: istio-ingressgateway
  action: ALLOW
  rules:
  - from:
    - source:
        ipBlocks: ["1.2.3.4", "5.6.7.0/24", "$CLIENT_IP"]
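A quick way to verify the policy (a sketch; $INGRESS_HOST stands in for your ingress gateway's external address): a request from an allowed IP should return the normal response code, while a request from an address outside ipBlocks should return 403.
curl -s -o /dev/null -w "%{http_code}\n" "http://$INGRESS_HOST/"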
I hope this answers your question. Let me know if you have any more questions.
Another solution in Istio 1.5:
Configure external traffic:
kubectl patch svc istio-ingressgateway -n istio-system -p '{"spec":{"externalTrafficPolicy":"Local"}}'
Or in a Helm installation:
--set gateways.istio-ingressgateway.externalTrafficPolicy=Local
And you can use it in any namespace like this:
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: whitelist
  namespace: foo
spec:
  action: ALLOW
  rules:
  - from:
    - source:
        namespaces: ["istio-system"]
    when:
    - key: request.headers[X-Envoy-External-Address]
      values: ["1.2.3.4/24"] # IP to allow
  selector:
    matchLabels:
      app: httpbin
I am new to Istio Gateway and my goal is to create an ingress gateway for a service deployed on K8s.
I am a bit confused by the Gateway example in the official documentation: https://istio.io/latest/docs/concepts/traffic-management/#gateway-example.
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: ext-host-gwy
spec:
  selector:
    app: my-gateway-controller
  servers:
  - port:
      number: 443
      name: https
      protocol: HTTPS
    hosts:
    - ext-host.example.com
    tls:
      mode: SIMPLE
      credentialName: ext-host-cert
In this example, what is app: my-gateway-controller under spec/selector? Is there additional configuration/deployment needed for this my-gateway-controller?
I tried searching "my-gateway-controller" in the rest of the document, but didn't find further explanation.
It's usually the Istio ingress gateway pod's label that needs to be given in the selector, as the external traffic will enter through the ingress gateway pod, unless the name of the ingress gateway was changed during Istio installation.
Specify it as below in the Gateway definition, which will route the traffic to the application:
spec:
  selector:
    istio: ingressgateway
Istio can be installed with different options. There are different profiles that can be used for testing, for default scenarios, and for custom setups. One option is to configure an ingress-controller (but you could also have none and use a different, non-Istio ingress-controller).
Depending on your setup you can have no ingress-gateway, the default ingress-gateway, or a custom gateway.
The default gateway has a label called istio: ingressgateway. You can find that in most of the example/getting-started docs, e.g. in how to set up a secure ingress.
There the Gateway looks like this:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: mygateway
spec:
  selector:
    istio: ingressgateway # use istio default ingress gateway
[...]
The other option would be to set up a second ingress-gateway that might have a different name. You can, for example, use the IstioOperator manifest to configure this.
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  components:
    ingressGateways:
    - enabled: true
      name: my-gateway-controller
That ingress-gateway pod will get a label of app: my-gateway-controller. This label can then be used as it has been in the example you posted. So you can check all the ingress-gateway pods you have and choose the label you need, as shown below.
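To list the gateway pods together with their labels and pick the one you need (assuming they run in istio-system), you can use:
kubectl get pods -n istio-system --show-labels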
If you went with the default setup, you probably have the default ingress-gateway and can simply change the selector to istio: ingressgateway.
To begin with, I would recommend sticking to the tasks section for configuring your setup, because it uses the default Istio setup most people have. If you need more details or something special, you can always check the docs pages.
I am trying to get Let's Encrypt to work with a GKE LB. I know there are GCP managed certs, but they will not work with an internal LB, as the challenge will not get passed. Let's Encrypt DNS certification using cert-manager is there and ready to be used.
❯ k get secrets letsencrypt-prod -o yaml
apiVersion: v1
data:
  tls.key: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFb3dJQkFBS0NBUUVBdlVZTVhXdGNZZUJpMkdadzljRFRLNzY==
kind: Secret
metadata:
  creationTimestamp: "2021-01-24T15:03:39Z"
  name: letsencrypt-prod
  namespace: elastic-system
  resourceVersion: "3636289"
  selfLink: /api/v1/namespaces/elastic-system/secrets/letsencrypt-prod
  uid: f4bec5a9-d3b5-4f4a-9ec6-01a4ce3ba47c
type: Opaque
spec:
  tls:
  - hosts:
    - staging.example.com
    - staging2.example.com
    secretName: letsencrypt-prod
GCP is reporting this error: Error syncing to GCP: error running load balancer syncing routine: error getting secrets for Ingress: secret "letsencrypt-prod" does not specify cert as string data
Can anybody help me with what is missing?
As per this, you must provide the secret in a format valid for GCP, for example by recreating it from your already issued Let's Encrypt certs:
kubectl create secret generic letsencrypt-prod --from-file=tls.crt="cert.pem" --from-file=tls.key="privkey.pem" --dry-run -o yaml > output
kubectl apply -f output
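To confirm the recreated secret now carries both keys GCP expects (tls.crt and tls.key), you can inspect it (assuming the elastic-system namespace from your output):
kubectl describe secret letsencrypt-prod -n elastic-system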
Also (it seems you are already using it, but better safe than sorry), you must define this in the tls section of your Ingress, as per this.
Actually, it is either missing from the docs or I missed it, since the example uses the same name everywhere in metadata.
---
apiVersion: cert-manager.io/v1alpha2
kind: Certificate
metadata:
  name: cert-example
  namespace: example
spec:
  secretName: REAL_NAME_OF_SECRET # <- this needs to be included in the ingress
  issuerRef:
    name: letsencrypt-prod
  dnsNames:
  - 'staging.domain.com'
  - '*.staging.domain.com'
So REAL_NAME_OF_SECRET is what you should put in the ingress, or anywhere else you want to use tls.crt or tls.key.
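For example, a minimal Ingress using that secret might look like this (a sketch; the Ingress name, host and backend service are hypothetical, and older clusters may need apiVersion networking.k8s.io/v1beta1 instead):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
  namespace: example
spec:
  tls:
  - hosts:
    - staging.domain.com
    secretName: REAL_NAME_OF_SECRET # must match spec.secretName of the Certificate
  rules:
  - host: staging.domain.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-service # hypothetical backend service
            port:
              number: 80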
Does anyone know how to do IP whitelisting properly with an Istio authorization policy? I was able to follow https://istio.io/latest/docs/tasks/security/authorization/authz-ingress/ to set up whitelisting on the gateway. However, is there a way to do this on a specific workload with a selector? Like this:
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: app-ip-whitelisting
  namespace: foo
spec:
  selector:
    matchLabels:
      app: app1
  rules:
  - from:
    - source:
        IpBlocks:
        - xx.xx.xx.xx
I was not able to get this to work. And I am using Istio 1.6.8
I'm running Istio 1.5.6 and the following is working (whitelisting): only IP addresses in ipBlocks are allowed to reach the specified workload; other IPs get response code 403. I find the term ipBlocks confusing: it does not block anything. If you want to block certain IPs (blacklisting), you'll need to use notIpBlocks.
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: peke-echo-v1-ipblock
  namespace: peke-echo-v1
spec:
  selector:
    matchLabels:
      app: peke-echo-v1
      version: v1
  rules:
  - from:
    - source:
        ipBlocks:
        - 173.18.180.128
        - 173.18.191.159
        - 173.20.58.39
Note that ipBlocks is in lower camelCase.
Sometimes it takes a while before the policy becomes effective.
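If the policy does not seem to take effect at all, you can first confirm the object was accepted by the cluster (using the namespace from the example above):
kubectl get authorizationpolicy -n peke-echo-v1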
My setup: Running EKS 1.14 with Calico and Istio 1.5.
Trying to get the sample bookinfo to run with specific NetworkPolicies.
I've applied a GlobalNetworkPolicy that denies all traffic:
apiVersion: projectcalico.org/v3
kind: GlobalNetworkPolicy
metadata:
  name: default-deny
spec:
  selector: all()
  types:
  - Ingress
  - Egress
I also added a GlobalNetworkPolicy to allow Istio-to-namespace and intra-namespace traffic:
apiVersion: projectcalico.org/v3
kind: GlobalNetworkPolicy
metadata:
  name: allow-istio-system-to-ns
spec:
  selector: all()
  namespaceSelector: istio-injection == 'enabled'
  types:
  - Ingress
  - Egress
  ingress:
  - action: Allow
    source:
      selector: all()
      namespaceSelector: projectcalico.org/namespace == 'istio-system'
  - action: Allow
    source:
      selector: all()
      namespaceSelector: istio-injection == 'enabled'
  egress:
  - action: Allow
and a NetworkPolicy allowing all ingress and egress on istio-system
apiVersion: projectcalico.org/v3
kind: NetworkPolicy
metadata:
  name: allow-istio-system-all
  namespace: istio-system
spec:
  selector: all()
  types:
  - Ingress
  - Egress
  ingress:
  - action: Allow
  egress:
  - action: Allow
And a NetworkPolicy to allow traffic to the ingress-gateway on ports 80 and 443. I know this one is redundant, but I was hoping to cut the previous one down to only the necessary ingress.
apiVersion: projectcalico.org/v3
kind: NetworkPolicy
metadata:
  name: allow-istio-ingress-on-80-443
  namespace: istio-system
spec:
  selector: app == 'istio-ingressgateway'
  ingress:
  - action: Allow
    protocol: TCP
    destination:
      ports:
      - 80
      - 443
Also some other, unrelated NetworkPolicies (access to kube-dns, metrics-server).
Deploying bookinfo works with and without the policies and same with deploying the Gateway.
The connection between the components works (I can exec into one pod and connect to the others).
But when I try to access the productpage via the gateway with the policies in place, I get nothing back, instead of the HTML I get without the policies.
Am I missing some traffic that should be allowed? Should I have policies for the master nodes or for the loadbalancer somewhere else?
Edit: If I allow all ingress into istio-system and into the namespace, it works. So I think I'm just missing some other ingress from the namespace, is there a way to limit it to just the loadbalancer?
First of all, there is a typo in your allow-istio-system-to-ns yaml:
namespaceSelector: projectcalico.org/namespace == 'istio-system
There should be another ' at the end of the line.
Secondly, this could be caused by the changes to policy and Mixer in Istio version 1.5.
According to Istio documentation:
The mixer policy is deprecated in Istio 1.5 and not recommended for production usage.
Rate limiting: Consider using Envoy native rate limiting instead of mixer rate limiting. Istio will add support for native rate limiting API through the Istio extensions API.
Control headers and routing: Consider using Envoy ext_authz filter, lua filter, or write a filter using the Envoy-wasm sandbox.
Denials and White/Black Listing: Please use the Authorization Policy for enforcing access control to a workload.
There is a guide in the Istio documentation which lets you turn the deprecated features back on:
For an existing Istio mesh
Check the status of policy enforcement for your mesh.
$ kubectl -n istio-system get cm istio -o jsonpath="{@.data.mesh}" | grep disablePolicyChecks
disablePolicyChecks: true
If policy enforcement is enabled (disablePolicyChecks is false), no further action is needed.
Update the istio configuration to enable policy checks.
Execute the following command from the root Istio directory:
$ istioctl manifest apply --set values.global.disablePolicyChecks=false --set values.pilot.policy.enabled=true
configuration "istio" replaced
Validate that policy enforcement is now enabled.
$ kubectl -n istio-system get cm istio -o jsonpath="{@.data.mesh}" | grep disablePolicyChecks
disablePolicyChecks: false
Note that the Calico documentation says it requires one of the following Istio versions:
Istio v1.0, v1.1, v1.2, or v1.3
Hope it helps.
For some reason it seems to work when I replace projectcalico.org/namespace == 'istio-system' with a label from istio-system (e.g. istio-injection=disabled).
apiVersion: projectcalico.org/v3
kind: GlobalNetworkPolicy
metadata:
  name: allow-istio-system-to-ns
spec:
  selector: all()
  namespaceSelector: istio-injection == 'enabled'
  types:
  - Ingress
  - Egress
  ingress:
  - action: Allow
    source:
      selector: all()
      namespaceSelector: istio-injection == 'disabled'
  - action: Allow
    source:
      selector: all()
      namespaceSelector: istio-injection == 'enabled'
  egress:
  - action: Allow
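To see which labels are actually present on istio-system, and hence which ones you can match on in a namespaceSelector, you can run:
kubectl get ns istio-system --show-labels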
I am fairly new to Istio. So far I have a k8s cluster (using kops) on AWS, behind an ELB.
All traffic is routed via TCP.
The ingress gateway service is configured as NodePort with the following config:
istio-system istio-ingressgateway NodePort 100.65.241.150 <none> 15020:31038/TCP,80:30205/TCP,31400:30204/TCP,15029:31714/TCP,15030:30016/TCP,15031:32508/TCP,15032:30110/TCP,15443:32730/TCP
I have used the 'demo' Helm option to deploy Istio 1.4.0.
I have created a Gateway, VS and DR with the following config:
the Gateway is in the istio-system namespace, the VS and DR in the default namespace.
apiVersion: networking.istio.io/v1alpha3 # missing from the original post; matches the VS/DR below
kind: Gateway
metadata:
  name: ingress-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 31400
      name: tcp
      protocol: TCP
    hosts:
    - "*"
---
kind: VirtualService
apiVersion: networking.istio.io/v1alpha3
metadata:
  name: webapp
  namespace: default
spec:
  hosts:
  - "*"
  gateways:
  - ingress-gateway
  http:
  - route:
    - destination:
        host: webapp
        subset: original
      weight: 100
    - destination:
        host: webapp
        subset: v2
      weight: 0
---
kind: DestinationRule
apiVersion: networking.istio.io/v1alpha3
metadata:
  name: webapp
  namespace: default
spec:
  host: webapp
  subsets:
  - labels:
      version: original
    name: original
  - labels:
      version: v2
    name: v2
The service pods listen on port 80; I have tested via port forwarding and they function as expected.
However, when I curl https://hostname externally I get:
<head><title>504 Gateway Time-out</title></head>
<body bgcolor="white">
<center><h1>504 Gateway Time-out</h1></center>
I have enabled debug logging in Envoy, but I don't see anything meaningful in the logs relating to the timeout.
Any suggestion on where I might be going wrong?
Do I need to add any service annotations relating to the ELB in the Istio ingress gateway?
Any other suggestions?
I found a few things which need to be fixed.
1. Connect with the load balancer
As I mentioned in the comments, you need to fix your ingress-gateway so it automatically gets an EXTERNAL-IP address, as in the Istio documentation. For now your ingress is a NodePort, so as far as I can tell it won't work; you can configure it to work with a NodePort, but I assume you want the load balancer.
The first step would be to change the istio-ingressgateway svc type from NodePort to LoadBalancer and check if you get an EXTERNAL-IP.
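One way to do that (a sketch; you could also change the type through your Helm values instead of patching):
kubectl patch svc istio-ingressgateway -n istio-system -p '{"spec":{"type":"LoadBalancer"}}'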
If the EXTERNAL-IP value is set, your environment has an external load balancer that you can use for the ingress gateway. If the EXTERNAL-IP value is <none> (or perpetually <pending>), your environment does not provide an external load balancer for the ingress gateway. In this case, you can access the gateway using the service's node port.
It should look like this:
kubectl get svc istio-ingressgateway -n istio-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
istio-ingressgateway LoadBalancer 172.21.109.129 130.211.10.121 80:31380/TCP,443:31390/TCP,31400:31400/TCP 17h
And then everything goes through the EXTERNAL-IP address, which is 130.211.10.121.
2. Fix your yamls
Note, for tcp traffic like that, we must match on the incoming port, in this case port 31400
Check this example from the Istio documentation, especially the part with the gateway, virtual service and destination rule.
You should add this to your virtual service:
tcp:
- match:
  - port: 31400
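Put together with your routes, the VirtualService would look roughly like this (a sketch based on your manifest above; the http: block is replaced by tcp: because the gateway port speaks plain TCP, and the weighted subsets carry over unchanged):
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: webapp
  namespace: default
spec:
  hosts:
  - "*"
  gateways:
  - ingress-gateway
  tcp:
  - match:
    - port: 31400
    route:
    - destination:
        host: webapp
        subset: original
      weight: 100
    - destination:
        host: webapp
        subset: v2
      weight: 0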
3. Remember about namespaces.
In your example, because everything is in default, it should work. But if you create another namespace, remember that if the gateway and virtual service are in different namespaces, your virtual service needs to reference the namespace where the gateway is.
An example can be found here; note especially this part of the virtual service:
gateways:
- some-config-namespace/my-gateway
I hope this helps with your issues. Let me know if you have any more questions.