I have done the initial setup of Istio 1.9 and deployed the Bookinfo application to replicate the rate-limiting sample from the Istio site. We have a use case for rate limiting in our application and I am proposing Istio as the solution, but I am facing challenges while applying the YAML provided in the official Istio task itself.
Could anybody help me out?
https://istio.io/latest/docs/tasks/policy-enforcement/rate-limit/
I have deployed the Bookinfo sample from the Istio docs.
Envoy YAML
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: filter-ratelimit
  namespace: istio-system
spec:
  workloadSelector:
    # select by label in the same namespace
    labels:
      istio: ingressgateway
  configPatches:
    # The Envoy config you want to modify
    - applyTo: HTTP_FILTER
      match:
        context: GATEWAY
        listener:
          filterChain:
            filter:
              name: "envoy.filters.network.http_connection_manager"
              subFilter:
                name: "envoy.filters.http.router"
      patch:
        operation: INSERT_BEFORE
        # Adds the Envoy Rate Limit Filter in HTTP filter chain.
        value:
          name: envoy.filters.http.ratelimit
          typed_config:
            "@type": type.googleapis.com/envoy.extensions.filters.http.ratelimit.v3.RateLimit
            # domain can be anything! Match it to the ratelimiter service config
            domain: productpage-ratelimit
            failure_mode_deny: true
            rate_limit_service:
              grpc_service:
                envoy_grpc:
                  cluster_name: rate_limit_cluster
                timeout: 10s
              transport_api_version: V3
    - applyTo: CLUSTER
      match:
        cluster:
          service: ratelimit.default.svc.cluster.local
      patch:
        operation: ADD
        # Adds the rate limit service cluster for rate limit service defined in step 1.
        value:
          name: rate_limit_cluster
          type: STRICT_DNS
          connect_timeout: 10s
          lb_policy: ROUND_ROBIN
          http2_protocol_options: {}
          load_assignment:
            cluster_name: rate_limit_cluster
            endpoints:
            - lb_endpoints:
              - endpoint:
                  address:
                    socket_address:
                      address: ratelimit.default.svc.cluster.local
                      port_value: 8081
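(Side note: the domain above has to match the domain used in the rate limit service's own configuration. A minimal sketch of that config in the envoyproxy/ratelimit format, assuming the PATH descriptor used in the Istio task, would look roughly like this:)

domain: productpage-ratelimit
descriptors:
  - key: PATH
    value: "/productpage"
    rate_limit:
      unit: minute
      requests_per_unit: 1
  - key: PATH
    rate_limit:
      unit: minute
      requests_per_unit: 100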
Error while applying the envoy yaml:
Error from server: error when creating "envoyfilter.yaml": admission webhook "validation.istio.io" denied the request: configuration is invalid: Envoy filter: subfilter match requires filter match with envoy.http_connection_manager
As suspected earlier in the comments, the issue is caused by using an old version of Istio (1.7) instead of the expected 1.9. The old version still expects the deprecated filter names:
envoy.http_connection_manager instead of envoy.filters.network.http_connection_manager
envoy.router instead of envoy.filters.http.router
Access Logger, Listener Filter, HTTP Filter, Network Filter, Stats
Sink, and Tracer names have been deprecated in favor of the extension
name from the envoy build system.
While analyzing your issue I stumbled upon several good sources that you will find useful while learning:
Docs regarding the filters you plan to use
Understanding Envoy Filters
Extending Istio with the EnvoyFilter CRD
and of course the Istio docs
My Istio version was 1.7, which is why I got the above error. After upgrading to Istio 1.9 it started working.
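For anyone who has to stay on an older control plane (Istio 1.7 or earlier) for a while, the validation webhook there expects the legacy names in the match section, so the HTTP_FILTER patch would roughly look like the sketch below (untested; upgrading as above is the cleaner fix):

- applyTo: HTTP_FILTER
  match:
    context: GATEWAY
    listener:
      filterChain:
        filter:
          name: "envoy.http_connection_manager"
          subFilter:
            name: "envoy.router"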
Related
Does anyone know how to do IP whitelisting properly with an Istio AuthorizationPolicy? I was able to follow https://istio.io/latest/docs/tasks/security/authorization/authz-ingress/ to set up whitelisting on the gateway. However, is there a way to do this on a specific workload with a selector? Like this:
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: app-ip-whitelisting
  namespace: foo
spec:
  selector:
    matchLabels:
      app: app1
  rules:
  - from:
    - source:
        IpBlocks:
        - xx.xx.xx.xx
I was not able to get this to work. And I am using Istio 1.6.8
I'm running Istio 1.5.6 and the following is working (whitelisting): only IP addresses listed in ipBlocks are allowed to reach the specified workload; other IPs get response code 403. I find the term ipBlocks confusing: it does not block anything. If you want to block certain IPs (blacklisting), you'll need to use notIpBlocks instead (see the sketch at the end of this answer).
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: peke-echo-v1-ipblock
  namespace: peke-echo-v1
spec:
  selector:
    matchLabels:
      app: peke-echo-v1
      version: v1
  rules:
  - from:
    - source:
        ipBlocks:
        - 173.18.180.128
        - 173.18.191.159
        - 173.20.58.39
Note that ipBlocks is lower camelCase.
Sometimes it takes a while before the policy becomes effective.
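As a sketch of that blacklisting variant (the policy name and the address below are made-up examples): keep action ALLOW but list the unwanted addresses under notIpBlocks, so every source except those addresses matches the rule and is allowed, which effectively blocks them for the selected workload:

apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: peke-echo-v1-ipdeny   # example name
  namespace: peke-echo-v1
spec:
  selector:
    matchLabels:
      app: peke-echo-v1
      version: v1
  rules:
  - from:
    - source:
        notIpBlocks:
        - 203.0.113.7   # example address to block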
I successfully deployed a simple Voila dashboard using Google Cloud Run for Anthos. However, since I created the deployment using a GitLab CI pipeline, by default the service was assigned a long and obscure domain name (e.g. http://sudoku.dashboards-19751688-sudoku.k8s.proteinsolver.org/).
I followed the instructions in mapping custom domains to map a shorter custom domain to the service described above (e.g. http://sudoku.k8s.proteinsolver.org). However, while the static assets load fine from this new custom domain, the interactive dashboard does not load, and the JavaScript console is populated with errors:
default.js:64 WebSocket connection to 'wss://sudoku.k8s.proteinsolver.org/api/kernels/5bcab8b9-11d5-4de0-8a64-399e35258aa1/channels?session_id=7a0eed38-77bb-40e8-ad77-d05632b5fa1b' failed: Error during WebSocket handshake: Unexpected response code: 503
_createSocket # scheduler.production.min.js:10
[...]
Is there a way to get web sockets to work with custom domains? Am I doing something wrong?
TL;DR: the following YAML needs to be applied to make websockets work:
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: allowconnect-cluser-local-gateway
  namespace: gke-system
spec:
  workloadSelector:
    labels:
      app: cluster-local-gateway
  configPatches:
  - applyTo: NETWORK_FILTER
    match:
      listener:
        portNumber: 80
        filterChain:
          filter:
            name: "envoy.http_connection_manager"
    patch:
      operation: MERGE
      value:
        typed_config:
          "@type": "type.googleapis.com/envoy.config.filter.network.http_connection_manager.v2.HttpConnectionManager"
          http2_protocol_options:
            allow_connect: true
Here is the explanation.
For the custom domain feature, the request path is
client ---> istio-ingress envoy pods ---> cluster-local-gateway envoy pods ---> user's application.
Websocket requests in particular require the cluster-local-gateway Envoy pods to support the extended CONNECT feature.
The EnvoyFilter YAML above enables extended CONNECT by setting allow_connect to true within the cluster-local-gateway pods.
I tried it myself, and it works for me.
I don't know anything about your GitLab CI pipeline. By default, Knative (Cloud Run for Anthos) assigns external domain names like {name}.{namespace}.example.com where example.com can be customized based on your domain.
You can find this domain at Cloud Console or kubectl get ksvc.
First check whether this default domain works correctly with websockets. If so, it is indeed a "custom domain" issue. (If you are not sure, please edit your title/question not to mention "custom domains".)
Also, you need to explicitly mark your container port as h2c on Knative for websockets to work. See the ports section below, specifically name: h2c:
apiVersion: serving.knative.dev/v1alpha1
kind: Service
metadata:
  name: hello
spec:
  template:
    spec:
      containers:
      - image: gcr.io/google-samples/hello-app:1.0
        ports:
        - name: h2c
          containerPort: 8080
I also see that the response code to your requests is HTTP 503, likely indicating a server error. Please check your application’s logs.
My setup: Running EKS 1.14 with Calico and Istio 1.5.
Trying to get the sample bookinfo to run with specific NetworkPolicies.
I've applied a GlobalNetworkPolicy that denies all traffic:
apiVersion: projectcalico.org/v3
kind: GlobalNetworkPolicy
metadata:
  name: default-deny
spec:
  selector: all()
  types:
  - Ingress
  - Egress
I also added a GlobalNetworkPolicy to allow istio-system-to-namespace and intra-namespace traffic:
apiVersion: projectcalico.org/v3
kind: GlobalNetworkPolicy
metadata:
  name: allow-istio-system-to-ns
spec:
  selector: all()
  namespaceSelector: istio-injection == 'enabled'
  types:
  - Ingress
  - Egress
  ingress:
  - action: Allow
    source:
      selector: all()
      namespaceSelector: projectcalico.org/namespace == 'istio-system'
  - action: Allow
    source:
      selector: all()
      namespaceSelector: istio-injection == 'enabled'
  egress:
  - action: Allow
and a NetworkPolicy allowing all ingress and egress on istio-system
apiVersion: projectcalico.org/v3
kind: NetworkPolicy
metadata:
  name: allow-istio-system-all
  namespace: istio-system
spec:
  selector: all()
  types:
  - Ingress
  - Egress
  ingress:
  - action: Allow
  egress:
  - action: Allow
And a NetworkPolicy to allow traffic to the ingress-gateway on ports 80 and 443. I know this one is redundant, but I was hoping to cut down the previous one to only necessary ingress.
apiVersion: projectcalico.org/v3
kind: NetworkPolicy
metadata:
  name: allow-istio-ingress-on-80-443
  namespace: istio-system
spec:
  selector: app == 'istio-ingressgateway'
  ingress:
  - action: Allow
    protocol: TCP
    destination:
      ports:
      - 80
      - 443
Also some other, unrelated NetworkPolicies (access to kube-dns, metrics-server).
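For reference, the kube-dns egress policy is roughly of this shape (a sketch using the standard k8s-app: kube-dns labels, not the exact policy I have applied):

apiVersion: projectcalico.org/v3
kind: GlobalNetworkPolicy
metadata:
  name: allow-dns-egress
spec:
  selector: all()
  types:
  - Egress
  egress:
  - action: Allow
    protocol: UDP
    destination:
      selector: k8s-app == 'kube-dns'
      namespaceSelector: projectcalico.org/namespace == 'kube-system'
      ports:
      - 53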
Deploying Bookinfo works both with and without the policies, and the same goes for deploying the Gateway.
The connection between the components works (I can exec into one pod and connect to the others).
But when I try to access the productpage via the gateway with the policies in place, I get nothing back, instead of the HTML I get without the policies.
Am I missing some traffic that should be allowed? Should I have policies for the master nodes or for the loadbalancer somewhere else?
Edit: If I allow all ingress into istio-system and into the namespace, it works. So I think I'm just missing some other ingress into the namespace; is there a way to limit it to just the loadbalancer?
First of all, there is a typo in your allow-istio-system-to-ns YAML:
namespaceSelector: projectcalico.org/namespace == 'istio-system
There should be another ' at the end of the line.
Secondly, this could be caused by the changes to policy and Mixer in Istio version 1.5.
According to Istio documentation:
The mixer policy is deprecated in Istio 1.5 and not recommended for production usage.
Rate limiting: Consider using Envoy native rate limiting instead of mixer rate limiting. Istio will add support for native rate limiting API through the Istio extensions API.
Control headers and routing: Consider using Envoy ext_authz filter, lua filter, or write a filter using the Envoy-wasm sandbox.
Denials and White/Black Listing: Please use the Authorization Policy for enforcing access control to a workload.
There is a guide in the Istio documentation that shows how to turn the deprecated features back on:
For an existing Istio mesh
Check the status of policy enforcement for your mesh.
$ kubectl -n istio-system get cm istio -o jsonpath="{@.data.mesh}" | grep disablePolicyChecks
disablePolicyChecks: true
If policy enforcement is enabled (disablePolicyChecks is false), no further action is needed.
Update the istio configuration to enable policy checks.
Execute the following command from the root Istio directory:
$ istioctl manifest apply --set values.global.disablePolicyChecks=false --set values.pilot.policy.enabled=true
configuration "istio" replaced
Validate that policy enforcement is now enabled.
$ kubectl -n istio-system get cm istio -o jsonpath="{@.data.mesh}" | grep disablePolicyChecks
disablePolicyChecks: false
Note that the Calico documentation says it requires one of the following Istio versions:
Istio v1.0, v1.1, v1.2, or v1.3
Hope it helps.
For some reason it seems to work when I replace projectcalico.org/namespace == 'istio-system' with a label from istio-system (e.g. istio-injection=disabled).
apiVersion: projectcalico.org/v3
kind: GlobalNetworkPolicy
metadata:
  name: allow-istio-system-to-ns
spec:
  selector: all()
  namespaceSelector: istio-injection == 'enabled'
  types:
  - Ingress
  - Egress
  ingress:
  - action: Allow
    source:
      selector: all()
      namespaceSelector: istio-injection == 'disabled'
  - action: Allow
    source:
      selector: all()
      namespaceSelector: istio-injection == 'enabled'
  egress:
  - action: Allow
I have a requirement wherein I would like to allow certain CIDR ranges to access my service; everything else should be denied.
I have tried the Istio IP whitelisting/blacklisting as mentioned in the official Istio documentation.
For example, 10.0.0.2/16 should be allowed and the rest denied. This doesn't seem to work.
apiVersion: config.istio.io/v1alpha2
kind: handler
metadata:
  name: whitelistip
spec:
  compiledAdapter: listchecker
  params:
    # providerUrl: ordinarily black and white lists are maintained
    # externally and fetched asynchronously using the providerUrl.
    overrides: ["10.0.0.2/16"]  # overrides provide a static list
    blacklist: true
    entryType: IP_ADDRESSES
---
apiVersion: config.istio.io/v1alpha2
kind: instance
metadata:
  name: sourceip
spec:
So basically, Istio 1.5.0 was released a few days ago, and if we check the Istio docs, white/black lists are now deprecated:
Denials and White/Black Listing (Deprecated)
But there is actually good news, because there is a new example for authorization on the ingress gateway which should answer your question.
I am not able to get the real client IP hence not able to block/allow using authorization policy or IP based whitelisting.
Based on this new example, which I tested myself, if you want to see your source IP you have to change the istio-ingressgateway externalTrafficPolicy from Cluster to Local.
Update the ingress gateway to set externalTrafficPolicy: local to preserve the original client source IP on the ingress gateway using the following command:
kubectl patch svc istio-ingressgateway -n istio-system -p '{"spec":{"externalTrafficPolicy":"Local"}}'
And the allow example
The following example creates the authorization policy, ingress-policy, for the Istio ingress gateway. The following policy sets the action field to ALLOW to allow the IP addresses specified in the ipBlocks to access the ingress gateway. IP addresses not in the list will be denied. The ipBlocks supports both single IP address and CIDR notation.
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: ingress-policy
  namespace: istio-system
spec:
  selector:
    matchLabels:
      app: istio-ingressgateway
  action: ALLOW
  rules:
  - from:
    - source:
        ipBlocks: ["1.2.3.4", "5.6.7.0/24", "$CLIENT_IP"]
I hope this answers your question. Let me know if you have any more questions.
Another solution in Istio 1.5:
Configure external traffic:
kubectl patch svc istio-ingressgateway -n istio-system -p '{"spec":{"externalTrafficPolicy":"Local"}}'
Or in Helm installation:
--set gateways.istio-ingressgateway.externalTrafficPolicy
And you can use it in any namespace like this:
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: whitelist
  namespace: foo
spec:
  action: ALLOW
  rules:
  - from:
    - source:
        namespaces: ["istio-system"]
    when:
    - key: request.headers[X-Envoy-External-Address]
      values: ["1.2.3.4/24"]  # IP to allow
  selector:
    matchLabels:
      app: httpbin
I have a dockerized Flask application running on Kubernetes in Google Cloud Platform with Identity-Aware Proxy enabled. I can serve a "Hello World" website, but when I try to use signed JWT headers, problems occur.
In my browser I am presented with
There was a problem with your request. Error code 9
My app is like this example and I use gunicorn to run it. The trouble seems to happen at this first line:
jwt = request.headers.get('x-goog-iap-jwt-assertion')
but that just makes no sense to me. I can return a string before that line but not after it. Any suggestions?
Details on the current kubernetes cluster
apiVersion: apps/v1
kind: Deployment
metadata:
  name: internal-tools-app
spec:
  selector:
    matchLabels:
      app: internal-tools
  template:
    metadata:
      labels:
        app: internal-tools
    spec:
      containers:
      - name: internal-web-app
        image: <<MY_IMAGE>>
---
apiVersion: cloud.google.com/v1beta1
kind: BackendConfig
metadata:
  name: internal-tools-backend-config
  namespace: default
spec:
  iap:
    enabled: true
    oauthclientCredentials:
      secretName: internal-tools-oauth
---
apiVersion: v1
kind: Service
metadata:
  name: internal-tools-service
  annotations:
    beta.cloud.google.com/backend-config: '{"default": "internal-tools-backend-config"}'
spec:
  type: NodePort
  selector:
    app: internal-tools
  ports:
  - name: it-first-port
    protocol: TCP
    port: 8080
    targetPort: 8080
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.global-static-ip-name: internal-tools-ip
    ingress.gcp.kubernetes.io/pre-shared-cert: "letsencrypt-internal-tools"
  name: internal-tools-ingress
spec:
  rules:
  - host: <<MY_DOMAIN>>
    http:
      paths:
      - backend:
          serviceName: internal-tools-service
          servicePort: it-first-port
EDIT
Further investigation shows
ImportError: Error loading shared library libssl.so.45: No such file or directory (needed by /usr/local/lib/python3.6/site-packages/cryptography/hazmat/bindings/_openssl.abi3.so)
when running the following
jwt.decode(
    iap_jwt, key,
    algorithms=['ES256'],
    audience=expected_audience)
I just fixed this error code tonight by deleting and recreating my frontend and Google-managed cert objects in the GCP console. It seemed to happen after I decommissioned and repurposed a cluster and deployed my app on a brand-new cluster with the same static IP address.
I got this answer from the Google Cloud Team bug tracker:
The Error code 9 occurs when multiple requests for re-authentication occur simultaneously (in particular, often caused by browsers reloading multiple windows/tabs at once). This flow currently requires for a temporary cookie flow to succeed first, and this cookie is unique to that flow. However if one flow starts before the previous one finishes, for example with multiple simultaneous refreshes in the same browser, this will cause the error you saw, and cause users to face that auth page.
You can try the options below to overcome the issue:
- restart the browser
- clear cookies
- better handling of sessions by implementing session handlers
https://issuetracker.google.com/issues/155005454