How to enable rate limits for outside URLs in Istio?

Following this task, I can access external services by defining ServiceEntry configurations. In another task, I can limit the traffic to a service, and it works within the cluster. But I failed to limit the traffic from a service in the cluster to external URLs like www.google.com.
This is my adapter configuration:
apiVersion: "config.istio.io/v1alpha2"
kind: memquota
metadata:
  name: handler
  namespace: samples
spec:
  quotas:
  - name: requestcount.quota.istio-system
    maxAmount: 15
    validDuration: 10s
and this is my quota configuration:
apiVersion: "config.istio.io/v1alpha2"
kind: quota
metadata:
  name: requestcount
  namespace: samples
spec:
  dimensions:
    source: source.labels["app"] | source.labels["svc"] | "unknown"
    destination: dnsName("www.google.com") | uri("https://www.google.com") | "unknown"
How can I enable rate limits for outside URLs in Istio?

You should direct the traffic through an egress gateway and then apply the rate limiting there. The issue is that in Istio, policy enforcement is performed at the destination, and in the case of external services, the destination is represented by an egress gateway.
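As a rough sketch (the resource names, the istio-egressgateway gateway, and the istio-system namespace are assumptions based on Istio's standard egress-gateway setup, not taken from your configs), you would register the external host with a ServiceEntry and route in-mesh traffic for it through the egress gateway, so that your quota check is enforced on the gateway:

```yaml
# Register www.google.com as a known host in the mesh.
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: google
spec:
  hosts:
  - www.google.com
  ports:
  - number: 80
    name: http
    protocol: HTTP
  resolution: DNS
---
# Route in-mesh traffic for www.google.com to the egress gateway
# first, then from the gateway out to the external host. The rate
# limit can then be applied to the gateway, which acts as the
# "destination" for policy enforcement.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: google-through-egress
spec:
  hosts:
  - www.google.com
  gateways:
  - mesh
  - istio-egressgateway
  http:
  - match:
    - gateways:
      - mesh
      port: 80
    route:
    - destination:
        host: istio-egressgateway.istio-system.svc.cluster.local
        port:
          number: 80
  - match:
    - gateways:
      - istio-egressgateway
      port: 80
    route:
    - destination:
        host: www.google.com
        port:
          number: 80
```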

Related

Linkerd route-based metrics do not work with GKE default ingress controller

I have a microservice running in GKE. I am trying to befriend the default GKE GCE ingress with Linkerd so that I can observe route-based metrics with Linkerd. The documentation says that the GCE ingress should be meshed with ingress mode enabled, i.e. with the linkerd.io/inject: ingress annotation rather than the default enabled. I tried the following Ingress resource; however, route-based metrics are not coming through. I checked with the linkerd tap command, and rt_route is not getting set.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
  namespace: emojivoto
  annotations:
    ingress.kubernetes.io/custom-request-headers: "l5d-dst-override: web-svc.emojivoto.svc.cluster.local:80"
    ingress.gcp.kubernetes.io/pre-shared-cert: "managed-cert-name"
    kubernetes.io/ingress.global-static-ip-name: "static-ip-name"
    linkerd.io/inject: ingress
spec:
  ingressClassName: gce
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web-svc
            port:
              number: 80
I suspect that the linkerd.io/inject: ingress annotation should be added to the ingress controller; however, since it is managed by GKE, I do not know how to add it.
The linkerd.io/inject: ingress annotation should be added to your deployment(s), or to one or more namespaces for automatic injection.
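As a minimal sketch (the deployment name, labels, and image are placeholders; only the annotation placement is the point), the annotation goes either on the pod template of the workload or on the namespace:

```yaml
# Option 1: annotate the workload's pod template.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
  namespace: emojivoto
spec:
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
      annotations:
        linkerd.io/inject: ingress   # proxy injected in ingress mode
    spec:
      containers:
      - name: web
        image: registry.example.com/web:latest   # placeholder image
---
# Option 2: annotate the namespace so every pod created in it
# gets the proxy injected automatically.
apiVersion: v1
kind: Namespace
metadata:
  name: emojivoto
  annotations:
    linkerd.io/inject: ingress
```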

If I put the name of a K8s svc in the spec.host of Istio's DestinationRule, will all traffic going to that service go through the DestinationRule?

Istio has a custom resource called DestinationRule, and this resource has a field called spec.host, as shown below.
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: my-destination-rule
spec:
  host: my-svc
I want all traffic to this service to go through the DestinationRule.
If I enter the name of the service in host, will all traffic go through the DestinationRule, and not only calls that address the service by its name via service discovery? For example, does the DestinationRule apply even when the service is reached through an external DNS name?

GKE Ingress Rate Limiting

We are using the GCP ingress gateway for our Kubernetes cluster's public access URL. Is there any way, or any specific annotation, that we can use to restrict the number of requests per second (RPS) to our services?
With the help of the annotation below, you can set an RPS limit in the NGINX ingress controller.
nginx.ingress.kubernetes.io/limit-rps
In the example below, the rate limit is set to 5 requests per second.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/limit-rps: "5"

Shared istio egress gateway in multi-cluster/multi-primary mesh

We are trying to setup an egress gateway in a multi-cluster/multi-primary mesh
configuration where the egress gateway is located in only one cluster but used from both.
[diagram of desired setup]
The use case is that the clusters are in different network zones and we want to be able
to route traffic to one zone transparently to the clients in the other zone.
We followed this guide in one cluster and it worked fine. However, we have trouble setting up the VirtualService in the second cluster to use the egress gateway in the first cluster.
When deploying the following VirtualService to the second cluster, we get a 503 with cluster_not_found.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: direct-cnn-through-egress-gateway
spec:
  hosts:
  - edition.cnn.com
  gateways:
  - istio-egressgateway
  - mesh
  http:
  - match:
    - gateways:
      - mesh
      port: 80
    route:
    - destination:
        host: istio-egressgateway.istio-system.svc.cluster.local
        port:
          number: 80
      weight: 100
  - match:
    - gateways:
      - istio-egressgateway
      port: 80
    route:
    - destination:
        host: edition.cnn.com
        port:
          number: 80
      weight: 100
The endpoint proxy config on a pod in the second cluster is missing the istio-egressgateway.istio-gateways.svc.cluster.local endpoints (all other services are discovered and directed to the east-west gateway of the other cluster).
We believe this is the reason the VirtualService doesn't work in the second cluster.
As a workaround we could redirect the egress traffic to the ingress gateway of the first cluster but this
has the disadvantage that the traffic leaves and re-enters the mesh which probably has an impact on tracing and monitoring.
Is it currently possible to setup a single egress gateway that can be used by all clusters in the mesh or do we have to go with the workaround?
According to the comments, the solution works as follows:
To create a multi-cluster deployment, you can use this tutorial. In this situation, cross-cluster traffic between normal services works fine. However, there is a problem with getting the traffic to the egress gateway routed via the east-west gateway. This can be solved with this example.
You should also change kind: VirtualService to kind: ServiceEntry in both clusters.
As Tobias Henkel mentioned:
I got it to work fine with the service entry if I target the ingress gateway on ports 80/443 which then dispatches further to the mesh external services.
You can also use Admiral to automate traffic routing.
See also:
multi cluster mesh automation using Admiral
multi cluster service mesh on GKE
tutorial on GKE to create similar situation
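A minimal sketch of such a ServiceEntry, following the quoted comment (the resource name, gateway address, and port names are placeholders, not taken from the question; adjust them to your setup): it resolves the external host to the gateway in the first cluster on ports 80/443, which then dispatches onward to the mesh-external service.

```yaml
# Hypothetical ServiceEntry, applied in both clusters: traffic for
# edition.cnn.com is sent to the gateway in the first cluster,
# which dispatches it to the external service.
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: cnn-via-gateway
spec:
  hosts:
  - edition.cnn.com
  location: MESH_INTERNAL
  ports:
  - number: 80
    name: http
    protocol: HTTP
  - number: 443
    name: tls
    protocol: TLS
  resolution: STATIC
  endpoints:
  - address: 10.0.0.100   # placeholder: reachable address of the gateway in cluster 1
```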

How do I map external traffic to the internal service mesh on GCP Traffic Director?

I've set up a simple GKE cluster hooked to GCP Traffic Director with the Traffic Director setup with automatic Envoy injection tutorial.
The next step is how do I map external traffic into the Traffic Director backend service, which is only internal?
Basically, my goal is to have an external load balancer with an IP address that takes outside traffic and routes it to the Traffic Director service mesh to split traffic between different Network Endpoint Groups.
I tried the following:
Create an external load balancer manually in Network Services -> Load Balancing. However, the list of backends does not include the Traffic Director backend service, so I can't create one that has an external IP and redirects to the internal service mesh.
Install the NGINX ingress controller chart and create an ingress via .yaml that maps to the K8s cluster service. This creates an external load balancer, but it simply goes directly to the service instead of through Traffic Director.
Ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-nginx-ingress
  annotations:
    nginx.ingress.kubernetes.io/proxy-connect-timeout: "60"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "60"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "60"
    nginx.ingress.kubernetes.io/send-timeout: "60"
    nginx.ingress.kubernetes.io/proxy-body-size: 1M
    nginx.ingress.kubernetes.io/use-regex: "true"
spec:
  rules:
  - host: my-host-name.hostname.com
    http:
      paths:
      - path: "/"
        backend:
          serviceName: service-test
          servicePort: 80
Service:
apiVersion: v1
kind: Service
metadata:
  name: service-test
  annotations:
    cloud.google.com/neg: '{"exposed_ports":{"80":{"name": "service-test-neg"}}}'
spec:
  ports:
  - port: 80
    name: service-test
    protocol: TCP
    targetPort: 8000
  selector:
    run: app1
  type: ClusterIP
Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    run: app1
  name: app1
spec:
  replicas: 1
  selector:
    matchLabels:
      run: app1
  template:
    metadata:
      labels:
        run: app1
    spec:
      containers:
      - image: gcr.io/kubernetes-e2e-test-images/serve-hostname-amd64:1.1
        name: app1
        command:
        - /bin/sh
        - -c
        - /serve_hostname -http=true -udp=false -port=8000
        ports:
        - protocol: TCP
          containerPort: 8000
The deployment and service above are taken directly from the tutorial.
There is a concept in the official documentation, "Handling ingress traffic using a second-level gateway at the edge of your mesh", but it is only conceptual and does not explain how to actually do it.
How do I map external traffic using an external load balancer into a GCP Traffic Director-managed service mesh for advanced traffic configuration into GKE?
Traffic Director is not an endpoint to point to for routing. It is the "control plane" of your service mesh.
So you would configure your routing rules from GCP, and Traffic Director would configure your sidecars as expected. But eventually your Load Balancer should point to an Instance Group or Network Endpoint Group, not to Traffic Director.
EDIT
Traffic Director is not the one getting configured, but the one configuring. It configures the Envoy sidecars. These are L7 proxies, so the URL mapping happens on the proxies.
The endpoint group will be a group of pod IP addresses. Since the pod ranges of the cluster have been added to the subnetwork as IP aliases, the VPC can take any IP address from this range, group it, and make it a backend for an HTTP load balancer on GCP.
Basically, Traffic Director is Istio, but with the control plane decoupled into GCP.