What is Knative's "mesh" gateway (Istio)?

I see that for every Knative service, two VirtualService objects are created: ksvc-ingress, which has the knative-serving/knative-ingress-gateway and knative-serving/knative-local-gateway gateways configured, and ksvc-mesh, which has mesh as its gateway.
I can see the knative-serving/* gateways using kubectl, but I am unable to find the mesh gateway object in any namespace. I would like to understand whether mesh here denotes some special object or is an Istio keyword representing something else.

The mesh name is a keyword, as you guessed. That keyword represents the East-West traffic between Pods in the Kubernetes cluster, as managed by the Istio sidecar. You can think of those VirtualServices as being programmed onto each sidecar to do the routing and traffic splitting next to the request sender, rather than needing to route to a central service / gateway.

As you noticed, Knative uses Istio as a service mesh.
In the Istio context, mesh is not an object (or resource) like, for example, a Service. The Istio About page explains what a service mesh is:
A service mesh is a dedicated infrastructure layer that you can add to your applications. It allows you to transparently add capabilities like observability, traffic management, and security, without adding them to your own code. The term “service mesh” describes both the type of software you use to implement this pattern, and the security or network domain that is created when you use that software.
So mesh is a term that encapsulates all the Istio objects (istio-proxy containers, VirtualServices, ingress gateways, etc.) that work together to enable traffic management inside the cluster.
A Gateway is a load balancer operating at the edge of the mesh receiving incoming or outgoing HTTP/TCP connections.
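
For illustration, a VirtualService that programs routing onto the sidecars (rather than onto an ingress gateway pod) simply lists mesh in its gateways field. This is only a minimal sketch; the service names and weights are hypothetical, not what Knative actually generates:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: my-app-mesh            # hypothetical name
  namespace: default
spec:
  hosts:
  - my-app.default.svc.cluster.local
  gateways:
  - mesh                       # keyword: apply this routing on every sidecar, not on a gateway pod
  http:
  - route:
    - destination:
        host: my-app-v1.default.svc.cluster.local
      weight: 90
    - destination:
        host: my-app-v2.default.svc.cluster.local
      weight: 10
```

Because the rule is attached to mesh, every sidecar in the mesh applies this split at the request sender, with no central gateway hop involved.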

Related

Envoy proxy usage without Istio

I am researching the use of the Istio service mesh and find that the Envoy proxy is a very good service proxy option to work with it. But over the last couple of years, the Envoy proxy seems to have grown into a cloud-native project in its own right. In our application, we need a service proxy to sit beside our app, and this service proxy should do JWT validation for all incoming requests.
Now I am wondering whether I should just go with the Envoy proxy and set up JWT validation as explained here:
https://www.scottguymer.co.uk/post/configuring-jwt-authentication-in-envoy/
Or should I set it up along with Istio?
Istio also does JWT-claims-based validation at the ingress gateway level:
https://istio.io/latest/docs/tasks/security/authentication/jwt-route/
But my main question is: to keep the architecture light and avoid adding too many layers (if we don't have to), should the Envoy proxy be used without Istio in this specific case?
I have read this online:
A service mesh like Istio acts as a control plane and uses Envoy in the data plane to do app-level processing (like app-level JWT validation per app node) via the sidecar pattern.
But I am wondering if I really need to use a service mesh if all I need is a service proxy beside each app instance.
If you're using Kubernetes, I recommend using Istio, as it will be much easier to manage all your proxies if you end up running many of them.
With Istio you can also select the namespaces or workloads in which automatic sidecar injection applies, so you can decide which apps will run with a sidecar and which won't.
This adds another layer, but it also adds more security to your environment.
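
As a rough sketch of the Istio route: you enable sidecar injection per namespace and declare the JWT rules once, instead of hand-writing an Envoy filter for every app. The namespace, labels, issuer, and jwksUri below are placeholders, not values from the question:

```yaml
# Enable automatic sidecar injection only where you want it:
#   kubectl label namespace my-apps istio-injection=enabled
apiVersion: security.istio.io/v1beta1
kind: RequestAuthentication
metadata:
  name: jwt-validation           # hypothetical name
  namespace: my-apps
spec:
  selector:
    matchLabels:
      app: my-service            # only pods with this label validate JWTs
  jwtRules:
  - issuer: "https://issuer.example.com"
    jwksUri: "https://issuer.example.com/.well-known/jwks.json"
---
# Optionally reject requests that do not carry a valid token at all
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: require-jwt
  namespace: my-apps
spec:
  selector:
    matchLabels:
      app: my-service
  rules:
  - from:
    - source:
        requestPrincipals: ["*"]
```

With plain Envoy you would instead maintain the equivalent JWT filter configuration in each proxy's own config, which is exactly the per-proxy management overhead mentioned above.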

Istio configuration on GKE

I have some basic questions about Istio. I installed Istio for my Tyk API gateway. Then I found that simply installing Istio will cause all traffic between the Tyk pods to be blocked. Is this the default behaviour for Istio? The Tyk gateway cannot communicate with the Tyk dashboard.
When I rebuild my deployment without Istio, everything works fine.
I have also read that Istio can be configured with virtual services to perform traffic routing. Is this what I need to do for every default installation of Istio? Meaning, if I don't create any virtual services, will Istio block all traffic by default?
Secondly, I understand a virtual service is created as a YAML file applied as a CRD. Regarding the host name defined in the virtual service rules: in a default Kubernetes cluster implementation on Google Cloud, how do I find out the host name of my application?
Lastly, if I install Tyk first and later install Istio, and I have created the necessary label in Tyk's namespace for the proxy to be injected, can I just perform a rolling upgrade of my Tyk pods to have Istio start the injection?
For example, I have these labels in my Tyk dashboard service. Do I use the value called "app" in my virtual service YAML?
```yaml
labels:
  app: dashboard-svc-tyk-pro
  app.kubernetes.io/managed-by: Helm
  chart: tyk-pro-0.8.1
  heritage: Helm
  release: tyk-pro
```
Sorry for all the basic questions!
For the question on the Tyk gateway not being able to communicate with the Tyk dashboard:
I think the problem is that your pod tries to connect to the database before the Istio sidecar is ready, and thus the connection can't be established.
Istio runs an init container that configures the pod's route table so that all traffic is routed through the sidecar. So if the sidecar isn't running yet and the other pod tries to connect to the db, no connection can be established. (Example case: Application running in Kubernetes cron job does not connect to database in same Kubernetes cluster.)
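
If that is indeed the cause, one commonly used mitigation (assuming Istio 1.7 or newer) is to ask Istio to hold the application container until the sidecar is ready, for example via a pod annotation on the Deployment. This is a sketch, not a confirmed fix for your setup:

```yaml
# Pod template snippet (e.g. in the Tyk Deployment); the annotation tells Istio
# to start the application container only after the Envoy sidecar is ready.
template:
  metadata:
    annotations:
      proxy.istio.io/config: '{ "holdApplicationUntilProxyStarts": true }'
```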
For the question on virtual services:
Each virtual service consists of a set of routing rules that are evaluated in order, letting Istio match each given request to the virtual service to a specific real destination within the mesh.
By default, Istio configures the Envoy proxies to passthrough requests to unknown services. However, you can’t use Istio features to control the traffic to destinations that aren’t registered in the mesh.
For the question on the host name, refer to this documentation:
The hosts field lists the virtual service’s hosts - in other words, the user-addressable destination or destinations that these routing rules apply to. This is the address or addresses the client uses when sending requests to the service.
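
In practice, for an in-mesh destination you usually put the Kubernetes Service name (ideally its cluster-internal FQDN) in hosts, not the app label. Assuming the dashboard Service is named dashboard-svc-tyk-pro and lives in a namespace called tyk (both guesses based on the labels above), a sketch could look like:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: tyk-dashboard            # hypothetical name
  namespace: tyk                 # hypothetical namespace
spec:
  hosts:
  - dashboard-svc-tyk-pro.tyk.svc.cluster.local   # the Service's cluster DNS name
  http:
  - route:
    - destination:
        host: dashboard-svc-tyk-pro.tyk.svc.cluster.local
```

You can list the actual Service names with kubectl get svc -n <namespace>.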
For adding Istio on GKE to an existing cluster, please refer to this documentation:
If you want to update a cluster with the add-on, you may need to first resize your cluster to ensure that you have enough resources for Istio. As when creating a new cluster, we suggest at least a 4-node cluster with the 2 vCPU machine type. If you have an existing application on the cluster, you can find out how to migrate it so it's managed by Istio as mentioned in the Istio documentation.
You can uninstall the add-on by following this document, which includes shifting traffic away from the Istio ingress gateway. Please take a look at this doc for more details on installing and uninstalling Istio on GKE.
I am also adding this document for installing Istio on GKE, which covers installing it on an existing cluster to quickly evaluate Istio.

How to expose internal services in Kubernetes via an ingress controller

I have many services deployed in K8s. Some of the services are exposed externally using an ingress controller, with DNS registered in AWS Route 53.
There are some internal services which are to be used strictly internally.
What are the ways to access internal services in K8s?
FYI, I have CoreDNS enabled, and service-to-service communication should happen via the ingress controller's nginx.conf file.
Regards
That depends on what you mean by internal services which are to be used strictly internally.
If the question is about services exposed only inside the cluster, then you can use Services of the ClusterIP type.
As mentioned in documentation:
ClusterIP: Exposes the Service on a cluster-internal IP. Choosing this value makes the Service only reachable from within the cluster.
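
A minimal sketch of such a Service (name, labels, and ports are placeholders):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: internal-api             # hypothetical name
spec:
  type: ClusterIP                # reachable only from inside the cluster
  selector:
    app: internal-api
  ports:
  - port: 80
    targetPort: 8080
```

Other pods can then reach it via CoreDNS at internal-api.<namespace>.svc.cluster.local without it ever being exposed externally.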
If the question is about services exposed outside the cluster, then I would recommend using the whitelist-source-range annotation.
Please refer to the following documentation for more information about it. There is also an example.
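
With the NGINX ingress controller, that could look roughly like this (the host, service name, and CIDR ranges are placeholders); only clients coming from the listed ranges are allowed through this Ingress:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: internal-api             # hypothetical name
  annotations:
    nginx.ingress.kubernetes.io/whitelist-source-range: "10.0.0.0/8,192.168.0.0/16"
spec:
  ingressClassName: nginx
  rules:
  - host: internal-api.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: internal-api
            port:
              number: 80
```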

Are there two levels of load balancing when using Istio Destination Rules?

As far as I understood, Istio Destination Rules can define load-balancing policies to reach a subset of a service, e.g. a subset based on different versions of the service. So the Destination Rules are the first level of load balancing.
The request will eventually reach a K8s Service, which is generally implemented by kube-proxy. Kube-proxy does simple load balancing across the pods behind it. Here is the second level of load balancing.
Is there a way to remove the second load balancer? For example, could we create many service instances that offer the same service and can be load-balanced by Destination Rules, and then have only one pod per service instance, so that kube-proxy does not apply load balancing?
According to the Istio documentation:
Istio’s traffic routing rules let you easily control the flow of traffic and API calls between services. Istio simplifies configuration of service-level properties like circuit breakers, timeouts, and retries, and makes it easy to set up important tasks like A/B testing, canary rollouts, and staged rollouts with percentage-based traffic splits. It also provides out-of-box failure recovery features that help make your application more robust against failures of dependent services or the network.
Istio’s traffic management model relies on the Envoy proxies that are deployed along with your services. All traffic that your mesh services send and receive (data plane traffic) is proxied through Envoy, making it easy to direct and control traffic around your mesh without making any changes to your services.
If you’re interested in the details of how the features described in this guide work, you can find out more about Istio’s traffic management implementation in the architecture overview. The rest of this guide introduces Istio’s traffic management features.
This means that the Istio service mesh communicates via the Envoy proxy, which in turn relies on Kubernetes networking.
For example, a VirtualService attached to the Istio ingress gateway can load-balance its traffic to two different services based on labels. Each of those services can then have multiple pods.
Istio load balancing in this case works at the application layer (layer 7), which results in a route to a specific destination (one of the services), and it relies on Kubernetes to handle connections and the rest, including the Service's round-robin load balancing (layer 4) when there are multiple pods.
The advantage of having a single Service with multiple pods is obviously easier configuration and management. With one pod per Service, each Service would need to be reconfigured separately and would lose its ability to scale.
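
To make the two levels concrete, here is a sketch of the Istio side (service names, versions, and weights are hypothetical): a DestinationRule defines version subsets and can set Istio's own load-balancing policy, and a VirtualService splits traffic between those subsets, while each subset still resolves to an ordinary Kubernetes Service endpoint set behind it.

```yaml
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: reviews                  # hypothetical service
spec:
  host: reviews.default.svc.cluster.local
  trafficPolicy:
    loadBalancer:
      simple: ROUND_ROBIN        # Istio/Envoy-level (L7) balancing across endpoints
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews.default.svc.cluster.local
  http:
  - route:
    - destination:
        host: reviews.default.svc.cluster.local
        subset: v1
      weight: 80
    - destination:
        host: reviews.default.svc.cluster.local
        subset: v2
      weight: 20
```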
There is a great video on YouTube which partially covers this topic:
Life of a Packet through Istio by Matt Turner.
I highly recommend watching it, as it explains how Istio works on a fundamental level.

Set static response from Istio Ingress Gateway

How do you set a static 200 response in Istio's Ingress Gateway?
We have a situation where we need an endpoint to return a small bit of static content (a bootstrap URL). We could even put it in a header. Can Istio host something like that or do we need to run a pod for no other reason than to return a single word?
Specifically I am looking for a solution that returns 200 via Istio configuration, not a pod that Istio routes to (which is quite a common example and available elsewhere).
You have to do it manually by creating a VirtualService that routes to a specific Service connected to a pod.
Of course, you first have to create the pod and then attach a Service to it, even if your application will only return a single word.
Istio Gateways are responsible for opening ports on the relevant Istio gateway pods and receiving traffic for hosts. That's it.
The VirtualService: Istio VirtualServices are what get "attached" to Gateways and are responsible for defining the routes the gateway should implement. You can have multiple VirtualServices attached to Gateways, but not for the same domain.
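
As a sketch of that setup (hostnames and service names are placeholders): a Gateway opens the port for the host, and a VirtualService attached to it routes the bootstrap path to the small Service backing your pod.

```yaml
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: bootstrap-gateway        # hypothetical name
spec:
  selector:
    istio: ingressgateway        # the default Istio ingress gateway pods
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "bootstrap.example.com"
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: bootstrap
spec:
  hosts:
  - "bootstrap.example.com"
  gateways:
  - bootstrap-gateway
  http:
  - match:
    - uri:
        exact: /bootstrap
    route:
    - destination:
        host: bootstrap-svc      # the tiny Service in front of the pod returning the static content
        port:
          number: 80
```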