Istio metrics destination unknown

Scenario
Istio version 1.5.0 on top of EKS 1.14.
Enabled components:
Base
Pilot
NOTE: Istio 1.5.0 deprecates Mixer and moves to Telemetry V2, which runs inside the Envoy proxy sidecar.
I want to use Istio to support some metrics out of the box.
Here's the flow
my computer -> Gateway -> Virtual Service A -> Virtual Service B
I made sure that:
K8s Service objects have label app
K8s Deployment objects and their pod templates have label app
I can run the flow just fine, which means the configurations are correct.
The problem is with telemetry.
istio_requests_total{connection_security_policy="unknown",destination_app="unknown",destination_canonical_revision="latest",destination_canonical_service="unknown",destination_principal="spiffe://cluster.local/ns/default/sa/default",destination_service="svcb.default.svc.cluster.local",destination_service_name="svcb.default.svc.cluster.local",destination_service_namespace="unknown",destination_version="unknown",destination_workload="unknown",destination_workload_namespace="unknown",grpc_response_status="0",instance="10.2.55.80:15090",job="envoy-stats",namespace="default",pod_name="svca-77969dc86b-964p5",reporter="source",request_protocol="grpc",response_code="200",response_flags="-",source_app="svca",source_canonical_revision="latest",source_canonical_service="svca",source_principal="spiffe://cluster.local/ns/default/sa/default",source_version="unknown",source_workload="svca",source_workload_namespace="default"}
Question
Why are most destination-* labels unknown?
The official Istio mesh dashboard typically filters metrics by reporter=destination. Why do all of my istio_requests_total series have reporter=source?

Oh right, after much digging, here's the answer.
Istio supports proxying all TCP traffic by default, but in order to provide additional capabilities, such as routing and rich metrics, the protocol must be determined. This can be done automatically or explicitly specified.
I hadn't specified the port name in my Service resource. Once I did, the problem was resolved.
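For illustration, a minimal sketch of what the fix looks like, assuming svcb serves gRPC on port 50051 (the port numbers are made up); the key is that the port name starts with a protocol prefix such as grpc or http so Istio can classify the traffic:

apiVersion: v1
kind: Service
metadata:
  name: svcb
  namespace: default
  labels:
    app: svcb
spec:
  selector:
    app: svcb
  ports:
  - name: grpc          # protocol prefix drives Istio's protocol selection
    port: 50051         # assumed port
    targetPort: 50051   # assumed container port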

Related

Envoy proxy usage without Istio

I am researching the use of the Istio service mesh and find that the Envoy proxy is a very good service proxy option to work with it. But over the last couple of years, the Envoy proxy seems to have grown into a cloud-native project in its own right. In our application, we need a service proxy to sit beside our app, and this service proxy should do JWT validation for all incoming requests.
Now I am wondering: should I just go with the Envoy proxy and set it up with JWT validation as explained here
https://www.scottguymer.co.uk/post/configuring-jwt-authentication-in-envoy/
Or should I set it up along with Istio?
Istio also does JWT claims-based validation at the ingress gateway level.
https://istio.io/latest/docs/tasks/security/authentication/jwt-route/
But my main question is: to keep the architecture light without adding too many layers (if we don't have to), should the Envoy proxy be used without Istio in this specific case?
I have read this online.
Service mesh like Istio acts as a control plane and uses Envoy in the data plane to do app-level processing (like app-level JWT validation per app-node) via the Sidecar pattern.
But I am wondering if I really need to use a service mesh if all I need is a service proxy beside each app instance.
If you're using Kubernetes, I recommend using Istio, as it will be much easier to manage all your proxies if you end up running many of them.
With Istio you can also select which namespaces or workloads get automatic sidecar injection, so you can decide which apps run with a sidecar and which don't.
This adds another layer, but it also adds more security to your environment.
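For example, here is a minimal sketch of opting a single namespace into automatic sidecar injection (the namespace name payments is hypothetical); workloads in unlabeled namespaces keep running without a sidecar:

apiVersion: v1
kind: Namespace
metadata:
  name: payments                 # hypothetical namespace that should get Envoy sidecars
  labels:
    istio-injection: enabled     # Istio's injection webhook watches for this label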

What is Knative's "mesh" gateway?

I see that for every Knative service, two VirtualService objects are created: ksvc-ingress, which has the knative-serving/knative-ingress-gateway and knative-serving/knative-local-gateway gateways configured, and ksvc-mesh, which has mesh as the gateway.
I can see the knative-serving/* gateways using kubectl, but I am unable to find a mesh gateway object in any namespace. I would like to understand whether mesh here denotes some special object or is an Istio keyword representing something else.
The mesh name is a keyword, as you guessed. That keyword represents the East-West traffic between Pods in the Kubernetes cluster, as managed by the Istio sidecar. You can think of those VirtualServices as being programmed onto each sidecar to do the routing and traffic splitting next to the request sender, rather than needing to route to a central service / gateway.
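As a hedged sketch (the hosts, destinations, and weights below are invented, not copied from a real Knative-generated object), a VirtualService bound to the mesh gateway is programmed onto the sidecars themselves rather than attached to any Gateway resource:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: ksvc-mesh-example
  namespace: default
spec:
  hosts:
  - myapp.default.svc.cluster.local        # in-cluster hostname callers use
  gateways:
  - mesh                                   # keyword: apply to every sidecar, not to a Gateway object
  http:
  - route:
    - destination:
        host: myapp-v1.default.svc.cluster.local
      weight: 90
    - destination:
        host: myapp-v2.default.svc.cluster.local
      weight: 10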
As you noticed, Knative uses Istio as a service mesh.
In the Istio context, mesh is not an object (or resource) like, for example, a Service. The Istio About page explains what a service mesh is:
A service mesh is a dedicated infrastructure layer that you can add to your applications. It allows you to transparently add capabilities like observability, traffic management, and security, without adding them to your own code. The term “service mesh” describes both the type of software you use to implement this pattern, and the security or network domain that is created when you use that software.
So mesh is a term that encapsulates all Istio objects (istio-proxy containers, Virtual Services, Ingress Gateways, etc.) that work together to allow for traffic management inside the cluster.
A Gateway is a load balancer operating at the edge of the mesh receiving incoming or outgoing HTTP/TCP connections.
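For contrast with the mesh keyword, here is a minimal sketch of such an edge Gateway (the name, selector, and host are made up):

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: example-gateway            # hypothetical
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway          # binds to the ingress gateway pods at the mesh edge
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "example.com"                # hypothetical external host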

Istio configuration on GKE

I have some basic questions about Istio. I installed Istio for my Tyk API gateway. Then I found that simply installing Istio will cause all traffic between the Tyk pods to be blocked. Is this the default behaviour for Istio? The Tyk gateway cannot communicate with the Tyk dashboard.
When I rebuild my deployment without Istio, everything works fine.
I have also read that Istio can be configured with virtual services to perform traffic routing. Is this what I need to do for every default installation of Istio? Meaning, if I don't create any virtual services, will Istio block all traffic by default?
Secondly, I understand a virtual service is created as a YAML file applied as a CRD. Regarding the host name defined in the virtual service rules: in a default Kubernetes cluster implementation on Google Cloud, how do I find out the host name of my application?
Lastly, if I install Tyk first and later install Istio, and I have created the necessary label in Tyk's namespace for the proxy to be injected, can I just perform a rolling upgrade of my Tyk pods to have Istio start the injection?
For example, I have these labels in my Tyk dashboard service. Do I use the value called "app" in my virtual service YAML?
labels:
app: dashboard-svc-tyk-pro
app.kubernetes.io/managed-by: Helm
chart: tyk-pro-0.8.1
heritage: Helm
release: tyk-pro
Sorry for all the basic questions!
For the question about the Tyk gateway not being able to communicate with the Tyk dashboard:
I think the problem is that your pod tries to connect to the database before the Istio sidecar is ready, and thus the connection can't be established.
Istio runs an init container that configures the pod's route table so all traffic is routed through the sidecar. So if the sidecar isn't running and the other pod tries to connect to the db, no connection can be established. Example case: an application running in a Kubernetes cron job does not connect to a database in the same Kubernetes cluster.
For the question on Virtual Services:
Each virtual service consists of a set of routing rules that are evaluated in order, letting Istio match each given request to the virtual service to a specific real destination within the mesh.
By default, Istio configures the Envoy proxies to passthrough requests to unknown services. However, you can’t use Istio features to control the traffic to destinations that aren’t registered in the mesh.
For the question on the hostname, refer to this documentation:
The hosts field lists the virtual service’s hosts - in other words, the user-addressable destination or destinations that these routing rules apply to. This is the address or addresses the client uses when sending requests to the service.
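As a hedged illustration (this assumes the dashboard Service is named dashboard-svc-tyk-pro and lives in a tyk namespace, which the question does not confirm), the hosts value is typically the Service's DNS name rather than the app label:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: tyk-dashboard
  namespace: tyk                   # assumed namespace
spec:
  hosts:
  - dashboard-svc-tyk-pro.tyk.svc.cluster.local    # Service DNS name, not the pod label
  http:
  - route:
    - destination:
        host: dashboard-svc-tyk-pro.tyk.svc.cluster.local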
For adding Istio on GKE to an existing cluster, please refer to this documentation:
If you want to update a cluster with the add-on, you may need to first resize your cluster to ensure that you have enough resources for Istio. As when creating a new cluster, we suggest at least a 4 node cluster with the 2 vCPU machine type. If you have an existing application on the cluster, you can find out how to migrate it so it's managed by Istio as mentioned in the Istio documentation.
You can uninstall the add-on following this document, which includes how to shift traffic away from the Istio ingress gateway. Please take a look at this doc for more details on installing and uninstalling Istio on GKE.
Also adding this document on installing Istio on GKE, which includes installing it on an existing cluster to quickly evaluate Istio.

What's the difference between Istio and ESP in GCP?

Both seem to do the same things. From what I've gathered, Istio does routing at the ingress level and ESP at the container level. I'm still learning Istio.
According to the Google Cloud documentation:
Extensible Service Proxy
The Extensible Service Proxy (ESP) is an Nginx-based high-performance, scalable proxy that runs in front of an OpenAPI or gRPC API backend and provides API management features such as authentication, monitoring, and logging. See About Endpoints and Endpoints: Architectural overview for more information.
Extensible Service Proxy V2 Beta
The Extensible Service Proxy V2 Beta (ESPv2 Beta) is an Envoy-based high-performance, scalable proxy that runs in front of an OpenAPI API backend and provides API management features such as authentication, monitoring, and logging. See About Endpoints and Endpoints: Architectural overview for more information.
ESPv2 Beta supports version 2 of the OpenAPI Specification. ESPv2 Beta does not currently support gRPC.
ESPv2 Beta is only supported for use for the Beta versions of Endpoints for Cloud Functions and for Cloud Run. ESPv2 Beta is not supported for Endpoints for App Engine, GKE, Compute Engine, or Kubernetes.
According to the Istio GitHub documentation:
Introduction
Istio is an open platform for providing a uniform way to integrate microservices, manage traffic flow across microservices, enforce policies and aggregate telemetry data. Istio's control plane provides an abstraction layer over the underlying cluster management platform, such as Kubernetes.
Istio is composed of these components:
Envoy - Sidecar proxies per microservice to handle ingress/egress traffic between services in the cluster and from a service to external services. The proxies form a secure microservice mesh providing a rich set of functions like discovery, rich layer-7 routing, circuit breakers, policy enforcement and telemetry recording/reporting functions.
Note: The service mesh is not an overlay network. It simplifies and enhances how microservices in an application talk to each other over the network provided by the underlying platform.
Mixer - Central component that is leveraged by the proxies and microservices to enforce policies such as authorization, rate limits, quotas, authentication, request tracing and telemetry collection.
Pilot - A component responsible for configuring the proxies at runtime.
Citadel - A centralized component responsible for certificate issuance and rotation.
Citadel Agent - A per-node component responsible for certificate issuance and rotation.
Galley - Central component for validating, ingesting, aggregating, transforming and distributing config within Istio.
Operator - The component provides user-friendly options to operate the Istio service mesh.
Istio currently supports Kubernetes and Consul-based environments. We plan support for additional platforms such as Cloud Foundry, and Mesos in the near future.
ESPv2 Beta is also based on the Envoy proxy, just like Istio. However, there are advanced features that Istio has that ESPv2 does not have yet, as it is still in beta. As for ESP v1, it is more like an NGINX ingress. All of these tools are able to do the routing tasks; however, each tool has different mechanisms under the hood and offers a different amount of configuration flexibility and complexity.
Hope it helps.

Are there two levels of load balancing when using Istio Destination Rules?

As far as I understood, Istio Destination Rules can define load balancing policies to reach a subset of a service, e.g. subsets based on different versions of the service. So the Destination Rules are the first level of load balancing.
The request will eventually reach a K8s Service, which is generally implemented by kube-proxy. Kube-proxy does simple load balancing across the pods in its backend. This is the second level of load balancing.
Is there a way to remove the second load balancer? For example, could we create a lot of service instances that offer the same service and can be load-balanced by Destination Rules, and then have only one pod per service instance, so that kube-proxy does not apply load balancing?
According to the Istio documentation:
Istio’s traffic routing rules let you easily control the flow of traffic and API calls between services. Istio simplifies configuration of service-level properties like circuit breakers, timeouts, and retries, and makes it easy to set up important tasks like A/B testing, canary rollouts, and staged rollouts with percentage-based traffic splits. It also provides out-of-box failure recovery features that help make your application more robust against failures of dependent services or the network.
Istio’s traffic management model relies on the Envoy proxies that are deployed along with your services. All traffic that your mesh services send and receive (data plane traffic) is proxied through Envoy, making it easy to direct and control traffic around your mesh without making any changes to your services.
If you’re interested in the details of how the features described in this guide work, you can find out more about Istio’s traffic management implementation in the architecture overview. The rest of this guide introduces Istio’s traffic management features.
This means that the Istio service mesh communicates via the Envoy proxy, which in turn relies on Kubernetes networking.
Consider an example where a VirtualService that is using the Istio ingress gateway load-balances its traffic to two different services based on labels. Then those services can have multiple pods.
Istio load balancing in this case works only at layer 7, which results in a route to a specific endpoint (one of the services), and relies on Kubernetes to handle connections and the rest, including round-robin Service load balancing (layer 4) in case of multiple pods.
The advantage of having a single Service with multiple pods is obviously easier configuration and management. With one pod per service, each service would need to be configured separately and would lose its ability to scale.
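As a hedged sketch of that first (Envoy, layer 7) level, a DestinationRule can declare subsets and pick the load-balancing algorithm Envoy uses for them, while kube-proxy's layer-4 behavior for the Service itself stays untouched (the host and subset names below are hypothetical):

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: reviews-destination        # hypothetical
spec:
  host: reviews.default.svc.cluster.local
  trafficPolicy:
    loadBalancer:
      simple: LEAST_CONN           # Envoy-level policy for the whole host
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
    trafficPolicy:
      loadBalancer:
        simple: ROUND_ROBIN        # per-subset override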
There is a great video on YouTube which partially covers this topic:
Life of a Packet through Istio by Matt Turner.
I highly recommend watching it, as it explains how Istio works at a fundamental level.