I read this very interesting article that shows how to use sidecarless Envoy:
https://events.istio.io/istiocon-2022/sessions/sidecarless-ebpf-envoy/
But it does not include any hands-on walkthrough of how to deploy sidecarless Envoy for Istio virtual services.
Any practical hello-world example showing how to deploy sidecarless Envoy for Istio would really help!
I believe Idit is referring to Cilium Service Mesh when she talks of sidecarless service mesh with eBPF. You can find how to deploy Cilium Service Mesh in the Cilium documentation.
Note that Cilium Service Mesh is an alternative to Istio Service Mesh, the latter relying on sidecar instances of Envoy. Cilium also supports an integration with Istio if you want to use Istio for the Service Mesh but Cilium as the underlying CNI.
Related
I'm interested in putting a vendor provided application running in an AWS EC2 Instance behind my Istio gateway. It sounds like the ideal scenario is to use a WorkloadEntry to define the endpoint and make it easy to flex should I ever get this into the cluster, etc.
In the documentation I've read, there is mention of using a sidecar in the VM to enable this. What I've failed to find is how to use a sidecar in a VM. There's lots of good material about sidecars in a pod, but I'm not sure what it takes to implement one on a VM or how I would even go about doing that. Maybe the integration needed for the sidecar would be too complex to implement in a third-party app? Maybe I can do this better without a sidecar?
How do I find details on VM Sidecars and getting them integrated into the mesh?
When do you decide between implementing this as a WorkloadEntry vs simply a MESH_EXTERNAL ServiceEntry?
If you want to integrate a VM into your Kubernetes Istio environment, you need to set up Istio on your VM:
https://istio.io/latest/docs/ops/deployment/vm-architecture/
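Once the VM runs the Istio agent and sidecar, a WorkloadEntry registers it with the mesh. Here's a minimal sketch; the name, address, labels, and service account below are placeholders to adapt to your setup:

apiVersion: networking.istio.io/v1beta1
kind: WorkloadEntry
metadata:
  name: vendor-app-vm            # hypothetical name
  namespace: default
spec:
  address: 10.0.0.5              # placeholder: the EC2 instance's private IP
  labels:
    app: vendor-app              # placeholder: label used to select this workload
  serviceAccount: vendor-app-sa  # placeholder: the identity the VM's sidecar runs as

As for WorkloadEntry vs. MESH_EXTERNAL ServiceEntry: a WorkloadEntry is for a VM that joins the mesh (i.e. runs a sidecar), while a MESH_EXTERNAL ServiceEntry fits a workload that stays outside the mesh and only needs to be reachable as an egress destination.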
I have some basic questions about Istio. I installed Istio for my Tyk API gateway. Then I found that simply installing Istio will cause all traffic between the Tyk pods to be blocked. Is this the default behaviour for Istio? The Tyk gateway cannot communicate with the Tyk dashboard.
When I rebuild my deployment without Istio, everything works fine.
I have also read that Istio can be configured with virtual services to perform traffic routing. Is this what I need to do for every default installation of Istio? Meaning, if I don't create any virtual services, will Istio block all traffic by default?
Secondly, I understand a virtual service is created as a YAML file applied as a CRD. The host name defined in the virtual service rules - in a default Kubernetes cluster implementation on Google Cloud, how do I find out the host name of my application?
Lastly, if I install Tyk first, then later install Istio, and I have created the necessary label in Tyk's namespace for the proxy to be injected, can I just perform a rolling upgrade of my Tyk pods to have Istio start the injection?
For example, I have these labels in my Tyk dashboard service. Do I use the value called "app" in my virtual service YAML?
labels:
app: dashboard-svc-tyk-pro
app.kubernetes.io/managed-by: Helm
chart: tyk-pro-0.8.1
heritage: Helm
release: tyk-pro
Sorry for all the basic questions!
Regarding the Tyk gateway not being able to communicate with the Tyk dashboard:
I think the problem is that your pod tries to connect to the database before the Istio sidecar is ready, so the connection can't be established.
Istio runs an init container that configures the pod's route table so that all traffic is routed through the sidecar. If the sidecar isn't running yet and the pod tries to connect to the database, no connection can be established. (A similar case: an application running in a Kubernetes cron job cannot connect to a database in the same Kubernetes cluster.)
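If that is indeed the cause, recent Istio versions (1.7 and later) can hold application containers until the sidecar is ready. A minimal sketch using an IstioOperator overlay; verify the field against the mesh config reference for your Istio version:

apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  meshConfig:
    defaultConfig:
      holdApplicationUntilProxyStarts: true   # app containers start only after the sidecar is ready

The same setting can also be applied per pod via the proxy.istio.io/config annotation if you don't want it mesh-wide.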
For the question on virtual services:
Each virtual service consists of a set of routing rules that are evaluated in order, letting Istio match each given request to the virtual service to a specific, real destination within the mesh.
By default, Istio configures the Envoy proxies to passthrough requests to unknown services. However, you can’t use Istio features to control the traffic to destinations that aren’t registered in the mesh.
For the question on the hostname, refer to this documentation:
The hosts field lists the virtual service’s hosts - in other words, the user-addressable destination or destinations that these routing rules apply to. This is the address or addresses the client uses when sending requests to the service.
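Concretely, the hosts field takes the Kubernetes Service DNS name, not the app label: in a standard cluster (GKE included), a Service is addressable as <service-name>.<namespace>.svc.cluster.local. A minimal sketch, assuming your Tyk dashboard Service is named dashboard-svc-tyk-pro in a namespace called tyk (adjust both to your actual names):

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: tyk-dashboard          # hypothetical name
  namespace: tyk               # assumption: your Tyk namespace
spec:
  hosts:
  - dashboard-svc-tyk-pro.tyk.svc.cluster.local   # the Service DNS name, not the app label
  http:
  - route:
    - destination:
        host: dashboard-svc-tyk-pro.tyk.svc.cluster.local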
For adding Istio on GKE to an existing cluster, please refer to this documentation.
If you want to update a cluster with the add-on, you may need to first resize your cluster to ensure that you have enough resources for Istio. As when creating a new cluster, we suggest at least a 4-node cluster with the 2 vCPU machine type. If you have an existing application on the cluster, you can find out how to migrate it so it's managed by Istio as mentioned in the Istio documentation.
You can uninstall the add-on by following this document, which includes shifting traffic away from the Istio ingress gateway. Please take a look at this doc for more details on installing and uninstalling Istio on GKE.
I'm also adding this document for installing Istio on GKE, which includes installing it on an existing cluster to quickly evaluate Istio.
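On your last question: sidecar injection is driven by a namespace label, and Istio's webhook only mutates pods created after the label is in place. A minimal sketch, assuming your Tyk resources live in a namespace called tyk:

apiVersion: v1
kind: Namespace
metadata:
  name: tyk                    # assumption: your Tyk namespace
  labels:
    istio-injection: enabled   # tells Istio's webhook to inject sidecars into new pods here

Existing pods are not changed in place, so yes: once the label is set, a rolling restart of the Tyk deployments is enough for the recreated pods to come up with the sidecar.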
Both seem to do the same things. From what I've gathered, Istio does routing at the ingress level and ESP at the container level. I'm still learning Istio.
According to Google cloud documentation:
Extensible Service Proxy
The Extensible Service Proxy (ESP) is an Nginx-based high-performance, scalable proxy that runs in front of an OpenAPI or gRPC API backend and provides API management features such as authentication, monitoring, and logging. See About Endpoints and Endpoints: Architectural overview for more information.
Extensible Service Proxy V2 Beta
The Extensible Service Proxy V2 Beta (ESPv2 Beta) is an Envoy-based high-performance, scalable proxy that runs in front of an OpenAPI API backend and provides API management features such as authentication, monitoring, and logging. See About Endpoints and Endpoints: Architectural overview for more information.
ESPv2 Beta supports version 2 of the OpenAPI Specification. ESPv2 Beta does not currently support gRPC.
ESPv2 Beta is only supported for use for the Beta versions of Endpoints for Cloud Functions and for Cloud Run. ESPv2 Beta is not supported for Endpoints for App Engine, GKE, Compute Engine, or Kubernetes.
According to istio github documentation:
Introduction
Istio is an open platform for providing a uniform way to integrate microservices, manage traffic flow across microservices, enforce policies and aggregate telemetry data. Istio's control plane provides an abstraction layer over the underlying cluster management platform, such as Kubernetes.
Istio is composed of these components:
Envoy - Sidecar proxies per microservice to handle ingress/egress traffic between services in the cluster and from a service to external services. The proxies form a secure microservice mesh providing a rich set of functions like discovery, rich layer-7 routing, circuit breakers, policy enforcement and telemetry recording/reporting functions.
Note: The service mesh is not an overlay network. It simplifies and enhances how microservices in an application talk to each other over the network provided by the underlying platform.
Mixer - Central component that is leveraged by the proxies and microservices to enforce policies such as authorization, rate limits, quotas, authentication, request tracing and telemetry collection.
Pilot - A component responsible for configuring the proxies at runtime.
Citadel - A centralized component responsible for certificate issuance and rotation.
Citadel Agent - A per-node component responsible for certificate issuance and rotation.
Galley - Central component for validating, ingesting, aggregating, transforming and distributing config within Istio.
Operator - The component provides user-friendly options to operate the Istio service mesh.
Istio currently supports Kubernetes and Consul-based environments. We plan support for additional platforms such as Cloud Foundry, and Mesos in the near future.
The ESPv2 Beta is also based on the Envoy proxy, just like Istio. However, Istio has advanced features that ESPv2 does not have yet, since ESPv2 is still in beta. ESP v1, for its part, is more like an NGINX ingress. All of these tools can handle the routing tasks, but each uses different mechanisms under the hood and offers a different degree of configuration flexibility and complexity.
Hope it helps.
Scenario
Istio version 1.5.0 on top of EKS 1.14.
Enabled components:
Base
Pilot
NOTE: Istio 1.5.0 deprecates Mixer and moves to Telemetry v2, which happens inside the Envoy proxy sidecar.
I want to use Istio to support some metrics out of the box.
Here's the flow
my computer -> Gateway -> Virtual Service A -> Virtual Service B
I made sure that:
K8s Service objects have label app
K8s Deployment objects and their pod templates have label app
I can run the flow just fine, which means the configurations are correct.
The problem is with telemetry.
istio_requests_total{connection_security_policy="unknown",destination_app="unknown",destination_canonical_revision="latest",destination_canonical_service="unknown",destination_principal="spiffe://cluster.local/ns/default/sa/default",destination_service="svcb.default.svc.cluster.local",destination_service_name="svcb.default.svc.cluster.local",destination_service_namespace="unknown",destination_version="unknown",destination_workload="unknown",destination_workload_namespace="unknown",grpc_response_status="0",instance="10.2.55.80:15090",job="envoy-stats",namespace="default",pod_name="svca-77969dc86b-964p5",reporter="source",request_protocol="grpc",response_code="200",response_flags="-",source_app="svca",source_canonical_revision="latest",source_canonical_service="svca",source_principal="spiffe://cluster.local/ns/default/sa/default",source_version="unknown",source_workload="svca",source_workload_namespace="default"}
Question
Why are most destination-* labels unknown?
The official Istio mesh dashboard typically filters metrics by reporter=destination. Why do all of my istio_requests_total series have reporter=source?
Oh right, after much digging, here's the answer.
Istio supports proxying all TCP traffic by default, but in order to provide additional capabilities, such as routing and rich metrics, the protocol must be determined. This can be done automatically or explicitly specified
I didn't specify the port name in my Service resource. Once I did, the problem was resolved.
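For reference, a minimal sketch of the fix (the service name, labels, and ports are placeholders): Istio's automatic protocol selection keys off the port name, which must be the protocol or a protocol-prefixed name such as grpc, grpc-api, or http.

apiVersion: v1
kind: Service
metadata:
  name: svcb                   # placeholder service name
  labels:
    app: svcb
spec:
  selector:
    app: svcb
  ports:
  - name: grpc                 # protocol-prefixed name lets Istio detect gRPC
    port: 8080
    targetPort: 8080

With the protocol known, the sidecars can exchange workload metadata, and the destination-* labels get populated instead of reporting unknown.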
I'd like to use istio in an environment which hasn't adapted to kubernetes or consul yet.
Unfortunately, this is not possible out of the box without Kubernetes or Consul. There are only two options in the documentation: Kubernetes, or Nomad & Consul. Istio requires a platform to register services and store configuration, so you need Kubernetes or Nomad & Consul to host all the Istio services. However, you can take Istio's code and adapt it to your purpose; Istio is an open-source product, and that allows you to make any modifications.