What's the difference between Istio and ESP in GCP? - google-cloud-platform

Both seem to do the same things. From what I've gathered, Istio does routing at the ingress level and ESP at the container level. I'm still learning Istio.

According to the Google Cloud documentation:
Extensible Service Proxy
The Extensible Service Proxy (ESP) is an Nginx-based high-performance, scalable proxy that runs in front of an OpenAPI or gRPC API backend and provides API management features such as authentication, monitoring, and logging. See About Endpoints and Endpoints: Architectural overview for more information.
Extensible Service Proxy V2 Beta
The Extensible Service Proxy V2 Beta (ESPv2 Beta) is an Envoy-based high-performance, scalable proxy that runs in front of an OpenAPI API backend and provides API management features such as authentication, monitoring, and logging. See About Endpoints and Endpoints: Architectural overview for more information.
ESPv2 Beta supports version 2 of the OpenAPI Specification. ESPv2 Beta does not currently support gRPC.
ESPv2 Beta is only supported for use for the Beta versions of Endpoints for Cloud Functions and for Cloud Run. ESPv2 Beta is not supported for Endpoints for App Engine, GKE, Compute Engine, or Kubernetes.
According to the Istio GitHub documentation:
Introduction
Istio is an open platform for providing a uniform way to integrate microservices, manage traffic flow across microservices, enforce policies and aggregate telemetry data. Istio's control plane provides an abstraction layer over the underlying cluster management platform, such as Kubernetes.
Istio is composed of these components:
Envoy - Sidecar proxies per microservice to handle ingress/egress traffic between services in the cluster and from a service to external services. The proxies form a secure microservice mesh providing a rich set of functions like discovery, rich layer-7 routing, circuit breakers, policy enforcement and telemetry recording/reporting functions.
Note: The service mesh is not an overlay network. It simplifies and enhances how microservices in an application talk to each other over the network provided by the underlying platform.
Mixer - Central component that is leveraged by the proxies and microservices to enforce policies such as authorization, rate limits, quotas, authentication, request tracing and telemetry collection.
Pilot - A component responsible for configuring the proxies at runtime.
Citadel - A centralized component responsible for certificate issuance and rotation.
Citadel Agent - A per-node component responsible for certificate issuance and rotation.
Galley - Central component for validating, ingesting, aggregating, transforming and distributing config within Istio.
Operator - The component provides user-friendly options to operate the Istio service mesh.
Istio currently supports Kubernetes and Consul-based environments. We plan support for additional platforms such as Cloud Foundry and Mesos in the near future.
ESPv2 Beta is also based on the Envoy proxy, just like Istio. However, Istio has advanced mesh features (traffic management, mutual TLS, mesh-wide telemetry) that ESPv2 does not have yet, as it is still in beta. ESPv1, for its part, is Nginx-based and behaves more like an Nginx ingress. All of these tools can do the routing tasks, but each works differently under the hood and offers a different amount of configuration flexibility and complexity.
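To make the difference concrete, here is a minimal sketch, assuming a GKE Deployment, of how ESP is typically run as a sidecar container in front of an API backend (the Endpoints service name and backend image are hypothetical):

containers:
  - name: esp
    image: gcr.io/endpoints-release/endpoints-runtime:1
    args:
      - "--http_port=8081"           # port ESP listens on
      - "--backend=127.0.0.1:8080"   # the API container in the same pod
      - "--service=my-api.endpoints.my-project.cloud.goog"   # hypothetical Endpoints service name
      - "--rollout_strategy=managed" # always pick up the latest service config
    ports:
      - containerPort: 8081
  - name: my-api
    image: gcr.io/my-project/my-api:1.0   # hypothetical backend image
    ports:
      - containerPort: 8080

Istio, by contrast, injects its Envoy sidecar automatically and configures it from the control plane rather than through per-container flags.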
Hope it helps.

Related

Envoy proxy usage without Istio

I am researching the use of the Istio service mesh and find the Envoy proxy to be a very good service proxy to work with it. But over the last couple of years, the Envoy proxy seems to have grown into a cloud-native project in its own right. In our application, we need a service proxy to sit beside our app, and this service proxy should do JWT validation for all incoming requests.
Now I am wondering: should I just go with the Envoy proxy and set up JWT validation as explained here:
https://www.scottguymer.co.uk/post/configuring-jwt-authentication-in-envoy/
Or should I set it up along with Istio?
Istio also does JWT claims-based validation at the ingress gateway level:
https://istio.io/latest/docs/tasks/security/authentication/jwt-route/
But my main question is: to keep the architecture light and avoid adding layers we don't need, should the Envoy proxy be used without Istio in this specific case?
I have read this online.
Service mesh like Istio acts as a control plane and uses Envoy in the data plane to do app-level processing (like app-level JWT validation per app-node) via the Sidecar pattern.
But I am wondering if I really need a service mesh when all I need is a service proxy beside each app instance.
If you're using Kubernetes, I recommend using Istio, as it makes it much easier to manage all your proxies once you are running more than a few.
With Istio you can also select which namespaces or workloads get automatic sidecar injection, so you can decide which apps will run with a sidecar and which won't.
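As a minimal sketch (the namespace name is hypothetical), enabling automatic injection is just a matter of labeling the namespace:

apiVersion: v1
kind: Namespace
metadata:
  name: payments               # hypothetical namespace
  labels:
    istio-injection: enabled   # Istio injects an Envoy sidecar into every pod created here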
This is adding another layer, but it's also adding more security to your environment.
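For the JWT use case specifically, here is a minimal sketch of validating tokens at the Istio ingress gateway with a RequestAuthentication resource (the issuer and JWKS URL are hypothetical):

apiVersion: security.istio.io/v1beta1
kind: RequestAuthentication
metadata:
  name: ingress-jwt
  namespace: istio-system
spec:
  selector:
    matchLabels:
      istio: ingressgateway    # applies to the default ingress gateway pods
  jwtRules:
    - issuer: "https://issuer.example.com"                          # hypothetical issuer
      jwksUri: "https://issuer.example.com/.well-known/jwks.json"   # hypothetical JWKS endpoint

Note that this alone only rejects requests carrying an invalid token; to reject requests with no token at all, pair it with an AuthorizationPolicy that requires a valid principal.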

Spring Boot microservice (API Gateway) on AWS

I am trying to deploy microservices built in Spring Boot on AWS, but I don't know which AWS service is suitable for each particular Spring microservice (Cloud Config, Service Discovery, API Gateway, and Vault).
I built an API gateway service in Spring Boot, but when it comes to deployment on AWS I got confused by the AWS API Gateway.
Do we need both of them to work together, or can we just set up the Spring Boot API gateway on an EC2 instance?
And it's a bit out of context, but do we need a separate EC2 instance for each small service like 'Service Discovery', 'Config Service', etc.?
thanks
API Gateway is just a kind of routing in front of your application, no matter whether it is hosted on a serverless platform or on an EC2 instance.
You can try to deploy your Spring Boot app to AWS Lambda; that way you don't have to think about configuring the server environment. Be aware of the application's cold start in this case; there is plenty of material online on how to mitigate it.
An API gateway is like a facade in front of your microservices for communication with external services. There are several ways to use/implement an API gateway depending on requirements, such as request routing, API composition (calling multiple services and combining the responses), authentication, caching, etc.
AWS API Gateway is good if you need the request routing feature, but it can't perform API composition. In that case you need to implement your own custom API gateway using technologies such as Spring Cloud Gateway and reactive programming.
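For illustration, a minimal sketch of Spring Cloud Gateway routes in application.yml (the service hosts and paths are hypothetical):

spring:
  cloud:
    gateway:
      routes:
        - id: orders-route
          uri: http://orders-service:8080   # hypothetical downstream service
          predicates:
            - Path=/api/orders/**           # forward matching requests to the orders service
        - id: users-route
          uri: http://users-service:8080    # hypothetical downstream service
          predicates:
            - Path=/api/users/**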
GraphQL is another popular technology for implementing an API gateway.
P.S. - Service discovery is another concept. In real life you will use Kubernetes or a service mesh, which will internally do service registry and discovery.

Istio metrics destination unknown

Scenario
Istio 1.5.0 on top of EKS 1.14.
Enabled components:
Base
Pilot
NOTE: Istio 1.5.0 deprecates Mixer and moves to Telemetry v2, which happens inside the Envoy proxy sidecar.
I want to use Istio to support some metrics out of the box.
Here's the flow
my computer -> Gateway -> Virtual Service A -> Virtual Service B
I made sure that:
K8s Service objects have label app
K8s Deployment objects and their pod templates have label app
I can run the flow just fine, which means the configurations are correct.
The problem is with telemetry.
istio_requests_total{connection_security_policy="unknown",destination_app="unknown",destination_canonical_revision="latest",destination_canonical_service="unknown",destination_principal="spiffe://cluster.local/ns/default/sa/default",destination_service="svcb.default.svc.cluster.local",destination_service_name="svcb.default.svc.cluster.local",destination_service_namespace="unknown",destination_version="unknown",destination_workload="unknown",destination_workload_namespace="unknown",grpc_response_status="0",instance="10.2.55.80:15090",job="envoy-stats",namespace="default",pod_name="svca-77969dc86b-964p5",reporter="source",request_protocol="grpc",response_code="200",response_flags="-",source_app="svca",source_canonical_revision="latest",source_canonical_service="svca",source_principal="spiffe://cluster.local/ns/default/sa/default",source_version="unknown",source_workload="svca",source_workload_namespace="default"}
Question
Why are most destination-* labels unknown?
The official Istio mesh dashboards typically filter metrics by reporter=destination. Why do all of my istio_requests_total series have reporter=source?
Oh right, after much digging, here's the answer.
Istio supports proxying all TCP traffic by default, but in order to provide additional capabilities, such as routing and rich metrics, the protocol must be determined. This can be done automatically or explicitly specified
I didn't specify the port name in my Service resource. Once I did that, the problem was resolved.
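For reference, a minimal sketch of the fix, using the service name from the metric above (the port number is illustrative). Istio infers the protocol from the port name prefix (http, grpc, tcp, ...):

apiVersion: v1
kind: Service
metadata:
  name: svcb
  namespace: default
spec:
  selector:
    app: svcb
  ports:
    - name: grpc       # the "grpc" prefix tells Istio which protocol this port speaks
      port: 8080       # illustrative port number
      targetPort: 8080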

Are there two levels of load balancing when using Istio Destination Rules?

As far as I understand, Istio Destination Rules can define load-balancing policies to reach a subset of a service, e.g. a subset based on different versions of the service. So the Destination Rules are the first level of load balancing.
The request will eventually reach a K8s Service, which is generally implemented by kube-proxy. Kube-proxy does simple load balancing across the pods behind it. Here is the second level of load balancing.
Is there a way to remove the second load balancer? For example, could we create many Service instances that offer the same service, let Destination Rules load-balance across them, and give each Service instance only one pod, so that kube-proxy does not apply any load balancing?
According to istio documentation:
Istio’s traffic routing rules let you easily control the flow of traffic and API calls between services. Istio simplifies configuration of service-level properties like circuit breakers, timeouts, and retries, and makes it easy to set up important tasks like A/B testing, canary rollouts, and staged rollouts with percentage-based traffic splits. It also provides out-of-box failure recovery features that help make your application more robust against failures of dependent services or the network.
Istio’s traffic management model relies on the Envoy proxies that are deployed along with your services. All traffic that your mesh services send and receive (data plane traffic) is proxied through Envoy, making it easy to direct and control traffic around your mesh without making any changes to your services.
If you’re interested in the details of how the features described in this guide work, you can find out more about Istio’s traffic management implementation in the architecture overview. The rest of this guide introduces Istio’s traffic management features.
This means that the Istio service mesh communicates via the Envoy proxy, which in turn relies on Kubernetes networking.
Take an example where a VirtualService attached to the Istio ingress gateway load-balances its traffic to two different services based on labels. Each of those services can have multiple pods.
Istio load balancing in this case works only at layer 7: it routes to a specific endpoint (one of the services) and relies on Kubernetes to handle the connections and everything else, including the Service's round-robin load balancing (layer 4) when there are multiple pods.
The advantage of having a single service with multiple pods is obviously easier configuration and management. With one pod per service, each service would need to be reconfigured separately and would lose all of its ability to scale.
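As a minimal sketch of the example above (the hosts, gateway name, and weights are hypothetical), the VirtualService makes the layer-7 routing decision, and each Kubernetes Service then balances across its own pods:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: my-app
spec:
  hosts:
    - "my-app.example.com"     # hypothetical external host
  gateways:
    - my-gateway               # hypothetical Istio ingress gateway
  http:
    - route:
        - destination:
            host: service-v1   # layer-7 decision made by Envoy
          weight: 80
        - destination:
            host: service-v2   # each Service still balances across its pods (layer 4)
          weight: 20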
There is a great video on YouTube that partially covers this topic:
Life of a packet through Istio by Matt Turner.
I highly recommend watching as it explains how istio works on a fundamental level.

Kubernetes DNS/service/kubeproxy/configMap vs Netflix ribbon/eureka/cloud config vs AWS ALB/S3

I am new to Netflix cloud concepts, AWS, and Kubernetes, and I am trying to map the concepts of these technologies onto each other.
Load Balancing - What components does Ribbon map to in AWS (ALB? although that is server-side load balancing) and Kubernetes (Service with kube-dns?)
Service Registry - What components does Eureka map to in AWS (ALB?) and Kubernetes (etcd/kube-proxy?)
Configuration Management - What components does Cloud Config Server map to in AWS (S3?) and Kubernetes (ConfigMap?)
I understand that Netflix Ribbon is an IPC library and there is no such thing in Kubernetes.
You might be interested in gRPC, which is a CNCF project.
Kubernetes has support for configurable private DNS zones (often called stub domains) and external upstream DNS nameservers.
The DNS pod is exposed as a Kubernetes Service with a static IP.
The kubectl command-line tool provides a command-line interface to create a ConfigMap in Kubernetes.
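As a minimal sketch (the private zone and nameserver IPs are hypothetical), stub domains and upstream nameservers are configured through the kube-dns ConfigMap:

apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-dns
  namespace: kube-system
data:
  # Queries for the private zone go to a private DNS server (hypothetical IP).
  stubDomains: |
    {"internal.example.com": ["10.0.0.10"]}
  # Everything else is forwarded to these upstream resolvers.
  upstreamNameservers: |
    ["8.8.8.8", "8.8.4.4"]

(Clusters running CoreDNS configure the same thing through the Corefile in the coredns ConfigMap instead.)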
For integrating Netflix OSS tools with Kubernetes, take a look at kubeflix. Its components are made for integrating the Netflix Open Source Platform tools with Kubernetes and provide Kubernetes instance discovery for Turbine and Ribbon, as well as configuration and image management for the Turbine Server and the Hystrix Dashboard.