In Istio, we define the circuit breaker configuration for a target service using a destination rule. This configuration is then applied to all clients calling the target service.
For example, suppose I set up a destination rule for serviceA with http2MaxRequests = 100:
spec:
  host: serviceA
  trafficPolicy:
    connectionPool:
      http:
        http2MaxRequests: 100
If serviceB, serviceC, and serviceD are calling serviceA, then this rule applies to all three.
Is there a way to configure the circuit breaker separately for serviceB, serviceC, and serviceD?
Thanks
With the Istio circuit breaker it is currently (as of Istio 1.7) not possible to configure the circuit breaker per calling service for a single target service. This is because the circuit breaker works on the target service's connection pool.
Alternatively, you could split serviceA into three separate services that work individually with services B, C, and D:
Service B > Service A1
Service C > Service A2
Service D > Service A3
This approach obviously would require more resources and could introduce other issues.
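A minimal sketch of how that could look, assuming the split services are named serviceA1, serviceA2, and serviceA3 (hypothetical names), each getting its own destination rule and its own limits:
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: servicea1-cb        # hypothetical name; effectively applies only to calls from service B
spec:
  host: serviceA1
  trafficPolicy:
    connectionPool:
      http:
        http2MaxRequests: 100   # per-caller limit; repeat with different values for serviceA2 and serviceA3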
More useful information about the circuit breaker that is not included in the Istio documentation can be found in the Envoy documentation, since Istio uses Envoy's circuit breaker.
Related
I can communicate with another service in the same namespace via:
curl http://myservice1:8080/actuator/info
inside the pod.
The application is not configured with TLS; I am curious whether I can reach that pod via a virtual service so that I can utilize this Istio feature:
curl https://myservice1:8080/actuator/info
We have an Istio VirtualService and Gateway in place. External access to the pod is managed by them and is working properly. We just want to reach another pod via HTTPS, if possible, without having to reconfigure the application.
How to communicate securely with a k8s service via Istio?
Answering the question in the title: there are many possibilities, but you should begin with Understanding TLS Configuration:
One of Istio’s most important features is the ability to lock down and secure network traffic to, from, and within the mesh. However, configuring TLS settings can be confusing and a common source of misconfiguration. This document attempts to explain the various connections involved when sending requests in Istio and how their associated TLS settings are configured. Refer to TLS configuration mistakes for a summary of some of the most common TLS configuration problems.
There are many different ways to secure your connection. It all depends on what exactly you need and what you set up.
We have an Istio VirtualService and Gateway in place; external access to the pod is managed by them and working properly. We just want to reach another pod via HTTPS if possible without having to reconfigure the application.
As for the VirtualService and Gateway, you will find an example configuration in this article. There are guides for a single host and for multiple hosts.
We just want to reach another pod via HTTPS if possible without having to reconfigure the application.
Here you will most likely be able to apply the outbound configuration:
While the inbound side configures what type of traffic to expect and how to process it, the outbound configuration controls what type of traffic the gateway will send. This is configured by the TLS settings in a DestinationRule, just like external outbound traffic from sidecars, or auto mTLS by default.
The only difference is that you should be careful to consider the Gateway settings when configuring this. For example, if the Gateway is configured with TLS PASSTHROUGH while the DestinationRule configures TLS origination, you will end up with double encryption. This works, but is often not the desired behavior.
A VirtualService bound to the gateway needs care as well to ensure it is consistent with the Gateway definition.
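For the in-mesh case, here is a minimal sketch (the resource name is an assumption) of a DestinationRule that makes the client-side sidecar originate Istio mutual TLS towards myservice1, so the application itself can keep calling plain http://myservice1:8080 while traffic between the sidecars is encrypted:
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: myservice1-mtls     # assumed name
spec:
  host: myservice1
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL    # sidecars encrypt the traffic; the application stays plain HTTP
This assumes the target workload also has a sidecar and accepts mutual TLS (the default with auto mTLS).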
When I read the docs for the circuit breaker, I see a lot of references to ejecting hosts, etc. This is cool, but I'd like to eject by path. Is this possible?
For example:
https://example.com/good/* always responds quickly with 200s etc so we leave it be. But
https://example.com/bad/* is responding with 500s or timing out so we want to somehow block calls to it.
Destination rules seem to be the only way to configure this and they seem to be a host-level only thing?
Thanks in advance.
You can split traffic via a VirtualService using a match statement (and route this traffic to different services):
http:
- match:
  - uri:
      prefix: /reviews
  route:
  - destination:
      host: reviews
After that you can use different destination rules for those services (with the proper connection pool and circuit breaker settings).
Along with virtual services, destination rules are a key part of Istio’s traffic routing functionality. You can think of virtual services as how you route your traffic to a given destination, and then you use destination rules to configure what happens to traffic for that destination. Destination rules are applied after virtual service routing rules are evaluated, so they apply to the traffic’s “real” destination.
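As a hedged sketch, the destination rule for the service behind the failing path could then carry its own connection pool and outlier detection settings (the resource name and values here are only illustrative):
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: reviews-cb          # assumed name, matching the VirtualService example above
spec:
  host: reviews
  trafficPolicy:
    connectionPool:
      http:
        http2MaxRequests: 100
    outlierDetection:
      consecutive5xxErrors: 5   # eject a host after 5 consecutive 5xx responses
      interval: 30s
      baseEjectionTime: 60s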
Alternatively, you can use the same service with a match statement and use subsets in order to route the traffic to different subsets of the same service. From that point it is possible to create different traffic policies for the different subsets of the same service.
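A minimal sketch of the subset variant, assuming the pods behind the two paths carry different labels (the host, label key, and values are assumptions):
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: example-subsets     # assumed name
spec:
  host: example
  subsets:
  - name: good
    labels:
      path-group: good      # assumed pod label
  - name: bad
    labels:
      path-group: bad       # assumed pod label
    trafficPolicy:          # circuit breaker settings only for this subset
      outlierDetection:
        consecutive5xxErrors: 3
        interval: 10s
        baseEjectionTime: 120s
The VirtualService match blocks would then point each prefix at the corresponding subset via destination.subset.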
As far as I understand, Istio destination rules can define load-balancing policies to reach a subset of a service, e.g. a subset based on different versions of the service. So the destination rules are the first level of load balancing.
The request will eventually reach a K8s service, which is generally implemented by kube-proxy. Kube-proxy does simple load balancing over the pods in its backend. This is the second level of load balancing.
Is there a way to remove the second load balancer? For example, could we create many service instances that offer the same service and can be load-balanced by destination rules, and then have only one pod per service instance, so that kube-proxy does not apply load balancing?
According to the Istio documentation:
Istio’s traffic routing rules let you easily control the flow of traffic and API calls between services. Istio simplifies configuration of service-level properties like circuit breakers, timeouts, and retries, and makes it easy to set up important tasks like A/B testing, canary rollouts, and staged rollouts with percentage-based traffic splits. It also provides out-of-box failure recovery features that help make your application more robust against failures of dependent services or the network.
Istio’s traffic management model relies on the Envoy proxies that are deployed along with your services. All traffic that your mesh services send and receive (data plane traffic) is proxied through Envoy, making it easy to direct and control traffic around your mesh without making any changes to your services.
If you’re interested in the details of how the features described in this guide work, you can find out more about Istio’s traffic management implementation in the architecture overview. The rest of this guide introduces Istio’s traffic management features.
This means that the Istio service mesh communicates via the Envoy proxy, which in turn relies on Kubernetes networking.
Consider an example where a VirtualService that is bound to the Istio ingress gateway load-balances its traffic to two different services based on labels. Each of those services can have multiple pods.
Istio load balancing in this case works only at layer 7, which results in a route to a specific endpoint (one of the services), and relies on Kubernetes to handle connections and everything else, including the service's round-robin load balancing (layer 4) when there are multiple pods.
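A hedged sketch of such a setup (the hosts, gateway, and service names are assumptions): a VirtualService bound to the ingress gateway splits traffic between two Kubernetes services, and each of those services can still front several pods:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: example-split       # assumed name
spec:
  hosts:
  - example.com             # assumed external host
  gateways:
  - example-gateway         # assumed gateway name
  http:
  - route:
    - destination:
        host: service-v1    # Kubernetes service, possibly backed by many pods
      weight: 80
    - destination:
        host: service-v2
      weight: 20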
The advantage of having a single service with multiple pods is obviously easier configuration and management. With one pod per service, each service would need to be configured separately and would lose its ability to scale.
There is a great video on YouTube that partially covers this topic:
Life of a packet through Istio by Matt Turner.
I highly recommend watching it, as it explains how Istio works at a fundamental level.
I have three nodes, the master and two workers, inside my cluster. I want to know if it is possible with Istio to redirect all the traffic coming from one worker node directly to the other worker node (but not the Kubernetes traffic).
Thanks for the help
Warok
Edit
Apparently, it is possible to route the traffic of one specific user to a specific version (https://istio.io/docs/tasks/traffic-management/request-routing/#route-based-on-user-identity), but the question is still open.
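For reference, that task routes by matching a request header in a VirtualService, roughly like this (the reviews service, subsets, and end-user header come from the Bookinfo example in the linked docs):
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - match:
    - headers:
        end-user:
          exact: jason      # requests from this user go to subset v2
    route:
    - destination:
        host: reviews
        subset: v2
  - route:
    - destination:
        host: reviews
        subset: v1          # everyone else goes to subset v1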
Edit 2
Assuming my nodes are named node1 and node2, is the following YAML file right?
apiVersion: networking.istio.io/v2alpha3
kind: VirtualService
metadata:
  name: node1
  ...
spec:
  hosts:
  - nod1
  tcp:
  - match:
    - port: 27017 #for now, i will just specify this port
    route:
    - destination:
        host: node2
I want to know if it is possible with Istio to redirect all the traffic coming from one worker node directly to the other worker node (but not the Kubernetes traffic).
Quick answer: no.
Istio works as a sidecar container that is injected into a pod. You can read more at What is Istio?
Istio lets you connect, secure, control, and observe services.
...
It is also a platform, including APIs that let it integrate into any logging platform, or telemetry or policy system. Istio’s diverse feature set lets you successfully, and efficiently, run a distributed microservice architecture, and provides a uniform way to secure, connect, and monitor microservices.
...
You add Istio support to services by deploying a special sidecar proxy throughout your environment that intercepts all network communication between microservices
I also recommend reading What is Istio? The Kubernetes service mesh explained.
It is also important to know why you would want to redirect traffic from one node to the other.
Without knowing that, I cannot advise any solution.
Using the K8s ServiceAccount to authenticate a service in Istio is quite interesting. Does it mean that we have to enable mTLS in order to use this feature?
Actually, Istio represents its security concepts with two separate mechanisms for identification and traffic management: authentication and authorization. As with any security infrastructure, the Istio mesh needs an identity presented to the target service so that the underlying authentication policies (distributed by Pilot) can decide how transport and service-to-service authentication should be established. Authorization policy can then be used to enable a role-based access control (RBAC) mechanism, with namespace-wide, service-level, and method-level access control enforced through Mixer.
Mutual TLS authentication is the secure way to do service-to-service communication through the Envoy proxy for workloads where the Envoy sidecar is enabled; Citadel maintains the TLS certificate inventory and enforces the policy rules.
You can enable mTLS authentication globally, whether the sidecars are injected manually or automatically.
On the client side, the corresponding TLS settings are propagated within destination rules, so you can choose the trafficPolicy TLS mode: enabled (mode: ISTIO_MUTUAL) or disabled:
trafficPolicy:
  tls:
    mode: DISABLE
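For completeness, here is a minimal sketch of the enabled side (resource names and the namespace are assumptions), using the authentication Policy API that matches the Pilot/Citadel/Mixer components referenced above: a Policy that requires mutual TLS for the workloads in a namespace, together with the matching client-side destination rule:
apiVersion: authentication.istio.io/v1alpha1
kind: Policy
metadata:
  name: default             # assumed name
  namespace: foo            # assumed namespace
spec:
  peers:
  - mtls: {}
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: default             # assumed name
  namespace: foo
spec:
  host: "*.foo.svc.cluster.local"
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL    # the enabled counterpart of the DISABLE example above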
I encourage you to look at related information about the Istio security model, with relevant practical examples, here.