Istio CircuitBreaker by URI path instead of host?

When I read the docs for circuit breaking, I see a lot of references to ejecting hosts, etc. This is cool, but I'd like to eject by path. Is this possible?
For example:
https://example.com/good/* always responds quickly with 200s etc so we leave it be. But
https://example.com/bad/* is responding with 500s or timing out so we want to somehow block calls to it.
Destination rules seem to be the only way to configure this and they seem to be a host-level only thing?
Thanks in advance.

You can split traffic via a VirtualService using a match statement (and route this traffic to different services):
http:
- match:
  - uri:
      prefix: /reviews
  route:
  - destination:
      host: reviews
After that you can use different DestinationRules for those services (with proper connection pool and circuit breaker settings).
Along with virtual services, destination rules are a key part of Istio’s traffic routing functionality. You can think of virtual services as how you route your traffic to a given destination, and then you use destination rules to configure what happens to traffic for that destination. Destination rules are applied after virtual service routing rules are evaluated, so they apply to the traffic’s “real” destination.
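For example, a DestinationRule with circuit-breaker settings for the service behind the failing path could look like this (a minimal sketch; the host name and all thresholds here are assumptions):

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: bad-service-cb
spec:
  host: bad-service            # assumed: the service that /bad/* routes to
  trafficPolicy:
    connectionPool:
      tcp:
        maxConnections: 100
      http:
        http1MaxPendingRequests: 10
        maxRequestsPerConnection: 1
    outlierDetection:          # circuit breaker: eject hosts returning 5xx
      consecutive5xxErrors: 5
      interval: 10s
      baseEjectionTime: 30s
      maxEjectionPercent: 100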
Alternatively, you can use the same service with a match statement and use subsets to route the traffic to different subsets of the same service. From there, it's possible to create different traffic policies for the different subsets.
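A rough sketch of the subset approach (all names here are hypothetical; note that subsets are formed from pod labels, so the two subsets still need distinguishable pods):

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: myservice-by-path
spec:
  hosts:
  - myservice
  http:
  - match:
    - uri:
        prefix: /bad
    route:
    - destination:
        host: myservice
        subset: bad-path
  - route:
    - destination:
        host: myservice
        subset: good-path
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: myservice-subsets
spec:
  host: myservice
  subsets:
  - name: bad-path
    labels:
      path-group: bad          # hypothetical pod label
    trafficPolicy:             # circuit breaker only for this subset
      outlierDetection:
        consecutive5xxErrors: 3
        interval: 10s
        baseEjectionTime: 30s
  - name: good-path
    labels:
      path-group: good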

Related

How to communicate securely to a k8s service via istio?

I can communicate to another service in the same namespace via:
curl http://myservice1:8080/actuator/info
inside the pod.
The application is not configured with TLS; I am curious whether I can reach that pod via a virtual service so that I can utilize this Istio feature:
curl https://myservice1:8080/actuator/info
We have an Istio VirtualService and Gateway in place. External access to the pod is managed by them and is working properly. We just want to reach another pod via HTTPS, if possible, without having to reconfigure the application.
Answering the question in the title: there are many possibilities, but you should begin with Understanding TLS Configuration:
One of Istio’s most important features is the ability to lock down and secure network traffic to, from, and within the mesh. However, configuring TLS settings can be confusing and a common source of misconfiguration. This document attempts to explain the various connections involved when sending requests in Istio and how their associated TLS settings are configured. Refer to TLS configuration mistakes for a summary of some of the most common TLS configuration problems.
There are many different ways to secure your connection. It all depends on what exactly you need and what you set up.
We have an Istio VirtualService and Gateway in place; external access to the pod is managed by them and working properly. We just wanted to reach another pod via HTTPS if possible without having to reconfigure the application.
As for virtualservice and gateway, you will find an example configuration in this article. You can find guides for single host and for multiple hosts.
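For illustration, a single-host HTTPS Gateway along the lines of that guide might look like this (the hostname and credential name are assumptions):

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: myservice1-gateway               # hypothetical name
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 443
      name: https
      protocol: HTTPS
    tls:
      mode: SIMPLE
      credentialName: myservice1-credential   # assumed TLS secret in istio-system
    hosts:
    - myservice1.example.com                  # hypothetical host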
We just wanted to reach another pod via https if possible without having to reconfigure the application.
Here you will most likely be able to apply the outbound configuration:
While the inbound side configures what type of traffic to expect and how to process it, the outbound configuration controls what type of traffic the gateway will send. This is configured by the TLS settings in a DestinationRule, just like external outbound traffic from sidecars, or auto mTLS by default.
The only difference is that you should be careful to consider the Gateway settings when configuring this. For example, if the Gateway is configured with TLS PASSTHROUGH while the DestinationRule configures TLS origination, you will end up with double encryption. This works, but is often not the desired behavior.
A VirtualService bound to the gateway needs care as well to ensure it is consistent with the Gateway definition.
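As a minimal sketch of that outbound side, assuming the in-mesh service from the question is named myservice1, a DestinationRule can tell the client sidecar to encrypt traffic toward it while the application itself keeps speaking plain HTTP:

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: myservice1-tls
spec:
  host: myservice1           # the in-mesh service from the question
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL     # sidecars handle mTLS; app stays plain HTTP

With ISTIO_MUTUAL the sidecars handle the encryption end to end, which matches the goal of not reconfiguring the application; SIMPLE would instead originate plain TLS and require the destination itself to serve TLS.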

is it possible to achieve path based routing behind aws NLB?

I have a use case where I have a lot of traffic coming to my web servers, so I need greater performance and better latency; however, there are 2 paths to which traffic is incoming.
As per my understanding, this is achievable with the AWS NLB, which scales to thousands of requests per second with sub-100 ms latency.
However, I have www.jatin.com and www.jatin.com/somepath, which means I need path-based routing, which is supported by the AWS ALB.
Is the performance as well as the path-based routing achievable with the NLB?
achievable with NLB?
Sadly, it's not possible. Concepts such as URLs, paths, or DNS hostnames are only defined at layer 7 (application) of the OSI model, whereas the NLB operates at layer 4 (transport). Consequently, the NLB is not able to differentiate between URL domain names or paths.
The only distribution of incoming traffic you can obtain with an NLB, as far as I know, is based on port number. So you can have one listener for port 80, another for port 88, 443, and so on. This works because ports, just like IP addresses, are part of layer 4.
Only the ALB (and, partially, the CLB) operates at layer 7, so it can do path-based routing. You either have to use the ALB, or look for a third-party load balancer that you can deploy on AWS.
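To make the distinction concrete, here is a rough CloudFormation sketch (all resource names and references are hypothetical): an NLB listener can only select traffic by protocol and port, while an ALB listener rule can match the request path.

# NLB: a TCP listener selects traffic by port only (layer 4).
NlbListener:
  Type: AWS::ElasticLoadBalancingV2::Listener
  Properties:
    LoadBalancerArn: !Ref MyNlb            # hypothetical NLB
    Protocol: TCP
    Port: 443
    DefaultActions:
      - Type: forward
        TargetGroupArn: !Ref WebTargets    # hypothetical target group

# ALB: a listener rule can match the request path (layer 7).
AlbSomePathRule:
  Type: AWS::ElasticLoadBalancingV2::ListenerRule
  Properties:
    ListenerArn: !Ref AlbHttpsListener     # hypothetical ALB listener
    Priority: 10
    Conditions:
      - Field: path-pattern
        PathPatternConfig:
          Values:
            - /somepath*
    Actions:
      - Type: forward
        TargetGroupArn: !Ref SomePathTargets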

Relationship between Forwarding Rules, Target HTTP Proxy, URL Map, and Backend Service, in GCP

I'm new to GCP and pretty confused by the load balancing setup if you have an HTTP service (I asked a different question about TCP load balancing here: purpose of Target Pools in GCP).
It seems like, if you have a service which uses HTTP and you want to use Load Balancing, you have to create a lot of different components to make it happen.
In the tutorial I'm going through in Qwiklabs (https://google.qwiklabs.com/focuses/558?parent=catalog), you need to set things up so that requests flow like this: Forwarding Rule -> Target HTTP Proxy -> URL Map -> Backend Service -> Managed Instance Group. However, it doesn't really explain the relationship between these things.
I think the purpose of the Managed Instance Group is clear, but I don't understand the relationship between the others or their purpose. Can you provide an easy definition of the other components and describe how they are different from each other?
These entities are not really different components; they are just a way to model the configuration in a more flexible and structured way.
Forwarding Rule: This is just a mapping of IP & port to target proxy. You can have multiple forwarding rules pointing to the same target proxy - this is handy when you want to add another IP address or enable IPv6 or additional ports later on without redeploying the whole loadbalancer.
Target Proxy: This is all about how to handle connections. In your case with a target HTTP proxy, it sets up HTTP handling. With a target HTTPS proxy, you can configure SSL certificates as well.
URL Map: This only makes sense in the HTTP/HTTPS case - since the HTTP/HTTPS proxy parses requests, it can make decisions based on the requested URL. With a URL map, you can send different parts of your website to different services - this is for example great for microservice architectures.
Backend Service: This encapsulates the concept of a group of servers / endpoints that can handle a class of requests. The backend service lets you fine-tune some aspects of load balancing like session affinity, how long to wait for backends, what to do if they're unhealthy and how to detect it. The set of backends can be identified by an instance group (with or without autoscaling etc.) but can also be something like a GCS bucket for serving static content.
The reason for having those all separate entities is to let you mix and match or reuse parts as makes sense. For example, if you had some sort of real-time communication platform, you might have forwarding rules for web and RTC traffic. The web traffic might go through a HTTP(S) proxy with a URL map, serving static content from a GCS bucket. The RTC traffic might go through a target TCP proxy or even a UDP network level load balancer but point at the same set of backends / the same instance group.
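To make the URL map's role concrete, here is a rough sketch in the YAML format used by gcloud compute url-maps export (all names are hypothetical):

name: example-url-map                    # hypothetical URL map
defaultService: global/backendServices/web-backend
hostRules:
- hosts:
  - example.com
  pathMatcher: main
pathMatchers:
- name: main
  defaultService: global/backendServices/web-backend
  pathRules:
  - paths:
    - /api/*                             # /api traffic goes to a separate backend service
    service: global/backendServices/api-backend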

Is it possible to make rules in Azure like you can do in AWS WAF?

I'm planning to move the development environment from AWS to Azure,
and I'm researching on how to make rules in Azure like I made rules in AWS WAF.
In AWS WAF, I made a rule with 2 conditions.
a) When a request originates from certain IP addresses
and
b) When a request matches at least one of the filters in the string match condition
(This includes body contains xxx or query string contains xxx)
Is it possible to do the same thing in Azure as well?
You can create Network Security Groups (NSGs) in Azure and attach them (one or more) to a Virtual Machine's NIC(s) or to the subnet itself. You can create rules on an NSG to allow or deny inbound or outbound network traffic based on source or destination IP address, port, and protocol. Note that NSGs operate at the network layer, so they cover your IP-address condition (a) but not string matching against bodies or query strings as in condition (b).
More Info here - https://learn.microsoft.com/en-us/azure/virtual-network/security-overview

How to parse custom PROXY protocol v2 header for custom routing in HAProxy configuration?

How can one parse the PROXY protocol version 2 header and use the parsed values to select a backend?
Specifically, I am making a connection from one AWS account to another using a VPC PrivateLink endpoint with PROXY v2 enabled. This includes the endpoint ID according to the docs.
The Proxy Protocol header also includes the ID of the endpoint. This information is encoded using a custom Type-Length-Value (TLV) vector as follows.
My goal is to connect from a resource A in account 1 to a resource B in account 2. The plan is resource A -> PrivateLink -> NLB (with PROXY v2 enabled) -> HAProxy -> resource B.
I need to detect the VPC PrivateLink endpoint ID in the HAProxy frontend to select the correct backend. I'm not clear on how to invoke a custom parser from the HAProxy configuration, or whether this is even possible. If it is, how can it be done?
The reason I can't just use the source IP: it is possible for private IP spaces to overlap in my architecture. There will be several accounts acting as account 1 in the example above, so I have to do destination routing based on the endpoint ID rather than the source IP exposed by the PROXY protocol.
Examples
Not good
This is our current scenario. In it, two inbound connections from different VPCs having the same private IP address space cannot be distinguished.
frontend salt_4506_acctA_front
    bind 10.0.1.32:4506 accept-proxy
    mode tcp
    default_backend salt_4506_acctA_back

backend salt_4506_acctA_back
    balance roundrobin
    mode tcp
    server salt-master-ecs 192.168.0.88:32768
If we need to route connections for acctB's VPC using the same IP space, there would be no way to distinguish them.
Ideal
An ideal solution would be to modify this to something like the following (though I recognize this won't work; it is just pseudo-configuration).
frontend salt_4506_acctA_front
    bind *:4506 accept-proxy if endpointID == vpce-xxxxxxx1
    mode tcp
    default_backend salt_4506_acctA_back

backend salt_4506_acctA_back
    balance roundrobin
    mode tcp
    server salt-master-ecs 192.168.0.88:32768
Any other options in place of HAProxy for destination routing based on the endpoint ID are also acceptable, but HAProxy seemed like the obvious candidate.
It looks like AWS uses the "2.2.7. Reserved type ranges" described in https://www.haproxy.org/download/1.8/doc/proxy-protocol.txt, so you will need to parse that part on your own.
This could be possible in Lua, but I'm not an expert in Lua yet ;-)