Envoy Filter is only getting applied on calls to External Load Balancer but not on Traffic to inbound ingress for Specific Service - amazon-web-services

I have a K8s cluster and retrieved its ingress address with: kubectl get svc istio-ingressgateway -n istio-system. Let's assume the address is a467.kongz.com.
There is an EnvoyFilter in my cluster that uses external authorization (ExtAuth) and attaches an extra header, Is-Kong-Verified, to the response headers.
Consider something similar to this: https://stackoverflow.com/a/67350456/10858217
Now when I make an API call with: curl -v a467.kongz.com/stream/1
the request reaches the external auth system, the response header has Is-Kong-Verified attached, and it finally reaches the upstream Kong Stream Service pod (/stream/1).
But when I call the Kong Stream Service directly, which is exposed publicly via Ingress as https://stream.kongz.com/stream/1, the request is not picked up by the EnvoyFilter, although it does reach the end service.
End Goal
I need the EnvoyFilter to be applied to all incoming requests inside the cluster, even when the service is requested directly.
NOTE: The target Service/Deployment has the Istio sidecar injected.
I have checked a few documents and gathered that there should be a gateway service such as Ambassador or NGINX acting as a proxy in front of the services: the client calls the NGINX proxy, which routes traffic to the ALB or cluster address, where it passes through the EnvoyFilter before reaching the upstream service.
Is it possible to achieve this without any extra proxy (NGINX, Ambassador), or am I missing something?
Thanks in advance.

I have finally found a solution to my question.
Compared with the spec.configPatches.context: GATEWAY configuration in the following doc:
envoy filter to intercept upstream response
it should be changed to spec.configPatches.context: SIDECAR_INBOUND,
and the workload selector needs to match labels on the target pods (in any namespace).
See the following doc for more context:
https://istio.io/latest/docs/reference/config/networking/envoy-filter/#EnvoyFilter-PatchContext
This allows API calls that reach the specific service via Ingress to be intercepted by the EnvoyFilter.
Make sure the pods carry a label that matches the workload selector (spec.workloadSelector.labels).
For instance, it should look similar to this:
spec:
  workloadSelector:
    labels:
      externalAuth: enabled
and your Pod(s) have the label externalAuth: enabled.
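Putting the pieces together, a trimmed sketch of such an EnvoyFilter might look like the following. The metadata names are placeholders and the patch value is left as a stub; only the context and workloadSelector are the parts this answer is about:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: ext-auth-filter       # placeholder name
  namespace: istio-system     # root namespace, so it can match workloads in any namespace
spec:
  workloadSelector:
    labels:
      externalAuth: enabled   # must match the labels on the target pods
  configPatches:
  - applyTo: HTTP_FILTER
    match:
      context: SIDECAR_INBOUND   # apply at the sidecar, not only at the gateway
      listener:
        filterChain:
          filter:
            name: envoy.filters.network.http_connection_manager
    patch:
      operation: INSERT_BEFORE
      value:
        name: envoy.filters.http.ext_authz
        # the actual ext_authz typed_config goes here
```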

Related

Inbound Istio Ingress Gateway Metrics

I deployed a custom Istio Ingress Gateway deployment with the default IstioOperator telemetry config and no EnvoyFilter to modify the stats.
When I checked the Prometheus stats via:
kubectl exec -n web-ingress-gateway web-ingress-gateway-6966569988-876mp -c istio-proxy -- pilot-agent request GET /stats/prometheus | grep istio_requests_total
It returns a bunch of
istio_requests_total{response_code="200",reporter="source",source_workload="web-ingress-gateway",source_workload_namespace="web-ingress-gateway",source_principal="unknown",source_app="web-ingress-gateway",source_version="unknown",source_cluster="redacted",destination_workload="redacted",destination_workload_namespace="redacted",destination_principal="unknown",destination_app="unknown",destination_version="unknown",destination_service="redacted",destination_service_name="redacted",destination_service_namespace="redacted",destination_cluster="redacted",request_protocol="http",response_flags="-",grpc_response_status="",connection_security_policy="unknown",source_canonical_service="web-ingress-gateway",destination_canonical_service="redacted",source_canonical_revision="latest",destination_canonical_revision="latest"} 28
...
It seems that only reporter="source" labels are present, but no reporter="destination", which is usually also present in the sidecar.
Is there a way to get the incoming requests metrics?
I followed this doc, but it does not break the data down to the detail I need, as it only gives you the response_code_class.
# TYPE envoy_http_outbound_0_0_0_0_9282_downstream_rq counter
envoy_http_outbound_0_0_0_0_9282_downstream_rq{response_code_class="1xx"} 0
envoy_http_outbound_0_0_0_0_9282_downstream_rq{response_code_class="2xx"} 1062279
envoy_http_outbound_0_0_0_0_9282_downstream_rq{response_code_class="3xx"} 130245
envoy_http_outbound_0_0_0_0_9282_downstream_rq{response_code_class="4xx"} 12532
envoy_http_outbound_0_0_0_0_9282_downstream_rq{response_code_class="5xx"} 578
According to documentation:
Reporter: This identifies the reporter of the request. It is set to destination if report is from a server Istio proxy and source if report is from a client Istio proxy or a gateway.
This means that requests with reporter="source" are sent from the source pod (web-ingress-gateway in your case) and are reported by Prometheus as such in that pod.
Requests with reporter="destination" are sent from another pod to a destination pod (in your case that would be web-ingress-gateway -> ), and are reported as such in the destination pod.
If you issue the same command against the application pod (not the ingress gateway), you will see requests with reporter="destination".
Using the Bookinfo application as an example, we can see the requests from the productpage pod to the details pod:
productpage is the source pod, and its requests are reported with reporter="source"
details is the destination pod, and the same requests are reported there with reporter="destination"
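The split between the two reporters can be checked programmatically. Here is a minimal sketch that parses Prometheus text-format output (as returned by pilot-agent request GET /stats/prometheus) and groups istio_requests_total series by their reporter label. The sample lines below are made up for illustration, not real cluster output:

```python
import re
from collections import Counter

def count_by_reporter(prom_text: str) -> Counter:
    """Count istio_requests_total series per `reporter` label value."""
    counts = Counter()
    for line in prom_text.splitlines():
        if not line.startswith("istio_requests_total{"):
            continue
        m = re.search(r'reporter="([^"]*)"', line)
        if m:
            counts[m.group(1)] += 1
    return counts

# Made-up sample resembling /stats/prometheus output:
sample = """\
istio_requests_total{response_code="200",reporter="source",source_app="gw"} 28
istio_requests_total{response_code="200",reporter="destination",source_app="gw"} 28
istio_requests_total{response_code="503",reporter="source",source_app="gw"} 3
"""

print(count_by_reporter(sample))  # Counter({'source': 2, 'destination': 1})
```

Running this against the gateway pod's scrape output would show only source entries, while an application pod's output would also contain destination entries.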
In case I misunderstood your question and you want to see metrics for requests coming from outside to your ingress gateway: this is currently not possible. The ingress gateway only emits metrics with the reporter set to source [source].
reporter="destination" is only available on HTTP metrics, so you need to check your connection protocol. Istio can detect the protocol automatically, but when it can't, it defaults to TCP; in that case you need to select the protocol explicitly via the port name or appProtocol. See https://istio.io/latest/docs/ops/configuration/traffic-management/protocol-selection/ for details.
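For example, a Service port can declare HTTP explicitly either via the port name prefix or via appProtocol. This is a sketch with illustrative names:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: myservice   # illustrative name
spec:
  selector:
    app: myservice
  ports:
  - name: http-web        # the "http-" name prefix tells Istio this port carries HTTP
    port: 8080
    targetPort: 8080
    # appProtocol: http   # alternative to the name prefix (Kubernetes >= 1.18)
```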

How to communicate securely to a k8s service via istio?

I can communicate to another service in the same namespace via:
curl http://myservice1:8080/actuator/info
inside the pod.
The application is not configured with TLS. I am curious whether I can reach that pod via a virtual service, so that I can use this Istio feature:
curl https://myservice1:8080/actuator/info
We have an Istio VirtualService and Gateway in place; external access to the pod is managed by them and works properly. We just want to reach another pod via HTTPS, if possible, without having to reconfigure the application.
How to communicate securely to a k8s service via istio?
Answering the question in the title: there are many possibilities, but you should start with Understanding TLS Configuration:
One of Istio’s most important features is the ability to lock down and secure network traffic to, from, and within the mesh. However, configuring TLS settings can be confusing and a common source of misconfiguration. This document attempts to explain the various connections involved when sending requests in Istio and how their associated TLS settings are configured. Refer to TLS configuration mistakes for a summary of some of the most common TLS configuration problems.
There are many different ways to secure your connection. It all depends on what exactly you need and what you set up.
We have an Istio VirtualService and Gateway in place; external access to the pod is managed by them and works properly. We just wanted to reach another pod via HTTPS, if possible, without having to reconfigure the application.
As for virtualservice and gateway, you will find an example configuration in this article. You can find guides for single host and for multiple hosts.
We just wanted to reach another pod via https if possible without having to reconfigure the application.
Here you will most likely be able to apply the outbound configuration:
While the inbound side configures what type of traffic to expect and how to process it, the outbound configuration controls what type of traffic the gateway will send. This is configured by the TLS settings in a DestinationRule, just like external outbound traffic from sidecars, or auto mTLS by default.
The only difference is that you should be careful to consider the Gateway settings when configuring this. For example, if the Gateway is configured with TLS PASSTHROUGH while the DestinationRule configures TLS origination, you will end up with double encryption. This works, but is often not the desired behavior.
A VirtualService bound to the gateway needs care as well to ensure it is consistent with the Gateway definition.
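As a hedged sketch of the outbound side, TLS toward the in-mesh service could be configured with a DestinationRule like the following. The host is illustrative; the right mode depends on your setup:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: myservice1-tls   # illustrative name
spec:
  host: myservice1.default.svc.cluster.local
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL   # mesh-issued mTLS between sidecars;
                           # use SIMPLE to originate plain TLS instead
```

With ISTIO_MUTUAL, the sidecars encrypt traffic between pods even though the application itself still speaks plain HTTP, which matches the "no application changes" requirement.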

Istio configuration on GKE

I have some basic questions about Istio. I installed Istio for my Tyk API gateway, and then found that simply installing Istio caused all traffic between the Tyk pods to be blocked. Is this the default behaviour for Istio? The Tyk gateway cannot communicate with the Tyk dashboard.
When I rebuild my deployment without Istio, everything works fine.
I have also read that Istio can be configured with virtual services to perform traffic routing. Is this what I need to do for every default installing of Istio? Meaning, if I don't create any virtual services, then Istio will block all traffic by default?
Secondly, I understand a virtual service is created as a YAML file applied as a CRD. The host name defined in the virtual service rules - in a default Kubernetes cluster implementation on Google Cloud, how do I find out the host name of my application?
Lastly, if I install Tyk first and later install Istio, and I have created the necessary label in Tyk's namespace for the proxy to be injected, can I just perform a rolling upgrade of my Tyk pods to have Istio start the injection?
For example, I have these labels in my Tyk dashboard service. Do I use the value called "app" in my virtual service YAML?
labels:
  app: dashboard-svc-tyk-pro
  app.kubernetes.io/managed-by: Helm
  chart: tyk-pro-0.8.1
  heritage: Helm
  release: tyk-pro
Sorry for all the basic questions!
Regarding the Tyk gateway not being able to communicate with the Tyk dashboard:
I think the problem is that your pod tries to connect to the database before the Istio sidecar is ready, so the connection cannot be established.
Istio runs an init container that configures the pod's route table so that all traffic is routed through the sidecar. If the sidecar isn't running yet and another container tries to connect to the DB, no connection can be established. Example case: Application running in Kubernetes cron job does not connect to database in same Kubernetes cluster.
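One common mitigation (assuming a reasonably recent Istio release that supports it) is to delay application startup until the sidecar is ready, via a proxy config annotation on the pod template. The Deployment name here is illustrative:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tyk-dashboard   # illustrative name
spec:
  template:
    metadata:
      annotations:
        proxy.istio.io/config: |
          holdApplicationUntilProxyStarts: true
```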
For the question on virtual services:
Each virtual service consists of a set of routing rules that are evaluated in order, letting Istio match each given request to the virtual service to a specific real destination within the mesh.
By default, Istio configures the Envoy proxies to passthrough requests to unknown services. However, you can’t use Istio features to control the traffic to destinations that aren’t registered in the mesh.
For the question on the host name, refer to this documentation:
The hosts field lists the virtual service’s hosts - in other words, the user-addressable destination or destinations that these routing rules apply to. This is the address or addresses the client uses when sending requests to the service.
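For in-mesh traffic in a default cluster, the host is usually the Kubernetes Service DNS name rather than the "app" label. A sketch, with the namespace assumed to be tyk:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: dashboard   # illustrative name
spec:
  hosts:
  - dashboard-svc-tyk-pro.tyk.svc.cluster.local  # Service DNS name, not the `app` label
  http:
  - route:
    - destination:
        host: dashboard-svc-tyk-pro.tyk.svc.cluster.local
```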
For adding Istio on GKE to an existing cluster, please refer to this documentation:
If you want to update a cluster with the add-on, you may need to first resize your cluster to ensure that you have enough resources for Istio. As when creating a new cluster, we suggest at least a 4-node cluster with the 2 vCPU machine type. If you have an existing application on the cluster, you can find out how to migrate it so it's managed by Istio, as described in the Istio documentation.
You can uninstall the add-on by following this document, which includes shifting traffic away from the Istio ingress gateway. Please take a look at this doc for more details on installing and uninstalling Istio on GKE.
Also see this document for installing Istio on GKE, which covers installing it on an existing cluster to quickly evaluate Istio.

Set static response from Istio Ingress Gateway

How do you set a static 200 response in Istio's Ingress Gateway?
We have a situation where we need an endpoint to return a small bit of static content (a bootstrap URL). We could even put it in a header. Can Istio host something like that or do we need to run a pod for no other reason than to return a single word?
Specifically I am looking for a solution that returns 200 via Istio configuration, not a pod that Istio routes to (which is quite a common example and available elsewhere).
You have to do it manually, by creating a VirtualService pointing at a Service that is connected to a pod.
So yes: you first have to create the pod and attach a Service to it, even if your application only returns a single word.
Istio Gateways are responsible for opening ports on the relevant Istio gateway pods and receiving traffic for hosts. That's it.
The VirtualService: Istio VirtualServices are what get "attached" to Gateways and are responsible for defining the routes the gateway should implement. You can have multiple VirtualServices attached to a Gateway, but not for the same domain.
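A minimal sketch of that wiring might look like this. The host name and the backing service are placeholders; the small pod behind bootstrap-svc is what actually returns the static content:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: bootstrap-gw   # placeholder name
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - bootstrap.example.com   # placeholder host
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: bootstrap-vs   # placeholder name
spec:
  hosts:
  - bootstrap.example.com
  gateways:
  - bootstrap-gw
  http:
  - route:
    - destination:
        host: bootstrap-svc   # the small service that serves the static content
        port:
          number: 80
```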

Exposing Istio Ingress Gateway as NodePort to GKE and run health check

I'm running Istio Ingress Gateway in a GKE cluster. The Service runs as a NodePort. I'd like to connect it to a Google backend service; however, we need a health check that runs against Istio. Do you know whether Istio exposes any HTTP endpoint to run a health check and verify its status?
Per this installation guide, "Istio requires no changes to the application itself. Note that the application must use HTTP/1.1 or HTTP/2.0 protocol for all its HTTP traffic because the Envoy proxy doesn't support HTTP/1.0: it relies on headers that aren't present in HTTP/1.0 for routing."
The healthcheck doesn't necessarily run against Istio itself, but against the whole stack behind the IP addresses you configured for the load balancer backend service. It simply requires a 200 response on / when invoked with no host name.
You can configure this by installing a small service like httpbin as the default path for your gateway.
You might also consider changing your Service to type LoadBalancer, annotated to be internal to your network (no public IP). This generates a Backend Service, complete with a health check, which you can borrow for your other load balancer. This method has worked for me when nesting load balancers (to migrate load), but not for a proxy like Google's IAP.
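If you do want to probe the gateway itself, Istio's ingress gateway exposes a readiness endpoint on its status port (15021 by default) at /healthz/ready. On GKE that can be wired into the load balancer with a BackendConfig; this is a sketch, assuming the status port is reachable from the health checker:

```yaml
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: ingressgateway-backendconfig   # illustrative name
spec:
  healthCheck:
    type: HTTP
    requestPath: /healthz/ready
    port: 15021   # Istio's status port
```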