Set ignore_port_in_host_matching or strip_any_host_port in Istio EnvoyFilter - istio

Related to the issue https://github.com/envoyproxy/envoy/issues/23650
An EnvoyFilter is used to modify the request's host and re-route the request to another host (or, in Envoy terms, another cluster).
In the Lua script, replacing the host works when the replacement API uses the same port as the outbound listener. With a different port, however, we cannot re-route the request to the other cluster by modifying the Host header of the HTTP request.
According to the answer from the Envoy community, we should set the ignore_port_in_host_matching or strip_any_host_port route configuration option in Envoy.
But how can we set those options in Istio?
Any help is appreciated.
Thank you.
Code can be found here: https://github.com/envoyproxy/envoy/issues/23650
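For anyone who lands here later: one plausible way to express this in Istio is an EnvoyFilter that merges the option into the generated route configuration. This is only an untested sketch, assuming an Envoy/Istio version that already supports ignore_port_in_host_matching; the name, namespace, and context are placeholders to adapt to your setup.
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: ignore-port-in-host-matching   # placeholder name
  namespace: istio-system              # root namespace so the patch applies mesh-wide
spec:
  configPatches:
  - applyTo: ROUTE_CONFIGURATION
    match:
      context: SIDECAR_OUTBOUND        # the outbound routes the Lua filter re-routes through
    patch:
      operation: MERGE
      value:
        ignore_port_in_host_matching: true   # strip the port before matching virtual hosts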

Related

Envoy Filter is only getting applied on calls to External Load Balancer but not on Traffic to inbound ingress for Specific Service

I have a K8s cluster and exposed its address using the command kubectl get svc istio-ingressgateway -n istio-system -> let's assume the address is a467.kongz.com.
There's an EnvoyFilter in my cluster that uses ExtAuth and attaches an extra header, Is-Kong-Verified, to the response headers.
Consider something similar to this -> (https://stackoverflow.com/a/67350456/10858217)
Now when I make an API call with -> curl -v a467.kongz.com/stream/1
it reaches the external auth system, the response has the Is-Kong-Verified header attached, and the request finally reaches the upstream Kong Stream Service pod (/stream/1).
However, when I call the Kong Stream Service directly, which is exposed to the public via Ingress as https://stream.kongz.com/stream/1, the request is not picked up by the EnvoyFilter but still reaches the end service.
End Goal
I need the EnvoyFilter to be applied to all incoming requests inside the cluster, even if the Service is requested directly.
NOTE: the target Service/Deployment has the Istio sidecar injected.
I have checked a few documents and gathered that there should be a gateway service such as Ambassador or Nginx that acts as a proxy to the services: the client/user calls the Nginx proxy, which routes the traffic to the ALB or cluster address, so it goes through the EnvoyFilter and then reaches the upstream service.
Is it possible to achieve the end goal without any proxy system (Nginx, Ambassador), or am I missing something?
Thanks in advance for the answer.
Finally, I have found a solution to my query.
Compared to the spec.configPatches.context=GATEWAY configuration in the following doc
envoy filter to intercept upstream response
it should be changed to spec.configPatches.context=SIDECAR_INBOUND,
and the workload selector needs to be changed to labels that match the target pods in any namespace.
Use the following doc to gain more context:
https://istio.io/latest/docs/reference/config/networking/envoy-filter/#EnvoyFilter-PatchContext
Now API calls that reach the specific service via the Ingress will be intercepted by the EnvoyFilter.
Make sure the pods have a label that matches the workload selector spec.workloadSelector.labels.
For instance, it should be similar to the following:
spec:
  workloadSelector:
    labels:
      externalAuth: enabled
And your Pod(s) have the label externalAuth: enabled
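To make that more concrete, here is a rough sketch of what such an EnvoyFilter could look like with the SIDECAR_INBOUND context and a workload selector. The ext_authz body and every name and address below are placeholders, not the exact configuration from the linked answer:
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: ext-auth-sidecar          # placeholder name
  namespace: istio-system         # root namespace, so the selector can match pods in any namespace
spec:
  workloadSelector:
    labels:
      externalAuth: enabled       # must match the labels on the target pods
  configPatches:
  - applyTo: HTTP_FILTER
    match:
      context: SIDECAR_INBOUND    # apply to inbound traffic at the sidecar, not the gateway
      listener:
        filterChain:
          filter:
            name: envoy.filters.network.http_connection_manager
            subFilter:
              name: envoy.filters.http.router
    patch:
      operation: INSERT_BEFORE
      value:
        name: envoy.filters.http.ext_authz
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.http.ext_authz.v3.ExtAuthz
          http_service:
            server_uri:
              uri: http://ext-auth.default.svc.cluster.local   # placeholder external auth service
              cluster: outbound|80||ext-auth.default.svc.cluster.local
              timeout: 2s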

Inbound Istio Ingress Gateway Metrics

I deployed a custom Istio Ingress Gateway deployment with the default IstioOperator telemetry config and no EnvoyFilter to modify the stats.
When I checked the Prometheus stats via:
kubectl exec -n web-ingress-gateway web-ingress-gateway-6966569988-876mp -c istio-proxy -- pilot-agent request GET /stats/prometheus | grep istio_requests_total
It returns a bunch of
istio_requests_total{response_code="200",reporter="source",source_workload="web-ingress-gateway",source_workload_namespace="web-ingress-gateway",source_principal="unknown",source_app="web-ingress-gateway",source_version="unknown",source_cluster="redacted",destination_workload="redacted",destination_workload_namespace="redacted",destination_principal="unknown",destination_app="unknown",destination_version="unknown",destination_service="redacted",destination_service_name="redacted",destination_service_namespace="redacted",destination_cluster="redacted",request_protocol="http",response_flags="-",grpc_response_status="",connection_security_policy="unknown",source_canonical_service="web-ingress-gateway",destination_canonical_service="redacted",source_canonical_revision="latest",destination_canonical_revision="latest"} 28
...
It seems that only reporter="source" labels are there, but no reporter="destination", which is usually also present on the sidecar.
Is there a way to get metrics for the incoming requests?
I followed this doc.
But it does not break things down to the level of detail I need, as it only gives you the response_code_class:
# TYPE envoy_http_outbound_0_0_0_0_9282_downstream_rq counter
envoy_http_outbound_0_0_0_0_9282_downstream_rq{response_code_class="1xx"} 0
envoy_http_outbound_0_0_0_0_9282_downstream_rq{response_code_class="2xx"} 1062279
envoy_http_outbound_0_0_0_0_9282_downstream_rq{response_code_class="3xx"} 130245
envoy_http_outbound_0_0_0_0_9282_downstream_rq{response_code_class="4xx"} 12532
envoy_http_outbound_0_0_0_0_9282_downstream_rq{response_code_class="5xx"} 578
According to the documentation:
Reporter: This identifies the reporter of the request. It is set to destination if report is from a server Istio proxy and source if report is from a client Istio proxy or a gateway.
This means that requests with reporter="source" are sent from the source pod (web-ingress-gateway in your case) and are reported to Prometheus as such by that pod.
Requests with reporter="destination" are sent from another pod to a destination pod (in your case that would be web-ingress-gateway -> ), and are reported as such in the destination pod.
If you issue the same command against the application pod (not the ingress gateway), you will see requests with reporter="destination".
Using the Bookinfo application as an example, we can see the requests from the productpage pod to the details pod:
productpage is the source pod, and its requests are reported with reporter="source"
details is the destination pod, and its requests are reported with reporter="destination"
In case I misunderstood your question and you want to see metrics for requests coming from outside the mesh to your ingress gateway: this is currently not possible. The ingress gateway only emits metrics with the reporter set to source [source]
The reporter="destination" label is only available on HTTP connection metrics, so you need to check your connection protocol. Istio can detect the connection protocol automatically, but when it can't, it falls back to plain TCP. In that case you need to select the protocol explicitly using the port name or appProtocol. See more on protocol selection here: https://istio.io/latest/docs/ops/configuration/traffic-management/protocol-selection/
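For illustration, a minimal Service sketch with explicit protocol selection; the names and ports are made up, and on recent Kubernetes/Istio versions either the http- port-name prefix or appProtocol alone is enough:
apiVersion: v1
kind: Service
metadata:
  name: web-backend          # placeholder name
spec:
  selector:
    app: web-backend
  ports:
  - name: http-web           # "http-" prefix tells Istio to treat this port as HTTP
    appProtocol: http        # explicit protocol selection (Kubernetes 1.18+), takes precedence over the name
    port: 9282
    targetPort: 8080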

How to communicate securely to a k8s service via istio?

I can communicate with another service in the same namespace via:
curl http://myservice1:8080/actuator/info
from inside the pod.
The application is not configured with TLS; I am curious whether I can reach that pod via a virtual service so that I can utilize this Istio feature:
curl https://myservice1:8080/actuator/info
We have an Istio VirtualService and Gateway in place. External access to the pod is managed by them and is working properly. We just want to reach another pod via HTTPS, if possible, without having to reconfigure the application.
How to communicate securely to a k8s service via istio?
Answering the question in the title: there are many possibilities, but you should start with Understanding TLS Configuration:
One of Istio’s most important features is the ability to lock down and secure network traffic to, from, and within the mesh. However, configuring TLS settings can be confusing and a common source of misconfiguration. This document attempts to explain the various connections involved when sending requests in Istio and how their associated TLS settings are configured. Refer to TLS configuration mistakes for a summary of some of the most common TLS configuration problems.
There are many different ways to secure your connection. It all depends on what exactly you need and what you have set up.
We have an Istio VirtualService and Gateway in place; external access to the pod is managed by them and working properly. We just want to reach another pod via HTTPS, if possible, without having to reconfigure the application.
As for the VirtualService and Gateway, you will find an example configuration in this article, with guides for a single host and for multiple hosts.
We just want to reach another pod via HTTPS, if possible, without having to reconfigure the application.
Here you will most likely be able to apply the outbound configuration:
While the inbound side configures what type of traffic to expect and how to process it, the outbound configuration controls what type of traffic the gateway will send. This is configured by the TLS settings in a DestinationRule, just like external outbound traffic from sidecars, or auto mTLS by default.
The only difference is that you should be careful to consider the Gateway settings when configuring this. For example, if the Gateway is configured with TLS PASSTHROUGH while the DestinationRule configures TLS origination, you will end up with double encryption. This works, but is often not the desired behavior.
A VirtualService bound to the gateway needs care as well to ensure it is consistent with the Gateway definition.
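As an illustration only, here is a minimal DestinationRule sketch that lets the sidecars handle the encryption (Istio mutual TLS) while the application keeps serving plain HTTP; the host and namespace are assumptions, not taken from your setup:
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: myservice1-tls          # placeholder name
spec:
  host: myservice1.default.svc.cluster.local   # assumes the service lives in the "default" namespace
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL        # sidecar-to-sidecar mutual TLS; no change to the application
With this in place the client can keep calling http://myservice1:8080/actuator/info and the sidecars encrypt the connection on the wire; in many meshes auto mTLS already does this by default, so the explicit rule mainly documents the intent.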

Set static response from Istio Ingress Gateway

How do you set a static 200 response in Istio's Ingress Gateway?
We have a situation where we need an endpoint to return a small bit of static content (a bootstrap URL). We could even put it in a header. Can Istio host something like that or do we need to run a pod for no other reason than to return a single word?
Specifically I am looking for a solution that returns 200 via Istio configuration, not a pod that Istio routes to (which is quite a common example and available elsewhere).
You have to do it manually by creating a VirtualService that routes to a service backed by a pod.
Of course, you first have to create the pod and then attach a service to it, even if your application only returns a single word.
Istio Gateways are responsible for opening ports on relevant Istio gateway pods and receiving traffic for hosts. That’s it.
The VirtualService: Istio VirtualServices are what get “attached” to Gateways and are responsible for defining the routes the gateway should implement.
You can have multiple VirtualServices attached to Gateways, but not for the same domain.
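For instance, a minimal sketch of such a VirtualService bound to a gateway and routing to the small service that serves the static content; all names, hosts, and ports below are placeholders:
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: bootstrap-vs            # placeholder name
spec:
  hosts:
  - bootstrap.example.com       # placeholder external host
  gateways:
  - bootstrap-gateway           # the Gateway that opens the port for this host
  http:
  - match:
    - uri:
        exact: /bootstrap
    route:
    - destination:
        host: bootstrap-svc     # the tiny service/pod that returns the static content
        port:
          number: 8080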

How to access client IP of an HTTP request from Google Container Engine?

I'm running a gunicorn+flask service in a docker container with Google Container Engine. I set up the cluster following the tutorial at http://kubernetes.io/docs/hellonode/
The REMOTE_ADDR environment variable always contains an internal address of the Kubernetes cluster. What I was looking for is HTTP_X_FORWARDED_FOR, but it's missing from the request headers. Is it possible to configure the service to retain the external client IP in the requests?
If anyone gets stuck on this, there is a better approach.
Depending on your Kubernetes version, you can use the service field
spec.externalTrafficPolicy: Local
on 1.7+,
or the annotation
service.beta.kubernetes.io/external-traffic: OnlyLocal
on 1.5-1.6.
Before 1.5 this is not supported.
Source: https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/
Note that there are caveats:
https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/#caveats-and-limitations-when-preserving-source-ips
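As a rough sketch (the names and ports below are made up), a LoadBalancer Service that preserves the client source IP on Kubernetes 1.7+ would look something like:
apiVersion: v1
kind: Service
metadata:
  name: flask-frontend            # placeholder name
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local    # preserve the client source IP (1.7+); on 1.5-1.6 use the
                                  # service.beta.kubernetes.io/external-traffic: OnlyLocal annotation instead
  selector:
    app: flask-frontend
  ports:
  - port: 80
    targetPort: 8000              # gunicorn's listening port, assumed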
I assume you set up your service by setting the service's type to LoadBalancer? It's an unfortunate limitation of the way incoming network-load-balanced packets are routed through Kubernetes right now that the client IP gets lost.
Instead of using the service's LoadBalancer type, you could set up an Ingress object to integrate your service with a Google Cloud HTTP(S) Load Balancer, which will add the X-Forwarded-For header to incoming requests.
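For illustration, a minimal Ingress sketch with placeholder names, written against the current networking.k8s.io/v1 API rather than the API that existed when this question was asked; the Google Cloud HTTP(S) Load Balancer fronting it sets X-Forwarded-For:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: flask-ingress          # placeholder name
spec:
  defaultBackend:
    service:
      name: flask-frontend     # backend Service; the GCE ingress controller typically expects type NodePort
      port:
        number: 80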