Routing traffic from a namespace without Istio to a namespace with Istio - istio

In my cluster I have multiple namespaces. All my 1st-party services run in one namespace, and all 3rd-party services run in their own namespaces.
I have Istio enabled on my 1st-party namespace (let’s call it ns-1). Istio is not enabled for any of my 3rd-party namespaces.
I have a 3rd-party service that needs to connect to my 1st-party service. The 3rd-party service uses Kubernetes DNS, like service1.ns-1.svc.cluster.local, to connect to my service. The 3rd-party service can communicate with my 1st-party service without TLS, but when I enable TLS between the services it fails, and I don’t know how/where to terminate the TLS.
Is it possible to define a Gateway that can route traffic between namespaces? Or is it possible to route traffic between namespaces through the Istio IngressGateway?

After playing with Istio and Minikube with echo-server examples, this is what I found. First, let me define the namespaces and services so it will be easier to explain:
ns-1 - namespace 1 with Istio enabled
ns-2 - namespace 2 without Istio
service-1 - service 1 in ns-1 namespace
service-2 - service 2 in ns-2 namespace
Below is the connection status between these two services:
service-1 can communicate with service-2.ns-2.svc.cluster.local with no TLS
service-2 can communicate with service-1.ns-1.svc.cluster.local with no TLS
service-1 can communicate with service-2.ns-2.svc.cluster.local with TLS
service-2 cannot communicate with service-1.ns-1.svc.cluster.local with TLS
As you may already know, in the 4th case above (service-2 with TLS), the TLS is not terminated by any Istio object, which is what causes the failure.
If the TLS could be terminated by a sidecar, this would work. Adding TLS to the Sidecar API is what I am looking for, but it is not in the current Istio release (1.12.2 as of this answer).
What I ended up doing
I deployed another istio-ingressgateway with its Service type set to ClusterIP and configured it to route traffic for my ports. This is the gateway through which all my 3rd-party services reach my 1st-party services: it terminates TLS for incoming traffic and uses mTLS toward the services in the Istio-enabled namespace (ns-1). Since the Service is a ClusterIP, it is visible only inside the cluster. I then configured Istio Gateway and VirtualService objects to route traffic to my services based on port numbers.
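For illustration, a minimal sketch of what such a setup could look like. This is not the exact configuration used above; the IstioOperator-based gateway install and all names, ports, and secrets are assumptions.

# Additional internal ingress gateway, exposed only inside the cluster.
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  profile: empty
  components:
    ingressGateways:
      - name: internal-ingressgateway
        enabled: true
        label:
          istio: internal-ingressgateway
        k8s:
          service:
            type: ClusterIP          # visible only inside the cluster
            ports:
              - name: tls-svc1
                port: 8443
---
# Terminate the 3rd-party TLS at this internal gateway.
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: internal-gateway
  namespace: ns-1
spec:
  selector:
    istio: internal-ingressgateway
  servers:
    - port:
        number: 8443
        name: tls-svc1
        protocol: TLS
      tls:
        mode: SIMPLE                     # terminate incoming TLS here
        credentialName: service-1-cert   # hypothetical secret in the gateway's namespace
      hosts:
        - "*"
---
# Route by port to the 1st-party service; the gateway then uses
# Istio mTLS toward the sidecar of service-1.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: internal-routes
  namespace: ns-1
spec:
  hosts:
    - "*"
  gateways:
    - internal-gateway
  tcp:
    - match:
        - port: 8443
      route:
        - destination:
            host: service-1.ns-1.svc.cluster.local
            port:
              number: 8080

The 3rd-party services would then target the internal gateway's ClusterIP Service (e.g. internal-ingressgateway.istio-system.svc.cluster.local:8443) instead of service-1 directly.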

Related

ASM Envoy proxy not logging traffic

I installed managed ASM in a GKE cluster (Autopilot and Standard) and I'm using a simple Ingress Gateway and VirtualService to access an HTTPS service. In my local environment (Minikube) using Istio, all communication between my services is captured by the Envoy proxy and shown in its logs, but in the same scenario on GCP I cannot get any logs; the last Envoy message is "Envoy proxy is ready". Functionality and communication work fine.
In GKE I only enabled the ASM mesh checkbox when the clusters were created.
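One difference worth checking: many local/demo Istio installs enable Envoy access logging out of the box, while managed ASM leaves it disabled by default. If that is the cause here, enabling it via the Telemetry API might look like this (a sketch, not a confirmed fix for this setup):

apiVersion: telemetry.istio.io/v1alpha1
kind: Telemetry
metadata:
  name: mesh-access-logs
  namespace: istio-system   # the root namespace, so this applies mesh-wide
spec:
  accessLogging:
    - providers:
        - name: envoy       # the built-in Envoy text access log provider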

How to communicate securely to a k8s service via istio?

I can communicate with another service in the same namespace via:
curl http://myservice1:8080/actuator/info
inside the pod.
The application is not configured with TLS. I am curious whether I can reach that pod via a virtual service so that I can utilize this Istio feature:
curl https://myservice1:8080/actuator/info
We have an Istio VirtualService and Gateway in place. External access to the pod is managed by them and is working properly. We just want to reach another pod via HTTPS, if possible, without having to reconfigure the application.
Answering the question under the title: there are many possibilities, but you should start with Understanding TLS Configuration:
One of Istio’s most important features is the ability to lock down and secure network traffic to, from, and within the mesh. However, configuring TLS settings can be confusing and a common source of misconfiguration. This document attempts to explain the various connections involved when sending requests in Istio and how their associated TLS settings are configured. Refer to TLS configuration mistakes for a summary of some of the most common TLS configuration problems.
There are many different ways to secure your connection. It all depends on what exactly you need and what you set up.
We have an Istio VirtualService and Gateway in place; external access to the pod is managed by them and working properly. We just want to reach another pod via HTTPS if possible without having to reconfigure the application.
As for the VirtualService and Gateway, you will find an example configuration in this article. You can find guides for a single host and for multiple hosts.
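For instance, a single-host HTTPS setup could look roughly like this (the host name, certificate secret, and ports are hypothetical):

apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: myservice1-gateway
spec:
  selector:
    istio: ingressgateway             # the default Istio ingress gateway
  servers:
    - port:
        number: 443
        name: https
        protocol: HTTPS
      tls:
        mode: SIMPLE                  # terminate TLS at the gateway
        credentialName: myservice1-cert   # hypothetical TLS secret in istio-system
      hosts:
        - "myservice1.example.com"
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: myservice1
spec:
  hosts:
    - "myservice1.example.com"
  gateways:
    - myservice1-gateway
  http:
    - route:
        - destination:
            host: myservice1.default.svc.cluster.local
            port:
              number: 8080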
We just want to reach another pod via HTTPS if possible without having to reconfigure the application.
Here you will most likely be able to apply the outbound configuration:
While the inbound side configures what type of traffic to expect and how to process it, the outbound configuration controls what type of traffic the gateway will send. This is configured by the TLS settings in a DestinationRule, just like external outbound traffic from sidecars, or auto mTLS by default.
The only difference is that you should be careful to consider the Gateway settings when configuring this. For example, if the Gateway is configured with TLS PASSTHROUGH while the DestinationRule configures TLS origination, you will end up with double encryption. This works, but is often not the desired behavior.
A VirtualService bound to the gateway needs care as well to ensure it is consistent with the Gateway definition.
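As a hedged illustration of DestinationRule-based TLS origination (the service name, namespace, and port are assumptions):

apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: myservice1-tls
spec:
  host: myservice1.default.svc.cluster.local
  trafficPolicy:
    tls:
      mode: SIMPLE   # originate TLS toward the upstream; ISTIO_MUTUAL would use mesh mTLS instead

Per the caution above, check the Gateway's TLS mode first so the combination does not result in double encryption.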

Istio Envoy Sidecar - run it as a Gateway alongside the application

Is it possible to configure the Istio sidecar (Envoy) that runs alongside the application to terminate TLS the way the Istio Ingress Gateway does?
The goal is to terminate my application's TLS on outbound/inbound traffic, encrypt it with Istio mTLS when connecting to other sidecars, and encrypt it again with my certificates before forwarding the traffic to the upstream.
If so, please point me to some documentation.

Istio configuration on GKE

I have some basic questions about Istio. I installed Istio for my Tyk API gateway. Then I found that simply installing Istio caused all traffic between the Tyk pods to be blocked: the Tyk gateway cannot communicate with the Tyk dashboard. Is this the default behaviour for Istio?
When I rebuild my deployment without Istio, everything works fine.
I have also read that Istio can be configured with virtual services to perform traffic routing. Is this what I need to do for every default installation of Istio? Meaning, if I don't create any virtual services, will Istio block all traffic by default?
Secondly, I understand a virtual service is created as a YAML file applied as a CRD. Regarding the host name defined in the virtual service rules: in a default Kubernetes cluster implementation on Google Cloud, how do I find out the host name of my application?
Lastly, if I install Tyk first, then later install Istio, and I have created the necessary label in Tyk's namespace for the proxy to be injected, can I just perform a rolling upgrade of my Tyk pods to have Istio start the injection?
For example, I have these labels in my Tyk dashboard service. Do I use the value called "app" in my virtual service YAML?
labels:
  app: dashboard-svc-tyk-pro
  app.kubernetes.io/managed-by: Helm
  chart: tyk-pro-0.8.1
  heritage: Helm
  release: tyk-pro
Sorry for all the basic questions!
For the question on the Tyk gateway not being able to communicate with the Tyk dashboard:
I think the problem is that your pod tries to connect to the database before the Istio sidecar is ready, and thus the connection can't be established.
Istio runs an init container that configures the pod's route table so all traffic is routed through the sidecar. So if the sidecar isn't running yet and the other pod tries to connect to the database, no connection can be established. (Example case: an application running in a Kubernetes cron job does not connect to a database in the same Kubernetes cluster.) A mitigation is sketched below.
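If that race is indeed the cause, one common mitigation (available since Istio 1.7) is to hold the application container until the proxy is ready. A sketch with hypothetical deployment names and image:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: tyk-dashboard
spec:
  selector:
    matchLabels:
      app: dashboard-svc-tyk-pro
  template:
    metadata:
      labels:
        app: dashboard-svc-tyk-pro
      annotations:
        # Start the app container only after the Envoy sidecar is ready.
        proxy.istio.io/config: |
          holdApplicationUntilProxyStarts: true
    spec:
      containers:
        - name: dashboard
          image: tykio/tyk-dashboard   # hypothetical image

The same flag can also be set mesh-wide under meshConfig.defaultConfig instead of per pod.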
For the question on virtual services:
Each virtual service consists of a set of routing rules that are evaluated in order, letting Istio match each given request to the virtual service to a specific real destination within the mesh.
By default, Istio configures the Envoy proxies to passthrough requests to unknown services. However, you can’t use Istio features to control the traffic to destinations that aren’t registered in the mesh.
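That passthrough behavior is controlled by the mesh's outbound traffic policy; for reference, a sketch of the relevant IstioOperator setting:

apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  meshConfig:
    outboundTrafficPolicy:
      mode: ALLOW_ANY   # default: pass traffic to unknown services through; REGISTRY_ONLY blocks it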
For the question on the hostname, refer to this documentation.
The hosts field lists the virtual service’s hosts - in other words, the user-addressable destination or destinations that these routing rules apply to. This is the address or addresses the client uses when sending requests to the service.
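In other words, the host in a VirtualService is normally the Kubernetes Service's DNS name, not a pod label like app. A sketch, assuming the Service is named dashboard-svc-tyk-pro and lives in a tyk namespace:

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: tyk-dashboard
  namespace: tyk                # hypothetical namespace
spec:
  hosts:
    - dashboard-svc-tyk-pro.tyk.svc.cluster.local   # the Service DNS name
  http:
    - route:
        - destination:
            host: dashboard-svc-tyk-pro.tyk.svc.cluster.local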
For adding Istio on GKE to an existing cluster, please refer to this documentation.
If you want to update a cluster with the add-on, you may need to first resize your cluster to ensure that you have enough resources for Istio. As when creating a new cluster, we suggest at least a 4-node cluster with the 2 vCPU machine type. If you have an existing application on the cluster, you can find out how to migrate it so it's managed by Istio, as mentioned in the Istio documentation.
You can uninstall the add-on by following this document, which includes shifting traffic away from the Istio ingress gateway. Please take a look at this doc for more details on installing and uninstalling Istio on GKE.
Also see this document on installing Istio on GKE, which includes installing it on an existing cluster to quickly evaluate Istio.

Exposing Istio Ingress Gateway as NodePort on GKE and running a health check

I'm running the Istio Ingress Gateway in a GKE cluster. The Service runs as a NodePort. I'd like to connect it to a Google backend service. However, we need a health check that runs against Istio. Do you know if Istio exposes any HTTP endpoint to run a health check and verify its status?
Per this installation guide, "Istio requires no changes to the application itself. Note that the application must use HTTP/1.1 or HTTP/2.0 protocol for all its HTTP traffic because the Envoy proxy doesn't support HTTP/1.0: it relies on headers that aren't present in HTTP/1.0 for routing."
The healthcheck doesn't necessarily run against Istio itself, but against the whole stack behind the IP addresses you configured for the load balancer backend service. It simply requires a 200 response on / when invoked with no host name.
You can configure this by installing a small service like httpbin as the default path for your gateway.
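A sketch of such a catch-all route, assuming httpbin is installed in the default namespace and a Gateway named my-gateway already exists (both names are hypothetical):

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: healthcheck-default
  namespace: istio-system
spec:
  hosts:
    - "*"
  gateways:
    - my-gateway           # hypothetical Gateway bound to the ingress gateway
  http:
    - match:
        - uri:
            exact: /       # the path the GCP health check probes
      route:
        - destination:
            host: httpbin.default.svc.cluster.local
            port:
              number: 8000   # httpbin's default Service port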
You might also consider changing your Service to a LoadBalancer type, annotated to be internal to your network (no public IP). This will generate a Backend Service, complete with healthcheck, which you can borrow for your other load balancer. This method has worked for me with nesting load balancers (to migrate load) but not for a proxy like Google's IAP.
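For reference, an internal LoadBalancer Service of that kind could look roughly like this on GKE (names and ports are assumptions):

apiVersion: v1
kind: Service
metadata:
  name: istio-ingressgateway-internal
  namespace: istio-system
  annotations:
    networking.gke.io/load-balancer-type: "Internal"   # internal LB, no public IP
spec:
  type: LoadBalancer
  selector:
    istio: ingressgateway    # the existing ingress gateway pods
  ports:
    - name: https
      port: 443
      targetPort: 8443       # the gateway pod's default HTTPS port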