I installed managed ASM in a GKE cluster (both Autopilot and Standard), and I'm using a simple Ingress Gateway and a VirtualService to access an HTTPS service. In my local environment (Minikube) with Istio, all communication between my services is intercepted by the Envoy proxy and shown in its logs, but in the same scenario on GCP I cannot get any logs; the last Envoy message is "Envoy proxy is ready". Functionality and communication work fine.
In GKE I only enabled the ASM mesh checkbox when the clusters were created.
I'm facing an issue with the Service annotation that enables an ALPN policy on an AWS load balancer.
I'm testing an application in production, managed by EKS. I need to enable a Network Load Balancer (NLB) on AWS to handle some ingress concerns (TLS certificate and so on).
Among the available annotations is:
service.beta.kubernetes.io/aws-load-balancer-alpn-policy: HTTP2Preferred
I think I need this to enable ALPN in the TLS handshake.
The issue is that it is not applied to my load balancer (the other annotations work); I can confirm this by checking the AWS dashboard or by running curl -s -vv https://my.example.com. To enable this ALPN policy I must apply the change manually, e.g. through the dashboard.
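For reference, a minimal sketch of the kind of Service I'd expect this to work on (all names and the certificate ARN are illustrative placeholders):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-ingress   # illustrative name
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:..."   # placeholder cert ARN
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
    service.beta.kubernetes.io/aws-load-balancer-alpn-policy: "HTTP2Preferred" # the annotation in question
spec:
  type: LoadBalancer
  ports:
  - port: 443
    targetPort: 8443   # assumed backend port
  selector:
    app: my-app        # assumed selector
```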
What am I missing? I wonder if that annotation could only be available for the load balancer controller and not for the base Service for NLBs.
EDIT: I found some GitHub issues requesting this feature for the legacy (in-tree) mode, without a third-party controller; here is a comment that summarizes them all. Since it seems to be an unavailable feature (for now), how can I achieve the same configuration result using, for example, Terraform? Do I need to create the NLB first and then attach it to my Service?
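For illustration, the equivalent configuration in Terraform might look like this (assuming the AWS provider; all names, variables, and the target-group wiring are placeholders). The key point is that the ALPN policy is set directly on a TLS listener:

```hcl
resource "aws_lb" "ingress" {
  name               = "my-ingress-nlb"    # placeholder name
  load_balancer_type = "network"
  subnets            = var.subnet_ids      # assumed variable
}

resource "aws_lb_target_group" "nodes" {
  name     = "my-nodeport-tg"              # placeholder name
  port     = 30443                         # assumed NodePort of the Service
  protocol = "TCP"
  vpc_id   = var.vpc_id                    # assumed variable
}

resource "aws_lb_listener" "tls" {
  load_balancer_arn = aws_lb.ingress.arn
  port              = 443
  protocol          = "TLS"
  certificate_arn   = var.certificate_arn  # assumed variable
  alpn_policy       = "HTTP2Preferred"     # the setting the annotation was meant to apply

  default_action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.nodes.arn
  }
}
```

This implies creating the NLB outside Kubernetes and pointing its target group at the Service's NodePort, rather than letting the in-tree controller provision it.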
Here is what exists and works OK:
Kubernetes cluster in Google Cloud with deployed 8 workloads - basically GraphQL microservices.
Each workload has a Service that exposes port 80 via a NEG (Network Endpoint Group). So each workload has its ClusterIP in the 10.12.0.0/20 network. All of the Services live in a custom namespace, "microservices".
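For reference, a sketch of how each Service exposes port 80 via a standalone NEG (service name, selector, and container port are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: example-graphql-svc   # illustrative name
  namespace: microservices
  annotations:
    cloud.google.com/neg: '{"exposed_ports": {"80": {}}}'  # creates a standalone NEG for port 80
spec:
  type: ClusterIP
  ports:
  - port: 80
    targetPort: 8080          # assumed container port
  selector:
    app: example-graphql      # assumed selector
```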
One of the workloads (API gateway) is exposed to the Internet via Global HTTP(S) Load Balancer. Its purpose is to handle all requests and route them to the right microservice.
Now, I needed to expose all of the workloads to the outside world so they can be reached individually without going through the gateway.
For this, I have created:
a Global Load Balancer, with backends referring to the NEGs, routing configured (the URL path determines which workload a request goes to), and an external IP
a Health Check that is used by Load Balancer for each of the backends
a firewall rule that allows traffic on TCP port 80 from the Google Health Check services 35.191.0.0/16, 130.211.0.0/22 to all hosts in the network.
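The firewall rule above can be sketched as a gcloud command (the rule name is illustrative and the network name is an assumption):

```shell
# Allow the Google Cloud health check ranges to reach TCP port 80 on all hosts in the network.
gcloud compute firewall-rules create allow-lb-health-checks \
  --network=default \
  --direction=INGRESS \
  --action=ALLOW \
  --rules=tcp:80 \
  --source-ranges=35.191.0.0/16,130.211.0.0/22
```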
The problem: the Health Check fails, and thus the load balancer does not work - it returns error 502.
What I checked:
logs show that the firewall rule allows traffic
logs for the Health Check show only the changes I make to it and no other activity, so I don't know what happens inside.
connected via SSH to the VM which hosts the Kubernetes node and checked that the ClusterIPs (10.12.xx.xx) of each workload return HTTP status 200.
connected via SSH to a VM created for test purposes. From this VM I cannot reach any of the ClusterIPs (10.12.xx.xx).
It seems that, for some reason, traffic from the Health Check or from my test VM does not reach the destination. What did I miss?
Is it possible to configure the Istio sidecar (Envoy) that runs alongside the application to terminate TLS, the way the Istio Ingress Gateway does?
The goal is to terminate my application's TLS on inbound/outbound, encrypt with Istio mTLS when connecting to the other sidecar, and re-encrypt with my own certificates before forwarding the traffic upstream.
If so, please point me to the relevant documentation.
I have some basic questions about Istio. I installed Istio for my Tyk API gateway, and then found that simply installing Istio causes all traffic between the Tyk pods to be blocked. Is this the default behaviour of Istio? The Tyk gateway cannot communicate with the Tyk dashboard.
When I rebuild my deployment without Istio, everything works fine.
I have also read that Istio can be configured with virtual services to perform traffic routing. Is this something I need to do for every default installation of Istio? Meaning, if I don't create any virtual services, will Istio block all traffic by default?
Secondly, I understand a virtual service is created as a YAML file and applied as a CRD. Regarding the host name defined in the virtual service's rules - in a default Kubernetes cluster implementation on Google Cloud, how do I find out the host name of my application?
Lastly, if I install Tyk first and later install Istio, and I have created the necessary label on Tyk's namespace for the proxy to be injected, can I just perform a rolling restart of my Tyk pods to have Istio start the injection?
For example, I have these labels on my Tyk dashboard service. Do I use the value of the "app" label in my virtual service YAML?
labels:
  app: dashboard-svc-tyk-pro
  app.kubernetes.io/managed-by: Helm
  chart: tyk-pro-0.8.1
  heritage: Helm
  release: tyk-pro
Sorry for all the basic questions!
Regarding the question about the Tyk gateway not being able to communicate with the Tyk dashboard:
(I think the problem is that your pod tries to connect to the database before the Istio sidecar is ready, so the connection can't be established.
Istio runs an init container that configures the pod's route table so that all traffic is routed through the sidecar. If the sidecar isn't running yet and the application container tries to connect to the db, no connection can be established. Example case: Application running in Kubernetes cron job does not connect to database in same Kubernetes cluster)
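One common mitigation, assuming a reasonably recent Istio version, is to delay the application container until the sidecar is ready via the holdApplicationUntilProxyStarts proxy setting. A sketch of the pod-level annotation (the Deployment name is illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tyk-dashboard   # illustrative name
spec:
  template:
    metadata:
      annotations:
        # Ask Istio to start the app container only after the Envoy sidecar is ready.
        proxy.istio.io/config: |
          holdApplicationUntilProxyStarts: true
```

The same setting can also be enabled mesh-wide under meshConfig.defaultConfig instead of per pod.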
Regarding the question about virtual services:
Each virtual service consists of a set of routing rules that are evaluated in order, letting Istio match each given request to the virtual service to a specific real destination within the mesh.
By default, Istio configures the Envoy proxies to passthrough requests to unknown services. However, you can’t use Istio features to control the traffic to destinations that aren’t registered in the mesh.
Regarding the question about the hostname, refer to this documentation:
The hosts field lists the virtual service’s hosts - in other words, the user-addressable destination or destinations that these routing rules apply to. This is the address or addresses the client uses when sending requests to the service.
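For a service running inside a Kubernetes cluster, the host is usually the Service's cluster DNS name. A minimal sketch, assuming the dashboard Service is named dashboard-svc-tyk-pro in a namespace called tyk (both assumptions, taken from the labels shown above):

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: tyk-dashboard
  namespace: tyk   # assumed namespace
spec:
  hosts:
  - dashboard-svc-tyk-pro.tyk.svc.cluster.local   # the Service's cluster DNS name
  http:
  - route:
    - destination:
        host: dashboard-svc-tyk-pro.tyk.svc.cluster.local
        port:
          number: 3000   # assumed dashboard port
```

So the hosts value comes from the Service name and namespace, not from the "app" label directly (though Helm charts often use the same string for both).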
For adding Istio on GKE to an existing cluster, please refer to this documentation:
If you want to update a cluster with the add-on, you may need to first resize your cluster to ensure that you have enough resources for Istio. As when creating a new cluster, we suggest at least a 4-node cluster with the 2 vCPU machine type. If you have an existing application on the cluster, you can find out how to migrate it so it's managed by Istio, as described in the Istio documentation.
You can uninstall the add-on following the document, which includes shifting traffic away from the Istio ingress gateway. Please take a look at this doc for more details on installing and uninstalling Istio on GKE.
Also adding this document for installing Istio on GKE, which covers installing it on an existing cluster to quickly evaluate Istio.
I'm running an Istio Ingress Gateway in a GKE cluster. The Service runs with a NodePort. I'd like to connect it to a Google backend service. However, we need a health check that runs against Istio. Do you know if Istio exposes any HTTP endpoint for running a health check and verifying its status?
Per this installation guide, "Istio requires no changes to the application itself. Note that the application must use HTTP/1.1 or HTTP/2.0 protocol for all its HTTP traffic because the Envoy proxy doesn't support HTTP/1.0: it relies on headers that aren't present in HTTP/1.0 for routing."
The health check doesn't necessarily run against Istio itself, but against the whole stack behind the IP addresses you configured for the load balancer backend service. It simply requires a 200 response on / when invoked with no host name.
You can configure this by installing a small service like httpbin as the default path for your gateway.
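A sketch of that idea, assuming the httpbin sample is deployed in the default namespace and your Gateway resource is named istio-ingressgateway (both assumptions):

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: healthcheck-default
spec:
  hosts:
  - "*"                      # match requests that carry no host name
  gateways:
  - istio-ingressgateway     # assumed Gateway resource name
  http:
  - match:
    - uri:
        exact: /
    route:
    - destination:
        host: httpbin.default.svc.cluster.local
        port:
          number: 8000       # the httpbin sample's service port
```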
You might also consider changing your Service to a LoadBalancer type, annotated to be internal to your network (no public IP). This will generate a backend service, complete with a health check, which you can borrow for your other load balancer. This method has worked for me for nesting load balancers (to migrate load), but not for a proxy like Google's IAP.
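As another option, assuming a recent Istio, the ingress gateway pod exposes a readiness endpoint on its status port, /healthz/ready on port 15021, which a Google health check can target through a GKE BackendConfig. A sketch (the BackendConfig name is illustrative, and the status port must be reachable from the backend service):

```yaml
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: ingress-healthcheck   # illustrative name
  namespace: istio-system
spec:
  healthCheck:
    type: HTTP
    requestPath: /healthz/ready
    port: 15021               # Istio's status port
```

The BackendConfig is then referenced from the gateway Service via the cloud.google.com/backend-config annotation.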