Istio default installation - traffic blocked?

I'm quite new to Istio.
I just installed Istio on a k8s cluster in GCP. I have 2 services in my private cluster. One of them needs to talk to a Redis Memorystore (over an internal private IP, 10.x.x.x).
I'm seeing errors trying to connect to Redis. What am I missing in my Istio configuration?
Update: I have found that the Redis error is misleading. The real issue seems to be something else - see one of my comments below. I don't understand what that error means.
Some additional background: this is for a Tyk installation. The issue seems to be communication between the Tyk Dashboard and Tyk Gateway pods. I'm seeing the SSL error (see comments below) when trying to connect from the Gateway to the Dashboard (Dashboard to Gateway is fine). The error goes away if I rebuild everything without Istio. I must be doing something really silly. :( Both pods are in the same cluster and the same namespace.

I managed to fix the issue. Redis wasn't the problem; the communication from the Tyk Gateway -> Tyk Dashboard was failing. The gateway talks to the dashboard to register its presence. The connection logs showed what looked like a TLS origination issue with the Istio Envoy proxy when it routes the traffic. I configured a DestinationRule that explicitly turned off mTLS for the dashboard and the problem went away.
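A DestinationRule along these lines disables mTLS for a single host. A minimal sketch, assuming the dashboard Service is named dashboard-svc-tyk-pro and lives in a tyk namespace (adjust both to your install):

apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: tyk-dashboard-no-mtls
  namespace: tyk                  # assumed namespace
spec:
  host: dashboard-svc-tyk-pro     # assumed Service name
  trafficPolicy:
    tls:
      mode: DISABLE               # do not originate mTLS towards the dashboard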

Related

AWS EKS cluster with Istio sidecar auto inject problem and pod ext. db connection issue

I built a new cluster with Terraform for AWS EKS: a single node group with a single node.
The cluster is on 1.22 and I can't seem to get anything to work correctly.
Istio itself installs fine; I have tried versions 1.12.1, 1.13.2, 1.13.3 & 1.13.4 and all seem to have the same issue with auto-injecting the sidecar:
Error creating: Internal error occurred: failed calling webhook "namespace.sidecar-injector.istio.io": failed to call webhook: Post "https://istiod.istio-system.svc:443/inject?timeout=10s": context deadline exceeded
But there are also other issues with the cluster, even without Istio. My application image is pulled and the pod starts fine, but it cannot connect to the database. The DB is external to the cluster - no other build (running on Azure) has any issues connecting to it.
I am not sure if this is related to the application not reaching the external DB, but could the sidecar issue have something to do with BoundServiceAccountTokenVolume?
There is a warning about it being enabled on all clusters from 1.21 - a little odd, as I have other applications with Istio running on another 1.21 cluster on AWS EKS!
I also have this application running with Istio without any issues on Azure on 1.22.
I seem to have fixed it :)
It turned out to be a port issue with the security groups. I was letting Terraform build its own group.
When I opened up all the ports in the 'inbound' section, it worked.
I then closed them all again and only opened 80 and 443 - which again stopped Istio from auto-injecting its sidecar.
My app was requesting to talk to Istio on port 15017, so I opened just that port alongside ports 80 and 443.
Once that port was opened, my app started to work and got the sidecar from Istio without any issue.
So it seems the security group was blocking pod-to-pod communication... unless I have completely messed up my Terraform build in some way.
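For context, 15017 is istiod's sidecar-injection webhook port. In a default install, the webhook URL targets port 443 of the istiod Service, which forwards to container port 15017 on the node - so the node security group must allow the EKS control plane to reach that port. A sketch of the relevant Service excerpt, as created by a default Istio install:

apiVersion: v1
kind: Service
metadata:
  name: istiod
  namespace: istio-system
spec:
  ports:
    - name: https-webhook
      port: 443          # the port in the webhook URL
      targetPort: 15017  # the container port the security group must allow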

Istio configuration on GKE

I have some basic questions about Istio. I installed Istio for my Tyk API gateway. Then I found that simply installing Istio will cause all traffic between the Tyk pods to be blocked. Is this the default behaviour for Istio? The Tyk gateway cannot communicate with the Tyk dashboard.
When I rebuild my deployment without Istio, everything works fine.
I have also read that Istio can be configured with virtual services to perform traffic routing. Is this something I need to do for every default installation of Istio? Meaning, if I don't create any virtual services, will Istio block all traffic by default?
Secondly, I understand a virtual service is created as a YAML file and applied as a CRD. Regarding the host name defined in the virtual service rules: in a default Kubernetes cluster on Google Cloud, how do I find out the host name of my application?
Lastly, if I install Tyk first and later install Istio, and I have created the necessary label in Tyk's namespace for the proxy to be injected, can I just perform a rolling upgrade of my Tyk pods to have Istio start the injection?
For example, I have these labels in my Tyk dashboard service. Do I use the value of the label called "app" in my virtual service YAML?
labels:
  app: dashboard-svc-tyk-pro
  app.kubernetes.io/managed-by: Helm
  chart: tyk-pro-0.8.1
  heritage: Helm
  release: tyk-pro
Sorry for all the basic questions!
For the question on why the Tyk gateway cannot communicate with the Tyk dashboard:
I think the problem is that your pod tries to connect to the database before the Istio sidecar is ready, and thus the connection can't be established.
Istio runs an init container that configures the pod's route table so that all traffic is routed through the sidecar. So if the sidecar isn't running yet and the pod tries to connect to the DB, no connection can be established. A similar case: Application running in Kubernetes cron job does not connect to database in same Kubernetes cluster.
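If that's the cause, one common mitigation is to hold the application containers until the proxy is ready. A minimal sketch using the mesh-wide setting (available in recent Istio releases), applied through an IstioOperator spec:

apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  meshConfig:
    defaultConfig:
      holdApplicationUntilProxyStarts: true   # start app containers only after the sidecar is up

The same flag can also be set per pod via the proxy.istio.io/config annotation.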
For the question on virtual services:
Each virtual service consists of a set of routing rules that are evaluated in order, letting Istio match each given request to the virtual service to a specific, real destination within the mesh.
By default, Istio configures the Envoy proxies to pass through requests to unknown services. However, you can't use Istio features to control the traffic to destinations that aren't registered in the mesh.
For the question on the host name, refer to this documentation.
The hosts field lists the virtual service’s hosts - in other words, the user-addressable destination or destinations that these routing rules apply to. This is the address or addresses the client uses when sending requests to the service.
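In practice, for routing inside the mesh, the hosts field is the Kubernetes Service name (or its fully qualified form, e.g. name.namespace.svc.cluster.local) - not the pod's "app" label. A minimal sketch, assuming the dashboard Service is named dashboard-svc-tyk-pro in a tyk namespace:

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: tyk-dashboard
  namespace: tyk                    # assumed namespace
spec:
  hosts:
    - dashboard-svc-tyk-pro         # the Service name, not a pod label
  http:
    - route:
        - destination:
            host: dashboard-svc-tyk-pro

You can list the Service names in the namespace with kubectl get svc -n tyk.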
For adding Istio on GKE to an existing cluster, please refer to this documentation.
If you want to update a cluster with the add-on, you may need to first resize your cluster to ensure that you have enough resources for Istio. As when creating a new cluster, we suggest at least a 4-node cluster with the 2 vCPU machine type. If you have an existing application on the cluster, you can find out how to migrate it so it's managed by Istio as described in the Istio documentation.
You can uninstall the add-on by following this document, which includes how to shift traffic away from the Istio ingress gateway. Please take a look at this doc for more details on installing and uninstalling Istio on GKE.
I am also adding this document for installing Istio on GKE, which covers installing it on an existing cluster to quickly evaluate Istio.

Kubernetes with a UDP load balancer with sticky sessions based on IP

I'm trying to deploy a UDP-based application on Kubernetes, but I'm having trouble finding a suitable cloud provider that offers a UDP load balancer with IP-based sticky sessions.
I have tried the DigitalOcean Kubernetes Service (DOKS), but they don't support UDP load balancers.
EKS (AWS' Kubernetes service) provides UDP support with an NLB, for example, but they don't seem to offer sticky sessions on that type of load balancer, only on the classic LB.
Is there another cloud provider (I'm thinking of GCE or Azure) that provides my required functionality out of the box?
I'm asking here to find out whether anyone else has had the same problem, has perhaps already tried various solutions, and has found the right fit.
I know the NGINX Ingress Controller (which works with AWS and an NLB with UDP support, as you stated) can expose UDP services and supports sticky sessions. I have not done this in AWS or any other cloud provider, but I have for similar use cases on bare metal.
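For reference, ingress-nginx exposes UDP services through a ConfigMap that maps an external port to a namespace/service:port entry. A minimal sketch, with the service name and ports made up for illustration:

apiVersion: v1
kind: ConfigMap
metadata:
  name: udp-services
  namespace: ingress-nginx
data:
  "5005": "default/my-udp-app:5005"   # external port -> namespace/service:port (hypothetical service)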
As @jordanm posted, the answer was to apply the stickiness parameter through the EC2 console.

Dask Hub/JupyterHub - Cannot start Python Kernel

I deployed Dask Hub (Dask Gateway + JupyterHub) on Google Kubernetes Engine using Helm. I have trouble fetching the Python kernel when I start up my Jupyter notebook instance. The error occurs when I am on my company's VPN, but not when I'm off the VPN. My guess is that this is firewall-related, but I don't know enough about the internals of the JupyterHub kernel to understand why it's being blocked. Could someone please shed some light on this?
I can't see anything wrong from inspecting the logs of the jupyter pod:
From the Browser's Developer Console, here is the error:
Update:
I added the following to the JupyterHub config:
jupyterhub:
  hub:
    extraConfig: |
      c.JupyterHub.hub_connect_ip = '0.0.0.0'
      c.JupyterHub.hub_bind_url = 'http://127.0.0.1:8000'
  singleuser:
    extraEnv:
      DASK_GATEWAY__CLUSTER__OPTIONS__IMAGE: '{JUPYTER_IMAGE_SPEC}'
This definitely has something to do with the routing of your VPN. I don't know which spawner you are using, but here are some possible solutions:
Check whether you have the correct settings for the following configuration options. hub_connect_ip is important for the internal workings of JupyterHub; bind_url is important for external traffic.
c.JupyterHub.hub_connect_ip = '0.0.0.0'
c.JupyterHub.bind_url = 'http://127.0.0.1:8000'
Switch protocols for your VPN. Try switching from UDP to TCP (if possible at all).
Enforce an SSL connection for JupyterHub. Your company's VPN provider could be blocking non-secure connections. Read the JupyterHub documentation on enabling SSL. Alternatively, you could go for GKE managed certificates; more information can be found here.
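For the managed-certificate route, a minimal sketch of a GKE ManagedCertificate resource, with a hypothetical domain; it is then referenced from an Ingress via the networking.gke.io/managed-certificates annotation:

apiVersion: networking.gke.io/v1
kind: ManagedCertificate
metadata:
  name: jupyterhub-cert
spec:
  domains:
    - hub.example.com   # hypothetical domain for your hub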

AWS Cognito failing to authenticate after adding istio sidecar to pods

I added Istio to my EKS cluster. Sidecars are getting added to every pod, and my Kiali dashboard is also up.
But after that, I am not able to authenticate my APIs. I checked all the logs; it turned out that my pods are not able to connect to the Cognito server. I am getting the following error:
Unhandled rejection TypeError: Unable to generate certificate due to
RequestError: Error: connect ECONNREFUSED 13.235.142.215:443
I went inside my pod to check whether it could reach any public DNS name; I was able to ping google.com but not aws.amazon.com.
To cross-verify, I removed Istio from my cluster and it started working.
I found a GitHub issue somewhat matching mine, but it has been closed without any solution (https://github.com/istio/istio/issues/10848).
Can anyone help me with this issue?
Thanks
Found the issue: my Istio setup was trying to connect to AWS Cognito through SSL and it didn't have the certificates. Putting the certificates into Istio solved this.
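For anyone hitting a similar egress failure to an external HTTPS endpoint: a related approach (not necessarily what was done here) is to register the external host with a ServiceEntry so the mesh knows how to route TLS traffic to it. A sketch, with the regional Cognito hostname assumed - adjust it to your region:

apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: cognito-external
spec:
  hosts:
    - cognito-idp.us-east-1.amazonaws.com   # assumed regional endpoint
  location: MESH_EXTERNAL
  ports:
    - number: 443
      name: tls
      protocol: TLS
  resolution: DNS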