I have some basic questions about Istio. I installed Istio for my Tyk API gateway. Then I found that simply installing Istio will cause all traffic between the Tyk pods to be blocked. Is this the default behaviour for Istio? The Tyk gateway cannot communicate with the Tyk dashboard.
When I rebuild my deployment without Istio, everything works fine.
I have also read that Istio can be configured with virtual services to perform traffic routing. Is this something I need to do for every default installation of Istio? Meaning, if I don't create any virtual services, will Istio block all traffic by default?
Secondly, I understand a virtual service is created as a YAML file applied as a CRD. Regarding the host name defined in the virtual service rules: in a default Kubernetes cluster implementation on Google Cloud, how do I find out the host name of my application?
Lastly, if I install Tyk first and later install Istio, and I have created the necessary label in Tyk's namespace for the proxy to be injected, can I just perform a rolling upgrade of my Tyk pods to have Istio start the injection?
For example, I have these labels in my Tyk dashboard service. Do I use the value called "app" in my virtual service YAML?
labels:
app: dashboard-svc-tyk-pro
app.kubernetes.io/managed-by: Helm
chart: tyk-pro-0.8.1
heritage: Helm
release: tyk-pro
Sorry for all the basic questions!
Regarding the question about the Tyk gateway not being able to communicate with the Tyk dashboard:
I think the problem is that your pod tries to connect to the database before the Istio sidecar is ready, and thus the connection can't be established.
Istio runs an init container that configures the pod's route table so that all traffic is routed through the sidecar. So if the sidecar isn't running yet when the other pod tries to connect to the db, no connection can be established. Example case: Application running in Kubernetes cron job does not connect to database in same Kubernetes cluster.
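If you are on Istio 1.7 or later, one common mitigation is to hold the application container until the sidecar is ready. A minimal sketch of the pod-template annotation (this goes under spec.template.metadata in your Tyk Deployment):
annotations:
  # start the app container only after the Envoy sidecar is ready
  proxy.istio.io/config: '{ "holdApplicationUntilProxyStarts": true }'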
Regarding the question on virtual services:
Each virtual service consists of a set of routing rules that are evaluated in order, letting Istio match each given request to the virtual service to a specific real destination within the mesh.
By default, Istio configures the Envoy proxies to passthrough requests to unknown services. However, you can’t use Istio features to control the traffic to destinations that aren’t registered in the mesh.
Regarding the question on the hostname, refer to this documentation.
The hosts field lists the virtual service’s hosts - in other words, the user-addressable destination or destinations that these routing rules apply to. This is the address or addresses the client uses when sending requests to the service.
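To connect this to the labels quoted in the question: the hosts field takes the Service's name or its cluster DNS name, not the app label. A minimal sketch, assuming the dashboard Service is named dashboard-svc-tyk-pro and lives in a tyk namespace:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: tyk-dashboard
  namespace: tyk
spec:
  hosts:
  - dashboard-svc-tyk-pro.tyk.svc.cluster.local   # the Service DNS name, not the app label
  http:
  - route:
    - destination:
        host: dashboard-svc-tyk-pro.tyk.svc.cluster.local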
For adding Istio on GKE to an existing cluster, please refer to this documentation.
If you want to update a cluster with the add-on, you may need to first resize your cluster to ensure that you have enough resources for Istio. As when creating a new cluster, we suggest at least a 4 node cluster with the 2 vCPU machine type. If you have an existing application on the cluster, you can find out how to migrate it so it's managed by Istio as mentioned in the Istio documentation.
You can uninstall the add-on by following this document, which includes how to shift traffic away from the Istio ingress gateway. Please take a look at this doc for more details on installing and uninstalling Istio on GKE.
I am also adding this document for installing Istio on GKE, which also covers installing it on an existing cluster to quickly evaluate Istio.
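On the last question about rolling upgrades: once the namespace label is in place, new pods get the sidecar injected automatically, so restarting the workloads is enough. A sketch, with the namespace name assumed:
kubectl label namespace tyk istio-injection=enabled --overwrite
kubectl rollout restart deployment -n tyk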
Related
tl;dr
Can Istio do proxy resolution at mesh level? I want to be able to avoid defining HTTPS_PROXY in my service container.
Here is the setup:
My enterprise has an AKS cluster
In the AKS cluster we have Istio up and running.
Kubernetes Version 1.22.6
Istio Version 1.14.1
My service is running inside AKS and my pod has sidecar injection enabled.
SCENARIO:
My service needs to access two external services - www.google.com and secured.my-enterprise.com.
secured.my-enterprise.com is a service that is running in a private network on enterprise premises, NOT on Azure Cloud.
The only way to access secured.my-enterprise.com is to specify an HTTPS_PROXY which points to proxy.enterprise-proxy.svc.cluster.local.
This proxy (running inside AKS) knows how to get requests from AKS to on-premise infrastructure, navigating all the fancy network peering.
I don't, however, need the HTTPS_PROXY to access www.google.com.
Currently I am solving this problem the usual way: with NO_PROXY=.google.com.
What I would like to be able to do:
Define a Virtual Service / Service Entry for host: www.google.com and egress out of AKS (the plain version of this is sketched below).
Define a Virtual Service / Service Entry for host: secured.my-enterprise.com, defining proxy.enterprise-proxy.svc.cluster.local as the HTTPS_PROXY to use for it, and egress out of AKS.
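For reference, the first (unproxied) case is the half I can already express; a sketch of that ServiceEntry follows, and it's the second, proxied case that I can't find an equivalent for:
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: google
spec:
  hosts:
  - www.google.com
  location: MESH_EXTERNAL
  resolution: DNS
  ports:
  - number: 443
    name: https
    protocol: TLS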
What would I achieve with this?
My service does not have to manage a NO_PROXY based on what it needs to do, and the service mesh can handle figuring out how to reach an external resource.
Is there something in the Istio toolset that can help me achieve this?
I have gone through Service Entry documentation and can't for the life of me understand how to add a proxy into the mix :)
Thanks!!
PS: Created a topic for this on discuss.istio.io as well
Scenario
Istio version 1.5.0 on top of EKS 1.14.
Enabled components:
Base
Pilot
NOTE: Istio 1.5.0 deprecates Mixer and moves to Telemetry V2, which happens inside the Envoy proxy sidecar.
I want to use Istio to support some metrics out of the box.
Here's the flow
my computer -> Gateway -> Virtual Service A -> Virtual Service B
I made sure that:
K8s Service objects have label app
K8s Deployment objects and their pod templates have label app
I can run the flow just fine, which means the configurations are correct.
The problem is with telemetry.
istio_requests_total{connection_security_policy="unknown",destination_app="unknown",destination_canonical_revision="latest",destination_canonical_service="unknown",destination_principal="spiffe://cluster.local/ns/default/sa/default",destination_service="svcb.default.svc.cluster.local",destination_service_name="svcb.default.svc.cluster.local",destination_service_namespace="unknown",destination_version="unknown",destination_workload="unknown",destination_workload_namespace="unknown",grpc_response_status="0",instance="10.2.55.80:15090",job="envoy-stats",namespace="default",pod_name="svca-77969dc86b-964p5",reporter="source",request_protocol="grpc",response_code="200",response_flags="-",source_app="svca",source_canonical_revision="latest",source_canonical_service="svca",source_principal="spiffe://cluster.local/ns/default/sa/default",source_version="unknown",source_workload="svca",source_workload_namespace="default"}
Question
Why are most destination-* labels unknown?
The official Istio mesh dashboards typically filter metrics by reporter=destination. Why do all of my istio_requests_total series have reporter=source?
Oh right, after much digging, here's the answer.
Istio supports proxying all TCP traffic by default, but in order to provide additional capabilities, such as routing and rich metrics, the protocol must be determined. This can be done automatically or explicitly specified
I didn't specify the port name in my Service resource. Once I did that, the problem was resolved.
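For anyone else who hits this, the fix was literally just the port name. A sketch of the corrected Service, using svcb from the metric above (the port number is an assumption):
apiVersion: v1
kind: Service
metadata:
  name: svcb
spec:
  selector:
    app: svcb
  ports:
  - name: grpc        # the protocol prefix in the port name is what Istio uses
    port: 8080
    targetPort: 8080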
I am running a minimal stateful database service on GKE, on a single-node cluster. I've set up the database as a StatefulSet on a single pod for now. The database exposes a management console on a particular port, along with the mandatory database port. I am attempting to do two things:
Expose the management port over a global HTTP(S) load balancer.
Expose the database port outside of GKE, to be consumed by the likes of Cloud Functions or App Engine applications.
My StatefulSet is running fine and I can see from the container logs that the database has properly booted up and is listening on the required ports.
I am attempting to set up a standalone NEG (ref: https://cloud.google.com/kubernetes-engine/docs/how-to/standalone-neg) using a simple ClusterIP service.
The ClusterIP service comes up fine and I can see it using
kubectl get service service-name
but I don't see the NEG set up as such; the following command returns nothing:
$ gcloud compute network-endpoint-groups list
Listed 0 items.
My pod exposes port 8080, my service maps 51000 to 8080, and I have provided the NEG annotation:
cloud.google.com/neg: '{"exposed_ports": {"51000":{}}}'
I don't see any errors as such, but neither do I see a NEG created/listed.
Any suggestions on how I would go about debugging this?
As a followup question...
When exposing the NEG over a global load balancer, how do I enforce authn? I'm OK with either service account roles or OAuth/OpenID.
Would I be able to expose multiple ports using a single NEG? For example, if I wanted to expose one port to my global load balancer and another to local services, is this possible with a single NEG, or should I expose each port using a dedicated ClusterIP service?
Where can I find the documentation/specification for Google Kubernetes annotations? I tried to expose two ports on the NEG using the following annotation syntax. Is that even supported/meaningful?
cloud.google.com/neg: '{"exposed_ports": {"51000":{},"51010":{}}}'
Thanks in advance!
In order to create the service that is backed by a network endpoint group, you need to be working on a GKE Cluster that is VPC Native:
https://cloud.google.com/kubernetes-engine/docs/how-to/standalone-neg#before_you_begin
When you create a new cluster, this option is disabled by default and you must enable it upon creation. You can confirm if your cluster is VPC Native going to your Cluster details in GKE. It should appear like this:
VPC-native (alias IP) Enabled
If the cluster is not VPC Native, you won’t be able to use this feature, as described in the restrictions:
https://cloud.google.com/kubernetes-engine/docs/how-to/alias-ips#restrictions
In case you have VPC Native enabled, make sure as well that the pods carry the labels the service selector expects (“purpose:” and “topic:” in the documentation's example), so that they are members of the service:
kubectl get pods --show-labels
You can also create multi-port services as it is described on Kubernetes documentation:
https://kubernetes.io/docs/concepts/services-networking/service/#multi-port-services
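Putting those pieces together, a single multi-port ClusterIP Service carrying the NEG annotation for both ports might look like the sketch below (the service name, selector, and target ports are assumptions based on the question):
apiVersion: v1
kind: Service
metadata:
  name: db-service                      # assumed name
  annotations:
    cloud.google.com/neg: '{"exposed_ports": {"51000":{},"51010":{}}}'
spec:
  type: ClusterIP
  selector:
    app: database                       # must match the StatefulSet pod labels
  ports:
  - name: management
    port: 51000
    targetPort: 8080
  - name: db
    port: 51010
    targetPort: 5432                    # assumed database port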
I'm running Istio Ingress Gateway in a GKE cluster. The Service runs with a NodePort. I'd like to connect it to a Google backend service. However, we need a health check that must run against Istio. Do you know if Istio exposes any HTTP endpoint to run a health check and verify its status?
Per this installation guide, "Istio requires no changes to the application itself. Note that the application must use HTTP/1.1 or HTTP/2.0 protocol for all its HTTP traffic because the Envoy proxy doesn't support HTTP/1.0: it relies on headers that aren't present in HTTP/1.0 for routing."
The healthcheck doesn't necessarily run against Istio itself, but against the whole stack behind the IP addresses you configured for the load balancer backend service. It simply requires a 200 response on / when invoked with no host name.
You can configure this by installing a small service like httpbin as the default path for your gateway.
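A rough sketch of that wiring, assuming the Istio httpbin sample is deployed in the default namespace and the Gateway resource is named my-gateway:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: health-default
spec:
  hosts:
  - "*"                  # also matches requests that carry no host name
  gateways:
  - my-gateway           # assumed Gateway resource name
  http:
  - match:
    - uri:
        exact: /
    route:
    - destination:
        host: httpbin.default.svc.cluster.local
        port:
          number: 8000   # the httpbin sample's service port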
You might also consider changing your Service to a LoadBalancer type, annotated to be internal to your network (no public IP). This will generate a Backend Service, complete with healthcheck, which you can borrow for your other load balancer. This method has worked for me with nesting load balancers (to migrate load) but not for a proxy like Google's IAP.
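For that internal LoadBalancer variant, the GKE annotation is a one-liner on the Service; a sketch, assuming the stock istio-ingressgateway labels:
apiVersion: v1
kind: Service
metadata:
  name: istio-ingressgateway-internal
  annotations:
    # keep the load balancer on the VPC-internal network (no public IP)
    cloud.google.com/load-balancer-type: "Internal"
spec:
  type: LoadBalancer
  selector:
    istio: ingressgateway
  ports:
  - name: http2
    port: 80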
Following https://www.mesosphere.com/amazon/, I created a DCOS cluster on Amazon AWS.
Then I followed http://kubernetes.io/v1.1/docs/getting-started-guides/dcos.html and installed Kubernetes on it.
Then I followed http://kubernetes.io/v1.1/docs/user-guide/quick-start.html
I was able to launch pods successfully.
Then I ran into a problem with exposing the service to the public.
$ dcos kubectl expose rc my-nginx --port=80 --type=LoadBalancer
service "my-nginx" exposed
$ dcos kubectl get svc my-nginx
NAME CLUSTER_IP EXTERNAL_IP PORT(S) SELECTOR AGE
my-nginx 10.10.10.32 80/TCP run=my-nginx 8s
The EXTERNAL_IP address does not exist. According to the tutorial, it should. So I'm thinking it has something to do with the fact that my Kubernetes is running inside DCOS.
Please help. Thank you very much!
Kubernetes on Mesos/DCOS does not support automatic LoadBalancer creation yet.
As the quick start states:
Through integration with some cloud providers (for example Google Compute Engine and AWS EC2), Kubernetes enables you to request that it provision a public IP address for your application.
AFAIK, only GCE, GKE, and AWS support automatic LoadBalancer creation so far.
Another key difference about DCOS (compared to kubernetes) is that it comes by default with two zones: public and private. So nothing scheduled on the private nodes is externally accessible without a reverse-proxy on the public nodes.
Additionally, Kubernetes on DCOS does not yet support IP-per-container. Support for IP-per-container is under development with the DCOS/Calico integration. Some community members have also reportedly attempted using cluster-wide docker overlay networking.
For now, there are a few alternative options for reaching your pod externally:
Deploy your pod on all the public slaves (using resource role annotations) and hostPort:80. Then hit the address of the DCOS public slave AWS ELB.
Create your own load balancer nginx pod (e.g. service-loadbalancer) and schedule it on the public slaves with hostPort:80. Then hit the IP of the host node it's on. (A sketch of the hostPort part follows below.)
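As a sketch of the hostPort part of either option (image and names are placeholders):
apiVersion: v1
kind: Pod
metadata:
  name: nginx-lb
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
      hostPort: 80   # bind directly to port 80 on the public slave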
It's definitely a priority of the Mesosphere Kubernetes Team to make this experience smoother on DCOS. Hopefully the solution will include automatic LoadBalancer creation.