Can't create/get Istio objects via Kubernetes REST API - istio

We can't get Istio objects via the Kubernetes REST API.
Example:
kubectl get gateways works and shows all Istio gateways in the default namespace.
curl ..../api/v1/namespaces/default/pods shows all the pods deployed in the default namespace.
curl ..../api/v1/namespaces/default/gateways returns 404.
The same is true for virtualservices, serviceentries, and any other Istio objects.
We have one REST API server running in the cluster. We suspect the problem may be that it serves API version v1, while the Istio object creation YAML files reference API version networking.istio.io/v1alpha3.
This is confusing, since we can create and get Istio objects via the kubectl command but cannot do the same by issuing an HTTP request to the Kubernetes REST API server. Any insight would be welcome. Thanks.

I got a couple of ideas when I checked the logs of the Kubernetes API server, which runs as a pod in the kube-system namespace.
Whenever you need the Kubernetes API server to return an Istio object created under API version networking.istio.io/v1alpha3, don't issue the HTTP request against the core API path like .../api/v1/namespaces/default/gateways; use the API group path instead: .../apis/networking.istio.io/v1alpha3/namespaces/default/gateways (or .../apis/networking.istio.io/v1alpha3/gateways to list across all namespaces). Replace gateways with the plural name of the Istio object you're interested in.
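As a sketch, the difference between the two URL families can be spelled out like this (paths only; the API server host, port, and bearer token are whatever your cluster uses):

```shell
# Core (built-in) resources such as pods live under /api/v1.
CORE_PATH="/api/v1/namespaces/default/pods"

# CRD-backed resources such as Istio gateways live under /apis/<group>/<version>.
GROUP="networking.istio.io"
VERSION="v1alpha3"
CRD_PATH="/apis/${GROUP}/${VERSION}/namespaces/default/gateways"

echo "$CORE_PATH"
echo "$CRD_PATH"
# With a reachable API server you would then call, for example:
#   curl -s -H "Authorization: Bearer ${TOKEN}" "https://<apiserver>${CRD_PATH}"
```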

Related

AWSEKS - Non Istio mesh Pod to pod connection issue after installing Istio 1.13.0

In Kubernetes (AWS EKS) v1.20 I have a default namespace with two pods, connected via a Service of type LoadBalancer (CLB). Requesting the URI of the CLB worked fine and routed to either of the pods as required.
After installing Istio 1.13.0, with the istio-injection=enabled label set on a different namespace, communication between the non-Istio pods (which have no sidecar injection) doesn't seem to work.
What I mean by "doesn't work" (the three scenarios below always worked without Istio):
1. curl requests sent to https://default-nspods/apicall always worked with the non-Istio pods, i.e. the CLB always forwarded requests to the 2 pods as required.
2. A curl request from inside pod1 to pod2's IP worked, and vice versa.
3. A curl request to pod2's URI from the node of pod1 worked, and vice versa.
Post-installation, scenarios 2 and 3 don't work. The CLB also has trouble reaching the NodePort of the pods at times.
I've checked istioctl proxy-config endpoints and the deployments where sidecar injection is enabled; the output doesn't show any other non-mesh service/pod details.
Istio Version: 1.13.0
Ingress Gateway: Enabled (Loadbalancer mode)
Egress Gateway: Disabled
No addons like Kiali, Prometheus
Istio Operator-based installation with modified YAML values.
Single cluster installation i.e., ISTIO_MESH_ROUTER_MODE='standard'
Istio pods, Envoy sidecars, and proxy-config don't show any errors.
I'm kind of stuck; please let me know if I need to check kube-proxy, iptables, or somewhere else.
I've uninstalled Istio using the "istioctl x uninstall --purge" option and re-installed, but now the non-mesh pods don't seem to work whether Istio is installed or not.
Istio pods and pods in the istio-injection namespace don't have issues.

Istio configuration on GKE

I have some basic questions about Istio. I installed Istio for my Tyk API gateway. Then I found that simply installing Istio will cause all traffic between the Tyk pods to be blocked. Is this the default behaviour for Istio? The Tyk gateway cannot communicate with the Tyk dashboard.
When I rebuild my deployment without Istio, everything works fine.
I have also read that Istio can be configured with virtual services to perform traffic routing. Is this something I need to do for every default installation of Istio? Meaning, if I don't create any virtual services, will Istio block all traffic by default?
Secondly, I understand a virtual service is created as a YAML file applied as a CRD. The host name defined in the virtual service rules - in a default Kubernetes cluster implementation on Google Cloud, how do I find out the host name of my application?
Lastly, if I install Tyk first and later install Istio, and I have created the necessary label in Tyk's namespace for the proxy to be injected, can I just perform a rolling upgrade of my Tyk pods to have Istio start the injection?
For example, I have these labels in my Tyk dashboard service. Do I use the value called "app" in my virtual service YAML?
labels:
  app: dashboard-svc-tyk-pro
  app.kubernetes.io/managed-by: Helm
  chart: tyk-pro-0.8.1
  heritage: Helm
  release: tyk-pro
Sorry for all the basic questions!
For the question on the Tyk gateway not being able to communicate with the Tyk dashboard:
I think the problem is that your pod tries to connect to the database before the Istio sidecar is ready, and thus the connection can't be established.
Istio runs an init container that configures the pod's route table so that all traffic is routed through the sidecar. So if the sidecar isn't running yet and the other pod tries to connect to the db, no connection can be established. (Example case: an application running in a Kubernetes cron job does not connect to a database in the same Kubernetes cluster.)
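If that startup race is the culprit, one possible mitigation (available in recent Istio releases, so verify against your version) is to ask Istio to hold the application container until the sidecar is ready, via a pod annotation. A minimal sketch, with a hypothetical deployment name:

```yaml
# Sketch: delay app startup until the Envoy sidecar is ready.
# Check the annotation against your Istio version's documentation.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tyk-dashboard   # hypothetical name
spec:
  template:
    metadata:
      annotations:
        proxy.istio.io/config: '{ "holdApplicationUntilProxyStarts": true }'
```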
For the question on virtual services:
Each virtual service consists of a set of routing rules that are evaluated in order, letting Istio match each given request to the virtual service to a specific, real destination within the mesh.
By default, Istio configures the Envoy proxies to pass through requests to unknown services. However, you can't use Istio features to control the traffic to destinations that aren't registered in the mesh.
For the question on the hostname, refer to this documentation.
The hosts field lists the virtual service's hosts - in other words, the user-addressable destination or destinations that these routing rules apply to. This is the address or addresses the client uses when sending requests to the service.
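To the label question above: the hosts field matches the address clients use, which for an in-cluster service is the Service's DNS name (built from the Service name and namespace), not the app label. A minimal sketch, assuming the dashboard Service is named dashboard-svc-tyk-pro in a namespace called tyk:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: dashboard-vs   # hypothetical name
  namespace: tyk       # assumed namespace
spec:
  hosts:
  # The Service's DNS name, not the value of the app label:
  - dashboard-svc-tyk-pro.tyk.svc.cluster.local
  http:
  - route:
    - destination:
        host: dashboard-svc-tyk-pro.tyk.svc.cluster.local
```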
For adding Istio on GKE to an existing cluster, please refer to this documentation.
If you want to update a cluster with the add-on, you may need to first resize your cluster to ensure that you have enough resources for Istio. As when creating a new cluster, we suggest at least a 4-node cluster with the 2 vCPU machine type. If you have an existing application on the cluster, you can find out how to migrate it so it's managed by Istio as described in the Istio documentation.
You can uninstall the add-on following the documentation, which includes shifting traffic away from the Istio ingress gateway. Please take a look at this doc for more details on installing and uninstalling Istio on GKE.
Also adding this document for installing Istio on GKE, which also covers installing it on an existing cluster to quickly evaluate Istio.

Set static response from Istio Ingress Gateway

How do you set a static 200 response in Istio's Ingress Gateway?
We have a situation where we need an endpoint to return a small bit of static content (a bootstrap URL). We could even put it in a header. Can Istio host something like that or do we need to run a pod for no other reason than to return a single word?
Specifically I am looking for a solution that returns 200 via Istio configuration, not a pod that Istio routes to (which is quite a common example and available elsewhere).
You have to do it manually by creating a VirtualService that points to a service attached to a pod. So first you have to create the pod and then attach a service to it, even if your application only returns a single word.
Istio Gateways are responsible for opening ports on the relevant Istio gateway pods and receiving traffic for hosts. That's it.
Istio VirtualServices are what get "attached" to Gateways and are responsible for defining the routes the gateway should implement. You can have multiple VirtualServices attached to a Gateway, but not for the same domain.
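Putting that answer into config form, here is a minimal sketch of a Gateway plus a VirtualService that routes the bootstrap path to a small backing service (all names and hosts below are placeholders):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: bootstrap-gw            # hypothetical name
spec:
  selector:
    istio: ingressgateway       # the default ingress gateway pods
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "bootstrap.example.com"   # placeholder host
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: bootstrap-vs            # hypothetical name
spec:
  hosts:
  - "bootstrap.example.com"
  gateways:
  - bootstrap-gw
  http:
  - match:
    - uri:
        exact: /bootstrap
    route:
    - destination:
        host: bootstrap-svc     # the small service backed by a pod
        port:
          number: 80
```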

Host and Port to access the Kubernetes api

I can access the kubernetes api to get the deployments using Kubernetes proxy.
I get the list of deployments with:
127.0.0.1:8001/apis/apps/v1/deployments
This gets the deployments locally. But what HOST and PORT should I use to access the deployments not locally but from the AWS server?
I am new to Kubernetes, if the question is not understandable please let me know.
Any help is appreciated.
kubectl proxy forwards your traffic locally, adding your authentication for you.
Your public API endpoint can be exposed in different ways (or it can be completely inaccessible from the public network) depending on your cluster setup.
In most cases it will be exposed at something like https://api.my.cluster.fqdn, or with a custom port like https://api.my.cluster.fqdn:6443, and it will require authentication, e.g. obtaining a bearer token or using a client certificate. It is reasonable to use a client library to connect to the API.
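As a sketch of what that looks like in practice (the endpoint below is a placeholder, and the commented kubectl commands assume a configured kubeconfig and kubectl >= 1.24 for kubectl create token):

```shell
# Discover the public endpoint from your kubeconfig:
#   APISERVER=$(kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}')
# Obtain a bearer token for a service account:
#   TOKEN=$(kubectl create token default)
APISERVER="https://api.my.cluster.fqdn:6443"   # placeholder endpoint
URL="${APISERVER}/apis/apps/v1/deployments"
echo "$URL"
# Then, with the cluster's CA certificate:
#   curl -s --cacert ca.crt -H "Authorization: Bearer ${TOKEN}" "$URL"
```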

Kubernetes front end deployment timing out when requesting api deployment

Let me start this by saying I am fairly new to k8s. I'm using kops on aws.
I currently have 3 deployments on a cluster.
FrontEnd nginx image serving an angular web app. One pod. External service.
socket.io server. Internal service. (This is a chat application, and we decided to separate this server from our API. Was this a good idea?)
API that is requested by both the socket.io server and the web application. Internal Service (should it be external?)
The socket.io deployment and the API seem to be able to communicate through the cluster IPs and the corresponding services I have set up for the deployments; however, the web app times out when querying the API.
From the web app, I am querying the API using the API's ClusterIP address. Should I be requesting a different address?
Additionally, what is the best way to configure these addresses in my files without having to change them each time I create a new deployment? (The ClusterIP addresses change every time you tear down and recreate the deployment.)
If I understood correctly, your frontend web application depends on the API server and sends requests to it. Because the Angular app runs in the user's browser, its requests originate outside the cluster, so your API service should be available from outside the cluster. That means it should be exposed as a NodePort or LoadBalancer service type.
P.S. You can refer to a service by its ClusterIP only from inside the cluster.
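On the address-churn point: for in-cluster traffic (e.g. socket.io server to API) you can avoid hardcoding ClusterIPs entirely by using the Service's DNS name, which stays stable across redeployments. The name follows a fixed pattern (the service and namespace names below are examples):

```shell
SVC="api"       # example Service name
NS="default"    # example namespace
# Kubernetes cluster DNS resolves <service>.<namespace>.svc.cluster.local
HOST="${SVC}.${NS}.svc.cluster.local"
echo "$HOST"
# e.g. from any pod in the cluster:  curl "http://${HOST}:8080/endpoint"
```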