Unable to view kiali dashboard - istio

I installed Istio version 1.6.9 with the steps below:
Install Istio Version 1.6.9
wget https://github.com/istio/istio/releases/download/1.6.9/istio-1.6.9-linux-amd64.tar.gz
tar -xzvf istio-1.6.9-linux-amd64.tar.gz
cd istio-1.6.9
cd bin/
sudo mv istioctl /usr/local/bin/
istioctl --version
istioctl install --set profile=demo
I want to access the Kiali dashboard but I am unable to figure out how to access it.
I can see Kiali running as a pod:
kubectl get pods -n istio-system
NAME READY STATUS RESTARTS AGE
grafana-5dc4b4676c-wcb59 1/1 Running 0 32h
istio-egressgateway-5889bb8976-stlqd 1/1 Running 0 32h
istio-ingressgateway-699d97bdbf-w6x46 1/1 Running 0 32h
istio-tracing-8584b4d7f9-p66wh 1/1 Running 0 32h
istiod-86d4497c9-xv2km 1/1 Running 0 32h
kiali-6f457f5964-6sssn 1/1 Running 0 32h
prometheus-5d64cf8b79-2kdww 2/2 Running 0 32h
I am able to see the kiali service as well:
kubectl get svc -n istio-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
grafana ClusterIP 10.100.101.71 <none> 3000/TCP 32h
istio-egressgateway ClusterIP 10.100.34.75 <none> 80/TCP,443/TCP,15443/TCP 32h
istio-ingressgateway LoadBalancer 10.100.84.203 a736b038af6b5478087f0682ddb4dbbb-1317589033.ap-southeast-2.elb.amazonaws.com 15021:31918/TCP,80:32736/TCP,443:30611/TCP,31400:30637/TCP,15443:31579/TCP 32h
istiod ClusterIP 10.100.111.159 <none> 15010/TCP,15012/TCP,443/TCP,15014/TCP,853/TCP 32h
jaeger-agent ClusterIP None <none> 5775/UDP,6831/UDP,6832/UDP 32h
jaeger-collector ClusterIP 10.100.84.202 <none> 14267/TCP,14268/TCP,14250/TCP 32h
jaeger-collector-headless ClusterIP None <none> 14250/TCP 32h
jaeger-query ClusterIP 10.100.165.216 <none> 16686/TCP 32h
kiali ClusterIP 10.100.159.127 <none> 20001/TCP 32h
prometheus ClusterIP 10.100.113.255 <none> 9090/TCP 32h
tracing ClusterIP 10.100.77.39 <none> 80/TCP 32h
zipkin ClusterIP 10.100.247.201 <none> 9411/TCP
I can also see that the secret is deployed, as below:
kubectl get secrets
NAME TYPE DATA AGE
default-token-ghz6r kubernetes.io/service-account-token 3 8d
sh.helm.release.v1.aws-efs-csi-driver.v1 helm.sh/release.v1 1 6d
[centos@ip-10-0-0-61 ~]$ kubectl get secrets -n istio-system
NAME TYPE DATA AGE
default-token-z6t2v kubernetes.io/service-account-token 3 32h
istio-ca-secret istio.io/ca-root 5 32h
istio-egressgateway-service-account-token-c8hfp kubernetes.io/service-account-token 3 32h
istio-ingressgateway-service-account-token-fx65w kubernetes.io/service-account-token 3 32h
istio-reader-service-account-token-hxsll kubernetes.io/service-account-token 3 32h
istiod-service-account-token-zmtsv kubernetes.io/service-account-token 3 32h
kiali Opaque 2 32h
kiali-service-account-token-82gk7 kubernetes.io/service-account-token 3 32h
prometheus-token-vs4f6 kubernetes.io/service-account-token 3 32h
I ran all of the above commands on my Linux bastion host. I am hoping that if I open port 20001 on my Linux bastion as well as in the SG, I should be able to access it with admin/admin credentials, like below:
http://10.100.159.127:20001/
My second question: is Istio, as software, running on my Linux bastion server or on my EKS cluster?
My feeling is that it is running on the local bastion server, but since we used the commands below
kubectl label ns default istio-injection=enabled
kubectl get ns
kubectl label ns jenkins istio-injection=enabled
kubectl label ns spinnaker istio-injection=enabled
Any pods running in these namespaces will have an Envoy proxy sidecar injected automatically, correct?
P.S: I did the below:
nohup istioctl dashboard kiali &
I opened the port at the SG level and at the OS level too... still not able to access the Kiali dashboard:
http://3.25.217.61:40235/kiali
[centos@ip-10-0-0-61 ~]$ wget http://3.25.217.61:40235/kiali
--2020-09-11 15:56:18-- http://3.25.217.61:40235/kiali
Connecting to 3.25.217.61:40235... failed: Connection refused.
curl ifconfig.co
3.25.217.61
sudo netstat -nap|grep 40235
tcp 0 0 127.0.0.1:40235 0.0.0.0:* LISTEN 29654/istioctl
tcp6 0 0 ::1:40235 :::* LISTEN 29654/istioctl
I am truly unable to understand what is going wrong...

Just run istioctl dashboard kiali.
Istioctl will create a proxy. Now log in with admin/admin credentials.
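Note that this proxy binds to localhost on the machine where istioctl runs (your netstat output shows it listening on 127.0.0.1:40235 only), which is why opening the port in the SG does not help. If you really want to reach the dashboard from outside the bastion, one option — a sketch, not the only way — is a port-forward bound to all interfaces:
kubectl -n istio-system port-forward --address 0.0.0.0 svc/kiali 20001:20001
Then http://<bastion-public-ip>:20001/kiali should work once port 20001 is open in the SG and OS firewall.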
To answer the second question:
Istio is running on your cluster and is configured with istioctl, which is installed on your bastion.
By labeling a namespace with istio-injection=enabled, the sidecar will be injected automatically. If necessary, you can disable the injection for a pod by annotating it like this:
spec:
  selector:
    matchLabels:
      ...
  template:
    metadata:
      labels:
        ...
      annotations:
        sidecar.istio.io/inject: "false"
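To confirm injection is working, pods created in a labeled namespace after the label was applied should show an extra istio-proxy container, i.e. 2/2 under READY; existing pods have to be recreated to pick up the sidecar:
kubectl get pods -n jenkins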
Update
To access kiali without istioctl/kubectl proxy, you have three options. As you found correctly, it depends on the kiali service type:
ClusterIP (default)
To use the default, set up a route from the ingress gateway to the kiali service. This is done using a VirtualService and a DestinationRule (a sketch follows below). You can then access kiali at e.g. <ingress-gateway-loadbalancer-id>.amazonaws.com/kiali
NodePort
You can change the type to NodePort by setting the corresponding value on istio installation and access kiali at <node-ip>:<node-port>/kiali
LoadBalancer
You can change the type to LoadBalancer by setting the corresponding value on istio installation. A second Elastic Load Balancer will be created on AWS and the kiali service will get an external hostname, like the ingressgateway service has. You can then access it at <kiali-loadbalancer-id>.amazonaws.com:20001/kiali
I would recommend option 1. It's best practice for production and you don't have to dig too deep into the istio installation config, which can be overwhelming in the beginning.
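For option 1, here is a minimal sketch adapted from the pattern the Istio docs use for remotely exposing telemetry addons (names such as kiali-gateway are placeholders, hosts and TLS are simplified, so treat it as a starting point, not a drop-in config):
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: kiali-gateway
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway   # bind to the default ingress gateway
  servers:
  - port:
      number: 80
      name: http-kiali
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: kiali
  namespace: istio-system
spec:
  hosts:
  - "*"
  gateways:
  - kiali-gateway
  http:
  - match:
    - uri:
        prefix: /kiali      # route /kiali on the gateway to the kiali service
    route:
    - destination:
        host: kiali
        port:
          number: 20001
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: kiali
  namespace: istio-system
spec:
  host: kiali
  trafficPolicy:
    tls:
      mode: DISABLE         # kiali serves plain HTTP on 20001
Apply it with kubectl apply -f and the dashboard should then be reachable at http://<ingress-gateway-loadbalancer-id>.amazonaws.com/kiali.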

Check the port and type of the kiali service with the following command.
kubectl get svc -n istio-system
If the type is NodePort, you can open <node-ip>:<node-port>; if the type is ClusterIP, you have to expose it, e.g. by port-forwarding.
Expose Kiali either via Kubernetes port forwarding or via a gateway. The following forwarding command exposes Kiali on localhost, port 20001:
kubectl -n istio-system port-forward svc/kiali 20001:20001 &
Then check localhost:20001 for the Kiali dashboard.
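Before opening a browser you can sanity-check the forward (assuming the default /kiali web root of the demo profile; adjust the path if yours differs):
curl -sI http://localhost:20001/kiali/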

If Kiali is exposed via a Kubernetes Ingress: https://{domain or ingress ip}/kiali
kubectl get ingress kiali -n istio-system -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
Or (for any kind of platform)
oc port-forward svc/kiali 20001:20001 -n istio-system
kubectl port-forward svc/kiali 20001:20001 -n istio-system
kubectl port-forward $(kubectl get pod -n istio-system -l app=kiali -o jsonpath='{.items[0].metadata.name}') -n istio-system 20001

Related

Why can't I externally access the example Istio endpoint when using a namespace? (w/ minikube)

I am following the instructions provided here using minikube. The main difference is that I am doing everything in the 'test' namespace. Everything looks to be correct; I get an IP and a port...
~/Code/lib/tmp >echo "http://$GATEWAY_URL/productpage"
http://192.168.49.2:31914/productpage
But when I try to access that in the browser it times out. I did notice in the gateway-spec it is pointing at 8080...
~/Code/lib/tmp >kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports}'
[{"name":"status-port","nodePort":31745,"port":15021,"protocol":"TCP","targetPort":15021},{"name":"http2","nodePort":31914,"port":80,"protocol":"TCP","targetPort":8080},{"name":"https","nodePort":31244,"port":443,"protocol":"TCP","targetPort":8443},{"name":"tcp","nodePort":30472,"port":31400,"protocol":"TCP","targetPort":31400},{"name":"tls","nodePort":32306,"port":15443,"protocol":"TCP","targetPort":15443}]%
Whereas the VirtualService route on the namespace side is using 9080...
route:
- destination:
    host: productpage
    port:
      number: 9080
But I also could be off base. The response I get is...
The server at 192.168.49.2 is taking too long to respond.
Gateway
~/Code/lib/tmp >kubectl describe Gateway bookinfo-gateway -n test
Name: bookinfo-gateway
Namespace: test
Labels: <none>
Annotations: <none>
API Version: networking.istio.io/v1beta1
Kind: Gateway
Metadata:
...
Spec:
  Selector:
    Istio:  ingressgateway
  Servers:
    Hosts:
      *
    Port:
      Name:      http
      Number:    80
      Protocol:  HTTP
Events: <none>
~/Code/lib/tmp >kubectl get Gateway -n test
NAME AGE
bookinfo-gateway 33m
Update
I tried using a simple load balancer like
kubectl expose deployment productpage-v1 --type=LoadBalancer --port=9080 -n test
This works fine going to localhost:9080
~/Code/lib/tmp >kubectl get services -n test
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
details ClusterIP 10.103.184.105 <none> 9080/TCP 6h43m
productpage ClusterIP 10.109.150.132 <none> 9080/TCP 6h43m
productpage-v1 LoadBalancer 10.98.252.50 127.0.0.1 9080:32172/TCP 14s
ratings ClusterIP 10.98.189.253 <none> 9080/TCP 6h43m
reviews ClusterIP 10.106.74.190 <none> 9080/TCP 6h43m
So it must be something from the Istio side.
Update 2
The Gateway and VirtualService are both in the test namespace...
~/Code/lib/tmp >kubectl get GATEWAY -A
NAMESPACE NAME AGE
test bookinfo-gateway 4m52s
~/Code/lib/tmp >kubectl get virtualservice -A
NAMESPACE NAME GATEWAYS HOSTS AGE
test bookinfo ["bookinfo-gateway"] ["*"] 5m3s
Here is the list of steps I followed...
minikube delete
minikube start --memory=16384 --cpus=4 --kubernetes-version=v1.20.2
istioctl install --set profile=demo -y
kubectl create namespace test
kubectl label namespace test istio-injection=enabled
kubectl apply -f bookinfo.yml -n test
kubectl apply -f bookinfo-gateway.yml -n test
istioctl analyze -n test
export INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].nodePort}')
export SECURE_INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="https")].nodePort}')
export INGRESS_HOST=$(minikube ip)
minikube tunnel (separate window)
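(And for reference, GATEWAY_URL used in the echo above is built from these two variables, as the Istio docs describe:
export GATEWAY_URL=$INGRESS_HOST:$INGRESS_PORT
)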
Here is what happens with the tunnel command...
~ >minikube tunnel
❗ The service/ingress istio-ingressgateway requires privileged ports to be exposed: [80 443]
🔑 sudo permission will be asked for it.
🏃 Starting tunnel for service istio-ingressgateway.
Password:
After I enter my password there is no more text. It does seem to be running, because when I Ctrl+C it stops the command.

Not able to communicate with Pods locally created in AWS EC2 instance with kubernetes

I have created a simple nginx deployment on an Ubuntu EC2 instance and exposed it on a port through a service in the Kubernetes cluster, but I am unable to ping the pods even in the local environment. My pods are running fine and the service was also created successfully. I am sharing some command outputs below.
kubectl get nodes
NAME STATUS ROLES AGE VERSION
ip-172-31-39-226 Ready <none> 2d19h v1.16.1
master-node Ready master 2d20h v1.16.1
kubectl get po -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-deployment-54f57cf6bf-dqt5v 1/1 Running 0 101m 192.168.39.17 ip-172-31-39-226 <none> <none>
nginx-deployment-54f57cf6bf-gh4fz 1/1 Running 0 101m 192.168.39.16 ip-172-31-39-226 <none> <none>
sample-nginx-857ffdb4f4-2rcvt 1/1 Running 0 20m 192.168.39.18 ip-172-31-39-226 <none> <none>
sample-nginx-857ffdb4f4-tjh82 1/1 Running 0 20m 192.168.39.19 ip-172-31-39-226 <none> <none>
kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 2d20h
nginx-deployment NodePort 10.101.133.21 <none> 80:31165/TCP 50m
sample-nginx LoadBalancer 10.100.77.31 <pending> 80:31854/TCP 19m
kubectl describe deployment nginx-deployment
Name: nginx-deployment
Namespace: default
CreationTimestamp: Mon, 14 Oct 2019 06:28:13 +0000
Labels: <none>
Annotations: deployment.kubernetes.io/revision: 1
kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"apps/v1","kind":"Deployment","metadata":{"annotations":{},"name":"nginx-deployment","namespace":"default"},"spec":{"replica...
Selector: app=nginx
Replicas: 2 desired | 2 updated | 2 total | 2 available | 0 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 25% max unavailable, 25% max surge
Pod Template:
  Labels:  app=nginx
  Containers:
   nginx:
    Image:        nginx:1.7.9
    Port:         80/TCP
    Host Port:    0/TCP
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Conditions:
Type Status Reason
---- ------ ------
Available True MinimumReplicasAvailable
Progressing True NewReplicaSetAvailable
OldReplicaSets: <none>
NewReplicaSet: nginx-deployment-54f57cf6bf (2/2 replicas created)
Events: <none>
Now I am unable to ping 192.168.39.17/16/18/19 from the master, and I am also not able to curl 172.31.39.226:31165/31854 from the master. Any help will be highly appreciated.
From the information you have provided, and from the discussion we had, the worker node has the Nginx pod running, and you have attached a NodePort service and a LoadBalancer service to it.
The only thing which is missing here is the server from which you are trying to access this.
So, I tried to reach this URL 52.201.242.84:31165. I think all you need to do is whitelist this port for public access or the IP. This can be done via security group for the worker node EC2.
Now the URL above is constructed from the public IP of the worker node plus (+) the NodePort of the svc which is attached. Thus, here is a simple formula you can use to get the exact address of the running pod.
Pod Access URL = Public IP of Worker Node + The NodePort
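A hedged sketch of how to assemble that from your outputs (variable names are mine; the ExternalIP lookup only works if the node reports one — otherwise use the EC2 public IP, 52.201.242.84, directly):
NODE_IP=$(kubectl get nodes ip-172-31-39-226 -o jsonpath='{.status.addresses[?(@.type=="ExternalIP")].address}')
NODE_PORT=$(kubectl get svc nginx-deployment -o jsonpath='{.spec.ports[0].nodePort}')
echo "http://$NODE_IP:$NODE_PORT"   # for nginx-deployment this should print port 31165
Then open that URL once the worker node's security group allows inbound traffic on the NodePort.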

How to resolve Failed to start service controller: WARNING: no cloud provider provided

Background:
$ kubectl get services -n ingress-nginx
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx LoadBalancer 10.108.245.210 <pending> 80:30742/TCP,443:31028/TCP 41m
$ kubectl cluster-info dump | grep LoadBalancer
14:35:47.072444 1 core.go:76] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail
The k8s cluster is up and running fine.
$ ls /etc/kubernetes/manifests
etcd.yaml kube-apiserver.yaml kube-controller-manager.yaml kube-scheduler.yaml
~$ kubectl get services --all-namespaces
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 21h
ingress-nginx default-http-backend ClusterIP 10.100.2.163 <none> 80/TCP 21h
ingress-nginx ingress-nginx LoadBalancer 10.108.221.18 <pending> 80:32010/TCP,443:31271/TCP 18h
kube-system kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP 21h
How do I link the cloud provider to the Kubernetes cluster in the existing setup?
I would expect grep -r cloud-provider= /etc/kubernetes/manifests to either show you where the flag is being explicitly set to --cloud-provider= (that is, the empty value), or let you know that there is no such flag, in which case you'll need(?) to add them in three places:
kube-apiserver.yaml
kube-controller-manager.yaml
in kubelet.service or however you are currently running kubelet
I said "need(?)" because I thought that I read one upon a time that the kubernetes components were good enough at auto-detecting their cloud environment, and thus those flags were only required if you needed to improve or alter the default behavior. However, I just checked the v1.13 page and there doesn't seem to be any "optional" about it. They've even gone so far as to now make --cloud-config= seemingly mandatory, too

unable to find "istio-egress" service with minikube installation

I followed the instructions provided at "https://istio.io/docs/setup/kubernetes/quick-start.html", which say "Ensure the following Kubernetes services are deployed: istio-pilot, istio-mixer, istio-ingress, istio-egress".
However, when I do "kubectl get svc --all-namespaces", only the following services show up:
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 39m
istio-system istio-ca-ilb LoadBalancer 10.98.239.183 <pending> 8060:31894/TCP 7m
istio-system istio-ingress LoadBalancer 10.103.124.167 <pending> 80:30341/TCP,443:30145/TCP 24m
istio-system istio-mixer ClusterIP 10.104.162.202 <none> 9091/TCP,15004/TCP,9093/TCP,9094/TCP,9102/TCP,9125/UDP,42422/TCP 24m
istio-system istio-pilot ClusterIP 10.106.111.86 <none> 15003/TCP,443/TCP 24m
istio-system istio-pilot-ilb LoadBalancer 10.106.250.124 <pending> 8080:32752/TCP 7m
istio-system mixer-ilb LoadBalancer 10.103.131.44 <pending> 9091:30549/TCP 7m
kube-system dns-ilb LoadBalancer 10.98.70.111 <pending> 53:30347/UDP 7m
kube-system kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP 38m
kube-system kubernetes-dashboard NodePort 10.110.39.52 <none> 80:30000/TCP 38m
Then I searched for the keyword "istio-egress" in "install/kubernetes/istio.yaml" and was unable to find it... Is this something new in v0.3.0?
Thanks for the report. It seems like a documentation issue, since istio-egress was removed (see https://github.com/istio/istio/pull/1202).
So, don't worry for now. Filed https://github.com/istio/istio.github.io/issues/792

kubernetes guestbook example on aws

I'm trying to run through the Kubernetes guestbook example on AWS. I created the master and 4 nodes with the kube-up.sh script and am trying to get the frontend exposed via a load balancer.
Here are the pods
root@ip-172-20-0-9:~/kubernetes# kubectl get pods
NAME READY STATUS RESTARTS AGE
frontend-2q0at 1/1 Running 0 5m
frontend-5hmxq 1/1 Running 0 5m
frontend-s7i0r 1/1 Running 0 5m
redis-master-y6160 1/1 Running 0 53m
redis-slave-49gya 1/1 Running 0 24m
redis-slave-85u1r 1/1 Running 0 24m
Here are the services
root@ip-172-20-0-9:~/kubernetes# kubectl get services
NAME CLUSTER_IP EXTERNAL_IP PORT(S) SELECTOR AGE
kubernetes 10.0.0.1 <none> 443/TCP <none> 1h
redis-master 10.0.90.210 <none> 6379/TCP name=redis-master 37m
redis-slave 10.0.205.92 <none> 6379/TCP name=redis-slave 24m
I edited the yml for the frontend service to try to add a load balancer, but it's not showing up
root@ip-172-20-0-9:~/kubernetes# cat examples/guestbook/frontend-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    name: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  type: LoadBalancer
  ports:
  # the port that this service should serve on
  - port: 80
  selector:
    name: frontend
Here are the commands I ran
root@ip-172-20-0-9:~/kubernetes# kubectl create -f examples/guestbook/frontend-controller.yaml
replicationcontroller "frontend" created
root@ip-172-20-0-9:~/kubernetes# kubectl get services
NAME CLUSTER_IP EXTERNAL_IP PORT(S) SELECTOR AGE
kubernetes 10.0.0.1 <none> 443/TCP <none> 1h
redis-master 10.0.90.210 <none> 6379/TCP name=redis-master 39m
redis-slave 10.0.205.92 <none> 6379/TCP name=redis-slave 26m
If I remove the LoadBalancer, it loads up but with no external IP.
Looks like the external IP might only be there for Google's platform. In AWS it creates an ELB and doesn't show the external IP of the ELB.
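If the ELB was created, its DNS name should still be visible on the service — a quick check (assuming the frontend service exists):
kubectl describe service frontend | grep -i ingress
On AWS this typically prints a line like LoadBalancer Ingress: <elb-id>.<region>.elb.amazonaws.com; use that hostname in place of an external IP.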