no Istio pods in namespace "istio-system"

I have installed istioctl 1.4.8, but istioctl is not able to talk to my cluster when using the command istioctl version -c platform.
#kubectl version
Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.0", GitCommit:"e8462b5b5dc2584fdcd18e6bcfe9f1e4d970a529", GitTreeState:"clean", BuildDate:"2019-06-19T16:32:14Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
# kubectl get pods -A | grep -i istio | grep pilot
istio-platform istio-pilot-7c5adrgcd89-wt9k 2/2 Running 4 1d
#istioctl version
2020-06-14T11:26:13.636825Z warn will use `--remote=false` to retrieve version info due to `no Istio pods in namespace "istio-system"`
1.4.8
# istioctl version -c istio-platform
2020-06-14T11:27:59.121013Z warn will use `--remote=false` to retrieve version info due to `no Istio pods in namespace "istio-system"`
1.4.8
Istio is running in namespace: istio-platform
What could be the issue here? Any hints?

You have to provide the Istio namespace with the -i (--istioNamespace) flag if it's not istio-system:
istioctl version -i istio-platform
cf. https://istio.io/latest/docs/reference/commands/istioctl/

Related

kubectl wait - error: no matching resources found

I am installing MetalLB, but I need to wait for its resources to be created.
kubectl wait --for=condition=ready --timeout=60s -n metallb-system --all pods
But I get:
error: no matching resources found
If I don't wait, I get:
Error from server (InternalError): error when creating "STDIN": Internal error occurred: failed calling webhook "ipaddresspoolvalidationwebhook.metallb.io": failed to call webhook: Post "https://webhook-service.metallb-system.svc:443/validate-metallb-io-v1beta1-ipaddresspool?timeout=10s": dial tcp 10.106.91.126:443: connect: connection refused
Do you know how to wait for the resources to be created before it is possible to wait for the condition?
Info:
kubectl version
WARNING: This version information is deprecated and will be replaced with the output from kubectl version --short. Use --output=yaml|json to get the full version.
Client Version: version.Info{Major:"1", Minor:"25", GitVersion:"v1.25.4", GitCommit:"872a965c6c6526caa949f0c6ac028ef7aff3fb78", GitTreeState:"clean", BuildDate:"2022-11-09T13:36:36Z", GoVersion:"go1.19.3", Compiler:"gc", Platform:"linux/arm64"}
Kustomize Version: v4.5.7
Server Version: version.Info{Major:"1", Minor:"25", GitVersion:"v1.25.4", GitCommit:"872a965c6c6526caa949f0c6ac028ef7aff3fb78", GitTreeState:"clean", BuildDate:"2022-11-09T13:29:58Z", GoVersion:"go1.19.3", Compiler:"gc", Platform:"linux/arm64"}
For the error “no matching resources found”:
Wait a minute and try again; the error resolves on its own once the resources have been created. You can find an explanation of that error at the following link: Setting up Config Connector.
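If you'd rather not retry by hand, the wait itself can be scripted. A minimal sketch (my own helper, not part of MetalLB; the function name and the 5-second retry interval are arbitrary) that keeps retrying kubectl wait until the pods actually exist, since kubectl wait errors out when no resources match yet:

```shell
# Retry `kubectl wait` until matching pods exist and become ready.
wait_for_pods() {
  local ns="$1"
  until kubectl wait --for=condition=ready --timeout=60s -n "$ns" --all pods 2>/dev/null; do
    echo "no matching resources in $ns yet, retrying..." >&2
    sleep 5
  done
}
# Usage: wait_for_pods metallb-system
```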
For the error STDIN:
You are getting this error because the API server is not able to connect to the webhook. Follow the steps below:
1) Check whether your firewall rules allow TCP port 443.
2) Temporarily disable the operator:
kubectl -n config-management-system scale deployment config-management-operator --replicas=0
deployment.apps/config-management-operator scaled
Delete the deployment:
kubectl delete deployments.apps -n <namespace>-system <namespace>-controller-manager
deployment.apps "namespace-controller-manager" deleted
3) Create a ConfigMap in the default namespace:
kubectl create configmap foo
configmap/foo created
4) Check that creating the ConfigMap fails when the debug-force-validation label is set on the object:
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    configmanagement.gke.io/debug-force-validation-webhook: "true"
  name: foo
EOF
Error from server (InternalError): error when creating "STDIN": Internal error occurred: failed calling webhook "debug-validation.namespace.sh": failed to call webhook: Post "https://namespace-webhook-service.namespace-system.svc:443/v1/admit?timeout=3s": no endpoints available for service "namespace-webhook-service"
5) Finally, clean up using the commands below:
kubectl delete configmap foo
configmap "foo" deleted
kubectl -n config-management-system scale deployment config-management-operator --replicas=1
deployment.apps/config-management-operator scaled
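For the original MetalLB error specifically, another option is to wait until the webhook Service actually has endpoints before creating objects that trigger it. A sketch, reusing the webhook-service and metallb-system names from the error message above (the helper name and 2-second interval are my own):

```shell
# Wait until the validating webhook's Service has at least one endpoint IP,
# so the API server can reach it before we create resources it validates.
wait_for_webhook() {
  until kubectl get endpoints -n metallb-system webhook-service \
      -o jsonpath='{.subsets[0].addresses[0].ip}' 2>/dev/null | grep -q .; do
    sleep 2
  done
}
# Usage: wait_for_webhook && kubectl apply -f ipaddresspool.yaml
```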

kubectl timeout error when trying to deploy on a private GKE cluster

kubectl version
WARNING: This version information is deprecated and will be replaced with the output from kubectl version --short. Use --output=yaml|json to get the full version.
Client Version: version.Info{Major:"1", Minor:"24", GitVersion:"v1.24.3", GitCommit:"aef86a93758dc3cb2c658dd9657ab4ad4afc21cb", GitTreeState:"clean", BuildDate:"2022-07-13T14:30:46Z", GoVersion:"go1.18.3", Compiler:"gc", Platform:"linux/amd64"}
Kustomize Version: v4.5.4
Unable to connect to the server: dial tcp 172.16.0.2:443: i/o timeout

Not able to access my Flask server through ingress

I am new to the concepts of Flask and k8s and trying to implement a very simple Flask server via k8s for getting familiar with the concept.
I am able to access it via NodePort, but after adding an Ingress resource and tweaking the hosts file on my Windows machine, I get a 404 error when I try to access the host URL that I added in ingress-srv.yaml.
Here is the project Github link: https://github.com/bijay-ps/flask-poc
Can someone help me out?
You need to describe which Kubernetes environment you are using (e.g. minikube, GCP, Azure, etc.) and the client and server versions.
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.3", GitCommit:"2d3c76f9091b6bec110a5e63777c332469e0cba2", GitTreeState:"clean", BuildDate:"2019-08-19T12:36:28Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"17+", GitVersion:"v1.17.5-gke.6", GitCommit:"de3e4dcd39464bc1601edd66681e663bff1fe530", GitTreeState:"clean", BuildDate:"2020-05-12T16:10:21Z", GoVersion:"go1.13.9b4", Compiler:"gc", Platform:"linux/amd64"}
Make sure you install the NGINX Ingress Controller. There is no default (pre-installed) ingress controller.
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-0.32.0/deploy/static/provider/cloud/deploy.yaml
Your YAML configs look OK.
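Before re-testing the 404, it can help to confirm the controller actually came up and the Ingress was admitted. A quick sanity-check sketch (the helper name is mine; the ingress-nginx namespace matches the manifest applied above):

```shell
# List the controller pods and all Ingress resources; both commands must
# succeed for the check to pass.
check_ingress() {
  kubectl get pods -n ingress-nginx &&
    kubectl get ingress --all-namespaces
}
# Usage: check_ingress
```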

Specify kubectl client version installed via gcloud SDK

I probably missed this in the docs somewhere, but since I haven't found it yet, I'll ask: How can I specify the version of kubectl CLI when installing with the gcloud SDK?
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"13+", GitVersion:"v1.13.9-2+4a03651a7e7e04", GitCommit:"4a03651a7e7e04a0021b2ef087963dfb7bd0a17e", GitTreeState:"clean", BuildDate:"2019-08-16T19:08:17Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"13+", GitVersion:"v1.13.7-gke.24", GitCommit:"2ce02ef1754a457ba464ab87dba9090d90cf0468", GitTreeState:"clean", BuildDate:"2019-08-12T22:05:28Z", GoVersion:"go1.11.5b4", Compiler:"gc", Platform:"linux/amd64"}
$ gcloud components update
All components are up to date.
$ which kubectl
/Users/me/Projects/googlecloud/google-cloud-sdk/bin/kubectl
$ which gcloud
/Users/me/Projects/googlecloud/google-cloud-sdk/bin/gcloud
$ ls -nL /Users/me/Projects/googlecloud/google-cloud-sdk/bin | grep kubectl
-rwxr-xr-x 1 501 20 44296840 Aug 16 12:08 kubectl
-rwxr-xr-x 1 501 20 54985744 Apr 30 21:56 kubectl.1.11
-rwxr-xr-x 1 501 20 56860112 Jul 7 21:34 kubectl.1.12
-rwxr-xr-x 1 501 20 44329928 Aug 5 02:52 kubectl.1.13
-rwxr-xr-x 1 501 20 48698616 Aug 5 02:55 kubectl.1.14
-rwxr-xr-x 1 501 20 48591440 Aug 5 02:57 kubectl.1.15
So I'm using the gcloud-installed kubectl, and I see that the version I want is locally installed. The gcloud components update command run previously indicated that kubectl would be set to the default version of 1.13, but I haven't caught any indication of how to change the default version.
I imagine I could create a link, or copy the version I want onto /Users/me/Projects/googlecloud/google-cloud-sdk/bin/kubectl, but I'm leery of messing with the managed environs of gcloud.
Whelp, I went ahead and ran the following
KUBE_BIN=$(which kubectl)
rm $KUBE_BIN
ln ~/googlecloud/google-cloud-sdk/bin/kubectl.1.15 $KUBE_BIN
and now I get
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.2", GitCommit:"f6278300bebbb750328ac16ee6dd3aa7d3549568", GitTreeState:"clean", BuildDate:"2019-08-05T09:23:26Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"darwin/amd64"}
and everything seems to be working just fine...
IIRC you cannot.
But, as you show, you have multiple major-minor versions available and, because kubectl is distributed as a static binary, you can run, e.g.:
kubectl.1.15 version
GKE only
Do not change the kubectl version if you're working only with GKE, because kubectl supports only one minor version of skew forward and backward.
For example, if you use kubectl 1.16 against GKE 1.14, you may experience bugs, such as the --watch flag not working properly.
gcloud provides just the right version for the current version of GKE.
Multiple cluster versions
explicit version
If you're working with different Kubernetes clusters, I'd suggest using the gcloud version of kubectl as the default one. For any specific version of kubectl, create a dir such as ~/bin/kubectl, put kubectl1.15, kubectl1.16, etc. there, and add the dir to your PATH.
With such setup you can explicitly use appropriate version:
$ # Working with GKE
$ kubectl ...
$ # Working with K8s 1.15
$ kubectl1.15 ...
implicit version
Using direnv you can make switching between versions transparent.
There are many ways of doing this; here is one example.
Let's say you have a project which requires kubectl 1.15. Inside the project dir create an env/bin subdir and link all the binaries you need there (kubectl1.15, helm2, etc.), then create a .envrc file with the following content:
export PATH="${PWD}/env/bin:${PATH}"
Run direnv allow in the project dir (it's needed only once for any new .envrc). After that you'll have all binaries from env/bin in your path.
And then, in the dir and all subdirs:
$ # Invokes kubectl 1.15
$ kubectl ...
$ # Invokes Helm 2
$ helm ...
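Concretely, the per-project setup described above might look like this one-time sequence (a sketch; the kubectl.1.15 path is taken from the listing earlier in this thread, so adjust it to wherever your versioned binaries actually live):

```shell
# env/bin holds the pinned tools; .envrc puts it first on PATH whenever
# direnv sees you enter the project directory.
mkdir -p env/bin
ln -sf "$HOME/googlecloud/google-cloud-sdk/bin/kubectl.1.15" env/bin/kubectl
cat > .envrc <<'EOF'
export PATH="${PWD}/env/bin:${PATH}"
EOF
# then, once per new .envrc: direnv allow
```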

Kubectl returns Error from server (NotAcceptable): unknown (get nodes)

According to this: https://github.com/kubernetes/kops#compatibility-matrix
the versions should be fine. When I run kubectl get node I get the following output:
Error from server (NotAcceptable): unknown (get nodes)
kubectl version:
Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.0", GitCommit:"ddf47ac13c1a9483ea035a79cd7c10005ff21a6d", GitTreeState:"clean", BuildDate:"2018-12-03T21:04:45Z", GoVersion:"go1.11.2", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.16", GitCommit:"e8846c1d7e7e632d4bd5ed46160eff3dc4c993c5", GitTreeState:"clean", BuildDate:"2018-04-04T08:47:13Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
kops version:
Version 1.9.2 (git-cb54c6a52)
This is the info about the nodes I got when running kops update:
NAME STATUS NEEDUPDATE READY MIN MAX NODES
master-eu-central-1a Ready 0 1 1 1 1
nodes Ready 0 2 2 2 2
I misunderstood this. It's pretty obvious now: client 1.13.0, server 1.7.16.
The server must be within one minor version of the client (±1 is supported; see here for more). So I edited the server version using kops edit cluster and applied it with kops update cluster.
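The fix above can be sketched as follows (the cluster name is a placeholder; kops edit cluster opens an editor where you set spec.kubernetesVersion to within one minor version of your kubectl, and a rolling update is typically needed for the nodes to pick up the change):

```shell
# Edit, apply, and roll out a new cluster Kubernetes version with kops.
upgrade_cluster() {
  local cluster="$1"
  kops edit cluster "$cluster" &&
    kops update cluster "$cluster" --yes &&
    kops rolling-update cluster "$cluster" --yes
}
# Usage: upgrade_cluster mycluster.example.com
```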