kubectl timeout error when trying to deploy on a private GKE cluster - google-cloud-platform

kubectl version
WARNING: This version information is deprecated and will be replaced with the output from kubectl version --short. Use --output=yaml|json to get the full version.
Client Version: version.Info{Major:"1", Minor:"24", GitVersion:"v1.24.3", GitCommit:"aef86a93758dc3cb2c658dd9657ab4ad4afc21cb", GitTreeState:"clean", BuildDate:"2022-07-13T14:30:46Z", GoVersion:"go1.18.3", Compiler:"gc", Platform:"linux/amd64"}
Kustomize Version: v4.5.4
Unable to connect to the server: dial tcp 172.16.0.2:443: i/o timeout
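On a private GKE cluster, a dial timeout against the 172.16.0.x control-plane endpoint frequently means the client's IP is not on the cluster's master authorized networks list. A hedged sketch of that fix (the cluster name, zone, and CIDR below are placeholders, not values from the question):

```shell
# Allow your workstation's public IP to reach the private control plane.
# CLUSTER, ZONE, and 203.0.113.4/32 are placeholders -- substitute your own.
gcloud container clusters update CLUSTER \
  --zone ZONE \
  --enable-master-authorized-networks \
  --master-authorized-networks 203.0.113.4/32

# Refresh kubeconfig credentials and retry.
gcloud container clusters get-credentials CLUSTER --zone ZONE
kubectl get nodes
```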

Related

RKE : Failed to apply the ServiceAccount needed for job execution

Failed to apply the ServiceAccount needed for job execution: Post \"https://44.198.185.122:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings?timeout=30s\": read tcp 192.168.1.6:63871->44.198.185.122:6443: wsarecv: An existing connection was forcibly closed by the remote host
kubectl version:
PS D:\AWS\k8s\RKE> kubectl version
Client Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.0", GitCommit:"c2b5237ccd9c0f1d600d3072634ca66cefdf272f", GitTreeState:"clean", BuildDate:"2021-08-04T18:03:20Z", GoVersion:"go1.16.6", Compiler:"gc", Platform:"windows/amd64"}
Unable to connect to the server: read tcp 192.168.1.6:63575->3.236.179.125:6443: wsarecv: An existing connection was forcibly closed by the remote host.

no Istio pods in namespace "istio-system"

I have installed istioctl 1.4.8, but istioctl is not able to talk to my cluster when using the command istioctl version -c platform.
#kubectl version
Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.0", GitCommit:"e8462b5b5dc2584fdcd18e6bcfe9f1e4d970a529", GitTreeState:"clean", BuildDate:"2019-06-19T16:32:14Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
# kubectl get pods -A | grep -i istio | grep pilot
istio-platform istio-pilot-7c5adrgcd89-wt9k 2/2 Running 4 1d
#istioctl version
2020-06-14T11:26:13.636825Z warn will use `--remote=false` to retrieve version info due to `no Istio pods in namespace "istio-system"`
1.4.8
# istioctl version -c istio-platform
2020-06-14T11:27:59.121013Z warn will use `--remote=false` to retrieve version info due to `no Istio pods in namespace "istio-system"`
1.4.8
Istio is running in the namespace istio-platform.
What could be the issue here, any hints?
You have to provide the Istio namespace if it's not in istio-system:
istioctl version -i istio-platform
cf. https://istio.io/latest/docs/reference/commands/istioctl/

Not able to access my Flask server through ingress

I am new to the concepts of Flask and k8s and trying to implement a very simple Flask server via k8s for getting familiar with the concept.
I am able to access it via NodePort, but after adding an Ingress resource and tweaking the hosts file on my Windows machine, I get a 404 error when I try to access the host URL that I added in ingress-srv.yaml.
Here is the project Github link: https://github.com/bijay-ps/flask-poc
Can someone help me out?
You need to describe which Kubernetes distribution you are using (e.g. minikube, GCP, Azure) and the client and server versions:
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.3", GitCommit:"2d3c76f9091b6bec110a5e63777c332469e0cba2", GitTreeState:"clean", BuildDate:"2019-08-19T12:36:28Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"17+", GitVersion:"v1.17.5-gke.6", GitCommit:"de3e4dcd39464bc1601edd66681e663bff1fe530", GitTreeState:"clean", BuildDate:"2020-05-12T16:10:21Z", GoVersion:"go1.13.9b4", Compiler:"gc", Platform:"linux/amd64"}
Make sure you install the NGINX Ingress Controller. There is no default (pre-installed) ingress controller:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-0.32.0/deploy/static/provider/cloud/deploy.yaml
Your YAML configs look OK.
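For the 404 itself: with ingress-nginx, a 404 usually means no Ingress rule matched the request's Host header, so the host in ingress-srv.yaml must exactly match the hosts-file entry. A minimal sketch (the resource names, host, and port are hypothetical, not taken from the linked repo):

```yaml
# Minimal sketch -- names, host, and port are placeholders.
apiVersion: networking.k8s.io/v1beta1   # Ingress API of the 0.32.0-era controller
kind: Ingress
metadata:
  name: ingress-srv
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
    - host: flask.dev            # must match the hosts-file entry exactly
      http:
        paths:
          - path: /
            backend:
              serviceName: flask-clusterip-srv   # the Service fronting Flask
              servicePort: 5000
```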

Kubectl returns Error from server (NotAcceptable): unknown (get nodes)

According to this: https://github.com/kubernetes/kops#compatibility-matrix
the versions should be fine. When I run kubectl get node I get the following output:
Error from server (NotAcceptable): unknown (get nodes)
kubectl version:
Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.0", GitCommit:"ddf47ac13c1a9483ea035a79cd7c10005ff21a6d", GitTreeState:"clean", BuildDate:"2018-12-03T21:04:45Z", GoVersion:"go1.11.2", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.16", GitCommit:"e8846c1d7e7e632d4bd5ed46160eff3dc4c993c5", GitTreeState:"clean", BuildDate:"2018-04-04T08:47:13Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
kops version:
Version 1.9.2 (git-cb54c6a52)
This is the info about the nodes I got when running kops update:
NAME                  STATUS  NEEDUPDATE  READY  MIN  MAX  NODES
master-eu-central-1a  Ready   0           1      1    1    1
nodes                 Ready   0           2      2    2    2
I misunderstood this. It's pretty obvious now: client 1.13.0, server 1.7.16.
The server must be within one minor version of the client (+/-1 is supported, see here for more). So I edited the server version using kops edit cluster and applied it with kops update cluster.
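The fix above can be sketched as follows (the cluster name and target version are placeholders; pick a version within one minor release of your kubectl):

```shell
# Align the server version with the 1.13 client (names/versions are placeholders).
kops edit cluster mycluster.example.com      # set spec.kubernetesVersion, e.g. "1.12.10"
kops update cluster mycluster.example.com --yes
kops rolling-update cluster mycluster.example.com --yes   # roll nodes onto the new version
```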

Error from server (Forbidden): the server does not allow access to the requested resource (post replicationcontrollers)

When I run some commands, an error occurs:
/home/kubernetes/cluster/ubuntu/binaries# ./kubectl version
Client Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.4", GitCommit:"7243c69eb523aa4377bce883e7c0dd76b84709a1", GitTreeState:"clean", BuildDate:"2017-03-07T23:53:09Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"linux/amd64"}
Error from server (Forbidden): the server does not allow access to the requested resource
What's wrong with my configuration?
Thanks a lot
kube-apiserver --service-cluster-ip-range=10.1.0.1/24 --insecure-bind-address=0.0.0.0 --etcd-servers=http://127.0.0.1:4001 --secure-port=0 --insecure-port=35001 --allow-privileged=True --advertise-address=100.109.165.127 --bind-address=100.109.165.127 --insecure-bind-address=100.109.165.127
./kubectl version -s 100.109.165.127:35001
In .profile, add all of the IPs to no_proxy, such as:
export no_proxy="127.0.0.1,localhost,100.109.196.103,100.109.165.127"
The issue is resolved.
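As a runnable restatement of the no_proxy fix: some tools read the lowercase variable and others the uppercase one, so setting both is a safe sketch (the IPs are the ones from the question):

```shell
# Bypass the HTTP proxy for the apiserver addresses from the question.
# Some tools honor no_proxy, others NO_PROXY, so set both.
export no_proxy="127.0.0.1,localhost,100.109.196.103,100.109.165.127"
export NO_PROXY="$no_proxy"
```

Persist the exports in .profile (or your shell's equivalent) so new sessions pick them up.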