Kubectl: No resources found

I’ve installed ICP4Data successfully, but I am pretty green when it comes to ICP4Data and Kubernetes. I’m trying to use kubectl to list the pods in ICP4D, but “kubectl get pods” returns “No resources found”. Am I missing something?

ICP4D uses the 'zen' namespace to logically separate its assets and resources from the core native ICP/Kubernetes platform. In a default installation of ICP4D there are no pods deployed in the 'default' namespace, which is why you get "No resources found": if you don't provide a namespace when getting pods, kubectl assumes the default namespace.
To list the pods in the zen namespace:
kubectl get pods -n zen
To list all the namespaces available to you, try:
kubectl get namespaces
To list pods from all namespaces, append --all-namespaces:
kubectl get pods --all-namespaces
This should list all the pods from zen, kube-system and possibly others.
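If you want to double-check which namespace your current context defaults to (an empty result means 'default'), something like this should show it:
kubectl config view --minify --output 'jsonpath={..namespace}'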

Please try adding the namespace to the command as well. In the case of ICP4D, try kubectl get pods -n zen.

Alternatively, you could switch your default namespace to zen up front:
kubectl config set-context --current --namespace=zen
Then you will be able to see all the information by running the command without the -n argument:
kubectl get pods

Check which namespace you are currently in.
To find out which namespace your pod was created in, you can run this command:
kubectl get pods --all-namespaces
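If you already know the pod's name (<pod_name> below is a placeholder), a field selector can narrow that list down to the one pod and show its namespace:
kubectl get pods --all-namespaces --field-selector metadata.name=<pod_name>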

Also, just to add: since I was in the default namespace and I wanted the logs of a pod in another namespace, just doing
kubectl logs -f <pod_name>
gave the output "Error from server (NotFound): pods "pod_name" not found".
So I specified the namespace as well:
kubectl logs -f <pod_name> -n <namespace>
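If the pod runs more than one container, kubectl logs also needs the container name; something along these lines (angle brackets are placeholders):
kubectl logs -f <pod_name> -c <container_name> -n <namespace>
Or, to stream every container in the pod at once:
kubectl logs -f <pod_name> --all-containers -n <namespace>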

Related

Delete attempt of Kubernetes resource reports not found, even though it can be listed with "kubectl get"

I am running a Kubeflow pipeline on a single-node Rancher K3s cluster. Katib is deployed to create training jobs (kind: TFJob) along with experiments (a CRD).
I can list the experiment resources with kubectl get experiments -n <namespace>. However, when trying to delete one using kubectl delete experiment <exp_name> -n <namespace>, the API server returns NotFound.
kubectl version is 1.22.12
Kubeflow 1.6
How can a (any) resource be deleted when it is listed by "kubectl get", but a direct "kubectl delete" says the resource cannot be found?
Hopefully there is a general answer applicable for any resource.
Example:
kc get experiments -n <namespace>
NAME        TYPE      STATUS   AGE
mnist-e2e   Running   True     21h
kc delete experiment mnist-e2e -n <namespace>
Error from server (NotFound): experiments.kubeflow.org "mnist-e2e" not found
I have tried these methods, but all involve the use of the resource name (mnist-e2e) and result in "NotFound".
I tried patching the manifest to empty the finalizers list:
kubectl patch experiment mnist-e2e \
-n <namespace> \
-p '{"metadata":{"finalizers":[]}}' \
--type=merge
I tried dumping a manifest of the "orphaned" resource and then deleting using that manifest:
kubectl get experiment mnist-e2e -n <namespace> -o yaml > exp.yaml
kubectl delete -f exp.yaml
Delete attempts from the Kubeflow UI Experiments (AutoML) page fail.
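As a general sanity check (this is only an assumption, in case the short name experiment resolves to a different API group than the one actually serving the object), it may be worth confirming what kubectl maps experiment to and retrying the delete with the fully qualified resource name:
kubectl api-resources | grep -i experiment
kubectl delete experiments.kubeflow.org mnist-e2e -n <namespace>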
Thanks

Not able to exec into AKS pod

I'm trying to exec into one of the pods of my AKS instance using the following command: kubectl exec -n hermes --stdin --tty hermes-deployment-ddb88855b-dzgvt -- /bin/sh. I'm using just the regular CMD from Windows, but I'm getting the following error:
error: v1.Pod: ObjectMeta: v1.ObjectMeta: readObjectFieldAsBytes: expect : after object field, parsing 3193 ...:{},"k:{\"... at {"kind":"Pod","apiVersion":"v1","metadata"
The error is quite long, with some metadata and environment variables, I believe. I can't share the metadata due to corporate restrictions. I'm logged into Azure using az login, and I also have the correct context in my kubeconfig.
Does anyone know what this error exactly entails?

Istio question, where is pilot-discovery command?

I can't find it: the istio-1.8.0 directory has no command named pilot-discovery.
The pilot-discovery command is used by Pilot, which is now part of istiod.
istiod unifies functionality that Pilot, Galley, Citadel and the sidecar injector previously performed, into a single binary.
You can get your istio pods with
kubectl get pods -n istio-system
Use kubectl exec to get into your istiod container with
kubectl exec -ti <istiod-pod-name> -c discovery -n istio-system -- /bin/bash
Use the pilot-discovery commands as described in the Istio documentation.
e.g.
istio-proxy@istiod-f49cbf7c7-fn5fb:/$ pilot-discovery version
version.BuildInfo{Version:"1.8.0", GitRevision:"c87a4c874df27e37a3e6c25fa3d1ef6279685d23", GolangVersion:"go1.15.5", BuildStatus:"Clean", GitTag:"1.8.0-rc.1"}
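The same binary can also be used to query istiod's debug endpoints from inside the container; treat this as a sketch, since the available paths can vary by release:
pilot-discovery request GET /debug/syncz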
In case you are interested in the code: https://github.com/istio/istio/blob/release-1.8/pilot/cmd/pilot-discovery/main.go
I compiled the binary myself:
1. Download the Istio project.
2. make build
3. Set the Go module proxy.
4. cd out
You will see the binary.
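As a rough sketch of those steps (the exact tag, make targets and output layout may differ between Istio releases):
git clone https://github.com/istio/istio.git && cd istio
git checkout 1.8.0
export GOPROXY=https://proxy.golang.org,direct
make build
ls out/
The pilot-discovery binary ends up somewhere under out/, in a sub-directory that depends on your OS and architecture (e.g. out/linux_amd64/).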

Determining the cause of kubernetes pod restart

I'm running into issues with our Kubernetes deployment: recently, one of the pods has been restarting frequently.
The service inside is written in C++ with Google Logging, and it should dump a stack trace on a crash (it does do that when run locally).
Unfortunately, the only log message I was able to find related to the pod restart is from containerd, just saying "shim reaped".
Do I need to turn on some extra logging/monitoring to have the reasons for restart retained?
You can check the crashed pod's log by running
$ kubectl logs <pod_name> -n <namespace> --previous
The pod could have been terminated for reasons such as running out of memory. Use kubectl describe pod <pod_name>, which contains that information.
There should be output like this (it could also be a different reason than OOM):
Last State:     Terminated
  Reason:       OOMKilled
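If you would rather pull just those fields instead of scanning the whole describe output, a jsonpath query along these lines should work (first container assumed, names are placeholders):
kubectl get pod <pod_name> -n <namespace> -o jsonpath='{.status.containerStatuses[0].lastState.terminated.reason}'
kubectl get pod <pod_name> -n <namespace> -o jsonpath='{.status.containerStatuses[0].restartCount}'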

Guarantee context for kubectl command - "kubectl get po use-context bla"?

I am writing a script which involves kubectl delete, so it is of course essential that it runs against the correct context/cluster.
The problem is that, from what I observe, if you open two terminals and do:
kubectl config use-context bla
in one window, then the other one will switch as well. My concern is therefore that if something switches the context during script execution, my delete operation will start deleting resources in the wrong cluster.
I understand that I could use labels on my pods or different namespaces, but in my case the namespaces are the same and there are no labels.
So is there a way to specify for each command individually which context it should execute against? Something like:
kubectl get po use-context bla
Use the --context flag:
kubectl get po --context bla
If you run any kubectl command, you'll also see that it mentions you can run kubectl options to see a list of global options that can be applied to any command. --context is one such global option.
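In a script you would then pin every call to that context, along these lines (the context name bla is taken from the question; the kubeconfig path is just a placeholder):
kubectl --context bla get po
kubectl --context bla delete pod <pod_name>
For even stronger isolation you can also point the script at its own kubeconfig file with the --kubeconfig global flag:
kubectl --kubeconfig /path/to/script-kubeconfig --context bla get po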