How do I rename a kubectl context?

I have many contexts: one for staging, one for production, and many for dev clusters. Copying and pasting the default cluster names is tedious and error-prone, especially over time. How can I rename them to make context switching easier?

Renaming contexts is easy!
$ kubectl config rename-context old-name new-name
Confirm the change with:
$ kubectl config get-contexts

If you are using kubectx, try:
kubectx new-context-name=old-context-name
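If you have many long default names (EKS arn-style names, for instance), the renames themselves can be scripted. A minimal sketch, assuming the part you want to keep sits after the last slash; `shorten_contexts` is a made-up helper name, and the arn below is an illustrative example:

```shell
# Sketch: print a rename command for every context name that contains a
# slash, shortening it to the part after the last slash.
# Preview the output first; pipe it to `sh` to actually apply.
shorten_contexts() {
    while read -r ctx; do
        short="${ctx##*/}"              # keep only the part after the last /
        if [ "$short" != "$ctx" ]; then
            echo kubectl config rename-context "$ctx" "$short"
        fi
    done
}

# Usage:
#   kubectl config get-contexts -o name | shorten_contexts        # preview
#   kubectl config get-contexts -o name | shorten_contexts | sh   # apply
```

Names without a slash (e.g. minikube) pass through untouched, so running it twice is harmless.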

Related

Using kubeconfig contexts simply

I now need to use multiple clusters. Currently, what I do is simply put all the kubeconfig files
under the .kube folder and, each time, update the config file with the cluster I need, e.g.
mv config clusterone
vi config
then insert the new kubeconfig into the config file and start working with the new cluster.
Say that inside /Users/i033346/.kube I have all the kubeconfig files, one per cluster.
Is there a way to use them as contexts without creating a new file which contains all of them?
I also tried to use kubectx; however, when I use:
export KUBECONFIG=/Users/i033346/.kube/trial
and
export KUBECONFIG=/Users/i033346/.kube/prod
and then run kubectx, I always get only the last one and don't get the list of defined contexts. Any idea?
The KUBECONFIG env var supports multiple files, colon-separated on Linux/macOS (semicolon-separated on Windows):
export KUBECONFIG="/Users/i033346/.kube/trial:/Users/i033346/.kube/prod"
This should be enough to see all of them in kubectx.
You can even merge all configs to 1 file:
export KUBECONFIG="/Users/i033346/.kube/trial,/Users/i033346/.kube/prod"
kubectl config view --flatten > ~/.kube/config
What I used to do in this scenario is to create multiple aliases pointing to different config files.
e.g
in your .bashrc/.zshrc
edited in your ~/.bashrc or your ~/.zshrc
alias k-cluster1="kubectl --kubeconfig /my_path/config_cluster1"
alias k-cluster2="kubectl --kubeconfig /my_path/config_cluster2"
after loading the terminal k-cluster1 get pods or k-cluster2 get pods should work

Gurantee context for kubectl command - "kubectl get po use-context bla"?

I am writing a script which involves kubectl delete , it is of course existential to run in against correct context-cluster.
The problem is that from what I observe, if you open two terminals and do:
kubectl config use-context bla
in one window, then the other one will switch as well. Therefore, concern if something switches context during script execution my delete operation will start deleting resources in the wrong cluster.
I understand that I could use labels on my pods or different namespaces, but in my case namespaces are the same and there are no labels.
So is there a way to specify for each command individually which context it should execute against? Something like:
kubectl get po use-context bla
Use the --context flag:
kubectl get po --context bla
If you run any kubectl command, you'll also see it says you can run kubectl options to see a list of global options that can be applied to any command. --context is one such global option.

How do I switch between different Kubernetes contexts described in distinct kubeconfig yaml files fast?

I need to access several Kubernetes clusters. For each of them, I got a kubeconfig yaml file, e.g. kubeconfig-cluster1.yaml and kubeconfig-cluster2.yaml.
How can I easily switch between these configurations? I mean, without setting the KUBECONFIG environment variable manually to one of these files?
You can declare all contexts in the KUBECONFIG environment variable:
The KUBECONFIG environment variable holds a list of kubeconfig files. For Linux and Mac, the list is colon-delimited. For Windows, the list is semicolon-delimited.
To autodetect the contexts based on the kubeconfig files, assuming they're all located in the ~/.kube folder, and assign them as a colon-separated list to the KUBECONFIG environment variable, you could add a script to your ~/.bashrc or ~/.zshrc:
# Autodetect kubeconfig files to enable switching between them with kubectx
export KUBECONFIG=`ls -1 ~/.kube/kubeconfig-* | paste -sd ":" -`
Then, to switch between these kubectl contexts (with autocomplete!), have a look at the kubectx utility.
The kubectx README page contains installation instructions.
$ kubectx cluster1
Switched to context "cluster1".
$ kubectx cluster2
Switched to context "cluster2".
I also had multiple kubernetes cluster to manage. I wrote a script to switch kubeconfig and namespace easily. Hope it can help you.
. k-use -k <kubeconfig> -n <namespace>
https://github.com/kingonion/k-use

Changed API-server manifest file, but changes aren't implemented

I am adding the flag --cloud-provider=aws to /etc/kubernetes/manifests/kube-apiserver.yaml and kube-controller-manager.yaml. When I describe the pods I can see that they pick up the change and are recreated, however the flags have not changed.
Running on Centos7 machines in AWS. I have tried restarting the Kubelet service and tried using kubectl apply.
There are couple of ways to achieve this. But seems like you have choosen the DynamicKubeletConfig way but you didn't configure DynamicKubeletConfig! To do live changes to your cluster you need to enable DynamicKubeletonfig first then follow the steps here
Another Way [Ref]
TL;DR (do this at your own risk!)
Step 1: kubeadm config view > kubeadm-config.yaml
Step 2: edit kubeadm-config.yaml to add your changes [Reference for flags ]
Step 3: kubeadm upgrade apply --config kubeadm-config.yaml

Kubectl : No resource found

I’ve installed ICP4Data successfully. I am pretty green in respect to ICP4Data and Kubernetes. I’m trying to use kubectl command for listing the pods in ICP4D but “kubectl get pods” returns “No resource found”. Am I missing something?
icp4d uses 'zen' namespaces to logically separate its assets and resources from the core native icp/kube platform. In the default installation of ICP4D, there are no pods deployed on 'default' namespace and hence you get "no resources found" cause if you don't provide the namespace while trying to get pods, kubectl assumes its default namespace.
To List the pods from zen namespace
kubectl get pods -n zen
To list all the namespaces available to you - try
kubectl get namespaces
To list pods from all the namespaces, you might want to append --all-namespaces
kubectl get pods --all-namespaces
This should list all the pods from zen, kubesystem and possibly others.
Please try adding namespace to the command as well. In the case for ICP4D try kubectl get pods -n zen.
On the other hand, you could switch your namespace to zen at the beginning by
kubectl config set-context --current --namespace=zen
Then you will be able to see all the information by running without the -n argument
kubectl get pods
Check you are currently on which namespace.
To find out your pod is created in which namespace, you can run this command
kubectl get pods --all-namespaces
Also just to add, since I was in default workspace and I wanted to get logs of a pod in another namespace, just doing
kubectl get logs -f <pod_name>
was giving output "Error from server (NotFound): pods "pod_name" not found".
So I specified the namespace as well.
kubectl logs -f <pod_name> -n namespace