How to authenticate kubectl using environment variables? - kubectl

The only two ways I can find to authenticate are by creating a new authentication context, e.g.
kubectl config set-credentials gajus/foo --token=foo
kubectl config set-cluster foo --insecure-skip-tls-verify=true --server=https://127.0.0.1
kubectl config set-context default/foo/gajus --user=gajus/foo --namespace=default --cluster=foo
kubectl config use-context default/foo/gajus
and by using the command line options, e.g.
kubectl --server=https://127.0.0.1 --insecure-skip-tls-verify=true --token=foo get po
Is there a way to set values for --server and other authentication options using environment variables?

The configuration file for credentials lives under $HOME/.kube/config (kubeconfig). You can create multiple configuration files like that and use the KUBECONFIG environment variable to point to the file you want to use for the current session.
export KUBECONFIG=~/.kube/config-foo
kubectl config set-credentials gajus/foo --token=foo
kubectl config set-cluster foo --insecure-skip-tls-verify=true --server=https://127.0.0.1
kubectl config set-context default/foo/gajus --user=gajus/foo --namespace=default --cluster=foo
kubectl config use-context default/foo/gajus
export KUBECONFIG=~/.kube/config-bar
...
KUBECONFIG=$HOME/.kube/config-foo kubectl get pod
KUBECONFIG=$HOME/.kube/config-bar kubectl get pod

Related

How does kubectl know which KUBECONFIG config file to use?

In the past, I used $HOME/.kube/config and downloaded the kubeconfig file from Rancher.
There was only one kubeconfig file (see docs) so I never wondered about which file was being used.
Yesterday, I switched to using the KUBECONFIG environment variable because there are several k8s clusters I'm now using (prod, stg, dev), and that's what the devops team told me to do, so I did it (and it worked). For the sake of discussion, the command was:
export KUBECONFIG=$HOME/.kube/config-prod:$HOME/.kube/config-stage:$HOME/.kube/config-dev
So now I can use the following commands to work with the staging K8s cluster.
kubectl config current-context
# verify I'm using the right k8s context, and switch if necessary
kubectl config use-context dev <<< WHERE DOES kubectl store this setting???
kubectl config set-context --current --namespace MyDeploymentNamespace
kubectl get pods
My question is: How does kubectl know which kubeconfig file is being used? Where does it store the currently selected context?
or instead
Does kubectl logically concatenate all of the kubeconfig files defined in the KUBECONFIG environment variable to get access to any of the defined Kubernetes clusters?
Searching for an answer
I found several answers to similar questions but not quite this question.
kubectl: Get location of KubeConfig file in use is similar and points to Finding the kubeconfig file being used,
which explains how to find the file in use:
kubectl config current-context -v6
I0203 14:59:05.926287 30654 loader.go:375] Config loaded from file: /home/user1/.kube/config
dev
I'm asking: where is this information stored?
The only guess I have is in the $HOME/.kube/cache directory in which I'm finding scores of files.
Some Kubernetes documentation
https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/
See:
The KUBECONFIG environment variable; and
Merging kubeconfig files
which says:
Otherwise, if the KUBECONFIG environment variable is set, use it as a list of files that should be merged. Merge the files listed in the KUBECONFIG environment variable according to these rules: (see link for details)
So to answer your question,
Does kubectl logically concatenate all of the kubeconfig files defined in the KUBECONFIG environment variable . . .
The answer is: the files are merged.
kubeconfig files contain sections for clusters, contexts, and users, and there's a current-context property too, which stores the value reported by kubectl config current-context and is revised by kubectl config use-context.
Per the rules in Merging kubeconfig files, the first occurrence of e.g. current-context is the value that wins.
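For example, a kubeconfig file stores the selected context in its top-level current-context field; the file below is illustrative, not taken from the question:
# ~/.kube/config-dev (illustrative)
apiVersion: v1
kind: Config
current-context: dev   # rewritten in place by `kubectl config use-context dev`
clusters:
- name: dev
  cluster:
    server: https://dev.example.com
contexts:
- name: dev
  context:
    cluster: dev
    user: dev-user
users:
- name: dev-user
  user:
    token: REDACTED
As I read the docs' modification rules, kubectl config use-context changes current-context in the file where it already exists, and creates it in the first listed file otherwise; so the answer to "where is it stored?" is: in one of the kubeconfig files themselves, not in $HOME/.kube/cache.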

How to execute a kubectl (k8s) command in all clusters?

I want to get all pods with a specific label in all contexts.
What I have to do now is iterate over the contexts found in kubectl config get-contexts:
kgp --context [CONTEXT] -n my-namespace -l app=my-app
Is there an equivalent of --all-namespaces, but for contexts?
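One workaround is a shell loop over the contexts; a minimal sketch (assuming kgp is an alias for kubectl get pods and reusing the namespace and label from above):
# Run the same query against every context in the merged kubeconfig
for ctx in $(kubectl config get-contexts -o name); do
  echo "== $ctx =="
  kubectl --context "$ctx" -n my-namespace get pods -l app=my-app
done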

Kubectl show expanded command when using aliases or shorthand

Kubectl has many aliases like svc, po, deploy etc.
Is there a way to show the expanded command for a command with shorthand?
For example, kubectl get po
expands to
kubectl get pods
On a similar question, kubectl api-resources is suggested: What's kubernetes abbreviation for deployments?
But it only gives the top-level shorthands;
e.g., kubectl get svc expands to kubectl get services,
but kubectl create svc expands to kubectl create service.
Kindly guide,
Thanks
kubectl explain may be of interest, e.g.:
kubectl explain po
KIND: Pod
VERSION: v1
DESCRIPTION:
Pod is a collection of containers that can run on a host. This resource is
created by clients and scheduled onto hosts.
There are plugins for kubectl too.
I've not tried it but kubectl explore may be worth a try.
Unfortunately, kubectl isn't documented by explainshell.com which would be a boon as it would also document the various flags e.g. -n (--namespace) and -o (--output).
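For the resource (get-style) shorthands, kubectl api-resources lists the mapping directly; a rough lookup for a single short name, assuming the default column layout (NAME, SHORTNAMES, ...), might be:
# Which resource does the short name "po" map to?
kubectl api-resources | awk 'NR==1 || $2 == "po"'
Subcommand shorthands such as kubectl create svc aren't covered there; those are aliases of kubectl's own subcommands rather than API resources.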

How to manage multiple GKE projects in one Google Cloud Account [duplicate]

This question already has answers here:
Run a single kubectl command for a specific project and cluster?
(2 answers)
Closed 2 years ago.
Given a situation where I have three separate GKE instances in different Google Cloud projects under the same billing account, how can I configure kubectl so that the commands I execute with it only apply to a specific cluster?
kubectl access to Kubernetes API servers is managed by configuration contexts.
Here is some documentation for how to do so. In a nutshell, you would stand up multiple Kubernetes clusters and then specify a configuration like so:
apiVersion: v1
kind: Config
preferences: {}
clusters:
- cluster:
  name: development
- cluster:
  name: scratch
users:
- name: developer
- name: experimenter
contexts:
- context:
  name: dev-frontend
- context:
  name: dev-storage
- context:
  name: exp-scratch
To automatically generate one, you can run the following commands:
# Add cluster details to the file
kubectl config --kubeconfig=config-demo set-cluster development --server=https://1.2.3.4 --certificate-authority=fake-ca-file
kubectl config --kubeconfig=config-demo set-cluster scratch --server=https://5.6.7.8 --insecure-skip-tls-verify
# Add user details to the configuration file
kubectl config --kubeconfig=config-demo set-credentials developer --client-certificate=fake-cert-file --client-key=fake-key-seefile
kubectl config --kubeconfig=config-demo set-credentials experimenter --username=exp --password=some-password
# Add context details to the configuration file
kubectl config --kubeconfig=config-demo set-context dev-frontend --cluster=development --namespace=frontend --user=developer
kubectl config --kubeconfig=config-demo set-context dev-storage --cluster=development --namespace=storage --user=developer
kubectl config --kubeconfig=config-demo set-context exp-scratch --cluster=scratch --namespace=default --user=experimenter
After that, you can save the context. Then, going forward, when you run a kubectl command, the action will apply to the cluster and namespace listed in the specified context. For example:
kubectl config --kubeconfig=config-demo use-context dev-frontend
To then change the context to another one you specified:
kubectl config --kubeconfig=config-demo use-context exp-scratch
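With GKE specifically, you can also let gcloud create one kubeconfig entry per cluster and then switch between the generated contexts; the project, cluster, and zone names below are placeholders:
# Fetch credentials for each cluster; gcloud adds a context per cluster to your kubeconfig
gcloud container clusters get-credentials dev-cluster --project my-dev-project --zone us-central1-a
gcloud container clusters get-credentials prod-cluster --project my-prod-project --zone us-central1-a
# List the generated contexts (named gke_<project>_<zone>_<cluster>) and switch between them
kubectl config get-contexts
kubectl config use-context gke_my-dev-project_us-central1-a_dev-cluster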

kubectl does not have config changed when gcloud config changes for Google Cloud Platform project

I switched the project config for gcloud using gcloud config set project abcxyz, but kubectl get pods is still returning the pods from the previous gcloud / Kubernetes project.
How do I update the project config to match gcloud's config?
After you've changed project, run:
gcloud container clusters get-credentials <cluster_name>
gcloud will then update your kubeconfig so that kubectl points at the cluster in your new project.
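If you don't know the cluster name in the newly selected project, you can look it up first (a --zone or --region flag may be required depending on how the cluster was created):
# List clusters in the currently configured project, then fetch credentials for one of them
gcloud container clusters list
gcloud container clusters get-credentials <cluster_name> --zone <zone>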