How does kubectl know which KUBECONFIG config file to use?

In the past, I used $HOME/.kube/config and downloaded the kubeconfig file from Rancher.
There was only one kubeconfig file (see docs) so I never wondered about which file was being used.
Yesterday, I switched to using the KUBECONFIG environment variable because there are several k8s clusters I'm now using (prod, stg, dev), and that's what the devops team told me to do, so I did it (and it worked). For the sake of discussion, the command was:
export KUBECONFIG=$HOME/.kube/config-prod:$HOME/.kube/config-stage:$HOME/.kube/config-dev
So now I can use the following commands to work with a particular K8s cluster (the example below switches to dev).
kubectl config current-context
# verify I'm using the right k8s context, and switch if necessary
kubectl config use-context dev <<< WHERE DOES kubectl store this setting???
kubectl config set-context --current --namespace MyDeploymentNamespace
kubectl get pods
My question is: how does kubectl know which kubeconfig file is being used? Where does it store the currently selected context?
Or, put differently:
Does kubectl logically concatenate all of the kubeconfig files defined in the KUBECONFIG environment variable to get access to any of the defined Kubernetes clusters?
Searching for an answer
I found several answers to similar questions but not quite this question.
kubectl: Get location of KubeConfig file in use is similar and points to Finding the kubeconfig file being used, which explains how to find the file in use:
kubectl config current-context -v6
I0203 14:59:05.926287 30654 loader.go:375] Config loaded from file: /home/user1/.kube/config
dev
I'm asking where this information is stored.
The only guess I have is the $HOME/.kube/cache directory, in which I'm finding scores of files.
Some Kubernetes documentation
https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/

See:
The KUBECONFIG environment variable; and
Merging kubeconfig files
which says:
Otherwise, if the KUBECONFIG environment variable is set, use it as a list of files that should be merged. Merge the files listed in the KUBECONFIG environment variable according to these rules: (see link for details)
So to answer your question,
Does kubectl logically concatenate all of the kubeconfig files defined in the KUBECONFIG environment variable . . .
The answer is: the files are merged.
Kubeconfig files contain sections for clusters, contexts, and users, plus a top-level current-context property; that property is what kubectl config current-context reads and what kubectl config use-context rewrites.
Per the rules in Merging kubeconfig files, the first file that sets a value such as current-context is the one that wins.
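For example, with the KUBECONFIG from the question exported, you can inspect the merged result and see where the context selection lives (these are standard kubectl subcommands; the output noted in comments is illustrative):
kubectl config view              # the merged view of every file listed in KUBECONFIG (credentials redacted)
kubectl config view --minify     # only the stanza for the currently selected context
kubectl config use-context dev   # rewrites current-context in one of the listed kubeconfig files, not in ~/.kube/cache
kubectl config current-context   # prints: dev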

Related

Error `executable aws not found` with kubectl config defined by `aws eks update-kubeconfig`

I defined my KUBECONFIG for the AWS EKS cluster:
aws eks update-kubeconfig --region eu-west-1 --name yb-demo
but got the following error when using kubectl:
...
Use "kubectl <command> --help" for more information about a given command.
Use "kubectl options" for a list of global command-line options (applies to all commands).
[opc@C eks]$ kubectl get sc
Unable to connect to the server: getting credentials: exec: executable aws not found
It looks like you are trying to use a client-go credential plugin that is not installed.
To learn more about this feature, consult the documentation available at:
https://kubernetes.io/docs/reference/access-authn-authz/authentication/#client-go-credential-plugins
You can also append your custom AWS CLI installation path to the $PATH variable in ~/.bash_profile: export PATH=$PATH:<path to aws cli program directory>. This way you do not need to sed the kubeconfig file every time you add an EKS cluster. You will also be able to use the aws command at the prompt without specifying the full path to the program for every execution.
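A minimal sketch of that approach (the /usr/local/bin path is only an example; use whatever which aws reports on your machine):
which aws                                  # e.g. /usr/local/bin/aws
echo 'export PATH=$PATH:/usr/local/bin' >> ~/.bash_profile
source ~/.bash_profile
kubectl get sc                             # the client-go exec plugin can now locate "aws"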
I had this problem when installing kubectx on Ubuntu Linux via a Snap package. It does not seem to be able to access the AWS CLI then. I worked around the issue by removing the Snap package and just using the shell scripts instead.
It seems that the command: aws entry in ~/.kube/config doesn't use the PATH environment variable, so the binary isn't found. Here is how to change it to the full path:
sed -e "/command: aws/s?aws?$(which aws)?" -i ~/.kube/config

We are in the process of configuring Kubectl to manage our cluster in Azure Kubernetes Service

1. Can we get the Azure kubectl (exe/bin) as a downloadable link instead of installing it using the command line 'az aks install-cli', or can I use the kubectl from Kubernetes?
2. Is there any azure-cli command to change the default location of the kubeconfig file? By default it points to '.kube\config' on Windows.
Example: instead of using the '--kubeconfig' flag, as in 'kubectl get nodes --kubeconfig D:\config', can we change the default location?
You can use the default Kubernetes CLI (kubectl) and get credentials with the Azure CLI, az aks get-credentials --resource-group <RESOURCE_GROUP> --name <AKS_CLUSTER_NAME>, or via the Azure portal UI.
You can use the flag --kubeconfig=<PATH>, or you can override the default location ($HOME/.kube/config) by setting the KUBECONFIG environment variable to <PATH>.
Example:
# cmd.exe
set KUBECONFIG=D:\config
# PowerShell
$env:KUBECONFIG = "D:\config"
# bash
export KUBECONFIG=/path/to/config
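Whichever way you set it, you can confirm that kubectl picked up the intended file with standard kubectl commands:
kubectl config get-contexts      # list the contexts found in the file KUBECONFIG points to
kubectl cluster-info             # confirm which API server kubectl will talk to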

How to expose kubeconfig file after create an EKS cluster by aws_eks_cluster with Terraform?

The eks module can generate an output kubeconfig.
The aws_eks_cluster resource doesn't have this feature.
Why not add this feature?
This feature was removed in v18 of the EKS module; from the docs:
Support for managing kubeconfig and its associated local_file
resources have been removed; users are able to use the awscli provided
aws eks update-kubeconfig --name <cluster_name> to update their local
kubeconfig as necessary
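For example, a rough sketch of that workflow after terraform apply (my-cluster and eu-west-1 are placeholders for whatever you passed to aws_eks_cluster; the --kubeconfig flag is optional and just writes the file next to your Terraform code instead of merging into ~/.kube/config):
aws eks update-kubeconfig --region eu-west-1 --name my-cluster --kubeconfig ./kubeconfig_my-cluster
kubectl --kubeconfig ./kubeconfig_my-cluster get nodes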
The Terraform eks module exposes that file by default; you can take a look at their files or even use their module. It's relatively easy to set up and works great. Links: eks module. I am not 100% sure this is the right section for it, but you can take a look at their whole repo.

How to set up GKE configuration in a yml file for Django app?

I am following the doc linked below to deploy a Django application on Google Kubernetes Engine.
Setting up your GKE configuration in a yml file
In the step Setting up your GKE configuration, there is a yml file called polls.yaml. Where should I find this file? If it doesn't exist yet, where should I create it and which template should I follow?
I suspect this is the polls.yaml file you were looking for.
I used the following info
git clone https://github.com/GoogleCloudPlatform/python-docs-samples.git
cd python-docs-samples/kubernetes_engine/django_tutorial
in order to locate your polls.yaml file.
Oh, I missed a step I took one week ago: "polls" refers to the polls cluster I created earlier with:
gcloud container clusters create polls --scopes "https://www.googleapis.com/auth/userinfo.email","cloud-platform" --num-nodes 4 --zone "us-central1-a"
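Once the cluster exists and the repo from the answer above is cloned, the deploy step is roughly (a sketch; the tutorial has you edit polls.yaml with your own image and settings first):
gcloud container clusters get-credentials polls --zone us-central1-a
kubectl apply -f polls.yaml
kubectl get pods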

How to setup Kubernetes Master HA on AWS

What I am trying to do:
I have set up a Kubernetes cluster using the documentation available on the Kubernetes website (http://kubernetes.io/v1.1/docs/getting-started-guides/aws.html). Using kube-up.sh, I was able to bring the cluster up with 1 master and 3 minions (as highlighted in the blue rectangle in the diagram below). From the documentation, as far as I know, we can add minions as and when required, so from my point of view the k8s master instance is a single point of failure when it comes to high availability.
[Diagram: Kubernetes Master HA on AWS]
So I am trying to set up an HA k8s master layer with the three master nodes shown in the diagram above. To accomplish this I am following the Kubernetes high-availability cluster guide: http://kubernetes.io/v1.1/docs/admin/high-availability.html#establishing-a-redundant-reliable-data-storage-layer
What I have done:
Set up a k8s cluster using kube-up.sh and provider aws (master1 plus minion1, minion2, and minion3)
Set up two fresh master instances (master2 and master3)
I then started configuring an etcd cluster on master1, master2, and master3 by following the link below:
http://kubernetes.io/v1.1/docs/admin/high-availability.html#establishing-a-redundant-reliable-data-storage-layer
In short, I copied etcd.yaml from the Kubernetes website (http://kubernetes.io/v1.1/docs/admin/high-availability/etcd.yaml) and updated NODE_IP, NODE_NAME, and the discovery token on all three nodes as shown below.
NODE_NAME   NODE_IP        DISCOVERY_TOKEN
Master1     172.20.3.150   https://discovery.etcd.io/5d84f4e97f6e47b07bf81be243805bed
Master2     172.20.3.200   https://discovery.etcd.io/5d84f4e97f6e47b07bf81be243805bed
Master3     172.20.3.250   https://discovery.etcd.io/5d84f4e97f6e47b07bf81be243805bed
And on running etcdctl member list on all three nodes, I am getting:
$ docker exec <container-id> etcdctl member list
ce2a822cea30bfca: name=default peerURLs=http://localhost:2380,http://localhost:7001 clientURLs=http://127.0.0.1:4001
As per the documentation we need to keep etcd.yaml in /etc/kubernetes/manifests; this directory already contains etcd.manifest and etcd-event.manifest files. For testing, I modified the etcd.manifest file with the etcd parameters.
After making the above changes I forcefully terminated the docker container; the container reappeared after a few seconds, but I was getting the error below when running kubectl get nodes:
error: couldn't read version from server: Get http://localhost:8080/api: dial tcp 127.0.0.1:8080: connection refused
So please kindly suggest how I can set up a highly available k8s master on AWS.
To configure an HA master, you should follow the High Availability Kubernetes Cluster document, in particular making sure you have replicated storage across failure domains and a load balancer in front of your replicated apiservers.
Setting up HA controllers for kubernetes is not trivial and I can't provide all the details here but I'll outline what was successful for me.
Use kube-aws to set up a single-controller cluster: https://coreos.com/kubernetes/docs/latest/kubernetes-on-aws.html. This will create CloudFormation stack templates and cloud-config templates that you can use as a starting point.
Go to the AWS CloudFormation Management Console, click the "Template" tab and copy out the complete stack configuration. Alternatively, use $ kube-aws up --export to generate the CloudFormation stack file.
Use the userdata cloud-config templates generated by kube-aws and replace the variables with actual values. This guide will help you determine what those values should be: https://coreos.com/kubernetes/docs/latest/getting-started.html. In my case I ended up with four cloud-configs:
cloud-config-controller-0
cloud-config-controller-1
cloud-config-controller-2
cloud-config-worker
Validate your new cloud-configs here: https://coreos.com/validate/
Insert your cloud-configs into the CloudFormation stack config. First compress and encode your cloud config:
$ gzip -k cloud-config-controller-0
$ cat cloud-config-controller-0.gz | base64 > cloud-config-controller-0.enc
Now copy the content of your encoded cloud-config into the CloudFormation config. Look for the UserData key for the appropriate InstanceController. (I added additional InstanceController objects for the additional controllers.)
Update the stack at the AWS CloudFormation Management Console using your newly created CloudFormation config.
You will also need to generate TLS assets: https://coreos.com/kubernetes/docs/latest/openssl.html. These assets will have to be compressed and encoded (same gzip and base64 as above), then inserted into your userdata cloud-configs.
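For example, the same treatment applied to one of the generated certificates (the file name follows the CoreOS guide and is illustrative):
$ gzip -k apiserver.pem
$ cat apiserver.pem.gz | base64 > apiserver.pem.enc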
When debugging on the server, journalctl is your friend:
$ journalctl -u oem-cloudinit # to debug problems with your cloud-config
$ journalctl -u etcd2
$ journalctl -u kubelet
Hope that helps.
There is also the kops project.
From the project README:
Operate HA Kubernetes the Kubernetes Way
also:
We like to think of it as kubectl for clusters
Download the latest release, e.g.:
cd ~/opt
wget https://github.com/kubernetes/kops/releases/download/v1.4.1/kops-linux-amd64
mv kops-linux-amd64 kops
chmod +x kops
ln -s ~/opt/kops ~/bin/kops
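A quick check that the binary is installed and on your PATH (kops version is a standard subcommand):
kops version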
See kops usage, especially:
kops create cluster
kops update cluster
Assuming you already have an s3://my-kops bucket and a kops.example.com hosted zone.
Create configuration:
kops create cluster --state=s3://my-kops --cloud=aws \
--name=kops.example.com \
--dns-zone=kops.example.com \
--ssh-public-key=~/.ssh/my_rsa.pub \
--master-size=t2.medium \
--master-zones=eu-west-1a,eu-west-1b,eu-west-1c \
--network-cidr=10.0.0.0/22 \
--node-count=3 \
--node-size=t2.micro \
--zones=eu-west-1a,eu-west-1b,eu-west-1c
Edit configuration:
kops edit cluster --state=s3://my-kops
Export terraform scripts:
kops update cluster --state=s3://my-kops --name=kops.example.com --target=terraform
Apply changes directly:
kops update cluster --state=s3://my-kops --name=kops.example.com --yes
List cluster:
kops get cluster --state s3://my-kops
Delete cluster:
kops delete cluster --state s3://my-kops --name=kops.example.com --yes
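To avoid repeating --state on every command, kops can also read the state-store location from the KOPS_STATE_STORE environment variable (using the same s3://my-kops bucket as above):
export KOPS_STATE_STORE=s3://my-kops
kops get cluster
kops update cluster kops.example.com --yes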