Given a situation where I have three separate GKE instances in different Google Cloud projects under the same billing account, how can I configure kubectl so that the commands I execute with it only apply to a specific cluster?
kubectl access to Kubernetes API servers is managed by configuration contexts.
Here is some documentation for how to do so. In a nutshell, you would stand up multiple Kubernetes clusters and then specify a configuration like so:
apiVersion: v1
kind: Config
preferences: {}

clusters:
- cluster:
  name: development
- cluster:
  name: scratch

users:
- name: developer
- name: experimenter

contexts:
- context:
  name: dev-frontend
- context:
  name: dev-storage
- context:
  name: exp-scratch
To generate such a configuration file, you can run the following commands:
# Add cluster details to the file
kubectl config --kubeconfig=config-demo set-cluster development --server=https://1.2.3.4 --certificate-authority=fake-ca-file
kubectl config --kubeconfig=config-demo set-cluster scratch --server=https://5.6.7.8 --insecure-skip-tls-verify
# Add user details to the configuration file
kubectl config --kubeconfig=config-demo set-credentials developer --client-certificate=fake-cert-file --client-key=fake-key-file
kubectl config --kubeconfig=config-demo set-credentials experimenter --username=exp --password=some-password
# Add context details to the configuration file
kubectl config --kubeconfig=config-demo set-context dev-frontend --cluster=development --namespace=frontend --user=developer
kubectl config --kubeconfig=config-demo set-context dev-storage --cluster=development --namespace=storage --user=developer
kubectl config --kubeconfig=config-demo set-context exp-scratch --cluster=scratch --namespace=default --user=experimenter
After that, you can set the context to use. Going forward, when you run a kubectl command, the action will apply to the cluster and namespace listed in the specified context. For example:
kubectl config --kubeconfig=config-demo use-context dev-frontend
To switch to another context you have defined:
kubectl config --kubeconfig=config-demo use-context exp-scratch
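You can double-check which context is active with, for example:
kubectl config --kubeconfig=config-demo get-contexts
kubectl config --kubeconfig=config-demo current-context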
Related
I am trying to create a basic GitLab CI/CD pipeline which will deploy my Node.js-based backend to an AWS kops-based Kubernetes cluster. For that I have created a gitlab-ci.yml file which defines the whole CI/CD pipeline. However, I am confused about how to get the Kubernetes cluster IP address so I can use it in gitlab-ci.yml, as in: kubectl config set-cluster k8s --server="$CLUSTER_ADDRESS"
where I want CLUSTER_ADDRESS to be configured as a GitLab variable in gitlab-ci.yml.
Any help would be appreciated.
variables:
  DOCKER_DRIVER: overlay2
  REGISTRY: $CI_REGISTRY
  IMAGE_TAG: $CI_REGISTRY_IMAGE
  K8S_DEPLOYMENT_NAME: deployment/$CI_PROJECT_NAME
  CONTAINER_NAME: $CI_PROJECT_NAME

stages:
  - build
  - build-docker
  - deploy

build-docker:
  image: docker:latest
  stage: build-docker
  services:
    - docker:dind
  tags:
    - privileged
  only:
    - Test
  script:
    - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN $REGISTRY
    - docker build --network host -t $IMAGE_NAME:$IMAGE_TAG -t $IMAGE_NAME:latest .
    - docker push $IMAGE_NAME:$IMAGE_TAG
    - docker push $IMAGE_NAME:latest

deploy-k8s-(stage):
  image:
    name: kubectl:latest
    entrypoint: [""]
  stage: deploy
  tags:
    - privileged
  # Optional: Manual gate
  when: manual
  dependencies:
    - build-docker
  script:
    - kubectl config set-cluster k8s --server="$CLUSTER_ADDRESS"
    - kubectl config set clusters.k8s.certificate-authority-data $CA_AUTH_DATA
    - kubectl config set-credentials gitlab-service-account --token=$K8S_TOKEN
    - kubectl config set-context default --cluster=k8s --user=gitlab-service-account --namespace=default
    - kubectl config use-context default
    - kubectl set image $K8S_DEPLOYMENT_NAME $CI_PROJECT_NAME=$IMAGE_TAG
    - kubectl rollout restart $K8S_DEPLOYMENT_NAME
If your current kubeconfig context is set to the cluster in question, you can run the following to get the cluster address you want:
kubectl config view --minify --raw \
--output 'jsonpath={.clusters[0].cluster.server}'
If it is not, you can add --context <cluster name> to the command.
In most cases this will be https://api.<cluster name>.
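If you need that value in the pipeline, one option is to capture it into a shell variable (or paste it into a GitLab CI/CD variable named CLUSTER_ADDRESS, as your .gitlab-ci.yml assumes), for example:
CLUSTER_ADDRESS=$(kubectl config view --minify --raw \
  --output 'jsonpath={.clusters[0].cluster.server}')
echo "$CLUSTER_ADDRESS"   # should print something like https://api.<cluster name>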
I am a total GCP newbie - I just created a new account.
I have installed a GKE cluster - it is active - and I have also downloaded the SDK.
I was able to deploy a pod on GKE using kubectl.
I have Tiller and the Helm client installed.
From the CLI when I try running a helm command
>helm install --name testngn ./nginx-test
Error: release testngn failed: namespaces "default" is forbidden: User
"system:serviceaccount:kube-system:default" cannot get resource "namespaces" in API group "" in the namespace "default"
I have given my user the "owner" role, so hopefully that is not the issue. But I am not sure how the CLI identifies the user and permissions (new to me). Also, the kubectl -n flag does not work with helm (?)
Most of the documentation simply says to run helm init, but that does not grant any permissions to Tiller, so it fails, unable to execute anything.
Create a service account with the cluster-admin role using rbac-config.yaml (a typical example is shown after the commands below), then run helm init with this service account to give Tiller the permissions it needs:
$ kubectl create -f rbac-config.yaml
serviceaccount "tiller" created
clusterrolebinding "tiller" created
$ helm init --service-account tiller
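For reference, a typical rbac-config.yaml for this (following the Helm v2 RBAC documentation; adjust names and namespace if yours differ) looks like:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system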
I am not exactly sure what's going on, which is why I am asking this question. When I run this command:
kubectl config get-clusters
I get:
arn:aws:eks:us-west-2:91xxxxx371:cluster/eks-cluster-1
arn:aws:eks:us-west-2:91xxxxx371:cluster/eks1
then I run:
kubectl config current-context
and I get:
arn:aws:eks:us-west-2:91xxxxx371:cluster/eks-cluster-1
and if I run kubectl get pods, I get the expected output.
But how do I switch to the other cluster/context? What's the difference between a cluster and a context? I can't figure out how these commands differ:
When I run them, I still get the pods from the wrong cluster:
root@4c2ab870baaf:/# kubectl config set-context arn:aws:eks:us-west-2:913617820371:cluster/eks1
Context "arn:aws:eks:us-west-2:913617820371:cluster/eks1" modified.
root@4c2ab870baaf:/#
root@4c2ab870baaf:/# kubectl get pods
NAME                            READY   STATUS    RESTARTS   AGE
apache-spike-579598949b-5bjjs   1/1     Running   0          14d
apache-spike-579598949b-957gv   1/1     Running   0          14d
apache-spike-579598949b-k49hf   1/1     Running   0          14d
root@4c2ab870baaf:/# kubectl config set-cluster arn:aws:eks:us-west-2:91xxxxxx371:cluster/eks1
Cluster "arn:aws:eks:us-west-2:91xxxxx371:cluster/eks1" set.
root@4c2ab870baaf:/# kubectl get pods
NAME                            READY   STATUS    RESTARTS   AGE
apache-spike-579598949b-5bjjs   1/1     Running   0          14d
apache-spike-579598949b-957gv   1/1     Running   0          14d
apache-spike-579598949b-k49hf   1/1     Running   0          14d
So I really don't know how to properly switch between clusters or contexts, or how to switch the auth routine when doing so.
For example:
contexts:
- context:
    cluster: arn:aws:eks:us-west-2:91xxxxx371:cluster/ignitecluster
    user: arn:aws:eks:us-west-2:91xxxx371:cluster/ignitecluster
  name: arn:aws:eks:us-west-2:91xxxxx371:cluster/ignitecluster
- context:
    cluster: arn:aws:eks:us-west-2:91xxxx371:cluster/teros-eks-cluster
    user: arn:aws:eks:us-west-2:91xxxxx371:cluster/teros-eks-cluster
  name: arn:aws:eks:us-west-2:91xxxxx371:cluster/teros-eks-cluster
To clarify the difference between set-context and use-context:
A context is a group of access parameters. Each context contains a Kubernetes cluster, a user, and a namespace. So when you run set-context, you are just adding context details to your configuration file ~/.kube/config, but it doesn't switch you to that context, whereas use-context actually does.
Thus, as Vasily mentioned, in order to switch between clusters run
kubectl config use-context <CONTEXT-NAME>
Also, if you run kubectl config get-contexts you will see a list of contexts with an indication of which one is current.
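For example (illustrative output; the * marks the current context):
$ kubectl config get-contexts
CURRENT   NAME                                                     CLUSTER                                                  AUTHINFO                                                 NAMESPACE
*         arn:aws:eks:us-west-2:91xxxxx371:cluster/eks-cluster-1   arn:aws:eks:us-west-2:91xxxxx371:cluster/eks-cluster-1   arn:aws:eks:us-west-2:91xxxxx371:cluster/eks-cluster-1
          arn:aws:eks:us-west-2:91xxxxx371:cluster/eks1            arn:aws:eks:us-west-2:91xxxxx371:cluster/eks1            arn:aws:eks:us-west-2:91xxxxx371:cluster/eks1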
Use
kubectl config use-context arn:aws:eks:us-west-2:91xxxxx371:cluster/eks-cluster-1
and
kubectl config use-context arn:aws:eks:us-west-2:91xxxxx371:cluster/eks1
Consider using kubectx for managing your contexts.
Usage
View all contexts (the current context is bolded):
$ kubectx
arn:aws:eks:us-east-1:12234567:cluster/eks_app
->gke_my_second_cluster
my-rnd
my-prod
Switch to other context:
$ kubectx my-rnd
Switched to context "my-rnd".
Bonus: at the same link, check out the kubens tool as well.
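A quick kubens sketch (namespace names here are just examples):
$ kubens              # list namespaces in the current context
$ kubens kube-system  # make kube-system the default namespace for kubectl
$ kubens -            # switch back to the previous namespace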
This is the best command to switch between different EKS clusters.
I use it every day.
aws eks update-kubeconfig --name example
Documentation:
https://docs.aws.amazon.com/cli/latest/reference/eks/update-kubeconfig.html
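For example, to register both clusters from the question under short context names (--alias is an optional flag that just renames the resulting context):
aws eks update-kubeconfig --region us-west-2 --name eks-cluster-1 --alias eks-cluster-1
aws eks update-kubeconfig --region us-west-2 --name eks1 --alias eks1
# then switch between them with:
kubectl config use-context eks1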
I am making a CI/CD pipeline, using AWS CodeBuild to build and deploy an application (service) to an AWS EKS cluster. I have installed kubectl and aws-iam-authenticator properly, but I am getting aws instead of aws-iam-authenticator as the command in the generated kubeconfig:
kind: Config
preferences: {}
users:
- name: arn:aws:eks:ap-south-1:*******:cluster/DevCluster
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - eks
      - get-token
      - --cluster-name
      - DevCluster
      command: aws
      env: null
[Container] 2019/05/14 04:32:09 Running command kubectl get svc
error: the server doesn't have a resource type "svc"
I do not want to edit the configmap manually because it comes through the pipeline.
As @Priya Rani said in the comments, they found the solution:
There is no issue with the configmap file; it's all right.
1) The CloudFormation (cluster + node instance) trusted role needs to be edited so it can communicate with CodeBuild.
2) A user data section needs to be added so the node instances can communicate with the cluster.
Why don't you just load a proper/dedicated kubeconfig file by setting the KUBECONFIG env variable inside your CI/CD pipeline, like this:
export KUBECONFIG=$KUBECONFIG:~/.kube/config-devel
which would include the right command to use aws-iam-authenticator:
#
# config-devel
#
...
kind: Config
preferences: {}
users:
- name: aws
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: aws-iam-authenticator
      args:
        - "token"
        - "-i"
        - "<cluster-name>"
The only two ways I can find to authenticate are by creating a new authentication context, e.g.
kubectl config set-credentials gajus/foo --token=foo
kubectl config set-cluster foo --insecure-skip-tls-verify=true --server=https://127.0.0.1
kubectl config set-context default/foo/gajus --user=gajus/foo --namespace=default --cluster=foo
kubectl config use-context default/foo/gajus
and by using the command line options, e.g.
kubectl --server=https://127.0.0.1 --insecure-skip-tls-verify=true --token=foo get po
Is there a way to set values for --server and other authentication options using environment variables?
The configuration file for credentials lives under $HOME/.kube/config (kubeconfig). You can create multiple configuration files like that and use the KUBECONFIG environment variable to point to the file you want to use for the current session.
export KUBECONFIG=~/.kube/config-foo
kubectl config set-credentials gajus/foo --token=foo
kubectl config set-cluster foo --insecure-skip-tls-verify=true --server=https://127.0.0.1
kubectl config set-context default/foo/gajus --user=gajus/foo --namespace=default --cluster=foo
kubectl config use-context default/foo/gajus
export KUBECONFIG=~/.kube/config-bar
...
KUBECONFIG=$HOME/.kube/config-foo kubectl get pod
KUBECONFIG=$HOME/.kube/config-bar kubectl get pod
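As a side note, KUBECONFIG also accepts a colon-separated list of files that kubectl merges, so you can keep both configs available in one session, for example:
export KUBECONFIG=$HOME/.kube/config-foo:$HOME/.kube/config-bar
kubectl config get-contexts
kubectl config use-context default/foo/gajus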