Rancher - Create namespace within specific project via CLI - kubectl

I would like to create a namespace within a project using the Rancher CLI.
What I already did:
rancher login https://URI --token abcde --context c-abc:p-abc
rancher kubectl create namespace myns --dry-run=client -o yaml | rancher kubectl apply -f -
But the namespace is created in "default" and not in my project.

There are a couple ways I've found to do this:
Create it from a yaml file using kubectl apply -f namespace.yml
apiVersion: v1
kind: Namespace
metadata:
  name: NAME
  annotations:
    field.cattle.io/projectId: PROJECT_NAMESPACE:PROJECT_NAME
If you have the rights to modify a namespace that's not in a project, you can add the annotation after you create it:
kubectl annotate namespace work-test field.cattle.io/projectId=PROJECT_NAMESPACE:PROJECT_NAME
You can get both values by viewing the Project's YAML: in its metadata section, metadata.name is the project ID (PROJECT_NAME) and metadata.namespace is the cluster ID (PROJECT_NAMESPACE), and the annotation value is cluster ID first, then project ID.
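If you want to keep the original one-liner approach, a sketch (assuming the cluster/project IDs from the question, c-abc:p-abc) is to inject the annotation locally before applying:
rancher kubectl create namespace myns --dry-run=client -o yaml \
  | rancher kubectl annotate --local -f - field.cattle.io/projectId=c-abc:p-abc -o yaml \
  | rancher kubectl apply -f -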

Related

Kubectl show expanded command when using aliases or shorthand

kubectl has many aliases like svc, po, deploy, etc.
Is there a way to show the expanded command for a command that uses shorthand?
For example, kubectl get po
to
kubectl get pods
On a similar question, api-resources is used: What's kubernetes abbreviation for deployments?
But it only gives the top-level shorthands;
for example, kubectl get svc expands to kubectl get services,
but kubectl create svc expands to kubectl create service.
Kindly guide,
Thanks
kubectl explain may be of interest e.g.:
kubectl explain po
KIND: Pod
VERSION: v1
DESCRIPTION:
Pod is a collection of containers that can run on a host. This resource is
created by clients and scheduled onto hosts.
There are plugins for kubectl too.
I've not tried it but kubectl explore may be worth a try.
Unfortunately, kubectl isn't documented by explainshell.com, which would be a boon as it would also document the various flags, e.g. -n (--namespace) and -o (--output).
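If you only need to resolve a shorthand to its full resource name, api-resources can also be filtered; the SHORTNAMES column maps each abbreviation to its resource (the shorthands below are just examples):
kubectl api-resources | head -1
kubectl api-resources | grep -wE 'po|svc|deploy'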

How to manage multiple GKE projects in one Google Cloud Account [duplicate]

This question already has answers here:
Run a single kubectl command for a specific project and cluster?
(2 answers)
Closed 2 years ago.
Given a situation where I have three separate GKE instances in different Google Cloud projects under the same billing account, how can I configure kubectl so that the commands I execute with it only apply to a specific cluster?
kubectl access to Kubernetes API servers is managed by configuration contexts.
Here is some documentation for how to do so. In a nutshell, you would stand up multiple Kubernetes clusters and then specify a configuration like so:
apiVersion: v1
kind: Config
preferences: {}
clusters:
- cluster:
  name: development
- cluster:
  name: scratch
users:
- name: developer
- name: experimenter
contexts:
- context:
  name: dev-frontend
- context:
  name: dev-storage
- context:
  name: exp-scratch
To automatically generate one, you can run the following commands:
# Add cluster details to the file
kubectl config --kubeconfig=config-demo set-cluster development --server=https://1.2.3.4 --certificate-authority=fake-ca-file
kubectl config --kubeconfig=config-demo set-cluster scratch --server=https://5.6.7.8 --insecure-skip-tls-verify
# Add user details to the configuration file
kubectl config --kubeconfig=config-demo set-credentials developer --client-certificate=fake-cert-file --client-key=fake-key-file
kubectl config --kubeconfig=config-demo set-credentials experimenter --username=exp --password=some-password
# Add context details to the configuration file
kubectl config --kubeconfig=config-demo set-context dev-frontend --cluster=development --namespace=frontend --user=developer
kubectl config --kubeconfig=config-demo set-context dev-storage --cluster=development --namespace=storage --user=developer
kubectl config --kubeconfig=config-demo set-context exp-scratch --cluster=scratch --namespace=default --user=experimenter
After that, you can select which context to use. Then, going forward, when you run a kubectl command, the action will apply to the cluster and namespace listed in the specified context. For example:
kubectl config --kubeconfig=config-demo use-context dev-frontend
To then change the context to another one you specified:
kubectl config --kubeconfig=config-demo use-context exp-scratch
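For GKE specifically, you usually don't need to write the kubeconfig by hand: gcloud can generate the cluster, user, and context entries for you, and the --context flag lets you target one cluster per command without switching. A sketch, with placeholder project, zone, and cluster names:
gcloud container clusters get-credentials dev-cluster --project my-dev-project --zone us-central1-a
gcloud container clusters get-credentials scratch-cluster --project my-scratch-project --zone europe-west1-b
# Run against a specific cluster without changing the current context
kubectl --context=gke_my-dev-project_us-central1-a_dev-cluster get pods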

helm install failing on GKE [duplicate]

This question already has answers here:
Not able to create Prometheus in K8S cluster
(2 answers)
Closed 3 years ago.
I am a total GCP newbie - I just created a new account.
I have installed a GKE cluster - it is active - and also downloaded the SDK.
I was able to deploy a pod on GKE using kubectl.
Have tiller and helm client installed.
From the CLI when I try running a helm command
>helm install --name testngn ./nginx-test
Error: release testngn failed: namespaces "default" is forbidden: User
"system:serviceaccount:kube-system:default" cannot get resource "namespaces" in API group "" in the namespace "default"
I have given my user the "owner" role, so hopefully that is not the issue. But I'm not sure how the CLI identifies the user and permissions (new to me). Also, the kubectl -n flag does not work with helm (?)
Most of the documentation simply says to run helm init - but that does not grant any permissions to Tiller, so it would fail, unable to execute anything.
Create a ServiceAccount with the cluster-admin role using rbac-config.yaml.
Then run helm init with this service account to give Tiller the permissions it needs.
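The answer doesn't show rbac-config.yaml itself; a minimal version, following the Helm v2 RBAC documentation, looks like this:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: kube-system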
$ kubectl create -f rbac-config.yaml
serviceaccount "tiller" created
clusterrolebinding "tiller" created
$ helm init --service-account tiller
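To verify Tiller came up with the new service account (a quick sanity check, not part of the original answer):
$ kubectl -n kube-system get deploy tiller-deploy -o jsonpath='{.spec.template.spec.serviceAccountName}'
$ helm version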

Query config file value on deployed cluster K8S

I have admin rights on our cluster and I want to query the config YAML that has already been applied to the cluster. Specifically, I want the values of all resources with kind: NetworkPolicy. Is it possible to do this from the terminal?
In a small cluster I can do
kubectl get deploy --all-namespaces -o yaml > dep.yaml
but in a big cluster there is a lot of data ...
To get the YAML manifests of all NetworkPolicy resources in the cluster:
kubectl get netpol --all-namespaces -o yaml
If you need only some of them, you can filter based on label values or field values with the -l/--selector and --field-selector options.
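For example (the label key/value and the policy name here are hypothetical):
kubectl get netpol --all-namespaces -l app=backend -o yaml
kubectl get netpol --all-namespaces --field-selector metadata.name=default-deny -o yaml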

Unable to get aws-iam-authenticator in config-map while applying through AWS CodeBuild

I am building a CI/CD pipeline, using AWS CodeBuild to build and deploy an application (service) to an AWS EKS cluster. I have installed kubectl and aws-iam-authenticator properly,
but I am getting aws instead of aws-iam-authenticator as the command in my kubeconfig:
kind: Config
preferences: {}
users:
- name: arn:aws:eks:ap-south-1:*******:cluster/DevCluster
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - eks
      - get-token
      - --cluster-name
      - DevCluster
      command: aws
      env: null
[Container] 2019/05/14 04:32:09 Running command kubectl get svc 
error: the server doesn't have a resource type "svc"
I do not want to edit the configmap manually because it comes through the pipeline.
As @Priya Rani said in the comments, they found the solution.
There is no issue with the configmap file; it's all right.
1) The CloudFormation (cluster + node instance) trust role needs to be edited so it can communicate with CodeBuild.
2) A userdata section needs to be added so the node instances can communicate with the cluster.
Why don't you just load a proper/dedicated kubeconfig file, by setting the KUBECONFIG env variable inside your CI/CD pipeline, like this:
export KUBECONFIG=$KUBECONFIG:~/.kube/config-devel
which would include the right command to use with aws-iam-authenticator:
#
# config-devel
#
...
kind: Config
preferences: {}
users:
- name: aws
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: aws-iam-authenticator
      args:
      - "token"
      - "-i"
      - "<cluster-name>"