Using kubeconfig contexts simply - amazon-web-services

I now need to work with multiple clusters. What I currently do is simply put all the kubeconfig files under the .kube folder and, each time I need a different cluster, overwrite the config file with it, e.g.
mv config clusterone
vi config
then paste the new kubeconfig into the config file and start working with the new cluster.
Let's say that inside /Users/i033346/.kube I have all the kubeconfig files, one per cluster.
Is there a way to use them as contexts without creating a new file that contains all of them?
I also tried kubectx, however when I use:
export KUBECONFIG=/Users/i033346/.kube/trial
and
export KUBECONFIG=/Users/i033346/.kube/prod
and then run kubectx, I always get only the last one and don't get a list of the defined contexts. Any idea?

The KUBECONFIG env var supports multiple files, colon-separated (semicolon-separated on Windows):
export KUBECONFIG="/Users/i033346/.kube/trial:/Users/i033346/.kube/prod"
This should be enough for kubectx to see all of them.
You can even merge all configs into one file:
export KUBECONFIG="/Users/i033346/.kube/trial:/Users/i033346/.kube/prod"
kubectl config view --flatten > ~/.kube/config
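As a quick sanity check after merging, you could fall back to the default config file and list the contexts it now contains; a minimal sketch (the context name trial is an assumption, use whatever names get-contexts reports):
# After merging, use the default ~/.kube/config and list contexts
unset KUBECONFIG
kubectl config get-contexts
kubectl config use-context trial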

What I used to do in this scenario is create multiple aliases pointing to the different config files, e.g. in your ~/.bashrc or ~/.zshrc:
alias k-cluster1="kubectl --kubeconfig /my_path/config_cluster1"
alias k-cluster2="kubectl --kubeconfig /my_path/config_cluster2"
After reloading the terminal, k-cluster1 get pods or k-cluster2 get pods should work.
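If you'd rather not define one alias per cluster, a small shell function along the same lines does the job for any config file in the folder; a sketch only, assuming the same /my_path/config_<name> layout (the kc name is made up):
# Usage: kc cluster1 get pods
kc() {
  local cluster="$1"; shift
  kubectl --kubeconfig "/my_path/config_${cluster}" "$@"
}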

Related

Migrate Secrets from SecretManager in GCP

Hi, I have my secrets in Secret Manager in one project and want to know how to copy or migrate them to another project.
Is there a mechanism to do this smoothly?
As of today there is no way to have GCP move the Secret between projects for you.
It's a good feature request that you can file here: https://b.corp.google.com/issues/new?component=784854&pli=1&template=1380926
I just had to deal with something similar myself and came up with a simple bash script that does what I need. I run Linux.
There are some prerequisites:
Download the gcloud CLI for your OS.
Get the list of secrets you want to migrate (you can do it by pointing gcloud at the source project with gcloud config set project [SOURCE_PROJECT] and then running gcloud secrets list).
Once you have the list, convert it textually to a list in
the format "secret_a" "secret_b" ... (see the sketch after these prerequisites for a shortcut).
The latest version of each secret is the one that gets copied, so it must not be in a "disabled" state, or the script won't be able to move it.
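As a shortcut for the "convert it textually" step above, gcloud's --format flag can print just the secret names, which you can then quote for the array; a sketch, assuming the source project is already the active one (output details can vary by gcloud version):
# Print one secret name per line in the active (source) project;
# basename() keeps only the final segment of the resource name.
gcloud secrets list --format="value(name.basename())"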
Then you can run:
# point gcloud at the source project
gcloud config set project [SOURCE_PROJECT]
# the secrets to copy; replace with your own names
declare -a secret_array=("secret_a" "secret_b" ...)
for i in "${secret_array[@]}"
do
    SECRET_NAME="${i}_env_file"
    # read the latest version from the source project
    SECRET_VALUE=$(gcloud secrets versions access "latest" --secret="${SECRET_NAME}")
    echo "$SECRET_VALUE" > secret_migrate
    # create the secret in the target project from the temporary file
    gcloud secrets create "${SECRET_NAME}" --project [TARGET_PROJECT] --data-file=secret_migrate
done
rm secret_migrate
What this script does is set the project to the source one, then fetch each secret one by one, save it to a file, and upload it to the target project.
The file is rewritten for each secret and deleted at the end.
You need to replace the secrets array (secret_array) and the project names ([SOURCE_PROJECT], [TARGET_PROJECT]) with your own data.
I used the version below, which uses a different naming scheme and also sets labels based on the secret name:
gcloud config set project [SOURCE_PROJECT]
declare -a secret_array=("secret_a" "secret_b" ...)
for i in "${secret_array[@]}"
do
    SECRET_NAME="${i}"
    SECRET_VALUE=$(gcloud secrets versions access "latest" --secret="${SECRET_NAME}")
    echo "$SECRET_VALUE" > secret_migrate
    gcloud secrets create "${SECRET_NAME}" --project [TARGET_PROJECT] --data-file=secret_migrate --labels=environment=test,service="${i}"
done
rm secret_migrate
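To double-check that everything landed, listing the secrets in the target project is a quick verification (using the same [TARGET_PROJECT] placeholder as above):
gcloud secrets list --project [TARGET_PROJECT]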
All "secrets" MUST be decrypted and compiled in order to be processed by a CPU as hardware decryption isn't practical for commercial use. Because of this getting your passwords/configuration (in PLAIN TEXT) is as simple as logging into one of your deployments that has the so called "secrets" (plain text secrets...) and typing 'env' a command used to list all environment variables on most Linux systems.
If your secret is a text file just use the program 'cat' to read the file. I haven't found a way to read these tools from GCP directly because "security" is paramount.
GCP has methods of exec'ing into a running container but you could also look into kubectl commands for this too. I believe the "PLAIN TEXT" secrets are encrypted on googles servers then decrypted when they're put into your cluser/pod.
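For example, with cluster access, Kubernetes Secret values can be read directly; a minimal sketch (the names my-secret, password, and my-pod are hypothetical):
# Print the decoded value of one key of a Secret
kubectl get secret my-secret -o jsonpath='{.data.password}' | base64 --decode
# Or exec into a running pod and dump its environment
kubectl exec -it my-pod -- env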

How do I switch between different Kubernetes contexts described in distinct kubeconfig yaml files fast?

I need to access several Kubernetes clusters. For each of them, I got a kubeconfig yaml file, e.g. kubeconfig-cluster1.yaml and kubeconfig-cluster2.yaml.
How can I easily switch between these configurations? I mean, without setting the KUBECONFIG environment variable manually to one of these files?
You can declare all contexts in the KUBECONFIG environment variable:
The KUBECONFIG environment variable holds a list of kubeconfig files. For Linux and Mac, the list is colon-delimited. For Windows, the list is semicolon-delimited.
To autodetect the contexts based on the kubeconfig files, assuming they're all located in the ~/.kube folder, and assign them as a colon-separated list to the KUBECONFIG environment variable, you could add a script to your ~/.bashrc or ~/.zshrc:
# Autodetect kubeconfig files to enable switching between them with kubectx
export KUBECONFIG=`ls -1 ~/.kube/kubeconfig-* | paste -sd ":" -`
Then, to switch between these kubectl contexts (with autocomplete!), have a look at the kubectx utility.
The kubectx README page contains installation instructions.
$ kubectx cluster1
Switched to context "cluster1".
$ kubectx cluster2
Switched to context "cluster2".
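If you'd rather not install kubectx, plain kubectl can do the same switching once KUBECONFIG lists all the files (a sketch; cluster1 is whatever context name get-contexts reports):
kubectl config get-contexts
kubectl config use-context cluster1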
I also had multiple Kubernetes clusters to manage. I wrote a script to switch kubeconfig files and namespaces easily. Hope it can help you.
. k-use -k <kubeconfig> -n <namespace>
https://github.com/kingonion/k-use

How do I resolve command not found in AWS EC2?

All of a sudden no Linux command (ls, vi, etc.) is working in my AWS EC2 instance, and I get a message saying command not found.
I had launched an EC2 instance and all Linux commands were working fine.
I then uploaded some files to EC2 and extracted them (setting up my environment).
I made the following changes to the ~/.bashrc file:
export M2_HOME=/home/ec2-user/apache-maven-3.6.0
export JAVA_HOME=/home/ec2-user/jdk1.8.0_151
export ANT_HOME=/home/ec2-user/apache-ant-1.9.13
export PATH=/home/ec2-user/jdk1.7.0_80/bin:/home/ec2-user/apache-maven-3.6.0/bin
export JBOSS_HOME=target/wildfly-run/wildfly-11.0.0.Final
and then executed the command below in my EC2 instance:
source ~/.bashrc
After this, Linux commands (ls, vi, cat, etc.) are not working; however, the which and pwd commands still work.
Can someone help me correct the PATH settings so that my commands start executing normally?
You should append the original PATH to the additions you made (using the $PATH variable), like below:
export PATH=/home/ec2-user/jdk1.7.0_80/bin:/home/ec2-user/apache-maven-3.6.0/bin:$PATH
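If the current shell is already broken (because the faulty ~/.bashrc has been sourced), one way out is to restore a sane PATH by hand before fixing the file, or simply open a new SSH session; a sketch using the usual system directories:
# Temporarily restore standard locations so ls, vi, etc. resolve again
export PATH=/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:$PATH
# Then edit ~/.bashrc to append :$PATH to its export PATH line and re-source it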
Changing the value of PATH as below sorted out all the issues:
export PATH=/usr/bin:/usr/local/sbin:/sbin:/bin:/usr/sbin:/usr/local/bin:/opt/aws/bin:/root/bin:/home/ec2-user/jdk1.7.0_80/bin:/home/ec2-user/apache-maven-3.5.2/bin:/home/ec2-user/apache-ant-1.9.14/bin
Below is the system default PATH:
PATH=/usr/bin:/usr/local/sbin:/sbin:/bin:/usr/sbin:/usr/local/bin:/opt/aws/bin:/root/bin

How to rename existing 'named configurations' using gcloud cli in GCP?

I would like to know if there is a way to rename an existing gcloud named configuration, e.g. I would like to rename 'foo' to 'bar'.
I couldn't find anything on this in the gcloud reference documentation.
Technically, it is not possible to change the name of that configuration using the gcloud command.
However, you can change it doing this little workaround:
Use gcloud config configurations activate [YOUR_CONFIG_NAME] to activate the configuration you wish.
Use gcloud info --format='get(config.paths.active_config_path)' to find the directory where your configurations are stored. You will get the path of the file of that specific configuration, looking like this /tmp/tmp.XAfddVDdg/configurations/[YOUR_CONFIG_NAME]
If you cd into the directory /tmp/tmp.XAfddVDdg/configurations/, you will find all your configurations there. Every configuration will be named there like this config_[YOUR_CONFIG_NAME]. Modifying the part that matches the name of your configuration will successfully change its name. DO NOT delete the config_ part of the name.
After this, if you print all the configurations using gcloud config configurations list, you will see your configuration renamed, but none will be active. Just activate it with gcloud config configurations activate [YOUR_CONFIG_NAME], and you will be good to go.
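Put together, the workaround above boils down to a few commands; a sketch only, using foo and bar from the question as the old and new names:
gcloud config configurations activate foo
CONFIG_DIR=$(dirname "$(gcloud info --format='get(config.paths.active_config_path)')")
mv "$CONFIG_DIR/config_foo" "$CONFIG_DIR/config_bar"   # keep the config_ prefix
gcloud config configurations activate bar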
Don't know when this was added, but there is now a rename command for configurations, so there's no more need to jump through hoops by deleting and recreating configurations or editing the file directly:
gcloud config configurations rename CONFIGURATION_NAME --new-name=NEW_NAME
https://cloud.google.com/sdk/gcloud/reference/config/configurations/rename

How to set environment variable for root user at start-up?

I'm trying to add memory usage monitoring to the monitoring tab of an instance at console.aws.amazon.com. It's an instance running Amazon Linux AMI 2013.09.2. I have found the Amazon CloudWatch Monitoring Scripts for Linux, and specifically mon-put-instance-data.pl, which lets me collect memory stats and report them to CloudWatch as custom metrics.
To get this working I need to set the environment variable AWS_CREDENTIAL_FILE to point to a file containing my AWSAccessKeyId and AWSSecretKey. I do this by typing:
export AWS_CREDENTIAL_FILE=/home/ec2-user/aws-scripts-mon/awscreds.template
To avoid having to type this over and over again, I'm looking for a way to set the environment variable at startup. I have tried adding the line to these files:
/etc/rc.local
/etc/profile
/home/ec2-user/.bash_profile
Where should I put it so that it also applies to the root user? If I set the variable in /home/ec2-user/.bash_profile, the variable is set for ec2-user but not for root. If I then run sudo -E su it works, but I don't know if this is the best way to go about it.
Create a .sh file containing the export line and put it in the /etc/profile.d/ folder.
Note: create this file as the root user.
Scripts in /etc/profile.d/ are sourced automatically at login, so the environment variable will be set for you and will be accessible to all users, including root.
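A minimal sketch of what that file could look like (the file name aws-credentials.sh is an assumption; the credential path comes from the question):
# /etc/profile.d/aws-credentials.sh -- create as root, e.g. sudo vi /etc/profile.d/aws-credentials.sh
export AWS_CREDENTIAL_FILE=/home/ec2-user/aws-scripts-mon/awscreds.template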