How do I switch between different Kubernetes contexts described in distinct kubeconfig yaml files fast? - kubectl

I need to access several Kubernetes clusters. For each of them, I have a kubeconfig YAML file, e.g. kubeconfig-cluster1.yaml and kubeconfig-cluster2.yaml.
How can I easily switch between these configurations, without manually setting the KUBECONFIG environment variable to one of these files each time?

You can list all kubeconfig files in the KUBECONFIG environment variable:
The KUBECONFIG environment variable holds a list of kubeconfig files. For Linux and Mac, the list is colon-delimited. For Windows, the list is semicolon-delimited.
To autodetect the contexts based on the kubeconfig files, assuming they're all located in the ~/.kube folder, and assign them as a colon-separated list to the KUBECONFIG environment variable, you could add a script to your ~/.bashrc or ~/.zshrc:
# Autodetect kubeconfig files to enable switching between them with kubectx
export KUBECONFIG=`ls -1 ~/.kube/kubeconfig-* | paste -sd ":" -`
Then, to switch between these kubectl contexts (with autocomplete!), have a look at the kubectx utility.
The kubectx README page contains installation instructions.
$ kubectx cluster1
Switched to context "cluster1".
$ kubectx cluster2
Switched to context "cluster2".
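To double-check that the kubeconfig files were picked up, you can print the variable and list the contexts (running kubectx without arguments also lists them); the paths below are just an example:
echo "$KUBECONFIG"
# /home/me/.kube/kubeconfig-cluster1.yaml:/home/me/.kube/kubeconfig-cluster2.yaml
kubectl config get-contexts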

I also had multiple Kubernetes clusters to manage. I wrote a script to switch kubeconfig files and namespaces easily. Hope it can help you.
. k-use -k <kubeconfig> -n <namespace>
https://github.com/kingonion/k-use

Related

How to clone an AWS EB environment across platform branches

Background
Our AWS Elastic Beanstalk environment, running the latest version of the pre-configured "Python 3.7 on 64-bit Amazon Linux 2" platform branch, has a lot of custom configuration and environment properties.
Now we would like to switch this environment to the "Python 3.8 on 64-bit Amazon Linux 2" platform branch.
Basically, the goal is to clone the environment, keeping the current configuration (other than platform branch and version) and environment properties.
Problem
Unfortunately, when cloning, it is not possible to switch between different platform branches (we can only switch between platform versions within the same platform branch).
The documentation suggests that a blue/green deployment is required here. However, a blue/green deployment involves creating a new environment from scratch, so we would still need some other way to copy our configuration settings and environment properties.
Question
What would be the recommended way to copy the configuration settings and/or environment properties from the original environment into a newly created environment?
I suppose we could use eb config to download the original configuration, modify the environment name, platform branch and version, and so on, and then use eb config --update on the new environment. However, that feels like a hack.
Summary
save current config: eb config save <env name>
use a text editor to modify the platform branch in the saved config file
create new environment based on modified config file: eb create --cfg <config name> (add --sample to use the sample application)
if necessary, delete local config files
if necessary, use eb printenv and eb setenv to copy environment properties
EDIT: For some reason the saved config does not include all security group settings, so it may be necessary to check those manually, using the EB console (configuration->instances).
Background
AWS support have confirmed that using eb config is the way to go, and they referred to the online documentation for details.
Unfortunately, the documentation for the eb cli does not provide all the answers.
The following is based on my own adventures using the latest version of the eb cli (3.20.2) with botocore 1.21.50, and documentation at the time of writing (Sep 30, 2021). Note there's a documentation repo on github but it was last updated six months ago and does not match the latest online docs...
eb config
(Screenshot from the eb config docs not reproduced here.)
Indeed, if you call eb config my-env or eb config my-env --display, environment properties are not shown.
However, this does not hold for eb config save: YAML files created using eb config save actually do include environment properties*.
*Beware, if your environment properties include secrets (e.g. passwords), these also end up in your saved configs, so make sure you don't commit those to version control.
Moreover, it is currently also possible to set environment properties using eb config --update.
This implies we should be able to "copy" both configuration settings and environment properties in one go.
EDIT: After some testing it turns out eb config save does not always get the complete set of environment properties: some properties may be skipped. Not yet sure why... Step 5 below might help in those cases.
Walk-through
Not sure if this is the best way to do it, but here's what seems to work for me:
Suppose we have an existing EB environment called py37-env with lots of custom configuration and properties, running the Python 3.7 platform branch.
The simplest way to "clone" this would be as follows:
Step 1: download the existing configuration
Download the configuration for the existing environment:
eb config save py37-env
By default, the config file will end up in our project directory as .elasticbeanstalk/saved_configs/py37-env-sc.cfg.yml.
The saved config file could look like this (just an example, also see environment manifest):
EnvironmentConfigurationMetadata:
  Description: Configuration created from the EB CLI using "eb config save".
  DateCreated: '1632989892000'
  DateModified: '1632989892000'
Platform:
  PlatformArn: arn:aws:elasticbeanstalk:eu-west-1::platform/Python 3.7 running on 64bit Amazon Linux 2/3.3.5
OptionSettings:
  aws:elasticbeanstalk:application:environment:
    MY_ENVIRONMENT_PROPERTY: myvalue
  aws:elasticbeanstalk:command:
    BatchSize: '30'
    BatchSizeType: Percentage
  aws:elb:policies:
    ConnectionDrainingEnabled: true
  aws:elb:loadbalancer:
    CrossZone: true
  aws:elasticbeanstalk:environment:
    ServiceRole: aws-elasticbeanstalk-service-role
  aws:elasticbeanstalk:healthreporting:system:
    SystemType: enhanced
  aws:autoscaling:launchconfiguration:
    IamInstanceProfile: aws-elasticbeanstalk-ec2-role
    EC2KeyName: my-key
  aws:autoscaling:updatepolicy:rollingupdate:
    RollingUpdateType: Health
    RollingUpdateEnabled: true
EnvironmentTier:
  Type: Standard
  Name: WebServer
AWSConfigurationTemplateVersion: 1.1.0.0
Also see the list of available configuration options in the documentation.
Step 2: modify the saved configuration
We are only interested in the Platform, so it is sufficient here to replace 3.7 by 3.8 in the PlatformArn value.
If necessary, you can use e.g. eb platform list to get an overview of valid platform names.
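For example, a quick in-place edit could look like this; a sketch assuming GNU sed and the file name from Step 1 (on macOS, use sed -i ''), so double-check the resulting PlatformArn afterwards:
# Bump the platform version in the PlatformArn line of the saved config
sed -i 's/Python 3.7/Python 3.8/' .elasticbeanstalk/saved_configs/py37-env-sc.cfg.yml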
Step 3: create a new environment based on the modified config file
eb create --cfg py37-env-sc
This will deploy the most recent application version. Use --version <my version> to deploy a specific version, or use --sample to deploy the sample application, as described in the docs.
This will automatically look for files in the default saved config folder, .elasticbeanstalk/saved_configs/.
If you get a ServiceError or InvalidParameterValueError at this point, make sure only to pass in the name of the file, i.e. without the file extension .cfg.yml and without the folders.
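Putting it together, creating the new environment with an explicit name could look like this (py38-env is just an example name):
eb create py38-env --cfg py37-env-sc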
Step 4: clean up local saved configuration file
Just in case you have any secrets stored in the environment properties.
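For example, assuming the default save location from Step 1:
rm .elasticbeanstalk/saved_configs/py37-env-sc.cfg.yml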
Step 5: alternative method for copying environment properties
If environment properties are not included in the saved config files, or if some of them are missing, here's an alternative way to copy them (using bash).
This might not be the most efficient implementation, but I think it serves to illustrate the approach. Error handling was omitted, for clarity.
source_env="py37-env" # or "$1"
target_env="py38-env" # or "$2"
# get the properties from the source environment
source_env_properties="$(eb printenv "$source_env")"
# format the output so it can be used with `eb setenv`
mapfile -t arg_array < <(echo "$source_env_properties" | grep "=" | sed -e 's/ =/=/g' -e 's/= /=/g' -e 's/^ *//g')
# copy the properties to the target environment
eb setenv -e "$target_env" "${arg_array[@]}"
This has the advantage that it does not store any secrets in local files.
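If you save the snippet as a script and switch to the commented-out "$1"/"$2" variants, usage could look like this (the script name is hypothetical):
./copy-eb-env-properties.sh py37-env py38-env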

Where are my environment variables in Elastic Beanstalk for AL2?

I'm using elastic beanstalk to deploy a Django app. I'd like to SSH on the EC2 instance to execute some shell commands but the environment variables don't seem to be there. I specified them via the AWS GUI (configuration -> environment properties) and they seem to work during the boot-up of my app.
I tried activating and deactivating the virtual env via:
source /var/app/venv/*/bin/activate
Is there some environment (or a script I can run) that gives me a shell with all the properties set? Otherwise I can hardly run any command like python3 manage.py ..., since no settings module is configured (I know how to specify it manually, but my app needs around 7 variables to work).
During deployment, the environment properties are readily available to your .platform hook scripts.
After deployment, e.g. when using eb ssh, you need to load the environment properties manually.
One option is to use the EB get-config tool. The environment properties can be accessed either individually (using the -k option), or as a JSON or YAML object with key-value pairs.
For example, one way to export all environment properties would be:
export $(/opt/elasticbeanstalk/bin/get-config --output YAML environment |
sed -r 's/: /=/' | xargs)
Here the get-config part returns all environment properties as YAML, the sed part replaces the ': ' in the YAML output with '=', and the xargs part strips the quotes (e.g. around numeric values) so the result can be exported.
Note this does not require sudo.
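For example, to read a single property, you can use the -k form mentioned above (the property name here is just a placeholder), or dump all properties at once:
# single property
/opt/elasticbeanstalk/bin/get-config environment -k MY_ENVIRONMENT_PROPERTY
# all properties as a JSON object
/opt/elasticbeanstalk/bin/get-config environment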
Alternatively, you could refer to this AWS knowledge center post:
Important: On Amazon Linux 2, all environment properties are centralized into a single file called /opt/elasticbeanstalk/deployment/env. You must use this file during Elastic Beanstalk's application deployment process only. ...
The post describes how to make a copy of the env file during deployment, using .platform hooks, and how to set permissions so you can access the file later.
You can also perform similar steps manually, using SSH. Once you have the copy set up, with the proper permissions, you can source it.
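As a rough sketch of that approach (the hook file name, copy destination and permissions below are assumptions, not taken verbatim from the post), a postdeploy hook could look something like this:
#!/bin/bash
# .platform/hooks/postdeploy/01_copy_env.sh (hypothetical name)
# Copy the centralized env file so it can be sourced later over SSH
cp /opt/elasticbeanstalk/deployment/env /opt/elasticbeanstalk/deployment/custom_env
chmod 600 /opt/elasticbeanstalk/deployment/custom_env  # keep secrets readable by root only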
Beware:
Note: Environment properties with spaces or special characters are interpreted by the Bash shell and can result in a different value.
Try running the command /opt/elasticbeanstalk/bin/get-config environment after you ssh into the EC2 instance.
If you are trying to access the environment properties from within an Elastic Beanstalk script (e.g. in .ebextensions), use this:
$(/opt/elasticbeanstalk/bin/get-config environment -k ENVURL)
where ENVURL is the name of the property to read. To reference the environment name itself, there is { "Ref" : "AWSEBEnvironmentName" }.

Using kubeconfig contexts simply

I now need to use multiple clusters. Currently, what I do is simply put all the kubeconfig files
under the .kube folder and, whenever I need a different cluster, update the config file, e.g.
mv config clusterone
vi config
then insert the new kubeconfig into the config file and start working with the new cluster.
Let's say that inside /Users/i033346/.kube I have all the kubeconfig files, one per cluster.
Is there a way to use them as contexts without creating a new file that contains all of them?
I also tried to use kubectx; however, when I use:
export KUBECONFIG=/Users/i033346/.kube/trial
and
export KUBECONFIG=/Users/i033346/.kube/prod
and then run kubectx, I only ever get the last one and don't get a list of the defined contexts. Any idea?
The KUBECONFIG env var supports multiple files, colon-delimited (semicolon-delimited on Windows):
export KUBECONFIG="/Users/i033346/.kube/trial:/Users/i033346/.kube/prod"
This should be enough to see all of them in kubectx.
You can even merge all configs into one file:
export KUBECONFIG="/Users/i033346/.kube/trial:/Users/i033346/.kube/prod"
kubectl config view --flatten > ~/.kube/config
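After the merge (or with the multi-file KUBECONFIG), you can also switch contexts with plain kubectl instead of kubectx:
kubectl config get-contexts
kubectl config use-context <context-name>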
What I used to do in this scenario is to create multiple aliases pointing to the different config files,
e.g. in your ~/.bashrc or ~/.zshrc:
alias k-cluster1="kubectl --kubeconfig /my_path/config_cluster1"
alias k-cluster2="kubectl --kubeconfig /my_path/config_cluster2"
After reloading the terminal, k-cluster1 get pods or k-cluster2 get pods should work.

How to rename existing 'named configurations' using gcloud cli in GCP?

I would like to know if there is a way to rename an existing gcloud named configuration, e.g. I would like to rename 'foo' to 'bar'.
I couldn't find anything on this in the gcloud reference documents.
Technically, it is not possible to change the name of that configuration using the gcloud command.
However, you can change it doing this little workaround:
Use gcloud config configurations activate [YOUR_CONFIG_NAME] to activate the configuration you wish.
Use gcloud info --format='get(config.paths.active_config_path)' to find the directory where your configurations are stored. You will get the path of the file of that specific configuration, looking like this /tmp/tmp.XAfddVDdg/configurations/[YOUR_CONFIG_NAME]
If you cd into the directory /tmp/tmp.XAfddVDdg/configurations/, you will find all your configurations there. Every configuration will be named there like this config_[YOUR_CONFIG_NAME]. Modifying the part that matches the name of your configuration will successfully change its name. DO NOT delete the config_ part of the name.
After this, if you print all the configurations using gcloud config configurations list, you will find your configuration renamed, but none of them will be active. Just activate it with gcloud config configurations activate [YOUR_CONFIG_NAME], and you will be good to go.
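As a sketch of the workaround above (foo and bar are just the example names from the question):
# activate the configuration to be renamed
gcloud config configurations activate foo
# locate the configurations directory
config_dir="$(dirname "$(gcloud info --format='get(config.paths.active_config_path)')")"
# rename the file, keeping the config_ prefix
mv "$config_dir/config_foo" "$config_dir/config_bar"
# re-activate it under the new name
gcloud config configurations activate bar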
Don't know when this was added, but there is a rename command for configurations. So there is no more need to jump through hoops by deleting and recreating configurations or directly editing the file.
gcloud config configurations rename CONFIGURATION_NAME --new-name=NEW_NAME
https://cloud.google.com/sdk/gcloud/reference/config/configurations/rename

How to set environment variable for root user at start-up?

I'm trying to add memory usage monitoring to the monitoring tab of an instance at console.aws.amazon.com. It's an instance running Amazon Linux AMI 2013.09.2. I have found the Amazon CloudWatch Monitoring Scripts for Linux, and specifically mon-put-instance-data.pl, which lets me collect memory stats and report them to CloudWatch as custom metrics.
To have this working I need to set the environment variable AWS_CREDENTIAL_FILE to point to a file containing my AWSAccessKeyId and AWSSecretKey. I do this by typing:
export AWS_CREDENTIAL_FILE=/home/ec2-user/aws-scripts-mon/awscreds.template
To avoid having to type this over and over again, I'm looking for a way to set the environment variable at startup. I have tried adding the code to these files:
/etc/rc.local file
/etc/profile
/home/ec2-user/.bash_profile
Adding the line of code to any of these files doesn't seem to work when I switch to the root user, so where should I put it? If I set the variable in /home/ec2-user/.bash_profile, the variable is set for ec2-user but not for root. If I then run sudo -E su it works, but I don't know if this is the best way to go about it.
Create a sh file and put the code in it. Then put this sh file in the /etc/profile.d/ folder.
Note: create this sh file as the root user.
Once your instance is created, this sh file will run automatically and create the environment variable for you, and this environment variable will be accessible to all users.
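For example (the file name is arbitrary; the credentials path is the one from the question):
# /etc/profile.d/aws-credential-file.sh, created as root
export AWS_CREDENTIAL_FILE=/home/ec2-user/aws-scripts-mon/awscreds.template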