How to reset or start from scratch with AWS CDK configurations?

I have recently started using the AWS CDK as a newbie, so I ran a lot of commands that I had no idea about.
Now I want to remove all the settings I created, like env variables and profiles, and start from scratch. What should I uninstall to achieve that?

I'm not totally sure what you're trying to reset, but here are a few suggestions that might help:
Remove Deployed CDK Stacks
cdk destroy stack_name
Note: You'll have to do this for every stack you've deployed. This can also be done through CloudFormation in the AWS console in your browser.
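If you have several stacks in the same CDK app, recent versions of the CDK CLI can also tear them all down in one go (a minimal sketch, assuming your CLI version supports the --all flag):
cdk destroy --all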
Remove CLI Settings
As per https://docs.amazonaws.cn/en_us/cli/latest/userguide/cli-configure-files.html
To remove a setting, use an empty string as the value, or manually delete the setting in your config and credentials files in a text editor.
Example:
aws configure set cli_pager ""
Remove Profiles
Unsure if you can do this easily through the CLI, but you can just manually remove them from your config files. There are only two config files, and their locations are documented at https://docs.amazonaws.cn/en_us/cli/latest/userguide/cli-configure-profiles.html
~/.aws/credentials (Linux & Mac) or %USERPROFILE%\.aws\credentials (Windows)
~/.aws/config (Linux & Mac) or %USERPROFILE%\.aws\config (Windows)
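If you just want a completely clean slate, you can check what is currently configured and then remove both files entirely (a minimal sketch; back the files up first if you're unsure):
# Show which settings and profiles are currently active
aws configure list
# Delete all stored credentials and config (Linux & Mac paths)
rm ~/.aws/credentials ~/.aws/config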
If you need more specific help on how to undo something then please provide an example of what exactly you ran that you would like to undo.

Related

Migrate Secrets from SecretManager in GCP

Hi, I have my secrets in Secret Manager in one project and want to know how to copy or migrate them to another project.
Is there a mechanism to do it smoothly?
As of today there is no way to have GCP move secrets between projects for you.
It's a good feature request that you can file here: https://b.corp.google.com/issues/new?component=784854&pli=1&template=1380926
edited according to John Hanley's comment
I just had to deal with something similar myself and came up with a simple bash script that does what I need. I run Linux.
There are some prerequisites:
Download the gcloud CLI for your OS.
Get the list of secrets you want to migrate (you can do this by pointing gcloud at the source project with gcloud config set project [SOURCE_PROJECT] and then running gcloud secrets list).
Once you have the list, convert it textually into a list in the format "secret_a" "secret_b" ...
The latest version of each secret is taken, so it must not be in a "disabled" state, or the script won't be able to move it.
then you can run:
# Point gcloud at the source project
gcloud config set project [SOURCE_PROJECT]
declare -a secret_array=("secret_a" "secret_b" ...)
for i in "${secret_array[@]}"
do
  # The _env_file suffix matched my secret naming; adjust it to yours
  SECRET_NAME="${i}_env_file"
  # Read the latest version of the secret from the source project
  SECRET_VALUE=$(gcloud secrets versions access "latest" --secret="${SECRET_NAME}")
  echo "$SECRET_VALUE" > secret_migrate
  # Create the secret in the target project from the temporary file
  gcloud secrets create "${SECRET_NAME}" --project [TARGET_PROJECT] --data-file=secret_migrate
done
rm secret_migrate
What this script does is set the project to the source one, then, one secret at a time, read the value, save it to a file, and upload it to the target project.
The file is overwritten for each secret and deleted at the end.
You need to replace the secrets array (secret_array) and the project names ([SOURCE_PROJECT], [TARGET_PROJECT]) with your own data.
I used the version below, which also sets labels according to the secret name:
gcloud config set project [SOURCE_PROJECT]
declare -a secret_array=("secret_a" "secret_b" ...)
for i in "${secret_array[@]}"
do
  SECRET_NAME="${i}"
  SECRET_VALUE=$(gcloud secrets versions access "latest" --secret="${SECRET_NAME}")
  echo "$SECRET_VALUE" > secret_migrate
  # Same as above, but also attach labels derived from the secret name
  gcloud secrets create "${SECRET_NAME}" --project [TARGET_PROJECT] --data-file=secret_migrate --labels=environment=test,service="${i}"
done
rm secret_migrate
All "secrets" MUST be decrypted and compiled in order to be processed by a CPU as hardware decryption isn't practical for commercial use. Because of this getting your passwords/configuration (in PLAIN TEXT) is as simple as logging into one of your deployments that has the so called "secrets" (plain text secrets...) and typing 'env' a command used to list all environment variables on most Linux systems.
If your secret is a text file just use the program 'cat' to read the file. I haven't found a way to read these tools from GCP directly because "security" is paramount.
GCP has methods of exec'ing into a running container but you could also look into kubectl commands for this too. I believe the "PLAIN TEXT" secrets are encrypted on googles servers then decrypted when they're put into your cluser/pod.
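A minimal sketch of that approach on a Kubernetes workload; the pod name my-pod and the mount path are assumptions, not anything GCP defines:
# List the pod's environment variables, including secret-backed ones
kubectl exec -it my-pod -- env
# Read a secret mounted as a file (adjust the path to your volume mount)
kubectl exec -it my-pod -- cat /etc/secrets/my-secret-file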

Passing AWS creds to kitchen ec2 command line

I am trying to do Chef cookbook development via a Jenkinsfile pipeline. I have my Jenkins server running as a container (using the jenkinsci/blueocean image). In one of the stages, I am trying to run aws configure and then kitchen test. For some reason, with the code below, I am getting an unauthorized operation error; my AWS creds are not being passed properly to .kitchen.yml. (No need to check the IAM creds, they have admin access.)
stage('\u27A1 Verify Kitchen') {
steps {
sh '''mkdir -p ~/.aws/
echo 'AWS_ACCESS_KEY_ID=...' >> ~/.aws/credentials
echo 'AWS_SECRET_ACCESS_KEY=...' >> ~/.aws/credentials
cat ~/.aws/credentials
KITCHEN_LOCAL_YAML=.kitchen.yml /opt/chefdk/embedded/bin/kitchen list
KITCHEN_LOCAL_YAML=.kitchen.yml /opt/chefdk/embedded/bin/kitchen test'''
}
}
Is there any way I can pass AWS creds here? Also, .kitchen.yml no longer supports passing AWS creds inside the file. Is there some way I can pass creds on the command line, i.e. access_key=... secret_access_key=... /opt/chefdk/embedded/bin/kitchen test?
Really appreciate your help.
You don't need to set KITCHEN_LOCAL_YAML=.kitchen.yml, that's already the primary config file.
You probably want to be using a Jenkins credentials file, not hardcoding things into the job. That said, the reason this isn't working is that the AWS credentials file is not a shell script, which is the syntax you're using there. It's an INI-style file, paired with a config file that has a similar structure.
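For reference, a minimal sketch of what the credentials file format actually looks like (key values elided):
[default]
aws_access_key_id = ...
aws_secret_access_key = ...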
You should probably just use the environment variable support in kitchen-ec2, via the withEnv pipeline helper method or similar mechanisms for integrating with Jenkins-managed credentials.
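As a minimal sketch of the environment-variable route, assuming kitchen-ec2 picks up the standard AWS SDK variables (in Jenkins you would inject these from managed credentials rather than hardcoding them in the job):
# Export the standard AWS environment variables for the kitchen run
export AWS_ACCESS_KEY_ID=...
export AWS_SECRET_ACCESS_KEY=...
/opt/chefdk/embedded/bin/kitchen test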

Deploy app from CircleCI with cf push

I'm looking to automatically deploy my app once we release a new version. We use CircleCI, so firing these commands shouldn't be a big deal.
cf login -a https://api.lyra-836.appcloud.swisscom.com -u myuser -p secret
cf push myapp
However, I don't want to expose my personal credentials (Passeport account) in our git repository. Is it possible to generate an API key for that purpose?
How do you handle that? I might also need to ssh into the instance to run some migration scripts after the deployment; the same question applies there.
Currently, Swisscom's Application Cloud does not offer technical accounts, but you can easily create an additional account. Add it to your org/space as a developer and it should be able to fulfill your needs.
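A minimal sketch of granting that extra account access; the email, org, and space names here are hypothetical:
# Grant the deploy account the SpaceDeveloper role in your space
cf set-space-role deploy-user@example.com my-org my-space SpaceDeveloper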
CircleCI documentation has a section about handling secrets: Using CircleCI Environment Variables
Setting environment variables for all commands without adding them to git
Occasionally, you'll need to add an API key or some other secret as an environment variable. You might not want to add the value to your git history. Instead, you can add environment variables using the Project settings > Environment Variables page of your project.
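With those variables set in the CircleCI project settings, the deploy step could then look something like this (CF_USER and CF_PASSWORD are assumed variable names you would define yourself, not ones CircleCI provides):
# Log in with credentials taken from CircleCI environment variables
cf login -a https://api.lyra-836.appcloud.swisscom.com -u "$CF_USER" -p "$CF_PASSWORD"
cf push myapp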
This documentation describes how to store encrypted stuff within your VCS:
If you prefer to keep your sensitive environment variables checked into git, but encrypted, you can follow the process outlined at circleci/encrypted-files.

Elastic Beanstalk optionsettings file keeps getting overwritten with default parameters

While trying to set up an Elastic Beanstalk worker application using the command line tools (eb tools), my configuration file (optionsettings.MyApp-env) gets overwritten when I start/update/stop the environment.
These are the steps to reproduce:
Using the CLI tools' command eb init I've created a new application in Elastic Beanstalk.
The config file in the .elasticbeanstalk folder had the following line:
OptionSettingFile=/Users/doron/projects/workers/my-worker/.elasticbeanstalk/optionsettings.MyWorkerName-dev
After running eb start for the first time, that file was created with some values.
I went ahead and changed its contents according to http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/command-options.html so it would be configured as I want (environment parameters, number of autoscaling servers, etc.).
To apply the changes I've tried the following:
Update the existing environment with eb update.
Terminate the existing environment with eb stop and build it from scratch with eb start.
In both cases, the optionsettings file gets changed after running the command (update or start).
The new content of the file looks more like the vanilla version I got after calling the first eb start, with all of the configuration parameters I added removed completely.
Is there another way of configuring the environment (not the software on the machine, but the configuration that exists in the console: instance type, region, autoscaling, rolling updates, etc.)?
I realise that this is an old thread, but in case anyone comes across this, as I did, check out this thread on the AWS forums for Elastic Beanstalk: https://forums.aws.amazon.com/thread.jspa?messageID=395052#395052
It explains how settings in the .elasticbeanstalk/optionsettings.* file are set through the API in such a way that they can't be changed later, unlike those set in the .ebextensions/*.config files.
Also, in an incredibly annoying move, the optionsettings file will often set some settings that you want to set in the .config file, and it automatically re-creates the optionsettings file when running eb start, with very little you can do about it. This makes the eb command line tools close to unusable if you want to change something like the WSGIPath.
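As a minimal sketch of the .ebextensions route, this is roughly what a config file setting the WSGIPath could look like (the namespace shown assumes a Python platform, and the file name and path value are hypothetical):
# .ebextensions/options.config
option_settings:
  - namespace: aws:elasticbeanstalk:container:python
    option_name: WSGIPath
    value: myapp/wsgi.py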

How to set up and use EC2 CLI on Mac?

I am stuck at using Amazon EC2 CLI.
I have downloaded the Command Line Tools from
http://aws.amazon.com/developertools/351.
I placed the bin and lib folders into my Amazon project folder: /Users/Invictus/EC2
I downloaded the cert-xxxx.pem and pk-xxx.pem into the same folder.
Created a .bash_profile in the same folder.
I tried to execute ec2-describe-images -o amazon after changing into /Users/Invictus/EC2 with cd.
The system does not recognise the command: command not found.
If I try to execute the same command inside the bin folder, the result is the same.
My .bash_profile:
export EC2_HOME=~/.EC2
export PATH=$PATH:$EC2_HOME/bin
export EC2_PRIVATE_KEY=`ls $EC2_HOME/pk-*.pem`
export EC2_CERT=`ls $EC2_HOME/cert-*.pem`
export JAVA_HOME=/System/Library/Frameworks/JavaVM.framework/Home/
Where did I make a mistake?
My aim is to connect to the launched instance and be able to execute commands there from my local machine.
I have Java installed.
The newer AWS unified CLI is much, much easier to set up. All you need is Python, which comes built in on every Mac.
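A minimal sketch of that setup (the pip route assumes the Python that ships with macOS; Homebrew is an alternative if you use it):
# Install the unified AWS CLI
sudo pip install awscli   # or: brew install awscli
# Configure credentials, then verify with a simple call
aws configure
aws ec2 describe-images --owners amazon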
Here are a few things I can think of:
Your .bash_profile should be in /Users/Invictus/, not /Users/Invictus/EC2. Move it to your home directory, log out and log back in (or restart your machine), and see if it picks up the right path.
Instead of ec2-describe-images, can you run it as ./ec2-describe-images? Does that work? If not, can you check the permissions on that script?
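A minimal sketch of a corrected ~/.bash_profile, assuming the tools and .pem files really live in /Users/Invictus/EC2 (note that the profile above points EC2_HOME at ~/.EC2, which is a different, hidden directory):
# Point EC2_HOME at the folder that actually holds bin/, lib/ and the .pem files
export EC2_HOME=/Users/Invictus/EC2
export PATH=$PATH:$EC2_HOME/bin
export EC2_PRIVATE_KEY=$(ls $EC2_HOME/pk-*.pem)
export EC2_CERT=$(ls $EC2_HOME/cert-*.pem)
# On newer macOS releases, java_home is the reliable way to locate Java
export JAVA_HOME=$(/usr/libexec/java_home)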