Copy an SSM path under another name - amazon-web-services

We keep environment information (endpoints, passwords, etc.) under an SSM tree, let's call it /qa/, and we'd like a simple way to copy it over to /qa01/, /qa02/, etc., modifying some variables in the process.
We have dumped the current content with:
aws ssm get-parameters-by-path --path "/qa/"
to a file, but I cannot find a way to modify it and upload it under a new path.
The idea is that we will set environment variables using
chamber export qa --format=dotenv > .env
at build time and
chamber exec qa -- node server
at runtime under ECS.
Is this a good way to keep environment information out of git?
Thanks
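One possible way to re-upload the dump under a new prefix is a small shell loop over the AWS CLI output. This is only a sketch: it assumes jq is available and that no parameter value contains a tab or newline.
SRC="/qa/"
DST="/qa01/"
# Copy every parameter under $SRC to $DST; adjust individual values inside the loop as needed.
aws ssm get-parameters-by-path --path "$SRC" --recursive --with-decryption --output json \
  | jq -r '.Parameters[] | [.Name, .Type, .Value] | @tsv' \
  | while IFS=$'\t' read -r name type value; do
      new_name="${DST}${name#"$SRC"}"
      aws ssm put-parameter --name "$new_name" --type "$type" --value "$value" --overwrite
    done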

Related

Migrate secrets from Secret Manager in GCP

Hi, I have my secrets in Secret Manager in one project and want to know how to copy or migrate them to another project.
Is there a mechanism to do this smoothly?
As of today there is no way to have GCP move the secrets between projects for you.
It's a good feature request that you can file here: https://b.corp.google.com/issues/new?component=784854&pli=1&template=1380926
Edited according to John Hanley's comment.
I just had to deal with something similar myself and came up with a simple bash script that does what I need. I run Linux.
There are some prerequisites:
Download the gcloud CLI for your OS.
Get the list of secrets you want to migrate (you can do this by pointing gcloud at the source project with gcloud config set project [SOURCE_PROJECT] and then running gcloud secrets list).
Then, once you have the list, convert it textually to a list in the format "secret_a" "secret_b" ...
The latest version of each secret is taken, so it must not be in a "disabled" state, or the script won't be able to move it.
Then you can run:
gcloud config set project [SOURCE_PROJECT]
declare -a secret_array=("secret_a" "secret_b" ...)
for i in "${secret_array[@]}"
do
  SECRET_NAME="${i}_env_file"
  SECRET_VALUE=$(gcloud secrets versions access "latest" --secret="${SECRET_NAME}")
  echo "$SECRET_VALUE" > secret_migrate
  gcloud secrets create "${SECRET_NAME}" --project [TARGET_PROJECT] --data-file=secret_migrate
done
rm secret_migrate
What this script does is set the project to the source one, then, one secret at a time, read the latest version, save it to a file, and upload it to the target project.
The file is rewritten for each secret and deleted at the end.
You need to replace the secrets array (secret_array) and the project names ([SOURCE_PROJECT], [TARGET_PROJECT]) with your own data.
I used the version below, which also sets a different name and labels according to the secret name:
gcloud config set project [SOURCE_PROJECT]
declare -a secret_array=("secret_a" "secret_b" ...)
for i in "${secret_array[@]}"
do
  SECRET_NAME="${i}"
  SECRET_VALUE=$(gcloud secrets versions access "latest" --secret="${SECRET_NAME}")
  echo "$SECRET_VALUE" > secret_migrate
  gcloud secrets create "${SECRET_NAME}" --project [TARGET_PROJECT] --data-file=secret_migrate --labels=environment=test,service="${i}"
done
rm secret_migrate
All "secrets" MUST be decrypted and compiled in order to be processed by a CPU as hardware decryption isn't practical for commercial use. Because of this getting your passwords/configuration (in PLAIN TEXT) is as simple as logging into one of your deployments that has the so called "secrets" (plain text secrets...) and typing 'env' a command used to list all environment variables on most Linux systems.
If your secret is a text file just use the program 'cat' to read the file. I haven't found a way to read these tools from GCP directly because "security" is paramount.
GCP has methods of exec'ing into a running container but you could also look into kubectl commands for this too. I believe the "PLAIN TEXT" secrets are encrypted on googles servers then decrypted when they're put into your cluser/pod.
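On GKE, for example, that inspection might look like this; the cluster, zone, pod name and mounted file path below are all placeholders.
gcloud container clusters get-credentials my-cluster --zone us-central1-a
kubectl exec my-app-pod -- env | sort                  # dump all environment variables visible inside the pod
kubectl exec my-app-pod -- cat /etc/secrets/password   # if the secret is mounted as a file (hypothetical path)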

Cannot build a container using a Dockerfile

I am working on VM instances on the Google Cloud Platform and I am using Docker for the first time, so please bear with me. I am trying to follow the steps to build a container because it is supposed to be done a certain way for a project. I am stuck here:
Create the directory named ~/keto (~/ refers to your home directory)
Create a file ~/keto/Dockerfile
Add the following content to ~/keto/Dockerfile and save:
# Pull the keto/ssh image from Docker Hub
FROM keto/ssh:latest
# Create a user and password with environment variables
ENV SSH_USERNAME spock
ENV SSH_PASSWORD Vulcan
# Copy an SSH public key from ~/keto/id_rsa.pub to spock's .ssh/authorized_keys
COPY ./id_rsa.pub /home/spock/.ssh/authorized_keys
I was able to pull the keto/ssh image from Docker Hub with no issues, but my problem is that I am unable to create the directory, and I am also stuck when it comes to creating the environment variables. Can anyone guide me to the correct approach to:
A - build the directory, and B - once the directory is done, create the environment variables? I would really appreciate it a lot. Thank you.
# Pull the keto/ssh image from Docker Hub
FROM keto/ssh:latest
# Create a user and password with environment variables
ENV SSH_USERNAME=spock
ENV SSH_PASSWORD=Vulcan
# Create the keto directory:
RUN mkdir ~/keto
# Copy an SSH public key from ~/keto/id_rsa.pub to spock's .ssh/authorized_keys
ADD ./id_rsa.pub /home/spock/.ssh/authorized_keys
You may find Docker's official documentation useful for how to write a Dockerfile and for checking how an ENV variable has to be set.
I also recommend having a look at the image's Docker Hub page, in this case keto's ssh, because it usually contains some guidance about the image we are going to build.
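To make the host-side part concrete, the steps before running docker build might look something like this; the image tag keto-ssh-demo is just an example.
mkdir -p ~/keto
cd ~/keto
ssh-keygen -t rsa -f ./id_rsa -N ""   # creates id_rsa and id_rsa.pub if you don't already have a key
# write the Dockerfile shown above as ~/keto/Dockerfile, then:
docker build -t keto-ssh-demo .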

AWS: pass a large number of ENV variables into CodeBuild

Currently our singleton application, comprising 5 containers, goes through an AWS pipeline into CodeBuild and then CodeDeploy into ECS services. During CodeBuild, based on an ENV variable set in CodeBuild ($STAGE, which can be dev, prod or staging), it loads a specific config file which contains all the ENV variables each container needs. See below:
build:
  commands:
    # Get commit id
    - "echo STAGE $STAGE"
    - "export STAGE=$STAGE"
    # Assigning AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY needs to be done in two steps, otherwise it ends up in "Partial credentials found in env" error
    - "export ANSIBLE_VARS=\"\
      USE_EXISTING_VPC=true \
      DISABLE_BASIC_AUTH=true\""
    - "export DOCKER_ARGS=\"-e COMMIT_ID=$GIT_COMMIT -e APP_ENV=$STAGE
Problem 1: these config files live in the repo and anybody can modify them, so there are lots of human errors, such as the production redirect URL pointing to the wrong place or a new ENV variable not being set.
So I want to move away from loading different config files and let AWS handle the ENV variables instead, e.g. have CodeBuild load them from Parameter Store. Is this the correct way?
Problem 2: there are lots of ENV variables. Is the only option to list them one by one in the CloudFormation template? Is there any better way to load all of the ENV variables into DOCKER_ARGS from the build command above?
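A rough sketch of what loading the variables from Parameter Store during the build could look like; the /myapp/$STAGE/ path is hypothetical, jq is assumed to be available, and values must not contain spaces or newlines. For individual parameters, the buildspec also supports an env/parameter-store section.
DOCKER_ARGS=""
for kv in $(aws ssm get-parameters-by-path --path "/myapp/$STAGE/" --recursive --with-decryption --output json \
            | jq -r '.Parameters[] | "\(.Name | split("/") | last)=\(.Value)"'); do
  DOCKER_ARGS="$DOCKER_ARGS -e $kv"
done
export DOCKER_ARGS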

Get elastic beanstalk environment variables in docker container

So, I'm trying not to put sensitive information in the Dockerfile. A logical approach is to put the creds in the Elastic Beanstalk configuration (the GUI) as an ENV variable. However, docker build doesn't seem to be able to access the ENV variable. Any thoughts?
FROM jupyter/scipy-notebook
USER root
ARG AWS_ACCESS_KEY_ID
RUN echo ${AWS_ACCESS_KEY_ID}
I assume that for every deployment you create a new Dockerrun.aws.json file with the correct Docker image tag for that deployment. At deployment time you can inject environment values, which are then used in the docker run command by the EB agent. Your Docker containers can then access these environment variables.
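For example, such environment values can be set on the environment itself rather than baked into the image; in this sketch, my-env and the APP_ENV value are placeholders.
aws elasticbeanstalk update-environment \
  --environment-name my-env \
  --option-settings Namespace=aws:elasticbeanstalk:application:environment,OptionName=APP_ENV,Value=prod
# or, with the EB CLI:
eb setenv APP_ENV=prod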
Sensitive information for a Dockerfile can be needed either to allow a specific step of the image build to run (build time) or so that the resulting image still has that secret at runtime.
For runtime, if you can use the latest Docker 1.13 in a swarm mode configuration, you can manage secrets that way.
But the first case (build time) is typically for passing credentials to an http_proxy, and that can be done with --build-arg:
docker build --build-arg HTTP_PROXY=http://...
This flag lets you pass build-time variables that are accessed like regular environment variables in the RUN instructions of the Dockerfile.
Also, these values don't persist in the intermediate or final images the way ENV values do.
In that case, you would not use ENV, but ARG:
ARG <name>[=<default value>]
The ARG instruction defines a variable that users can pass at build time to the builder with the docker build command, using the --build-arg <varname>=<value> flag.
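A minimal, self-contained illustration of the ARG / --build-arg flow; the proxy URL and image tag below are made up.
cat > Dockerfile.argdemo <<'EOF'
FROM alpine:3.19
ARG HTTP_PROXY
# The value is available only during the build, e.g. in RUN steps:
RUN echo "building behind proxy: ${HTTP_PROXY:-none}"
EOF
docker build --build-arg HTTP_PROXY=http://proxy.example.com:3128 -f Dockerfile.argdemo -t arg-demo .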

How to set environment variable for root user at start-up?

I'm trying to add memory usage monitoring to the monitoring tab of an instance at console.aws.amazon.com. It's an instance running Amazon Linux AMI 2013.09.2. I have found the Amazon CloudWatch Monitoring Scripts for Linux, and specifically mon-put-instance-data.pl, which lets me collect memory stats and report them to CloudWatch as custom metrics.
To get this working I need to set the environment variable AWS_CREDENTIAL_FILE to point to a file containing my AWSAccessKeyId and AWSSecretKey. I do this by typing:
export AWS_CREDENTIAL_FILE=/home/ec2-user/aws-scripts-mon/awscreds.template
To avoid having to type this over and over again, I'm looking for a way to set the environment variable at startup. I have tried adding the code to these files:
/etc/rc.local
/etc/profile
/home/ec2-user/.bash_profile
None of these seems to set the variable when I switch to the root user, so where should I put it? If I set the variable in /home/ec2-user/.bash_profile, it is set for ec2-user but not for root. If I then sudo -E su it works, but I don't know if this is the best way to go about it.
Create a .sh file and put the code in it, then put that file in the /etc/profile.d/ folder.
Note: create the file as the root user.
Once your instance is up, this file will be sourced automatically at login and create the environment variable for you, and the variable will be accessible to all users, including root.
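Concretely, for the AWS_CREDENTIAL_FILE variable from the question, that could look like this; the file name under /etc/profile.d/ is arbitrary.
sudo tee /etc/profile.d/aws-credential-file.sh >/dev/null <<'EOF'
export AWS_CREDENTIAL_FILE=/home/ec2-user/aws-scripts-mon/awscreds.template
EOF
sudo chmod 644 /etc/profile.d/aws-credential-file.sh
# new login shells (including root's) will now have AWS_CREDENTIAL_FILE set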