Passing AWS creds to kitchen ec2 command line

I am trying to do Chef cookbook development via a Jenkinsfile pipeline. My Jenkins server runs as a container (using the jenkinsci/blueocean image). In one of the stages, I am trying to run aws configure and then kitchen test. For some reason, with the code below, I get an unauthorized operation error; my AWS creds are not being passed properly to .kitchen.yml (no need to check the IAM creds, they have admin access).
stage('\u27A1 Verify Kitchen') {
    steps {
        sh '''mkdir -p ~/.aws/
            echo 'AWS_ACCESS_KEY_ID=...' >> ~/.aws/credentials
            echo 'AWS_SECRET_ACCESS_KEY=...' >> ~/.aws/credentials
            cat ~/.aws/credentials
            KITCHEN_LOCAL_YAML=.kitchen.yml /opt/chefdk/embedded/bin/kitchen list
            KITCHEN_LOCAL_YAML=.kitchen.yml /opt/chefdk/embedded/bin/kitchen test'''
    }
}
Is there any way I can pass AWS creds here? Also, .kitchen.yml no longer supports passing AWS creds inside the file. Is there some way I can pass the creds on the command line, i.e. .kitchen.yml access_key=... secret_access_key=... /opt/chefdk/embedded/bin/kitchen test?
Really appreciate your help.

You don't need to set KITCHEN_LOCAL_YAML=.kitchen.yml; that's already the primary config file.
You probably want to be using a Jenkins credential file, not hardcoding things into the job. That said, the reason this isn't working is that the AWS credentials file is not a shell script, which is the syntax you're using there. It's an INI-style file, paired with a config file that shares a similar structure.
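For reference, a minimal sketch of what that file actually looks like; the key values are placeholders:

# ~/.aws/credentials uses INI sections, not shell variable assignments
mkdir -p ~/.aws
cat > ~/.aws/credentials <<'EOF'
[default]
aws_access_key_id = AKIA...
aws_secret_access_key = ...
EOF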
You should probably just use the environment variable support in kitchen-ec2, via the withEnv pipeline helper method or similar mechanisms for integrating with Jenkins-managed credentials.
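As a minimal sketch of that environment-variable route, the shell step boils down to something like this; the values are placeholders, and in a real Jenkinsfile they should be injected by withCredentials/withEnv rather than hardcoded:

# kitchen-ec2 (through the AWS SDK) reads these standard variables automatically
export AWS_ACCESS_KEY_ID=AKIA...
export AWS_SECRET_ACCESS_KEY=...
/opt/chefdk/embedded/bin/kitchen test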

Related

Migrate Secrets from SecretManager in GCP

Hi, I have my secrets in Secret Manager in one project and want to know how to copy or migrate them to another project.
Is there a mechanism to do this smoothly?
As of today there is no way to have GCP move the Secret between projects for you.
It's a good feature request that you can file here: https://b.corp.google.com/issues/new?component=784854&pli=1&template=1380926
edited according to John Hanley's comment
I just had to deal with something similar myself and came up with a simple bash script that does what I need. I run Linux.
There are some prerequisites:
Download the gcloud CLI for your OS.
Get the list of secrets you want to migrate (you can do this by pointing gcloud at the source project with gcloud config set project [SOURCE_PROJECT] and then running gcloud secrets list).
Once you have the list, convert it textually to a list in the format "secret_a" "secret_b" ...
The latest version of each secret is taken, so it must not be in a "disabled" state, or the script won't be able to move it.
Then you can run:
gcloud config set project [SOURCE_PROJECT]

declare -a secret_array=("secret_a" "secret_b" ...)

for i in "${secret_array[@]}"
do
    SECRET_NAME="${i}_env_file"
    # read the latest version of the secret from the source project into a temp file
    SECRET_VALUE=$(gcloud secrets versions access "latest" --secret="${SECRET_NAME}")
    echo "$SECRET_VALUE" > secret_migrate
    # create the secret in the target project from that file
    gcloud secrets create "${SECRET_NAME}" --project [TARGET_PROJECT] --data-file=secret_migrate
done
rm secret_migrate
What this script does is set the project to the source one, then fetch the secrets one by one, save each to a file, and upload it to the target project.
The file is rewritten for each secret and deleted at the end.
You need to replace the secrets array (secret_array) and the project names ([SOURCE_PROJECT], [TARGET_PROJECT]) with your own data.
I used the version below, which also sets a different name and adds labels based on the secret name:
gcloud config set project [SOURCE_PROJECT]

declare -a secret_array=("secret_a" "secret_b" ...)

for i in "${secret_array[@]}"
do
    SECRET_NAME="${i}"
    SECRET_VALUE=$(gcloud secrets versions access "latest" --secret="${SECRET_NAME}")
    echo "$SECRET_VALUE" > secret_migrate
    gcloud secrets create "${SECRET_NAME}" --project [TARGET_PROJECT] --data-file=secret_migrate --labels=environment=test,service="${i}"
done
rm secret_migrate
All "secrets" must be decrypted and loaded in order to be processed by a CPU, since hardware decryption isn't practical for commercial use. Because of this, getting at your passwords/configuration (in plain text) is as simple as logging into one of your deployments that has the so-called "secrets" and typing env, a command that lists all environment variables on most Linux systems.
If your secret is a text file, just use cat to read it. I haven't found a way to read these values from GCP directly, because "security" is paramount.
GCP has ways of exec'ing into a running container, and you could also look into kubectl commands for this. I believe the plain-text secrets are encrypted on Google's servers and then decrypted when they are put into your cluster/pod.
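For example, something along these lines shows what is visible from inside a running pod; the pod name and mount path are placeholders:

# list the environment variables inside a running pod
kubectl exec my-pod -- env

# if the secret is mounted as a file, read it the same way
kubectl exec my-pod -- cat /var/run/secrets/my-secret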

Unable to locate credentials aws cli

I am using AWS CLI version 2 on CentOS > Nginx > PHP 7.1. The following command works fine when I run it directly on the command line:
aws s3 cp files/abc.pdf s3://bucketname/
but when I run the same command from index.php using the following code
echo exec("aws s3 cp files/abc.pdf s3://bucketname/ 2>&1");
it gives this error:
upload failed: Unable to locate credentials
@Jass Add your credentials in ~/.aws/credentials or ~/.aws/config and make them the [default] profile, or use a profile name in case you have multiple accounts.
Also verify: if you export the keys as environment variables, they only apply to that terminal. So either execute the PHP from the same terminal where you exported the keys, or add them to ~/.aws/credentials.
I tried this and it worked for me, and I believe it should work for you as well. In your PHP code (index.php), try exporting the credential file location like below:
echo exec("export AWS_SHARED_CREDENTIALS_FILE=/<path_to_aws_folder>/.credentials; aws s3 cp files/abc.pdf s3://bucketname/ 2>&1");
When you run the command from your command line, the AWS CLI picks up the credentials from your home directory, i.e. ~/.aws/credentials (the default location). When index.php is executed, it looks for that file in its own home directory, which apparently is not the same as yours, and hence it cannot find the credentials. With the above change you explicitly point it to your AWS credentials.
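One way to confirm that is to check which user the PHP process actually runs as and make sure that user can read a credentials file; the php-fpm process name and the nginx user below are assumptions for this particular stack, and the paths are only examples:

# see which user the PHP process runs as (could also be apache or www-data)
ps -o user= -C php-fpm | sort -u

# copy the credentials somewhere that user can read, creating the directory if needed
sudo install -D -o nginx -g nginx -m 600 ~/.aws/credentials /etc/aws/credentials
# then point AWS_SHARED_CREDENTIALS_FILE at /etc/aws/credentials as in the answer above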

How to reset or start from scratch with AWS CDK configurations?

I have recently started using AWS CDK as a newbie, so I ran a lot of commands that I had no idea about.
Now I want to remove all settings, like the env variables or profiles I created, and start from scratch. What should I uninstall to achieve that?
I'm not totally sure what you're trying to reset but here's a few suggestions that might help:
Remove Deployed CDK Stacks
cdk destroy stack_name
Note: You'll have to do this for every stack you've deployed. This can also be done through "CloudFormation" in the AWS dashboard in your browser.
Remove CLI Settings
As per https://docs.amazonaws.cn/en_us/cli/latest/userguide/cli-configure-files.html
To remove a setting, use an empty string as the value, or manually delete the setting in your config and credentials files in a text editor.
Example:
aws configure set cli_pager ""
Remove Profiles
Unsure if you can do this easily through the CLI, but you can just manually remove them from your config files (the sketch after the file paths below shows how to see which profiles you have). There are only two config files, and their locations are described at https://docs.amazonaws.cn/en_us/cli/latest/userguide/cli-configure-profiles.html
~/.aws/credentials (Linux & Mac) or %USERPROFILE%\.aws\credentials (Windows)
~/.aws/config (Linux & Mac) or %USERPROFILE%\.aws\config (Windows)
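If you are not sure which profiles you have before editing those files by hand, the v2 CLI can list them:

# lists every profile defined across the config and credentials files (AWS CLI v2)
aws configure list-profiles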
If you need more specific help on how to undo something then please provide an example of what exactly you ran that you would like to undo.

I am getting an exception while copying a file from a Linux machine to GCS

I am getting the exception below when I run the following commands on a Linux machine.
I am trying to copy a text file from the Linux machine to a Google Cloud Storage bucket.
I have created a service account.
Steps I followed:
export GOOGLE_APPLICATION_CREDENTIALS=/home/test/shubham_test/xyz.json
Here, xyz.json is the key file that was downloaded when the service account was created.
gsutil cp test.txt gs://my-bucket/
I then get the exception below:
ServiceException: 401 Anonymous caller does not have storage.objects.create access to my-bucket/test.txt.
I was going to advise using gcloud auth, but the answer is already present there ;)
Hope this does the trick: Automating gsutil commands
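The approach described there boils down to activating the service account with gcloud so gsutil stops acting as an anonymous caller; a short sketch reusing the key file from the question:

# authenticate gsutil through gcloud with the service-account key, then retry the copy
gcloud auth activate-service-account --key-file=/home/test/shubham_test/xyz.json
gsutil cp test.txt gs://my-bucket/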

Getting files from server on AWS Using Jenkins Build

I have installed Jenkins on my local machine (on premises). I have my server (Linux) in the AWS cloud. I need to share logs with developers without giving them access to the server. I need to create a Jenkins job that, when run, fetches the logs from the server.
How can I do that? If anyone follows a similar process to get data from the cloud, please help me solve this. Thanks in advance.
Use the SSH Agent plugin to securely set up your private key
Use SCP to copy the log files to the local workspace
Archive those files to the Jenkins job
You could write a pipeline script to do this. Something like:
node ("linux") {
    sshagent (credentials: ['deploy-dev']) {
        // copy the log file from the AWS host into the workspace, then archive it
        sh 'scp user@awshostnamehere:/somepath/somelogfile .'
        archiveArtifacts 'somelogfile'
    }
}
Note that this requires you to fill in the blanks. To get this to work you would have to:
Set up an SSH private key credential named deploy-dev
Set up a build agent with the label 'linux', or change that to the label of an agent you do have.