I have a use case where I need to access an SNS topic from outside AWS. We planned to use https://aws.amazon.com/blogs/security/extend-aws-iam-roles-to-workloads-outside-of-aws-with-iam-roles-anywhere/ (IAM Roles Anywhere), as it seems to be the right fit.
But I'm unable to get this working correctly. I followed the link above exactly; the contents of my .aws/config file are:
credential_process = ./aws_signing_helper credential-process
--certificate /path/to/certificate.pem
--private-key /path/to/private-key.pem
--trust-anchor-arn <TA_ARN>
--profile-arn <PROFILE_ARN>
--role-arn <ExampleS3WriteRole_ARN>
But my Spring Boot application throws an error stating that it could not fetch the credentials to connect to AWS. Kindly assist.
I found the easiest thing to do was to create a separate script for the credential_process to target. This isn't necessary; I just found it easier.
So create a script along the lines of:
#!/bin/bash
# raw_helper.sh
/path/to/aws_signing_helper credential-process \
--certificate /path/to/cert.crt \
--private-key /path/to/key.key \
--trust-anchor-arn <TA_ARN> \
--profile-arn <Roles_Anywhere_Profile_ARN> \
--role-arn <IAM_Role_ARN>
The key thing I found is that most places (including the AWS documentation) tell you to declare the profile in the ~/.aws/config file. That didn't seem to work for me, but when I added the profile to my ~/.aws/credentials file, it did. Assuming you've created the helper script, it looks like this:
# ~/.aws/credentials
[raw_profile]
credential_process = /path/to/raw_helper.sh
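To sanity-check the setup, you can make the helper executable and ask STS who you are through that profile. A minimal check, assuming the script path and profile name used above:
# Make the helper script executable so credential_process can run it
chmod +x /path/to/raw_helper.sh

# Verify that credentials can actually be vended through the profile
aws sts get-caller-identity --profile raw_profile
If that prints an assumed-role ARN, your application can use the same profile (e.g. via the AWS_PROFILE environment variable).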
Hi, I have my secrets in Secret Manager in one project and want to know how to copy or migrate them to another project.
Is there a mechanism to do this smoothly?
As of today there is no way to have GCP move the Secret between projects for you.
It's a good feature request that you can file here: https://b.corp.google.com/issues/new?component=784854&pli=1&template=1380926
I just had to deal with something similar myself and came up with a simple bash script that does what I need. I run Linux.
There are some prerequisites:
Download the gcloud CLI for your OS.
Get the list of secrets you want to migrate (you can do this by pointing gcloud at the source project with gcloud config set project [SOURCE_PROJECT] and then running gcloud secrets list).
Once you have the list, convert it textually to a list in the format "secret_a" "secret_b" ...
The latest version of each secret is taken, so it must not be in a "disabled" state, or the script won't be able to move it.
Then you can run:
gcloud config set project [SOURCE_PROJECT]
declare -a secret_array=("secret_a" "secret_b" ...)
for i in "${secret_array[@]}"
do
    SECRET_NAME="${i}_env_file"
    # Read the latest version of the secret from the source project
    SECRET_VALUE=$(gcloud secrets versions access "latest" --secret="${SECRET_NAME}")
    # Stage the payload in a scratch file, then create the secret in the target project
    echo "$SECRET_VALUE" > secret_migrate
    gcloud secrets create "${SECRET_NAME}" --project [TARGET_PROJECT] --data-file=secret_migrate
done
rm secret_migrate
What this script does is set the project to the source one, then, one secret at a time, read the value, save it to a file, and upload it to the target project.
The file is overwritten for each secret and deleted at the end.
You need to replace the secrets array (secret_array) and the project names ([SOURCE_PROJECT], [TARGET_PROJECT]) with your own data.
I used this version below, which also sets a different name and labels according to the secret name:
gcloud config set project [SOURCE_PROJECT]
declare -a secret_array=("secret_a" "secret_b" ...)
for i in "${secret_array[@]}"
do
    SECRET_NAME="${i}"
    SECRET_VALUE=$(gcloud secrets versions access "latest" --secret="${SECRET_NAME}")
    echo "$SECRET_VALUE" > secret_migrate
    # Attach labels derived from the secret name when creating it in the target project
    gcloud secrets create "${SECRET_NAME}" --project [TARGET_PROJECT] --data-file=secret_migrate --labels=environment=test,service="${i}"
done
rm secret_migrate
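As an aside, gcloud secrets create accepts --data-file=-, which reads the payload from stdin, so if you'd rather not write secret values to disk at all, a variant along these lines should work (a sketch using the same placeholders as above):
gcloud config set project [SOURCE_PROJECT]
for i in "secret_a" "secret_b"
do
    # Stream the latest version straight into the new secret; nothing is written to disk
    gcloud secrets versions access "latest" --secret="${i}" |
        gcloud secrets create "${i}" --project [TARGET_PROJECT] --data-file=-
done
This also avoids the trailing newline that echo appends to each payload.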
All "secrets" MUST be decrypted and compiled in order to be processed by a CPU as hardware decryption isn't practical for commercial use. Because of this getting your passwords/configuration (in PLAIN TEXT) is as simple as logging into one of your deployments that has the so called "secrets" (plain text secrets...) and typing 'env' a command used to list all environment variables on most Linux systems.
If your secret is a text file just use the program 'cat' to read the file. I haven't found a way to read these tools from GCP directly because "security" is paramount.
GCP has methods of exec'ing into a running container but you could also look into kubectl commands for this too. I believe the "PLAIN TEXT" secrets are encrypted on googles servers then decrypted when they're put into your cluser/pod.
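For example, with kubectl you can inspect what a running pod actually sees (the pod name and mount path here are made up for illustration):
# List the environment variables inside a running container
kubectl exec -it my-pod -- env

# Read a secret mounted as a file inside the container
kubectl exec -it my-pod -- cat /etc/secrets/my-secret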
I want to store the CfnOutputs in AWS CDK to a file (Python).
Below is the code that shows the public IP on the console:
my_ip = core.CfnOutput(
    scope=self,
    id="PublicIp",
    value=my_ec2.instance_public_ip,
    description="public ip of my instance",
    export_name="my-ec2-public-ip")
I have tried redirecting the output by using the command:
cdk deploy * > file.txt
but with no success.
Please help.
For every value you want saved after the stack is run, add a core.CfnOutput call in your code.
Then when you deploy your stack, use:
% cdk deploy {stack-name} --profile $(AWS_PROFILE) --require-approval never \
--outputs-file {output-json-file}
This deploys the stack, doesn't stop to ask for yes/no approvals (so you can put it in a Makefile or a CI/CD script), and once done, saves the value of every CfnOutput in your stack to a JSON file.
Details here: https://github.com/aws/aws-cdk/commit/75d5ee9e41935a9525fa6cfe5a059398d0a799cd
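For illustration, the outputs file is keyed by stack name, so with a hypothetical stack called MyEc2Stack and the PublicIp output from the question, you could pull the value back out with jq:
cdk deploy MyEc2Stack --require-approval never --outputs-file outputs.json

# outputs.json now looks roughly like: {"MyEc2Stack": {"PublicIp": "203.0.113.10"}}
jq -r '.MyEc2Stack.PublicIp' outputs.json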
This answer is only relevant if you're using CDK <1.32.0. Since then, #7020 was merged and --outputs-file is supported. See the top-voted answer for a full example.
Based on this closed issue, your best bet is using the AWS CLI to describe the stack and extract the output. For example (note the single quotes around the query, so the shell doesn't treat the backticks as command substitution):
aws cloudformation describe-stacks \
    --stack-name <my stack name> \
    --query 'Stacks[0].Outputs[?OutputKey==`PublicIp`].OutputValue' \
    --output text
If you're using Python, this can also be done with boto3.
import boto3

# Describe the stack and pull out its Outputs list
stacks = boto3.Session().client("cloudformation").describe_stacks(StackName="<my stack here>")
outputs = stacks["Stacks"][0]["Outputs"]

for o in outputs:
    if o["OutputKey"] == "PublicIp":
        print(o["OutputValue"])
        break
else:
    print("Can't find output")
Not sure if this is a recent update to the cdk CLI, but cdk deploy -O will work:
-O, --outputs-file Path to file where stack outputs will be written as JSON
This is an option now. It will take cdk.CfnOutput and put it in a JSON file.
https://docs.aws.amazon.com/cdk/api/latest/docs/#aws-cdk_core.CfnOutput.html
I have accidentally deleted the AWS credentials and config files from the location c:\user\admin.aws.
Now when I use the AWS CLI through PowerShell, it throws an error saying the profile was not found, and I am unable to recreate those two files. How do I do it?
I tried creating these files using Notepad, which did not work for me.
I think the path for the files should be "c:\users\admin\.aws\", right?
Once the files are added there with the right settings, just try
aws sts get-caller-identity
to check whether the profile's configuration files are accessible from the command line.
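If you'd rather not write the files by hand, running aws configure prompts for the keys and region and recreates both files for the default profile. For reference, a minimal pair (placeholder values) looks like this:
# c:\users\admin\.aws\credentials
[default]
aws_access_key_id = <your access key id>
aws_secret_access_key = <your secret access key>

# c:\users\admin\.aws\config
[default]
region = us-east-1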
I have been trying to update some of my Terraform scripts from version 0.6.13 to 0.9.6. In my scripts I previously had:
terraform remote config -backend=s3 \
-backend-config="bucket=my_bucker" \
-backend-config="access_key=my_access_key" \
-backend-config="secret_key=my_secret" \
-backend-config="region=my_region" \
-backend-config="key=my_state_key"
and then:
terraform remote pull
which was pulling the remote state from AWS. Upon running terraform apply, it would give me exactly the resources that needed to be updated/created based on the remote tfstate stored in an S3 bucket.
Now the issue I'm facing is that the remote pull and remote config commands are deprecated and no longer work.
I tried to follow the instructions at https://www.terraform.io/docs/backends/types/remote.html, but they were not much help.
From what I understand, I would have to run terraform init first with a partial configuration, which would presumably pull the remote state automatically, like so:
terraform init -var-file="terraform.tfvars" \
    -backend=true \
    -backend-config="bucket=my_bucket" \
    -backend-config="access_key=my_access_key" \
    -backend-config="secret_key=my_secret" \
    -backend-config="region=my_region" \
    -backend-config="key=my_state_key"
However, it doesn't pull the remote state the way it did before.
Would anyone be able to point me in the right direction?
You don't need terraform remote pull any more. Terraform will now refresh the state automatically before a plan or apply, based on the -refresh flag, which defaults to true.
Apparently I had to add a minimal backend configuration such as
terraform {
  backend "s3" {
  }
}
in my main.tf file for it to work.
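Putting the two together: with that empty backend block in main.tf, a partial-configuration init along the lines of the question's command supplies the remaining settings (the placeholders below are the question's own):
terraform init \
    -backend-config="bucket=my_bucket" \
    -backend-config="key=my_state_key" \
    -backend-config="region=my_region" \
    -backend-config="access_key=my_access_key" \
    -backend-config="secret_key=my_secret"
After init completes, terraform plan and terraform apply read and refresh the state from S3 automatically.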
I am trying to do Chef cookbook development via a Jenkinsfile pipeline. I have my Jenkins server running as a container (using the jenkinsci/blueocean image). In one of the stages, I am trying to do aws configure and then run kitchen test. For some reason, with the code below, I am getting an unauthorized-operation error; my AWS creds are not reaching .kitchen.yml properly. (No need to check the IAM creds; they have admin access.)
stage('\u27A1 Verify Kitchen') {
steps {
sh '''mkdir -p ~/.aws/
echo 'AWS_ACCESS_KEY_ID=...' >> ~/.aws/credentials
echo 'AWS_SECRET_ACCESS_KEY=...' >> ~/.aws/credentials
cat ~/.aws/credentials
KITCHEN_LOCAL_YAML=.kitchen.yml /opt/chefdk/embedded/bin/kitchen list
KITCHEN_LOCAL_YAML=.kitchen.yml /opt/chefdk/embedded/bin/kitchen test'''
}
}
Is there any way I can pass AWS creds here? Also, .kitchen.yml no longer supports passing AWS creds inside the file. Is there some way I can pass creds on the command line, i.e. .kitchen.yml access_key=... secret_access_key=... /opt/chefdk/embedded/bin/kitchen test?
Really appreciate your help.
You don't need to set KITCHEN_LOCAL_YAML=.kitchen.yml; that's already the primary config file.
You probably want to be using a Jenkins credentials file rather than hardcoding things into the job. That said, the reason this isn't working is that the AWS credentials file is not a shell script, which is the syntax you're using there. It's an INI-style file, and it's paired with a config file that shares a similar structure.
You should probably just use the environment variable support in kitchen-ec2, via the withEnv pipeline helper method or similar mechanisms for integrating with Jenkins-managed credentials, as sketched below.
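For instance, with the Credentials Binding plugin you can bind two secret-text credentials to the standard AWS environment variables, which kitchen-ec2 picks up through the normal AWS SDK credential chain. A minimal sketch (the credential IDs aws-access-key and aws-secret-key are made up; create them under Manage Credentials first):
stage('\u27A1 Verify Kitchen') {
    steps {
        // Bind hypothetical secret-text credentials to the env vars the AWS SDK expects
        withCredentials([string(credentialsId: 'aws-access-key', variable: 'AWS_ACCESS_KEY_ID'),
                         string(credentialsId: 'aws-secret-key', variable: 'AWS_SECRET_ACCESS_KEY')]) {
            sh '/opt/chefdk/embedded/bin/kitchen test'
        }
    }
}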