I'm stuck on the usage of SecureString from AWS Parameter Store. I am trying to reference the database password as:
DatabasePassword:
  Type: AWS::SSM::Parameter::Value<SecureString>
  NoEcho: 'true'
  Default: /environment/default/database_password
  Description: The database admin account password
This throws an error:
An error occurred (ValidationError) when calling the CreateStack operation: Template format error: Unrecognized parameter type: SecureString
However, if I refer to this parameter as String instead of SecureString it throws a different error:
An error occurred (ValidationError) when calling the CreateStack operation: Parameters [/environment/default/database_password] referenced by template have types not supported by CloudFormation.
I did try using '{{resolve:ssm-secure:parameter-name:version}}' and it works for database configuration:
MasterUsername: !Ref DatabaseUsername
MasterUserPassword: '{{resolve:ssm-secure:/environment/default/database_password:1}}'
However, I'm using AWS Fargate docker containers where I'm supplying these values as Environment variables:
Environment:
  - Name: DATABASE_HOSTNAME
    Value: !Ref DatabaseHostname
  - Name: DATABASE_USERNAME
    Value: !Ref DatabaseUsername
  - Name: DATABASE_PASSWORD
    Value: '{{resolve:ssm-secure:/environment/default/database_password:1}}'
This throws an error:
An error occurred (ValidationError) when calling the CreateStack operation: SSM Secure reference is not supported in: [AWS::ECS::TaskDefinition/Properties/ContainerDefinitions/Environment]
I'm unable to use secure strings in my implementation. Is there any workaround to this problem? AWS announced support for SecureString last year, but I am unable to find the documentation. All I found was the resolve syntax, which only works in some cases.
CloudFormation does not support SecureString as a template parameter type. You can confirm this in the documentation below; let me quote it:
In addition, AWS CloudFormation does not support defining template
parameters as SecureString Systems Manager parameter types. However,
you can specify Secure Strings as parameter values for certain
resources by using dynamic parameter patterns.
Reference: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/parameters-section-structure.html#aws-ssm-parameter-types
As you mention, you "could" solve it using dynamic parameter patterns, but only a limited set of resources supports them; ECS and Fargate do not.
Reference: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/dynamic-references.html
Maybe you can address it using Secrets Manager: instead of setting the password as an environment variable for your container, your application gets the password at runtime from Secrets Manager. This also improves your security, since the password will not be in clear text inside the container.
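For example, the container's entrypoint could fetch the secret at startup; a minimal sketch, assuming a secret named /environment/default/database_password exists in Secrets Manager and the task role allows secretsmanager:GetSecretValue:

#!/usr/bin/env bash
# Hypothetical entrypoint: resolve the password at runtime instead of
# passing it through the task definition's Environment block.
set -euo pipefail
DATABASE_PASSWORD="$(aws secretsmanager get-secret-value \
  --secret-id /environment/default/database_password \
  --query SecretString --output text)"
export DATABASE_PASSWORD
exec "$@"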
Below you can see one example of this solution; it is not for containers, but the way of working is the same, using environment variables and Secrets Manager.
Reference: https://aws.amazon.com/blogs/security/how-to-securely-provide-database-credentials-to-lambda-functions-by-using-aws-secrets-manager/
AWS Secrets Manager can be used to obtain secrets in CloudFormation templates, even for values other than database passwords.
Here is a link to the documentation: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/dynamic-references.html#dynamic-references-secretsmanager
There are 3 parts to a Secrets Manager secret:
The Secret's Name e.g. PROD_DB_PASSWORD
The Secret's Key e.g. DB_PASSWORD
And the actual Secret Value
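For example, such a secret could be created from the CLI like this (a sketch using the names above; the value is a placeholder):

aws secretsmanager create-secret --name PROD_DB_PASSWORD \
  --secret-string '{"DB_PASSWORD":"replace-me"}'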
You would then resolve the above secret in your CloudFormation template using:
'{{resolve:secretsmanager:PROD_DB_PASSWORD:SecretString:DB_PASSWORD}}'
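For instance, on a property that supports dynamic references (a sketch; the RDS resource and its other properties are assumptions):

Database:
  Type: AWS::RDS::DBInstance
  Properties:
    AllocatedStorage: '20'
    DBInstanceClass: db.t3.micro
    Engine: mysql
    MasterUsername: admin
    MasterUserPassword: '{{resolve:secretsmanager:PROD_DB_PASSWORD:SecretString:DB_PASSWORD}}'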
I know this post is quite old, but I came across a situation where I needed to use a SecureString and found both this post and a blog post that describes a workaround. I thought this could help some people.
Original Post Here
Basically, you can create a .config file in your .ebextensions folder like this:
---
packages:
  yum:
    bash: []
    curl: []
    jq: []
    perl: []

files:
  /opt/elasticbeanstalk/hooks/restartappserver/pre/00_resolve_ssm_environment_variables.sh:
    mode: "000700"
    owner: root
    group: root
    content: |
      #!/usr/bin/env bash
      /usr/local/bin/resolve_ssm_environment_variables.sh

  /opt/elasticbeanstalk/hooks/appdeploy/pre/00_resolve_ssm_environment_variables.sh:
    mode: "000700"
    owner: root
    group: root
    content: |
      #!/usr/bin/env bash
      /usr/local/bin/resolve_ssm_environment_variables.sh

  /opt/elasticbeanstalk/hooks/configdeploy/pre/00_resolve_ssm_environment_variables.sh:
    mode: "000700"
    owner: root
    group: root
    content: |
      #!/usr/bin/env bash
      /usr/local/bin/resolve_ssm_environment_variables.sh

  /usr/local/bin/resolve_ssm_environment_variables.sh:
    mode: "000700"
    owner: root
    group: root
    content: |
      #!/usr/bin/env bash
      set -Eeuo pipefail

      # Resolve SSM parameter references in the elasticbeanstalk option_settings environment variables.
      # SSM parameter references must take the same form used in CloudFormation, see https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/dynamic-references.html#dynamic-references-ssm-secure-strings
      # Supported forms are:
      #   {{resolve:ssm-secure-env:path:version}}
      #   {{resolve:ssm-secure-env:path}}
      #   {{resolve:ssm-env:path:version}}
      #   {{resolve:ssm-env:path}}
      # where "path" is the SSM parameter path and "version" is the parameter version.

      if [[ -z "${AWS_DEFAULT_REGION:-}" ]]; then
        # not set so get from configuration
        AWS_DEFAULT_REGION="$(aws configure get region)" || :
      fi
      if [[ -z "${AWS_DEFAULT_REGION:-}" ]]; then
        # not set so get from metadata
        AWS_DEFAULT_REGION="$(curl -s http://169.254.169.254/latest/dynamic/instance-identity/document | jq -r .region)" || :
      fi
      if [[ -z "${AWS_DEFAULT_REGION:-}" ]]; then
        echo "Could not determine region." 1>&2
        exit 1
      fi
      export AWS_DEFAULT_REGION

      readonly CONTAINER_CONFIG_FILE="${1:-/opt/elasticbeanstalk/deploy/configuration/containerconfiguration}"
      readonly TEMP_CONTAINER_CONFIG_FILE="$(mktemp)"

      i=0
      for envvar in $(jq -r ".optionsettings[\"aws:elasticbeanstalk:application:environment\"][]" "${CONTAINER_CONFIG_FILE}"); do
        # The "?" after (?:-secure) makes the -secure part optional, so both the
        # ssm-env and ssm-secure-env forms documented above are matched.
        envvar="$(echo "${envvar}" | perl -p \
          -e 's|{{resolve:ssm(?:-secure)?-env:([a-zA-Z0-9_.-/]+?):(\d+?)}}|qx(aws ssm get-parameter-history --name "$1" --with-decryption --query Parameters[?Version==\\\x60$2\\\x60].Value --output text) or die("Failed to get SSM parameter named \"$1\" with version \"$2\"")|eg;' \
          -e 's|{{resolve:ssm(?:-secure)?-env:([a-zA-Z0-9_.-/]+?)}}|qx(aws ssm get-parameter --name "$1" --with-decryption --query Parameter.Value --output text) or die("Failed to get SSM parameter named \"$1\"")|eg;')"
        export envvar
        jq ".optionsettings[\"aws:elasticbeanstalk:application:environment\"][${i}]=env.envvar" < "${CONTAINER_CONFIG_FILE}" > "${TEMP_CONTAINER_CONFIG_FILE}"
        cp "${TEMP_CONTAINER_CONFIG_FILE}" "${CONTAINER_CONFIG_FILE}"
        rm "${TEMP_CONTAINER_CONFIG_FILE}"
        ((i++)) || :
      done
And then you can use it like this in a CloudFormation template (or really any way you want; I use it with Terraform). Note that there is an extra -env suffix to distinguish these references from the native resolver.
---
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  BeanstalkEnvironment:
    Type: AWS::ElasticBeanstalk::Environment
    Properties:
      OptionSettings:
        -
          Namespace: "aws:elasticbeanstalk:application:environment"
          OptionName: SPRING_DATASOURCE_PASSWORD
          Value: !Sub "{{resolve:ssm-secure-env:/my/parameter:42}}"
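Note that the instance profile must be allowed to read and decrypt the referenced parameters; a minimal policy sketch (the parameter ARN is an assumption, scope it to your own paths; add kms:Decrypt if the parameters use a customer-managed KMS key):

- PolicyName: resolve-ssm-environment-variables
  PolicyDocument:
    Version: '2012-10-17'
    Statement:
      - Effect: Allow
        Action:
          - ssm:GetParameter         # used by the un-versioned form
          - ssm:GetParameterHistory  # used by the versioned form
        Resource: !Sub "arn:aws:ssm:${AWS::Region}:${AWS::AccountId}:parameter/my/parameter"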
Is it possible to fetch instance details like instance ID, VPC ID, key pair, security groups, etc. from a CloudFormation template with cfn-get-metadata? If it is possible, please share an example.
cfn-get-metadata only gets info from the Metadata section of the CloudFormation template. In cfn-init, you can use values from parameters and mappings.
You can use common CloudFormation features such as !FindInMap to refer back to the template, or you can use ${} substitution.
Check out this snippet:

owner: !FindInMap [ nodes, !Ref nodeType, userName ]
group: !FindInMap [ nodes, !Ref nodeType, userName ]

configure:
  commands:
    configure_service:
      command: /opt/app/configure.sh
      test: "test ! -e /etc/systemd/system/${nodeType}.service"
If you need some data from the stack, you could add Outputs for the items you need and run describe-stacks in your user data:
aws cloudformation describe-stacks --stack-name myteststack
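A minimal sketch of that approach (the output name InstanceId and the resource MyInstance are assumptions):

Outputs:
  InstanceId:
    Description: ID of the created EC2 instance
    Value: !Ref MyInstance

which you can then read back with a JMESPath query:

aws cloudformation describe-stacks --stack-name myteststack \
  --query "Stacks[0].Outputs[?OutputKey=='InstanceId'].OutputValue" --output text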
I am trying to deploy a SageMaker lifecycle configuration with AWS CloudFormation.
The lifecycle configuration imports ipynb notebooks from an S3 bucket to the SageMaker notebook instance.
The bucket name is specified in the parameters, and I want to use it in a !Sub function inside the bash script of the lifecycle configuration.
The problem is that CloudFormation first processes the template and resolves its own functions (like !Sub), and only then is the script uploaded as a bash script to the lifecycle configuration.
This is my code:
LifecycleConfig:
  Type: AWS::SageMaker::NotebookInstanceLifecycleConfig
  Properties:
    NotebookInstanceLifecycleConfigName: !Sub
      - ${NotebookInstanceName}LifecycleConfig
      - NotebookInstanceName: !Ref NotebookInstanceName
    OnStart:
      - Content:
          Fn::Base64: !Sub
            - |
              #!/bin/bash -xe
              set -e
              CP_SAMPLES=true
              EXTRACT_CSV=false
              s3region=s3.amazonaws.com
              SRC_NOTEBOOK_DIR=${Consumer2BucketName}/sagemaker-notebooks
              Sagedir=/home/ec2-user/SageMaker
              industry=industry
              notebooks=("notebook1.ipynb" "notebook2.ipynb" "notebook3.ipynb")

              download_files(){
                for notebook in "${notebooks[@]}"
                do
                  printf "aws s3 cp s3://${SRC_NOTEBOOK_DIR}/${notebook} ${Sagedir}/${industry}\n"
                  aws s3 cp s3://"${SRC_NOTEBOOK_DIR}"/"${notebook}" ${Sagedir}/${industry}
                done
              }

              if [ ${CP_SAMPLES} = true ]; then
                sudo -u ec2-user mkdir -p ${Sagedir}/${industry}
                mkdir -p ${Sagedir}/${industry}
                download_files
                chmod -R 755 ${Sagedir}/${industry}
                chown -R ec2-user:ec2-user ${Sagedir}/${industry}/.
              fi
            - Consumer2BucketName: !Ref Consumer2BucketName
This raised the following error:
Template error: variable names in Fn::Sub syntax must contain only alphanumeric characters, underscores, periods, and colons
It seems there was a conflict between the Bash variables and the !Sub CloudFormation function.
In the following template I changed the Bash variables and removed the {}:
LifecycleConfig:
  Type: AWS::SageMaker::NotebookInstanceLifecycleConfig
  Properties:
    NotebookInstanceLifecycleConfigName: !Sub
      - ${NotebookInstanceName}LifecycleConfig
      - NotebookInstanceName: !Ref NotebookInstanceName
    OnStart:
      - Content:
          Fn::Base64: !Sub
            - |
              #!/bin/bash -xe
              set -e
              CP_SAMPLES=true
              EXTRACT_CSV=false
              s3region=s3.amazonaws.com
              SRC_NOTEBOOK_DIR=${Consumer2BucketName}/sagemaker-notebooks
              Sagedir=/home/ec2-user/SageMaker
              industry=industry
              notebooks=("notebook1.ipynb" "notebook2.ipynb" "notebook3.ipynb")

              download_files(){
                for notebook in $notebooks
                do
                  printf "aws s3 cp s3://$SRC_NOTEBOOK_DIR/${!notebook} $Sagedir/$industry\n"
                  aws s3 cp s3://"$SRC_NOTEBOOK_DIR"/"${!notebook}" $Sagedir/$industry
                done
              }

              if [ $CP_SAMPLES = true ]; then
                sudo -u ec2-user mkdir -p $Sagedir/$industry
                mkdir -p $Sagedir/$industry
                download_files
                chmod -R 755 $Sagedir/$industry
                chown -R ec2-user:ec2-user $Sagedir/$industry/.
              fi
            - Consumer2BucketName: !Ref Consumer2BucketName
The problem here is that the for loop does not run through all the notebooks in the list but imports only the first one.
After going through some solutions I tried adding [@] to the notebooks:
for notebook in $notebooks[@]
and
for notebook in "$notebooks[@]" / "$notebooks[*]" / $notebooks[@]
I got the same error.
It seems there was a conflict between the Bash variables and the !Sub CloudFormation function.
That's correct. Both bash and !Sub use ${} for variable substitution. You can escape the bash variables with ${!}. For example:
for notebook in "${!notebooks[@]}"
Also mentioned in the docs:
To write a dollar sign and curly braces (${}) literally, add an exclamation point (!) after the open curly brace, such as ${!Literal}. AWS CloudFormation resolves this text as ${Literal}.
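Applied to the template above, a minimal sketch of the fixed loop (only the relevant lines):

Fn::Base64: !Sub
  - |
    #!/bin/bash -xe
    notebooks=("notebook1.ipynb" "notebook2.ipynb" "notebook3.ipynb")
    # ${!notebooks[@]} is rendered by CloudFormation as ${notebooks[@]},
    # so the deployed script iterates over every notebook.
    for notebook in "${!notebooks[@]}"
    do
      aws s3 cp "s3://${Consumer2BucketName}/sagemaker-notebooks/${!notebook}" /home/ec2-user/SageMaker/industry
    done
  - Consumer2BucketName: !Ref Consumer2BucketName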
I want to store a secret in AWS secrets manager and retrieve it in a CloudFormation template.
To test it I just put it in the value of a tag -
MainRouteTable:
  Properties:
    Tags:
      - Key: Environment
        Value: LIVE
      - Key: Name
        Value: '{{resolve:secretsmanager:tvs:SecretString:testname}}'
    VpcId: !Ref 'VPC'
  Type: AWS::EC2::RouteTable
After I run the CloudFormation using the template and the environment is up, the value for the tag "Name" is "{{resolve:secretsmanager:tvs:SecretString:testname}}" and not the actual secret stored in testname.
I have looked all around and can not figure out what is wrong. According to the AWS docs I am doing it properly.
I can retrieve the secret fine from the CLI -
aws secretsmanager --region us-east-1 get-secret-value --secret-id arn:aws:secretsmanager:us-east-1:xxxxxx:secret:tvs-ZVTiDO --query SecretString --output text | jq -r .testname
Any suggestions?
I followed the instructions here - https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/dynamic-references.html#dynamic-references-secretsmanager
SecretString can only be used in a few resources and selected properties; tags are not supported. The supported list is:
AWS::DirectoryService::MicrosoftAD Password
AWS::DirectoryService::SimpleAD Password
AWS::ElastiCache::ReplicationGroup AuthToken
AWS::IAM::User LoginProfile Password
AWS::KinesisFirehose::DeliveryStream RedshiftDestinationConfiguration Password
AWS::OpsWorks::App Source Password
AWS::OpsWorks::Stack CustomCookbooksSource Password
AWS::OpsWorks::Stack RdsDbInstances DbPassword
AWS::RDS::DBCluster MasterUserPassword
AWS::RDS::DBInstance MasterUserPassword
AWS::Redshift::Cluster MasterUserPassword
As a general rule, secrets will never be displayed in the AWS console; e.g., you can't use them in CloudFormation exports, tags, etc.
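For contrast, the same reference does resolve on a property from the supported list; a minimal sketch reusing the secret from the question (the surrounding DB properties are assumptions):

Database:
  Type: AWS::RDS::DBInstance
  Properties:
    AllocatedStorage: '20'
    DBInstanceClass: db.t3.micro
    Engine: mysql
    MasterUsername: admin
    MasterUserPassword: '{{resolve:secretsmanager:tvs:SecretString:testname}}'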
I'm following this doc https://confluence.atlassian.com/bitbucket/deploy-to-amazon-ecs-892623902.html to set up a pipeline to deploy to the ECS cluster.
This doc is using a custom task def JSON file and using the same for the deployment after updating the image name.
Am I required to copy the complete task definition JSON and put that in my repository? My task definition has lots of environment variables in it. I do not want to expose them by putting it in the repository.
Or will the task definition template update the default task definition and create a new revision (not overwrite it)?
The deployment step is
tags:
revision-*:
- step:
deployment: production
name: Deploy to ECS
script:
# Replace the docker image name in the task definition with the newly pushed image.
- export IMAGE_NAME=${ECR_USERNAME}/${BITBUCKET_REPO_SLUG}:latest
- envsubst < task-definition-template.json > task-definition.json
# Update the task definition.
- pipe: atlassian/aws-ecs-deploy:1.0.0
variables:
AWS_ACCESS_KEY_ID: $AWS_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY: $AWS_SECRET_ACCESS_KEY
AWS_DEFAULT_REGION: $AWS_DEFAULT_REGION
CLUSTER_NAME: $AWS_ECS_CLUSTER_NAME
SERVICE_NAME: $AWS_ECS_SERVICE_NAME
TASK_DEFINITION: 'task-definition.json'
It expects me to have a definition file, task-definition-template.json, in my repository. How can I use the predefined task definitions instead of using the JSON file? Also, where can I find more documentation about the atlassian/aws-ecs-deploy pipe?
You can put a shell script into your repository for deployment, and execute this script in the Bitbucket pipeline.
For example, put this shell script at cicd/update-task.sh:
#!/bin/bash
set -e

# Example image reference; in a real pipeline this would come from the build step.
ECR_IMAGE_TAG=1234555555.dkr.ecr.eu-west-1.amazonaws.com/my-image:abcdefa

if [ "$TASK_FAMILY" = "" ]; then
  echo "Missing variable TASK_FAMILY" >&2
  exit 1
fi
if [ "$AWS_DEFAULT_REGION" = "" ]; then
  echo "Missing variable AWS_DEFAULT_REGION" >&2
  exit 1
fi
if [ "$ECR_IMAGE_TAG" = "" ]; then
  echo "Missing variable ECR_IMAGE_TAG" >&2
  exit 1
fi

# Fetch the current task definition, swap in the new image, and strip the
# read-only fields that register-task-definition does not accept.
TASK_DEFINITION=$(aws ecs describe-task-definition --task-definition "$TASK_FAMILY")
NEW_TASK_DEFINITION=$(echo "$TASK_DEFINITION" | jq --arg IMAGE "$ECR_IMAGE_TAG" '.taskDefinition | .containerDefinitions[0].image = $IMAGE | del(.taskDefinitionArn) | del(.revision) | del(.status) | del(.requiresAttributes) | del(.compatibilities)')
NEW_TASK_INFO=$(aws ecs register-task-definition --region "$AWS_DEFAULT_REGION" --cli-input-json "$NEW_TASK_DEFINITION")
NEW_REVISION=$(echo "$NEW_TASK_INFO" | jq '.taskDefinition.revision')

# return new task revision
echo "${TASK_FAMILY}:${NEW_REVISION}"
You can use aws cli to run this command and retrieve the existing task definition JSON:
https://docs.aws.amazon.com/cli/latest/reference/ecs/describe-task-definition.html
CloudFormation appears to have an "Outputs" section where you can have a value referenced by other stacks, or displayed back to the user, etc.
The limited doc is here.
Is it possible to use this to make the contents of a file available?
e.g. I've got a Jenkins install where the initial admin password is stored within:
/var/lib/jenkins/secrets/initialAdminPassword
I'd love to have that value available after deploying our Jenkins Cloudformation stack without having to then SSH into the server.
Is this possible with the outputs section, or any other way with cloudformation templates?
The Outputs section of a CloudFormation template is meant to help you find your resources easily.
For any resource you create, you can output the properties defined in the Fn::GetAtt documentation.
For example, to get the connection string for an RDS instance created by the template, you can use the following:
"Outputs" : {
"JDBCConnectionString": {
"Description" : "JDBC connection string for the master database",
"Value" : { "Fn::Join": [ "",
[ "jdbc:mysql://",
{ "Fn::GetAtt": [ "MyDatabase", "Endpoint.Address" ] },
":",
{ "Fn::GetAtt": [ "MyDatabase", "Endpoint.Port" ] },
"/",
{ "Ref": "MyDBName" }]
]}
}
}
It is not possible to output the contents of a file. Moreover, outputs are visible to all users having access to your AWS account, so having a password as an output is not recommended.
I would suggest uploading your secrets to a private S3 bucket after the CloudFormation create-stack operation succeeds and downloading them whenever required.
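For example, from the instance's user data once the file exists (the bucket name is an assumption; the bucket should block public access):

# Hypothetical: copy the generated password to a private, encrypted bucket
aws s3 cp /var/lib/jenkins/secrets/initialAdminPassword \
  s3://my-private-secrets-bucket/jenkins/initialAdminPassword --sse AES256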
Hope this helps.
I know this question has been answered but I wanted to offer another solution.
I found myself wanting to do exactly what you (the OP) were trying to do: use CloudFormation to install Jenkins on an EC2 instance and then print the initial admin password to the CloudFormation outputs.
I ended up working around trying to read the file with the password and instead used the Jenkins CLI from the UserData section to update the admin user with a password that I specified.
Here’s what I did (showing snippets from the template in YAML):
Added a parameter to the template inputs to get the password:
Parameters:
  KeyName:
    ConstraintDescription: Must be the name of an existing EC2 KeyPair.
    Description: Name of an existing EC2 KeyPair for SSH access
    Type: AWS::EC2::KeyPair::KeyName
  PassWord:
    AllowedPattern: '[-_a-zA-Z0-9]*'
    ConstraintDescription: A complex password at least eight chars long with alphanumeric characters, dashes and underscores.
    Description: Password for the admin account
    MaxLength: 64
    MinLength: 8
    NoEcho: true
    Type: String
In the UserData section, I used the PassWord parameter in a call to the jenkins-cli to update the admin account:
UserData: !Base64
  Fn::Join:
    - ''
    - - "#!/bin/bash -x\n"
      - "exec > /tmp/user-data.log 2>&1\nunset UCF_FORCE_CONFFOLD\n"
      - "export UCF_FORCE_CONFFNEW=YES\n"
      - "ucf --purge /boot/grub/menu.lst\n"
      - "export DEBIAN_FRONTEND=noninteractive\n"
      - "echo \"deb http://pkg.jenkins-ci.org/debian binary/\" > /etc/apt/sources.list.d/jenkins.list\n"
      - "wget -q -O jenkins-ci.org.key http://pkg.jenkins-ci.org/debian-stable/jenkins-ci.org.key\n\
        apt-key add jenkins-ci.org.key\n"
      - "apt-get update\n"
      - "apt-get -o Dpkg::Options::=\"--force-confnew\" --force-yes -fuy upgrade\n"
      - "apt-get install -y python-pip\n"
      - "pip install https://s3.amazonaws.com/cloudformation-examples/aws-cfn-bootstrap-latest.tar.gz\n"
      - "apt-get install -y nginx\n"
      - "apt-get install -y openjdk-8-jdk\n"
      - "apt-get install -y jenkins\n"
      - "# Wait for Jenkins to Set Up\nuntil [ $(curl -o /dev/null --silent\
        \ --head --write-out '%{http_code}\n' http://localhost:8080) -eq 403\
        \ ]; do sleep 1; done\nsleep 10\n# Change the password for the admin\
        \ account\necho 'jenkins.model.Jenkins.instance.securityRealm.createAccount(\"\
        admin\", \""
      - !Ref 'PassWord'
      - "\")' | java -jar /var/cache/jenkins/war/WEB-INF/jenkins-cli.jar -s\
        \ \"http://localhost:8080/\" -auth \"admin:$(cat /var/lib/jenkins/secrets/initialAdminPassword)\"\
        \ groovy =\n/usr/local/bin/cfn-init --resource=Instance --region="
      - !Ref 'AWS::Region'
      - ' --stack='
      - !Ref 'AWS::StackName'
      - "\n"
      - "unlink /etc/nginx/sites-enabled/default\nsystemctl reload nginx\n"
      - /usr/local/bin/cfn-signal -e $? --resource=Instance --region=
      - !Ref 'AWS::Region'
      - ' --stack='
      - !Ref 'AWS::StackName'
      - "\n"
Using this method, when Jenkins starts up, I don't get the "enter the initial admin password" screen; instead I get a screen where I can just log in as admin with the password used in the parameters.
In terms of adding something to the outputs from a file on the system, I think there is a way to do it using a WaitCondition and passing data back with a cfn-signal command. But once I figured out that all I needed to do was set the password, I didn't pursue the WaitCondition method.
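For completeness, a rough sketch of that WaitCondition idea (resource names are assumptions; remember the earlier caveat that outputs are visible to anyone who can read the stack):

Resources:
  PasswordHandle:
    Type: AWS::CloudFormation::WaitConditionHandle
  PasswordCondition:
    Type: AWS::CloudFormation::WaitCondition
    DependsOn: Instance
    Properties:
      Handle: !Ref PasswordHandle
      Timeout: '600'
Outputs:
  InitialAdminPassword:
    Value: !GetAtt PasswordCondition.Data

with the instance sending the file contents back from UserData (inside a !Sub):

/usr/local/bin/cfn-signal -e 0 \
  -d "$(cat /var/lib/jenkins/secrets/initialAdminPassword)" \
  "${PasswordHandle}"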
Again, I know you have your answer, but I wanted to share in case anyone else happens to be searching for a way to do this. This way worked for me! :D