I am trying to get the value of my RDS endpoint and use it as a value in a Secrets Manager secret I am creating.
I know how to get the endpoint in the outputs:
DB1ConnectionString:
Condition: Launch1Engine
Description: The First db Connection String
Value: {"Fn::GetAtt": ["RDSDBInstance1","Endpoint.Address"]}
But I cannot reference an output inside the same stack, so I want to get the endpoint the same way and use it in the Secrets Manager secret.
This is what I tried:
DBStringSecret1:
Condition: Launch1Engine
Type: 'AWS::SecretsManager::Secret'
Properties:
Name: !Ref DBStringSecret1Name
SecretString: !Sub '{"repository":!GetAtt RDSDBInstance1.Endpoint.Address,"username":"MasterUsername","password":"${SafeMineDBPassword1}"}'
But I get a literal string as the "repository" value and not the RDS endpoint.
Is there a way to use the "!GetAtt" inside the "!Sub"?
Or am I doing it all wrong, and should I define a new parameter that builds the value I want using Join?
!Sub 'jdbc://{!GetAtt RDSDBInstance1.Endpoint.Address}:3306/<SCHEMA>?'
Expected result:
jdbc://endpoint:3306/?
You can use the Join function in this case:
SecretString: !Join
- ''
- - '{"repository": "'
- !GetAtt RDSDBInstance1.Endpoint.Address
- '","username":"MasterUsername","password":"'
- !Ref SafeMineDBPassword1
- '"}'
I have a CloudFormation template that creates an AWS::Events::Rule and an AWS::SSM::Document. I need to provide a list of Targets for the Events::Rule, but each target expects an ARN:
mySSMDocument:
Type: AWS::SSM::Document
Properties:
DocumentType: 'Command'
Content:
schemaVersion: '2.2'
description: "Code that will be run on EC2"
mainSteps:
- action: "aws:runShellScript"
name: runShellScript
inputs:
runCommand:
- 'Some command to execute'
myEventRule:
Type: AWS::Events::Rule
Properties:
Description: "A description for the Rule."
EventPattern:
source:
- "aws.autoscaling"
detail-type:
- "EC2 Instance-terminate Lifecycle Action"
detail:
AutoScalingGroupName:
- !Ref 'someAutoScalingGroupInThisTemplate'
RoleArn: 'some role ARN'
State: "ENABLED"
Targets:
- Id: "some-unique-id"
Arn: <-- This is the value that I need to fill in.
RunCommandParameters:
RunCommandTargets:
- Key: "tag: Name"
Values:
- 'The name of the EC2 machine'
I think that I need to replace the <-- This is the value that I need to fill in. with the ARN of mySSMDocument, but I don't see any way to retrieve this value from within the template itself. The documentation does not specify any GetAtt functionality on SSM::Document that allows you to get the ARN. Anyone know how to solve this issue?
This is the ARN pattern for a Document:
arn:${Partition}:ssm:${Region}:${Account}:document/${DocumentName}
example:
arn:aws:ssm:us-east-2:12345678912:document/demoooo
You can use the Ref function to get the name of the document, then Sub to create the final ARN.
Reference: https://docs.aws.amazon.com/IAM/latest/UserGuide/list_awssystemsmanager.html#awssystemsmanager-resources-for-iam-policies
!Sub arn:${AWS::Partition}:ssm:${AWS::Region}:${AWS::AccountId}:document/${mySSMDocument}
You can produce the ARN format for AWS::SSM::Document using the return value of the resource (Ref gives the document name), the pseudo parameters for Partition, Region, and AccountId, and the Sub intrinsic function.
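Putting it together, the rule's target could look like this (a sketch reusing the logical IDs from the question):

Targets:
  - Id: "some-unique-id"
    Arn: !Sub "arn:${AWS::Partition}:ssm:${AWS::Region}:${AWS::AccountId}:document/${mySSMDocument}"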
When creating ECS infrastructure we describe our Task Definitions with CloudFormation. We want to be able to dynamically pass environment variables as a parameter to the template. According to the docs, Environment has a KeyValuePair type, but CloudFormation parameters do not have this type.
We cannot hardcode environment variables in the template, because it is used as a nested stack and the environment variables will be passed into it dynamically.
The only possible way I see so far is to pass all arguments as a CommaDelimitedList, and then somehow parse and map it using CloudFormation functions. I can Fn::Split every entry into a key and a value, but how do I dynamically build an array of KeyValuePairs in CloudFormation?
Or maybe there is an easier way, and I'm missing something? Thanks in advance for any ideas.
I know it's late and you have already found a workaround. However, the following is the closest I came to solving this. It is still not completely dynamic, as the expected parameters have to be defined as placeholders; therefore the maximum number of environment variables expected must be known in advance.
The answer is based on this blog. All credits to the author.
Parameters:
EnvVar1:
Type: String
Description: A possible environment variable to be passed on to the container definition.
Should be a key-value pair combined with a ':'. E.g. 'envkey:envval'
Default: ''
EnvVar2:
Type: String
Description: A possible environment variable to be passed on to the container definition.
Should be a key-value pair combined with a ':'. E.g. 'envkey:envval'
Default: ''
EnvVar3:
Type: String
Description: A possible environment variable to be passed on to the container definition.
Should be a key-value pair combined with a ':'. E.g. 'envkey:envval'
Default: ''
Conditions:
Env1Exist: !Not [ !Equals [!Ref EnvVar1, '']]
Env2Exist: !Not [ !Equals [!Ref EnvVar2, '']]
Env3Exist: !Not [ !Equals [!Ref EnvVar3, '']]
Resources:
  TaskDefinition:
    Type: AWS::ECS::TaskDefinition
    Properties:
      ContainerDefinitions:
-
Environment:
- !If
- Env1Exist
-
Name: !Select [0, !Split [":", !Ref EnvVar1]]
Value: !Select [1, !Split [":", !Ref EnvVar1]]
- !Ref "AWS::NoValue"
- !If
- Env2Exist
-
Name: !Select [0, !Split [":", !Ref EnvVar2]]
Value: !Select [1, !Split [":", !Ref EnvVar2]]
- !Ref "AWS::NoValue"
- !If
- Env3Exist
-
Name: !Select [0, !Split [":", !Ref EnvVar3]]
Value: !Select [1, !Split [":", !Ref EnvVar3]]
- !Ref "AWS::NoValue"
You may want to consider using the Systems Manager Parameter Store to create secured key/value pairs; it is supported in CloudFormation and can be integrated with ECS environments.
AWS Systems Manager Parameter Store
AWS Systems Manager Parameter Store provides secure, hierarchical storage for configuration data management and secrets management. You can store data such as passwords, database strings, and license codes as parameter values. You can store values as plain text or encrypted data. You can then reference values by using the unique name that you specified when you created the parameter. Highly scalable, available, and durable, Parameter Store is backed by the AWS Cloud. Parameter Store is offered at no additional charge.
While Parameter Store has great security features for storing application secrets, it can also be used to store nonsensitive application strings such as public keys, environment settings, license codes, etc.
And it is supported directly by CloudFormation, allowing you to easily capture, store and manage application configuration strings which can be accessed by ECS.
This template allows you to provide the Parameter Store key values at stack creation time via the console or CLI:
Description: Simple SSM parameter example
Parameters:
pSMTPServer:
Description: SMTP Server URL eg [email-smtp.us-east-1.amazonaws.com]:587
Type: String
NoEcho: false
Resources:
  SMTPServer:
    Type: AWS::SSM::Parameter
    Properties:
      Name: my-smtp-server
      Type: String
      Value: !Ref pSMTPServer
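On the ECS side, a container definition can then pull the stored value at startup through its Secrets property (a sketch; the parameter name matches the template above, the container details are hypothetical, and the task's execution role needs ssm:GetParameters):

  TaskDefinition:
    Type: AWS::ECS::TaskDefinition
    Properties:
      ...
      ContainerDefinitions:
        - Name: app
          Image: my-app-image
          Secrets:
            - Name: SMTP_SERVER
              ValueFrom: !Sub "arn:${AWS::Partition}:ssm:${AWS::Region}:${AWS::AccountId}:parameter/my-smtp-server"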
Any AWS runtime environment (EC2, ECS, Lambda) can easily and securely retrieve the values. On the console side, there is a great parameter manager interface that maintains parameter version history. It's integrated with IAM, so permissions are controlled with standard IAM policy syntax:
{
"Action": [
"ssm:GetParameterHistory",
"ssm:GetParameter",
"ssm:GetParameters",
"ssm:GetParametersByPath"
],
"Resource": [
"arn:aws:ssm:us-west-2:555513456471:parameter/smtp-server"
],
"Effect": "Allow"
},
{
"Action": [
"kms:Decrypt"
],
"Resource": [
"arn:aws:kms:us-west-2:555513456471:key/36235f94-19b5-4649-84e0-978f52242aa0a"
],
"Effect": "Allow"
}
Finally, this blog article shows a technique to read the parameters into a Docker container at runtime. They suggest a secure way to handle environment variables in Docker with AWS Parameter Store. For reference, I am including their Dockerfile here:
FROM grafana/grafana:master
RUN curl -L -o /bin/aws-env https://github.com/Droplr/aws-env/raw/master/bin/aws-env-linux-amd64 && \
chmod +x /bin/aws-env
ENTRYPOINT ["/bin/bash", "-c", "eval $(/bin/aws-env) && /run.sh"]
With that invocation, each of the parameters is available as an environment variable in the container. Your app may or may not need a wrapper to read the parameters from the environment variables.
I was facing the same problem; I needed to create a Lambda resource with environment variables.
We decided to fix the initial set of environment variables, with the key names also decided in advance.
So I added four parameters and used Ref for the values while keeping the key names fixed.
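A minimal sketch of that shape (the parameter and key names here are hypothetical):

Parameters:
  EnvValue1:
    Type: String
  EnvValue2:
    Type: String
Resources:
  MyFunction:
    Type: AWS::Lambda::Function
    Properties:
      ...
      Environment:
        Variables:
          FIXED_KEY_1: !Ref EnvValue1
          FIXED_KEY_2: !Ref EnvValue2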
There is another way too. It may sound like overkill, but it lets you pass whatever environment variables you wish to the function, with no need to predefine how many there are. The only restriction in the sample below is that you cannot use :::: or |||| inside a value; keys can't contain such symbols per the AWS docs anyway.
Game plan:
Make an inline CloudFormation Lambda function whose code accepts all the environment variables, in any format you wish, as a single string (I use JS with the Node.js runtime). Since it's your code, you parse the incoming string however you wish and use the aws-sdk to update the target function. Call the function once inside the CF template.
In this sample you pass in the environment as a string like this:
key1::::value1||||key2::::value2
If you need to use :::: or |||| inside a value, switch to some other divider.
I'm not a big fan of running a Lambda for such a task, yet I want the option of passing virtually any environment variables to the CF template, and this works.
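For example, the encoded string might be supplied at deploy time like this (stack and variable names are hypothetical):

aws cloudformation deploy --template-file template.yml --stack-name my-stack \
  --parameter-overrides FunctionEnvVariables='DB_HOST::::mydb.example.com||||LOG_LEVEL::::debug'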
LambdaToSetEnvRole:
Type: AWS::IAM::Role
Properties:
AssumeRolePolicyDocument:
Version: '2012-10-17'
Statement:
- Effect: Allow
Principal:
Service:
- lambda.amazonaws.com
Action:
- 'sts:AssumeRole'
ManagedPolicyArns:
- arn:aws:iam::aws:policy/CloudWatchLambdaInsightsExecutionRolePolicy
Policies:
- PolicyName: cloudwatch-logs
PolicyDocument:
Version: '2012-10-17'
Statement:
- Effect: Allow
Action:
- 'logs:CreateLogGroup'
- 'logs:CreateLogStream'
- 'logs:PutLogEvents'
Resource:
- !Sub "arn:aws:logs:*:${AWS::AccountId}:log-group:*:*"
- !Sub "arn:aws:logs:*:${AWS::AccountId}:log-group:/aws/lambda-insights:*"
- PolicyName: trigger-lambda-by-cloud-events
PolicyDocument:
Version: '2012-10-17'
Statement:
- Effect: Allow
Action:
- 'lambda:UpdateFunctionConfiguration'
Resource:
- !GetAtt OriginalLambda.Arn
Tags:
- { Key: managed-by, Value: !Ref AWS::StackId }
LambdaToSetEnv:
Type: AWS::Lambda::Function
DeletionPolicy: Delete
Properties:
Code:
ZipFile: |
const response = require('cfn-response');
const aws = require('aws-sdk');
exports.handler = (event, context) => {
console.log(JSON.stringify({event, context}));
try {
if (event.RequestType === "Delete") {
response.send(event, context, response.SUCCESS, {RequestType: event.RequestType});
} else {
const client = new aws.Lambda({apiVersion: '2015-03-31'});
const Variables = {
"All": process.env.FunctionEnvVariables,
};
console.log('process.env.FunctionEnvVariables: ', process.env.FunctionEnvVariables);
if(process.env.FunctionEnvVariables){
process.env.FunctionEnvVariables.split('||||').forEach((pair) => {
if(pair && pair.trim() !== ''){
Variables[pair.split('::::')[0]] = pair.split('::::')[1];
}
})
}
const result = client.updateFunctionConfiguration({ FunctionName: process.env.LambdaToUpdateArn, Environment: { Variables } }, function (error, data){
console.log('data: ', data);
console.log('error: ', error);
if(error){
console.error(error);
response.send(event, context, response.FAILED, {});
} else {
response.send(event, context, response.SUCCESS, {});
}
});
}
} catch (e) {
response.send(event, context, response.FAILED, e.stack);
}
}
Role: !GetAtt LambdaToSetEnvRole.Arn
Handler: index.handler
Runtime: nodejs14.x
Timeout: '300'
Environment:
Variables:
LambdaToUpdateArn: !GetAtt OriginalLambda.Arn
FunctionEnvVariables: !Ref FunctionEnvVariables
LambdaCall:
DependsOn:
- OriginalLambda
- LambdaToSetEnv
Type: Custom::LambdaCallout
Properties:
ServiceToken: !GetAtt LambdaToSetEnv.Arn
I have two nested Cloudformation stacks - the first template needs to define a Kinesis stream, the second needs to use a reference to that stream's ARN, as an argument to a further nested stack.
So it seems I need to "export" the stream from the first template, and "import" it into the second (following AWS docs on importing/exporting values across stacks) -
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/intrinsic-function-reference-importvalue.html
My export code [truncated] looks like this -
Outputs:
MyKinesisStreamOutput:
Value:
Ref: MyKinesisStream
Export:
Name: my-kinesis-stream
Resources:
MyKinesisStream:
Properties:
Name:
Ref: AppName
ShardCount: 1
Type: AWS::Kinesis::Stream
Whilst my import code [truncated] looks like this -
MyNestedStack:
Type: AWS::CloudFormation::Stack
Properties:
TemplateURL: !Sub "https://s3.${AWS::Region}.amazonaws.com/my-nested-stack.yaml"
Parameters:
AppName: my-nested-stack
KinesisStream:
Fn::GetAtt:
- Fn::ImportValue:
my-kinesis-stream
- Arn
But then I get the following Cloudformation error -
Template error: every Fn::GetAtt object requires two non-empty parameters, the resource name and the resource attribute
and suspect I am falling foul of this -
For the Fn::GetAtt logical resource name, you cannot use functions. You must specify a string that is a resource's logical ID.
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/intrinsic-function-reference-getatt.html
Assuming I am exporting and importing the Kinesis stream correctly, how then am I supposed to get its Arn value?
When you export something in Outputs, it's just a string; you can no longer GetAtt on it in the importing template.
What you need to do is additionally export the ARN:
Outputs:
MyKinesisStream:
Value: !Ref MyKinesisStream
Export:
Name: my-kinesis-stream
MyKinesisStreamArn:
Value: !GetAtt MyKinesisStream.Arn
Export:
Name: my-kinesis-stream-arn
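The importing template can then pass the ARN straight into the nested stack, since the export is now a plain string:

MyNestedStack:
  Type: AWS::CloudFormation::Stack
  Properties:
    TemplateURL: !Sub "https://s3.${AWS::Region}.amazonaws.com/my-nested-stack.yaml"
    Parameters:
      AppName: my-nested-stack
      KinesisStream: !ImportValue my-kinesis-stream-arn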
I can think of two potential ways:
1) Fn::GetAtt: [!ImportValue my-kinesis-stream, Arn] (sorry, wasn't reading carefully enough; as you found, functions aren't allowed as the logical resource name, so this won't work)
or, what I would prefer,
2) Directly export the required value as an output: Value: !GetAtt MyKinesisStream.Arn
Hope that helps!
I want to read the URL of my database from parameter store in my CloudFormation template. This is easy enough for a single URL, but I can't figure out how to change the URL with different environments.
I have four environments (development, integration, pre-production and production) and their details are stored on four different paths in Parameter Store:
/database/dev/url
/database/int/url
/database/ppe/url
/database/prod/url
I now want to pick the correct Database URL when deploying via CloudFormation. How can I do this?
Parameters:
Environment:
Type: String
Default: dev
AllowedValues:
- dev
- int
- ppe
- prod
DatabaseUrl:
Type: 'AWS::SSM::Parameter::Value<String>'
# Obviously the '+' operator here won't work - so what do I do?
Default: '/database/' + Environment + '/url'
This feature isn't as neat as one would wish. You have to actually pass the name/path of each parameter that you want to look up in the Parameter Store.
Template:
AWSTemplateFormatVersion: 2010-09-09
Description: Example
Parameters:
BucketNameSuffix:
Type: AWS::SSM::Parameter::Value<String>
Default: /example/dev/BucketNameSuffix
Resources:
Bucket:
Type: AWS::S3::Bucket
Properties:
BucketName: !Sub parameter-store-example-${BucketNameSuffix}
If you don't pass any parameters to the template, BucketNameSuffix will be populated with the value stored at /example/dev/BucketNameSuffix. If you want to use, say, a prod value (pointed to by /example/prod/BucketNameSuffix), then you should specify a value for the BucketNameSuffix parameter, but instead of passing the actual value you pass the alternative name of the Parameter Store entry to use, so you'd pass /example/prod/BucketNameSuffix.
aws cloudformation update-stack --stack-name example-dev \
--template-body file://./example-stack.yml
aws cloudformation update-stack --stack-name example-prod \
--template-body file://./example-stack.yml \
--parameters ParameterKey=BucketNameSuffix,ParameterValue=/example/prod/BucketNameSuffix
A not so great AWS blog post on this: https://aws.amazon.com/blogs/mt/integrating-aws-cloudformation-with-aws-systems-manager-parameter-store/
Because passing a million meaningless parameters seems stupid, I might actually generate an environment-specific template and set the right Default: in it, so for the prod environment the Default would be /example/prod/BucketNameSuffix, and then I can update the prod stack without passing any parameters.
You can populate CloudFormation templates with parameters stored in AWS Systems Manager Parameter Store using Dynamic References.
In this contrived example we make two lookups using resolve:ssm and replace the environment using !Join and !Sub:
AWSTemplateFormatVersion: 2010-09-09
Parameters:
Environment:
Type: String
Default: prod
AllowedValues:
- prod
- staging
Resources:
TaskDefinition:
Type: AWS::ECS::TaskDefinition
Properties:
      ContainerDefinitions:
        - Image: !Join
            - ''
            - - 'docker.io/bitnami/redis@'
              - !Sub '{{resolve:ssm:/app/${Environment}/digest-sha/redis}}'
          Environment:
            - Name: DB_HOST
              Value: !Sub '{{resolve:ssm:/app/${Environment}/db-host}}'
You can make use of Fn::Join here.
Here is some pseudocode.
You will have to take Environment as a parameter, which you are already doing.
Then create the required string in the resource where the DatabaseUrl is needed:
Resources:
  Instance:
    Type: 'AWS::Some::Resource'
    Properties:
      DatabaseURL: !Join [ "", [ "/database/", !Ref "Environment", "/url" ] ]
Hope this helps.
Note: You cannot assign a value to a Parameter dynamically using computation logic. All values for defined parameters must be given as input.
I like Fn::Sub, which is much cleaner and easier to read.
!Sub "/database/${Environment}/url"
I was stuck on the same problem; below are my findings.
We can compose values and descriptions while writing into the SSM Parameter Store from CloudFormation, like below:
LambdaARN:
Type: AWS::SSM::Parameter
Properties:
Type: String
Description: !Sub "Lambda ARN from ${AWS::StackName}"
Name: !Sub "/secure/${InstallationId}/${AWS::StackName}/lambda-function-arn"
Value: !GetAtt LambdaFunction.Arn
We cannot compose values/defaults to look up in the SSM Parameter Store, like below:
Parameters:
...
LambdaARN:
    Type: AWS::SSM::Parameter::Value<String>
    Default: !Sub "/secure/${InstallationId}/teststack/lambda-function-arn"
This is not allowed as per the AWS documentation [1]. Both functions (Sub/Join) won't work. The following is the error I was getting:
An error occurred (ValidationError) when calling the CreateChangeSet operation: Template format error: Every Default member must be a string.
Composing and passing values while creating stacks can be done like this:
Parameters:
...
LambdaARN:
    Type: AWS::SSM::Parameter::Value<String>
....
$ aws cloudformation deploy --template-file cfn.yml --stack-name mystack --parameter-overrides 'LambdaARN=/secure/devtest/teststack/lambda-function-arn'
If you add your custom tags while putting the values in the Parameter Store, it will overwrite the default tags added by CFN.
Default Tags:
- aws:cloudformation:stack-id
- aws:cloudformation:stack-name
- aws:cloudformation:logical-id
Every time we update a value in the Parameter Store, it creates a new version. This is beneficial when we are using dynamic references, and it can serve as a solution to the problem in the question, like:
{{resolve:ssm:/my/value:1}}
The last field is the version, and different versions can point to different environments.
We are using versions with the parameters and adding labels to them [2]. This can't be done via CFN [3]; the only possible way is via the CLI or the AWS Console. This is AWS's way of handling multiple environments.
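For reference, attaching a label to a specific parameter version from the CLI looks like this (name and version are hypothetical):

aws ssm label-parameter-version --name "/my/value" --parameter-version 1 --labels prod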
[1] https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/intrinsic-function-reference.html
[2] https://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-paramstore-labels.html
[3] https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-ssm-parameter.html
I'm having a hard time trying to figure out how to add multiple tag groups for CodeDeploy in CloudFormation via YAML.
Here is an example in JSON for multiple tag groups with a single tag for ec2TagFilters. I know how to create multiple tags in the same tag group, but I can't seem to figure out how to create multiple tag groups using YAML.
Can someone help?
Thank you.
When you're having problems converting between JSON and YAML, you should use tools like this website to help you. I plugged in the documentation you reference, and this is what I got:
ec2TagSetList:
- - Key: KEY_AND_VALUE
Type: Name
Value: AppVersion-ABC
- - Key: KEY_AND_VALUE
Type: Region
Value: North
- - Key: KEY_AND_VALUE
Type: Type
Value: t2.medium
The AWS Documentation is really frustrating here...
First, the case is inconsistent across the documentation:
"ec2TagSetList": the docs frequently start it with a lowercase letter, which is incorrect
"Ec2TagSetList" is what you want to use
That said, there is even more inconsistency: the documentation above is just wrong about the syntax...
This is the correct syntax for a single tag group:
ResourceNameHere:
Type: AWS::CodeDeploy::DeploymentGroup
Properties:
...
Ec2TagSet:
Ec2TagSetList:
- Ec2TagGroup:
- Type: KEY_AND_VALUE
Key: "Tag1Key"
Value: "Tag1Value"
- Type: KEY_AND_VALUE
Key: "Tag2Key"
Value: "Tag2Value"
...
If you want to configure multiple tag groups just add another Ec2TagGroup:
ResourceNameHere:
Type: AWS::CodeDeploy::DeploymentGroup
Properties:
...
Ec2TagSet:
Ec2TagSetList:
- Ec2TagGroup:
- Type: KEY_AND_VALUE
Key: "Tag1Key"
Value: "Tag1Value"
- Type: KEY_AND_VALUE
Key: "Tag2Key"
Value: "Tag2Value"
- Ec2TagGroup:
- Type: KEY_AND_VALUE
Key: "Tag3Key"
Value: "Tag3Value"
- Type: KEY_AND_VALUE
Key: "Tag4Key"
Value: "Tag4Value"
...