I am tagging my resources using Tags in my cfn script:
"Tags" : [ { "Key" : "Owner", "Value" : "my name" },
{ "Key" : "Name", "Value" : "instance name" }
{ "Key" : "DateCreated", "Value" : <something goes here> }
],
I would like to create a tag with the current date as per the example above. Is it possible?
You can use a "custom resource" to generate a timestamp (or any other value).
Custom resources are a relatively recent CloudFormation feature (introduced around 2014). They essentially let you call a Lambda function to "create", "update" or "delete" a resource for which CloudFormation has no built-in support (it can even be a resource outside AWS).
I use custom resources a lot just to compute values for use in other parts of the stack, for example to create "variables" that hold computed values (e.g. built with !Join and similar functions) that I need often and would like to compute only once.
You can easily use a custom resource to generate a timestamp. Here is some example code that is very close to what I actually use in production:
Create the "resource" implementation
Resources:
  ValueFunc:
    Type: AWS::Lambda::Function
    Properties:
      Code:
        ZipFile: >
          var r = require('cfn-response');
          exports.handler = function(ev, ctx) {
            ev.ResourceProperties.Time = new Date().toISOString();
            r.send(ev, ctx, r.SUCCESS, ev.ResourceProperties);
          };
      Handler: index.handler
      Runtime: nodejs18.x
      Timeout: 30
      Role: !GetAtt ValueFunctionExecutionRole.Arn
  ValueFunctionExecutionRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Principal: { Service: [ lambda.amazonaws.com ] }
            Action: sts:AssumeRole
      Policies:
        - PolicyName:
            Fn::Sub: "value-custom-res-${AWS::StackName}-${AWS::Region}"
          PolicyDocument:
            Version: "2012-10-17"
            Statement:
              - Effect: Allow
                Action:
                  - logs:CreateLogGroup
                  - logs:CreateLogStream
                  - logs:PutLogEvents
                Resource: "arn:aws:logs:*:*:*"
              - Effect: Allow
                Action: cloudformation:DescribeStacks
                Resource: "arn:aws:cloudformation:*:*:*"
Then, wherever you want to generate a timestamp, you do something like this (scheduled action example taken from here):
Create a custom resource that calculates its creation time
GetTimeThisTime:
  Type: Custom::Value
  Properties:
    ServiceToken: !GetAtt ValueFunc.Arn
Read the created timestamp using the Time attribute
ScheduledActionUp:
  Type: AWS::AutoScaling::ScheduledAction
  Properties:
    AutoScalingGroupName: !Ref WebServerGroup
    DesiredCapacity: 2
    StartTime: !GetAtt GetTimeThisTime.Time
    Recurrence: "0 7 * * *"
You can generate multiple time stamps at different times of the stack creation by simply creating a new "custom value" that depends on the logical entity whose creation you want to time.
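For instance, a second timestamp that is only computed once the scheduled action exists could look like this (a minimal sketch; DependsOn forces the ordering):
GetTimeAfterScheduledAction:
  Type: Custom::Value
  DependsOn: ScheduledActionUp
  Properties:
    ServiceToken: !GetAtt ValueFunc.Arn
Reading !GetAtt GetTimeAfterScheduledAction.Time then gives the time at which that custom resource itself was created.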
The advice by @Guy is correct; you can access the creation timestamp of the stack from the stack properties.
If you still need to specify tags as parameters, you can do it the following way. Currently the JSON syntax supports an extremely limited set of functions, so the possibilities for dynamically modifying your templates are very limited. The only way I see to introduce the tag you want is to add another parameter to the template itself. Depending on how you initialize the stack, you can script the parameter to be supplied dynamically, or provide it in the web console.
For example, if you have this in your template:
"Parameters" : {
"CreationDate" : {
"Description" : "Date",
"Type" : "String",
"Default" : "2013-03-20 21:15:00",
"AllowedPattern" : "^\\d{4}(-\\d{2}){2} (\\d{2}:){2}\\d{2}$",
"ConstraintDescription" : "Date and time of creation"
}
},
You can later reference it using the Ref keyword in the tags like this:
"Tags" : [ { "Key" : "Owner", "Value" : "my name" },
{ "Key" : "Name", "Value" : "instance name" },
{ "Key" : "DateCreated", "Value" : { "Ref" : "CreationDate" } }
],
It is not trivial to automatically assign the current time if you create the stack from the AWS console, but if you use the CLI tools you can call cfn-create-stack like this:
cfn-create-stack MyStack --template-file My.template --parameters "CreationDate=$(date +'%F %T')"
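The cfn-create-stack command comes from the legacy CloudFormation CLI tools; with the current unified AWS CLI the equivalent call (assuming the same template file and parameter name) would be something like:
aws cloudformation create-stack --stack-name MyStack \
  --template-body file://My.template \
  --parameters ParameterKey=CreationDate,ParameterValue="$(date +'%F %T')"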
Hope this helps!
I have an organization account with several managed accounts underneath it. Each managed account has multiple VPCs in them. One of the VPC in each managed account will have a tag "ServiceName":"True" while the others in that account will have a "ServiceName":"False" tag instead.
I'm trying to create a StackSet with a stack dedicated to creating a security group with ingress rules attached to it, and I need to dynamically assign the "VpcId" property of that security group to be the "VpcId" of the VPC with the "ServiceName":"True" tag in that account.
Obviously, if I don't specify a VPC ID in the VpcId field, it creates the security group but attaches it to the default VPC of that account. I can't manually specify a VPC either, since the stack is going to be run in multiple accounts. That leaves me with the only available option: searching for and assigning the VPC by running some sort of function to extract the "VpcId".
The stack itself works fine, as I ran it in a test environment while specifying a VPC ID. So it's just a matter of getting that "VpcId" dynamically.
In the end, I'm looking to do something that would resemble this:
{
  "Parameters": {
    "MyValidVPCID": {
      "Description": "My Valid VPC ID where ServiceName tag equals true. Do some Lambda Kung Fu to get the VPC ID using something that would let me parse the equivalent of the aws ec2 describe-vpcs command.",
      "Type": "String"
    }
  },
  "Resources": {
    "SG": {
      "Type": "AWS::EC2::SecurityGroup",
      "Properties": {
        "GroupDescription": "Security Group Desc.",
        "Tags": [
          { "Key": "Key1", "Value": "ABC" },
          { "Key": "Key2", "Value": "DEF" }
        ],
        "VpcId" : { "Ref" : "MyValidVPCID" }
      }
    },
    "SGIngressRule01": {
      "Type": "AWS::EC2::SecurityGroupIngress",
      "DependsOn": "SG",
      "Properties": {
        "GroupId" : { "Fn::GetAtt": [ "SG", "GroupId" ] },
        "Description": "Rule 1 description",
        "IpProtocol": "tcp",
        "FromPort": 123,
        "ToPort": 456,
        "CidrIp": "0.0.0.0/0"
      }
    }
  }
}
I really don't know if it's a feasible approach or what extra steps would be needed to retrieve that VpcId based on the tag. That's why some input from people used to working with CloudFormation would help me a lot.
getting that "VpcId" dynamically.
You have to use a custom resource for that. You would create it as a Lambda function which takes any input arguments you want and, using the AWS SDK, queries or modifies the VPCs/security groups in your stack.
Thanks Marcin for pointing me in the right direction with custom resources. For those wondering, the basic code to make it work looks something like this:
Resources:
  FunctionNameLambdaFunctionRole:
    Type: "AWS::IAM::Role"
    Properties:
      RoleName: FunctionNameLambdaFunctionRole
      Path: "/"
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Principal:
              Service: lambda.amazonaws.com
            Action: sts:AssumeRole
  FunctionNameLambdaFunctionRolePolicy:
    Type: "AWS::IAM::Policy"
    Properties:
      PolicyName: admin3cx
      PolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: "Allow"
            Action: "*"
            Resource: "*"
      Roles:
        - Ref: FunctionNameLambdaFunctionRole
  FunctionNameLambdaFunctionCode:
    Type: "AWS::Lambda::Function"
    DeletionPolicy: Delete
    DependsOn:
      - FunctionNameLambdaFunctionRole
    Properties:
      FunctionName: FunctionNameLambdaFunctionCode
      Role: !GetAtt FunctionNameLambdaFunctionRole.Arn
      Runtime: python3.12
      Handler: index.handler
      MemorySize: 128
      Timeout: 30
      Code:
        ZipFile: |
          import boto3
          import cfnresponse
          ec2 = boto3.resource('ec2')
          client = boto3.client('ec2')
          def handler(event, context):
              responseData = {}
              filters = [{'Name': 'tag:ServiceName', 'Values': ['True']}]
              vpcs = list(ec2.vpcs.filter(Filters=filters))
              for vpc in vpcs:
                  responseVPC = vpc.id
              responseData['ServiceName'] = responseVPC
              cfnresponse.send(event, context, cfnresponse.SUCCESS, responseData, "CustomResourcePhysicalID")
  FunctionNameLambdaFunctionInvocationCode:
    Type: "Custom::FunctionNameLambdaFunctionInvocationCode"
    Properties:
      ServiceToken: !GetAtt FunctionNameLambdaFunctionCode.Arn
  SGFunctionName:
    Type: "AWS::EC2::SecurityGroup"
    Properties:
      GroupDescription: Description
      VpcId: !GetAtt FunctionNameLambdaFunctionInvocationCode.ServiceName
...
Some stuff has been redacted and I made the switch to YAML. The code will be refined, obviously. The point was just to make sure I could get a return value, based on a filter, out of a Lambda function inside a CloudFormation stack.
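One obvious refinement, assuming the function only ever needs to read VPC metadata, is to replace the wide-open policy above with just the describe call plus the usual logging permissions (ec2:DescribeVpcs does not support resource-level restrictions, hence the "*" resource):
FunctionNameLambdaFunctionRolePolicy:
  Type: "AWS::IAM::Policy"
  Properties:
    PolicyName: describe-vpcs-only
    PolicyDocument:
      Version: "2012-10-17"
      Statement:
        - Effect: Allow
          Action: ec2:DescribeVpcs
          Resource: "*"
        - Effect: Allow
          Action:
            - logs:CreateLogGroup
            - logs:CreateLogStream
            - logs:PutLogEvents
          Resource: "arn:aws:logs:*:*:*"
    Roles:
      - Ref: FunctionNameLambdaFunctionRole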
I'm trying to create a CloudFormation template that has default values, and I'm running a few !Sub functions to substitute imported parameters into the template.
However, I am passing a list to a Node.js Lambda function, and I need to !Sub it before sending it.
The code that I'm writing:
AWSTemplateFormatVersion: '2010-09-09'
Transform: 'AWS::Serverless-2016-10-31'
Description: Creating Athena database and tables
Parameters:
  S3DataLocations:
    Type: CommaDelimitedList
    Description: The S3 locations where the logs are read from (Specify 'Use,Default' to inherit defaults)
    Default: Use,Default
Conditions:
  CustomS3DataLocations: !Equals ['Use,Default', !Join [",", !Ref S3DataLocations]]
Resources:
  # Custom resource for running CreateTableFunction, to create databases
  CreateLogTable:
    Type: Custom::CreateLogTable
    Properties:
      ServiceToken: !GetAtt [CreateLogTableFunction, Arn]
      S3DataLocations:
        Fn::If:
          - CustomS3DataLocations
          - !Split
            - ","
            - !Sub
              - s3://${LoggingBucket}/data/ApplicationLogs1/,
                s3://${LoggingBucket}/data/ApplicationLogs2/,
                s3://${LoggingBucket}/data/ApplicationLogs3/
              - { LoggingBucket: !ImportValue Parent-LoggingBucket }
          - !Ref S3DataLocations
If I pass these as a literal external DataTypes parameter, s3://logbucket/data/ApplicationLogs1/,s3://logbucket/data/ApplicationLogs2/,s3://logbucket/data/ApplicationLogs3/, it works fine: it translates to ["s3://logbucket/data/ApplicationLogs1/","s3://logbucket/data/ApplicationLogs2/","s3://logbucket/data/ApplicationLogs3/"] and is interpreted by the Lambda without issue.
The parameter gets parsed through the CommaDelimitedList type and is passed to the Lambda without issue.
The issue arises because I am trying to provide a manual default, so I need to !Sub a list as a string, then !Split it to pass an actual list to the custom Lambda. This doesn't seem to work any way I try it, and I cannot figure out why.
I've been inspecting the success case (manual param) and the failure case (defaults, without the manual param) and I can't see a big difference.
The event of the lambda shows, when working:
{
  "RequestType": "Create",
  "ServiceToken": "hidden",
  "ResponseURL": "hidden",
  "StackId": "hidden",
  "RequestId": "hidden",
  "LogicalResourceId": "CreateLogTable",
  "ResourceType": "Custom::CreateLogTable",
  "ResourceProperties": {
    "S3DataLocations": [
      "s3://loggingbucket/data/ApplicationLogs/",
      "s3://loggingbucket/data/ApplicationLogs/",
      "s3://loggingbucket/data/ApplicationLogs/",
      "s3://loggingbucket/data/ApplicationLogs/"
    ]
  }
}
And when NOT working:
...
{
  "RequestType": "Create",
  "ServiceToken": "hidden",
  "ResponseURL": "hidden",
  "StackId": "hidden",
  "RequestId": "hidden",
  "LogicalResourceId": "CreateLogTable",
  "ResourceType": "Custom::CreateLogTable",
  "ResourceProperties": {
    "S3DataLocations": [
      "s3://logging/data/ApplicationLogs/",
      " s3://loggingbucket/data/ApplicationLogs/",
      " s3://loggingbucket/data/ApplicationLogs/",
      " s3://loggingbucket/data/ApplicationLogs/"
    ]
  }
}
I'm a little stuck here. I think there might be some type mismatch, but I can't tell the difference between the manual and default parameters.
Does anyone have any idea?
You can break your string into multiple lines while preventing each newline from turning into a space by using a quotation-marks-and-backslash combination.
To verify that, I used the following surrogate template for your situation:
Resources:
  MyBucket:
    Type: AWS::S3::Bucket
    Properties: {}
Outputs:
  Test1:
    Value: !Sub
      - s3://${LoggingBucket}/data/ApplicationLogs1/,
        s3://${LoggingBucket}/data/ApplicationLogs2/,
        s3://${LoggingBucket}/data/ApplicationLogs3/
      - { LoggingBucket: "Parent-LoggingBucket" }
  Test2:
    Value: !Sub
      - "s3://${LoggingBucket}/data/ApplicationLogs1/,\
         s3://${LoggingBucket}/data/ApplicationLogs2/,\
         s3://${LoggingBucket}/data/ApplicationLogs3/"
      - { LoggingBucket: "Parent-LoggingBucket" }
Test1 produces a string with spaces, as in your question:
s3://Parent-LoggingBucket/data/ApplicationLogs1/, s3://Parent-LoggingBucket/data/ApplicationLogs2/, s3://Parent-LoggingBucket/data/ApplicationLogs3/
In contrast, Test2 has no spaces:
s3://Parent-LoggingBucket/data/ApplicationLogs1/,s3://Parent-LoggingBucket/data/ApplicationLogs2/,s3://Parent-LoggingBucket/data/ApplicationLogs3/
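An alternative that sidesteps the folding behaviour entirely (a sketch under the same assumptions as the original template) is to build the default as a literal YAML list of !Sub strings instead of splitting one folded string, so no newline-to-space conversion can ever occur:
S3DataLocations:
  Fn::If:
    - CustomS3DataLocations
    - - !Sub
        - s3://${LoggingBucket}/data/ApplicationLogs1/
        - { LoggingBucket: !ImportValue Parent-LoggingBucket }
      - !Sub
        - s3://${LoggingBucket}/data/ApplicationLogs2/
        - { LoggingBucket: !ImportValue Parent-LoggingBucket }
      - !Sub
        - s3://${LoggingBucket}/data/ApplicationLogs3/
        - { LoggingBucket: !ImportValue Parent-LoggingBucket }
    - !Ref S3DataLocations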
When creating ECS infrastructure we describe our Task Definitions with CloudFormation. We want to be able to dynamically pass environment variables as a parameter to the template. According to the docs, Environment has a KeyValuePair type, but CloudFormation parameters do not have this type.
We cannot hardcode environment variables in the template, because this template is used as a nested stack and environment variables will be passed into it dynamically.
The only possible way I see so far is to pass all arguments as a CommaDelimitedList and then somehow parse and map it using CloudFormation functions. I can Fn::Split every entity into a key and a value, but how do I dynamically build an array of KeyValuePairs in CloudFormation?
Or maybe there is an easier way, and I'm missing something? Thanks in advance for any ideas.
I know it's late and you have already found a workaround. However, the following is the closest I came to solving this. It is still not completely dynamic, as the expected parameters have to be defined as placeholders, so the maximum number of environment variables expected must be known.
The answer is based on this blog. All credits to the author.
Parameters:
  EnvVar1:
    Type: String
    Description: >
      A possible environment variable to be passed on to the container definition.
      Should be a key-value pair combined with a ':'. E.g. 'envkey:envval'
    Default: ''
  EnvVar2:
    Type: String
    Description: >
      A possible environment variable to be passed on to the container definition.
      Should be a key-value pair combined with a ':'. E.g. 'envkey:envval'
    Default: ''
  EnvVar3:
    Type: String
    Description: >
      A possible environment variable to be passed on to the container definition.
      Should be a key-value pair combined with a ':'. E.g. 'envkey:envval'
    Default: ''
Conditions:
  Env1Exist: !Not [ !Equals [!Ref EnvVar1, ''] ]
  Env2Exist: !Not [ !Equals [!Ref EnvVar2, ''] ]
  Env3Exist: !Not [ !Equals [!Ref EnvVar3, ''] ]
Resources:
  TaskDefinition:
    Type: AWS::ECS::TaskDefinition
    Properties:
      ContainerDefinitions:
        - Environment:
            - !If
              - Env1Exist
              - Name: !Select [0, !Split [":", !Ref EnvVar1]]
                Value: !Select [1, !Split [":", !Ref EnvVar1]]
              - !Ref "AWS::NoValue"
            - !If
              - Env2Exist
              - Name: !Select [0, !Split [":", !Ref EnvVar2]]
                Value: !Select [1, !Split [":", !Ref EnvVar2]]
              - !Ref "AWS::NoValue"
            - !If
              - Env3Exist
              - Name: !Select [0, !Split [":", !Ref EnvVar3]]
                Value: !Select [1, !Split [":", !Ref EnvVar3]]
              - !Ref "AWS::NoValue"
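At deploy time each pair is then passed as a single string. For example, from a hypothetical parent stack (names are made up for illustration):
AppStack:
  Type: AWS::CloudFormation::Stack
  Properties:
    TemplateURL: https://s3.amazonaws.com/my-bucket/task-definition.yaml
    Parameters:
      EnvVar1: "DB_HOST:db.example.com"
      EnvVar2: "DB_PORT:5432"
      # EnvVar3 is omitted, so its Default '' applies and the !If drops it via AWS::NoValue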
You may want to consider using the AWS Systems Manager Parameter Store to create secured key/value pairs, which is supported in CloudFormation and can be integrated with ECS environments.
AWS Systems Manager Parameter Store
AWS Systems Manager Parameter Store provides secure, hierarchical storage for configuration data management and secrets management. You can store data such as passwords, database strings, and license codes as parameter values. You can store values as plain text or encrypted data. You can then reference values by using the unique name that you specified when you created the parameter. Highly scalable, available, and durable, Parameter Store is backed by the AWS Cloud. Parameter Store is offered at no additional charge.
While Parameter Store has great security features for storing application secrets, it can also be used to store nonsensitive application strings such as public keys, environment settings, license codes, etc.
And it is supported directly by CloudFormation, allowing you to easily capture, store and manage application configuration strings which can be accessed by ECS.
This template allows you to provide the Parameter Store key values at stack creation time via the console or CLI:
Description: Simple SSM parameter example
Parameters:
  pSMTPServer:
    Description: SMTP Server URL eg [email-smtp.us-east-1.amazonaws.com]:587
    Type: String
    NoEcho: false
Resources:
  SMTPServer:
    Type: AWS::SSM::Parameter
    Properties:
      Name: my-smtp-server
      Type: String
      Value: !Ref pSMTPServer
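Once the parameter exists, other stacks can consume it without custom code; for example (a sketch assuming the my-smtp-server name created above), declare a parameter of the SSM type and CloudFormation resolves the value at stack operation time:
Parameters:
  SMTPServer:
    Type: AWS::SSM::Parameter::Value<String>
    Default: my-smtp-server
A dynamic reference such as {{resolve:ssm:my-smtp-server}} in a resource property achieves the same thing inline.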
Any AWS runtime environment (EC2, ECS, Lambda) can easily and securely retrieve the values. On the console side, there is a great parameter-manager interface that maintains parameter version history. It's integrated with IAM, so permissions are controlled with standard IAM policy syntax:
{
  "Action": [
    "ssm:GetParameterHistory",
    "ssm:GetParameter",
    "ssm:GetParameters",
    "ssm:GetParametersByPath"
  ],
  "Resource": [
    "arn:aws:ssm:us-west-2:555513456471:parameter/smtp-server"
  ],
  "Effect": "Allow"
},
{
  "Action": [
    "kms:Decrypt"
  ],
  "Resource": [
    "arn:aws:kms:us-west-2:555513456471:key/36235f94-19b5-4649-84e0-978f52242aa0a"
  ],
  "Effect": "Allow"
}
Finally, this blog article shows a technique for reading the parameters into a Docker container at runtime, suggesting a secure way to handle environment variables in Docker with AWS Parameter Store. For reference, I am including their Dockerfile here:
FROM grafana/grafana:master
RUN curl -L -o /bin/aws-env https://github.com/Droplr/aws-env/raw/master/bin/aws-env-linux-amd64 && \
chmod +x /bin/aws-env
ENTRYPOINT ["/bin/bash", "-c", "eval $(/bin/aws-env) && /run.sh"]
With that invocation, each of the parameters is available as an environment variable in the container. Your app may or may not need a wrapper to read the parameters from the environment variables.
I was facing the same problem: I needed to create a Lambda resource with environment variables.
We decided to fix the initial set of environment variables, with the key names also decided in advance.
So I added four parameters and used Ref for the values while keeping the key names fixed.
There is another way too, which may sound like overkill but allows you to pass whatever environment variables you wish to the function, with no need to predefine how many there will be. The only restriction in the sample below is that you cannot use :::: or |||| inside the value of a key (keys can't contain such symbols anyway, per the AWS docs).
Game plan:
Make an inline CF Lambda function whose code accepts all the environment variables, in any format you wish, as a single string; use any code you want inside that function (I use JS on the Node.js runtime) to parse that incoming string however you like; and use aws-sdk to update the target function's configuration. Call the function once inside the CF template.
In this sample you pass the env vars as a string like:
key1::::value1||||key2::::value2. If you need to use :::: or |||| in your values, switch to some other divider.
I'm not a big fan of running a Lambda for such a task, yet I want the option of passing virtually any environment variables to the CF template, and this works.
LambdaToSetEnvRole:
  Type: AWS::IAM::Role
  Properties:
    AssumeRolePolicyDocument:
      Version: '2012-10-17'
      Statement:
        - Effect: Allow
          Principal:
            Service:
              - lambda.amazonaws.com
          Action:
            - 'sts:AssumeRole'
    ManagedPolicyArns:
      - arn:aws:iam::aws:policy/CloudWatchLambdaInsightsExecutionRolePolicy
    Policies:
      - PolicyName: cloudwatch-logs
        PolicyDocument:
          Version: '2012-10-17'
          Statement:
            - Effect: Allow
              Action:
                - 'logs:CreateLogGroup'
                - 'logs:CreateLogStream'
                - 'logs:PutLogEvents'
              Resource:
                - !Sub "arn:aws:logs:*:${AWS::AccountId}:log-group:*:*"
                - !Sub "arn:aws:logs:*:${AWS::AccountId}:log-group:/aws/lambda-insights:*"
      - PolicyName: trigger-lambda-by-cloud-events
        PolicyDocument:
          Version: '2012-10-17'
          Statement:
            - Effect: Allow
              Action:
                - 'lambda:UpdateFunctionConfiguration'
              Resource:
                - !GetAtt OriginalLambda.Arn
    Tags:
      - { Key: managed-by, Value: !Ref AWS::StackId }
LambdaToSetEnv:
  Type: AWS::Lambda::Function
  DeletionPolicy: Delete
  Properties:
    Code:
      ZipFile: |
        const response = require('cfn-response');
        const aws = require('aws-sdk');
        exports.handler = (event, context) => {
          console.log(JSON.stringify({event, context}));
          try {
            if (event.RequestType === "Delete") {
              response.send(event, context, response.SUCCESS, {RequestType: event.RequestType});
            } else {
              const client = new aws.Lambda({apiVersion: '2015-03-31'});
              const Variables = {
                "All": process.env.FunctionEnvVariables,
              };
              console.log('process.env.FunctionEnvVariables: ', process.env.FunctionEnvVariables);
              if (process.env.FunctionEnvVariables) {
                process.env.FunctionEnvVariables.split('||||').forEach((pair) => {
                  if (pair && pair.trim() !== '') {
                    Variables[pair.split('::::')[0]] = pair.split('::::')[1];
                  }
                });
              }
              client.updateFunctionConfiguration({ FunctionName: process.env.LambdaToUpdateArn, Environment: { Variables } }, function (error, data) {
                console.log('data: ', data);
                console.log('error: ', error);
                if (error) {
                  console.error(error);
                  // cfn-response defines only SUCCESS and FAILED
                  response.send(event, context, response.FAILED, {});
                } else {
                  response.send(event, context, response.SUCCESS, {});
                }
              });
            }
          } catch (e) {
            response.send(event, context, response.FAILED, { stack: e.stack });
          }
        }
    Role: !GetAtt LambdaToSetEnvRole.Arn
    Handler: index.handler
    Runtime: nodejs14.x
    Timeout: 300
    Environment:
      Variables:
        LambdaToUpdateArn: !GetAtt OriginalLambda.Arn
        FunctionEnvVariables: !Ref FunctionEnvVariables
LambdaCall:
  DependsOn:
    - OriginalLambda
    - LambdaToSetEnv
  Type: Custom::LambdaCallout
  Properties:
    ServiceToken: !GetAtt LambdaToSetEnv.Arn
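For completeness, the FunctionEnvVariables referenced above was elided from the snippet; a minimal declaration (my assumption of its shape) would be:
Parameters:
  FunctionEnvVariables:
    Type: String
    Description: Env vars for OriginalLambda, e.g. 'key1::::value1||||key2::::value2'
    Default: ''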
I want to create a Route53 HostedZone with CloudFormation, so I want to check whether the HostedZone already exists in Route53.
The logic of my case requires that if the resource exists, its creation is skipped. How can I handle this problem?
My CloudFormation template is shown below.
"myDNSRecord" : {
"Type" : "AWS::Route53::RecordSet",
"Properties" : {
"HostedZoneName" : { "Ref" : "HostedZoneResource" },
"Comment" : "DNS name for my instance.",
"Name" : {
"Fn::Join" : [ "", [
{"Ref" : "Ec2Instance"}, ".",
{"Ref" : "AWS::Region"}, ".",
{"Ref" : "HostedZone"} ,"."
] ]
},
"Type" : "A",
"TTL" : "900",
"ResourceRecords" : [
{ "Fn::GetAtt" : [ "Ec2Instance", "PublicIp" ] }
]
}
}
This is not exactly the answer you need, but in general you can use Conditions for this. In your template, you define your condition in the Conditions section and use it to conditionally create the resource, e.g.:
Parameters:
  EnvironmentSize:
    Type: String
    Default: Micro
    AllowedValues:
      - Micro
      - Small
      - Medium
      - AuroraCluster
Conditions:
  isntAuroraCluster:
    !Not [!Equals [!Ref EnvironmentSize, "AuroraCluster"]]
Resources:
  DBInstance:
    Type: AWS::RDS::DBInstance
    Condition: isntAuroraCluster
    Properties:
      DBInstanceClass: !FindInMap [InstanceSize, !Ref EnvironmentSize, DB]
      <Rest of properties>
Here my RDS DBInstance is only created if my environment size is not AuroraCluster.
If you don't find a better solution, you could take that as user input (whether to create a record set or not) and use it as the condition for creating your resource. Hope it helps.
The best way to do this would be the following:
Create a Lambda-backed custom resource.
Have the Lambda check whether your resource exists or not and, depending on that, return an identifier.
Use CloudFormation conditions to check the value of the returned identifier and correspondingly create, or not create, the resource.
You can fetch the return value of the custom resource using !GetAtt.
More information can be found in the AWS documentation on custom resources:
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/template-custom-resources.html
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-cfn-customresource.html
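A minimal sketch of that pattern (my own illustration with assumed names, not code from the question): the Lambda checks whether a hosted zone with the given name already exists and returns a flag, which you can then read elsewhere with !GetAtt ZoneCheck.Exists:
ZoneCheckFunction:
  Type: AWS::Lambda::Function
  Properties:
    Handler: index.handler
    Runtime: python3.12
    Timeout: 30
    Role: !GetAtt ZoneCheckRole.Arn # role needs route53:ListHostedZonesByName; elided here
    Code:
      ZipFile: |
        import boto3
        import cfnresponse
        def handler(event, context):
            name = event['ResourceProperties']['ZoneName']
            zones = boto3.client('route53').list_hosted_zones_by_name(DNSName=name)
            exists = any(z['Name'] == name for z in zones['HostedZones'])
            cfnresponse.send(event, context, cfnresponse.SUCCESS,
                             {'Exists': 'true' if exists else 'false'})
ZoneCheck:
  Type: Custom::ZoneCheck
  Properties:
    ServiceToken: !GetAtt ZoneCheckFunction.Arn
    ZoneName: example.com. # Route53 zone names carry a trailing dot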
You can try to orchestrate the creation of specific resources using AWS::NoValue:
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/pseudo-parameter-reference.html
The snippet below is taken from environment-variable creation for a Lambda function:
Conditions:
  IsProd: !Equals [!Ref Env, "production"]
Environment:
  Variables:
    USER: !If [IsProd, !GetAtt ...., !Ref "AWS::NoValue"]
What would be a good strategy for having a default value in mappings?
For example:
I have a parameter called country.
Based on that country I reference a DNS name using Mappings:
"Mappings" : {
"DNS":{
"us" : {"dns" : "mypage.us.com", "ttl" : "600"},
"mx" : {"dns" : "mypage.default.com", "ttl" : "300"},
"ar" : {"dns" : "mypage.default.com", "ttl" : "300"},
"br" : {"dns" : "mypage.default.com", "ttl" : "300"}
}
}
If us is passed, it's mapped:
{ "Fn::FindInMap" : [ "DNS", { "Ref" : "country" }, "dns" ]}
I get "mypage.us.com" for the other countries I've created a huge list of countries with a default value mypage.default.com, in the future, this values will be changing and we will be adding more countries, is there a better approach to this?
The only way I was able to do this was to chain Fn::If statements instead of using the map. I tried using a combination of Fn::If and Fn::FindInMap, but Fn::FindInMap will always raise an error if it can't find the mapping.
Therefore the only solution for me was to resort to something like the following (in my case, setting ECS memory based on instance type):
Conditions:
  IsT2Micro: !Equals [!Ref InstanceType, "t2.micro"]
  IsT2Small: !Equals [!Ref InstanceType, "t2.small"]
...
taskdefinition:
  Type: AWS::ECS::TaskDefinition
  Properties:
    ...
    Memory: !If [ IsT2Micro, 900, !If [ IsT2Small, 1900, !Ref "AWS::NoValue" ] ]
To elaborate on Steve Smith's answer: CloudFormation always expects the mapping lookup to be valid, even on a branch that is never taken, so a missing key cannot be hidden behind a logic gate.
You can combine !Sub and !If for a fair amount of flexibility, though.
For example, we do this for dynamic staging ECS env vars:
Parameters:
  Env:
    Type: String
  Branch:
    Type: String
  DevelopUrl:
    Type: String
    Default: "develop.example.com"
  MasterUrl:
    Type: String
    Default: "master.example.com"
...(ECS Resource)
      Environment:
        - !If
          - IsStaging
          - Name: SOME_CALLBACK_URL
            Value: !Sub
              - "https://${Url}/some-callback-endpoint"
              - Url: !If [ IsDevelop, !Ref DevelopUrl, !If [ IsMaster, !Ref MasterUrl, !GetAtt MyLoadBalancer.DNSName ] ]
          - !Ref "AWS::NoValue"
CloudFormation helps create AWS resources once, at the beginning of their life. You can also do updates with it, but in your case it sounds like you'd be better off building your DNS config logic into your application. Maybe create a table in DynamoDB with the mapping data. You could pass the Country value to the servers as an environment variable and have them query the DynamoDB table on launch, based on that variable.
Alternatively, you can have CloudFormation invoke a Lambda function when it launches a new stack to query DynamoDB for the DNS config based on the country, so you don't have to keep modifying your stack JSON every time there's a new entry, and you don't have to change your application.
In your mapping, add a default entry:
"Mappings" : {
"DNS":{
"us" : {"dns" : "mypage.us.com", "ttl" : "600"},
"mx" : {"dns" : "mypage.mx.com", "ttl" : "300"},
"default" : {"dns" : "mypage.default.com", "ttl" : "300"}
}
}
Then create a condition (YAML):
Conditions:
  HasSpecialDNS: !Or
    - !Equals [!Ref country, "us"]
    - !Equals [!Ref country, "mx"]
Then change the 2nd parameter of FindInMap to:
{ "Fn::FindInMap" : [ "DNS", { "Fn::If": ["HasSpecialDNS", {"Ref" : "country"}, "default" ]}, "dns" ]}
Or in YAML:
Fn::FindInMap:
  - DNS
  - !If [HasSpecialDNS, !Ref country, "default"]
  - dns