I'm trying to create a CloudFormation template that has default values, and I'm using a few !Sub functions to substitute imported values into the template.
However, I am passing a list to a Node.js Lambda function, and I need to !Sub the list before sending it.
The code that I'm writing:
AWSTemplateFormatVersion: '2010-09-09'
Transform: 'AWS::Serverless-2016-10-31'
Description: Creating Athena database and tables
Parameters:
  S3DataLocations:
    Type: CommaDelimitedList
    Description: The S3 locations where the logs are read from (Specify 'Use,Default' to inherit defaults)
    Default: Use,Default
Conditions:
  CustomS3DataLocations: !Equals ['Use,Default', !Join [",", !Ref S3DataLocations]]
Resources:
  # Custom resource for running CreateTableFunction, to create databases
  CreateLogTable:
    Type: Custom::CreateLogTable
    Properties:
      ServiceToken: !GetAtt [CreateLogTableFunction, Arn]
      S3DataLocations:
        Fn::If:
          - CustomS3DataLocations
          - !Split
            - ","
            - !Sub
              - s3://${LoggingBucket}/data/ApplicationLogs1/,
                s3://${LoggingBucket}/data/ApplicationLogs2/,
                s3://${LoggingBucket}/data/ApplicationLogs3/
              - { LoggingBucket: !ImportValue Parent-LoggingBucket }
          - !Ref S3DataLocations
If I pass these as a literal external S3DataLocations parameter, s3://logbucket/data/ApplicationLogs1/,s3://logbucket/data/ApplicationLogs2/,s3://logbucket/data/ApplicationLogs3/, it works fine: it translates to ["s3://logbucket/data/ApplicationLogs1/","s3://logbucket/data/ApplicationLogs2/","s3://logbucket/data/ApplicationLogs3/"] and is interpreted by the Lambda without issue.
The parameter gets parsed through the CommaDelimitedList type and is passed to the Lambda as expected.
The issue is that I am trying to create a manual default, so I need to !Sub a list as a string, then !Split it to pass an actual list to the custom Lambda. This doesn't seem to work any way I try it, and I cannot figure out why.
I've been inspecting the success (manual param) and the failure (defaults, without the manual param) and I can't see a big difference.
The Lambda event shows, when working:
{
    "RequestType": "Create",
    "ServiceToken": "hidden",
    "ResponseURL": "hidden",
    "StackId": "hidden",
    "RequestId": "hidden",
    "LogicalResourceId": "CreateLogTable",
    "ResourceType": "Custom::CreateLogTable",
    "ResourceProperties": {
        "S3DataLocations": [
            "s3://loggingbucket/data/ApplicationLogs/",
            "s3://loggingbucket/data/ApplicationLogs/",
            "s3://loggingbucket/data/ApplicationLogs/",
            "s3://loggingbucket/data/ApplicationLogs/"
        ]
    }
}
And when NOT working:
...
{
    "RequestType": "Create",
    "ServiceToken": "hidden",
    "ResponseURL": "hidden",
    "StackId": "hidden",
    "RequestId": "hidden",
    "LogicalResourceId": "CreateLogTable",
    "ResourceType": "Custom::CreateLogTable",
    "ResourceProperties": {
        "S3DataLocations": [
            "s3://logging/data/ApplicationLogs/",
            " s3://loggingbucket/data/ApplicationLogs/",
            " s3://loggingbucket/data/ApplicationLogs/",
            " s3://loggingbucket/data/ApplicationLogs/"
        ]
    }
}
I'm a little stuck here. I think there might be some type mismatch, but I can't tell the difference between the manual parameter and the default.
Does anyone have any idea?
You can break your string into multiple lines while preventing the newlines from being turned into spaces by using a quoted scalar with trailing backslashes.
To verify that, I used the following surrogate template for your situation:
Resources:
  MyBucket:
    Type: AWS::S3::Bucket
    Properties: {}
Outputs:
  Test1:
    Value: !Sub
      - s3://${LoggingBucket}/data/ApplicationLogs1/,
        s3://${LoggingBucket}/data/ApplicationLogs2/,
        s3://${LoggingBucket}/data/ApplicationLogs3/
      - { LoggingBucket: "Parent-LoggingBucket" }
  Test2:
    Value: !Sub
      - "s3://${LoggingBucket}/data/ApplicationLogs1/,\
        s3://${LoggingBucket}/data/ApplicationLogs2/,\
        s3://${LoggingBucket}/data/ApplicationLogs3/"
      - { LoggingBucket: "Parent-LoggingBucket" }
Test1 produces a string with spaces, as in your question:
s3://Parent-LoggingBucket/data/ApplicationLogs1/, s3://Parent-LoggingBucket/data/ApplicationLogs2/, s3://Parent-LoggingBucket/data/ApplicationLogs3/
In contrast, Test2 does not have the spaces:
s3://Parent-LoggingBucket/data/ApplicationLogs1/,s3://Parent-LoggingBucket/data/ApplicationLogs2/,s3://Parent-LoggingBucket/data/ApplicationLogs3/
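Applied back to the template in the question, the S3DataLocations property would then look something like this (a sketch, untested against the original stack):
S3DataLocations:
  Fn::If:
    - CustomS3DataLocations
    - !Split
      - ","
      - !Sub
        - "s3://${LoggingBucket}/data/ApplicationLogs1/,\
          s3://${LoggingBucket}/data/ApplicationLogs2/,\
          s3://${LoggingBucket}/data/ApplicationLogs3/"
        - { LoggingBucket: !ImportValue Parent-LoggingBucket }
    - !Ref S3DataLocations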
I want to create an EC2 instance of type t3.medium in all environments and m5.large in production.
I'm using .ebextensions (YAML) like so:
Option 1:
Mappings:
  EnvironmentMap:
    "production":
      TheType: "m5.large"
      SecurityGroup: "foo"
      ...
    "staging":
      TheType: "t3.medium"
      SecurityGroup: "bar"
      ...
option_settings:
  aws:autoscaling:launchconfiguration:
    IamInstanceProfile: "aws-elasticbeanstalk-ec2-role"
    InstanceType: !FindInMap
      - EnvironmentMap
      - !Ref 'AWSEBEnvironmentName'
      - TheType
    SecurityGroups:
      - {"Fn::FindInMap": ["EnvironmentMap", {"Ref": "AWSEBEnvironmentName"}, "SecurityGroup"]}
Option 2:
InstanceType: {"Fn::FindInMap": ["EnvironmentMap", {"Ref": "AWSEBEnvironmentName"}, "EC2InstanceType"]}
Option 3:
InstanceType:
  - {"Fn::FindInMap": ["EnvironmentMap", {"Ref": "AWSEBEnvironmentName"}, "EC2InstanceType"]}
Results
Option 1 fails with Invalid Yaml (but I took this from this AWS example).
Options 2 and 3 fail with the same problem.
The FindInMap function is not "called":
Invalid option value: '{"Fn::FindInMap":["EnvironmentMap","EC2InstanceType"]},{"Ref":"AWSEBEnvironmentName"}' (Namespace: 'aws:autoscaling:launchconfiguration', OptionName: 'InstanceType'): Value is not one of the allowed values: [c1.medium, c1.xlarge, c3.2xlarge, ....
It tries to interpret the whole function/thing as a string.
For the SecurityGroups property it works, for InstanceType it does not.
I can't do it dynamically, and I can't find how to achieve this in the AWS docs, on SO, or anywhere else. I would assume this is simple stuff. What am I missing?
EDIT:
Option 4: using conditionals
Conditions:
  IsProduction: !Equals [ !Ref AWSEBEnvironmentName, production ]
option_settings:
  aws:autoscaling:launchconfiguration:
    InstanceType: !If [ IsProduction, m5.large, t3.medium ]
    SecurityGroups:
      - {"Fn::FindInMap": ["EnvironmentMap", {"Ref": "AWSEBEnvironmentName"}, "SecurityGroup"]}
Error: YAML exception: Invalid Yaml: could not determine a constructor for the tag !Equals in...
But this comes from the documentation on conditions and if.
EDIT 2:
I eventually found out that the option InstanceType is obsolete and we should use:
aws:ec2:instances:
  InstanceTypes: "t3.medium"
But alas, this does not solve the problem either, because I cannot use the replacement functions here as well (Fn::FindInMap).
The reason why FindInMap does not work in option_settings is that only four intrinsic functions are allowed there (from the docs; a sketch of an allowed value follows the list):
Ref
Fn::GetAtt
Fn::Join
Fn::GetOptionSetting
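So a value built only from those functions should pass, for example (a sketch; the security group IDs are illustrative):
option_settings:
  aws:autoscaling:launchconfiguration:
    # Fn::Join is one of the four allowed functions, so this form is accepted
    SecurityGroups: {"Fn::Join": [",", ["sg-11111111", "sg-22222222"]]}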
I'm not convinced that SecurityGroups worked; I think your script failed before the FindInMap in SecurityGroups got a chance to be evaluated.
However, I tried to find a way using Resources. The closest I got was with the following config file:
Mappings:
  EnvironmentMap:
    production:
      TheType: "t3.medium"
    staging:
      TheType: "t2.small"
Resources:
  AWSEBAutoScalingLaunchConfiguration:
    Type: AWS::AutoScaling::LaunchConfiguration
    Properties:
      InstanceType:
        ? "Fn::FindInMap"
        :
          - EnvironmentMap
          -
            Ref: "AWSEBEnvironmentName"
          - TheType
Although this is a step closer, it ultimately fails as well. The reason is that when EB merges our Resources config file with its own template, it produces the following:
"InstanceType": {
"Ref": "InstanceType", # <--- this should NOT be here :-(
"Fn::FindInMap": [
"EnvironmentMap",
{
"Ref": "AWSEBEnvironmentName"
},
"TheType"
]
},
instead of
"InstanceType": {
"Fn::FindInMap": [
"EnvironmentMap",
{
"Ref": "AWSEBEnvironmentName"
},
"TheType"
]
},
And this happens because the original InstanceType (before the merge operation) is:
"InstanceType":{"Ref":"InstanceType"},
Therefore, instead of replacing InstanceType with the custom InstanceType provided in our config file, EB just merges them.
I wanted to have some quick references at the top of my CloudFormation template, so that I don't have to write out a complex reference every time I need it throughout the template.
So I wrote this:
Mappings:
  StandardResourcesMap:
    AWSExecuteApi:
      string: !Join [ ':', ['arn', !Sub '${AWS::Partition}', 'execute-api', !Sub '${AWS::Region}', !Sub '${AWS::AccountId}'] ]
    AWSLambdaFunctions:
      string: !Join [ ':', ['arn', !Sub '${AWS::Partition}', 'apigateway', !Sub '${AWS::Region}', 'lambda:path/2015-03-31/functions/'] ]
The rest of the CloudFormation template follows this, and without the lines above the template deploys (an S3 bucket, a DynamoDB table, and a Python 3.7-based Lambda).
The hope was that I could then just use:
!FindInMap [StandardResourcesMap,AWSExecuteApi,string]
whenever I needed the long-winded value. However, the template fails validation with:
An error occurred (ValidationError) when calling the CreateChangeSet operation: Template format error: Every Mappings attribute must be a String or a List.
I have tried a number of variations on the Mappings, such as using the !Ref variant:
Mappings:
  StandardResourcesMap:
    AWSExecuteApi:
      string: !Join [ ':', ['arn', !Ref 'AWS::Partition', 'execute-api', !Ref 'AWS::Region', !Ref 'AWS::AccountId'] ]
    AWSLambdaFunctions:
      string: !Join [ ':', ['arn', !Ref 'AWS::Partition', 'apigateway', !Ref 'AWS::Region', 'lambda:path/2015-03-31/functions/'] ]
and I just get pulled up on various validation errors, centring on the one presented above.
Any help would be greatly appreciated.
The problem is this: you cannot include parameters, pseudo parameters, or intrinsic functions in the Mappings section (see the Mappings documentation).
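Since Mappings must stay static, the usual workaround is to build the long-winded value with !Sub at the point of use instead; a minimal sketch (the permission resource and its names are illustrative):
ApiInvokePermission:
  Type: AWS::Lambda::Permission
  Properties:
    Action: lambda:InvokeFunction
    FunctionName: !Ref MyFunction  # assumed to be defined elsewhere in the template
    Principal: apigateway.amazonaws.com
    SourceArn: !Sub 'arn:${AWS::Partition}:execute-api:${AWS::Region}:${AWS::AccountId}:*'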
I'm fighting with a weird case.
I need to push CloudFormation stacks dynamically parameterized with Terraform.
My resource looks like this:
resource "aws_cloudformation_stack" "eks-single-az" {
count = length(var.single_az_node_groups)
name = "eks-${var.cluster_name}-${var.single_az_node_groups[count.index].name}"
template_body = <<EOF
Description: "eks-${var.cluster_name}-${var.single_az_node_groups[count.index].name}"
Resources:
ASG:
Type: AWS::AutoScaling::AutoScalingGroup
Properties:
AutoScalingGroupName: "eks-${var.cluster_name}-${var.single_az_node_groups[count.index].name}"
VPCZoneIdentifier: ["${var.private_subnet_ids[count.index]}"]
MinSize: "${lookup(var.single_az_node_groups[count.index], "asg_min", "0")}"
MaxSize: "${lookup(var.single_az_node_groups[count.index], "asg_max", "10")}"
HealthCheckType: EC2
TargetGroupARNs: [] < - here is error.
MixedInstancesPolicy:
InstancesDistribution:
OnDemandBaseCapacity: "0"
OnDemandPercentageAboveBaseCapacity: "${lookup(var.single_az_node_groups[count.index], "on_demand_percentage", "0")}"
LaunchTemplate:
LaunchTemplateSpecification:
LaunchTemplateId: "${aws_launch_template.eks-single-az[count.index].id}"
Version: "${aws_launch_template.eks-single-az[count.index].latest_version}"
Overrides:
-
InstanceType: m5.large
Tags:
- Key: "Name"
Value: "eks-${var.cluster_name}-${var.single_az_node_groups[count.index].name}"
PropagateAtLaunch: true
- Key: "kubernetes.io/cluster/${var.cluster_name}"
Value: "owned"
PropagateAtLaunch: true
- Key: "k8s.io/cluster-autoscaler/enabled"
Value: "true"
PropagateAtLaunch: true
- Key: "k8s.io/cluster-autoscaler/${var.cluster_name}"
Value: "true"
PropagateAtLaunch: true
UpdatePolicy:
AutoScalingRollingUpdate:
MinSuccessfulInstancesPercent: 80
MinInstancesInService: "${lookup(data.external.desired_capacity.result, "eks-${var.cluster_name}-${var.single_az_node_groups[count.index].name}", "0")}"
PauseTime: PT4M
SuspendProcesses:
- HealthCheck
- ReplaceUnhealthy
- AZRebalance
- AlarmNotification
- ScheduledActions
WaitOnResourceSignals: true
EOF
depends_on = [
aws_launch_template.eks-single-az
]
}
I need to put the target group ARNs from a list containing JSON objects:
single_az_node_groups = [
  {
    "name" : "workload-az1",
    "instance_type" : "t2.micro",
    "asg_min" : "1",
    "asg_max" : "7",
    "target_group_arns" : "arnA, arnB, arnC"
  },
  ...
]
I tried everything. The problem is that with every Terraform function I tried, either Terraform adds some double quotes which CloudFormation does not support, or Terraform won't process the template_body because of missing quotes.
Do you know maybe some sneaky trick to achieve that?
When building strings that represent serialized data structures, it's much easier to use Terraform's built-in serialization functions to construct the result, rather than trying to produce a valid string using string templates.
In this case, we can use jsonencode to construct a JSON string representing the template_body from a Terraform object value, which then allows using all of the Terraform language expression features to build it:
template_body = jsonencode({
  Description: "eks-${var.cluster_name}-${var.single_az_node_groups[count.index].name}",
  Resources: {
    ASG: {
      Type: "AWS::AutoScaling::AutoScalingGroup",
      Properties: {
        AutoScalingGroupName: "eks-${var.cluster_name}-${var.single_az_node_groups[count.index].name}",
        VPCZoneIdentifier: [var.private_subnet_ids[count.index]],
        MinSize: lookup(var.single_az_node_groups[count.index], "asg_min", "0"),
        MaxSize: lookup(var.single_az_node_groups[count.index], "asg_max", "10"),
        HealthCheckType: "EC2",
        TargetGroupARNs: flatten([
          for g in local.single_az_node_groups : [
            split(", ", g.target_group_arns)
          ]
        ]),
        # etc, etc
      },
    },
  },
})
As you can see above, by using jsonencode for the entire data structure we can then use Terraform expression operators to build the values. For TargetGroupARNs in the above example I used the flatten function along with a for expression to transform the nested local.single_az_node_groups data structure into a flat list of target group ARN strings.
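For the sample single_az_node_groups value shown in the question, that expression would render the property as something like the following (illustrative, taking only the one group shown):
TargetGroupARNs: ["arnA", "arnB", "arnC"]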
CloudFormation supports both JSON and YAML, and Terraform also has a yamlencode function that you could potentially use instead of jsonencode here. I chose jsonencode both because yamlencode is currently marked as experimental (the exact YAML formatting it produces may change in a later release) and because Terraform has special support for JSON formatting in the plan output where it can show a structural diff of the data structure inside, rather than a string-based diff.
When creating ECS infrastructure we describe our Task Definitions with CloudFormation. We want to be able to dynamically pass environment variables as a parameter to the template. According to the docs, Environment has a KeyValuePair type, but CloudFormation parameters do not have this type.
We cannot hardcode the environment variables in the template, because this template is used as a nested stack, so the environment variables will be passed into it dynamically.
The only possible way I see so far is to pass all arguments as a CommaDelimitedList and then somehow parse and map it using CloudFormation functions. I can Fn::Split every entry into a key and a value, but how do I dynamically build an array of KeyValuePairs in CloudFormation?
Or maybe there is an easier way, and I'm missing something? Thanks in advance for any ideas.
I know it's late and you have already found a workaround. However, the following is the closest I came to solving this. It is still not completely dynamic, as the expected parameters have to be defined as placeholders, so the maximum number of expected environment variables has to be known.
The answer is based on this blog. All credit to the author.
Parameters:
  EnvVar1:
    Type: String
    Description: >-
      A possible environment variable to be passed on to the container definition.
      Should be a key-value pair combined with a ':'. E.g. 'envkey:envval'
    Default: ''
  EnvVar2:
    Type: String
    Description: >-
      A possible environment variable to be passed on to the container definition.
      Should be a key-value pair combined with a ':'. E.g. 'envkey:envval'
    Default: ''
  EnvVar3:
    Type: String
    Description: >-
      A possible environment variable to be passed on to the container definition.
      Should be a key-value pair combined with a ':'. E.g. 'envkey:envval'
    Default: ''
Conditions:
  Env1Exist: !Not [ !Equals [!Ref EnvVar1, ''] ]
  Env2Exist: !Not [ !Equals [!Ref EnvVar2, ''] ]
  Env3Exist: !Not [ !Equals [!Ref EnvVar3, ''] ]
Resources:
  TaskDefinition:
    ContainerDefinitions:
      -
        Environment:
          - !If
            - Env1Exist
            - Name: !Select [0, !Split [":", !Ref EnvVar1]]
              Value: !Select [1, !Split [":", !Ref EnvVar1]]
            - !Ref "AWS::NoValue"
          - !If
            - Env2Exist
            - Name: !Select [0, !Split [":", !Ref EnvVar2]]
              Value: !Select [1, !Split [":", !Ref EnvVar2]]
            - !Ref "AWS::NoValue"
          - !If
            - Env3Exist
            - Name: !Select [0, !Split [":", !Ref EnvVar3]]
              Value: !Select [1, !Split [":", !Ref EnvVar3]]
            - !Ref "AWS::NoValue"
You may want to consider using the EC2 Parameter Store to create secured key/value pairs, which is supported in CloudFormation, and can be integrated with ECS environments.
AWS Systems Manager Parameter Store
AWS Systems Manager Parameter Store provides secure, hierarchical storage for configuration data management and secrets management. You can store data such as passwords, database strings, and license codes as parameter values. You can store values as plain text or encrypted data. You can then reference values by using the unique name that you specified when you created the parameter. Highly scalable, available, and durable, Parameter Store is backed by the AWS Cloud. Parameter Store is offered at no additional charge.
While Parameter Store has great security features for storing application secrets, it can also be used to store nonsensitive application strings such as public keys, environment settings, license codes, etc.
And it is supported directly by CloudFormation, allowing you to easily capture, store and manage application configuration strings which can be accessed by ECS.
This template allows you provide the Parameter store key values at stack creation time via the console or CLI:
Description: Simple SSM parameter example
Parameters:
  pSMTPServer:
    Description: SMTP Server URL eg [email-smtp.us-east-1.amazonaws.com]:587
    Type: String
    NoEcho: false
Resources:
  SMTPServer:
    Type: AWS::SSM::Parameter
    Properties:
      Name: my-smtp-server
      Type: String
      Value: !Ref pSMTPServer
Any AWS runtime environment (EC2, ECS, Lambda) can easily and securely retrieve the values. On the console side, there is a great parameter-manager interface that maintains parameter version history. It's integrated with IAM, so permissions are controlled with standard IAM policy syntax:
{
    "Action": [
        "ssm:GetParameterHistory",
        "ssm:GetParameter",
        "ssm:GetParameters",
        "ssm:GetParametersByPath"
    ],
    "Resource": [
        "arn:aws:ssm:us-west-2:555513456471:parameter/smtp-server"
    ],
    "Effect": "Allow"
},
{
    "Action": [
        "kms:Decrypt"
    ],
    "Resource": [
        "arn:aws:kms:us-west-2:555513456471:key/36235f94-19b5-4649-84e0-978f52242aa0a"
    ],
    "Effect": "Allow"
}
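As an aside, another stack can read that value back without any custom code by declaring an SSM parameter type; a minimal sketch (the parameter name matches the template above, the logical name is illustrative):
Parameters:
  pSMTPServerValue:
    # CloudFormation resolves the value stored under 'my-smtp-server' at deploy time
    Type: AWS::SSM::Parameter::Value<String>
    Default: my-smtp-server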
Finally, this blog article shows a technique to read the parameters into a Dockerfile at runtime. They suggest a secure way to handle environment variables in Docker with AWS Parameter Store. For reference, I am including their Dockerfile here:
FROM grafana/grafana:master
RUN curl -L -o /bin/aws-env https://github.com/Droplr/aws-env/raw/master/bin/aws-env-linux-amd64 && \
chmod +x /bin/aws-env
ENTRYPOINT ["/bin/bash", "-c", "eval $(/bin/aws-env) && /run.sh"]
With that invocation, each of the parameters is available as an environment variable in the container. Your app may or may not need a wrapper to read the parameters from the environment variables.
I was facing the same problem: I needed to create a Lambda resource with environment variables.
We decided to fix the initial set of environment variables, with the key names also decided in advance.
So I added four parameters and used Ref for the values while keeping the key names fixed.
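A minimal sketch of that approach (all names are illustrative):
Parameters:
  DbHostValue:
    Type: String
Resources:
  MyFunction:
    Type: AWS::Lambda::Function
    Properties:
      # Code, Handler, Runtime and Role omitted for brevity
      Environment:
        Variables:
          DB_HOST: !Ref DbHostValue  # fixed key name, parameterized value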
There is another way too, which may sound like overkill, but it allows you to put whatever environment variables you wish into the function, with no need to predefine how many there are. The only restriction in the sample below is that you cannot use :::: or |||| inside the value of a key (per the AWS docs, keys can't contain such symbols anyway).
Game plan:
Make an inline CF Lambda function whose code accepts all the environment variables in any format you wish as a single string (I use JS with the Node.js runtime); since it's your code, parse that string however you wish and use aws-sdk to update the target function's configuration. Call the function once inside the CF template.
In this sample you pass the environment in as a string like:
key1::::value1||||key2::::value2. If you need to use :::: or |||| inside a value, update the code to use some other divider.
I'm not a big fan of running a Lambda for such a task, yet I want the option of passing virtually any environment to the CF template, and this works.
LambdaToSetEnvRole:
  Type: AWS::IAM::Role
  Properties:
    AssumeRolePolicyDocument:
      Version: '2012-10-17'
      Statement:
        - Effect: Allow
          Principal:
            Service:
              - lambda.amazonaws.com
          Action:
            - 'sts:AssumeRole'
    ManagedPolicyArns:
      - arn:aws:iam::aws:policy/CloudWatchLambdaInsightsExecutionRolePolicy
    Policies:
      - PolicyName: cloudwatch-logs
        PolicyDocument:
          Version: '2012-10-17'
          Statement:
            - Effect: Allow
              Action:
                - 'logs:CreateLogGroup'
                - 'logs:CreateLogStream'
                - 'logs:PutLogEvents'
              Resource:
                - !Sub "arn:aws:logs:*:${AWS::AccountId}:log-group:*:*"
                - !Sub "arn:aws:logs:*:${AWS::AccountId}:log-group:/aws/lambda-insights:*"
      - PolicyName: trigger-lambda-by-cloud-events
        PolicyDocument:
          Version: '2012-10-17'
          Statement:
            - Effect: Allow
              Action:
                - 'lambda:UpdateFunctionConfiguration'
              Resource:
                - !GetAtt OriginalLambda.Arn
    Tags:
      - { Key: managed-by, Value: !Ref AWS::StackId }
LambdaToSetEnv:
  Type: AWS::Lambda::Function
  DeletionPolicy: Delete
  Properties:
    Code:
      ZipFile: |
        const response = require('cfn-response');
        const aws = require('aws-sdk');
        exports.handler = (event, context) => {
          console.log(JSON.stringify({event, context}));
          try {
            if (event.RequestType === "Delete") {
              response.send(event, context, response.SUCCESS, {RequestType: event.RequestType});
            } else {
              const client = new aws.Lambda({apiVersion: '2015-03-31'});
              // Keep the raw string too, for easier debugging in the console
              const Variables = {
                "All": process.env.FunctionEnvVariables,
              };
              console.log('process.env.FunctionEnvVariables: ', process.env.FunctionEnvVariables);
              if (process.env.FunctionEnvVariables) {
                // Split "key1::::value1||||key2::::value2" into individual variables
                process.env.FunctionEnvVariables.split('||||').forEach((pair) => {
                  if (pair && pair.trim() !== '') {
                    Variables[pair.split('::::')[0]] = pair.split('::::')[1];
                  }
                });
              }
              client.updateFunctionConfiguration({ FunctionName: process.env.LambdaToUpdateArn, Environment: { Variables } }, function (error, data) {
                console.log('data: ', data);
                console.log('error: ', error);
                if (error) {
                  console.error(error);
                  response.send(event, context, response.FAILED, {});
                } else {
                  response.send(event, context, response.SUCCESS, {});
                }
              });
            }
          } catch (e) {
            response.send(event, context, response.FAILED, e.stack);
          }
        };
    Role: !GetAtt LambdaToSetEnvRole.Arn
    Handler: index.handler
    Runtime: nodejs14.x
    Timeout: '300'
    Environment:
      Variables:
        LambdaToUpdateArn: !GetAtt OriginalLambda.Arn
        FunctionEnvVariables: !Ref FunctionEnvVariables
LambdaCall:
  DependsOn:
    - OriginalLambda
    - LambdaToSetEnv
  Type: Custom::LambdaCallout
  Properties:
    ServiceToken: !GetAtt LambdaToSetEnv.Arn
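The template above assumes OriginalLambda is defined elsewhere and that the environment string arrives as a plain parameter, declared along these lines (a sketch):
Parameters:
  FunctionEnvVariables:
    Type: String
    Description: Environment variables encoded as key::::value pairs separated by ||||
    Default: 'key1::::value1||||key2::::value2'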
I am tagging my resources using Tags in my cfn script:
"Tags" : [ { "Key" : "Owner", "Value" : "my name" },
{ "Key" : "Name", "Value" : "instance name" }
{ "Key" : "DateCreated", "Value" : <something goes here> }
],
I would like to create a tag with the current date as per the example above. Is it possible?
You can use a "custom resource" to generate a timestamp (or any other value).
Custom resources are a newish feature in CloudFormation (introduced around 2014) and allow you to basically call a Lambda function to "create", "update" or "delete" a resource for which CloudFormation does not provide language support (it can even be a resource outside AWS).
I use custom resources a lot, just to compute some values for use in other parts of the stack, for example to create "variables" that hold computed values (e.g. using !Join and similar functions) that I need to use often and would like to compute once.
You can easily use a custom resource to just generate a time stamp. Here is some example code that is very close to what I actually use in production:
Create the "resource" implementation
Resources:
  ValueFunc:
    Type: AWS::Lambda::Function
    Properties:
      Code:
        ZipFile: >
          var r = require('cfn-response');
          exports.handler = function(ev, ctx) {
            ev.ResourceProperties.Time = new Date().toISOString();
            r.send(ev, ctx, r.SUCCESS, ev.ResourceProperties);
          };
      Handler: index.handler
      Runtime: nodejs6.10
      Timeout: 30
      Role: !GetAtt ValueFunctionExecutionRole.Arn
  ValueFunctionExecutionRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: 2012-10-17
        Statement:
          - Effect: Allow
            Principal: { Service: [ lambda.amazonaws.com ] }
            Action: sts:AssumeRole
      Policies:
        - PolicyName:
            Fn::Sub: "value-custom-res-${AWS::StackName}-${AWS::Region}"
          PolicyDocument:
            Version: 2012-10-17
            Statement:
              - Effect: Allow
                Action:
                  - logs:CreateLogGroup
                  - logs:CreateLogStream
                  - logs:PutLogEvents
                Resource: "arn:aws:logs:*:*:*"
              - Effect: Allow
                Action: cloudformation:DescribeStacks
                Resource: "arn:aws:cloudformation:*:*:*"
Then wherever you want to generate a time stamp, you do something like this (Scheduled action example taken from here):
Create a custom resource that calculates its creation time
GetTimeThisTime:
  Type: Custom::Value
  Properties:
    ServiceToken: !GetAtt ValueFunc.Arn
Read the created timestamp using the Time attribute
ScheduledActionUp:
  Type: AWS::AutoScaling::ScheduledAction
  Properties:
    AutoScalingGroupName: !Ref WebServerGroup
    DesiredCapacity: 2
    StartTime: !GetAtt GetTimeThisTime.Time
    Recurrence: "0 7 * * *"
You can generate multiple time stamps at different times of the stack creation by simply creating a new "custom value" that depends on the logical entity whose creation you want to time.
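For example, a second timestamp taken only once the autoscaling group exists could look like this (a sketch reusing ValueFunc and the WebServerGroup referenced above):
GetTimeAfterWebServerGroup:
  Type: Custom::Value
  DependsOn: WebServerGroup
  Properties:
    ServiceToken: !GetAtt ValueFunc.Arn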
The advice by @Guy is correct: you can access the creation timestamp of the stack from the stack properties.
If you still need to specify tags as parameters, you can do it the following way. Currently the JSON syntax supports an extremely limited set of functions, so the possibilities for dynamically modifying your templates are very limited. The only way I see to introduce the tag you want is by adding another parameter to the template itself. Depending on the way you initialize the stack, you can script the parameter to be specified dynamically or provide it in the web console.
For example, if you have this in your template:
"Parameters" : {
"CreationDate" : {
"Description" : "Date",
"Type" : "String",
"Default" : "2013-03-20 21:15:00",
"AllowedPattern" : "^\\d{4}(-\\d{2}){2} (\\d{2}:){2}\\d{2}$",
"ConstraintDescription" : "Date and time of creation"
}
},
You can later reference it using the Ref keyword in the tags like this:
"Tags" : [ { "Key" : "Owner", "Value" : "my name" },
{ "Key" : "Name", "Value" : "instance name" },
{ "Key" : "DateCreated", "Value" : { "Ref" : "CreationDate" } }
],
It is not trivial to automatically assign the current time if you create the stack from the AWS console, but if you use the CLI tools you can call cfn-create-stack like this:
cfn-create-stack MyStack --template-file My.template --parameters "CreationDate=$(date +'%F %T')"
Hope this helps!