I have to use AWS Lambda in various stacks of my application, so I have created a generic CloudFormation template to create a Lambda function. This template can be included in another CloudFormation template for further use as a nested stack.
# Basics
AWSTemplateFormatVersion: '2010-09-09'
Description: AWS CloudFormation template to create a Lambda function for Java 8 or Node.js
# Parameters
Parameters:
  FunctionName:
    Type: String
    Description: Function Name
  HandlerName:
    Type: String
    Description: Handler Name
  FunctionCodeS3Bucket:
    Type: String
    Description: Name of the S3 bucket where the function code is present
    Default: my-deployment-bucket
  FunctionCodeS3Key:
    Type: String
    Description: S3 key of the function code in the bucket
  MemorySize:
    Type: Number
    Description: Memory size between 128 MB and 1536 MB, in multiples of 64
    MinValue: '128'
    MaxValue: '1536'
    Default: '128'
  RoleARN:
    Type: String
    Description: Role ARN for this function
  Runtime:
    Type: String
    Description: Runtime environment name, e.g. nodejs, java8
    AllowedPattern: ^(nodejs6.10|nodejs4.3|java8)$
    ConstraintDescription: must be a valid environment (nodejs6.10|nodejs4.3|java8) name.
  Timeout:
    Type: Number
    Description: Timeout in seconds
    Default: '3'
  Env1:
    Type: String
    Description: Environment variable in the format key|value
    Default: ''
  Env2:
    Type: String
    Description: Environment variable in the format key|value
    Default: ''
  Env3:
    Type: String
    Description: Environment variable in the format key|value
    Default: ''
  Env4:
    Type: String
    Description: Environment variable in the format key|value
    Default: ''
# Conditions
Conditions:
  Env1Exist: !Not [!Equals [!Ref Env1, '']]
  Env2Exist: !Not [!Equals [!Ref Env2, '']]
  Env3Exist: !Not [!Equals [!Ref Env3, '']]
  Env4Exist: !Not [!Equals [!Ref Env4, '']]
# Resources
Resources:
  LambdaFunction:
    Type: AWS::Lambda::Function
    Properties:
      Code:
        S3Bucket: !Ref 'FunctionCodeS3Bucket'
        S3Key: !Ref 'FunctionCodeS3Key'
      Description: !Sub 'Lambda function for: ${FunctionName}'
      Environment:
        Variables:
          'Fn::If':
            - Env1Exist
            -
              - !Select [0, !Split ["|", !Ref Env1]]: !Select [1, !Split ["|", !Ref Env1]]
              - 'Fn::If':
                  - Env2Exist
                  - !Select [0, !Split ["|", !Ref Env2]]: !Select [1, !Split ["|", !Ref Env2]]
                  - !Ref "AWS::NoValue"
              - 'Fn::If':
                  - Env3Exist
                  - !Select [0, !Split ["|", !Ref Env3]]: !Select [1, !Split ["|", !Ref Env3]]
                  - !Ref "AWS::NoValue"
              - 'Fn::If':
                  - Env4Exist
                  - !Select [0, !Split ["|", !Ref Env4]]: !Select [1, !Split ["|", !Ref Env4]]
                  - !Ref "AWS::NoValue"
            - !Ref "AWS::NoValue"
      FunctionName: !Ref 'FunctionName'
      Handler: !Ref 'HandlerName'
      MemorySize: !Ref 'MemorySize'
      Role: !Ref 'RoleARN'
      Runtime: !Ref 'Runtime'
      Timeout: !Ref 'Timeout'
Outputs:
  LambdaFunctionARN:
    Value: !GetAtt 'LambdaFunction.Arn'
I want to inject the environment variables into the function; they will be passed from the parent stack as below:
# Resources
Resources:
  # Lambda for search function
  ChildStackLambdaFunction:
    Type: AWS::CloudFormation::Stack
    Properties:
      TemplateURL: <<REF_TO_ABOVE_LAMBDA_STACK.yml>>
      Parameters:
        FunctionName: test
        HandlerName: 'index.handler'
        FunctionCodeS3Bucket: <<BUCKET_NAME>>
        FunctionCodeS3Key: <<FUNCTION_DEPLOYMENT_NAME>>
        MemorySize: '256'
        RoleARN: <<ROLE_ARN>>
        Runtime: nodejs6.10
        Timeout: '60'
        Env1: !Sub 'AWS_REGION|${AWS::Region}'
When I deploy this stack, I get the error below. Can anybody help me resolve it?
Template format error: [/Resources/LambdaFunction/Type/Environment/Variables/Fn::If/1/0] map keys must be strings; received a map instead
The approach of passing key-value parameters is referenced from here.
I tried many ways to achieve this, but a dynamic key-value pair cannot be passed to a nested Lambda stack from the parent stack. I received confirmation from AWS Support that this is not possible at the moment.
They suggested another approach, which I liked and implemented, described below:
Pass the key: value pairs as a JSON string and parse it appropriately in the Lambda function.
Environment:
  Variables:
    Env1: '{"REGION": "REGION_VALUE", "ENDPOINT": "http://SOME_ENDPOINT"}'
This suggestion adds a small programming overhead to parse the JSON string, but at the moment I recommend it as the solution to the above problem.
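For illustration, here is a minimal sketch of the parsing step, assuming a Python handler and the Env1 variable above (a Node.js handler would do the same with JSON.parse(process.env.Env1)):
import json
import os

def handler(event, context):
    # Env1 holds a JSON string such as '{"REGION": "...", "ENDPOINT": "..."}'
    config = json.loads(os.environ.get('Env1', '{}'))
    return {'region': config.get('REGION'), 'endpoint': config.get('ENDPOINT')}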
I achieved this with the PyPlate macro.
Take the environment variables as a comma-delimited list:
Parameters:
  EnvVars:
    Type: CommaDelimitedList
    Description: Comma separated list of env var key=value pairs (key1=value1,key2=value2)
and use it in the Lambda Resource:
Environment:
  Variables: |
    #!PyPlate
    output = dict()
    for envVar in params['EnvVars']:
      key, value = envVar.split('=')
      output.update({key: value})
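This assumes the PyPlate macro is already deployed in the account, and the template that contains the snippet must opt in via the Transform section. A hedged sketch of the remaining wiring (the TemplateURL and parameter values are illustrative):
# In the template that contains the PyPlate snippet:
Transform: [PyPlate]

# In a parent stack, passing the list:
ChildLambdaStack:
  Type: AWS::CloudFormation::Stack
  Properties:
    TemplateURL: <<LAMBDA_STACK_URL>>
    Parameters:
      EnvVars: 'REGION=us-east-1,STAGE=prod'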
If you are using AWS SAM, the Globals section is the right way to set environment variables shared across functions:
Globals:
  Function:
    Timeout: 60
    Runtime: nodejs10.x
    Environment:
      Variables:
        STAGE: !Ref Stage
        DatabaseName: !Ref DatabaseName
        DatabaseUsername: !Ref DatabaseUsername
        DatabasePassword: !Ref DatabasePassword
        DatabaseHostname: !Ref DatabaseHostname
        AuthyAPIKey: !Ref AuthyApiKey
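Note that Globals is an AWS SAM construct, so it only works when the template declares the SAM transform at the top level; a minimal sketch:
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Globals:
  Function:
    Environment:
      Variables:
        STAGE: !Ref Stage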
I'm facing an issue passing SSM parameters from a root stack into a child stack. When creating the stack, the first stack resource fails with "Unable to fetch parameters [value1,value2,value3,value4] from parameter store for this account."
However, the parameter values are fetched correctly, and they are strings, which CloudFormation supports. Also, when using the same templates as independent stacks, they deploy as intended. The values of the parameters in SSM look like subnet1,subnet2, hence the need to split the values afterwards during ALB creation.
Below are the stacks.
root template
Parameters:
  PublicSubnetAZ:
    Type: AWS::SSM::Parameter::Value<String>
    Default: 'PublicSubnetAZ'
    Description: "Public Subnet AZs"
  AppSubnetAZ:
    Type: AWS::SSM::Parameter::Value<String>
    Default: 'AppSubnetAZ'
    Description: "App Subnet AZs"
Resources:
  LBStack:
    Type: "AWS::CloudFormation::Stack"
    Properties:
      TemplateURL: "https://s3bucket.s3.eu-west-1.amazonaws.com/load_balancing.yaml"
      Parameters:
        PublicSubnetAZ: !Ref PublicSubnetAZ
        AppSubnetAZ: !Ref AppSubnetAZ
      Tags:
        - Key: Name
          Value: !Sub "${AWS::StackName}-lb"
nested template
Parameters:
  PublicSubnetAZ:
    Type: String
  AppSubnetAZ:
    Type: String
Resources:
  LoadBalancerExternal:
    Type: AWS::ElasticLoadBalancingV2::LoadBalancer
    Properties:
      Name: !Join ['-', [!Ref AWS::StackName, "external"]]
      Scheme: "internet-facing"
      Type: "application"
      Subnets:
        - !Select [0, !Split [',', !Ref PublicSubnetAZ]]
        - !Select [1, !Split [',', !Ref PublicSubnetAZ]]
      SecurityGroups:
        - '{{resolve:ssm:ExternalLoadBalancerSecurityGroup}}'
      IpAddressType: "ipv4"
  LoadBalancerInternal:
    Type: "AWS::ElasticLoadBalancingV2::LoadBalancer"
    Properties:
      Name: !Join ['-', [!Ref AWS::StackName, "internal"]]
      Scheme: "internal"
      Type: "application"
      Subnets:
        - !Select [0, !Split [',', !Ref AppSubnetAZ]]
        - !Select [1, !Split [',', !Ref AppSubnetAZ]]
      SecurityGroups:
        - '{{resolve:ssm:/InternalLoadBalancerSecurityGroup}}'
      IpAddressType: "ipv4"
Any ideas?
Background
I have the following CloudFormation template that I am trying to use to spin up an EKS cluster. I am having issues with the logging settings. I want to make them conditional, so that in the future a user can set a specific log type, say api, to true or false, and based on that it will be enabled or disabled.
Parameters:
  ClusterName:
    Type: String
  ClusterVersion:
    Type: Number
    AllowedValues: [1.21, 1.20, 1.19, 1.18]
  RoleArnValue:
    Type: String
  ListOfSubnetIDs:
    Description: Array of Subnet IDs
    Type: List<AWS::EC2::Subnet::Id>
  ListOfSecurityGroupIDs:
    Description: Array of security group ids
    Type: List<AWS::EC2::SecurityGroup::Id>
  ApiLogging:
    Type: String
    AllowedValues: [true, false]
  AuditLogging:
    Type: String
    AllowedValues: [true, false]
  AuthenticatorLogging:
    Type: String
    AllowedValues: [true, false]
  ControllerManagerLogging:
    Type: String
    AllowedValues: [true, false]
  SchedulerLogging:
    Type: String
    AllowedValues: [true, false]
Conditions:
  ApiLoggingEnabled: !Equals [!Ref ApiLogging, 'true']
  AuditLoggingEnabled: !Equals [!Ref AuditLogging, 'true']
  AuthenticatorLoggingEnabled: !Equals [!Ref AuthenticatorLogging, 'true']
  ControllerManagerLoggingEnabled: !Equals [!Ref ControllerManagerLogging, 'true']
  SchedulerLoggingEnabled: !Equals [!Ref SchedulerLogging, 'true']
Resources:
  EKSCluster:
    Type: AWS::EKS::Cluster
    Properties:
      Name: !Sub ${ClusterName}
      Version: !Sub ${ClusterVersion}
      RoleArn: !Sub ${RoleArnValue}
      ResourcesVpcConfig:
        SecurityGroupIds: !Ref ListOfSecurityGroupIDs
        SubnetIds: !Ref ListOfSubnetIDs
      Logging:
        ClusterLogging:
          EnabledTypes:
            - Type: !If [ApiLoggingEnabled, api, 'AWS::NoValue']
            - Type: !If [AuditLoggingEnabled, audit, 'AWS::NoValue']
            - Type: !If [AuthenticatorLoggingEnabled, authenticator, 'AWS::NoValue']
            - Type: !If [ControllerManagerLoggingEnabled, controllerManager, 'AWS:NoValue']
            - Type: !If [SchedulerLoggingEnabled, scheduler, 'AWS:NoValue']
Outputs:
  ClusterArn:
    Description: Arn of EKS CLUSTER
    Value: !Ref EKSCluster
However, I get the following error. My template works fine when I remove the logging section, but I want to fix that. I am not sure what I did wrong.
Properties validation failed for resource EKSCluster with message: #/Logging/ClusterLogging/EnabledTypes/2/Type: #: only 1 subschema matches out of 2 #/Logging/ClusterLogging/EnabledTypes/2/Type: failed validation constraint for keyword [enum] #/Logging/ClusterLogging/EnabledTypes/3/Type: #: only 1 subschema matches out of 2 #/Logging/ClusterLogging/EnabledTypes/3/Type: failed validation constraint for keyword [enum] #/Logging/ClusterLogging/EnabledTypes/4/Type: #: only 1 subschema matches out of 2 #/Logging/ClusterLogging/EnabledTypes/4/Type: failed validation constraint for keyword [enum]
It should be !Ref 'AWS::NoValue'. The literal string 'AWS::NoValue' is passed through as a log type, which fails the enum validation; the pseudo parameter only takes effect when referenced with !Ref:
Logging:
  ClusterLogging:
    EnabledTypes:
      - Type: !If [ApiLoggingEnabled, api, !Ref 'AWS::NoValue']
      - Type: !If [AuditLoggingEnabled, audit, !Ref 'AWS::NoValue']
      - Type: !If [AuthenticatorLoggingEnabled, authenticator, !Ref 'AWS::NoValue']
      - Type: !If [ControllerManagerLoggingEnabled, controllerManager, !Ref 'AWS::NoValue']
      - Type: !If [SchedulerLoggingEnabled, scheduler, !Ref 'AWS::NoValue']
I have created a simple template which I am going to use to create S3 buckets. My template looks like this:
Parameters:
  Environment:
    Type: String
    Default: prod
    AllowedPattern: '[a-z\-]+'
  BucketName:
    Type: String
    AllowedPattern: '[a-z\-]+'
Resources:
  Bucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: !Sub foo-${Environment}-${BucketName}
      Tags:
        - Key: key_1
          Value: foo
        - Key: key_2
          Value: foo
    DeletionPolicy: Retain
I want to make a generic template and not create a different template each time I create an S3 bucket. The only thing that will vary between my S3 buckets is the number of tags I add. Some S3 buckets may have 2 tags and others may have more; at the most I will have 5 tags on a bucket. So I am wondering if there is a way to pass tags through a parameter such that if I pass 3 tags, 3 tags get created, and if I pass 2 tags, 2 tags get created.
At the most i will have 5 tags
Then you need 5 parameters:
Tag1:
  Type: CommaDelimitedList
  Default: ""
# ....
Tag5:
  Type: CommaDelimitedList
  Default: ""
and 5 conditions:
Conditions:
  HasTag1:
    !Not [!Equals [!Ref Tag1, ""]]
  # ...
  HasTag5:
    !Not [!Equals [!Ref Tag5, ""]]
And then you use the conditions to populate your tags:
Tags:
  - !If
    - HasTag1
    - Key: !Select [0, !Ref Tag1]
      Value: !Select [1, !Ref Tag1]
    - !Ref "AWS::NoValue"
  # ...
  - !If
    - HasTag5
    - Key: !Select [0, !Ref Tag5]
      Value: !Select [1, !Ref Tag5]
    - !Ref "AWS::NoValue"
I'm getting an error in CloudFormation using the !Join function:
Template error: every Fn::Join object requires two parameters, (1) a string delimiter and (2) a list of strings to be joined or a function that returns a list of strings (such as Fn::GetAZs) to be joined.
My Code:
RDSSubnetGroup:
  Type: AWS::RDS::DBSubnetGroup
  Condition: CreationRDS
  Properties:
    DBSubnetGroupDescription: !Ref RDSName
    SubnetIds: !Ref SubnetIds
RDSAuroraServerless:
  Type: AWS::RDS::DBCluster
  Condition: CreationRDS
  Properties:
    Engine: aurora-mysql
    EngineMode: serverless
    EngineVersion: !Ref EngineVersion
    DatabaseName: !Ref RDSName
    MasterUsername: admin
    MasterUserPassword: !Ref MasterUserPassword
    DBClusterIdentifier: !Ref RDSName
    ScalingConfiguration:
      MinCapacity: 1
      MaxCapacity: 2
      SecondsUntilAutoPause: 300
    BackupRetentionPeriod: 7
    DeletionProtection: false
    VpcSecurityGroupIds:
      - !Ref SecurityGroup
    DBSubnetGroupName: !Ref RDSSubnetGroup
Outputs:
  JDBCOutput:
    Condition: CreationRDS
    Value:
      !Join ['', [!GetAtt RDSAuroraServerless.Endpoint.Address, /, !Ref RDSName]]
Can you help me and show me where I'm going wrong?
Based on the comments: the Join statement in the question is correct. Upon further investigation, the OP found the problem in another Join (not shown in the question), which led to the solution.
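For reference, Fn::Join always takes exactly two parameters: a delimiter string and a list of strings. A minimal sketch of the correct shape next to a shape that triggers the error above (values are illustrative):
# Correct: delimiter plus a list of strings
Value: !Join ['', [!GetAtt RDSAuroraServerless.Endpoint.Address, '/', !Ref RDSName]]
# Triggers the error: the second argument is a plain string, not a list
# Value: !Join ['', !Ref RDSName]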
Up until now, my project had just a single CloudFormation stack template. However, I recently hit the limit on the number of bytes a stack template can have, which is why I've been looking into nested stacks. I'm having a bit of trouble wrapping my head around the formatting, though, because I've seen a lot of differences in the examples provided.
Below is a snippet of the original (non-nested) stack template before hitting the limit.
---
AWSTemplateFormatVersion: "2010-09-09"
Description: "Template for wgs-pipeline"
Parameters:
  CloudspanLambdaFuncS3BucketName:
    Type: String
  CloudspanLambdaFuncS3KeyName:
    Default: 'sfn.deployable.zip'
    Type: String
  CloudspanLambdaFuncModuleName:
    Default: 'cloudspan'
    Type: String
Resources:
  VPC:
    Type: AWS::EC2::VPC
    Properties:
      CidrBlock: 10.0.0.0/16
  InternetGateway:
    Type: AWS::EC2::InternetGateway
  RouteTable:
    Type: AWS::EC2::RouteTable
    Properties:
      VpcId: !Ref 'VPC'
  VPCGatewayAttachment:
    Type: AWS::EC2::VPCGatewayAttachment
    Properties:
      VpcId: !Ref 'VPC'
      InternetGatewayId: !Ref 'InternetGateway'
  SecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: EC2 Security Group for instances launched in the VPC by Batch
      VpcId: !Ref 'VPC'
  # Lambda Resources (the section I want to place in a separate stack)
  CloudspanLambdaExecutionRole:
    Type: "AWS::IAM::Role"
    Properties:
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Principal:
              Service: lambda.amazonaws.com
            Action: "sts:AssumeRole"
      Policies:
        - PolicyName: CanListBuckets
          PolicyDocument:
            Version: "2012-10-17"
            Statement:
              - Effect: Allow
                Action:
                  - "s3:GetBucketLocation"
                  - "s3:ListAllMyBuckets"
                Resource: "arn:aws:s3:::*"
        - PolicyName: CanLog
          PolicyDocument:
            Version: '2012-10-17'
            Statement:
              - Effect: Allow
                Action:
                  - logs:*
                Resource: arn:aws:logs:*:*:*
  CloudspanLambdaFunction:
    Type: "AWS::Lambda::Function"
    Properties:
      Handler:
        Fn::Join: [".", [Ref: CloudspanLambdaFuncModuleName, "handler"]]
      Role:
        Fn::GetAtt: [CloudspanLambdaExecutionRole, Arn]
      Code:
        S3Bucket:
          Ref: CloudspanLambdaFuncS3BucketName
        S3Key:
          Ref: CloudspanLambdaFuncS3KeyName
      Runtime: "python3.6"
      Timeout: "60"
What I want to do is isolate everything under the "Lambda Resources" comment into a separate stack (the "lambda stack") and then have this master stack call that separate "lambda stack".
Below is my current attempt, but I don't know if I'm doing it correctly:
Master template:
---
AWSTemplateFormatVersion: "2010-09-09"
Description: "Master template for wgs-pipeline. Contains network resources, lambda parameters, and batch parameters"
Parameters:
  CloudspanLambdaFuncS3BucketName:
    Type: String
  CloudspanLambdaFuncS3KeyName:
    Default: 'sfn.deployable.zip'
    Type: String
  CloudspanLambdaFuncModuleName:
    Default: 'cloudspan'
    Type: String
Resources:
  # The stuff that was already in the Resources section in the original stack
  VPC:
    Type: AWS::EC2::VPC
    Properties:
      CidrBlock: 10.0.0.0/16
  InternetGateway:
    Type: AWS::EC2::InternetGateway
  RouteTable:
    Type: AWS::EC2::RouteTable
    Properties:
      VpcId: !Ref 'VPC'
  VPCGatewayAttachment:
    Type: AWS::EC2::VPCGatewayAttachment
    Properties:
      VpcId: !Ref 'VPC'
      InternetGatewayId: !Ref 'InternetGateway'
  SecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: EC2 Security Group for instances launched in the VPC by Batch
      VpcId: !Ref 'VPC'
  # The section that is referencing a separate stack
  LambdaStack:
    Type: AWS::CloudFormation::Stack
    Properties:
      Parameters:
        LambdaFunction:
          # ref?
        LambdaExecutionRole:
          # ref?
      TemplateURL: # URL to S3 location of lambda stack
      TimeoutInMinutes: '60'
LambdaStack template:
---
AWSTemplateFormatVersion: '2010-09-09'
Description: Lambda functions stack, contains lambda function and lambda function execution roles.
Parameters:
  LambdaFunction:
    Description: the lambda function.
  LambdaExecutionRole:
    Description: the lambda execution roles.
Resources:
  Type: "AWS::Lambda::Function"
  Properties:
    Handler:
      #
      # Need to figure out how to reference these since they are parameters that exist in the master template
      #
      Fn::Join: [".", [Ref: CloudspanLambdaFuncModuleName, "handler"]]
    Role:
      Fn::GetAtt: [CloudspanLambdaExecutionRole, Arn]
    Code:
      S3Bucket:
        Ref: CloudspanLambdaFuncS3BucketName
      S3Key:
        Ref: CloudspanLambdaFuncS3KeyName
    Runtime: "python3.6"
    Timeout: "60" # This will be included in the master stack?
Outputs:
  LambdaStack:
    Description: Lambda stack ID.
    Value:
      Ref: LambdaFunction
    Export:
      Name:
        Fn::Sub: "${AWS::StackName}-LambdaFunction"
  LambdaExecutionRoleStack:
    Description: Lambda execution role stack ID.
    Value:
      Ref: LambdaExecutionRole
    Export:
      Name:
        Fn::Sub: "${AWS::StackName}-LambdaExecutionRole"
Have I formatted the LambdaStack template and the master template correctly so far? And in terms of the CloudspanLambdaFunction and CloudspanLambdaExecutionRole resources that were in the original (un-nested) stack, how exactly do I format those in the LambdaStack (nested) template? Would it just be a matter of adding the two under the Resources section of the LambdaStack YAML?
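Not a definitive answer, but a minimal sketch of how the wiring could look, assuming the two Lambda resources move wholesale into the nested template: the master passes its three Cloudspan* parameters down, and the nested template declares matching Parameters and keeps the resource definitions (with their logical IDs) unchanged; the TemplateURL is a placeholder:
# Master template (excerpt)
LambdaStack:
  Type: AWS::CloudFormation::Stack
  Properties:
    TemplateURL: <<URL_TO_LAMBDA_STACK_TEMPLATE>>
    TimeoutInMinutes: '60'
    Parameters:
      CloudspanLambdaFuncS3BucketName: !Ref CloudspanLambdaFuncS3BucketName
      CloudspanLambdaFuncS3KeyName: !Ref CloudspanLambdaFuncS3KeyName
      CloudspanLambdaFuncModuleName: !Ref CloudspanLambdaFuncModuleName

# Nested template (excerpt): declare the same names as Parameters,
# then define the role and function exactly as in the original template
Parameters:
  CloudspanLambdaFuncS3BucketName:
    Type: String
  CloudspanLambdaFuncS3KeyName:
    Type: String
  CloudspanLambdaFuncModuleName:
    Type: String
Resources:
  CloudspanLambdaExecutionRole:
    Type: "AWS::IAM::Role"
    # ... unchanged from the original template
  CloudspanLambdaFunction:
    Type: "AWS::Lambda::Function"
    # ... unchanged from the original template
If other resources in the master need the function's ARN, the nested template would expose it as an Output and the master would read it with !GetAtt LambdaStack.Outputs.<OutputName>.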