Conditional statement in cloud-formation

Background
I have the following CloudFormation template that I am trying to use to spin up an EKS cluster. I am having issues with the logging settings: I want to make them conditional, so that in the future a user can set a specific log type (say, api) to true or false, and based on that it will be enabled or disabled.
Parameters:
  ClusterName:
    Type: String
  ClusterVersion:
    Type: Number
    AllowedValues: [1.21, 1.20, 1.19, 1.18]
  RoleArnValue:
    Type: String
  ListOfSubnetIDs:
    Description: Array of Subnet IDs
    Type: List<AWS::EC2::Subnet::Id>
  ListOfSecurityGroupIDs:
    Description: Array of security group ids
    Type: List<AWS::EC2::SecurityGroup::Id>
  ApiLogging:
    Type: String
    AllowedValues: [true, false]
  AuditLogging:
    Type: String
    AllowedValues: [true, false]
  AuthenticatorLogging:
    Type: String
    AllowedValues: [true, false]
  ControllerManagerLogging:
    Type: String
    AllowedValues: [true, false]
  SchedulerLogging:
    Type: String
    AllowedValues: [true, false]
Conditions:
  ApiLoggingEnabled: !Equals [!Ref ApiLogging, 'true']
  AuditLoggingEnabled: !Equals [!Ref AuditLogging, 'true']
  AuthenticatorLoggingEnabled: !Equals [!Ref AuthenticatorLogging, 'true']
  ControllerManagerLoggingEnabled: !Equals [!Ref ControllerManagerLogging, 'true']
  SchedulerLoggingEnabled: !Equals [!Ref SchedulerLogging, 'true']
Resources:
  EKSCluster:
    Type: AWS::EKS::Cluster
    Properties:
      Name: !Sub ${ClusterName}
      Version: !Sub ${ClusterVersion}
      RoleArn: !Sub ${RoleArnValue}
      ResourcesVpcConfig:
        SecurityGroupIds: !Ref ListOfSecurityGroupIDs
        SubnetIds: !Ref ListOfSubnetIDs
      Logging:
        ClusterLogging:
          EnabledTypes:
            - Type: !If [ApiLoggingEnabled, api, 'AWS::NoValue']
            - Type: !If [AuditLoggingEnabled, audit, 'AWS::NoValue']
            - Type: !If [AuthenticatorLoggingEnabled, authenticator, 'AWS::NoValue']
            - Type: !If [ControllerManagerLoggingEnabled, controllerManager, 'AWS:NoValue']
            - Type: !If [SchedulerLoggingEnabled, scheduler, 'AWS:NoValue']
Outputs:
  ClusterArn:
    Description: Arn of EKS CLUSTER
    Value: !Ref EKSCluster
However, I get the following error. My template works fine when I remove the logging settings, but I want to fix them. I am not sure what I did wrong.
Properties validation failed for resource EKSCluster with message:
#/Logging/ClusterLogging/EnabledTypes/2/Type: #: only 1 subschema matches out of 2
#/Logging/ClusterLogging/EnabledTypes/2/Type: failed validation constraint for keyword [enum]
#/Logging/ClusterLogging/EnabledTypes/3/Type: #: only 1 subschema matches out of 2
#/Logging/ClusterLogging/EnabledTypes/3/Type: failed validation constraint for keyword [enum]
#/Logging/ClusterLogging/EnabledTypes/4/Type: #: only 1 subschema matches out of 2
#/Logging/ClusterLogging/EnabledTypes/4/Type: failed validation constraint for keyword [enum]

It should be !Ref 'AWS::NoValue':
Logging:
  ClusterLogging:
    EnabledTypes:
      - Type: !If [ApiLoggingEnabled, api, !Ref 'AWS::NoValue']
      - Type: !If [AuditLoggingEnabled, audit, !Ref 'AWS::NoValue']
      - Type: !If [AuthenticatorLoggingEnabled, authenticator, !Ref 'AWS::NoValue']
      - Type: !If [ControllerManagerLoggingEnabled, controllerManager, !Ref 'AWS::NoValue']
      - Type: !If [SchedulerLoggingEnabled, scheduler, !Ref 'AWS::NoValue']
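Note that even with !Ref 'AWS::NoValue' on the Type key, a disabled log type still leaves an empty item in the EnabledTypes list, which stricter validation may reject. A hedged alternative sketch is to apply the !If at the list-item level instead, so the whole entry is removed when the condition is false:
Logging:
  ClusterLogging:
    EnabledTypes:
      - !If [ApiLoggingEnabled, {Type: api}, !Ref 'AWS::NoValue']
      - !If [AuditLoggingEnabled, {Type: audit}, !Ref 'AWS::NoValue']
      - !If [AuthenticatorLoggingEnabled, {Type: authenticator}, !Ref 'AWS::NoValue']
      - !If [ControllerManagerLoggingEnabled, {Type: controllerManager}, !Ref 'AWS::NoValue']
      - !If [SchedulerLoggingEnabled, {Type: scheduler}, !Ref 'AWS::NoValue']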

Related

CloudFormation conditional EMR instances

I struggle a bit with the use of conditionals in CF templates, and I would like to conditionally specify EMR cluster instance groups or fleets in the most concise way.
This builds without error. It chooses instance groups if prod, or instance fleets if non-prod, using two separate conditionals:
Parameters:
  EnvironmentName:
    Type: String
    Description: 'Example: ci, qa, stage, prod'
Conditions:
  IsPreProd: !Or
    [!Equals [!Ref EnvironmentName, ci], !Equals [!Ref EnvironmentName, qa]]
  IsProd: !Or
    [!Equals [!Ref EnvironmentName, stage], !Equals [!Ref EnvironmentName, prod]]
Resources:
  EMRCluster:
    Type: 'AWS::EMR::Cluster'
    Properties:
      Instances:
        CoreInstanceGroup:
          !If
            - IsProd
            - InstanceCount: 1
              InstanceType: m5.8xlarge
              Market: ON_DEMAND
              Name: CoreInstance
            - !Ref "AWS::NoValue"
        CoreInstanceFleet:
          !If
            - IsPreProd
            - InstanceTypeConfigs:
                - InstanceType: m5.8xlarge
              TargetOnDemandCapacity: 1
              TargetSpotCapacity: 1
              LaunchSpecifications:
                SpotSpecification:
                  TimeoutAction: SWITCH_TO_ON_DEMAND
                  TimeoutDurationMinutes: 10
            - !Ref "AWS::NoValue"
I would like to use just a single conditional, like below, except the build fails telling me 'YAML not well-formed' on the line where the !If is. If I implement it like above, I would end up having four separate conditionals, since I also have to add master instance groups or fleets as well. Is it possible to do it like this, all as one conditional?
Parameters:
  EnvironmentName:
    Type: String
    Description: 'Example: ci, qa, stage, prod'
Conditions:
  IsProd: !Or
    [!Equals [!Ref EnvironmentName, stage], !Equals [!Ref EnvironmentName, prod]]
Resources:
  EMRCluster:
    Type: 'AWS::EMR::Cluster'
    Properties:
      Instances:
        - !If
          - IsProd
          - CoreInstanceGroup:
              InstanceCount: 1
              InstanceType: m5.8xlarge
              Market: ON_DEMAND
              Name: CoreInstance
          - CoreInstanceFleet:
              InstanceTypeConfigs:
                - InstanceType: m5.8xlarge
              TargetOnDemandCapacity: 1
              TargetSpotCapacity: 1
              LaunchSpecifications:
                SpotSpecification:
                  BlockDurationMinutes: 60
                  TimeoutAction: SWITCH_TO_ON_DEMAND
                  TimeoutDurationMinutes: 10
Instances is not a list. You don't need - before !If:
Resources:
  EMRCluster:
    Type: 'AWS::EMR::Cluster'
    Properties:
      Instances:
        !If
          - IsProd
          - CoreInstanceGroup:
              InstanceCount: 1
              InstanceType: m5.8xlarge
              Market: ON_DEMAND
              Name: CoreInstance
          - CoreInstanceFleet:
              InstanceTypeConfigs:
                - InstanceType: m5.8xlarge
              TargetOnDemandCapacity: 1
              TargetSpotCapacity: 1
              LaunchSpecifications:
                SpotSpecification:
                  BlockDurationMinutes: 60
                  TimeoutAction: SWITCH_TO_ON_DEMAND
                  TimeoutDurationMinutes: 10
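Since the !If above returns the entire value of Instances, the master instance settings can ride along in the same two branches, keeping everything under the single conditional the question asked for. A hedged sketch (the instance sizes and capacities are illustrative, not taken from the original templates):
Instances:
  !If
    - IsProd
    - MasterInstanceGroup:
        InstanceCount: 1
        InstanceType: m5.8xlarge
        Market: ON_DEMAND
      CoreInstanceGroup:
        InstanceCount: 1
        InstanceType: m5.8xlarge
        Market: ON_DEMAND
        Name: CoreInstance
    - MasterInstanceFleet:
        InstanceTypeConfigs:
          - InstanceType: m5.8xlarge
        TargetOnDemandCapacity: 1
      CoreInstanceFleet:
        InstanceTypeConfigs:
          - InstanceType: m5.8xlarge
        TargetOnDemandCapacity: 1
        TargetSpotCapacity: 1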

Cloudformation's condition statement (Glue's subnet)

I need my Glue job to use a specific subnet based on the environment it is run in. The SubnetId line below throws a syntax error. I read in the AWS docs that true/false evaluation can be addressed with !Ref; the issue seems to be with the syntax for the condition.
SubnetId: !If [!Ref UseProdCondition, !Ref PrivateSubnetAz2, !Ref PrivateSubnetAz3]
GlueJDBCConnection:
  Type: AWS::Glue::Connection
  UseProdCondition: !Equals [!Ref "${AppEnv}", "production"]
  Properties:
    CatalogId: !Ref AWS::AccountId
    ConnectionInput:
      ConnectionType: "JDBC"
      ConnectionProperties:
        USERNAME: !Ref Username
        PASSWORD: !Ref Password
        JDBC_CONNECTION_URL: !Ref GlueJDBCStringTarget
        sslMode: 'REQUIRED'
      PhysicalConnectionRequirements:
        AvailabilityZone:
          Ref: AvailabilityZone2
        SecurityGroupIdList:
          - Fn::GetAtt: GlueJobSecurityGroup.GroupId
        SubnetId: !If [!Ref UseProdCondition, !Ref PrivateSubnetAz2, !Ref PrivateSubnetAz3]
      Name: !Ref JDBCConnectionName
The condition needs to be defined in the template's top-level Conditions section, and then referenced by name in the specific resource.
Thanks @MisterSmith!
AWSTemplateFormatVersion: 2010-09-09
Description: AWS Glue Spark Job
Conditions:
  UseProdCondition: !Equals [!Ref AppEnv, "production"]
Resources:
  GlueJDBCConnection:
    Type: AWS::Glue::Connection
    Properties:
      CatalogId: !Ref AWS::AccountId
      ConnectionInput:
        ConnectionType: "JDBC"
        ConnectionProperties:
          USERNAME: !Ref Username
          PASSWORD: !Ref Password
          JDBC_CONNECTION_URL: !Ref GlueJDBCStringTarget
          sslMode: 'REQUIRED'
        PhysicalConnectionRequirements:
          AvailabilityZone:
            Ref: AvailabilityZone2
          SecurityGroupIdList:
            - Fn::GetAtt: GlueJobSecurityGroup.GroupId
          #SubnetId: !Ref PrivateSubnetAz2
          SubnetId: !If [UseProdCondition, !Ref PrivateSubnetAz2, !Ref PrivateSubnetAz3]
        Name: !Ref RTMIJDBCConnectionName
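Two details are worth calling out, since both tripped up the original snippet: the first argument to !If must be the condition's name as a plain string (not !Ref UseProdCondition), and the condition itself should reference the parameter directly (!Ref AppEnv, not !Ref "${AppEnv}"). A minimal sketch of the skeleton, with AppEnv assumed to be a String parameter:
Parameters:
  AppEnv:
    Type: String
Conditions:
  UseProdCondition: !Equals [!Ref AppEnv, production]
# then, inside any resource property:
#   SubnetId: !If [UseProdCondition, !Ref PrivateSubnetAz2, !Ref PrivateSubnetAz3]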

CloudFormation: conditional AutoScalingGroup notifications

I want to receive AutoScaling Event notifications using SNS, but only in my PROD environment. How can I configure my CloudFormation template to do so?
Should it be like this:
Parameters:
  Environment:
    Description: Environment of the application
    Type: String
    Default: dev
    AllowedValues:
      - dev
      - prod
Conditions:
  IsDev: !Equals [ !Ref Environment, dev]
  IsProd: !Equals [ !Ref Environment, prod]
Resources:
  mySNSTopic:
    Type: AWS::SNS::Topic
    Properties:
      Subscription:
        - Endpoint: "my@email.com"
          Protocol: "email"
  myProdAutoScalingGroupWithNotifications:
    Type: AWS::AutoScaling::AutoScalingGroup
    Condition: IsProd
    Properties:
      NotificationConfigurations:
        - NotificationTypes:
            - "autoscaling:EC2_INSTANCE_LAUNCH_ERROR"
            - "autoscaling:EC2_INSTANCE_TERMINATE"
            - "autoscaling:EC2_INSTANCE_TERMINATE_ERROR"
          TopicARN: !Ref "mySNSTopic"
  myDevAutoScalingGroupWithoutNotifications:
    Type: AWS::AutoScaling::AutoScalingGroup
    Condition: IsDev
    Properties:
Or does CloudFormation support the following too:
Parameters:
  Environment:
    Description: Environment of the application
    Type: String
    Default: dev
    AllowedValues:
      - dev
      - prod
Conditions:
  IsProd: !Equals [ !Ref Environment, prod]
Resources:
  mySNSTopic:
    Type: AWS::SNS::Topic
    Properties:
      Subscription:
        - Endpoint: "my@email.com"
          Protocol: "email"
  myAutoScalingGroup:
    Type: AWS::AutoScaling::AutoScalingGroup
    Properties:
      NotificationConfigurations:
        - Condition: IsProd
          NotificationTypes:
            - "autoscaling:EC2_INSTANCE_LAUNCH_ERROR"
            - "autoscaling:EC2_INSTANCE_TERMINATE"
            - "autoscaling:EC2_INSTANCE_TERMINATE_ERROR"
          TopicARN: !Ref "mySNSTopic"
It should be done using the Fn::If function:
NotificationConfigurations:
  - !If
    - IsProd
    - NotificationTypes:
        - "autoscaling:EC2_INSTANCE_LAUNCH_ERROR"
        - "autoscaling:EC2_INSTANCE_TERMINATE"
        - "autoscaling:EC2_INSTANCE_TERMINATE_ERROR"
      TopicARN: !Ref "mySNSTopic"
    - !Ref "AWS::NoValue"
You can also try the following form:
NotificationConfigurations:
  !If
    - IsProd
    - - NotificationTypes:
          - "autoscaling:EC2_INSTANCE_LAUNCH_ERROR"
          - "autoscaling:EC2_INSTANCE_TERMINATE"
          - "autoscaling:EC2_INSTANCE_TERMINATE_ERROR"
        TopicARN: !Ref "mySNSTopic"
    - !Ref "AWS::NoValue"
Please be careful about indentation; you may need to adjust it to match your template. The difference between the two forms: in the first, the !If conditionally removes the single list item (leaving an empty NotificationConfigurations list when IsProd is false), while in the second the !If either returns the whole list or removes the NotificationConfigurations property entirely.

ReadEndpoint.Address was not found for DBCluster

I am adding Route 53 to my DBCluster and keep running into the error: Attribute: ReadEndpoint.Address was not found for resource: <DBCluster-name>
The entire stack is created via CloudFormation.
Also, it should be noted that this is for Serverless Aurora, in case that matters.
Here is my code:
AWSTemplateFormatVersion: 2010-09-09
Description: RDS Aurora serverless template
Parameters:
  CustomFunctionArn:
    Default: arn:aws:lambda:us-west-2:123456789:function:vault-secrets-read-lambda-prod
    Description: The ARN of the lambda function to retrieve password from Vault
    Type: String
  DBName:
    AllowedPattern: '[a-zA-Z][a-zA-Z0-9]*'
    Description: Name of the database
    Type: String
  DBMasterUsername:
    AllowedPattern: '[a-zA-Z][a-zA-Z0-9]*'
    Description: The master user name for the DB instance
    Type: String
  DBScalingAutoPauseEnabled:
    AllowedValues:
      - 'true'
      - 'false'
    Default: 'true'
    Description: Pause all DB instances after some inactivity
    Type: String
  DBScalingMaxCapacity:
    AllowedValues:
      - 2
      - 4
      - 8
      - 16
      - 32
      - 64
      - 192
      - 384
    Default: 8
    Description: The maximum capacity for an Aurora DB cluster in serverless DB engine mode
    Type: Number
  DBScalingMinCapacity:
    AllowedValues:
      - 2
      - 4
      - 8
      - 16
      - 32
      - 64
      - 192
      - 384
    Default: 2
    Description: The minimum capacity for an Aurora DB cluster in serverless DB engine mode
    Type: Number
  DBScalingSecondsUntilAutoPause:
    Default: 300
    Description: Auto pause after consecutive seconds of inactivity
    MinValue: 300
    MaxValue: 86400
    Type: Number
  Env:
    AllowedValues:
      - prod
      - qa
      - dev
    Type: String
    Description: Environment
  VaultPath:
    Default: secret/dev/dbPassword
    Type: String
  SnapshotId:
    Description: snapshot ID to restore DB cluster from
    Type: String
Conditions:
  EnableAutoPause:
    !Equals [!Ref DBScalingAutoPauseEnabled, 'true']
  DoNotUseSnapshot: !Equals
    - !Ref SnapshotId
    - ''
Mappings:
  Configuration:
    prod:
      HostedZoneEnv: mydomain.com
      HostedZoneId: 'XXX'
      SecurityGroup: sg-123321
      SubnetGroups:
        - subnet-123
        - subnet-456
        - subnet-789
      VPCId: vpc-555
      Tags:
        - Key: Name
          Value: my-db
        - Key: environment
          Value: prod
        - Key: component
          Value: rds-aurora
        - Key: classification
          Value: internal
    qa:
      HostedZoneEnv: mydomain-qa.com
      HostedZoneId: 'XXX'
      SecurityGroup: sg-321123
      SubnetGroups:
        - subnet-098
        - subnet-765
        - subnet-432
      VPCId: vpc-345543
      Tags:
        - Key: Name
          Value: my-db
        - Key: environment
          Value: qa
        - Key: component
          Value: rds-aurora
        - Key: classification
          Value: internal
    dev:
      HostedZoneEnv: mydomain-dev.com
      HostedZoneId: 'XXX'
      SecurityGroup: sg-f3453f
      SubnetGroups:
        - subnet-dsf24327
        - subnet-82542gsda
        - subnet-casaf2344
      VPCId: vpc-23dfsf
      Tags:
        - Key: Name
          Value: my-db
        - Key: environment
          Value: dev
        - Key: component
          Value: rds-aurora
        - Key: classification
          Value: internal
Resources:
  AuroraSG:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Allows access to RDS
      GroupName: !Sub '${AWS::StackName}-aurora-rds-${Env}'
      SecurityGroupIngress:
        - IpProtocol: -1
          CidrIp: 0.0.0.0/0
          FromPort: 5432
          ToPort: 5432
      Tags: !FindInMap [Configuration, !Ref Env, Tags]
      VpcId: !FindInMap [Configuration, !Ref Env, VPCId]
  GetValuefromVault:
    Type: Custom::CustomResource
    Properties:
      ServiceToken: !Ref CustomFunctionArn
      VaultKeyPath: !Ref VaultPath
  DBCluster:
    Type: 'AWS::RDS::DBCluster'
    DeletionPolicy: Snapshot
    UpdateReplacePolicy: Snapshot
    Properties:
      BackupRetentionPeriod: 7
      DBClusterParameterGroupName: default.aurora-postgresql10
      DBSubnetGroupName: !Ref DBSubnetGroup
      DatabaseName: !Ref DBName
      DeletionProtection: false
      # EnableHttpEndpoint: true
      Engine: aurora-postgresql
      EngineMode: serverless
      EngineVersion: '10.7'
      KmsKeyId: !If [DoNotUseSnapshot, !Ref KMSkey, !Ref 'AWS::NoValue']
      MasterUserPassword: !If [DoNotUseSnapshot, !GetAtt 'GetValuefromVault.ValueFromVault', !Ref 'AWS::NoValue']
      MasterUsername: !If [DoNotUseSnapshot, !Ref DBMasterUsername, !Ref 'AWS::NoValue']
      Port: 5432
      ScalingConfiguration:
        AutoPause: !If [EnableAutoPause, true, false]
        MaxCapacity: !Ref DBScalingMaxCapacity
        MinCapacity: !Ref DBScalingMinCapacity
        SecondsUntilAutoPause: !Ref DBScalingSecondsUntilAutoPause
      SnapshotIdentifier: !If [DoNotUseSnapshot, !Ref 'AWS::NoValue', !Ref SnapshotId]
      StorageEncrypted: true
      Tags: !FindInMap [Configuration, !Ref Env, Tags]
      VpcSecurityGroupIds:
        - !GetAtt [AuroraSG, GroupId]
        - !FindInMap [Configuration, !Ref Env, SecurityGroup]
  DBSubnetGroup:
    Type: 'AWS::RDS::DBSubnetGroup'
    Properties:
      DBSubnetGroupDescription: !Sub '${AWS::StackName}-${Env}'
      SubnetIds: !FindInMap [Configuration, !Ref Env, SubnetGroups]
      Tags: !FindInMap [Configuration, !Ref Env, Tags]
  KmsAlias:
    Type: AWS::KMS::Alias
    Properties:
      AliasName: !Sub 'alias/${AWS::StackName}-${Env}-aurora-rds'
      TargetKeyId: !Ref KMSkey
  KMSkey:
    Type: AWS::KMS::Key
    Properties:
      KeyPolicy:
        Id: key-consolepolicy-3
        Version: 2012-10-17
        Statement:
          - Sid: Enable IAM User Permissions
            Effect: Allow
            Principal:
              AWS: !Sub arn:aws:iam::${AWS::AccountId}:root
            Action: 'kms:*'
            Resource: '*'
  RecordSet:
    Type: AWS::Route53::RecordSet
    DependsOn: DBCluster
    Properties:
      HostedZoneId: !FindInMap [Configuration, !Ref Env, HostedZoneId]
      Name: !Join ['', [!Ref DBName, -writer-db, ., !FindInMap [Configuration, !Ref Env, HostedZoneEnv], .]]
      ResourceRecords:
        - !GetAtt DBCluster.Endpoint.Address
      TTL: '60'
      Type: CNAME
  ReadRecordSet:
    Type: 'AWS::Route53::RecordSet'
    DependsOn:
      - DBCluster
    Properties:
      HostedZoneId: !FindInMap [Configuration, !Ref Env, HostedZoneId]
      Name: !Join ['', [!Ref DBName, -reader-db, ., !FindInMap [Configuration, !Ref Env, HostedZoneEnv], .]]
      ResourceRecords:
        - !GetAtt DBCluster.ReadEndpoint.Address
      TTL: '60'
      Type: CNAME
Outputs:
  AuroraHost:
    Value: !GetAtt [DBCluster, Endpoint.Address]
    Export:
      Name: !Join [":", [ !Ref "AWS::StackName", 'Host' ]]
  AuroraSG:
    Value: !GetAtt AuroraSG.GroupId
    Export:
      Name: !Join [":", [ !Ref "AWS::StackName", AuroraSG ]]
  KMS:
    Value: !GetAtt [KMSkey, Arn]
    Export:
      Name: !Join [":", [ !Ref "AWS::StackName", 'KMS' ]]
  DNSName:
    Description: 'The connection endpoint for the DB cluster.'
    Value: !GetAtt 'DBCluster.Endpoint.Address'
    Export:
      Name: !Sub '${AWS::StackName}-DNSName'
  ReadDNSName:
    Description: 'The reader endpoint for the DB cluster.'
    Value: !GetAtt 'DBCluster.ReadEndpoint.Address'
    Export:
      Name: !Sub '${AWS::StackName}-ReadDNSName'
Some things I have tried:
- Create new stack: FAIL
- Create new stack without ReadRecordSet: FAIL
- Create new stack without RecordSet (old name for read recordset): FAIL
- Create new stack without RecordSet (new name for read recordset): FAIL
- Add a DependsOn to ReadRecordSet (for first RecordSet): FAIL
- Enabling HTTP endpoint on Cluster: FAIL
- Update TTL to 60: FAIL
- Update TTL to 0: FAIL
The RecordSet seems to be creating okay (I tested that by adding a DependsOn: - RecordSet in the ReadRecordSet to allow RecordSet to create first), so it's the ReadRecordSet that is failing and can't find ReadEndpoint.Address
Not sure what I am missing here, been googling like mad and don't see much about this error. Any help is appreciated!
It turns out that Aurora Serverless doesn't have a reader endpoint, so the entire ReadRecordSet section is only applicable to a provisioned DB; ReadEndpoint indeed doesn't exist for serverless clusters. Unfortunately the AWS documentation doesn't mention that explicitly.
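If you want the same template to also serve provisioned clusters later, one hedged option is to gate the reader resources behind a condition (the EngineMode parameter and IsProvisioned condition below are assumptions for illustration, not part of the original template):
Parameters:
  EngineMode:
    Type: String
    AllowedValues: [serverless, provisioned]
    Default: serverless
Conditions:
  IsProvisioned: !Equals [!Ref EngineMode, provisioned]
Resources:
  ReadRecordSet:
    Type: 'AWS::Route53::RecordSet'
    Condition: IsProvisioned
    # ...same properties as above; the ReadDNSName output would need the same Condition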

Dynamic environment variables for AWS Lambda using cloudformation template

I have to use AWS Lambda in various stacks of my application, so I have created a generic CloudFormation template to create a Lambda function. This template can be included in another CloudFormation template for further use as a nested stack.
# Basics
AWSTemplateFormatVersion: '2010-09-09'
Description: AWS CloudFormation Template to create a lambda function for java 8 or nodejs
# Parameters
Parameters:
  FunctionName:
    Type: String
    Description: Function Name
  HandlerName:
    Type: String
    Description: Handler Name
  FunctionCodeS3Bucket:
    Type: String
    Description: Name of s3 bucket where the function code is present
    Default: my-deployment-bucket
  FunctionCodeS3Key:
    Type: String
    Description: Function code present in s3 bucket
  MemorySize:
    Type: Number
    Description: Memory size between 128 MB - 1536 MB and multiple of 64
    MinValue: '128'
    MaxValue: '1536'
    Default: '128'
  RoleARN:
    Type: String
    Description: Role ARN for this function
  Runtime:
    Type: String
    Description: Runtime Environment name e.g nodejs, java8
    AllowedPattern: ^(nodejs6.10|nodejs4.3|java8)$
    ConstraintDescription: must be a valid environment (nodejs6.10|nodejs4.3|java8) name.
  Timeout:
    Type: Number
    Description: Timeout in seconds
    Default: '3'
  Env1:
    Type: String
    Description: Environment Variable with format Key|value
    Default: ''
  Env2:
    Type: String
    Description: Environment Variable with format Key|value
    Default: ''
  Env3:
    Type: String
    Description: Environment Variable with format Key|value
    Default: ''
  Env4:
    Type: String
    Description: Environment Variable with format Key|value
    Default: ''
# Conditions
Conditions:
  Env1Exist: !Not [ !Equals [!Ref Env1, '']]
  Env2Exist: !Not [ !Equals [!Ref Env2, '']]
  Env3Exist: !Not [ !Equals [!Ref Env3, '']]
  Env4Exist: !Not [ !Equals [!Ref Env4, '']]
# Resources
Resources:
  LambdaFunction:
    Type: AWS::Lambda::Function
    Properties:
      Code:
        S3Bucket: !Ref 'FunctionCodeS3Bucket'
        S3Key: !Ref 'FunctionCodeS3Key'
      Description: !Sub 'Lambda function for: ${FunctionName}'
      Environment:
        Variables:
          'Fn::If':
            - Env1Exist
            -
              - !Select [0, !Split ["|", !Ref Env1]]: !Select [1, !Split ["|", !Ref Env1]]
              - 'Fn::If':
                  - Env2Exist
                  - !Select [0, !Split ["|", !Ref Env2]]: !Select [1, !Split ["|", !Ref Env2]]
                  - !Ref "AWS::NoValue"
              - 'Fn::If':
                  - Env3Exist
                  - !Select [0, !Split ["|", !Ref Env3]]: !Select [1, !Split ["|", !Ref Env3]]
                  - !Ref "AWS::NoValue"
              - 'Fn::If':
                  - Env4Exist
                  - !Select [0, !Split ["|", !Ref Env4]]: !Select [1, !Split ["|", !Ref Env4]]
                  - !Ref "AWS::NoValue"
            - !Ref "AWS::NoValue"
      FunctionName: !Ref 'FunctionName'
      Handler: !Ref 'HandlerName'
      MemorySize: !Ref 'MemorySize'
      Role: !Ref 'RoleARN'
      Runtime: !Ref 'Runtime'
      Timeout: !Ref 'Timeout'
Outputs:
  LambdaFunctionARN:
    Value: !GetAtt 'LambdaFunction.Arn'
I want to inject the environment variables into the function, passed from the parent stack as below:
# Resources
Resources:
  # Lambda for search Function
  ChildStackLambdaFunction:
    Type: AWS::CloudFormation::Stack
    Properties:
      TemplateURL: <<REF_TO_ABOVE_LAMBDA_STACK.yml>>
      Parameters:
        FunctionName: test
        HandlerName: 'index.handler'
        FunctionCodeS3Bucket: <<BUCKET_NAME>>
        FunctionCodeS3Key: <<FUNCTION_DEPLOYMENT_NAME>>
        MemorySize: '256'
        RoleARN: <<ROLE_ARN>>
        Runtime: nodejs6.10
        Timeout: '60'
        Env1: !Sub 'AWS_REGION|${AWS::Region}'
When I deploy this stack, I get the error below. Can anybody help me resolve it?
Template format error: [/Resources/LambdaFunction/Type/Environment/Variables/Fn::If/1/0] map keys must be strings; received a map instead
Passing key-value parameters is referenced from here.
I tried many ways to achieve this, but you cannot pass a dynamic key-value pair to a nested Lambda stack from the parent stack. I had confirmation from AWS support that this is not possible at the moment.
They suggested another way, which I liked and implemented, as mentioned below:
Pass the key-value pairs as a JSON string and parse it appropriately in the Lambda function.
Environment:
  Variables:
    Env1: '{"REGION": "REGION_VALUE", "ENDPOINT": "http://SOME_ENDPOINT"}'
This suggestion has a little programming overhead to parse the JSON string, but at the moment I would recommend it as the solution to the above problem.
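To tie this back to the nested-stack setup from the question, the parent stack can build the JSON string with !Sub; a sketch reusing the question's placeholders:
ChildStackLambdaFunction:
  Type: AWS::CloudFormation::Stack
  Properties:
    TemplateURL: <<REF_TO_ABOVE_LAMBDA_STACK.yml>>
    Parameters:
      Env1: !Sub '{"REGION": "${AWS::Region}", "ENDPOINT": "http://SOME_ENDPOINT"}'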
I achieved this with the PyPlate macro.
Take the environment variables list as a comma-delimited parameter:
Parameters:
  EnvVars:
    Type: CommaDelimitedList
    Description: Comma separated list of Env vars key=value pairs (key1=value1,key2=value2)
and use it in the Lambda Resource:
Environment:
  Variables: |
    #!PyPlate
    output = dict()
    for envVar in params['EnvVars']:
        key, value = envVar.split('=')
        output.update({key: value})
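Note that this only works if the PyPlate macro is already deployed in the account, and the template must opt in to it in its top-level Transform section. Based on the awslabs macro examples, the declaration is assumed to look like this:
Transform: [PyPlate]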
This is the right way to use global variables, if you are using AWS SAM:
Globals:
  Function:
    Timeout: 60
    Runtime: nodejs10.x
    Environment:
      Variables:
        STAGE: !Ref Stage
        DatabaseName: !Ref DatabaseName
        DatabaseUsername: !Ref DatabaseUsername
        DatabasePassword: !Ref DatabasePassword
        DatabaseHostname: !Ref DatabaseHostname
        AuthyAPIKey: !Ref AuthyApiKey
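Keep in mind that the Globals section is an AWS SAM feature, so the template must declare the SAM transform at the top level for it to be processed, for example:
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31  # activates SAM processing, including Globals
Globals:
  Function:
    Runtime: nodejs10.x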