I'm trying to find a way to use ImportValue inside the If function but can't find the proper syntax. Any help is appreciated.
Below is the code I'm trying:
SomeTaskdefinition:
  Type: AWS::ECS::TaskDefinition
  Properties:
    Family: 'FamilyName'
    ContainerDefinitions:
      - Name: ContainerName
        Image: 'imagename:net/v2/'
        Environment:
          - Name: ENV_VARIABLE_1
            Value:
              Fn::If:
                Fn::Equals:
                  Fn::ImportValue:
                    !Sub "${ImportStackname}-ECSCluster"
                  ''
                'notpresent'
                'present'
I came across this with a similar problem: my idea was to be able to specify DatabaseHost as a parameter and, if it is left empty, take the value from the DatabaseStack export. Here is my sample code - it uses !ImportValue inside an !If function. You will get the idea (instead of constructing only the Value, construct the whole Name/Value list item).
Conditions:
  DatabaseHostPresent: !Not [ !Equals [ !Ref DatabaseHost, ""]]
Resources:
  ...
      ContainerDefinitions:
        - Name: !Sub ${ApplicationName}-web-${EnvironmentName}
          Environment:
            - !If
              - DatabaseHostPresent
              - Name: DB_HOST
                Value: !Ref DatabaseHost
              - Name: DB_HOST
                Value: !ImportValue
                  Fn::Sub: ${DatabaseStack}-EndpointAddress
I don't believe this is possible. You cannot use ImportValue inside an Equals function.
Another way to work around this is to use a nested template:
Add a Parameter to the Child template
In the Parent template, use the Import to populate the Parameter
In the Child template, you can do everything you'd want including using it in the Conditions section
Untested example:
# in Parent
Resources:
  ChildStack:
    Type: 'AWS::CloudFormation::Stack'
    Properties:
      Parameters:
        Stackname: {'Fn::ImportValue': !Sub "${ImportStackname}-ECSCluster"}
      TemplateURL: './child.yaml'

# in Child
Parameters:
  Stackname:
    Type: String
    Default: ''
Conditions:
  HasStack: !Not [!Equals [!Ref Stackname, '']]
Resources:
  SomeTaskdefinition:
    Type: AWS::ECS::TaskDefinition
    Properties:
      Family: 'FamilyName'
      ContainerDefinitions:
        - Name: ContainerName
          Image: 'imagename:net/v2/'
          Environment:
            - Name: ENV_VARIABLE_1
              Value: !If [HasStack, 'present', 'notpresent']
Using a combination of intrinsic functions I've obtained the result:
parameter1: !If
  - condition1
  - !ImportValue
    Fn::Sub: "${name}-dev-parameter"
  - !Ref AWS::NoValue
Make sure to use the sequence of intrinsic functions as I've listed. It seems to be the only way.
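For reference, condition1 still has to be declared in the Conditions section; a minimal sketch, using a hypothetical UseImportedParameter parameter as the switch (not part of the original answer):
Parameters:
  UseImportedParameter:
    Type: String
    Default: ''
Conditions:
  # true whenever the caller passes a non-empty value
  condition1: !Not [!Equals [!Ref UseImportedParameter, '']]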
I want to reuse this template, but when I bring it up as a nested stack it gives the error Export with name ExRole is already exported by stack Root-role. How can I improve the reusability of the template so that I can deploy the same template in prod, dev, and other environments? I have tried using an environment variable in the role names, but how can I use it in the outputs, and if an output is to be used in the next template, what should the syntax be?
Role:
---
AWSTemplateFormatVersion: 2010-09-09
Parameters:
  Env:
    Type: String
Resources:
  ExRole:
    Type: 'AWS::IAM::Role'
    Properties:
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Principal:
              Service:
                - ecs-tasks.amazonaws.com
            Action:
              - 'sts:AssumeRole'
      Path: /
      RoleName: !Sub "excutionrole-${Env}"
      ManagedPolicyArns:
        - arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy
      Policies:
        - PolicyName: AccessECR
          PolicyDocument:
            Version: '2012-10-17'
            Statement:
              - Effect: Allow
                Action:
                  - ecr:BatchGetImage
                  - ecr:GetAuthorizationToken
                  - ecr:GetDownloadUrlForLayer
                Resource: '*'
  ContainerInstanceRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Principal:
              Service:
                - ec2.amazonaws.com
            Action:
              - sts:AssumeRole
      ManagedPolicyArns:
        - arn:aws:iam::aws:policy/service-role/AmazonEC2ContainerServiceforEC2Role
      Path: '/'
      RoleName: !Sub "ContainerInstanceRole-${Env}"
  InstanceProfile:
    Type: AWS::IAM::InstanceProfile
    Properties:
      Roles:
        - !Ref ContainerInstanceRole
Outputs:
  ExRole:
    Description: Task excution role
    Value: !Ref ExRole
    Export:
      Name: "ExRole"
  InstanceProfile:
    Description: profile for container instances
    Value: !Ref InstanceProfile
    Export:
      Name: "InstanceProfile"
Task:
---
AWSTemplateFormatVersion: 2010-09-09
Parameters:
  ExRole:
    Type: String
  RDS:
    Type: String
  DBUSER:
    Type: String
    Default: mysqldb
  DBPASSWORD:
    Type: String
    Default: 1234123a
  DBNAME:
    Type: String
    Default: mysqldb
Resources:
  Task:
    Type: AWS::ECS::TaskDefinition
    Properties:
      Family: wordpress
      Cpu: 1 vCPU
      ExecutionRoleArn: !Ref ExRole
      Memory: 1 GB
      NetworkMode: bridge
      RequiresCompatibilities:
        - EC2
      TaskRoleArn: !Ref ExRole
      ContainerDefinitions:
        - Essential: true
          Image: wordpress:latest
          Name: wordpress
          PortMappings:
            - ContainerPort: 80
              HostPort: 0
              Protocol: tcp
          Environment:
            - Name: WORDPRESS_DB_HOST
              Value: !Ref RDS
            - Name: WORDPRESS_DB_USER
              Value: !Ref DBUSER
            - Name: WORDPRESS_DB_PASSWORD
              Value: !Ref DBPASSWORD
            - Name: WORDPRESS_DB_NAME
              Value: !Ref DBNAME
Outputs:
  Task:
    Description: Contains all the task specifications
    Value: !Ref Task
    Export:
      Name: "Task"
Root:
Resources:
  Vpcstack:
    Type: AWS::CloudFormation::Stack
    Properties:
      TemplateURL: !Sub "https://${bucketname}.s3.us-east-2.amazonaws.com${bucketpath}/vpc.yml"
      Parameters:
        Env: !Ref Env
        Cidr: !Ref Cidr
        Publicsubnet1: !Ref Publicsubnet1
        Publicsubnet2: !Ref Publicsubnet2
        Privatesubnet1: !Ref Privatesubnet1
        Privatesubnet2: !Ref Privatesubnet2
  role:
    Type: AWS::CloudFormation::Stack
    Properties:
      TemplateURL: !Sub "https://${bucketname}.s3.us-east-2.amazonaws.com${bucketpath}/role.yml"
      Parameters:
        Env: !Ref Env
Generally, when a template is used multiple times within the same parent stack, people prefix the export name with a variable (such as the stack name) to make it unique.
This can be done using the Sub intrinsic function, as in the example below:
Outputs:
  ExRole:
    Description: Task excution role
    Value: !Ref ExRole
    Export:
      Name: !Sub "${AWS::StackName}-ExRole"
  InstanceProfile:
    Description: profile for container instances
    Value: !Ref InstanceProfile
    Export:
      Name: !Sub "${AWS::StackName}-InstanceProfile"
Then you would need to pass this stack name value as a parameter into the nested stack that needs to reference the export. That stack would again use the Sub intrinsic function to build the export name.
To get the value with the ImportValue intrinsic function, you would reference it like below; to do this you need to pass the stack name as a parameter to the stack:
Fn::ImportValue: !Sub "${NestedStack}-ExRole"
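For instance, the consuming template could look roughly like this (a minimal sketch; the task definition property shown is illustrative and not from the original answer):
Parameters:
  NestedStack:
    Type: String    # name of the stack that created the ExRole export

Resources:
  Task:
    Type: AWS::ECS::TaskDefinition
    Properties:
      ExecutionRoleArn:
        Fn::ImportValue: !Sub "${NestedStack}-ExRole"
      # ... remaining task definition properties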
If you call the other stack from the parent stack, you can skip exporting altogether and pass the output into the next stack using the GetAtt intrinsic function instead.
Resources:
  Vpcstack:
    Type: AWS::CloudFormation::Stack
    Properties:
      TemplateURL: !Sub "https://${bucketname}.s3.us-east-2.amazonaws.com${bucketpath}/vpc.yml"
      Parameters:
        Env: !Ref Env
        Cidr: !Ref Cidr
        Publicsubnet1: !Ref Publicsubnet1
        Publicsubnet2: !Ref Publicsubnet2
        Privatesubnet1: !Ref Privatesubnet1
        Privatesubnet2: !Ref Privatesubnet2
  role:
    Type: AWS::CloudFormation::Stack
    Properties:
      TemplateURL: !Sub "https://${bucketname}.s3.us-east-2.amazonaws.com${bucketpath}/role.yml"
      Parameters:
        Env: !Ref Env
  dbStack:
    Type: AWS::CloudFormation::Stack
    Properties:
      TemplateURL: !Sub "https://${bucketname}.s3.us-east-2.amazonaws.com${bucketpath}/db.yml"
      Parameters:
        Role: !GetAtt role.Outputs.ExRole
The long form Fn::GetAtt: [role, Outputs.ExRole] is also valid syntax.
As I understand it, I can use ImportValue to reference a value from another CloudFormation stack within the Resources section.
NetworkInterfaces:
  - GroupSet:
      - Fn::ImportValue:
          Fn::Sub: "${NetworkStackNameParameter}-SecurityGroupID"
    AssociatePublicIpAddress: 'true'
    DeviceIndex: '0'
    DeleteOnTermination: 'true'
    SubnetId:
      Fn::ImportValue:
        Fn::Sub: "${NetworkStackNameParameter}-SubnetID"
But it seems this feature can't be used in the Parameters section:
Parameters:
  VPC:
    Description: VPC ID
    Type: String
    Default:
      Fn::ImportValue:
        !Sub "${NetworkStackNameParameter}-VPC"
If I use the above approach, I get the error:
An error occurred (ValidationError) when calling the CreateChangeSet operation: Template format error: Every Default member must be a string.
Is there any way to work around this? The same VPC ID, subnet IDs, and security group ID will be used in more than one place.
Update:
It looks like I have to give up:
In your AWS CloudFormation template, confirm that the Parameters section doesn't contain any intrinsic functions.
https://aws.amazon.com/premiumsupport/knowledge-center/cloudformation-template-validation/
One way of doing this is to use a condition:
Parameters:
  MyValue:
    Type: String
    Default: ''
Conditions:
  MyValueExists: !Not [ !Equals [!Ref MyValue, '']]
Resources:
  Resource:
    Type: AWS::Something
    Properties:
      Key: !If [MyValueExists, !Ref MyValue, !ImportValue 'Imported']
I want to create an Elastic Beanstalk environment using a CloudFormation template. I want to define an environment variable ENV_VAR_1 and set its value to the value of the template parameter var1. But I don't want ENV_VAR_1 to exist at all if var1 is an empty string, i.e. I don't want ENV_VAR_1 with no value.
First I tried Conditions, but I get "Encountered unsupported property Condition" during creation of the ElasticBeanstalkEnvironment resource.
Parameters:
  var1:
    Type: String
Conditions:
  isVar1Empty: !Equals [ !Ref var1, "" ]
Resources:
  ElasticBeanstalkEnvironment:
    Type: 'AWS::ElasticBeanstalk::Environment'
    Properties:
      OptionSettings:
        - Namespace: 'aws:elasticbeanstalk:application:environment'
          Condition: isVar1Empty
          OptionName: ENV_VAR_1
          Value: !Ref var1
Then I tried AWS::NoValue
Parameters:
  var1:
    Type: String
Resources:
  ElasticBeanstalkEnvironment:
    Type: 'AWS::ElasticBeanstalk::Environment'
    Properties:
      OptionSettings:
        - Namespace: 'aws:elasticbeanstalk:application:environment'
          OptionName: ENV_VAR_1
          Value: !If [[!Equals [ !Ref var1, "" ]], !Ref 'AWS::NoValue', !Ref var1]
and many permutations and combinations of this, with the same result: when var1 is empty, the Elastic Beanstalk environment gets created with ENV_VAR_1 set to "".
Conditions are applied at the resource level; currently, you cannot apply a condition to a specific property.
What you could do to satisfy these exact requirements (and this is a bit ugly) is create two conditions, one negating the other, and have each of them conditionally create its own variant of the resource.
Parameters:
  var1:
    Type: String
Conditions:
  isVar1Empty: !Equals [ !Ref var1, "" ]
  isVar1NonEmpty: !Not [ !Equals [ !Ref var1, "" ] ]
Resources:
  ElasticBeanstalkEnvironmentWithVar1:
    Type: 'AWS::ElasticBeanstalk::Environment'
    Condition: isVar1NonEmpty
    Properties:
      OptionSettings:
        - Namespace: 'aws:elasticbeanstalk:application:environment'
          OptionName: ENV_VAR_1
          Value: !Ref var1
  ElasticBeanstalkEnvironmentWithoutVar1:
    Type: 'AWS::ElasticBeanstalk::Environment'
    Condition: isVar1Empty
    Properties:
      OptionSettings:
        - Namespace: 'aws:elasticbeanstalk:application:environment'
Like I said, a bit ugly. Note that this only really works well if you have one or two variables like this; as soon as you add a second or third 'optional' parameter, it quickly spirals out of control.
A better option might be to generate your CloudFormation template using a templating library like Mustache.
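As a rough illustration (not from the original answer), a Mustache-templated fragment could wrap the optional setting in a section tag so it is only rendered when the rendering context supplies a var1 value; the file is rendered to plain YAML before being handed to CloudFormation:
# template.yaml.mustache -- rendered with a context like {"var1": "some-value"}
OptionSettings:
  - Namespace: 'aws:elasticbeanstalk:command'
    OptionName: Timeout
    Value: '600'
{{#var1}}
  - Namespace: 'aws:elasticbeanstalk:application:environment'
    OptionName: ENV_VAR_1
    Value: '{{var1}}'
{{/var1}}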
Another workaround to handle conditions at option level:
Conditions:
  CreateProdResources: !Equals [!Ref Env, "prod"]

EBEnvironment:
  Type: AWS::ElasticBeanstalk::Environment
  Properties:
    OptionSettings:
      - Namespace: "aws:elasticbeanstalk:command"
        OptionName: Timeout
        Value: 1200
      - Namespace: !If [CreateProdResources, "aws:elbv2:listener:443", "aws:elasticbeanstalk:command"]
        OptionName: !If [CreateProdResources, Protocol, Timeout]
        Value: !If [CreateProdResources, HTTPS, 1200]
      - Namespace: !If [CreateProdResources, "aws:elbv2:listener:443", "aws:elasticbeanstalk:command"]
        OptionName: !If [CreateProdResources, SSLPolicy, Timeout]
        Value: !If [CreateProdResources, "ELBSecurityPolicy-2016-08", 1200]
      - Namespace: !If [CreateProdResources, "aws:elbv2:listener:443", "aws:elasticbeanstalk:command"]
        OptionName: !If [CreateProdResources, SSLCertificateArns, Timeout]
        Value: !If [CreateProdResources, !Ref ACMCertificate, 1200]
Repeated options are considered only once in Elastic Beanstalk.
I have to use AWS Lambda in various stacks of my application, so I have created a generic CloudFormation template to create a Lambda function. This template can be included in another CloudFormation template as a nested stack for further use.
# Basics
AWSTemplateFormatVersion: '2010-09-09'
Description: AWS CloudFormation Template to create a lambda function for java 8 or nodejs

# Parameters
Parameters:
  FunctionName:
    Type: String
    Description: Function Name
  HandlerName:
    Type: String
    Description: Handler Name
  FunctionCodeS3Bucket:
    Type: String
    Description: Name of s3 bucket where the function code is present
    Default: my-deployment-bucket
  FunctionCodeS3Key:
    Type: String
    Description: Function code present in s3 bucket
  MemorySize:
    Type: Number
    Description: Memory size between 128 MB - 1536 MB and multiple of 64
    MinValue: '128'
    MaxValue: '1536'
    Default: '128'
  RoleARN:
    Type: String
    Description: Role ARN for this function
  Runtime:
    Type: String
    Description: Runtime Environment name e.g nodejs, java8
    AllowedPattern: ^(nodejs6.10|nodejs4.3|java8)$
    ConstraintDescription: must be a valid environment (nodejs6.10|nodejs4.3|java8) name.
  Timeout:
    Type: Number
    Description: Timeout in seconds
    Default: '3'
  Env1:
    Type: String
    Description: Environment Variable with format Key|value
    Default: ''
  Env2:
    Type: String
    Description: Environment Variable with format Key|value
    Default: ''
  Env3:
    Type: String
    Description: Environment Variable with format Key|value
    Default: ''
  Env4:
    Type: String
    Description: Environment Variable with format Key|value
    Default: ''

# Conditions
Conditions:
  Env1Exist: !Not [ !Equals [!Ref Env1, '']]
  Env2Exist: !Not [ !Equals [!Ref Env2, '']]
  Env3Exist: !Not [ !Equals [!Ref Env3, '']]
  Env4Exist: !Not [ !Equals [!Ref Env4, '']]

# Resources
Resources:
  LambdaFunction:
    Type: AWS::Lambda::Function
    Properties:
      Code:
        S3Bucket: !Ref 'FunctionCodeS3Bucket'
        S3Key: !Ref 'FunctionCodeS3Key'
      Description: !Sub 'Lambda function for: ${FunctionName}'
      Environment:
        Variables:
          'Fn::If':
            - Env1Exist
            -
              - !Select [0, !Split ["|", !Ref Env1]]: !Select [1, !Split ["|", !Ref Env1]]
              - 'Fn::If':
                  - Env2Exist
                  - !Select [0, !Split ["|", !Ref Env2]]: !Select [1, !Split ["|", !Ref Env2]]
                  - !Ref "AWS::NoValue"
              - 'Fn::If':
                  - Env3Exist
                  - !Select [0, !Split ["|", !Ref Env3]]: !Select [1, !Split ["|", !Ref Env3]]
                  - !Ref "AWS::NoValue"
              - 'Fn::If':
                  - Env4Exist
                  - !Select [0, !Split ["|", !Ref Env4]]: !Select [1, !Split ["|", !Ref Env4]]
                  - !Ref "AWS::NoValue"
            - !Ref "AWS::NoValue"
      FunctionName: !Ref 'FunctionName'
      Handler: !Ref 'HandlerName'
      MemorySize: !Ref 'MemorySize'
      Role: !Ref 'RoleARN'
      Runtime: !Ref 'Runtime'
      Timeout: !Ref 'Timeout'

Outputs:
  LambdaFunctionARN:
    Value: !GetAtt 'LambdaFunction.Arn'
I want to inject the environment variables into the function, and they will be passed from the parent stack as below:
# Resources
Resources:
  # Lambda for search Function
  ChildStackLambdaFunction:
    Type: AWS::CloudFormation::Stack
    Properties:
      TemplateURL: <<REF_TO_ABOVE_LAMBDA_STACK.yml>>
      Parameters:
        FunctionName: test
        HandlerName: 'index.handler'
        FunctionCodeS3Bucket: <<BUCKET_NAME>>
        FunctionCodeS3Key: <<FUNCTION_DEPLOYMENT_NAME>>
        MemorySize: '256'
        RoleARN: <<ROLE_ARN>>
        Runtime: nodejs6.10
        Timeout: '60'
        Env1: !Sub 'AWS_REGION|${AWS::Region}'
When I deploy this stack, I get the error below. Can anybody help me resolve it?
Template format error: [/Resources/LambdaFunction/Type/Environment/Variables/Fn::If/1/0] map keys must be strings; received a map instead
The approach of passing a key|value parameter is taken from here.
So, I tried many ways to achieve this, but we cannot pass a dynamic key-value pair to the nested Lambda stack from the parent stack. I had confirmation from AWS support that this is not possible at the moment.
They suggested another way, which I liked and implemented; it is mentioned below:
Pass the key: value pairs as a JSON string and parse them appropriately in the Lambda function.
Environment:
  Variables:
    Env1: '{"REGION": "REGION_VALUE", "ENDPOINT": "http://SOME_ENDPOINT"}'
This suggestion adds a little programming overhead to parse the JSON string, but at this moment I would recommend it as the solution to the above problem.
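For completeness, a sketch of how the parent stack might build that JSON string with !Sub (the ENDPOINT value is illustrative; the rest mirrors the child stack call above):
ChildStackLambdaFunction:
  Type: AWS::CloudFormation::Stack
  Properties:
    TemplateURL: <<REF_TO_ABOVE_LAMBDA_STACK.yml>>
    Parameters:
      FunctionName: test
      # the child template receives a single string and the function code parses it as JSON
      Env1: !Sub '{"REGION": "${AWS::Region}", "ENDPOINT": "http://SOME_ENDPOINT"}'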
I achieved this with the PyPlate macro.
Take the environment variables list as a comma-delimited parameter:
Parameters:
  EnvVars:
    Type: CommaDelimitedList
    Description: Comma separated list of Env vars key=value pairs (key1=value1,key2=value2)
and use it in the Lambda Resource:
Environment:
  Variables: |
    #!PyPlate
    output = dict()
    for envVar in params['EnvVars']:
        key, value = envVar.split('=')
        output.update({key: value})
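Note that this relies on the PyPlate macro already being deployed in the account and declared in the template; roughly like this (the Default value is just an example):
Transform: [PyPlate]

Parameters:
  EnvVars:
    Type: CommaDelimitedList
    Description: Comma separated list of Env vars key=value pairs
    Default: 'DB_HOST=mydb.example.com,STAGE=dev'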
This is the right way to use global variables.
Globals:
  Function:
    Timeout: 60
    Runtime: nodejs10.x
    Environment:
      Variables:
        STAGE: !Ref Stage
        DatabaseName: !Ref DatabaseName
        DatabaseUsername: !Ref DatabaseUsername
        DatabasePassword: !Ref DatabasePassword
        DatabaseHostname: !Ref DatabaseHostname
        AuthyAPIKey: !Ref AuthyApiKey
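Keep in mind that Globals is an AWS SAM section, so the template must declare the SAM transform and define the referenced parameters; a minimal sketch (only the Stage parameter is shown for brevity):
Transform: AWS::Serverless-2016-10-31

Parameters:
  Stage:
    Type: String
    Default: dev

Globals:
  Function:
    Runtime: nodejs10.x
    Environment:
      Variables:
        STAGE: !Ref Stage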
I have this under the Parameters section:
Parameters:
  PlatformSelect:
    Description: Cockpit platform Select.
    Type: String
    Default: qa-1
    AllowedValues: [qa-1, qa-2, staging, production]
I need to reference this value in my UserData. I'm using Mappings in between.
Mappings:
  bootstrap:
    ubuntu:
      print: echo ${PlatformSelect} >>test.txt
Resources:
  EC2Instance:
    Type: AWS::EC2::Instance
    Properties:
      InstanceType: !Ref 'InstanceType'
      KeyName: !Ref 'KeyName'
      Tags:
        - Key: Name
          Value: Test
      UserData:
        Fn::Base64:
          Fn::Join:
            - ''
            - - |
                #!/bin/bash
              - Fn::FindInMap:
                  - bootstrap
                  - ubuntu
                  - print
              - |2+
This is not working. I'm not sure whether the way I refer to it is wrong in the first place.
Should I use something before it, like '${AWS::Parameters:PlatformSelect}'?
Is there a reason why you are using Mapping in between?
You could easily use !Sub instead
Resources:
  EC2Instance:
    Type: AWS::EC2::Instance
    Properties:
      InstanceType: !Ref InstanceType
      KeyName: !Ref KeyName
      Tags:
        - Key: Name
          Value: Test
      UserData:
        Fn::Base64:
          !Sub |
            #!/bin/bash
            ${PlatformSelect}
What about a combination of Fn::Join and Ref?
UserData:
  Fn::Base64:
    Fn::Join:
      - ''
      - - "#!/bin/bash\n"
        - "echo "
        - !Ref 'PlatformSelect'
        - " >>test.txt\n"