I'm trying to set up a CloudFormation template that will launch either a clean instance or one restored from snapshots. I'd like to be able to use an if/else-type statement, so that it would look something like this
Pseudocode:
if InputSnapshotId:
"SnapshotId" : {"Ref" : "InputSnapshotId"},
else:
"Size" : 20,
In CloudFormation I have tried a number of things, like:
"WebserverInstanceDataVolume" : {
"Type" : "AWS::EC2::Volume",
"Properties" : {
"Fn::If" : [
{"Ref" : "FromSnapshot"},
{"SnapshotId" : { "Ref" : "InputSnapshotId" }},
{"Size" : "20"}
],
"VolumeType" : "standard",
"AvailabilityZone" : { "Fn::GetAtt" : [ "WebserverInstance", "AvailabilityZone" ]},
"Tags" : [
{"Key" : "Role", "Value": "data" }
]
},
"DeletionPolicy" : "Delete"
},
Or wrapping the Fn::If in {}:
{"Fn::If" : [
{"Ref" : "FromSnapshot"},
{"SnapshotId" : { "Ref" : "InputSnapshotId" }},
{"Size" : "20"}
]}
All of which kick different types of errors. The first one gives "Encountered unsupported property Fn::If" in CloudFormation; the second just isn't valid JSON. I could snapshot an empty volume, define a size parameter, and then always pass both a SnapshotId and a size, but I feel like there must be a way to have an optional line in CloudFormation.
Any ideas?
You can do it like this:
"Conditions" : {
"NotUseSnapshot" : {"Fn::Equals" : [{"Ref" : "InputSnapshotId"}, ""]}
},
"Resources" : {
"WebserverInstanceDataVolume" : {
"Type" : "AWS::EC2::Volume",
"Properties" : {
"Size" : {
"Fn::If" : [
"NotUseSnapshot",
"20",
{"Ref" : "AWS::NoValue"}
]
},
"SnapshotId" : {
"Fn::If" : [
"NotUseSnapshot",
{"Ref" : "AWS::NoValue"},
{"Ref" : "InputSnapshotId"}
]
},
"VolumeType" : "standard",
"AvailabilityZone" : { "Fn::GetAtt" : [ "WebserverInstance", "AvailabilityZone" ]},
"Tags" : [
{"Key" : "Role", "Value": "data" }
]
},
"DeletionPolicy" : "Delete"
}
}
Here is a link to a functional template: https://github.com/caussourd/public-cloudformation-templates/blob/master/conditional_volume_creation.template
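Note that the NotUseSnapshot condition above compares InputSnapshotId against an empty string, so the parameter should default to "" when no snapshot is supplied. A minimal declaration might look like this (the description text is illustrative):

```json
"Parameters" : {
  "InputSnapshotId" : {
    "Type" : "String",
    "Default" : "",
    "Description" : "Optional snapshot ID to restore the data volume from; leave empty to create a blank 20 GB volume"
  }
}
```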
I wrote a CloudFormation template that creates a stack with a DynamoDB table:
"FeedStageDynamoTable" : {
"Type" : "AWS::DynamoDB::Table",
"UpdateReplacePolicy" : "Retain",
"Properties" : {
"TableName" : { "Fn::Sub": [ "feed-${Year}-${Environment}-table", { "Year": {"Ref" : "BundleYear" }, "Environment" : {"Ref" : "DeployEnvironment"}} ]},
"AttributeDefinitions" : [
{"AttributeName" : "Guid", "AttributeType" : "S"}
],
"KeySchema" : [
{"AttributeName" : "Guid", "KeyType" : "HASH"}
],
"ProvisionedThroughput" : {
"ReadCapacityUnits" : "2",
"WriteCapacityUnits" : "2"
},
"StreamSpecification": {
"StreamViewType": "NEW_AND_OLD_IMAGES"
}
}
}
and an Output for the Table's Stream:
"Outputs" : {
"FeedStageTableStreamArn": {
"Description" : "The stream ARN of the FeedStage DynamoDB table",
"Value" : { "Fn::GetAtt" : ["FeedStageDynamoTable", "StreamArn"] },
"Export" : { "Name" : {"Fn::Sub": "${AWS::StackName}-FeedStageDynamoTableStreamArn" }}
},
The output is used by a Lambda function from another template (for the second stack):
"NotifyWebConsumer" : {
"Type" : "AWS::Serverless::Function",
"Properties": {
"Environment": {
"Variables" : {
"EnvironmentCodename" : { "Fn::Sub": [ "${Environment}", { "Environment" : {"Ref" : "DeployEnvironment"}} ]}
}
},
"Handler": "AmazonServerlessStageWebUpdate::AmazonServerlessStageWebUpdate.Functions::NotifyWebConsumer",
"FunctionName": "NotifyWebConsumer",
"Runtime": "dotnetcore2.1",
"CodeUri": "",
"MemorySize": 256,
"Timeout": 30,
"Policies":[{ "Fn::ImportValue" : {"Fn::Sub" : "${DeployEnvironment}-LambdaExecutionPolicy"} }],
"Role":{"Fn::GetAtt": ["NotifyConsumerRole","Arn"]},
"Events": {
}
}
}
...
"EventSourceMapping": {
"Type": "AWS::Lambda::EventSourceMapping",
"Properties": {
"EventSourceArn": { "Fn::ImportValue" : {"Fn::Sub" : "StageEnvironment-FeedStageDynamoTableStreamArn"} },
"FunctionName" : {
"Fn::GetAtt": [
"NotifyWebConsumer", "Arn"
]
},
"StartingPosition" : "LATEST"
}
}
After publishing both stacks I cannot update the first one anymore, because its output stream is in use by the second stack. Now my question: is there a CloudFormation property like "UpdateReplacePolicy": "Retain" for outputs? Or is there any other way to update the first stack without deleting the second one?
CloudFormation is failing when trying to create the Lambda function, with the error message "Encountered unsupported property Value".
There is no reference to the unsupported value, and I couldn't find any incorrect value; all the values were taken from the AWS Lambda CloudFormation documentation.
Also, for Dev I get an error indicating the security group is a string type, but for QA I don't get that error.
Can you please point out what's causing the unsupported-property error, and how to resolve the security-group error for the Dev environment?
{
"AWSTemplateFormatVersion": "2010-09-09",
"Description": "Create Lambda Function For abc",
"Parameters": {
"ID" : {
"Description" : "OwnerContact Value",
"Type" : "String",
"Default" : "abc#xyz.com"
},
"abcVPCNAME": {
"Description": "abc VPC NAME",
"Type": "String",
"Default": "abc-e-dev",
"AllowedValues": [
"abc-e-dev",
"abc-e-qa",
"abc-e-prod",
"abc-w-qa",
"abc-w-prod"
]
}
},
"Mappings" : {
"params" : {
"abc-e-dev" : {
"S3bukcet" : "abc-dev-east",
"S3Key" : "/lambda/abc_S3.zip",
"TicketSNS" : "arn:aws:sns:us-east-1:212:abc",
"HOSTNAME" : "abc.com",
"ROLENAME" : "arn:aws:iam::454:role/Lambda-role",
"Subnets" : ["subnet-1","subnet-2","subnet-3"],
"SecGrps" : ["sg-1","sg-2"],
"TAG1" : "xyz",
"TAG2" : "123"
},
"abc-e-qa" : {
"S3bukcet" : "abc-qa-east",
"S3Key" : "/lambda/abc_S3.zip",
"TicketSNS" : "arn:aws:sns:us-east-1:212:abc",
"HOSTNAME" : "xyz.com",
"ROLENAME" : "arn:aws:iam::454:role/Lambda-role",
"Subnets" : ["subnet-1","subnet-2","subnet-3"],
"SecGrps" : "sg-123",
"TAG1" : "xyz",
"TAG2" : "123"
}
}
},
"Resources": {
"abcS3Get": {
"Type" : "AWS::Lambda::Function",
"Properties" : {
"Code" : {
"S3Bucket" : { "Fn::FindInMap" : [ "params", {"Ref":"abcVPCNAME"}, "S3bukcet" ]},
"S3Key" : { "Fn::FindInMap" : [ "params", {"Ref":"abcVPCNAME"}, "S3Key" ]}
},
"DeadLetterConfig" : { "Fn::FindInMap" : [ "params", {"Ref":"abcVPCNAME"}, "TicketSNS" ]},
"Description" : "abc Lambda Function For File Pickup",
"Environment" : {
"Key": "abcHOST",
"Value": { "Fn::FindInMap" : [ "params", {"Ref":"abcVPCNAME"}, "HOSTNAME" ]}
},
"FunctionName" : "abc-S3-Pickup",
"Handler" : "abc_S3_Get.lambda_handler",
"MemorySize" : 128,
"Role" : { "Fn::FindInMap" : [ "params", {"Ref":"abcVPCNAME"}, "ROLENAME" ]},
"Runtime" : "python2.7",
"Timeout" : 3,
"VpcConfig" : {
"SecurityGroupIds" : { "Fn::FindInMap" : [ "params", {"Ref":"abcVPCNAME"}, "SecGrps" ]},
"SubnetIds" : { "Fn::FindInMap" : [ "params", {"Ref":"abcVPCNAME"}, "Subnets" ]}
},
"Tags" : [{
"Key" : "KEY1",
"Value" : { "Fn::FindInMap" : [ "params", {"Ref":"abcVPCNAME"}, "TAG1" ]}
},
{
"Key" : "KEY2",
"Value" : { "Fn::FindInMap" : [ "params", {"Ref":"abcVPCNAME"}, "TAG2" ]}
},
{
"Key" : "KEY3",
"Value" : {"Ref":"ID"}
}
]
}
}
}
}
Found the resolution. The issue was the Value property inside Environment, which is not a supported property there.
I corrected it as below, and that resolved the issue:
"Environment" : {
"Variables" : {
"abcHOST": {
"Fn::FindInMap" : [ "params", {"Ref":"abcVPCNAME"}, "HOSTNAME" ]
}
}
}
There were a couple of other issues as well.
"S3Key" : "/lambda/abc_S3.zip",
should be
"S3Key" : "lambda/abc_S3.zip",
Also, the DeadLetterConfig property needs to be altered as well.
Current Value:
"DeadLetterConfig" : { "Fn::FindInMap" : [ "params", {"Ref":"abcVPCNAME"}, "TicketSNS" ]},
Correct Value:
"DeadLetterConfig" : {
"TargetArn" : { "Fn::FindInMap" : [ "params", {"Ref":"abcVPCNAME"}, "TicketSNS" ]}
},
The CFT started working after all of the above changes.
I'm trying to set up my CloudFormation template for my database:
"VPC" : {
"Type" : "AWS::EC2::VPC",
"Properties" : {
"CidrBlock" : "10.0.0.0/16",
"EnableDnsSupport" : "false",
"EnableDnsHostnames" : "false",
"InstanceTenancy" : "default",
"Tags" : [ { "Key" : "Name", "Value" : "DomainName" } ]
}
},
"Subnet" : {
"Type" : "AWS::EC2::Subnet",
"Properties" : {
"VpcId" : { "Ref" : "VPC" },
"CidrBlock" : "10.0.0.0/16",
"AvailabilityZone" : { "Fn::Select": [ "0", { "Fn::GetAZs" : { "Ref" : "AWS::Region" } }]},
"Tags" : [ { "Key" : "Name", "Value" : "DomainName" } ]
}
},
"SecurityGroup" : {
"Type" : "AWS::EC2::SecurityGroup",
"Properties" : {
"GroupDescription" : "Allow http to client host",
"VpcId" : {"Ref" : "VPC"},
"SecurityGroupIngress" : [{
"IpProtocol" : "tcp",
"FromPort" : "3306",
"ToPort" : "3306",
"CidrIp" : "10.0.0.0/16"
}],
"Tags" : [ { "Key" : "Name", "Value" : "DomainName" } ]
}
},
"Database" : {
"Type" : "AWS::RDS::DBInstance",
"Properties" : {
"DBName" : { "Fn::Join": ["", { "Fn::Split": [".", { "Ref" : "DomainName" }]}]},
"AllocatedStorage" : "5",
"DBInstanceClass" : "db.t2.micro",
"Engine" : "MySQL",
"EngineVersion" : "5.5",
"MasterUsername" : { "Ref": "DBUsername" },
"MasterUserPassword" : { "Ref": "DBPassword" },
"VPCSecurityGroups" : [ { "Fn::GetAtt": [ "SecurityGroup", "GroupId" ] } ],
"Tags" : [ { "Key" : "Name", "Value" : "DomainName" } ]
},
"DeletionPolicy" : "Snapshot"
},
This should set up a VPC for the database, but when I run the CloudFormation template I get the following error:
UPDATE_FAILED AWS::RDS::DBInstance Database Database is in vpc-3081245b, but Ec2 Security Group sg-b122ffca is in vpc-f7173290
How do I get my database in the VPC properly?
As part of your Database definition, you can specify a DBSubnetGroupName.
A DB Subnet Group provides a list of subnets in which the Database is allowed to run. Each subnet in a DB Subnet Group belongs to a VPC.
Therefore, you need to do the following in your CloudFormation template:
Add an AWS::RDS::DBSubnetGroup resource, specifying the Subnet already defined in your template
Add a DBSubnetGroupName property to your AWS::RDS::DBInstance definition
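A sketch of those two changes, assuming a second subnet named Subnet2 exists in the template (RDS requires a DB Subnet Group to span at least two Availability Zones, so the single Subnet shown in the question is not enough on its own):

```json
"DBSubnetGroup" : {
  "Type" : "AWS::RDS::DBSubnetGroup",
  "Properties" : {
    "DBSubnetGroupDescription" : "Subnets the database is allowed to run in",
    "SubnetIds" : [ { "Ref" : "Subnet" }, { "Ref" : "Subnet2" } ]
  }
}
```

Then add "DBSubnetGroupName" : { "Ref" : "DBSubnetGroup" } to the Database resource's Properties.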
I'm using CloudFormation to create an ECS container instance and add this new instance to a Route 53 hosted zone.
But when I run this script I'm having problems with the HostedZone tags.
Here is the error:
A client error (ValidationError) occurred when calling the CreateStack operation: Invalid template parameter property 'Properties'
Here is the JSON:
"Parameters" : {
"InstanceType" : {
"Description" : "Container Instance type",
"Type" : "String",
"Default" : "t2.medium",
"AllowedValues" : [ "t2.micro", "t2.small", "t2.medium", "m3.medium", "m3.large", "m3.xlarge", "m3.2xlarge" ],
"ConstraintDescription" : "must be a valid EC2 instance type."
},
"HostedZone" : {
"Type": "AWS::Route53::HostedZone",
"Properties": {
"HostedZoneConfig": {
"Comment": "My hosted zone for example.com"
},
"Name": "***.couchbase.com",
"VPCs": [
{
"VPCId": "*********",
"VPCRegion": "eu-west-1"
}
],
"HostedZoneTags": [
{
"Key": "Name",
"Value": "Couchbase DNS"
}
]
}
}
},
"Resources" : {
"ContainerInstance" : {
"Type": "AWS::EC2::Instance",
"Properties": {
"Tags": [{
"Key" : "Name",
"Value" : "Couchbase-1"
},
{
"Key" : "Type",
"Value" : "ECS-Couchbase"
}],
"IamInstanceProfile" : { "Ref" : "ECSIamInstanceProfile" },
"ImageId" : { "Fn::FindInMap" : [ "AWSRegionArch2AMI", { "Ref" : "AWS::Region" },
{ "Fn::FindInMap" : [ "AWSInstanceType2Arch", { "Ref" : "InstanceType" }, "Arch" ] } ] },
"InstanceType" : { "Ref" : "InstanceType" },
"SecurityGroups" : [ "ssh","default", "couchbase" ],
"KeyName" : { "Ref" : "KeyName" },
"UserData" : { "Fn::Base64" : { "Fn::Join" : ["", [
"#!/bin/bash -xe\n",
"echo ECS_CLUSTER=", { "Ref" : "ClusterName" },
" >> /etc/ecs/ecs.config\n"
]]}}
}
},
"CouchbaseDNSRecord" : {
"Type" : "AWS::Route53::RecordSet",
"Properties" : {
"HostedZoneName" : {
"Fn::Join" : [ "", [
{ "Ref" : "HostedZone" }, "."
] ]
},
"Comment" : "DNS name for my instance.",
"Name" : {
"Fn::Join" : [ "", [
{"Ref" : "ContainerInstance"}, ".",
{"Ref" : "AWS::Region"}, ".",
{"Ref" : "HostedZone"} ,"."
] ]
},
"Type" : "A",
"TTL" : "900",
"ResourceRecords" : [
{ "Fn::GetAtt" : [ "ContainerInstance", "PublicIp" ] }
]
}
},
The HostedZone should be inside the Resources section.
"Parameters" : {
"InstanceType" : {
...
}
},
"Resources" : {
"HostedZone" : {
...
},
"ContainerInstance" : {
...
},
...
}
All the resources you want to create using CloudFormation must be defined within the Resources section; Parameters holds only input parameter definitions. The template anatomy is described here: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/template-anatomy.html
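Applied to the template in the question, the HostedZone block moves unchanged from Parameters into Resources (the masked zone name and VPC ID from the question are left as-is):

```json
"Resources" : {
  "HostedZone" : {
    "Type" : "AWS::Route53::HostedZone",
    "Properties" : {
      "HostedZoneConfig" : { "Comment" : "My hosted zone for example.com" },
      "Name" : "***.couchbase.com",
      "VPCs" : [ { "VPCId" : "*********", "VPCRegion" : "eu-west-1" } ],
      "HostedZoneTags" : [ { "Key" : "Name", "Value" : "Couchbase DNS" } ]
    }
  }
}
```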
I am somewhat confused about two AWS::EC2::Instance properties:
BlockDeviceMappings and Volumes.
I have read the documentation a number of times but still don't really understand the difference.
Here is my template:
{
"AWSTemplateFormatVersion" : "2010-09-09",
"Description" : "kappoowTest",
"Mappings" : {
"AmazonLinuxAMI" : {
"eu-west-1" :
{ "AMI" : "ami-d8f9f1ac" },
"us-west-1" :
{ "AMI" : "ami-b63210f3" }
}
},
"Resources" : {
"SomeInstance" :{
"Type" : "AWS::EC2::Instance",
"Properties" : {
"AvailabilityZone" : "eu-west-1a",
"BlockDeviceMappings" : [
{
"DeviceName" : "/dev/sdc",
"Ebs" : { "VolumeSize" : "50" }
},
{
"DeviceName" : "/dev/sdd",
"Ebs" : { "VolumeSize" : "100" }
}
],
"DisableApiTermination" : "true",
"EbsOptimized" : "true",
"ImageId" : { "Fn::FindInMap" : [ "AmazonLinuxAMI", { "Ref" : "AWS::Region" }, "AMI" ]},
"InstanceType" : "m1.large",
"KeyName" : "mongo_test",
"Monitoring" : "true",
"SecurityGroups" : [ "default" ],
"Volumes" : [
{ "VolumeId" : { "Ref" : "NewVolume" }, "Device" : "/dev/sdk" }
]
}
},
"NewVolume" : {
"Type" : "AWS::EC2::Volume",
"Properties" : {
"Size" : "100",
"AvailabilityZone" : "eu-west-1a"
}
}
}}
Here I have created three volumes: two with
"BlockDeviceMappings" : [
{
"DeviceName" : "/dev/sdc",
"Ebs" : { "VolumeSize" : "50" }
},
{
"DeviceName" : "/dev/sdd",
"Ebs" : { "VolumeSize" : "100" }
}
]
and another one with:
"Volumes" : [
{ "VolumeId" :
{ "Ref" : "NewVolume" }, "Device" : "/dev/sdk" }
]
CloudFormation ran fine, but I fail to see the difference.
Could someone tell me which way of adding EBS volumes to an EC2 instance is better, and what the difference is between these two methods?
With BlockDeviceMappings you can map ephemeral (instance store) storage as well, not only EBS.
Volumes attaches only EBS volumes, but since each one is a standalone AWS::EC2::Volume resource, it provides more options (like choosing the AZ, or specifying the IOPS if you want to use Provisioned IOPS).
If all you want is simple EBS volumes, then there is no difference.
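For example, here is a minimal BlockDeviceMappings entry that maps instance-store (ephemeral) storage, something Volumes cannot express. This is just a sketch: it assumes an instance type that actually provides ephemeral disks, and the device name is illustrative:

```json
"BlockDeviceMappings" : [
  {
    "DeviceName" : "/dev/sdb",
    "VirtualName" : "ephemeral0"
  }
]
```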