I am trying to create a stack in AWS CloudFormation. My template basically consists of an EC2 instance, an RDS instance for the DB (MySQL engine) and an S3 bucket. However, it throws an error stating that the DB instance class (db.t2.micro) cannot be created without a VPC. I then changed the DB instance class to db.m1.small and got the same error. I have also created a VPC, but I'm not sure how to create my stack within that VPC. I work in my company's AWS account, where a few other VPCs already exist.
Thanks in advance :)
Edit: I modified the JSON template after getting answers. This template works and successfully creates the stack. Tested!
Updated code:
{
"AWSTemplateFormatVersion": "2010-09-09",
"Resources": {
"DBSubnetGroup": {
"Type": "AWS::RDS::DBSubnetGroup",
"Properties": {
"DBSubnetGroupDescription": "This subnet belongs to Abdul's VPC",
"DBSubnetGroupName": "somename",
"SubnetIds": [
"subnet-f6b15491",
"subnet-b154569e"
]
}
},
"DB": {
"Type": "AWS::RDS::DBInstance",
"Properties": {
"AllocatedStorage": "5",
"StorageType": "gp2",
"DBInstanceClass": "db.m1.small",
"DBName": "wordpress",
"Engine": "MySQL",
"MasterUsername": "wordpress",
"MasterUserPassword": "Word12345",
"DBSubnetGroupName": {
"Ref": "DBSubnetGroup"
}
}
},
"EC2": {
"Type": "AWS::EC2::Instance",
"Properties": {
"ImageId": "ami-c481fad3",
"InstanceType": "t2.micro",
"SubnetId": "subnet-b154569e"
}
},
"S3": {
"Type": "AWS::S3::Bucket",
"Properties": {
"BucketName": "wp-abdultestbuck"
}
}
}
}
You need to create an AWS::RDS::DBSubnetGroup and then reference it in the AWS::RDS::DBInstance:
{
"Resources": {
"DBSubnetGroup": {
"Type": "AWS::RDS::DBSubnetGroup",
"Properties": {
"DBSubnetGroupDescription": "",
"SubnetIds": [ "<Subnet ID 1","<Subnet ID 2>" ],
}
},
"DB": {
"Type": "AWS::RDS::DBInstance",
"Properties": {
....
"DBSubnetGroupName": { "Ref": "DBSubnetGroup" }
}
},
"EC2": {
"Type": "AWS::EC2::Instance",
"Properties": {
"ImageId": "ami-c481fad3",
"InstanceType": "t2.micro",
"SubnetId": "<SubnetID>"
}
}
}
}
I've been trying to figure out why my VPC and subnet show up side by side instead of the subnet being nested inside the VPC. (I used Atom to generate this.)
{
"AWSTemplateFormatVersion": "2010-09-09",
"Description": "vpc",
"Metadata": {
},
"Parameters": {"siggyVpcCidr": {
"Description": "vpc cidr",
"Type": "String",
"Default": "10.0.0.0/16"
},
"siggySubnetCidr": {
"Description": "cidr for the subnet",
"Type": "String",
"Default": "10.0.1.0/2"
},
"Subnet1Az": {
"Description": "AZ for siggySubnetCidr",
"Type": "AWS::EC2::AvailabilityZone::Name"
}
},
"Mappings": {
},
"Conditions": {
},
"Resources": {
"siggyVpc": {
"Type": "AWS::EC2::VPC",
"Properties": {
"CidrBlock": { "Ref": "siggyVpcCidr" },
"Tags": [{ "Key": "Name", "Value": "siggyVpc" }]
}
},
"siggyIgw": {
"Type": "AWS::EC2::InternetGateway",
"Properties": {
"Tags": [{ "Key": "Name", "Value": "siggyIgw1" }]
}
},
"AttachGateway": {
"Type": "AWS::EC2::VPCGatewayAttachment",
"Properties": {
"VpcId": { "Ref": "siggyVpc" },
"InternetGatewayId": { "Ref": "siggyIgw" }
}
},
"SubnetSiggy": {
"Type": "AWS::EC2::Subnet",
"Properties": {
"AvailabilityZone": { "Ref": "Subnet1Az" },
"VpcId": { "Ref": "siggyVpc" },
"CidrBlock": { "Ref": "siggySubnetCidr" },
"Tags": [{ "Key": "Name", "Value": "siggySubnetCidr" }]
}
}
},
"Outputs": {
}
}
They are separate resources. CloudFormation templates arrange resources in a flat collection; nesting is not represented, and this is true of most resource types. Some resources can be defined implicitly when creating other resources, but that relationship generally won't be reflected when you export a template from existing resources.
You would need to inspect the VpcId property to determine the VPC to which the subnet belongs.
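To make the relationship concrete, here is a minimal sketch (written in YAML, with made-up logical names): the subnet sits next to the VPC in the flat Resources map, and the only thing tying them together is the subnet's VpcId reference.
Resources:
  MyVpc:                       # hypothetical logical name
    Type: AWS::EC2::VPC
    Properties:
      CidrBlock: 10.0.0.0/16
  MySubnet:                    # a sibling of MyVpc, not nested under it
    Type: AWS::EC2::Subnet
    Properties:
      VpcId: !Ref MyVpc        # the only link between the subnet and the VPC
      CidrBlock: 10.0.1.0/24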
I am trying to create an Amazon EC2 instance, then create an Amazon EBS volume and attach it to the instance. I am using a CloudFormation template for this. Unfortunately, the stack creation fails when attaching the newly created volume to the instance, with the following error:
Instance 'i-01eebc8c9c492c035' is not 'running'. (Service: AmazonEC2; Status Code: 400; Error Code: IncorrectState; Request ID: 635572fd-dd25-4a02-9306-6e22f88e13dc)
What I do not understand is: once instance creation is complete, the instance should be up and running, so how is this error possible?
I am using the following CloudFormation template:
{
"AWSTemplateFormatVersion": "2010-09-09",
"Description": "single instance template",
"Parameters": {
"InstanceType": {
"Type": "String",
"Default": "t2.micro"
},
"InstanceName": {
"Type": "String",
"Default": "test_CFT"
},
"RootVolumeSize": {
"Type": "String",
"Default": "50"
},
"Volume1Size": {
"Type": "String",
"Default": "8"
},
"Region": {
"Type": "String",
"Default": "us-east-2"
},
"AMIID": {
"Type": "String",
"Default": "ami-8c122be9"
},
"SubnetIds": {
"Type": "CommaDelimitedList",
"Default": "subnet-595e7422"
},
"SecurityGroupIDs": {
"Type": "CommaDelimitedList",
"Default": "sg-082faee8335351537"
}
},
"Resources": {
"Instance": {
"Type": "AWS::EC2::Instance",
"Properties": {
"ImageId": {
"Ref": "AMIID"
},
"InstanceType": {
"Ref": "InstanceType"
},
"KeyName": "thehope",
"NetworkInterfaces": [
{
"AssociatePublicIpAddress": "false",
"DeviceIndex": "0",
"SubnetId": {
"Fn::Select": [
0,
{
"Ref": "SubnetIds"
}
]
},
"GroupSet": {
"Ref": "SecurityGroupIDs"
}
}
],
"BlockDeviceMappings": [
{
"DeviceName": "/dev/sda1",
"Ebs": {
"VolumeSize": {
"Ref": "RootVolumeSize"
},
"DeleteOnTermination": "true",
"VolumeType": "gp2"
}
}
],
"Tags": [
{
"Key": "Name",
"Value": {
"Ref": "InstanceName"
}
}
]
}
},
"Volume1": {
"DeletionPolicy": "Delete",
"Properties": {
"AvailabilityZone": {
"Fn::GetAtt": [
"Instance",
"AvailabilityZone"
]
},
"Encrypted": "False",
"Size": {
"Ref": "Volume1Size"
},
"Tags": [
{
"Key": "Name",
"Value": "New_volume"
}
],
"VolumeType": "gp2"
},
"Type": "AWS::EC2::Volume"
},
"VolumeAttachment1": {
"Properties": {
"Device": "/dev/xvdb",
"InstanceId": {
"Ref": "Instance"
},
"VolumeId": {
"Ref": "Volume1"
}
},
"Type": "AWS::EC2::VolumeAttachment"
}
},
"Outputs": {
"InstanceId": {
"Description": "InstanceId of the instance",
"Value": {
"Ref": "Instance"
}
},
"AZ": {
"Description": "Availability Zone of the instance",
"Value": {
"Fn::GetAtt": [
"Instance",
"AvailabilityZone"
]
}
},
"PrivateIP": {
"Description": "PrivateIP of the instance",
"Value": {
"Fn::GetAtt": [
"Instance",
"PrivateIp"
]
}
}
}
}
What am I doing wrong?
Since you are creating new volumes, it would be easier to simply specify the volumes as part of the instance rather than specifying an Amazon EBS volume and then attaching it to the instance.
From Amazon EC2 Block Device Mapping Property - AWS CloudFormation:
This example sets the EBS-backed root device (/dev/sda1) size to 50 GiB, and another EBS-backed device mapped to /dev/sdm that is 100 GiB in size.
"BlockDeviceMappings" : [
{
"DeviceName" : "/dev/sda1",
"Ebs" : { "VolumeSize" : "50" }
},
{
"DeviceName" : "/dev/sdm",
"Ebs" : { "VolumeSize" : "100" }
}
]
That was quite fascinating, seeing how the instance stops! Because the mapped root device name doesn't match the AMI's actual root device, the instance shuts down shortly after launch, which is why the volume attachment then finds it not 'running'.
When using Amazon Linux 2, it can be fixed by changing:
"DeviceName": "/dev/sda1",
into:
"DeviceName": "/dev/xvda",
Or, it can be fixed by using Amazon Linux (version 1) with /dev/sda1.
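Putting that together, a minimal sketch of the corrected mapping might look like this (shown in YAML as a fragment of the Resources section; the AMI ID is a placeholder and the example assumes an Amazon Linux 2 image, whose root device is /dev/xvda):
Instance:
  Type: AWS::EC2::Instance
  Properties:
    ImageId: ami-xxxxxxxx          # placeholder: an Amazon Linux 2 AMI for your region
    InstanceType: t2.micro
    BlockDeviceMappings:
      - DeviceName: /dev/xvda      # must match the AMI's root device name
        Ebs:
          VolumeSize: 50
          VolumeType: gp2
          DeleteOnTermination: true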
However, this doesn't fix your VolumeAttachment issue.
I was facing the same issue until I changed the AMI in my template. Initially I was testing with a Linux AMI in the N. Virginia region, where it failed, but when I used a CentOS AMI that I had subscribed to, it worked.
I have the following AWS CloudFormation config, which sets up an S3 bucket and an ECR repository.
When I run it via an Ansible playbook, this happens the second time the playbook runs:
AWS::ECR::Repository Repository CREATE_FAILED: production-app-name already exists
etc
How can I make it so that when this is run multiple times, it will keep the existing S3 bucket and repository instead of just blowing up? (I had assumed "DeletionPolicy": "Retain" would do this.)
What I'd like to achieve:
If I run this 100x, I want the same resource state as it was after run #1. I do not want any resources deleted/wiped of any data.
{
"AWSTemplateFormatVersion": "2010-09-09",
"Description": "Pre-reqs for Elastic Beanstalk application",
"Parameters": {
"BucketName": {
"Type": "String",
"Description": "S3 Bucket name"
},
"RepositoryName": {
"Type": "String",
"Description": "ECR Repository name"
}
},
"Resources": {
"Bucket": {
"Type": "AWS::S3::Bucket",
"DeletionPolicy": "Retain",
"Properties": {
"BucketName": { "Fn::Join": [ "-", [
{ "Ref": "BucketName" },
{ "Ref": "AWS::Region" }
]]}
}
},
"Repository": {
"Type": "AWS::ECR::Repository",
"DeletionPolicy": "Retain",
"Properties": {
"RepositoryName": { "Ref": "RepositoryName" }
}
}
},
"Outputs": {
"S3Bucket": {
"Description": "Full S3 Bucket name",
"Value": { "Ref": "Bucket" }
},
"Repository": {
"Description": "ECR Repo",
"Value": { "Fn::Join": [ "/", [
{
"Fn::Join": [ ".", [
{ "Ref": "AWS::AccountId" },
"dkr",
"ecr",
{ "Ref": "AWS::Region" },
"amazonaws.com"
]]
},
{ "Ref": "Repository" }
]]}
}
}
}
Edit:
A DB template with a similar issue when run twice:
{
"AWSTemplateFormatVersion": "2010-09-09",
"Parameters": {
"DBPassword": {
"MinLength": "8",
"NoEcho": true,
"Type": "String"
},
"Environment": {
"MinLength": "1",
"Type": "String"
},
"DBName": {
"Type": "String",
"Description": "DBName"
},
"DBInstanceIdentifier": {
"Type": "String",
"Description": "DBInstanceIdentifier"
},
"DBPort": {
"Type": "String",
"Description": "DBPort"
},
"DBUsername": {
"Type": "String",
"Description": "DBName"
}
},
"Outputs": {
"Url": {
"Value": {
"Fn::Sub": "postgres://${DBUsername}:${DBPassword}#${Instance.Endpoint.Address}:${Instance.Endpoint.Port}/${DBName}"
}
}
},
"Resources": {
"Instance": {
"Type": "AWS::RDS::DBInstance",
"DeletionPolicy": "Retain",
"Properties": {
"AllocatedStorage": "10",
"DBInstanceClass": "db.t2.micro",
"DBInstanceIdentifier": {"Ref": "DBInstanceIdentifier"},
"DBName": {
"Ref": "DBName"
},
"Engine": "postgres",
"EngineVersion": "9.6.6",
"MasterUsername": {
"Ref": "DBUsername"
},
"MasterUserPassword": {
"Ref": "DBPassword"
},
"MultiAZ": "false",
"Port": {
"Ref": "DBPort"
},
"PubliclyAccessible": "false",
"StorageType": "gp2"
}
}
}
}
The field RepositoryName in AWS::ECR::Repository is actually not required and I would advise against specifying one. By letting CloudFormation dynamically assign a unique name to the repository you'll avoid collision.
If you later want to use the repository name, for example in a task definition, you can use the "Ref" function, like so: { "Ref": "Repository" }, to extract the unique name generated by CloudFormation.
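As a rough sketch (in YAML, with hypothetical logical names), the repository can be declared without a name, and the full image URI can be assembled from pseudo parameters wherever you need it:
Resources:
  Repository:
    Type: AWS::ECR::Repository   # no RepositoryName, so CloudFormation generates a unique one
    DeletionPolicy: Retain
Outputs:
  RepositoryUri:
    Description: Full URI of the generated repository
    Value: !Sub "${AWS::AccountId}.dkr.ecr.${AWS::Region}.amazonaws.com/${Repository}"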
As for the issue with the RDS instance, it comes down to the same problem of hardcoding resource names.
Using Retain will keep the resource alive, but it will no longer be managed by CloudFormation, which is a big problem.
Just make sure, when doing updates, never to modify a parameter that requires resource "replacement". The documentation always states what kind of update a parameter change will incur.
If you really need to change a parameter that requires a replacement, create a new resource with the adapted parameters, migrate whatever data you had in the database or ECR repository, then remove the old resource from the template. If you don't need to migrate anything, make sure you don't have hardcoded names and let CloudFormation perform the replacement.
I'm using ECS-CLI (0.4.5) to launch a CFN template, and now I'm trying to put an Aurora cluster into the CFN template and update the stack with a changeset through the CFN SDK.
I can't figure out why it's upset about my subnets. The subnets are created by the initial 'ecs-cli up' call. They are in the same vpc as the rest of the stack, they already exist before I try to deploy the changeset, and they are in different availability zones (us-west-2b and us-west-2c).
The only info CFN is giving me is that 'some input subnets are invalid'.
I can create a DBSubnetGroup through the management console with the exact same subnets with no problems.
Any ideas on what could be going wrong? Is this a bug in CloudFormation? Let me know if more information is needed to solve this... I'm honestly at such a loss
Here's what my initial template boils down to (It's built into ecs-cli):
"PubSubnetAz1": {
"Type": "AWS::EC2::Subnet",
"Properties": {
"VpcId": {
"Ref": "Vpc"
},
"CidrBlock": "10.0.0.0/24",
"AvailabilityZone": "us-west-2b"
}
},
"PubSubnetAz2": {
"Type": "AWS::EC2::Subnet",
"Properties": {
"VpcId": {
"Ref": "Vpc"
},
"CidrBlock": "10.0.1.0/24",
"AvailabilityZone": "us-west-2c"
}
},
"InternetGateway": {
"Type": "AWS::EC2::InternetGateway"
},
"AttachGateway": {
"Type": "AWS::EC2::VPCGatewayAttachment",
"Properties": {
"VpcId": {
"Ref": "Vpc"
},
"InternetGatewayId": {
"Ref": "InternetGateway"
}
}
},
"RouteViaIgw": {
"Type": "AWS::EC2::RouteTable",
"Properties": {
"VpcId": {
"Ref": "Vpc"
}
}
},
"PublicRouteViaIgw": {
"DependsOn": "AttachGateway",
"Type": "AWS::EC2::Route",
"Properties": {
"RouteTableId": {
"Ref": "RouteViaIgw"
},
"DestinationCidrBlock": "0.0.0.0/0",
"GatewayId": {
"Ref": "InternetGateway"
}
}
},
"PubSubnet1RouteTableAssociation": {
"Type": "AWS::EC2::SubnetRouteTableAssociation",
"Properties": {
"SubnetId": {
"Ref": "PubSubnetAz1"
},
"RouteTableId": {
"Ref": "RouteViaIgw"
}
}
},
"PubSubnet2RouteTableAssociation": {
"Type": "AWS::EC2::SubnetRouteTableAssociation",
"Properties": {
"SubnetId": {
"Ref": "PubSubnetAz2"
},
"RouteTableId": {
"Ref": "RouteViaIgw"
}
}
},
And then when I go to update it, I add this:
"DBSubnetGroup": {
"Type": "AWS::RDS::DBSubnetGroup",
"Properties": {
"DBSubnetGroupDescription": "Aurora Subnet Group using subnets from 2 AZs",
"SubnetIds": {
"Fn::Join": [
",", [{
"Ref": "pubSubnetAz1"
},
{
"Ref": "pubSubnetAz2"
}
]
]
}
}
}
The changeset should be simple enough...
"Changes": [
{
"Type": "Resource",
"ResourceChange": {
"Action": "Add",
"LogicalResourceId": "DBSubnetGroup",
"ResourceType": "AWS::RDS::DBSubnetGroup",
"Scope": [],
"Details": []
}
}
]
I'm using AWSTemplateFormatVersion 2010-09-09 and the JavaScript aws-sdk "^2.7.21"
The issue is that you're concatenating your subnet IDs into a string. Instead, you should pass them in an array. Try this:
"PrivateSubnetGroup": {
"Type": "AWS::RDS::DBSubnetGroup",
"Properties": {
"SubnetIds": [
{
"Ref": "PubSubnetAz1"
},
{
"Ref": "PubSubnetAz2"
}
],
"DBSubnetGroupDescription": "Aurora Subnet Group using subnets from 2 AZs"
}
}
Also, I would highly recommend trying to use YAML instead of JSON. CloudFormation now supports it natively, along with some shortcut functions that make using references easier, and I think in the long run you'll find it much easier to both read and write.
Here's an example of how you could write equivalent json in yaml:
PrivateSubnetGroup:
Type: AWS::RDS::DBSubnetGroup
Properties:
DBSubnetGroupDescription: Subnet group for Aurora Database
SubnetIds:
- !Ref PubSubnetAz1
- !Ref PubSubnetAz2
According to the AWS::RDS::DBSubnetGroup documentation, the SubnetIds property accepts a list of strings, not the single comma-delimited string that your Fn::Join produces. You should pass the subnets in a JSON array directly, without using Fn::Join:
"SubnetIds": [
{"Ref": "pubSubnetAz1"},
{"Ref": "pubSubnetAz2"}
]
Below is my AWS CloudFormation template for creating a VPC and subnets. The VPC is created successfully, but the subnets are not. I have tried giving my specific IP range, but it fails with the error 'The CIDR '172.31.48.0/20' is invalid'. How can I create the respective CidrBlock dynamically in the template using JSON?
"VPC1": {
"Type": "AWS::EC2::VPC",
"Properties": {
"CidrBlock": "10.10.0.0/16",
"InstanceTenancy": "default",
"EnableDnsSupport": "true",
"EnableDnsHostnames": "false",
"Tags": [
{
"Key": "Name",
"Value": "My Dashboard"
}
]
}
},
"subnet1": {
"Type": "AWS::EC2::Subnet",
"Properties": {
"CidrBlock": "172.31.48.0/20",
"AvailabilityZone": "us-east-2a",
"VpcId": {
"Ref": "VPC1"
}
}
},
"subnet2": {
"Type": "AWS::EC2::Subnet",
"Properties": {
"CidrBlock": "172.31.0.0/20",
"AvailabilityZone": "us-east-2b",
"VpcId": {
"Ref": "VPC1"
},
"Tags": [
{
"Key": "Name",
"Value": "MyDashboard"
}
]
}
},
"subnet3": {
"Type": "AWS::EC2::Subnet",
"Properties": {
"CidrBlock": "172.31.32.0/20",
"AvailabilityZone": "us-east-2a",
"VpcId": {
"Ref": "VPC1"
}
}
}
Subnets must be within the VPC's own network range. Since the VPC here is 10.10.0.0/16, using 10.10.1.0/24, 10.10.2.0/24 and 10.10.3.0/24 for the subnets worked; see the sketch below.
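For example, a subnet sketched like this (in YAML, reusing the VPC1 logical name from the template above) falls inside the 10.10.0.0/16 range and creates without the CIDR error:
subnet1:
  Type: AWS::EC2::Subnet
  Properties:
    VpcId: !Ref VPC1            # the VPC above, CidrBlock 10.10.0.0/16
    CidrBlock: 10.10.1.0/24     # a sub-range of the VPC CIDR
    AvailabilityZone: us-east-2a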
Got the answer here: https://forums.aws.amazon.com/thread.jspa?messageID=756147#756147