I'm using Terraform with AWS as a provider.
In one of my networks I accidentally configured wrong values, which led to a failure during resource creation.
The result was that some of the resources were up and running, but I would prefer that the whole process be executed as one transaction.
I'm familiar with the output Terraform gives in such cases:
Terraform does not automatically rollback in the face of errors.
Instead, your Terraform state file has been partially updated with any
resources that successfully completed. Please address the error above
and apply again to incrementally change your infrastructure.
My question is: is there a way to set up a rollback policy for cases where some resources were created and some failed?
Below is a simple example to reproduce the problem.
In the local variable 'az_list', just change the value from 'names' to 'zone_ids':
az_list = "${data.aws_availability_zones.available.zone_ids}"
A VPC will then be created with some default security groups and route tables, but without subnets.
resources.tf:
provider "aws" {
region = "${var.region}"
}
### Local data ###
data "aws_availability_zones" "available" {}
locals {
  # In order to reproduce an error: change 'names' to 'zone_ids'
  az_list = "${data.aws_availability_zones.available.names}"
}
### Vpc ###
resource "aws_vpc" "base_vpc" {
cidr_block = "${var.cidr}"
instance_tenancy = "default"
enable_dns_hostnames = "false"
enable_dns_support = "true"
}
### Subnets ###
resource "aws_subnet" "private" {
vpc_id = "${aws_vpc.base_vpc.id}"
cidr_block = "${cidrsubnet( var.cidr, 8, count.index + 1 + length(local.az_list) )}"
availability_zone = "${element(local.az_list, count.index)}"
count = 2
}
resource "aws_subnet" "public" {
vpc_id = "${aws_vpc.base_vpc.id}"
cidr_block = "${cidrsubnet(var.cidr, 8, count.index + 1)}"
availability_zone = "${element(local.az_list, count.index)}"
count = 2
map_public_ip_on_launch = true
}
variables.tf:
variable "region" {
description = "Name of region"
default = "ap-south-1"
}
variable "cidr" {
description = "The CIDR block for the VPC"
default = "10.0.0.0/16"
}
I am trying to work with availability zones in AWS using Terraform. I have tried the following:
data "aws_availability_zones" "available_zones" {
state = "available"
}
#Create subnets in the first two available availability zones
resource "aws_subnet" "primary" {
availability_zone = data.aws_availability_zones.available_zones.names[0]
vpc_id = aws_vpc.main_vpc.id
}
resource "aws_subnet" "secondary" {
availability_zone = data.aws_availability_zones.available_zones.names[1]
vpc_id = aws_vpc.main_vpc.id
}
However, when I do this and run my terraform plan command, I end up with the following error:
Error: Incorrect attribute value type
on autoscaling.tf line 20, in resource "aws_autoscaling_group" "auto_scaling_group":
20: availability_zones = data.aws_availability_zones.available_zones
|------
| data.aws_availability_zones.available_zones is object with 10 attributes
This is the example provided by the HashiCorp Registry docs. Any suggestions on how to rectify this issue? Thanks in advance for any assistance.
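The error indicates that the whole availability zones data source object is being passed where a list of zone names is expected. A hedged sketch of the likely fix in autoscaling.tf, showing only the relevant attribute (the rest of the resource stays as in the original file):
resource "aws_autoscaling_group" "auto_scaling_group" {
  # Use the 'names' attribute, which is a list of strings, rather than the whole data source object
  availability_zones = data.aws_availability_zones.available_zones.names
}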
I'm trying to get a DocumentDB cluster up and running, and have it run from within a private subnet I have created.
Running the config below without the depends_on, I get the following error message because the subnet hasn't been created yet:
Error: error creating DocDB cluster: DBSubnetGroupNotFoundFault: DB subnet group 'subnet-0b97a3f5bf6db758f' does not exist.
status code: 404, request id: 59b75d23-50a4-42f9-99a3-367af58e6e16
I added the depends_on to wait for the subnet to be created, but I am still running into an issue.
resource "aws_docdb_cluster" "docdb" {
  cluster_identifier      = "my-docdb-cluster"
  engine                  = "docdb"
  master_username         = "myusername"
  master_password         = "mypassword"
  backup_retention_period = 5
  preferred_backup_window = "07:00-09:00"
  skip_final_snapshot     = true
  apply_immediately       = true
  db_subnet_group_name    = aws_subnet.eu-west-3a-private
  depends_on              = [aws_subnet.eu-west-3a-private]
}
On running terraform apply I am getting an error on the config:
Error: error creating DocDB cluster: DBSubnetGroupNotFoundFault: DB subnet group 'subnet-0b97a3f5bf6db758f' does not exist.
status code: 404, request id: 8b992d86-eb7f-427e-8f69-d05cc13d5b2d
on main.tf line 230, in resource "aws_docdb_cluster" "docdb":
230: resource "aws_docdb_cluster" "docdb"
A DB subnet group is a logical resource in itself that tells AWS where it may schedule a database instance in a VPC. It does not refer to the subnets directly, which is what you're trying to do there.
To create a DB subnet group you should use the aws_db_subnet_group resource. You then refer to it by name directly when creating database instances or clusters.
A basic example would look like this:
resource "aws_vpc" "example" {
cidr_block = "10.0.0.0/16"
}
resource "aws_subnet" "eu-west-3a" {
vpc_id = aws_vpc.example.id
availability_zone = "a"
cidr_block = "10.0.1.0/24"
tags = {
AZ = "a"
}
}
resource "aws_subnet" "eu-west-3b" {
vpc_id = aws_vpc.example.id
availability_zone = "b"
cidr_block = "10.0.2.0/24"
tags = {
AZ = "b"
}
}
resource "aws_db_subnet_group" "example" {
name = "main"
subnet_ids = [
aws_subnet.eu-west-3a.id,
aws_subnet.eu-west-3b.id
]
tags = {
Name = "My DB subnet group"
}
}
resource "aws_db_instance" "example" {
allocated_storage = 20
storage_type = "gp2"
engine = "mysql"
engine_version = "5.7"
instance_class = "db.t2.micro"
name = "mydb"
username = "foo"
password = "foobarbaz"
parameter_group_name = "default.mysql5.7"
db_subnet_group_name = aws_db_subnet_group.example.name
}
The same thing applies to Elasticache subnet groups which use the aws_elasticache_subnet_group resource.
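For completeness, a hedged sketch of the Elasticache equivalent, reusing the subnets from the example above (the group name "main-cache" is just an illustration):
resource "aws_elasticache_subnet_group" "example" {
  name = "main-cache"
  subnet_ids = [
    aws_subnet.eu-west-3a.id,
    aws_subnet.eu-west-3b.id
  ]
}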
It's also worth noting that adding depends_on to a resource that already references the dependent resource via interpolation does nothing. The depends_on meta parameter is only for resources that don't expose a parameter that would provide this dependency information directly.
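To illustrate, in the database instance above the attribute reference alone already orders the resources, so an explicit depends_on would be redundant (shown here only for contrast):
resource "aws_db_instance" "example" {
  # ...same arguments as in the example above...
  db_subnet_group_name = aws_db_subnet_group.example.name

  # Redundant: the reference above already creates this dependency
  depends_on = [aws_db_subnet_group.example]
}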
It seems the value in the parameter is wrong. A db_subnet_group_name created somewhere else gives an id/arn as output, so you need to use the id value, although the depends_on clause looks okay.
db_subnet_group_name = aws_db_subnet_group.eu-west-3a-private.id
That would be correct. You can also try using the arn in place of the id.
Thanks,
Ashish
I have a private subnet with a default route targeting a NAT gateway. Both were created by Terraform.
Now I have other code that launches an EC2 instance to use as a NAT in my VPC (as the managed NAT gateway became very expensive). I'm trying to change the default route in my route table to this new EC2 instance and am getting the error below:
Error: Error applying plan:
1 error occurred:
* module.ec2-nat.aws_route.defaultroute_to_ec2-nat: 1 error occurred:
* aws_route.defaultroute_to_ec2-nat: Error creating route: RouteAlreadyExists: The route identified by 0.0.0.0/0 already exists.
status code: 400, request id: 408deb59-d223-4c9f-9a28-209e2e0478e9
I know this route already exists, but how can I change this existing route to a new target, in this case my new EC2 network interface?
Thanks for your help.
Following is the code I'm using:
#####################
# FIRST TERRAFORM
# create the internet gateway
resource "aws_internet_gateway" "this" {
count = "${var.create_vpc && length(var.public_subnets) > 0 ? 1 : 0}"
vpc_id = "${aws_vpc.this.id}"
tags = "${merge(map("Name", format("%s", var.name)), var.igw_tags, var.tags)}"
}
# Add default route (0.0.0.0/0) to internet gateway
resource "aws_route" "public_internet_gateway" {
count = "${var.create_vpc && length(var.public_subnets) > 0 ? 1 : 0}"
route_table_id = "${aws_route_table.public.id}"
destination_cidr_block = "0.0.0.0/0"
gateway_id = "${aws_internet_gateway.this.id}"
timeouts {
create = "5m"
}
}
#####################
# SECOND TERRAFORM
# Spin EC2 to run as NAT
resource "aws_instance" "ec2-nat" {
count = "${var.instance_qtd}"
ami = "${data.aws_ami.nat.id}"
availability_zone = "${var.region}a"
instance_type = "${var.instance_type}"
key_name = "${var.aws_key_name}"
vpc_security_group_ids = ["${var.sg_ec2}","${var.sg_ops}"]
subnet_id = "${var.public_subnet_id}"
iam_instance_profile = "${var.iam_instance_profile}"
associate_public_ip_address = true
source_dest_check = false
tags = {
Name = "ec2-nat-${var.brand}-${var.role}-${count.index}"
Brand = "${var.brand}"
Role = "${var.role}"
Type = "ec2-nat"
}
}
# Add default route (0.0.0.0/0) to aws_instance.ec2-nat
variable "default_route" {
default = "0.0.0.0/0"
}
resource "aws_route" "defaultroute_to_ec2-nat" {
route_table_id = "${var.private_route_id}"
destination_cidr_block = "${var.default_route}"
instance_id = "${element(aws_instance.ec2-nat.*.id, 0)}"
}
I am new to Terraform development and am trying to create a simple loop over a variable that can be used later, something like below.
This worked perfectly for me and creates two subnets as expected.
variable "availability_zones" {
description = "Available Availability Zones"
type = "list"
default = [ "us-east-1a", "us-east-1b" ]
}
variable "public_subnet_cidr" {
description = "CIDR for Public Subnets"
type = "list"
default = [ "10.240.32.0/26", "10.240.32.64/26" ]
# Define Public Subnet
resource "aws_subnet" "public-subnets" {
count = 2
vpc_id = "${aws_vpc.default.id}"
cidr_block = "${element(var.public_subnet_cidr, count.index)}"
availability_zone = "${element(var.availability_zones, count.index)}"
tags {
Name = "${element(var.availability_zones, count.index)}_${element(var.public_subnet_cidr, count.index)}"
}
}
But while trying to associate those subnets with the default route table, I am not able to figure out how to fetch the individual subnet IDs from the subnets created earlier, and ended up with the code below. Is there a way to fetch the subnet id of each individual subnet?
# Assign Default Public Route Table to Public Subnet
resource "aws_route_table_association" "default_public_route" {
subnet_id = "${aws_subnet.public-subnets.id}" <<-- This is the line I am trying to figure out
route_table_id = "${aws_route_table.default_public_route_table.id}"
}
Thanks in advance.
Sam
You're close on how to use it. Here's a walkthrough that can help you.
resource "aws_route_table_association" "default_public_route" {
count = 2
subnet_id = "${element(aws_subnet.public-subnets.*.id, count.index)}"
route_table_id = "${aws_route_table.default_public_route_table.id}"
}
In my application I am using an AWS Auto Scaling group created with Terraform. I launch an Auto Scaling group giving it a number of instances in a region. But since only 20 instances are allowed per region, I want to launch an Auto Scaling group that creates instances across multiple regions so that I can launch more. I had this configuration:
# ---------------------------------------------------------------------------------------------------------------------
# THESE TEMPLATES REQUIRE TERRAFORM VERSION 0.8 AND ABOVE
# ---------------------------------------------------------------------------------------------------------------------
terraform {
  required_version = ">= 0.9.3"
}
provider "aws" {
access_key = "${var.aws_access_key}"
secret_key = "${var.aws_secret_key}"
region = "us-east-1"
}
provider "aws" {
alias = "us-west-1"
region = "us-west-1"
}
provider "aws" {
alias = "us-west-2"
region = "us-west-2"
}
provider "aws" {
alias = "eu-west-1"
region = "eu-west-1"
}
provider "aws" {
alias = "eu-central-1"
region = "eu-central-1"
}
provider "aws" {
alias = "ap-southeast-1"
region = "ap-southeast-1"
}
provider "aws" {
alias = "ap-southeast-2"
region = "ap-southeast-2"
}
provider "aws" {
alias = "ap-northeast-1"
region = "ap-northeast-1"
}
provider "aws" {
alias = "sa-east-1"
region = "sa-east-1"
}
resource "aws_launch_configuration" "launch_configuration" {
name_prefix = "${var.asg_name}-"
image_id = "${var.ami_id}"
instance_type = "${var.instance_type}"
associate_public_ip_address = true
key_name = "${var.key_name}"
security_groups = ["${var.security_group_id}"]
user_data = "${data.template_file.user_data_client.rendered}"
lifecycle {
create_before_destroy = true
}
}
# ---------------------------------------------------------------------------------------------------------------------
# CREATE AN AUTO SCALING GROUP (ASG)
# ---------------------------------------------------------------------------------------------------------------------
resource "aws_autoscaling_group" "autoscaling_group" {
name = "${var.asg_name}"
max_size = "${var.max_size}"
min_size = "${var.min_size}"
desired_capacity = "${var.desired_capacity}"
launch_configuration = "${aws_launch_configuration.launch_configuration.name}"
vpc_zone_identifier = ["${data.aws_subnet_ids.default.ids}"]
lifecycle {
create_before_destroy = true
}
tag {
key = "Environment"
value = "production"
propagate_at_launch = true
}
tag {
key = "Name"
value = "clj-${var.job_id}-instance"
propagate_at_launch = true
}
}
# ---------------------------------------------------------------------------------------------------------------------
# THE USER DATA SCRIPT THAT WILL RUN ON EACH CLIENT NODE WHEN IT'S BOOTING
# ---------------------------------------------------------------------------------------------------------------------
data "template_file" "user_data_client" {
template = "${file("./user-data-client.sh")}"
vars {
company_location_job_id = "${var.job_id}"
docker_login_username = "${var.docker_login_username}"
docker_login_password = "${var.docker_login_password}"
}
}
# ---------------------------------------------------------------------------------------------------------------------
# DEPLOY THE CLUSTER IN THE DEFAULT VPC AND SUBNETS
# Using the default VPC and subnets makes this example easy to run and test, but it means Instances are
# accessible from the public Internet. In a production deployment, we strongly recommend deploying into a custom VPC
# and private subnets.
# ---------------------------------------------------------------------------------------------------------------------
data "aws_subnet_ids" "default" {
vpc_id = "${var.vpc_id}"
}
But this configuration does not work: it only launches instances in a single region and throws an error once they reach 20.
How can we create instances across multiple regions in an autoscaling group?
You correctly instantiate multiple aliased providers, but are not using any of them.
If you really need to create resources in different regions from one configuration, you must pass the alias of the provider to the resource:
resource "aws_autoscaling_group" "autoscaling_group_eu-central-1" {
provider = "aws.eu-central-1"
}
And repeat this block as many times as needed (or, better, extract it into a module and pass the provider to the module).
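A rough sketch of the module approach, assuming Terraform 0.11-style syntax and a hypothetical ./asg module that wraps the launch configuration and Auto Scaling group (the module path and the asg_name argument passed here are illustrative):
module "asg_eu_central_1" {
  source = "./asg"

  # Map the module's default aws provider to the aliased regional provider
  providers = {
    "aws" = "aws.eu-central-1"
  }

  asg_name = "${var.asg_name}-eu-central-1"
}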
But, as mentioned in a comment, if all you want to achieve is to have more than 20 instances, you can increase your limit by opening a ticket with AWS support.