I am trying to make use of availability zones in AWS using Terraform. I have tried the following:
data "aws_availability_zones" "available_zones" {
state = "available"
}
#Create subnets in the first two available availability zones
resource "aws_subnet" "primary" {
availability_zone = data.aws_availability_zones.available_zones.names[0]
vpc_id = aws_vpc.main_vpc.id
}
resource "aws_subnet" "secondary" {
availability_zone = data.aws_availability_zones.available_zones.names[1]
vpc_id = aws_vpc.main_vpc.id
}
However, when I do this and run my terraform plan command, I end up with the following error:
Error: Incorrect attribute value type
on autoscaling.tf line 20, in resource "aws_autoscaling_group" "auto_scaling_group":
20: availability_zones = data.aws_availability_zones.available_zones
|------
| data.aws_availability_zones.available_zones is object with 10 attributes
This follows the example provided in the HashiCorp Registry docs. Any suggestions on how to rectify this issue? Thanks in advance for any assistance.
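In case it helps anyone landing here, the error message itself hints at the fix: availability_zones expects a list of strings, but the configuration passes the whole data source object. Referencing the data source's names attribute should satisfy the type check. A minimal sketch, where the sizing values and the launch template reference are placeholders, not part of the original question:

resource "aws_autoscaling_group" "auto_scaling_group" {
  # Pass the list of AZ names, not the data source object itself
  availability_zones = data.aws_availability_zones.available_zones.names

  min_size = 1 # placeholder sizing
  max_size = 2

  launch_template {
    id = aws_launch_template.example.id # hypothetical launch template
  }
}

If only the first two zones are wanted, to match the subnets above, slice(data.aws_availability_zones.available_zones.names, 0, 2) narrows the list.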
I need help with the following error in Terraform. When I ran terraform apply, everything seemed to have worked when I checked the AWS console, but then I got the following error at the end:
Error: error reading Main Route Table Association (subnet-09b6d028942d15d8e): empty result
│
│ with aws_main_route_table_association.a,
│ on main.tf line 55, in resource "aws_main_route_table_association" "a":
│ 55: resource "aws_main_route_table_association" "a"{
Below is the code for the route table portion:
# 3. Create custom route table
resource "aws_route_table" "prod-route-table" {
  vpc_id = aws_vpc.prod-vpc.id

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.gw.id
  }

  route {
    ipv6_cidr_block = "::/0"
    gateway_id      = aws_internet_gateway.gw.id
  }

  tags = {
    Name = "Prod"
  }
}
This is the subnet-to-route-table association:
# 5. Associate subnet with route table
resource "aws_main_route_table_association" "a" {
  vpc_id         = aws_subnet.subnet-1.id
  route_table_id = aws_route_table.prod-route-table.id
}
This is the subnet portion
# 4. Create a subnet
resource "aws_subnet" "subnet-1" {
  vpc_id            = aws_vpc.prod-vpc.id
  cidr_block        = "10.0.1.0/24"
  availability_zone = "us-east-1a"

  tags = {
    Name = "Prod-subnet"
  }
}
Your help will be kindly appreciated. What am I doing wrong?
Thank you.
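One thing stands out in the code shown, so a hedged suggestion: aws_main_route_table_association expects a vpc_id (it marks a route table as the main one for an entire VPC), but the snippet passes a subnet ID to it. Since the comment says the goal is to associate a subnet with the route table, the matching resource is aws_route_table_association. A minimal sketch, assuming the resources above:

# 5. Associate subnet with route table
resource "aws_route_table_association" "a" {
  subnet_id      = aws_subnet.subnet-1.id
  route_table_id = aws_route_table.prod-route-table.id
}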
I started a new chapter in my life, and this world of IaC (Infrastructure as Code) is really amazing...
I watched a free course on YouTube about how to start working with Terraform in AWS, but something along the way is not working properly, although my code seems the same as in the videos.
Here is the code and the result.
I'll be grateful for your assistance in understanding what is wrong.
Terraform details:
Terraform v0.14.10
provider registry.terraform.io/hashicorp/aws v3.36.0
The code:
3. Create Custom Route Table
resource "aws_route_table" "prod-route-table" {
vpc_id = aws_vpc.prod-vpc.id
route {
cidr_block = "0.0.0.0/0"
gateway_id = aws_internet_gateway.gw.id
}
route {
ipv6_cidr_block = "::/0"
egress_only_gateway_id = aws_internet_gateway.gw.id
}
tags = {
Name = "example"
}
}
4. Create a Subnet
resource "aws_subnet" "subnet_1" {
vpc_id = aws_vpc.prod-vpc.id
cidr_block = "10.0.1.0/24"
availability_zone = "us-east-1e"
tags = {
"name" = "Prod-subnet"
}
}
5. Associate Subnet with Route Table
resource "aws_route_table_association" "a" {
subnet_id = aws_subnet.subnet_1.id
route_table_id = aws_route_table.prod-route-table.id
}
The error:
Error: error creating route: InvalidEgressOnlyInternetGatewayId.Malformed: Invalid id: "igw-07f6dac9f8bd89fd5" (expecting "eigw-...")
status code: 400, request id: 7f7e2445-f537-4113-a52e-ac6b32dee888
on main.tf line 26, in resource "aws_route_table" "prod-route-table":
26: resource "aws_route_table" "prod-route-table" {
I added only the part of the code that the error is pointing me to.
You don't show the code for how you create the aws_internet_gateway.gw resource, but the issue is that this resource is a normal internet gateway, while you are passing its ID to the egress_only_gateway_id field, which expects the ID of an egress-only internet gateway (see https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/egress_only_internet_gateway).
The solution is either to change the aws_internet_gateway resource to an aws_egress_only_internet_gateway resource, or to change the route attribute to gateway_id, which expects a normal internet gateway ID rather than an egress-only one.
If you are just starting out with this stuff, I would avoid egress-only internet gateways for now.
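Concretely, the simpler of the two fixes keeps the existing internet gateway and routes IPv6 through it. A sketch of the corrected route block from the question:

route {
  ipv6_cidr_block = "::/0"
  gateway_id      = aws_internet_gateway.gw.id # gateway_id, not egress_only_gateway_id
}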
I currently have 2 workspaces within Terraform, one for Prod and one for Dev.
In prod, my Terraform code creates a Route53 entry, adds a cert validation record as a CNAME to the Route53 hosted zone, and then attaches the cert to my load balancer.
resource "aws_acm_certificate" "default" {
domain_name = "www.test.uk"
validation_method = "DNS"
}
resource "aws_route53_record" "validation" {
name = aws_acm_certificate.default.domain_validation_options[0].resource_record_name
type = aws_acm_certificate.default.domain_validation_options[0].resource_record_type
zone_id = "Z0725470IF9R8J77LPTU"
records = [
aws_acm_certificate.default.domain_validation_options[0].resource_record_value]
ttl = "60"
}
resource "aws_acm_certificate_validation" "default" {
certificate_arn = aws_acm_certificate.default.arn
validation_record_fqdns = [
aws_route53_record.validation.fqdn,
]
}
When I switch my workspace to dev and run terraform apply, it tries to create this Route53 entry again and errors. Is there a way to tell Terraform to ignore this?
I tried adding a count of 0, but it gave me this error:
Error: Missing resource instance key

  on alb.tf line 12, in resource "aws_route53_record" "validation":
  12: type = aws_acm_certificate.default.domain_validation_options[0].resource_record_type

Because aws_acm_certificate.default has "count" set, its attributes
must be accessed on specific instances.

For example, to correlate with indices of a referring resource, use:
  aws_acm_certificate.default[count.index]

Error: Missing resource instance key

  on alb.tf line 15, in resource "aws_route53_record" "validation":
  15: aws_acm_certificate.default.domain_validation_options[0].resource_record_value]

Because aws_acm_certificate.default has "count" set, its attributes
must be accessed on specific instances.

For example, to correlate with indices of a referring resource, use:
  aws_acm_certificate.default[count.index]
The best solution I've come up with is to comment out the Route53 stuff when I run terraform apply in the staging workspace, this obviously isn't an ideal solution.
This is untested, but I think you can use a conditional (based on your workspace name) with count to create (or not create) the resources:
locals {
  create_me = terraform.workspace == "dev" ? 0 : 1
}

resource "aws_acm_certificate" "default" {
  count             = local.create_me
  domain_name       = "www.test.uk"
  validation_method = "DNS"
}

resource "aws_route53_record" "validation" {
  count = local.create_me

  # Because the certificate has count set, it must be indexed as well
  name    = aws_acm_certificate.default[count.index].domain_validation_options[0].resource_record_name
  type    = aws_acm_certificate.default[count.index].domain_validation_options[0].resource_record_type
  zone_id = "Z0725470IF9R8J77LPTU"
  records = [aws_acm_certificate.default[count.index].domain_validation_options[0].resource_record_value]
  ttl     = "60"
}

resource "aws_acm_certificate_validation" "default" {
  count                   = local.create_me
  certificate_arn         = aws_acm_certificate.default[count.index].arn
  validation_record_fqdns = [aws_route53_record.validation[count.index].fqdn]
}
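As a sanity check, assuming the workspace really is named "dev": after terraform workspace select dev, a terraform plan should no longer include the certificate, validation record, or certificate validation resources (their count evaluates to 0), while the prod workspace continues to create them.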
I'm trying to get a DocumentDB cluster up and running within a private subnet I have created.
Running the config below without the depends_on, I get the following error message because the subnet hasn't been created:
Error: error creating DocDB cluster: DBSubnetGroupNotFoundFault: DB subnet group 'subnet-0b97a3f5bf6db758f' does not exist.
status code: 404, request id: 59b75d23-50a4-42f9-99a3-367af58e6e16
I added the depends_on setup to wait for the subnet to be created, but I am still running into an issue:
resource "aws_docdb_cluster" "docdb" {
  cluster_identifier      = "my-docdb-cluster"
  engine                  = "docdb"
  master_username         = "myusername"
  master_password         = "mypassword"
  backup_retention_period = 5
  preferred_backup_window = "07:00-09:00"
  skip_final_snapshot     = true
  apply_immediately       = true
  db_subnet_group_name    = aws_subnet.eu-west-3a-private
  depends_on              = [aws_subnet.eu-west-3a-private]
}
On running terraform apply, I am getting an error on the config:
Error: error creating DocDB cluster: DBSubnetGroupNotFoundFault: DB subnet group 'subnet-0b97a3f5bf6db758f' does not exist.
status code: 404, request id: 8b992d86-eb7f-427e-8f69-d05cc13d5b2d
on main.tf line 230, in resource "aws_docdb_cluster" "docdb":
230: resource "aws_docdb_cluster" "docdb"
A DB subnet group is a logical resource in itself that tells AWS where it may place a database instance within a VPC; it is not a direct reference to the subnets, which is what you're attempting there.
To create a DB subnet group you should use the aws_db_subnet_group resource. You then refer to it by name when creating database instances or clusters.
A basic example would look like this:
resource "aws_vpc" "example" {
cidr_block = "10.0.0.0/16"
}
resource "aws_subnet" "eu-west-3a" {
vpc_id = aws_vpc.example.id
availability_zone = "a"
cidr_block = "10.0.1.0/24"
tags = {
AZ = "a"
}
}
resource "aws_subnet" "eu-west-3b" {
vpc_id = aws_vpc.example.id
availability_zone = "b"
cidr_block = "10.0.2.0/24"
tags = {
AZ = "b"
}
}
resource "aws_db_subnet_group" "example" {
name = "main"
subnet_ids = [
aws_subnet.eu-west-3a.id,
aws_subnet.eu-west-3b.id
]
tags = {
Name = "My DB subnet group"
}
}
resource "aws_db_instance" "example" {
allocated_storage = 20
storage_type = "gp2"
engine = "mysql"
engine_version = "5.7"
instance_class = "db.t2.micro"
name = "mydb"
username = "foo"
password = "foobarbaz"
parameter_group_name = "default.mysql5.7"
db_subnet_group_name = aws_db_subnet_group.example.name
}
The same thing applies to ElastiCache subnet groups, which use the aws_elasticache_subnet_group resource.
It's also worth noting that adding depends_on to a resource that already references the dependent resource via interpolation does nothing: the depends_on meta-argument is for resources that don't expose a parameter that would provide this dependency information directly.
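For the DocDB cluster in the question specifically, the same pattern applies through the dedicated aws_docdb_subnet_group resource. A minimal sketch, assuming the two subnets from the example above (the group name "docdb-main" and the credentials are placeholders):

resource "aws_docdb_subnet_group" "example" {
  name       = "docdb-main"
  subnet_ids = [aws_subnet.eu-west-3a.id, aws_subnet.eu-west-3b.id]
}

resource "aws_docdb_cluster" "docdb" {
  cluster_identifier   = "my-docdb-cluster"
  engine               = "docdb"
  master_username      = "myusername"
  master_password      = "mypassword"
  skip_final_snapshot  = true
  db_subnet_group_name = aws_docdb_subnet_group.example.name
}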
It seems the value of the parameter is wrong: db_subnet_group_name expects the name of a DB subnet group created elsewhere, and the aws_db_subnet_group resource's id attribute resolves to exactly that name, so you need to reference its id (the name attribute works just as well). The depends_on clause itself looks fine.
db_subnet_group_name = aws_db_subnet_group.eu-west-3a-private.id
Thanks,
Ashish
I'm using Terraform with AWS as a provider.
In one of my networks I accidentally configured wrong values, which led to a failure in resource creation.
So the situation was that some of the resources were up and running, but I would have preferred that the whole process be executed as one transaction.
I'm familiar with the output Terraform gives in such cases:
Terraform does not automatically rollback in the face of errors.
Instead, your Terraform state file has been partially updated with any
resources that successfully completed. Please address the error above
and apply again to incrementally change your infrastructure.
My question is: is there still a way to set up a rollback policy for cases where some resources were created and some failed?
Below is a simple example to reproduce the problem.
In the local value 'az_list', just change the value from 'names' to 'zone_ids':
az_list = "${data.aws_availability_zones.available.zone_ids}"
and a VPC will be created with some default security groups and route tables, but without subnets.
resources.tf:
provider "aws" {
region = "${var.region}"
}
### Local data ###
data "aws_availability_zones" "available" {}
locals {
#In order to reproduce an error: Change 'names' to 'zone_ids'
az_list = "${data.aws_availability_zones.available.names}"
}
### Vpc ###
resource "aws_vpc" "base_vpc" {
cidr_block = "${var.cidr}"
instance_tenancy = "default"
enable_dns_hostnames = "false"
enable_dns_support = "true"
}
### Subnets ###
resource "aws_subnet" "private" {
vpc_id = "${aws_vpc.base_vpc.id}"
cidr_block = "${cidrsubnet( var.cidr, 8, count.index + 1 + length(local.az_list) )}"
availability_zone = "${element(local.az_list, count.index)}"
count = 2
}
resource "aws_subnet" "public" {
vpc_id = "${aws_vpc.base_vpc.id}"
cidr_block = "${cidrsubnet(var.cidr, 8, count.index + 1)}"
availability_zone = "${element(local.az_list, count.index)}"
count = 2
map_public_ip_on_launch = true
}
variables.tf:
variable "region" {
description = "Name of region"
default = "ap-south-1"
}
variable "cidr" {
description = "The CIDR block for the VPC"
default = "10.0.0.0/16"
}
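For what it's worth, Terraform itself has no rollback policy: an apply is not transactional, and the expected workflow is exactly what the quoted message says, i.e. fix the error and apply again, or tear the partial deployment down with terraform destroy. If something closer to all-or-nothing behavior is needed, a rough approximation, assuming the whole stack lives in its own state, is a wrapper along these lines:

terraform apply -auto-approve || terraform destroy -auto-approve

The caveat is that this destroys everything in the state on any failure, so it is really only suitable for ephemeral or test environments.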