Acquiring subnet availability zone ids in bulk, in a module - amazon-web-services

The module I'm working on represents one app which is deployed to a VPC. The VPC is declared elsewhere.
The relevant data path includes these resources:
variable "vpc_id" { }
data "aws_subnets" "private" {
filter {
name = "vpc-id"
values = [data.aws_vpc.vpc.id]
}
filter {
name = "tag:Visibility"
values = ["private"]
}
}
data "aws_subnet" "private" {
for_each = toset(data.aws_subnets.private.ids)
vpc_id = data.aws_vpc.vpc.id
id = each.value
}
resource "aws_rds_cluster" "database" {
availability_zones = data.aws_subnet.private.*.availability_zones
}
That feels like the correct syntax, though it is a verbose chain of data retrieval. However, when I run terraform plan on it, I get:
│ Error: Unsupported attribute
│
│ on ../../../../../appmodule/rds_postgres.tf line 23, in resource "aws_rds_cluster" "webapp":
│ 23: availability_zones = data.aws_subnet.private.*.availability_zone_id
│
│ This object does not have an attribute named "availability_zone_id".
I'm using aws-provider 4.18.0 and Terraform v1.1.2. The documentation for the subnet data source shows that availability_zone_id is one of the attributes it exports.
Am I missing something obvious here?

As mentioned in the comments, you can get the list of AZs by using the values built-in function [1]. This is necessary because the data source you are relying on to provide the AZs returns a map (key/value format) due to the for_each meta-argument:
data "aws_subnet" "private" {
for_each = toset(data.aws_subnets.private.ids)
vpc_id = data.aws_vpc.vpc.id
id = each.value
}
The change you need to make is:
resource "aws_rds_cluster" "database" {
availability_zones = values(data.aws_subnet.private)[*].availability_zone
}
A test with an output and a default VPC shows the following result:
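For reference, a minimal sketch of the output such a test might use (the output name subnet_azs matches the result below; the data source names are the ones from the question):
output "subnet_azs" {
  # List of AZ names, one per private subnet
  value = values(data.aws_subnet.private)[*].availability_zone
}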
+ subnet_azs = [
    + "us-east-1b",
    + "us-east-1c",
    + "us-east-1d",
    + "us-east-1a",
    + "us-east-1f",
    + "us-east-1e",
  ]
As you can see, it is already a list, so you can use it as is.
Note that the documentation explains why you should use the availability_zone attribute rather than availability_zone_id:
availability_zone_id - (Optional) ID of the Availability Zone for the subnet. This argument is not supported in all regions or partitions. If necessary, use availability_zone instead
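If you genuinely need the zone IDs (for example "use1-az1") rather than the zone names, and your region supports the attribute, the same values() pattern should work with availability_zone_id; note, however, that aws_rds_cluster's availability_zones argument expects names. A sketch:
output "subnet_az_ids" {
  # Zone IDs (e.g. use1-az1) rather than zone names
  value = values(data.aws_subnet.private)[*].availability_zone_id
}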
[1] https://www.terraform.io/language/functions/values

Related

The "count" object can only be used in "module", "resource", and "data" blocks, and only when the "count" argument is set

I'm trying to deploy a subnet to each of 3 availability zones in AWS. My public subnet resource block has a count of 3 to deploy 3 subnets, one to each AZ:
resource "aws_subnet" "public_subnet" {
count = length(var.azs)
vpc_id = aws_vpc.vpc.id
cidr_block = var.public_cidrs[count.index]
availability_zone = var.azs[count.index]
map_public_ip_on_launch = true
tags = {
Name = "${var.name}-public-subnet"
}
}
That worked fine. Now I'm trying to deploy a NAT gateway to each subnet, and that's where I'm having issues. Here's my NAT gateway resource block:
resource "aws_nat_gateway" "nat_gateway" {
allocation_id = aws_eip.nat_eip.id
subnet_id = aws_subnet.public_subnet[count.index].id
tags = {
Name = "${var.name}-NAT-gateway"
}
It's giving me this error
│ Error: Reference to "count" in non-counted context
│
│ on main.tf line 48, in resource "aws_nat_gateway" "nat_gateway":
│ 48: subnet_id = aws_subnet.public_subnet[count.index].id
│
│ The "count" object can only be used in "module", "resource", and "data" blocks, and only when the "count"
│ argument is set.
I know this error is occurring because I don't have a count argument in my NAT gateway resource block, but in Terraform's docs, count isn't listed as an argument for NAT gateways. So how exactly do I accomplish what I'm trying to do? I want 3 NAT gateways, one in each subnet, and I can't figure out how to achieve that.
You can create a NAT gateway for each subnet as follows:
resource "aws_nat_gateway" "nat_gateway" {
count = length(aws_subnet.public_subnet)
allocation_id = aws_eip.nat_eip.id
subnet_id = aws_subnet.public_subnet[count.index].id
tags = {
Name = "${var.name}-NAT-gateway"
}
You will have a problem with the EIP, though, as you can't reuse the same EIP for three different NAT gateways (one way around that is sketched below), but that is really a topic for a new question.
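A minimal, untested sketch of that workaround, assuming one EIP per subnet is acceptable (the vpc argument is AWS provider v4 syntax; newer provider versions use domain = "vpc" instead):
resource "aws_eip" "nat_eip" {
  count = length(aws_subnet.public_subnet)
  vpc   = true # provider v4 syntax; on provider v5+ use: domain = "vpc"
}

resource "aws_nat_gateway" "nat_gateway" {
  count         = length(aws_subnet.public_subnet)
  allocation_id = aws_eip.nat_eip[count.index].id # one EIP per gateway
  subnet_id     = aws_subnet.public_subnet[count.index].id
  tags = {
    Name = "${var.name}-NAT-gateway-${count.index}"
  }
}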

Using Terraform to create an AWS EC2 bastion

I am trying to spin up an AWS bastion host on AWS EC2, using the Terraform module provided by Guimove. I am getting stuck on the bastion_host_key_pair field: I need to provide a key pair that can be used to launch the EC2 launch template, but the bucket (aws_s3_bucket.bucket) that needs to contain the public key of that key pair gets created by the module, so the key isn't there when it tries to launch the instance, and it fails. It feels like a chicken-and-egg scenario, so I am obviously doing something wrong. What am I doing wrong?
Error:
╷
│ Error: Error creating Auto Scaling Group: AccessDenied: You are not authorized to use launch template: lt-004b0af2895c684b3
│ status code: 403, request id: c6096e0d-dc83-4384-a036-f35b8ca292f8
│
│ with module.bastion.aws_autoscaling_group.bastion_auto_scaling_group,
│ on .terraform\modules\bastion\main.tf line 300, in resource "aws_autoscaling_group" "bastion_auto_scaling_group":
│ 300: resource "aws_autoscaling_group" "bastion_auto_scaling_group" {
│
╵
Terraform:
resource "tls_private_key" "bastion_host" {
algorithm = "RSA"
rsa_bits = 4096
}
resource "aws_key_pair" "bastion_host" {
key_name = "bastion_user"
public_key = tls_private_key.bastion_host.public_key_openssh
}
resource "aws_s3_bucket_object" "bucket_public_key" {
bucket = aws_s3_bucket.bucket.id
key = "public-keys/${aws_key_pair.bastion_host.key_name}.pub"
content = aws_key_pair.bastion_host.public_key
kms_key_id = aws_kms_key.key.arn
}
module "bastion" {
source = "Guimove/bastion/aws"
bucket_name = "${var.identifier}-ssh-bastion-bucket-${var.env}"
region = var.aws_region
vpc_id = var.vpc_id
is_lb_private = "false"
bastion_host_key_pair = aws_key_pair.bastion_host.key_name
create_dns_record = "false"
elb_subnets = var.public_subnet_ids
auto_scaling_group_subnets = var.public_subnet_ids
instance_type = "t2.micro"
tags = {
Name = "SSH Bastion Host - ${var.identifier}-${var.env}",
}
}
I had the same issue. The fix was to go into the AWS Marketplace, accept the EULA, and subscribe to the AMI I was trying to use.

Tell Terraform to ignore Route53 resource in different workspace

I currently have 2 workspaces within Terraform, one for Prod and one for Dev.
In Prod, my Terraform code creates a Route53 entry, adds a cert as a CNAME to the Route53 hosted zone, and then attaches the cert to my load balancer.
resource "aws_acm_certificate" "default" {
domain_name = "www.test.uk"
validation_method = "DNS"
}
resource "aws_route53_record" "validation" {
name = aws_acm_certificate.default.domain_validation_options[0].resource_record_name
type = aws_acm_certificate.default.domain_validation_options[0].resource_record_type
zone_id = "Z0725470IF9R8J77LPTU"
records = [
aws_acm_certificate.default.domain_validation_options[0].resource_record_value]
ttl = "60"
}
resource "aws_acm_certificate_validation" "default" {
certificate_arn = aws_acm_certificate.default.arn
validation_record_fqdns = [
aws_route53_record.validation.fqdn,
]
}
When I switch my workspace to dev and run terraform apply, it tries to create this Route53 entry again and errors. Is there a way to tell Terraform to ignore this?
I tried adding a count of 0, but it gave me this error:
Error: Missing resource instance key

  on alb.tf line 12, in resource "aws_route53_record" "validation":
  12:   type = aws_acm_certificate.default.domain_validation_options[0].resource_record_type

Because aws_acm_certificate.default has "count" set, its attributes
must be accessed on specific instances.

For example, to correlate with indices of a referring resource, use:
  aws_acm_certificate.default[count.index]

Error: Missing resource instance key

  on alb.tf line 15, in resource "aws_route53_record" "validation":
  15:     aws_acm_certificate.default.domain_validation_options[0].resource_record_value]

Because aws_acm_certificate.default has "count" set, its attributes
must be accessed on specific instances.

For example, to correlate with indices of a referring resource, use:
  aws_acm_certificate.default[count.index]
The best solution I've come up with is to comment out the Route53 stuff when I run terraform apply in the staging workspace, but this obviously isn't an ideal solution.
The below is untested, but I think you can use a conditional (based on your workspace name) with count to create (or not create) the resources:
locals {
  create_me = terraform.workspace == "dev" ? 0 : 1
}

resource "aws_acm_certificate" "default" {
  count             = local.create_me
  domain_name       = "www.test.uk"
  validation_method = "DNS"
}

resource "aws_route53_record" "validation" {
  count   = local.create_me
  name    = aws_acm_certificate.default[count.index].domain_validation_options[0].resource_record_name
  type    = aws_acm_certificate.default[count.index].domain_validation_options[0].resource_record_type
  zone_id = "Z0725470IF9R8J77LPTU"
  records = [
    aws_acm_certificate.default[count.index].domain_validation_options[0].resource_record_value,
  ]
  ttl = "60"
}

resource "aws_acm_certificate_validation" "default" {
  count           = local.create_me
  certificate_arn = aws_acm_certificate.default[count.index].arn
  validation_record_fqdns = [
    aws_route53_record.validation[count.index].fqdn,
  ]
}
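One follow-up point, not from the original answer: anything else that references the certificate (the question mentions attaching it to a load balancer) now also has to cope with the resource possibly having zero instances. A hypothetical sketch using the one() function (Terraform 0.15+), which returns the single instance's value or null when count is 0:
output "certificate_arn" {
  # Hypothetical output name; this evaluates to null in the dev workspace where count is 0
  value = one(aws_acm_certificate.default[*].arn)
}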

Terraform error creating subnet dependency

I'm trying to get a DocumentDB cluster up and running from within a private subnet I have created.
Running the config below without the depends_on, I get the following error message, as the subnet hasn't been created:
Error: error creating DocDB cluster: DBSubnetGroupNotFoundFault: DB subnet group 'subnet-0b97a3f5bf6db758f' does not exist.
status code: 404, request id: 59b75d23-50a4-42f9-99a3-367af58e6e16
I added the depends_on setting to wait for the subnet to be created, but I'm still running into an issue.
resource "aws_docdb_cluster" "docdb" {
  cluster_identifier      = "my-docdb-cluster"
  engine                  = "docdb"
  master_username         = "myusername"
  master_password         = "mypassword"
  backup_retention_period = 5
  preferred_backup_window = "07:00-09:00"
  skip_final_snapshot     = true
  apply_immediately       = true
  db_subnet_group_name    = aws_subnet.eu-west-3a-private
  depends_on              = [aws_subnet.eu-west-3a-private]
}
On running terraform apply, I am getting an error on the config:
Error: error creating DocDB cluster: DBSubnetGroupNotFoundFault: DB subnet group 'subnet-0b97a3f5bf6db758f' does not exist.
status code: 404, request id: 8b992d86-eb7f-427e-8f69-d05cc13d5b2d
on main.tf line 230, in resource "aws_docdb_cluster" "docdb":
230: resource "aws_docdb_cluster" "docdb"
A DB subnet group is a logical resource in itself that tells AWS where it may schedule a database instance in a VPC. It does not refer to the subnets directly, which is what you're trying to do there.
To create a DB subnet group you should use the aws_db_subnet_group resource. You then refer to it by name directly when creating database instances or clusters.
A basic example would look like this:
resource "aws_vpc" "example" {
cidr_block = "10.0.0.0/16"
}
resource "aws_subnet" "eu-west-3a" {
vpc_id = aws_vpc.example.id
availability_zone = "a"
cidr_block = "10.0.1.0/24"
tags = {
AZ = "a"
}
}
resource "aws_subnet" "eu-west-3b" {
vpc_id = aws_vpc.example.id
availability_zone = "b"
cidr_block = "10.0.2.0/24"
tags = {
AZ = "b"
}
}
resource "aws_db_subnet_group" "example" {
name = "main"
subnet_ids = [
aws_subnet.eu-west-3a.id,
aws_subnet.eu-west-3b.id
]
tags = {
Name = "My DB subnet group"
}
}
resource "aws_db_instance" "example" {
allocated_storage = 20
storage_type = "gp2"
engine = "mysql"
engine_version = "5.7"
instance_class = "db.t2.micro"
name = "mydb"
username = "foo"
password = "foobarbaz"
parameter_group_name = "default.mysql5.7"
db_subnet_group_name = aws_db_subnet_group.example.name
}
The same thing applies to Elasticache subnet groups which use the aws_elasticache_subnet_group resource.
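Since the question is specifically about DocumentDB, it is worth adding (this is an addition, not from the original answer) that DocDB has its own aws_docdb_subnet_group resource, which works the same way. A sketch reusing the example subnets above:
resource "aws_docdb_subnet_group" "example" {
  name = "docdb-main"
  subnet_ids = [
    aws_subnet.eu-west-3a.id,
    aws_subnet.eu-west-3b.id,
  ]
}

resource "aws_docdb_cluster" "docdb" {
  cluster_identifier   = "my-docdb-cluster"
  engine               = "docdb"
  master_username      = "myusername"
  master_password      = "mypassword"
  skip_final_snapshot  = true
  # Refer to the subnet group by name, not to the subnets themselves
  db_subnet_group_name = aws_docdb_subnet_group.example.name
}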
It's also worth noting that adding depends_on to a resource that already references the dependent resource via interpolation does nothing. The depends_on meta-argument is only for resources that don't expose a parameter that would otherwise provide this dependency information directly.
It seems the value in the parameter is wrong. The db_subnet_group created somewhere else gives an id/ARN as its output, so you need to use the id value, although the depends_on clause looks okay:
db_subnet_group_name = aws_db_subnet_group.eu-west-3a-private.id
That would be correct; you can also try the ARN in place of the id.

Terraform - Creating resources in one transaction / setting rollback policies

I'm using Terraform with AWS as a provider.
In one of my networks I accidentally configured wrong values, which led to a failure during resource creation.
So the situation was that some of the resources were up and running, but I would have preferred that the whole process be executed as one transaction.
I'm familiar with the output Terraform gives in such cases:
Terraform does not automatically rollback in the face of errors.
Instead, your Terraform state file has been partially updated with any
resources that successfully completed. Please address the error above
and apply again to incrementally change your infrastructure.
My question is: Is there a way to set up a rollback policy for cases where some resources were created and some failed?
Below is a simple example to reproduce the problem.
In the local variable 'az_list', just change the value from 'names' to 'zone_ids':
az_list = "${data.aws_availability_zones.available.zone_ids}"
A VPC will then be created with some default security groups and route tables, but without subnets.
resources.tf:
provider "aws" {
region = "${var.region}"
}
### Local data ###
data "aws_availability_zones" "available" {}
locals {
#In order to reproduce an error: Change 'names' to 'zone_ids'
az_list = "${data.aws_availability_zones.available.names}"
}
### Vpc ###
resource "aws_vpc" "base_vpc" {
cidr_block = "${var.cidr}"
instance_tenancy = "default"
enable_dns_hostnames = "false"
enable_dns_support = "true"
}
### Subnets ###
resource "aws_subnet" "private" {
vpc_id = "${aws_vpc.base_vpc.id}"
cidr_block = "${cidrsubnet( var.cidr, 8, count.index + 1 + length(local.az_list) )}"
availability_zone = "${element(local.az_list, count.index)}"
count = 2
}
resource "aws_subnet" "public" {
vpc_id = "${aws_vpc.base_vpc.id}"
cidr_block = "${cidrsubnet(var.cidr, 8, count.index + 1)}"
availability_zone = "${element(local.az_list, count.index)}"
count = 2
map_public_ip_on_launch = true
}
variables.tf:
variable "region" {
description = "Name of region"
default = "ap-south-1"
}
variable "cidr" {
description = "The CIDR block for the VPC"
default = "10.0.0.0/16"
}