Terraform: length() on data source cannot be determined until apply? - amazon-web-services

I am trying to dynamically declare multiple aws_nat_gateway data sources by retrieving the list of public subnets through the aws_subnet_ids data source. However, when I try to set the count parameter to be equal to the length of the subnet IDs, I get an error saying The "count" value depends on resource attributes that cannot be determined until apply....
This almost directly contradicts the example in their documentation! How do I fix this? Is the documentation wrong?
I am using Terraform v0.12.
data "aws_vpc" "environment_vpc" {
id = var.vpc_id
}
data "aws_subnet_ids" "public_subnet_ids" {
vpc_id = data.aws_vpc.environment_vpc.id
tags = {
Tier = "public"
}
depends_on = [data.aws_vpc.environment_vpc]
}
data "aws_nat_gateway" "nat_gateway" {
count = length(data.aws_subnet_ids.public_subnet_ids.ids) # <= Error
subnet_id = data.aws_subnet_ids.public_subnet_ids.ids.*[count.index]
depends_on = [data.aws_subnet_ids.public_subnet_ids]
}
I expect to be able to apply this template successfully, but I am getting the following error:
Error: Invalid count argument
on ../src/variables.tf line 78, in data "aws_nat_gateway" "nat_gateway":
78: count = "${length(data.aws_subnet_ids.public_subnet_ids.ids)}"
The "count" value depends on resource attributes that cannot be determined
until apply, so Terraform cannot predict how many instances will be created.
To work around this, use the -target argument to first apply only the
resources that the count depends on.

It seems you are trying to fetch subnets that haven't been created yet, or that can't be determined at plan time. The Terraform output suggests using the -target flag to create the VPC and subnets (or whatever the count depends on) first, and then applying the nat_gateway resource in a second run. I suggest you use a list of AZs instead of subnet IDs; I'll add a simple example below.
variable "vpc_azs_list" {
default = [
"us-east-1d",
"us-east-1e"
]
}
resource "aws_nat_gateway" "nat" {
count = var.enable_nat_gateways ? length(var.azs_list) : 0
allocation_id = "xxxxxxxxx"
subnet_id = "xxxxxxxxx"
depends_on = [
aws_internet_gateway.main,
aws_eip.nat_eip,
]
tags = {
"Name" = "nat-gateway-name"
"costCenter" = "xxxxxxxxx"
"owner" = "xxxxxxxxx"
}
}
I hope this will be useful to you and other users.
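For reference, the -target workaround from the error message looks like the following. This is just a sketch that assumes the subnets are created elsewhere in the same configuration by a hypothetical aws_subnet.public resource:

terraform apply -target=aws_subnet.public
terraform apply

The first run creates only the targeted resources (and their dependencies), so on the second run length() can be computed from known values.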

Related

Filter out Subnet IDs based on sufficient capacity in availability zones in Terraform

I'm trying to deploy an EKS cluster and everything seems to be fine except for one thing!
The facade module looks like this:
module "eks" {
source = "../../../../infrastructure_modules/eks"
## EKS ##
create_eks = var.create_eks
cluster_version = var.cluster_version
cluster_name = local.cluster_name
vpc_id = data.aws_vpc.this.id
subnets = data.aws_subnet_ids.this.ids
# note: either pass worker_groups or node_groups
# this is for (EKSCTL API) unmanaged node group
worker_groups = var.worker_groups
# this is for (EKS API) managed node group
node_groups = var.node_groups
## Common tag metadata ##
env = var.env
app_name = var.app_name
tags = local.eks_tags
region = var.region
}
The VPC ID is retrieved through the following block:
data "aws_vpc" "this" {
tags = {
Name = "tagName"
}
}
which is then used to retrieve the subnet IDs as follows:
data "aws_subnet_ids" "this" {
vpc_id = data.aws_vpc.this.id
}
Nevertheless, deploying this results in an error stating:
Error: error creating EKS Cluster (data-layer-eks):
UnsupportedAvailabilityZoneException: Cannot create cluster
'data-layer-eks' because us-east-1e, the targeted availability zone,
does not currently have sufficient capacity to support the cluster.
This is a well-known error that anyone can run into, even with plain EC2 instances.
I could work around it by simply hardcoding the subnet values, but that's really undesirable and hardly maintainable.
So the question is: how can I filter out subnet IDs based on availability zones that have sufficient capacity?
First you need to collect the subnets with all of their attributes:
data "aws_subnets" "this" {
filter {
name = "vpc-id"
values = [data.aws_vpc.this.id]
}
}
data "aws_subnet" "this" {
for_each = toset(data.aws_subnets.this.ids)
id = each.value
}
data.aws_subnet.this is now a map(object) with all of the subnets and their attributes. You can now filter by availability zone accordingly:
subnets = [for subnet in data.aws_subnet.this : subnet.id if subnet.availability_zone != "us-east-1e"]
You can also filter by truthy conditionals if that condition is easier for you:
subnets = [for subnet in data.aws_subnet.this : subnet.id if contains(["us-east-1a", "us-east-1b", "us-east-1c", "us-east-1d"], subnet.availability_zone)]
It depends on your personal use case.
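Tying this back to the facade module from the question, the filtered expression can be passed straight in. A minimal sketch, reusing the filter above:

module "eks" {
  source = "../../../../infrastructure_modules/eks"

  # ... other arguments as in the question ...
  subnets = [for subnet in data.aws_subnet.this : subnet.id if subnet.availability_zone != "us-east-1e"]
}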

Dynamically add resources in Terraform

I set up a Jenkins pipeline that launches Terraform to create a new EC2 instance in our VPC and register it to our private hosted zone on R53 (which is created at the same time) on every run.
I also managed to save the state in S3, so it doesn't fail when the hosted zone is re-created.
The main issue I have is that on every run Terraform replaces the previous instance with the new one instead of adding it to the pool of instances.
How can I avoid this?
Here's a snippet of my code:
terraform {
  backend "s3" {
    bucket = "<redacted>"
    key    = "<redacted>/terraform.tfstate"
    region = "eu-west-1"
  }
}

provider "aws" {
  region = "${var.region}"
}

data "aws_ami" "image" {
  # limit search criteria for performance
  most_recent = "${var.ami_filter_most_recent}"
  name_regex  = "${var.ami_filter_name_regex}"
  owners      = ["${var.ami_filter_name_owners}"]

  # filter on tag purpose
  filter {
    name   = "tag:purpose"
    values = ["${var.ami_filter_purpose}"]
  }

  # filter on tag os
  filter {
    name   = "tag:os"
    values = ["${var.ami_filter_os}"]
  }
}

resource "aws_instance" "server" {
  # use extracted ami from image data source
  ami                    = data.aws_ami.image.id
  availability_zone      = data.aws_subnet.most_available.availability_zone
  subnet_id              = data.aws_subnet.most_available.id
  instance_type          = "${var.instance_type}"
  vpc_security_group_ids = ["${var.security_group}"]
  user_data              = "${var.user_data}"
  iam_instance_profile   = "${var.iam_instance_profile}"

  root_block_device {
    volume_size = "${var.root_disk_size}"
  }

  ebs_block_device {
    device_name = "${var.extra_disk_device_name}"
    volume_size = "${var.extra_disk_size}"
  }

  tags = {
    Name = "${local.available_name}"
  }
}

resource "aws_route53_zone" "private" {
  name = var.hosted_zone_name

  vpc {
    vpc_id = var.vpc_id
  }
}

resource "aws_route53_record" "record" {
  zone_id = aws_route53_zone.private.zone_id
  name    = "${local.available_name}.${var.hosted_zone_name}"
  type    = "A"
  ttl     = "300"
  records = [aws_instance.server.private_ip]

  depends_on = [
    aws_route53_zone.private
  ]
}
The outcome is that my previously created instance is destroyed and a new one is created. What I want is to keep adding instances with this code.
Thank you.
Your code creates only one instance, aws_instance.server, and any change to its properties modifies that one instance, because your backend is in S3 and therefore acts as a single shared state across pipeline runs. The same goes for aws_route53_record.record and everything else in your script.
If you want different pipeline runs to reuse the same exact script, you should either use different workspaces or create a separate TF state for each run.
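With workspaces, each run gets its own state under the same backend. A minimal sketch of the commands (the workspace name is hypothetical, e.g. derived from the Jenkins build number):

terraform workspace new build-42 || terraform workspace select build-42
terraform apply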
The other alternative is to redefine your TF script to take a map of instances as an input variable and use for_each to create distinct instances, as in the sketch below. And if the instances should all be identical, you should instead manage their count with an aws_autoscaling_group and its desired capacity.
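A minimal sketch of the for_each variant, assuming a hypothetical var.instances map keyed by hostname:

variable "instances" {
  # one entry per server, keyed by hostname (hypothetical shape)
  type    = map(object({ instance_type = string }))
  default = {}
}

resource "aws_instance" "server" {
  # one instance per map entry; new keys add instances instead of replacing
  for_each      = var.instances
  ami           = data.aws_ami.image.id
  instance_type = each.value.instance_type

  tags = {
    Name = each.key
  }
}

resource "aws_route53_record" "record" {
  # one A record per instance, following the same keys
  for_each = aws_instance.server
  zone_id  = aws_route53_zone.private.zone_id
  name     = "${each.key}.${var.hosted_zone_name}"
  type     = "A"
  ttl      = "300"
  records  = [each.value.private_ip]
}

Adding a new entry to the map then adds an instance and record instead of replacing the existing ones.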

Terraform outputs from a resource called via a for_each

I'm wondering if anyone can help me with the following. I have a base resource to create AWS subnets:
resource "aws_subnet" "subnet" {
  vpc_id     = var.vpc_id
  cidr_block = var.cidr_block
}

output "subnetId" {
  value = aws_subnet.subnet.id
}

module "private_subnet" {
  source   = "linktoresourcedetailedabove"
  for_each = var.privateSubnet

  vpc_id     = var.vpc_id
  cidr_block = each.value.cidr_block
}
I have a module (above) that calls this resource in a for_each loop driven by a variable that is passed in. My question is: this resource might be called 10 times, and I want to store each id and then access them from another module, but I keep hitting issues here. I tried updating aws_subnet.subnet.id to aws_subnet.subnet.*.id but am still not having any luck, and I can't find anything out there that helps.
If your private_subnet module has the output
output "subnetId" {
  value = aws_subnet.subnet.id
}
then once you create your private_subnet module instances, you can get the list of all the subnetId values as:
values(module.private_subnet)[*].subnetId
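For example, to hand those IDs to another module (the consumer module here is hypothetical):

module "network_consumer" {
  source     = "./modules/network_consumer"
  subnet_ids = values(module.private_subnet)[*].subnetId
}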

Terraform: Handle error if no EC2 Instance Type offerings found in AZ

We are spinning up G4 instances in AWS through Terraform and often encounter issues where one or two of the AZs in a given Region don't support the G4 instance type.
For now I have hardcoded our TF configuration as below, creating a map of Regions to AZs in an "azs" variable. From this map I can spin up clusters in targeted AZs of each Region where we have G4 instance support.
I am using the AWS command line mentioned in this AWS article to find which AZs are supported in a given Region, and I update our "azs" variable as we expand to other Regions.
variable "azs" {
default = {
"us-west-2" = "us-west-2a,us-west-2b,us-west-2c"
"us-east-1" = "us-east-1a,us-east-1b,us-east-1e"
"eu-west-1" = "eu-west-1a,eu-west-1b,eu-west-1c"
"eu-west-2" = "eu-west-2a,eu-west-2b,eu-west-2c"
"eu-west-3" = "eu-west-3a,eu-west-3c"
}
However, the above approach requires human intervention and frequent updates (if AWS later adds support in AZs of a given Region that are currently unsupported).
There is this Stack Overflow question where the user is trying to do the same thing, but he can fall back to another instance type if an AZ doesn't support the given instance type.
In my use case, I can't use any fallback instance type, since our app servers only run on G4.
I have tried the workaround mentioned in the answer to that question, but it fails with the following error message.
Error: no EC2 Instance Type Offerings found matching criteria; try different search

  on main.tf line 8, in data "aws_ec2_instance_type_offering" "example":
   8: data "aws_ec2_instance_type_offering" "example" {
I am using the TF config below, where my preferred_instance_types is g4dn.xlarge.
provider "aws" {
version = "2.70"
}
data "aws_availability_zones" "all" {
state = "available"
}
data "aws_ec2_instance_type_offering" "example" {
for_each = toset(data.aws_availability_zones.all.names)
filter {
name = "instance-type"
values = ["g4dn.xlarge"]
}
filter {
name = "location"
values = [each.value]
}
location_type = "availability-zone"
preferred_instance_types = ["g4dn.xlarge"]
}
output "foo" {
value = { for az, details in data.aws_ec2_instance_type_offering.example : az => details.instance_type }
}
I would like to know how to handle this failure, since Terraform cannot find the G4 instance type in one of the AZs of a given Region and fails.
Is there any Terraform error handling I can do to bypass this error for now and get the supported AZs as an output?
I had checked the other question you mentioned earlier, but I could never get the output right. Thanks to ydaetskcoR for the response in that post; I learned a bit and got my loop working.
Here is one way to accomplish what you are looking for... Let me know if it works for you.
Instead of "aws_ec2_instance_type_offering", use "aws_ec2_instance_type_offerings" ... (there is a 's' in the end. they are different Data Sources...
I will just paste the code here and assume you will be able to decode the logic. I am filtering for one specific instance type and if its not supported, instance_types will be black and i make a list of AZ thats does not do not have blank values.
variable "az" {
default="us-east-1"
}
variable "my_inst" {
default="g4dn.xlarge"
}
data "aws_availability_zones" "example" {
filter {
name = "opt-in-status"
values = ["opt-in-not-required"]
}
}
data "aws_ec2_instance_type_offerings" "example" {
for_each=toset(data.aws_availability_zones.example.names)
filter {
name = "instance-type"
values = [var.my_inst]
}
filter {
name = "location"
values = ["${each.key}"]
}
location_type = "availability-zone"
}
output "az_where_inst_avail" {
value = keys({ for az, details in data.aws_ec2_instance_type_offerings.example :
az => details.instance_types if length(details.instance_types) != 0 })
}
The output will look like below. us-east-1e does not have the instance type, so it's not in the output. Do test a few cases to see if it works every time.
Outputs:

az_where_inst_avail = [
  "us-east-1a",
  "us-east-1b",
  "us-east-1c",
  "us-east-1d",
  "us-east-1f",
]
I think there's a cleaner way. The data source already filters availability zones based on the given filter, and there is a locations attribute that produces a list of locations of the desired location_type.
provider "aws" {
region = var.region
}
data "aws_ec2_instance_type_offerings" "available" {
filter {
name = "instance-type"
values = [var.instance_type]
}
location_type = "availability-zone"
}
output "azs" {
value = data.aws_ec2_instance_type_offerings.available.locations
}
Where the instance_type is t3.micro and the region is us-east-1, this accurately produces:
azs = tolist([
  "us-east-1d",
  "us-east-1a",
  "us-east-1c",
  "us-east-1f",
  "us-east-1b",
])
You don't need to feed it a list of availability zones because it already gets those from the supplied region.
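If you then need the subnets in those AZs (for example, to feed an EKS cluster or an ASG), one possible next step is to filter subnets by the supported zones. A sketch, assuming a hypothetical var.vpc_id:

data "aws_subnets" "supported" {
  filter {
    name   = "vpc-id"
    values = [var.vpc_id]
  }

  # "availability-zone" is a standard DescribeSubnets filter
  filter {
    name   = "availability-zone"
    values = data.aws_ec2_instance_type_offerings.available.locations
  }
}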

Terraform correlation to resource with for_each

I'm using the code below to assign the default subnets to an ASG:
resource "aws_autoscaling_group" "ecs_spot_asg" {
for_each = data.aws_subnet_ids.all_subnets.ids
.... etc...
The subnets are retrieved via:
data "aws_subnet_ids" "all_subnets" {
vpc_id = data.aws_vpc.default.id
}
Below I have an aws_autoscaling_policy, and I'm stuck on how to relate one to the other:
resource "aws_autoscaling_policy" "ecs_cluster_scale_policy" {
autoscaling_group_name = aws_autoscaling_group.ecs_spot_asg.name
I'm getting this error:
Because aws_autoscaling_group.ecs_spot_asg has "for_each" set, its attributes must be accessed on specific instances.

For example, to correlate with indices of a referring resource, use:
  aws_autoscaling_group.ecs_spot_asg[each.key]
How should this be modified?
My mistake was wrapping data.aws_subnet_ids.all_subnets.ids in brackets when assigning vpc_zone_identifier.
So instead of vpc_zone_identifier = [data.aws_subnet_ids.all_subnets.ids] it should be vpc_zone_identifier = data.aws_subnet_ids.all_subnets.ids.
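For the autoscaling policy itself, the error message already points at the fix: give the policy the same for_each and index the ASG per instance. A minimal sketch (the name, adjustment_type, and scaling_adjustment values are placeholders):

resource "aws_autoscaling_policy" "ecs_cluster_scale_policy" {
  # one policy per ASG, keyed by the same subnet IDs
  for_each               = data.aws_subnet_ids.all_subnets.ids
  name                   = "ecs-scale-${each.key}"
  autoscaling_group_name = aws_autoscaling_group.ecs_spot_asg[each.key].name
  adjustment_type        = "ChangeInCapacity"
  scaling_adjustment     = 1
}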