I get the following whenever I run terraform plan/apply, and I don't understand why it says the domain always needs to be replaced. The ACM certificate is managed at the root of my project, and its ARN is then passed to my Cognito module.
# module.cognito["users"].aws_cognito_user_pool_domain.main must be replaced
+/- resource "aws_cognito_user_pool_domain" "main" {
~ certificate_arn = "arn:aws:acm:us-east-1:123456789:certificate/bc955b8a-45c6-4003-1b2a-5z66333fef275" -> (known after apply) # forces replacement
}
Update: adding the module call and DNS file for clarity.
cognito.tf (root of the project)
module "cognito" {
source = "../modules/cognito"
for_each = var.cognito_userpools
cognito_name_prefix = "${try(each.value.name_prefix, local.name_prefix)}-${each.key}"
cognito_domain_name = try("${each.value.domain_prefix}.${local.dns_address}", null)
cognito_https_acm_arn = try(aws_acm_certificate.cognito_https_cert[each.key].arn, null)
hosted_zone_id = try(aws_route53_zone.public_hosted_zone.id, null)
cognito_callback_urls = each.value.callback_urls
cognito_logout_urls = each.value.logout_urls
cognito_sms_external_id = each.value.sms_external_id
cognito_userpool_schemas = each.value.userpool_schemas
cognito_mfa_configuration = try(each.value.mfa_configuration, "ON")
cognito_enable_software_token_mfa_configuration = try(each.value.enable_software_token_mfa_configuration, true)
cognito_userpool_groups = try(each.value.groups, [])
tags = local.default_tags
}
acm.tf (root of the project)
resource "aws_acm_certificate" "cognito_https_cert" {
for_each = var.cognito_userpools
provider = aws.us-east-1
domain_name = "${each.value.domain_prefix}.${local.dns_address}"
subject_alternative_names = ["*.${each.value.domain_prefix}.${local.dns_address}"]
validation_method = "DNS"
tags = local.default_tags
lifecycle {
create_before_destroy = true
}
}
modules/cognito/dns.tf
resource "aws_cognito_user_pool_domain" "main" {
domain = var.cognito_domain_name
certificate_arn = var.cognito_https_acm_arn
user_pool_id = aws_cognito_user_pool.pool.id
}
resource "aws_route53_record" "cognito_record" {
name = aws_cognito_user_pool_domain.main.domain
type = "A"
zone_id = var.hosted_zone_id
allow_overwrite = true
alias {
evaluate_target_health = false
name = aws_cognito_user_pool_domain.main.cloudfront_distribution_arn
# NOTE: This zone_id is fixed (it is CloudFront's hosted zone ID)
zone_id = "Z2FDTNDATAQYW2"
}
}
Update 2
Upon further investigation, tag updates on the certificate were causing the ARN not to be known until after apply. This isn't ideal, and it only seems to be a problem with Cognito custom domains; I don't have the same issue with API Gateway custom domains, for example. Does anyone have an idea for a workaround other than ignoring tag updates with a lifecycle meta-argument?
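One possible direction, not something from the original post but a sketch that assumes the certificate is already issued and validated, is to resolve the ARN through the aws_acm_certificate data source so that the value passed to the module is known at plan time even when tags on the managed certificate change:
data "aws_acm_certificate" "cognito_https_cert_lookup" {
  for_each    = var.cognito_userpools
  provider    = aws.us-east-1
  # look up the issued certificate by its domain name so the ARN is a known value
  domain      = "${each.value.domain_prefix}.${local.dns_address}"
  statuses    = ["ISSUED"]
  most_recent = true
}

# in the module call, reference the data source instead of the resource:
# cognito_https_acm_arn = data.aws_acm_certificate.cognito_https_cert_lookup[each.key].arn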
Related
I am trying to generate a certificate and validate it via DNS... everything seems to work until the last step, when I use the resource "aws_acm_certificate_validation".
my code is the following:
# Create Certificate
resource "aws_acm_certificate" "ic_cert" {
provider = aws.us-east-1
domain_name = aws_s3_bucket.ic_bucket_main.bucket
subject_alternative_names = [aws_s3_bucket.ic_bucket_redirect.bucket]
validation_method = "DNS"
tags = {
Billing = "company X"
}
lifecycle {
create_before_destroy = true
}
}
# Validate Certificate via DNS
# get zone_id
data "aws_route53_zone" "selected" {
provider = aws.us-east-1
name = aws_s3_bucket.ic_bucket_main.bucket
}
# Generate DNS Records
resource "aws_route53_record" "ic_DNS_validation" {
provider = aws.us-east-1
for_each = {
for dvo in aws_acm_certificate.ic_cert.domain_validation_options : dvo.domain_name => {
name = dvo.resource_record_name
record = dvo.resource_record_value
type = dvo.resource_record_type
zone_id = data.aws_route53_zone.selected.zone_id
}
}
allow_overwrite = true
name = each.value.name
records = [each.value.record]
ttl = 60
type = each.value.type
zone_id = each.value.zone_id
}
# Confirm certificate creation
resource "aws_acm_certificate_validation" "ic_cert_validation" {
certificate_arn = aws_acm_certificate.ic_cert.arn
#validation_record_fqdns = [for record in aws_route53_record.ic_DNS_validation : record.fqdn]
#validation_record_fqdns = [aws_route53_record.ic_DNS_validation.fqdn]
validation_record_fqdns = [for record in aws_route53_record.ic_DNS_validation : record.fqdn]
}
and I get the following error:
Error: reading ACM Certificate (arn:aws:acm:us-east-1:xxxxxxxxxxxxxxxxxxxxx8:certificate/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx): couldn't find resource
│ with aws_acm_certificate_validation.ic_cert_validation,
│ on certificates.tf line 45, in resource "aws_acm_certificate_validation" "ic_cert_validation":
│ 45: resource "aws_acm_certificate_validation" "ic_cert_validation" {
Would anybody be able to spot what the issue is?
Since ACM is a regional service and the certificate was created using provider = aws.us-east-1, the resource used for certificate validation must also use the same provider configuration (the certificate was already created in that region):
resource "aws_acm_certificate_validation" "ic_cert_validation" {
provider = aws.us-east-1
certificate_arn = aws_acm_certificate.ic_cert.arn
validation_record_fqdns = [for record in aws_route53_record.ic_DNS_validation : record.fqdn]
}
I have a map in a tfvars file that contains a Cloudflare zone ID, a site address, and a zone (domain). I want to iterate through that map, generating an ACM certificate for each entry and creating the certificate validation DNS record in Cloudflare.
My map looks like this:
my_domains = {
example1 = {
cloudflare_zone_id = "00000000000000000000000000001"
address = "dev.example1.com"
domain = "example1.com"
}
example2 = {
cloudflare_zone_id = "0000000000000000000000000000002"
address = "dev.example2.com"
domain = "example2.com"
}
example3 = {
cloudflare_zone_id = "0000000000000000000000000000003"
address = "dev.library.example3.com"
domain = "example3.com"
}
}
I then have the following code for the certificate creation and validation:
resource "aws_acm_certificate" "my_certs" {
for_each = var.my_domains
domain_name = each.value.address
validation_method = "DNS"
subject_alternative_names = [
"*.${each.value.address}"
]
lifecycle {
create_before_destroy = true
}
}
resource "cloudflare_zone" "my_zone" {
for_each = var.my_domains
zone = each.value.domain
type = "full"
}
resource "cloudflare_record" "my_certificate_validation" {
for_each = {
for dvo in aws_acm_certificate.my_certs.domain_validation_options : dvo.domain_name => {
name = dvo.resource_record_name
record = dvo.resource_record_value
type = dvo.resource_record_type
}
}
zone_id = cloudflare_zone.my_zone.id
name = each.value.name
value = trimsuffix(each.value.record, ".")
type = each.value.type
ttl = 1
proxied = false
}
When I run a plan, I get the following errors:
Error: Missing resource instance key
on cfcertvalidation.tf line 23, in resource "cloudflare_record" "my_certificate_validation":
23: for dvo in aws_acm_certificate.my_certs.domain_validation_options : dvo.domain_name => {
Because aws_acm_certificate.my_certs has "for_each" set, its attributes must be
accessed on specific instances.
For example, to correlate with indices of a referring resource, use:
aws_acm_certificate.my_certs[each.key]
Error: Missing resource instance key
on cfcertvalidation.tf line 30, in resource "cloudflare_record" "my_certificate_validation":
30: zone_id = cloudflare_zone.my_zone.id
Because cloudflare_zone.cdt has "for_each" set, its attributes must be
accessed on specific instances.
For example, to correlate with indices of a referring resource, use:
cloudflare_zone.my_zone[each.key]
Note: I added the cloudflare_zone resource, rather than using the zone ID already in the map, as a way to simplify things while troubleshooting.
I am sure the answer lies in the suggestion to use [each.key], but I'm not sure how to implement it.
Any assistance would be greatly appreciated.
I have changed the map somewhat for my solution, so for completeness I have included the changed map here:
variable "my_domains" {
type = map(any)
default = {
example1 = {
cf_zone_id = "0000000000000000000000000000"
address = "example1.com"
zone = "example1.com"
}
example2 = {
cf_zone_id = "0000000000000000000000000000"
address = "example2.com"
zone = "example2.com"
}
example3 = {
cf_zone_id = "0000000000000000000000000000"
address = "library.example3.com"
zone = "example3.com"
}
}
}
What follows is the working solution. We start out by creating a local variable of type list, looping through the my_domains map to get the cert validation records we need. That list then gets converted into a map, which the cloudflare_record resource uses to create the relevant DNS entries.
resource "aws_acm_certificate" "my_certs" {
for_each = var.my_domains
domain_name = "${var.env_url_prefix}${var.my_domains[each.key]["address"]}"
validation_method = "DNS"
subject_alternative_names = ["*.${var.env_url_prefix}${var.my_domains[each.key]["address"]}"]
lifecycle {
create_before_destroy = true
}
}
locals {
validation = [
for certs in keys(var.my_domains) : {
for dvo in aws_acm_certificate.my_certs[certs].domain_validation_options : dvo.domain_name => {
name = dvo.resource_record_name
value = trimsuffix(dvo.resource_record_value, ".")
type = dvo.resource_record_type
zone_id = var.my_domains[certs]["cf_zone_id"] # Get the zone id
}
}
]
# transform the list into a map
validation_map = { for item in local.validation : keys(item)[0] => values(item)[0] }
}
resource "cloudflare_record" "my_cert_validations" {
for_each = local.validation_map
zone_id = local.validation_map[each.key]["zone_id"]
name = local.validation_map[each.key]["name"]
value = local.validation_map[each.key]["value"]
type = local.validation_map[each.key]["type"]
ttl = 1
proxied = false #important otherwise validation will fail
}
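As a side note (not part of the original answer), the list-to-map conversion can also be done with merge() and the expansion operator; the difference from the keys()/values() comprehension above is that it keeps every domain_validation_options entry per certificate rather than only the first one:
locals {
  # hypothetical alternative to validation_map above; merges the list of
  # per-certificate maps into a single map keyed by domain name
  validation_map_alt = merge(local.validation...)
}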
I have several domains and I want to create subdomains with as much DRY as possible. This is the original structure:
variable "domain1" {
type = list(string)
default = ["www", "www2"]
}
variable "domain2" {
type = list(string)
default = ["www3", "www1"]
}
resource "aws_route53_record" "domain1" {
for_each = toset(var.domain1)
type = "A"
name = "${each.key}.domain1.com"
zone_id = ""
}
resource "aws_route53_record" "domain2" {
for_each = toset(var.domain2)
type = "A"
name = "${each.key}.domain2.com"
zone_id = ""
}
that I want to combine to one variable and one resource block:
variable "subdomains" {
type = map(list(string))
default = {
"domain1.com" = ["www", "www2"]
"domain2.com" = ["www3", "www1"]
}
}
resource "aws_route53_record" "domain1" {
for_each = var.subdomains // make magic happen here...
type = "A"
name = "${each.subdomain_part}.${each.domain_part}" // ...and here
zone_id = ""
}
Is there a way to achieve this?
You can flatten your var.subdomains as follows:
locals {
subdomains_flat = flatten([for domain, subdomains in var.subdomains:
[ for subdomain in subdomains:
{
domain_part = domain
subdomain_part = subdomain
}
]
])
}
then:
resource "aws_route53_record" "domain1" {
for_each = {for idx, val in local.subdomains_flat: idx => val }
type = "A"
name = "${each.value.subdomain_part}.${each.value.domain_part}"
zone_id = ""
}
Following up on the comment about a messy state: I would not say messy, but there are certainly some downsides. The index in that answer is numeric, so a plan shows that the resources end up as:
# aws_route53_record.domain1["0"] will be created
+ resource "aws_route53_record" "domain1" {
# aws_route53_record.domain1["1"] will be created
+ resource "aws_route53_record" "domain1" {
That can create problems when we add or remove subdomains from the list: the order can change, and that will cause resources to be destroyed and recreated, which is not ideal for Route 53 records.
Here is another approach that creates a different index in the resource address.
We still use flatten to extract the subdomains, but in this case I concatenate right away, so the local variable is ready for the aws_route53_record resource to consume.
provider "aws" {
region = "us-east-2"
}
variable "subdomains" {
type = map(list(string))
default = {
"domain1.com" = ["www", "www2"]
"domain2.com" = ["www3", "www1"]
}
}
locals {
records = flatten([for d, subs in var.subdomains: [for s in subs: "${s}.${d}"]])
}
resource "aws_route53_record" "domain1" {
for_each = toset(local.records)
type = "A"
name = each.value
zone_id = "us-east-1"
}
A terraform plan of that looks like:
Terraform will perform the following actions:
# aws_route53_record.domain1["www.domain1.com"] will be created
+ resource "aws_route53_record" "domain1" {
+ allow_overwrite = (known after apply)
+ fqdn = (known after apply)
+ id = (known after apply)
+ name = "www.domain1.com"
+ type = "A"
+ zone_id = "us-east-1"
}
# aws_route53_record.domain1["www1.domain2.com"] will be created
+ resource "aws_route53_record" "domain1" {
+ allow_overwrite = (known after apply)
+ fqdn = (known after apply)
+ id = (known after apply)
+ name = "www1.domain2.com"
+ type = "A"
+ zone_id = "us-east-1"
}
...
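If the records also need the real hosted zone ID per domain (the zone_id above is just a placeholder), a similar comprehension can keep the parent domain alongside each FQDN. The var.zone_ids map below is a hypothetical domain-to-hosted-zone-ID lookup, not part of the original answer:
locals {
  # map of "subdomain.domain" => parent domain, built from var.subdomains
  records_by_domain = merge([for d, subs in var.subdomains : { for s in subs : "${s}.${d}" => d }]...)
}

resource "aws_route53_record" "subdomains" {
  for_each = local.records_by_domain
  type     = "A"
  name     = each.key
  # var.zone_ids is assumed to map each parent domain to its hosted zone ID
  zone_id  = var.zone_ids[each.value]
}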
Terraform CLI and Terraform AWS Provider Version
Installed from https://releases.hashicorp.com/terraform/0.13.5/terraform_0.13.5_linux_amd64.zip
hashicorp/aws v3.15.0
Affected Resource(s)
aws_rds_cluster
aws_rds_cluster_instance
Terraform Configuration Files
# inside ./modules/rds/main.tf
terraform {
required_providers {
aws = {
source = "hashicorp/aws"
}
}
required_version = "~> 0.13"
}
provider "aws" {
alias = "primary"
}
provider "aws" {
alias = "dr"
}
locals {
region_tags = ["primary", "dr"]
db_name = "${var.project_name}-${var.stage}-db"
db_cluster_0 = "${local.db_name}-cluster-${local.region_tags[0]}"
db_cluster_1 = "${local.db_name}-cluster-${local.region_tags[1]}"
db_instance_name = "${local.db_name}-instance"
}
resource "aws_rds_global_cluster" "global_db" {
global_cluster_identifier = "${var.project_name}-${var.stage}"
database_name = "${var.project_name}${var.stage}db"
engine = "aurora-mysql"
engine_version = "${var.mysql_version}.mysql_aurora.${var.aurora_version}"
// force_destroy = true
}
resource "aws_rds_cluster" "primary_cluster" {
depends_on = [aws_rds_global_cluster.global_db]
provider = aws.primary
cluster_identifier = "${local.db_name}-cluster-${local.region_tags[0]}"
# the database name does not allow dashes:
database_name = "${var.project_name}${var.stage}db"
# The engine and engine_version must be repeated in aws_rds_global_cluster,
# aws_rds_cluster, and aws_rds_cluster_instance to
# avoid "Value for engine should match" error
engine = "aurora-mysql"
engine_version = "${var.mysql_version}.mysql_aurora.${var.aurora_version}"
engine_mode = "global"
global_cluster_identifier = aws_rds_global_cluster.global_db.id
# backtrack and multi-master not supported by Aurora Global.
master_username = var.username
master_password = var.password
backup_retention_period = 5
preferred_backup_window = "07:00-09:00"
db_subnet_group_name = aws_db_subnet_group.primary.id
# We must have these values, because destroying or rolling back requires them
skip_final_snapshot = true
final_snapshot_identifier = "ci-aurora-cluster-backup"
tags = {
Name = local.db_cluster_0
Stage = var.stage
CreatedBy = var.created_by
}
}
resource "aws_rds_cluster_instance" "primary" {
depends_on = [aws_rds_global_cluster.global_db]
provider = aws.primary
cluster_identifier = aws_rds_cluster.primary_cluster.id
engine = "aurora-mysql"
engine_version = "${var.mysql_version}.mysql_aurora.${var.aurora_version}"
instance_class = "db.${var.instance_class}.${var.instance_size}"
db_subnet_group_name = aws_db_subnet_group.primary.id
tags = {
Name = local.db_instance_name
Stage = var.stage
CreatedBy = var.created_by
}
}
resource "aws_rds_cluster" "dr_cluster" {
depends_on = [aws_rds_cluster_instance.primary, aws_rds_global_cluster.global_db]
provider = aws.dr
cluster_identifier = "${local.db_name}-cluster-${local.region_tags[1]}"
# db name now allowed to specified on secondary regions
# The engine and engine_version must be repeated in aws_rds_global_cluster,
# aws_rds_cluster, and aws_rds_cluster_instance to
# avoid "Value for engine should match" error
engine = "aurora-mysql"
engine_version = "${var.mysql_version}.mysql_aurora.${var.aurora_version}"
engine_mode = "global"
global_cluster_identifier = aws_rds_global_cluster.global_db.id
# backtrack and multi-master not supported by Aurora Global.
# cannot specify username/password in cross-region replication cluster:
backup_retention_period = 5
preferred_backup_window = "07:00-09:00"
db_subnet_group_name = aws_db_subnet_group.dr.id
# We must have these values, because destroying or rolling back requires them
skip_final_snapshot = true
final_snapshot_identifier = "ci-aurora-cluster-backup"
tags = {
Name = local.db_cluster_1
Stage = var.stage
CreatedBy = var.created_by
}
}
resource "aws_rds_cluster_instance" "dr_instance" {
depends_on = [aws_rds_cluster_instance.primary, aws_rds_global_cluster.global_db]
provider = aws.dr
cluster_identifier = aws_rds_cluster.dr_cluster.id
engine = "aurora-mysql"
engine_version = "${var.mysql_version}.mysql_aurora.${var.aurora_version}"
instance_class = "db.${var.instance_class}.${var.instance_size}"
db_subnet_group_name = aws_db_subnet_group.dr.id
tags = {
Name = local.db_instance_name
Stage = var.stage
CreatedBy = var.created_by
}
}
resource "aws_db_subnet_group" "primary" {
name = "${local.db_name}-subnetgroup"
subnet_ids = var.subnet_ids
provider = aws.primary
tags = {
Name = "primary_subnet_group"
Stage = var.stage
CreatedBy = var.created_by
}
}
resource "aws_db_subnet_group" "dr" {
provider = aws.dr
name = "${local.db_name}-subnetgroup"
subnet_ids = var.dr_subnet_ids
tags = {
Name = "dr_subnet_group"
Stage = var.stage
CreatedBy = var.created_by
}
}
resource "aws_rds_cluster_parameter_group" "default" {
name = "rds-cluster-pg"
family = "aurora-mysql${var.mysql_version}"
description = "RDS default cluster parameter group"
parameter {
name = "character_set_server"
value = "utf8"
}
parameter {
name = "character_set_client"
value = "utf8"
}
parameter {
name = "aurora_parallel_query"
value = "ON"
apply_method = "pending-reboot"
}
}
Inside ./modules/sns/main.tf, this is the resource I'm adding when calling terraform apply from within the ./modules directory:
resource "aws_sns_topic" "foo_topic" {
name = "foo-${var.stage}-${var.topic_name}"
tags = {
Name = "foo-${var.stage}-${var.topic_name}"
Stage = var.stage
CreatedBy = var.created_by
CreatedOn = timestamp()
}
}
./modules/main.tf:
terraform {
backend "s3" {
bucket = "terraform-remote-state-s3-bucket-unique-name"
key = "terraform.tfstate"
region = "us-east-2"
dynamodb_table = "TerraformLockTable"
}
}
provider "aws" {
alias = "primary"
region = var.region
}
provider "aws" {
alias = "dr"
region = var.dr_region
}
module "vpc" {
stage = var.stage
source = "./vpc"
providers = {
aws = aws.primary
}
}
module "dr_vpc" {
stage = var.stage
source = "./vpc"
providers = {
aws = aws.dr
}
}
module "vpc_security_group" {
source = "./vpc_security_group"
vpc_id = module.vpc.vpc_id
providers = {
aws = aws.primary
}
}
module "rds" {
source = "./rds"
stage = var.stage
created_by = var.created_by
vpc_id = module.vpc.vpc_id
subnet_ids = [module.vpc.subnet_a_id, module.vpc.subnet_b_id, module.vpc.subnet_c_id]
dr_subnet_ids = [module.dr_vpc.subnet_a_id, module.dr_vpc.subnet_b_id, module.dr_vpc.subnet_c_id]
region = var.region
username = var.rds_username
password = var.rds_password
providers = {
aws.primary = aws.primary
aws.dr = aws.dr
}
}
module "sns_start" {
stage = var.stage
source = "./sns"
topic_name = "start"
created_by = var.created_by
}
./modules/variables.tf:
variable "region" {
default = "us-east-2"
}
variable "dr_region" {
default = "us-west-2"
}
variable "service" {
type = string
default = "foo-back"
description = "service to match what serverless framework deploys"
}
variable "stage" {
type = string
default = "sandbox"
description = "The stage to deploy: sandbox, dev, qa, uat, or prod"
validation {
condition = can(regex("sandbox|dev|qa|uat|prod", var.stage))
error_message = "The stage value must be a valid stage: sandbox, dev, qa, uat, or prod."
}
}
variable "created_by" {
description = "Company or vendor name followed by the username part of the email address"
}
variable "rds_username" {
description = "Username for rds"
}
variable "rds_password" {
description = "Password for rds"
}
./modules/sns/main.tf:
resource "aws_sns_topic" "foo_topic" {
name = "foo-${var.stage}-${var.topic_name}"
tags = {
Name = "foo-${var.stage}-${var.topic_name}"
Stage = var.stage
CreatedBy = var.created_by
CreatedOn = timestamp()
}
}
./modules/sns/output.tf:
output "sns_topic_arn" {
value = aws_sns_topic.foo_topic.arn
}
Debug Output
Both outputs have had keys, names, account IDs, etc. modified:
The plan output from running terraform apply:
https://gist.github.com/ystoneman/95df711ee0a11d44e035b9f8f39b75f3
The state before applying: https://gist.github.com/ystoneman/5c842769c28e1ae5969f9aaff1556b37
Expected Behavior
The entire ./modules/main.tf had already been created, and the only thing that was added was the SNS module, so only the SNS module should be created.
Actual Behavior
But instead, the RDS resources are affected too, and terraform "claims" that engine_mode has changed from provisioned to global, even though it already was global according to the console:
The plan output also says that cluster_identifier is only known after apply and therefore forces replacement. However, I think the identifiers are necessary to let the aws_rds_cluster know it belongs to the aws_rds_global_cluster, and the aws_rds_cluster_instance know it belongs to the aws_rds_cluster, respectively.
Steps to Reproduce
Comment out the module "sns_start" block.
cd ./modules
terraform apply (the state file I included reflects the state after this step)
Uncomment the module "sns_start" block.
terraform apply (the debug output I provided is from this step)
Important Factoids
This problem happens whether I run it from my Mac or within AWS CodeBuild.
References
The question "AWS Terraform tried to destroy and rebuild RDS cluster" seems to reference this too, but it's not specific to a global cluster, where you do need the identifiers so that instances and clusters know what they belong to.
It seems like you are using an outdated version of the aws provider and are specifying the engine_mode incorrectly. There was a bug ticket relating to this: https://github.com/hashicorp/terraform-provider-aws/issues/16088
It is fixed in version 3.15.0, which you can require via:
terraform {
required_providers {
aws = {
source = "hashicorp/aws"
version = "~> 3.15.0"
}
}
required_version = "~> 0.13"
}
Additionally, you should drop the engine_mode property from your Terraform configuration completely.
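A minimal sketch of the primary cluster from the configuration above with engine_mode dropped (same attribute values as in the question, abbreviated to the essentials; membership in the global cluster is expressed only through global_cluster_identifier):
resource "aws_rds_cluster" "primary_cluster" {
  provider                  = aws.primary
  cluster_identifier        = "${local.db_name}-cluster-${local.region_tags[0]}"
  database_name             = "${var.project_name}${var.stage}db"
  engine                    = "aurora-mysql"
  engine_version            = "${var.mysql_version}.mysql_aurora.${var.aurora_version}"
  # engine_mode is intentionally omitted
  global_cluster_identifier = aws_rds_global_cluster.global_db.id
  master_username           = var.username
  master_password           = var.password
  db_subnet_group_name      = aws_db_subnet_group.primary.id
  skip_final_snapshot       = true
  final_snapshot_identifier = "ci-aurora-cluster-backup"
}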
Terraform v0.12.12
+ provider.aws v3.0.0
+ provider.template v2.1.2
Before I was doing this:
resource "aws_route53_record" "derps" {
name = aws_acm_certificate.mycert[0].resource_record_name
type = aws_acm_certificate.mycert[0].resource_record_type
zone_id = var.my_zone_id
records = aws_acm_certificate.mycert[0].resource_record_value
ttl = 60
}
And that worked fine for me about a week ago.
I just did a plan and got an error:
records = [aws_acm_certificate.mycert.domain_validation_options[0].resource_record_value]
This value does not have any indices.
Now I don't pin provider versions, so I'm assuming I pulled a newer version and the resource changed.
After fighting with this and realizing it's not a list (even though it sure looked like one in terraform state show), I am now doing this to turn it into a list:
resource "aws_route53_record" "derps" {
name = sort(aws_acm_certificate.mycert.domain_validation_options[*].resource_record_name)[0]
type = sort(aws_acm_certificate.mycert.domain_validation_options[*].resource_record_type)[0]
zone_id = var.my_zone_id
records = [sort(aws_acm_certificate.mycert.domain_validation_options[*].resource_record_value)[0]]
ttl = 60
}
This resulted in no changes, which is good. But the example for doing this in the docs now uses for_each: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/acm_certificate_validation
resource "aws_route53_record" "example" {
for_each = {
for dvo in aws_acm_certificate.example.domain_validation_options : dvo.domain_name => {
name = dvo.resource_record_name
record = dvo.resource_record_value
type = dvo.resource_record_type
zone_id = dvo.domain_name == "example.org" ? data.aws_route53_zone.example_org.zone_id : data.aws_route53_zone.example_com.zone_id
}
}
allow_overwrite = true
name = each.value.name
records = [each.value.record]
ttl = 60
type = each.value.type
zone_id = each.value.zone_id
}
resource "aws_acm_certificate_validation" "example" {
certificate_arn = aws_acm_certificate.example.arn
validation_record_fqdns = [for record in aws_route53_record.example : record.fqdn]
}
Is the above the correct way to do this now? Am I going to run into issues doing it the way I currently am? Switching to the approach above would result in a destroy/recreate (I guess I could import the records myself, but that's painful).
Or is doing it my way going to result in unexpected diffs?
Edit
So, more specific for my issue. This is what I see when I look at the state:
terraform state show aws_acm_certificate.mycert
...
domain_name = "*.mydom.com"
domain_validation_options = [
{
domain_name = "*.mydom.com"
resource_record_name = "_11111111111.mydom.com."
resource_record_type = "CNAME"
resource_record_value = "_1111111111.11111111.acm-validations.aws."
},
{
domain_name = "mydom.com"
resource_record_name = "_11111111111.mydom.com."
resource_record_type = "CNAME"
resource_record_value = "_1111111111.111111111.acm-validations.aws."
},
]
...
By using sort I'm effectively relying on positional indexing, like count, which of course results in a destroy/recreate if the order changes. But in my case I think that's unlikely? I also don't fully understand the difference between using only the values from the wildcard validation entry and using both of them.
The AWS Terraform provider was recently upgraded to version 3.0. This version comes with a list of breaking changes. I recommend consulting the AWS provider 3.0 upgrade guide.
The issue you are encountering is because the domain_validation_options attribute is now a set instead of a list. From that guide:
Since the domain_validation_options attribute changed from a list to a set and sets cannot be indexed in Terraform, the recommendation is to update the configuration to use the more stable resource for_each support instead of count
I recommend using the new for_each syntax, as the upgrade guide recommends, in order to avoid unexpected diffs. The guide states that you will need to use terraform state mv to move the old configuration state to the new configuration, in order to prevent the resources from being recreated.
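For example, with hypothetical addresses matching the resource and state shown in the question (where the for_each key becomes the validation option's domain name), the move might look like:
terraform state mv \
  'aws_route53_record.derps' \
  'aws_route53_record.derps["*.mydom.com"]'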
This is the same problem we were facing just now. We use for_each to define hosting for multiple sites according to provided local variables, and since we already use for_each, we can't use it as a workaround for this change. Unfortunate.
I didn't want to go with sort, so I tried what Jimmy wrote, but it didn't work for this case because the output was still indexed; I fixed it by using [0] instead of [*]:
resource "aws_route53_record" "cert_validation" {
for_each = local.web_pages
allow_overwrite = true
name = tolist(aws_acm_certificate.cert[each.key].domain_validation_options)[0].resource_record_name
type = tolist(aws_acm_certificate.cert[each.key].domain_validation_options)[0].resource_record_type
records = [tolist(aws_acm_certificate.cert[each.key].domain_validation_options)[0].resource_record_value]
zone_id = var.aws_hosted_zone
ttl = 60
}
works for us now ;)
These 3 sets of code below all work (I used Terraform v0.15.0).
The only difference between the 1st and 2nd is [0] versus .0.
1st:
resource "aws_route53_record" "myRecord" {
zone_id = aws_route53_zone.myZone.zone_id
name = tolist(aws_acm_certificate.myCert.domain_validation_options)[0].resource_record_name
type = tolist(aws_acm_certificate.myCert.domain_validation_options)[0].resource_record_type
records = [tolist(aws_acm_certificate.myCert.domain_validation_options)[0].resource_record_value]
ttl = "60"
allow_overwrite = true
}
2nd:
resource "aws_route53_record" "myRecord" {
zone_id = aws_route53_zone.myZone.zone_id
name = tolist(aws_acm_certificate.myCert.domain_validation_options).0.resource_record_name
type = tolist(aws_acm_certificate.myCert.domain_validation_options).0.resource_record_type
records = [tolist(aws_acm_certificate.myCert.domain_validation_options).0.resource_record_value]
ttl = "60"
allow_overwrite = true
}
3rd:
resource "aws_route53_record" "myRecord" {
for_each = {
for dvo in aws_acm_certificate.myCert.domain_validation_options : dvo.domain_name => {
name = dvo.resource_record_name
record = dvo.resource_record_value
type = dvo.resource_record_type
}
}
name = each.value.name
records = [each.value.record]
ttl = 60
type = each.value.type
zone_id = aws_route53_zone.myZone.zone_id
allow_overwrite = true
}
You can also cheat with tolist, e.g. rewrite aws_acm_certificate.mycert.domain_validation_options[*].resource_record_name as tolist(aws_acm_certificate.mycert.domain_validation_options)[*].resource_record_name.
I had to do this in a module, as that particular resource already had a count in it, and I knew I'd only ever have one entry.
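As a concrete sketch of that count-based case (the resource and variable names here are hypothetical, and it assumes only one validation record is ever needed):
resource "aws_route53_record" "validation" {
  # count was already present on this resource in the module, as described above
  count   = var.create_dns_validation ? 1 : 0
  zone_id = var.my_zone_id
  name    = tolist(aws_acm_certificate.mycert.domain_validation_options)[0].resource_record_name
  type    = tolist(aws_acm_certificate.mycert.domain_validation_options)[0].resource_record_type
  records = [tolist(aws_acm_certificate.mycert.domain_validation_options)[0].resource_record_value]
  ttl     = 60
}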