I have a cert set up in the London region and attached to a load balancer listener, which works perfectly. I am now attempting to create another cert from the same Route53 domain and attach it to a listener, but this time in the Ireland region.
My Terraform looks like this:
resource "aws_acm_certificate" "default" {
count = var.prod ? 1 : 0
domain_name = "www.example.uk"
subject_alternative_names = [
"example.uk",
]
validation_method = "DNS"
}
resource "aws_route53_record" "validation" {
count = var.prod ? 1 : 0
name = aws_acm_certificate.default[count.index].domain_validation_options[count.index].resource_record_name
type = aws_acm_certificate.default[count.index].domain_validation_options[count.index].resource_record_type
zone_id = "Z0725470IF9R8J77LPTU"
records = [
aws_acm_certificate.default[count.index].domain_validation_options[count.index].resource_record_value]
ttl = "60"
}
resource "aws_route53_record" "validation_alt1" {
count = var.prod ? 1 : 0
name = aws_acm_certificate.default[count.index].domain_validation_options[count.index + 1].resource_record_name
type = aws_acm_certificate.default[count.index].domain_validation_options[count.index + 1].resource_record_type
zone_id = "Z0725470IF9R8J77LPTU"
records = [
aws_acm_certificate.default[count.index].domain_validation_options[count.index + 1].resource_record_value]
ttl = 60
}
resource "aws_acm_certificate_validation" "default" {
count = var.prod ? 1 : 0
certificate_arn = aws_acm_certificate.default[count.index].arn
validation_record_fqdns = [
aws_route53_record.validation[count.index].fqdn,
aws_route53_record.validation_alt1[count.index].fqdn,
]
}
This worked perfectly the first time I set it up in the London region, but when I try to run it in the Ireland region I get the following errors:
I'm not 100% sure why the cert validation seems to bring back no records.
The domain_validation_options attribute changed in AWS provider version 3: it used to be a list and is now a set. So you have two options:
Version lock the aws provider to version 2:
provider "aws" {
version = "~>2"
}
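If you are on Terraform 0.13 or later, the same pin can instead live in a required_providers block (the "~> 2.70" constraint below is just an example of a 2.x pin, not something from your config):
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 2.70" # example 2.x constraint
    }
  }
}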
Update the code to work with the new provider version. For that you need to replace count with for_each and make updates similar to those shown below.
resource "aws_route53_record" "existing" {
for_each = {
for dvo in aws_acm_certificate.existing.domain_validation_options : dvo.domain_name => {
name = dvo.resource_record_name
record = dvo.resource_record_value
type = dvo.resource_record_type
}
}
allow_overwrite = true
name = each.value.name
records = [each.value.record]
ttl = 60
type = each.value.type
zone_id = data.aws_route53_zone.public_root_domain.zone_id
}
resource "aws_acm_certificate_validation" "existing" {
certificate_arn = aws_acm_certificate.existing.arn
validation_record_fqdns = [for record in aws_route53_record.existing : record.fqdn]
}
You can check this for more details: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/guides/version-3-upgrade#resource-aws_acm_certificate
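If you want to keep the var.prod gate from your original code, one possible adaptation (an untested sketch; it flattens the splat so the for_each map is simply empty when the certificate isn't created) is:
resource "aws_route53_record" "validation" {
  # One record per validation option; empty when count on the certificate is 0.
  for_each = {
    for dvo in flatten(aws_acm_certificate.default[*].domain_validation_options) :
    dvo.domain_name => {
      name   = dvo.resource_record_name
      record = dvo.resource_record_value
      type   = dvo.resource_record_type
    }
  }

  allow_overwrite = true
  name            = each.value.name
  records         = [each.value.record]
  ttl             = 60
  type            = each.value.type
  zone_id         = "Z0725470IF9R8J77LPTU"
}

resource "aws_acm_certificate_validation" "default" {
  count                   = var.prod ? 1 : 0
  certificate_arn         = aws_acm_certificate.default[count.index].arn
  validation_record_fqdns = [for record in aws_route53_record.validation : record.fqdn]
}
With this, the separate validation_alt1 resource is no longer needed, since for_each creates one record per validation option.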
It looks like the validation record is no longer an array. I'm guessing you upgraded the AWS Terraform provider at some point since you last ran this (if you don't have the version pinned, it could have updated automatically). There have been some breaking changes to the aws_acm_certificate_validation resource. I suggest you look at the latest example usage in the documentation and refactor your Terraform.
Related
I'm creating a DNS record in AWS using Terraform (v0.12.10) and want to get the name of the ALB, which was already created (in another module).
I've read the documentation but didn't find a solution. Is there any way to do it?
resource "aws_route53_record" "dns" {
provider = <AWS>
zone_id = <ZONE_ID>
name = <NAME>
ttl = 30
type = "CNAME"
records = <LB_created_previously>
}
You basically have two options here.
Option 1 - if your resource creation (in your case the DNS records) and the ALB created by the module live in the same place (same terraform.tfstate file), this is more or less covered by samtoddler's answer above; with your pseudo-code it will look something like this:
resource "aws_route53_record" "dns" {
provider = <AWS>
zone_id = <ZONE_ID>
name = <NAME>
ttl = 30
type = "CNAME"
records = [module.<LB__module_definiton_name>.elb_dns_name]
}
where in your ELB module you would need something like:
output "elb_dns_name" {
value = aws_elb.<LB_created_previously>.dns_name
}
Option 2 - if your DNS resource code lives in a different folder / Terraform state, you'll need to resort to a terraform_remote_state data source; the module still needs the same elb_dns_name output defined:
data "terraform_remote_state" "elb" {
backend = "mybackendtype"
config = {
...
}
}
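For example, with an S3 backend the config could look like this (bucket, key, and region are placeholders for wherever the ELB state is stored):
data "terraform_remote_state" "elb" {
  backend = "s3"

  config = {
    bucket = "my-terraform-state"    # placeholder bucket name
    key    = "elb/terraform.tfstate" # placeholder state key
    region = "eu-west-1"             # placeholder region
  }
}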
And then your code will look like this:
resource "aws_route53_record" "dns" {
provider = <AWS>
zone_id = <ZONE_ID>
name = <NAME>
ttl = 30
type = "CNAME"
records = [data.terraform_remote_state.elb.outputs.elb_dns_name]
}
By the way, when you have an ELB it's better to use an alias record instead of a CNAME (an alias can sit at the zone apex and Route 53 doesn't charge for alias queries to AWS resources). Based on the Terraform documentation for the aws_route53_record resource and your pseudo-code, that would be:
resource "aws_route53_record" "dns" {
zone_id = <ZONE_ID>
name = <NAME>
type = "A"
alias {
name = module.<LB__module_definiton_name>.elb_dns_name
zone_id = module.<LB__module_definiton_name>.elb_zone_id
evaluate_target_health = true
}
}
Module definition
$ cat module/out.tf
output "somevar" {
value = "somevalue"
}
Using Module:
$ cat main.tf
module "getname" {
source = "./module"
}
resource "aws_sns_topic" "user_updates" {
name = module.getname.somevar
}
Directory structure:
$ tree
.
├── main.tf
├── module
│   └── out.tf
└── terraform.tfstate
terraform apply
$ terraform apply
..
  + create

Terraform will perform the following actions:

  # aws_sns_topic.user_updates will be created
  + resource "aws_sns_topic" "user_updates" {
      + arn    = (known after apply)
      + id     = (known after apply)
      + name   = "somevalue"
      + policy = (known after apply)
    }
...
Enter a value: yes
aws_sns_topic.user_updates: Creating...
aws_sns_topic.user_updates: Creation complete after 1s [id=arn:aws:sns:us-east-1:123456789:somevalue]
Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
Module Composition
I'm trying to write a Terraform module that creates Route53 entries for a geolocated app.
My code seems to work fine the first time, but then it breaks with the error in the title.
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "3.3.0"
    }
  }
}

resource "aws_route53_record" "cdn_cname" {
  zone_id         = "${var.route53_zone_id}"
  name            = "${var.domain}"
  type            = "CNAME"
  ttl             = 300
  allow_overwrite = true
  set_identifier  = "${var.envName}Id"
  records         = ["${var.records}"]

  geolocation_routing_policy {
    continent = "${var.continent}"
    country   = "${var.country}"
  }
}
The error seems explicit enough, but when I try to manually create a CNAME with the same record as another one it seems to work fine. What's the difference between them?
I have an issue with domain verification on SES. I created a module that is responsible for creating a domain identity on SES and putting the TXT record in the Route53 hosted zone, yet the domain status on SES is still "Pending Verification".
My module uses these resources:
resource "aws_ses_domain_identity" "domain" {
domain = var.domain
}
resource "aws_route53_record" "domain_amazonses_verification_record" {
count = var.zone_id != "" ? 1 : 0
zone_id = var.zone_id
name = format("_amazonses.%s", var.domain)
type = "TXT"
ttl = var.ses_ttl
records = [aws_ses_domain_identity.domain.verification_token]
}
# main.tf
module "my-module" {
  source  = "./modules/myses"
  domain  = "mydomain.com"
  zone_id = var.zoneId
}
Did I miss something?
I believe you are missing the aws_ses_domain_identity_verification resource, which waits for the SES verification to complete once the Route53 TXT record is in place:
resource "aws_ses_domain_identity_verification" "example_verification" {
domain = aws_ses_domain_identity.domain.id
depends_on = [aws_route53_record.domain_amazonses_verification_record]
}
I have a hosted zone as customdomain.com and 2 regional API Gateways hosted on AWS.
I want to configure a common CNAME, myapp.customdomain.com, that resolves to APIGW_REGION_ONE_EXECUTEAPI_URI or APIGW_REGION_TWO_EXECUTEAPI_URI based on latency.
How can I do this? I am confused between a custom domain name on API Gateway and a Route 53 CNAME record. Any help or guidance is highly appreciated.
The custom domain name on API Gateway allows it to respond to names other than the AWS-provided one (it works via SNI) and to serve a certificate that has at least one SAN matching your chosen name. You will need to define that, as well as the DNS records, so that people can resolve the API Gateway.
As for latency-based records, you will need to create multiple Route53 records and define the latency policy in each of them. The aws_route53_record docs show how you can create weighted records that shift 10% of all traffic to a different target:
resource "aws_route53_record" "www-dev" {
zone_id = "${aws_route53_zone.primary.zone_id}"
name = "www"
type = "CNAME"
ttl = "5"
weighted_routing_policy {
weight = 10
}
set_identifier = "dev"
records = ["dev.example.com"]
}
resource "aws_route53_record" "www-live" {
zone_id = "${aws_route53_zone.primary.zone_id}"
name = "www"
type = "CNAME"
ttl = "5"
weighted_routing_policy {
weight = 90
}
set_identifier = "live"
records = ["live.example.com"]
}
In your case you are going to want something like this:
data "aws_region" "region_one" {}
data "aws_route53_zone" "selected" {
name = "example.com."
}
resource "aws_api_gateway_domain_name" "example" {
domain_name = "api.example.com"
certificate_name = "example-api"
certificate_body = "${file("${path.module}/example.com/example.crt")}"
certificate_chain = "${file("${path.module}/example.com/ca.crt")}"
certificate_private_key = "${file("${path.module}/example.com/example.key")}"
}
resource "aws_route53_record" "region_one" {
zone_id = "${data.aws_route53_zone.selected.zone_id}"
name = "${aws_api_gateway_domain_name.region_one.domain_name}"
type = "A"
latency_routing_policy {
region = "${data.aws_region.region_one.name}"
}
set_identifier = "${data.aws_region.region_one.name}"
alias {
name = "${aws_api_gateway_domain_name.region_one.regional_domain_name}"
zone_id = "${aws_api_gateway_domain_name.region_one.regional_zone_id}"
evaluate_target_health = true
}
}
Then place that where you create each API Gateway, or use multiple providers with different region configurations to apply both at the same time, as sketched below.
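For the multi-provider route, a rough sketch (the region names, certificate variable, and resource name below are illustrative, not from the original setup) could be:
# Two provider configurations, one per region.
provider "aws" {
  alias  = "region_one"
  region = "eu-west-1" # example region
}

provider "aws" {
  alias  = "region_two"
  region = "us-east-1" # example region
}

# The second regional custom domain name is created through the second provider.
resource "aws_api_gateway_domain_name" "region_two" {
  provider                 = aws.region_two
  domain_name              = "api.example.com"
  regional_certificate_arn = var.region_two_certificate_arn # assumed ACM cert ARN in that region

  endpoint_configuration {
    types = ["REGIONAL"]
  }
}
Its latency record then points at aws_api_gateway_domain_name.region_two.regional_domain_name and regional_zone_id with its own set_identifier, mirroring the region_one record above.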
Does anyone have an example of a latency-based Route53 (AWS) record set using Terraform?
I want to know all the attributes I can pass, including evaluate_target_health for alias records.
resource "aws_route53_record" "www" {
zone_id = "${aws_route53_zone.primary.zone_id}"
name = "example.com"
type = "A"
alias {
name = "${aws_elb.main.dns_name}"
zone_id = "${aws_elb.main.zone_id}"
evaluate_target_health = true
}
latency_routing_policy {
region = ${var.region}
}
}
https://www.terraform.io/docs/providers/aws/r/route53_record.html#latency_routing_policy
https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-policy.html#routing-policy-latency