I have a hosted zone, customdomain.com, and two regional API Gateways hosted on AWS.
I want to configure a common name, myapp.customdomain.com, that routes calls to APIGW_REGION_ONE_EXECUTEAPI_URI or APIGW_REGION_TWO_EXECUTEAPI_URI based on latency.
How can I do this? I am confused about the difference between a custom domain name on API Gateway and a Route 53 CNAME record. Any help or guidance is highly appreciated.
A custom domain name on API Gateway allows it to respond to names other than the AWS-provided one (this works via SNI) and to present a certificate with at least one SAN matching your chosen name. You will therefore need to define the custom domain name, the certificate, and the DNS records so that clients can resolve the API Gateway.
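As a sketch of the certificate part, assuming you want ACM to issue and DNS-validate the certificate (the zone and domain names below are taken from your question; everything else is a placeholder):

```hcl
data "aws_route53_zone" "selected" {
  name = "customdomain.com."
}

# Request a certificate for the custom domain.
# For a regional API Gateway, issue it in the same region as the gateway.
resource "aws_acm_certificate" "api" {
  domain_name       = "myapp.customdomain.com"
  validation_method = "DNS"
}

# Create the DNS validation records in the hosted zone.
resource "aws_route53_record" "api_cert_validation" {
  for_each = {
    for dvo in aws_acm_certificate.api.domain_validation_options : dvo.domain_name => {
      name   = dvo.resource_record_name
      type   = dvo.resource_record_type
      record = dvo.resource_record_value
    }
  }

  zone_id = data.aws_route53_zone.selected.zone_id
  name    = each.value.name
  type    = each.value.type
  ttl     = 60
  records = [each.value.record]
}

# Wait until ACM has validated the certificate.
resource "aws_acm_certificate_validation" "api" {
  certificate_arn         = aws_acm_certificate.api.arn
  validation_record_fqdns = [for r in aws_route53_record.api_cert_validation : r.fqdn]
}
```

The resulting certificate ARN can then be passed to the API Gateway custom domain name resource.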
As for latency-based records, you will need to create multiple Route 53 records and define a latency routing policy on each of them. The aws_route53_record docs show how to create weighted records that shift 10% of all traffic to a different target:
resource "aws_route53_record" "www-dev" {
  zone_id = aws_route53_zone.primary.zone_id
  name    = "www"
  type    = "CNAME"
  ttl     = 5

  weighted_routing_policy {
    weight = 10
  }

  set_identifier = "dev"
  records        = ["dev.example.com"]
}
resource "aws_route53_record" "www-live" {
  zone_id = aws_route53_zone.primary.zone_id
  name    = "www"
  type    = "CNAME"
  ttl     = 5

  weighted_routing_policy {
    weight = 90
  }

  set_identifier = "live"
  records        = ["live.example.com"]
}
In your case you are going to want something like this:
data "aws_region" "region_one" {}

data "aws_route53_zone" "selected" {
  name = "example.com."
}

resource "aws_api_gateway_domain_name" "region_one" {
  domain_name              = "api.example.com"
  regional_certificate_arn = var.certificate_arn # ACM certificate covering api.example.com, issued in this region

  endpoint_configuration {
    types = ["REGIONAL"]
  }
}

resource "aws_route53_record" "region_one" {
  zone_id = data.aws_route53_zone.selected.zone_id
  name    = aws_api_gateway_domain_name.region_one.domain_name
  type    = "A"

  latency_routing_policy {
    region = data.aws_region.region_one.name
  }

  set_identifier = data.aws_region.region_one.name

  alias {
    name                   = aws_api_gateway_domain_name.region_one.regional_domain_name
    zone_id                = aws_api_gateway_domain_name.region_one.regional_zone_id
    evaluate_target_health = true
  }
}
Place that alongside each API Gateway you create, or use multiple providers with different region configurations to apply both at the same time.
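A minimal sketch of the multiple-provider approach (the region choices and the module layout are assumptions, not something from your setup):

```hcl
provider "aws" {
  alias  = "use1"
  region = "us-east-1"
}

provider "aws" {
  alias  = "euw1"
  region = "eu-west-1"
}

# Instantiate the same gateway + Route 53 record configuration once per region,
# passing a different provider each time.
module "api_us_east_1" {
  source = "./modules/regional_api"
  providers = {
    aws = aws.use1
  }
}

module "api_eu_west_1" {
  source = "./modules/regional_api"
  providers = {
    aws = aws.euw1
  }
}
```

Both latency records then land in the same hosted zone, distinguished by their set_identifier values.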
I have created two AWS NLBs in the same region through Terraform. Now I have to create DNS records in Route 53 with the alias type.
But there is an error:
Error: [ERR]: Error building changeset: InvalidChangeBatch: [Tried to create an alias that targets 11111111111111111111111-xxxxxxxxxxxx.elb.eu-west-2.amazonaws.com., type A in zone ZHURV0000000, but the alias target name does not lie within the target zone] status code: 400, request id: 2xxxxxxxxxxxxxxxxx
It was working fine when I had only a single NLB.
We need the ELB zone ID to make a DNS entry with the alias type, and each NLB has a different zone ID, but Terraform provides only a single zone ID through the code below:
data "aws_elb_hosted_zone_id" "main" {}
I'm referencing the documentation below:
https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/elb_hosted_zone_id
The underlying problem is: how do I get the 2nd, 3rd, ... zone ID of an ELB in the same region?
There is no such thing as a second or third zone ID for Elastic Load Balancing: there is one per region, shared by everyone. You can find the IDs here: https://docs.aws.amazon.com/general/latest/gr/elb.html.
You can reuse the same data block for multiple records. What changes is the DNS name, which is unique for each load balancer:
resource "aws_route53_record" "www" {
  ...
  type = "A"

  alias {
    name    = aws_lb.my_network_load_balancer.dns_name # This changes based on load balancer
    zone_id = data.aws_elb_hosted_zone_id.main.id      # This remains the same for each record
  }
}
Update:
data "aws_elb_hosted_zone_id" "main" {} does not work with network load balancers. We can get the canonical hosted zone ID by referencing an attribute of the aws_lb resource instead: aws_lb.my_network_load_balancer.zone_id.
I finally arrived at the solution below for the NLB DNS entry.
Here, I'm fetching the zone ID from the NLB name.
Note: "aws_elb_hosted_zone_id" will give you the zone ID of an ALB, not an NLB.
resource "aws_route53_zone" "this" {
  name = lower(var.base_domain_name)
}

# get DNS zone
data "aws_route53_zone" "this" {
  name = lower(var.base_domain_name)

  depends_on = [
    aws_route53_zone.this
  ]
}

data "aws_lb" "my_nlb" {
  name = "my-nlb"
}

resource "aws_route53_record" "nlb_dns" {
  zone_id = data.aws_route53_zone.this.zone_id
  name    = "my-nlb-dns"
  type    = "A"

  alias {
    name                   = data.aws_lb.my_nlb.dns_name
    zone_id                = data.aws_lb.my_nlb.zone_id # this is the fix
    evaluate_target_health = true
  }
}
Here is a snippet showing how I do this:
data "aws_lb_hosted_zone_id" "route53_zone_id_nlb" {
  region             = var.region
  load_balancer_type = "network"
}

resource "aws_route53_record" "route53_wildcard" {
  depends_on = [helm_release.nginx_ingress]

  zone_id = data.terraform_remote_state.aws_remote.outputs.domain_zone_id # Replace with your zone ID
  name    = "*.${var.domain}" # Replace with your subdomain. Note: not valid with "apex" domains, e.g. example.com
  type    = "A"

  alias {
    name                   = data.kubernetes_service.nginx_ingress.status.0.load_balancer.0.ingress.0.hostname
    zone_id                = data.aws_lb_hosted_zone_id.route53_zone_id_nlb.id
    evaluate_target_health = false
  }
}
Attention!
Don't confuse the zone_id of the load balancer (it is static per region; see the AWS documentation) with the zone_id of the Route 53 zone itself.
I'm trying to write a Terraform module that creates Route 53 entries for a geolocated app.
My code seems to work fine the first time, but then it breaks with the error in the title.
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "3.3.0"
    }
  }
}

resource "aws_route53_record" "cdn_cname" {
  zone_id         = var.route53_zone_id
  name            = var.domain
  type            = "CNAME"
  ttl             = 300
  allow_overwrite = true
  set_identifier  = "${var.envName}Id"
  records         = var.records # assuming var.records is a list of strings

  geolocation_routing_policy {
    continent = var.continent
    country   = var.country
  }
}
The error seems explicit enough, but when I try to manually create a CNAME with the same record as another one it seems to work fine. What's the difference between them?
I have an issue with domain verification on SES. I created a module which is responsible for creating a domain identity on SES and putting the TXT record in the Route 53 hosted zone, yet the domain status on SES stays on "Pending Verification".
My module uses these resources:
resource "aws_ses_domain_identity" "domain" {
  domain = var.domain
}

resource "aws_route53_record" "domain_amazonses_verification_record" {
  count   = var.zone_id != "" ? 1 : 0
  zone_id = var.zone_id
  name    = format("_amazonses.%s", var.domain)
  type    = "TXT"
  ttl     = var.ses_ttl
  records = [aws_ses_domain_identity.domain.verification_token]
}
# main.tf
module "my-module" {
  source  = "./modules/myses"
  domain  = "mydomain.com"
  zone_id = var.zoneId
}
Did I miss something?
I believe you are missing the aws_ses_domain_identity_verification resource.
resource "aws_ses_domain_identity_verification" "example_verification" {
  domain     = aws_ses_domain_identity.domain.id
  depends_on = [aws_route53_record.domain_amazonses_verification_record]
}
Does anyone have an example of a latency-based Route 53 (AWS) record set using Terraform?
I want to know all the attributes I can pass, including evaluate_target_health for alias records.
resource "aws_route53_record" "www" {
  zone_id        = aws_route53_zone.primary.zone_id
  name           = "example.com"
  type           = "A"
  set_identifier = var.region # required whenever a routing policy is set

  alias {
    name                   = aws_elb.main.dns_name
    zone_id                = aws_elb.main.zone_id
    evaluate_target_health = true
  }

  latency_routing_policy {
    region = var.region
  }
}
https://www.terraform.io/docs/providers/aws/r/route53_record.html#latency_routing_policy
https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-policy.html#routing-policy-latency
When Terraform runs the following, it apparently picks random NS servers:
resource "aws_route53_zone" "example_com" {
  name = "example.com"
}
The problem with this is that the registered domain I have in AWS already has specified NS servers. Is there a way to specify the NS servers this resource uses, or maybe to change the hosted domain's NS servers to what is picked when the zone is created?
When you create a new zone, AWS generates the name server list for you. Here is an example based on the Terraform docs:
resource "aws_route53_zone" "main" {
  name = "example.com"
}

resource "aws_route53_zone" "dev" {
  name = "dev.example.com"

  tags = {
    Environment = "dev"
  }
}

# Delegate dev.example.com from the parent zone to the dev zone's name servers.
resource "aws_route53_record" "dev-ns" {
  zone_id = aws_route53_zone.main.zone_id
  name    = "dev.example.com"
  type    = "NS"
  ttl     = 30
  records = aws_route53_zone.dev.name_servers
}
https://www.terraform.io/docs/providers/aws/r/route53_zone.html
The API returns a delegation set after the call to CreateHostedZone.
http://docs.aws.amazon.com/Route53/latest/APIReference/API_CreateHostedZone.html#API_CreateHostedZone_ResponseSyntax
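If you need name servers that stay stable even when a zone is destroyed and re-created, a reusable delegation set is one option. A minimal sketch (the reference name is a placeholder):

```hcl
# A reusable delegation set pins a fixed set of name servers
# that can be shared by multiple hosted zones.
resource "aws_route53_delegation_set" "main" {
  reference_name = "primary"
}

resource "aws_route53_zone" "example" {
  name              = "example.com"
  delegation_set_id = aws_route53_delegation_set.main.id
}

# The fixed name servers to configure at the registrar.
output "name_servers" {
  value = aws_route53_delegation_set.main.name_servers
}
```

Zones created with delegation_set_id always use the set's name servers, so the registrar entry only needs to be configured once.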
I have been able to specify DNS servers, but I would imagine that AWS is allocating servers based on availability, load, etc., so you may want to think hard about baking these configs in.
resource "aws_route53_record" "primary-ns" {
  zone_id = aws_route53_zone.primary.zone_id
  name    = "www.bacon.rocks"
  type    = "NS"
  ttl     = 172800
  records = ["ns-869.awsdns-44.net", "ns-1237.awsdns-26.org", "ns-1846.awsdns-38.co.uk", "ns-325.awsdns-40.com"]
}
or something along those lines