How to parse instance id in Terraform

We want to get away from using incremental numbers for the host names of our instances. These machines are not in an auto-scaling group, so we could keep the prod-webserver-001, prod-webserver-002, etc. convention if we wanted, but it just doesn't make sense with tags being available.
As we've been doing with our auto-scaling groups, we want to use part of the instance id. I'm able to accomplish this with a post-creation script on our ASG servers, but unlike that approach, I want Terraform to keep track of the DNS record so that it gets destroyed when I issue a terraform destroy.
Ideally we want the first six characters after the hyphen in the instance_id. For example, if the instance_id is i-0876cr2456 we want to use 0876cr within the name:
prod-webserver-0876cr
prod-webserver-09a24i
Terraform code:
resource "aws_instance" "instance" {
...
...
}
resource "aws_route53_record" "instance_dns_a" {
count = "${var.num_instances}"
zone_id = "${var.internal_zone_id}"
# New line but we want the parsed version
name = "${aws_instance.instance.id}"
# old that works
name = "${format(prod-${service_name}-%03d", count.index + 1)}"
}

You could extract the first 6 characters after the i- prefix from the instance id by using substr:
$ terraform console
> substr("i-123456adgcgabsadh", 2, 6)
123456
So to use it in your Route53 record you'd want to use something like:
resource "aws_instance" "instance" {
...
...
}
resource "aws_route53_record" "instance_dns_a" {
count = "${var.num_instances}"
zone_id = "${var.internal_zone_id}"
name = "prod-${var.service_name}-${substr(aws_instance.instance.*.id[count.index], 2, 6}"
}
I would be worried about truncating the instance ID for something that needs to be unique, though: you will inevitably end up with instances whose ids share the first six characters (say i-123456a... and i-123456b...), and then you'll overwrite the prod-webserver-123456 record with the IP address of the second instance and lose the first record. Any reason you need to truncate here? You have 63 characters to play with for the first label in DNS, which should be enough to leave this untruncated, no?
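If you do want to drop only the i- prefix and keep the rest of the id, a minimal sketch on Terraform 0.12+ (reusing the resource and variable names from above) could use trimprefix instead of substr:

resource "aws_route53_record" "instance_dns_a" {
  count   = var.num_instances
  zone_id = var.internal_zone_id

  # Keeps the full unique suffix of the instance id, e.g. i-0876cr2456 -> 0876cr2456
  name = "prod-${var.service_name}-${trimprefix(aws_instance.instance[count.index].id, "i-")}"
}

That avoids the collision risk entirely while still dropping the redundant prefix.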

Related

multiple EC2 Instances matched; use additional constraints to reduce matches to a single EC2 Instance

I am using the following script to query a particular instance. There will be only one running instance with the given name; it is possible that another instance with the same name exists, but in a different instance state.
How do I filter on the instance state so that only the instance in the running state is retrieved?
data "aws_instance" "ec2" {
filter {
name = "tag:Name"
values = ["dev-us-west-2-myinstance"]
}
}
Currently I get the following error:
multiple EC2 Instances matched; use additional constraints to reduce
matches to a single EC2 Instance
The Terraform documentation links to the AWS documentation for the describe-instances filters.
That documentation indicates you should do the following:
data "aws_instance" "ec2" {
filter {
name = "tag:Name"
values = ["dev-us-west-2-myinstance"]
}
filter {
name = "instance-state-name"
values = ["running"]
}
}
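For reference, the data source's filter blocks map directly onto the describe-instances API, so the equivalent AWS CLI call (with the same example Name tag) would be:

$ aws ec2 describe-instances \
    --filters "Name=tag:Name,Values=dev-us-west-2-myinstance" \
              "Name=instance-state-name,Values=running"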

Trigger random_id resource recreation on rds instance destroy and recreate

Folks, I'm trying to find a way, with the Terraform random_id resource, to recreate and provide a new random value when the RDS instance is destroyed and recreated due to a change, say when the username on the RDS instance changes.
I'm trying to attach this random value to the final_snapshot_identifier of the aws_db_instance resource, so that the final snapshot has a unique id every time it gets created upon the RDS instance being destroyed.
Current code:
resource "random_id" "snap_id" {
byte_length = 8
}
locals {
inst_id = "test-rds-inst"
inst_snap_id = "${local.inst_id}-snap-${format("%.4s", random_id.snap_id.dec)}"
}
resource "aws_db_instance" "rds" {
.....
identifier = local.inst_id
final_snapshot_identifier = local.inst_snap_id
skip_final_snapshot = false
username = "foo"
apply_immediately = true
.....
}
output "snap_id" {
value = aws_db_instance.rds.final_snapshot_identifier
}
Output after terraform apply:
snap_id = "test-rds-inst-snap-5553"
Use cases I'm trying out:
#1:
Modify a value in the RDS instance to simulate a destroy & recreate:
Modify username to "foo-tmp"
terraform apply -auto-approve
Output:
snap_id = "test-rds-inst-snap-5553"
I was expecting the random_id to kick in and output a unique id, but it didn't.
Observation:
rds instance in deleting state
snapshot "test-rds-inst-snap-5553" in creating state
rds instance recreated and in available state
snapshot "test-rds-inst-snap-5553" in available state
#2:
Modify a value in the RDS instance again to simulate another destroy & recreate:
Modify username to "foo-new"
terraform apply -auto-approve
I kind of expected the error below, since the snap id didn't get a new value in the prior attempt, but tried anyway.
Observation:
Error: error deleting DB Instance (test-rds-inst): DBSnapshotAlreadyExists: Cannot create the snapshot because a snapshot with the identifier test-rds-inst-snap-5553 already exists.
I'm aware of the keepers{} map for the random_id resource, but I'm not sure what from the RDS instance I need to put in the map so that the random_id resource is recreated and ends up providing a new unique value for the snap_id suffix.
I also feel that using any attribute of the RDS instance in the random_id keepers might cause a circular dependency issue. I may be wrong, but I haven't tried it.
Any suggestions will be helpful. Thanks.
The easiest way to do this would be to use taint on the random_id resource, as per the documentation [1]:
To force a random result to be replaced, the taint command can be used to produce a new result on the next run.
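For example, with the resource address from the question:
$ terraform taint random_id.snap_id
$ terraform apply -auto-approve
On Terraform 0.15.2 and later you can do the same in one step with terraform apply -replace=random_id.snap_id.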
Alternatively, looking at the example from the documentation, you could do something like:
resource "random_id" "snap_id" {
byte_length = 8
keepers {
snapshot_id = var.snapshot_id
}
}
resource "aws_db_instance" "rds" {
.....
identifier = local.inst_id
final_snapshot_identifier = random_id.snap_id.keepers.snapshot_id
skip_final_snapshot = false
username = "foo"
apply_immediately = true
.....
}
This means that until the value of the variable snapshot_id changes, the random_id will generate the same result. I'm not sure if that would work with locals, but you could try replacing var.snapshot_id with local.inst_snap_id. If that works, you could then name the snapshot using the built-in functions formatdate [2] and timestamp [3] to create a snapshot id tied to the time of the apply, something like:
locals {
  inst_id      = "test-rds-inst"
  snap_time    = formatdate("YYYYMMDD", timestamp())
  inst_snap_id = "${local.inst_id}-snap-${format("%.4s", random_id.snap_id.dec)}-${local.snap_time}"
}
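For example, in terraform console (the date shown is from a hypothetical run; timestamp() returns the current time):
$ terraform console
> formatdate("YYYYMMDD", timestamp())
20220103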
[1] https://registry.terraform.io/providers/hashicorp/random/latest/docs#resource-keepers
[2] https://www.terraform.io/language/functions/formatdate
[3] https://www.terraform.io/language/functions/timestamp

How to create multi-value SRV DNS record with terraform in AWS Route53 service?

I am trying to create a multi-value SRV DNS entry in the AWS Route53 service via Terraform. The values should be taken from instance tags. Because this is only one record, the count approach is not applicable.
The trick is that I have 10 instances, but they need to be filtered first by finding specific tags. Based on the resulting list, the SRV record should be created using the Name tag assigned to each instance.
Any idea how to approach this issue?
Thanks in advance for any tip.
I did it like this:
resource "aws_instance" "myservers" {
count = 3
#.... other configuration....
}
resource "aws_route53_record" "srv" {
zone_id = aws_route53_zone.myzone.zone_id
name = "_service"
type = "SRV"
records = [for server in data.aws_instance.myservers : "0 10 5000 ${server.private_ip}."]
}
Terraform's for expression is the key to the solution.
Regarding the SRV record in AWS Route 53: it should have one line per server, each line in the form priority weight port target (space is the delimiter). In the example above, 0 is the priority, 10 is the weight, 5000 is the port and the last field is the server IP (or name).
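If the instances are not managed in the same configuration and need to be discovered by tag first, as the question describes, a sketch using the plural aws_instances data source could look like the following (the Role tag and its value are hypothetical placeholders):

data "aws_instances" "tagged" {
  # Hypothetical tag identifying the instances that belong in the SRV record
  instance_tags = {
    Role = "service-node"
  }

  instance_state_names = ["running"]
}

resource "aws_route53_record" "srv" {
  zone_id = aws_route53_zone.myzone.zone_id
  name    = "_service"
  type    = "SRV"
  ttl     = 300

  # One "priority weight port target" line per matched instance
  records = [for ip in data.aws_instances.tagged.private_ips : "0 10 5000 ${ip}."]
}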

Multiple availability zones with terraform on AWS

The VPC I'm working on has 3 logical tiers: Web, App and DB. For each tier there is one subnet in each availability zone. Total of 6 subnets in the region I'm using.
I'm trying to create EC2 instances using a module and the count parameter, but I don't know how to tell Terraform to use the two subnets of the App tier. An additional constraint I have is to use static IP addresses (or some other way to get a deterministic private name).
I'm playing around with the resource
resource "aws_instance" "app_server" {
...
count = "${var.app_servers_count}"
# Not all at the same time, though!
availability_zone = ...
subnet_id = ...
private_ip = ...
}
Things I've tried/thought so far:
Use data "aws_subnet" "all_app_subnets" {...}, filter by name, get all the subnets that match and use them as a list. But aws_subnet cannot return a list;
Use data "aws_availability_zones" {...} to find all the zones. But I still have the problem of assigning the correct subnet;
Use data "aws_subnet_ids" {...} which looks like the best option. But apparently it doesn't have a filter option to match the networks namel
Pass the subnets IDs as list of strings to the module. But I don't want to hard code the IDs, it's not automation;
Hard code the subnets as data "aws_subnet" "app_subnet_1" {...}, data "aws_subnet" "app_subnet_2" {...} but then I have to use separate sets of variables for each subnet which I don't like;
Get information for each subnet like in the point above but then create a map to access it as a list. But it's not possibile to use interpolation in variables definition;
Not using modules and hard-code each instance for each environment. Mmmm... really?
I have really run out of ideas. It seems that nobody needs to deploy instances in specific subnetworks while keeping a good degree of abstraction. I only see examples where subnetworks are not specified or where people just use default values for everything. Is this really something so unusual?
Thanks in advance to everyone.
It is possible to evenly distribute instances across multiple zones using modulo.
variable "zone" {
description = "for single zone deployment"
default = "europe-west4-b"
}
variable "zones" {
description = "for multi zone deployment"
default = ["europe-west4-b", "europe-west4-c"]
}
resource "google_compute_instance" "default" {
count = "${var.role.count}"
...
zone = "${var.zone != "" ? var.zone: var.zones[ count.index % length(var.zones) ]}"
...
}
This distribution mechanism allows nodes to be distributed evenly across zones.
E.g. with zones = [A, B]: instance-1 will be in A, instance-2 will be in B, and instance-3 will be in A again.
Adding zone C to zones will shift instance-3 to C.
The count.index in the resource will throw an error if you have more instances than subnets. Use the element interpolation function from Terraform:
element(list, index) - Returns a single element from a list at the given index. If the index is greater than the number of elements, this function will wrap using a standard mod algorithm. This function only works on flat lists.
subnet_id = "${element(data.aws_subnet_ids.app_tier_ids.ids, count.index)}"
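For example, in terraform console on 0.12+ (with hypothetical subnet ids), the index wraps instead of erroring:
$ terraform console
> element(["subnet-aaaa", "subnet-bbbb"], 2)
subnet-aaaa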
In the end I figured out how to do it, using data "aws_subnet_ids" {...} and, more importantly, understanding that Terraform creates lists out of resources when using count:
variable "target_vpc" {}
variable "app_server_count" {}
variable "app_server_ip_start" {}
# Discover VPC
data "aws_vpc" "target_vpc" {
filter = {
name = "tag:Name"
values = [var.target_vpc]
}
}
# Discover subnet IDs. This requires the subnetworks to be tagged with Tier = "AppTier"
data "aws_subnet_ids" "app_tier_ids" {
vpc_id = data.aws_vpc.target_vpc.id
tags {
Tier = "AppTier"
}
}
# Discover subnets and create a list, one for each found ID
data "aws_subnet" "app_tier" {
count = length(data.aws_subnet_ids.app_tier_ids.ids)
id = data.aws_subnet_ids.app_tier_ids.ids[count.index]
}
resource "aws_instance" "app_server" {
...
# Create N instances
count = var.app_server_count
# Use the "count.index" subnet
subnet_id = data.aws_subnet_ids.app_tier_ids.ids[count.index]
# Create an IP address using the CIDR of the subnet
private_ip = cidrhost(element(data.aws_subnet.app_tier.*.cidr_block, count.index), var.app_server_ip_start + count.index)
...
}
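For reference, cidrhost computes the address of a given host number within a CIDR range, which is what makes the private_ip deterministic:
$ terraform console
> cidrhost("10.0.1.0/24", 20)
10.0.1.20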
I get Terraform to loop through the per-availability-zone subnets by using the aws_subnet_ids data source and filtering on a tag representing the tier (in my case public/private).
This then looks something like this:
variable "vpc" {}
variable "ami" {}
variable "subnet_tier" {}
variable "instance_count" {}
data "aws_vpc" "selected" {
tags {
Name = "${var.vpc}"
}
}
data "aws_subnet_ids" "selected" {
vpc_id = "${data.aws_vpc.selected.id}"
tags {
Tier = "${var.subnet_tier}"
}
}
resource "aws_instance" "instance" {
count = "${var.instance_count}"
ami = "${var.ami}"
subnet_id = "${data.aws_subnet_ids.selected.ids[count.index]}"
instance_type = "${var.instance_type}"
}
This returns a consistent sort order, but not necessarily starting with AZ A in your account. I suspect that the AWS API returns the subnets in AZ order, but ordered by the AZs' own internal ids, as the AZ letters are shuffled per account (presumably to stop AZ A from being flooded, because humans are predictably bad at putting things anywhere but the first slot they can use).
You would have to tie yourself in some horrible knots if, for some odd reason, you particularly care about instances being placed in AZ A first, but this minimal example should at least get instances round-robined through the AZs you have subnets in, relying on Terraform's looping back through arrays when the index exceeds the array length.

Setting "count" based on the length of an attribute on another resource

I have a fairly simple Terraform configuration, which creates a Route53 zone and then creates NS records in Cloudflare to delegate the subdomain to that zone. At present, it assumes there are always exactly four authoritative DNS servers for every Route53 zone, and creates four separate cloudflare_record resources. I'd like to generalise that, partially because who knows whether AWS will start putting a fifth authoritative server out there in the future, but also as a "test case" for more complicated stuff in the future (like AWS AZs, which I know vary in count between regions).
What I've come up with so far is:
resource "cloudflare_record" "public-zone-ns" {
domain = "example.com"
name = "${terraform.env}"
type = "NS"
ttl = "120"
count = "${length(aws_route53_zone.public-zone.name_servers)}"
value = "${lookup(aws_route53_zone.public-zone.name_servers, count.index)}"
}
resource "aws_route53_zone" "public-zone" {
name = "${terraform.env}.example.com"
}
When I run terraform plan over this, though, I get this error:
Error running plan: 1 error(s) occurred:
* cloudflare_record.public-zone-ns: cloudflare_record.public-zone-ns: value of 'count' cannot be computed
I think what that means is that because the aws_route53_zone hasn't actually been created, Terraform doesn't know what length(aws_route53_zone.public-zone.name_servers) is, so the interpolation into cloudflare_record.public-zone-ns.count fails and I'm screwed.
However, it seems surprising to me that Terraform would be so inflexible; surely being able to create a variable number of resources like this would be meat-and-potatoes stuff. Hard-coding the length, or creating separate resources, just seems so... limiting.
So, what am I missing? How can I create a number of resources when I don't know in advance how many I need?
Currently, count not being computable is an open issue in Terraform: https://github.com/hashicorp/terraform/issues/12570
You could move the name servers to a variable array and then take the length of that, all in one Terraform script.
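A minimal sketch of that suggestion, in the question's 0.11-era syntax (the variable default is a placeholder you would populate with the zone's name servers):

variable "name_servers" {
  description = "Name servers for the delegated zone"
  type        = "list"
  default     = []
}

resource "cloudflare_record" "public-zone-ns" {
  domain = "example.com"
  name   = "${terraform.env}"
  type   = "NS"
  ttl    = "120"

  # length() of a variable is known at plan time, so count can be computed
  count = "${length(var.name_servers)}"
  value = "${element(var.name_servers, count.index)}"
}

The trade-off is that the name servers are no longer derived from the zone resource, which gives up some of the generality the question asks for.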
I take your point; this surprised me as well. Even adding depends_on to the cloudflare_record resource doesn't help.
What you can do to get past this issue is split it into two stacks, making sure the Route 53 zone is created before the Cloudflare record.
Stack #1
resource "aws_route53_zone" "public-zone" {
name = "${terraform.env}.example.com"
}
output "name_servers" {
value = "${aws_route53_zone.public-zone.name_servers}"
}
Stack #2
data "terraform_remote_state" "route53" {
backend = "s3"
config {
bucket = "terraform-state-prod"
key = "network/terraform.tfstate"
region = "us-east-1"
}
}
resource "cloudflare_record" "public-zone-ns" {
domain = "example.com"
name = "${terraform.env}"
type = "NS"
ttl = "120"
count = "${length(data.terraform_remote_state.route53.name_servers)}"
value = "${element(data.terraform_remote_state.route53.name_servers, count.index)}"
}