Terraform: "known only after apply" issue - amazon-web-services

I'm creating an aws_subnet and referencing it in another resource.
Example:
resource "aws_subnet" "mango" {
  vpc_id     = aws_vpc.mango.id
  cidr_block = "${var.subnet_cidr}"
}
The reference:
network_configuration {
  subnets = "${aws_subnet.mango.id}"
}
When planning, I get
aws_subnet.mango.id is a string, known only after apply
as an error. I'm new to Terraform. Is there something similar to CloudFormation's DependsOn or Export/Import?

This is a case of an explicit dependency.
The depends_on argument, similar to CloudFormation's DependsOn, solves such dependencies.
Note: "Since Terraform will wait to create the dependent resource until after the specified resource is created, adding explicit dependencies can increase the length of time it takes for Terraform to create your infrastructure."
Example:
depends_on = [aws_subnet.mango]
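To show where that line sits, here is a minimal sketch; the aws_ecs_service wrapper around network_configuration is an assumption based on the question's snippet:

```hcl
# Hypothetical service referencing the subnet. depends_on goes at the
# top level of the resource block, not inside nested blocks.
resource "aws_ecs_service" "mango" {
  # ... other required arguments elided ...

  network_configuration {
    subnets = [aws_subnet.mango.id]
  }

  # Explicit dependency. Usually unnecessary here, because the
  # reference to aws_subnet.mango.id already creates an implicit one.
  depends_on = [aws_subnet.mango]
}
```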

This line:
cidr_block = "${var.subnet_cidr}"
should look like
cidr_block = var.subnet_cidr
And this line:
subnets = "${aws_subnet.mango.id}"
should look like
subnets = aws_subnet.mango.id
Terraform gives a warning when a string value contains only a template. The reason is that in cases like yours it can build the dependency graph from the bare value and resolve it on apply, but it cannot construct the interpolated string without creating the resource first.

Information such as IDs generated by AWS cannot be predicted by terraform plan, because that step only does a dry run and doesn't apply any changes.
Fields showing "known only after apply" are not errors; the message just informs you that these fields are only populated in the Terraform state after the plan is applied. Terraform handles the dependency order itself, so references to such values (even ones "known only after apply") are resolved at apply time.

The error in this case is not the string "known only after apply" but the message "Incorrect attribute value type".
subnets (plural) requires a list of strings, but you gave it a single string:
network_configuration {
  subnets = ["${aws_subnet.mango.id}"]
}
depends_on is not necessary in this case; Terraform resolves this dependency by itself. depends_on only matters when Terraform can't infer the dependency on its own.
Writing "${foo.bar}" instead of foo.bar is also not a problem, but it doesn't follow Terraform's style conventions.


Create Terraform Cloudwatch Dashboards dynamically

Overview
Currently, dashboards are being deployed via Terraform using values from a dictionary in locals.tf:
resource "aws_cloudwatch_dashboard" "my_alb" {
  for_each       = local.env_mapping[var.env]
  dashboard_name = "${each.key}_alb_web_operational"
  dashboard_body = templatefile("templates/alb_ops.tpl", {
    environment = each.value.env,
    account     = each.value.account,
    region      = each.value.region,
    alb         = each.value.alb,
    tg          = each.value.alb_tg
  })
}
This leads to fragility because the values of AWS infrastructure resources like the ALB and ALB target group are hard coded. Sometimes when applying updates AWS resources are destroyed and recreated.
Question
What's the best approach to get these values dynamically? For example, this could be achieved by writing a Python/Boto3 Lambda, which looks up these values and then passes them to Terraform as env variables. Are there any other recommended ways to achieve the same?
It depends on how dynamic the environment is, but it sounds like Terraform data sources are what you are looking for.
Usually, load balancer names are fixed or generated by some rule, and should be known before creating the dashboard.
Let's suppose the names are fixed:
variable "loadbalancers" {
  type = map(string)
  default = {
    alb01 = "alb01",
    alb02 = "alb02"
  }
}
In this case the load balancers can be looked up with:
data "aws_lb" "albs" {
  for_each = var.loadbalancers
  name     = each.value # or each.key
}
And after that you will be able to get the dynamically generated attributes:
data.aws_lb.albs["alb01"].id
data.aws_lb.albs["alb01"].arn
etc.
If the load balancer names are generated by some rule, you should use the AWS CLI or AWS CDK to fetch all the names, or generate the names by the same rule used inside the AWS environment and pass them into Terraform as a variable.
Notice: terraform plan (and apply, destroy) will raise an error if you pass a non-existent name. You should check that a load balancer with the provided name exists.
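Tying this back to the question, a hedged sketch of the dashboard resource using the data source lookups instead of hard-coded values; the target-group naming rule and the trimmed set of templatefile variables are assumptions:

```hcl
# Hypothetical matching target-group lookup; the "<name>-tg"
# naming rule is assumed, not taken from the question.
data "aws_lb_target_group" "tgs" {
  for_each = var.loadbalancers
  name     = "${each.key}-tg"
}

resource "aws_cloudwatch_dashboard" "my_alb" {
  for_each       = var.loadbalancers
  dashboard_name = "${each.key}_alb_web_operational"
  dashboard_body = templatefile("templates/alb_ops.tpl", {
    # arn_suffix is the form CloudWatch metrics expect for ALBs.
    alb = data.aws_lb.albs[each.key].arn_suffix
    tg  = data.aws_lb_target_group.tgs[each.key].arn_suffix
  })
}
```

If the load balancer or target group is later destroyed and recreated, the next plan picks up the new values automatically instead of leaving stale hard-coded ones in locals.tf.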

Is there a way to tell Terraform to ignore manual changes made to DHCP Options associations in AWS?

I work in an environment that is generally the same, however sometimes I have a customer who wants to use a different DHCP options set from the standard we build and associate via Terraform. It isn't feasible to use a custom Terraform resource due to the sheer number of customers we work with and how we roll out changes via Terraform.
My problem is, the "aws_vpc_dhcp_options_association" resource is completely deleted when the association is changed manually by the customer, therefore I haven't been able to use the "ignore_changes" lifecycle object. So basically, what I need Terraform to do is create the association if one doesn't exist at all (e.g. when the AWS account is created) and ignore whatever happens to the association after that.
resource "aws_vpc_dhcp_options" "this" {
  domain_name         = "domain.com"
  domain_name_servers = ["AmazonProvidedDNS"]
  ntp_servers         = ["1.2.3.4", "1.2.3.4"]
  netbios_node_type   =
  tags = {
    Name = "name"
  }
}
resource "aws_vpc_dhcp_options_association" "assocation" {
  vpc_id          = vpc-id
  dhcp_options_id = aws_vpc_dhcp_options.this.id
}
I've tried ignore_changes and preconditions.

How to get the default vpc id with terraform

I am trying to get the vpc_id of the default VPC in my AWS account using Terraform.
This is what I tried, but it gives an error:
Error: Invalid data source
data "aws_default_vpc" "default" {
}
# vpc
resource "aws_vpc" "kubernetes-vpc" {
  cidr_block           = "${var.vpc_cidr_block}"
  enable_dns_hostnames = true
  tags = {
    Name = "kubernetes-vpc"
  }
}
The aws_default_vpc is indeed not a valid data source. But the aws_vpc data source does have a boolean default you can use to choose the default vpc:
data "aws_vpc" "default" {
  default = true
}
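To actually consume the looked-up ID, for example as an output (the output name here is illustrative):

```hcl
# Expose the default VPC's ID so other configurations or the CLI
# (terraform output default_vpc_id) can read it.
output "default_vpc_id" {
  value = data.aws_vpc.default.id
}
```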
For completeness, I'll add that an aws_default_vpc resource also exists. It adopts the default VPC into the resource lifecycle without actually creating it*, but it will make changes to the resource, such as changing tags (and that includes its Name).
* Unless you forcefully destroy the default VPC
From the docs:
This is an advanced resource and has special caveats to be aware of when using it. Please read this document in its entirety before using this resource.
https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/default_vpc
This
resource "aws_default_vpc" "default" {
}
will do.
I think this is convenient for Terraform projects managing a whole AWS account, but I would advise against using it whenever multiple Terraform projects are deployed in a single organization account. In that case you're better off with #blokje5's answer.

Why do I have to specify two values when identifying an AWS resource?

I do not know why I have to specify two values when identifying an AWS resource in Terraform. For example:
resource "aws_instance" "test"
I understand that "aws_instance" is the resource type but what about the other one?
I'm not a Terraform expert, but my understanding is that the second value is the "logical ID" of the instance, much like in CloudFormation, i.e. it is what the resource will be referred to as inside Terraform. That means if you create that instance and then want to export its IP somewhere else, you can access the resource's properties through the second value, like so:
"${aws_instance.test.private_ip}"
The second parameter you give is the name of the resource you have created. It must be set, and you can see its importance when using the output of one resource as input to the creation of another.
While the resource name ("test" in your example) is not useful in a simple configuration with only one or two resources, an important feature of Terraform is using the attributes of one resource to populate another.
A common example of this in AWS is creating VPC and subnet objects:
variable "app_name" {}
variable "env_name" {}
resource "aws_vpc" "main" {
  cidr_block = "10.1.0.0/16"
  tags = {
    Name = "${var.app_name}-${var.env_name}"
  }
}
resource "aws_subnet" "a" {
  vpc_id            = "${aws_vpc.main.id}"
  cidr_block        = "${cidrsubnet(aws_vpc.main.cidr_block, 4, 1)}"
  availability_zone = "us-west-2a"
  tags = {
    Name = "${var.app_name}-${var.env_name}-usw2a"
  }
}
resource "aws_subnet" "b" {
  vpc_id            = "${aws_vpc.main.id}"
  cidr_block        = "${cidrsubnet(aws_vpc.main.cidr_block, 4, 2)}"
  availability_zone = "us-west-2b"
  tags = {
    Name = "${var.app_name}-${var.env_name}-usw2b"
  }
}
In this example, the name "main" of the "aws_vpc" resource is used as part of references from the two subnets back to the VPC. This allows Terraform to populate the subnet vpc_id even though its value won't be known until the VPC is created. It also avoids duplicating the VPC's base CIDR block in the subnets, instead calculating a new subnet prefix dynamically.
Notice that the resource names are different than the tag Name on each object, because they have a different scope: the Terraform resource names are required to be unique only within a single module, and so they will usually have short names that just distinguish any resources of the same type within that one module. The Name tags -- and, for some other resource types, the unique resource name -- must instead be unique either within an entire AWS region or possibly across a whole AWS partition (in the case of S3, for example).
The different purpose of these Terraform-specific names becomes particularly important for more complicated systems where the same module is instantiated multiple times in different configurations, such as creating similar infrastructure across different environments. In this case the Terraform-specific names will be the same across all uses of the module -- since the module source code is identical -- but they will need to have distinct names within AWS itself, e.g. qualified by the environment name they belong to. The usual way to achieve that is to add variables to your modules to specify a subsystem and an environment and then use that to produce a consistent naming scheme for the objects in AWS, while Terraform itself just uses its local names for references in configuration.
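The naming scheme described above can be sketched as a small module; all the variable, module, and path names here are illustrative, not from the original answer:

```hcl
# modules/network/main.tf (hypothetical module)
variable "subsystem" {}
variable "environment" {}

# The Terraform-local name "main" is identical in every instance of
# this module; only the AWS-side Name tag varies per environment.
resource "aws_vpc" "main" {
  cidr_block = "10.1.0.0/16"
  tags = {
    Name = "${var.subsystem}-${var.environment}"
  }
}

# Root module: each instance gets a distinct AWS-side name while the
# module's internal resource names stay the same.
# module "network_prod" {
#   source      = "./modules/network"
#   subsystem   = "payments"
#   environment = "prod"
# }
```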

Take Output From First Terraform and Use in Second Terraform

I need two different Terraform files for different purposes. In the second Terraform file, I have to take input from the output of the first Terraform file.
In my scenario, my first Terraform file creates an AWS security group. Now I have to use the ID of this security group in my second Terraform file.
I also want to be sure that the second Terraform creation cannot initialize before the first completes. How can I achieve this?
It doesn't matter how many .tf files you create. Terraform first loads all the .tf files and then builds a graph to create the resources. So you can do it like this:
resource "aws_security_group" "default" {
  name        = "allow_all"
  description = "Allow all inbound traffic"
  .
  .
}
Now you can use the ID of this security group in another file or module. For example, let's use it for EC2 creation:
resource "aws_instance" "web" {
  ami             = "${var.ami_id}"
  instance_type   = "t2.micro"
  security_groups = ["${aws_security_group.default.id}"]
}
For more details about security group parameters, see the following document:
https://www.terraform.io/docs/providers/aws/r/security_group.html
For this requirement you might want to use Terraform modules, which give you code reuse and let you supply that security group ID to as many Terraform files as you want. Whenever you run terraform apply, it checks resource dependencies and orders the execution accordingly.
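If the two "files" are really two separate root configurations with separate state, one common pattern is to export the ID as an output in the first and read it in the second via the terraform_remote_state data source. This is a sketch; the backend bucket, key, and output name are assumptions:

```hcl
# First configuration: expose the security group ID.
output "sg_id" {
  value = aws_security_group.default.id
}

# Second configuration: read the first configuration's state.
# The S3 backend details below are illustrative.
data "terraform_remote_state" "network" {
  backend = "s3"
  config = {
    bucket = "my-tf-state"
    key    = "first/terraform.tfstate"
    region = "us-east-1"
  }
}

resource "aws_instance" "web" {
  ami                    = "${var.ami_id}"
  instance_type          = "t2.micro"
  vpc_security_group_ids = [data.terraform_remote_state.network.outputs.sg_id]
}
```

The ordering requirement is satisfied operationally: you must apply the first configuration before the second, since the second reads the first's recorded state.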