Terraform conditional option_settings in a dynamic option block - amazon-web-services

When using RDS option groups, some options require option_settings and some don't. Terraform throws an error if an option_settings block is included with an option that doesn't use option settings, and terraform apply fails. I have a module that accepts a map of objects for RDS instances, including their option groups/options/option_settings. Within this is an instance with an option that requires the option settings to be omitted (the S3_INTEGRATION option). Below is the option_group resource block being used:
resource "aws_db_option_group" "main" {
  for_each = {
    for name, rds in var.main : name => rds
    if rds.option_group_name != ""
  }

  name                     = each.value["option_group_name"]
  option_group_description = "Terraform Option Group"
  engine_name              = each.value["engine"]
  major_engine_version     = each.value["major_engine_version"]

  dynamic "option" {
    for_each = each.value["options"]
    content {
      option_name = option.key
      option_settings {
        name  = option.value["option_name"]
        value = option.value["option_value"]
      }
    }
  }
}
Is there a way to make creation of the option_settings block within an option conditional, to circumvent this?

Terraform supports nested dynamic blocks, which is what you are looking for here.
HashiCorp documentation on nested dynamic blocks: https://developer.hashicorp.com/terraform/language/expressions/dynamic-blocks#multi-level-nested-block-structures
You can modify your aws_db_option_group resource with the code below to make option_settings optional for the module. The nested block is only generated when values are supplied (though this also depends on whether the variable's type allows the attribute to be omitted).
If you are already on Terraform >= 1.3, you can also use optional object type attributes.
HashiCorp announcement: https://www.hashicorp.com/blog/terraform-1-3-improves-extensibility-and-maintainability-of-terraform-modules
resource "aws_db_option_group" "main" {
  for_each = {
    for name, rds in var.main : name => rds
    if rds.option_group_name != ""
  }

  name                     = each.value["option_group_name"]
  option_group_description = "Terraform Option Group"
  engine_name              = each.value["engine"]
  major_engine_version     = each.value["major_engine_version"]

  dynamic "option" {
    for_each = each.value["options"]
    content {
      option_name = option.key
      dynamic "option_settings" {
        for_each = option.value["option_settings"]
        content {
          name  = option_settings.key
          value = option_settings.value
        }
        ## Uncomment if you find this better and remove the above content block.
        # content {
        #   name  = option_settings.value["name"]
        #   value = option_settings.value["value"]
        # }
      }
    }
  }
}
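If you take the Terraform >= 1.3 route, the module's input variable can mark option_settings as optional so it can be omitted per option. This is a sketch; the object shape is assumed from the code above:

variable "main" {
  type = map(object({
    option_group_name    = string
    engine               = string
    major_engine_version = string
    # option_settings defaults to an empty map, so the nested dynamic
    # block generates nothing for options such as S3_INTEGRATION.
    options = map(object({
      option_settings = optional(map(string), {})
    }))
  }))
}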
Hope it helps.

Related

How to move existing Terraform resources from single items to set?

I have an SQL db module with single databases like this:
resource "google_sql_database" "projects" {
  name     = "projects"
  instance = google_sql_database_instance.database.name
}

resource "google_sql_database" "markdown" {
  name     = "markdown"
  instance = google_sql_database_instance.database.name
}
I'd like to switch to set of variables instead:
variable "databases" {
  type    = list(string)
  default = ["projects", "markdown"]
}

resource "google_sql_database" "database" {
  for_each = toset(var.databases)
  name     = each.key
  instance = google_sql_database_instance.database.name
}
And when I do terraform apply the CLI wants to recreate everything:
# module.sql.google_sql_database.database["markdown"] will be created
+ resource "google_sql_database" "database" {
    ...
    ...
# module.sql.google_sql_database.markdown will be destroyed
- resource "google_sql_database" "markdown" {
    ...
    ...
How can I avoid that and map the existing resources onto the new configuration?
You either need to run terraform state mv command for each resource, or add moved blocks to your Terraform code.
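For example, a moved block for the markdown database (placed inside the sql module alongside the new for_each resource) would look like this, and similarly for projects:

moved {
  from = google_sql_database.markdown
  to   = google_sql_database.database["markdown"]
}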

terraform: tfsec not able to read EKS cluster encryption configuration

I have an EKS cluster resource to which the team has added encryption_config. We are using a dynamic block, presumably to allow multiple configurations. Now, when I run tfsec (version 1.28.0) on my code, I get: Cluster does not have secret encryption enabled.
Here is the dynamic block
resource "aws_eks_cluster" "this" {
  ...
  dynamic "encryption_config" {
    for_each = toset(var.cluster_encryption_config)
    content {
      provider {
        key_arn = encryption_config.value["provider_key_arn"]
      }
      resources = encryption_config.value["resources"]
    }
  }
}
definition inside variables.tf
variable "cluster_encryption_config" {
  description = "Configuration block with encryption configuration for the cluster. See examples/secrets_encryption/main.tf for example format"
  type = list(object({
    provider_key_arn = string
    resources        = list(string)
  }))
  default = []
}
From what you write, cluster_encryption_config is set to the empty list []. Therefore the encryption_config block is never generated, and no encryption is configured. You have to set cluster_encryption_config to valid values (not an empty list).
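For example, the variable could be set to something like the following (the KMS key ARN here is a placeholder):

cluster_encryption_config = [
  {
    # Placeholder ARN; substitute your own KMS key
    provider_key_arn = "arn:aws:kms:us-east-1:111122223333:key/00000000-0000-0000-0000-000000000000"
    resources        = ["secrets"]
  }
]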

Terraform Resource attribute not being removed when passing in empty values

I am working with a GCP Cloud Composer resource and added a dynamic block to set allowed_ip_ranges, which acts as an IP filter for access to the Apache Airflow web UI.
I was able to get the allowed ranges set up and can update them in place to new values as well.
If I pass in an empty list, I expect the IP address attributes to be removed from the resource, but Terraform seems to think no changes are needed.
There is probably something wrong in my code but I am not sure what exactly. Does it involve adding a conditional expression to the for_each in the dynamic block?
Child module main.tf
web_server_network_access_control {
  dynamic "allowed_ip_range" {
    for_each = var.allowed_ip_range
    content {
      value       = allowed_ip_range.value["value"]
      description = allowed_ip_range.value["description"]
    }
  }
}
Child module variables.tf
variable "allowed_ip_range" {
  description = "The IP ranges which are allowed to access the Apache Airflow Web Server UI."
  type        = list(map(string))
  default     = []
}
Parent module terraform.tfvars
allowed_ip_range = [
  {
    value       = "11.0.0.2/32"
    description = "Test dynamic block 1"
  },
]
You can set the default value in your variables.tf file:
variable "allowed_ip_range" {
  description = "The IP ranges which are allowed to access the Apache Airflow Web Server UI"
  type        = list(map(string))
  default = [
    {
      value       = "0.0.0.0/0"
      description = "Allows access from all IPv4 addresses (default value)"
    },
    {
      value       = "::0/0"
      description = "Allows access from all IPv6 addresses (default value)"
    },
  ]
}
Then, when you remove the variable from terraform.tfvars, the default values will be applied.

Using Count in Terraform to create Launch Configuration

I have 3 different version of an AMI, for 3 different nodes in a cluster.
data "aws_ami" "node1" {
  # Use the most recent AMI that matches the pattern below in 'values'.
  most_recent = true
  filter {
    name   = "name"
    values = ["AMI_node1*"]
  }
  filter {
    name   = "tag:version"
    values = ["${var.node1_version}"]
  }
}

data "aws_ami" "node2" {
  # Use the most recent AMI that matches the pattern below in 'values'.
  most_recent = true
  filter {
    name   = "name"
    values = ["AMI_node2*"]
  }
  filter {
    name   = "tag:version"
    values = ["${var.node2_version}"]
  }
}

data "aws_ami" "node3" {
  ...
}
I would like to create 3 different Launch Configurations and Auto Scaling Groups, one for each of the AMIs.
resource "aws_launch_configuration" "node" {
  count = "${local.node_instance_count}"
  # Name-prefix must be used otherwise terraform fails to perform updates to existing launch configurations due to
  # a name conflict: LCs are immutable and the LC cannot be destroyed without destroying attached ASGs as well, which
  # terraform will not do. Using name-prefix lets a new LC be created and swapped into the ASG.
  name_prefix   = "${var.environment_name}-node${count.index + 1}-"
  image_id      = "${data.aws_ami.node[count.index].image_id}"
  instance_type = "${var.default_ec2_instance_type}"
  ...
}
However, I am not able to use aws_ami.node1, aws_ami.node2, and aws_ami.node3 via count.index the way I have shown above. I get the following error:
Error reading config for aws_launch_configuration[node]: parse error at 1:39: expected "}" but found "."
Is there another way I can do this in Terraform?
Indexing data sources isn't something that's doable at the moment.
You're likely better off simply dropping the data sources you've defined and codifying the image IDs into a Terraform map variable.
variable "node_image_ids" {
  type = "map"
  default = {
    "node1" = "1234434"
    "node2" = "1233334"
    "node3" = "1222434"
  }
}
Then, consume it:
image_id = "${lookup(var.node_image_ids, "node${count.index + 1}", "some_default_image_id")}"
The downside of this is that you'll need to manually update the image id when images are upgraded.
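Alternatively, on Terraform 0.12 and later you could keep the data sources and index them through a local list. This is a sketch using the resource and variable names from the question; it assumes node_instance_count is 3:

locals {
  # Order matters: index 0 maps to node1, index 1 to node2, index 2 to node3.
  node_amis = [
    data.aws_ami.node1.image_id,
    data.aws_ami.node2.image_id,
    data.aws_ami.node3.image_id,
  ]
}

resource "aws_launch_configuration" "node" {
  count         = local.node_instance_count
  name_prefix   = "${var.environment_name}-node${count.index + 1}-"
  image_id      = local.node_amis[count.index]
  instance_type = var.default_ec2_instance_type
}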

Define tags in central section in TerraForm

I'm playing around with Terraform for a bit and I was wondering if this is possible. It's best practice to assign tags to each resource you create on AWS (for example). So, what you do first is come up with a tagging strategy (for example, which business unit, a name of the app, a team responsible for it, ...).
However, in Terraform, this means that you have to repeat each tags-block for each resource. This isn't very convenient and if you want to update 1 of the tag names, you have to update each resource that you created.
For example:
resource "aws_vpc" "vpc" {
  cidr_block = "${var.cidr}"
  tags {
    Name        = "${var.name}"
    Project     = "${var.projectname}"
    Environment = "${var.environment}"
  }
}
If I want to create a Subnet and EC2 in that VPC with the same tags, I have to repeat that tags-block. If I want to update 1 of the tag names later on, I have to update each resource individually, which is very time consuming and tedious.
Is there a possibility to create a block of tags in a centralized location and refer to that? I was thinking of Modules, but that doesn't seem to fit the definition of a module.
You can also try local values, available since version 0.10.3. They allow you to assign a symbolic local name to an expression so it can be used multiple times in a configuration without repetition.
# Define the common tags for all resources
locals {
  common_tags = {
    Component   = "awesome-app"
    Environment = "production"
  }
}

# Create a resource that blends the common tags with instance-specific tags.
resource "aws_instance" "server" {
  ami           = "ami-123456"
  instance_type = "t2.micro"
  tags = "${merge(
    local.common_tags,
    map(
      "Name", "awesome-app-server",
      "Role", "server"
    )
  )}"
}
From Terraform version 0.12 onwards, you can do it as follows.
This is the variable:
variable "sns_topic_name" {
  type        = string
  default     = "VpnTopic"
  description = "Name of the sns topic"
}
This is the code
locals {
  common_tags = {
    Terraform = true
  }
}

# Create a resource
resource "aws_sns_topic" "sns_topic" {
  name = var.sns_topic_name
  tags = merge(
    local.common_tags,
    {
      "Name" = var.sns_topic_name
    }
  )
}
The output will be:
+ tags = {
  + "Name"      = "VpnTopic"
  + "Terraform" = "true"
}
Terraform now supports this natively for the AWS provider: as of version 3.38.0 of the Terraform AWS Provider, tags can also be configured at the provider level with default_tags.
# Terraform 0.12 and later syntax
provider "aws" {
  # ... other configuration ...
  default_tags {
    tags = {
      Environment = "Production"
      Owner       = "Ops"
    }
  }
}

resource "aws_vpc" "example" {
  # ... other configuration ...
  # This configuration by default will internally combine tags defined
  # within the provider configuration block and those defined here
  tags = {
    Name = "MyVPC"
  }
}
For the "aws_vpc.example" resource, the tags below will be assigned, which is the combination of the tags defined under the provider block and those defined in the aws_vpc resource:
+ tags = {
  + "Environment" = "Production"
  + "Owner"       = "Ops"
  + "Name"        = "MyVPC"
}