Dynamically run a Terraform block based on an input variable value - amazon-web-services

The Problem:
AWS doesn't support enhanced monitoring for t3.small RDS instances, which is what we use for smaller deployments, but it does for larger RDS instance classes. We want to disable enhanced monitoring in Terraform when the instance class is t3.
Looking at the Terraform resource docs (https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/db_instance), it seems you simply omit the monitoring interval and role when you don't want to enable enhanced monitoring.
I'm trying to conditionally execute one of two resource blocks based on the monitoring interval: when it's set to 0, run the block without monitoring_role_arn, and when it's set to anything other than 0, run the block where it is set.
However, I'm getting an error:
╷
│ Error: Missing newline after argument
│
│ On main.tf line 68: An argument definition must end with a newline.
╵
Error: Terraform exited with code 1.
Error: Process completed with exit code 1.
I was following this Stack Overflow post: How to conditionally create a resource in Terraform based on a string variable,
but it doesn't seem to work, given the above error.
Here is my terraform code:
resource "aws_rds_cluster_instance" "cluster_instances" {
  count                   = var.monitoring_interval != "0" ? var.cluster_instance_count : 0
  identifier              = "${var.service}-${var.environment}-${count.index}"
  cluster_identifier      = aws_rds_cluster.default.id
  instance_class          = var.instance_class
  engine                  = aws_rds_cluster.default.engine
  monitoring_role_arn     = var.monitoring_role
  engine_version          = aws_rds_cluster.default.engine_version
  monitoring_interval     = var.monitoring_interval
  db_parameter_group_name = var.regional_instance_param_group_name
  copy_tags_to_snapshot   = true
  publicly_accessible     = false
  db_subnet_group_name    = var.regional_subnet_group_name
}
resource "aws_rds_cluster_instance" "cluster_instances" {
  count                   = var.monitoring_interval = "0" ? var.cluster_instance_count : 0
  identifier              = "${var.service}-${var.environment}-${count.index}"
  cluster_identifier      = aws_rds_cluster.default.id
  instance_class          = var.instance_class
  engine                  = aws_rds_cluster.default.engine
  engine_version          = aws_rds_cluster.default.engine_version
  db_parameter_group_name = var.regional_instance_param_group_name
  copy_tags_to_snapshot   = true
  publicly_accessible     = false
  db_subnet_group_name    = var.regional_subnet_group_name
}
Thanks for your help. It's probably something small I'm missing or have misunderstood about Terraform conditionals.

You're missing an = in your condition. Change it to this:
var.monitoring_interval == "0" ? var.cluster_instance_count : 0
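A minimal sketch of how the two blocks could look with the corrected comparison. Note that both resources also can't share the label cluster_instances; Terraform requires unique resource names within a module, so the labels below (with_monitoring / no_monitoring) are illustrative renames, not from the original question:

```
resource "aws_rds_cluster_instance" "with_monitoring" {
  # Created only when enhanced monitoring is enabled.
  count               = var.monitoring_interval != "0" ? var.cluster_instance_count : 0
  identifier          = "${var.service}-${var.environment}-${count.index}"
  cluster_identifier  = aws_rds_cluster.default.id
  monitoring_interval = var.monitoring_interval
  monitoring_role_arn = var.monitoring_role
  # ... remaining arguments as in the question ...
}

resource "aws_rds_cluster_instance" "no_monitoring" {
  # Created only when enhanced monitoring is disabled (== is the equality operator).
  count              = var.monitoring_interval == "0" ? var.cluster_instance_count : 0
  identifier         = "${var.service}-${var.environment}-${count.index}"
  cluster_identifier = aws_rds_cluster.default.id
  # ... remaining arguments, without the monitoring settings ...
}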

Related

Can I create a Redshift cluster in Terraform AND add an additional user to it?

I am trying to get Terraform set up so that I can have an array of cluster parameters and then use for_each in a Redshift module to create them all, like so:
module "redshift" { # module label assumed; the original snippet omitted this opening line
  for_each = local.env[var.tier][var.region].clusters
  source   = "terraform-aws-modules/redshift/aws"

  cluster_identifier     = each.value.name
  allow_version_upgrade  = true
  node_type              = "dc2.large"
  number_of_nodes        = 2
  database_name          = each.value.database
  master_username        = each.value.admin_user
  create_random_password = false
  master_password        = each.value.admin_password
  encrypted              = true
  kms_key_arn            = xxxxx
  enhanced_vpc_routing   = false
  vpc_security_group_ids = xxxxxx
  subnet_ids             = xxxxxx
  publicly_accessible    = true
  iam_role_arns          = xxxxxx

  # Parameter group
  parameter_group_name = xxxxxx

  # Subnet group
  create_subnet_group = false
  subnet_group_name   = xxxxxx

  # Maintenance
  preferred_maintenance_window = "sat:01:00-sat:01:30"

  # Backup Details
  automated_snapshot_retention_period = 30
  manual_snapshot_retention_period    = -1
}
But I also want to add an additional user aside from the admin user to each of these clusters. I am struggling to find a way to do this in terraform. Any advice would be appreciated! Thanks!
There are two ways to do this:
1. Try the Terraform Redshift provider, which lets you create a redshift_user.
2. Use a local-exec provisioner to invoke JDBC, Python, or ODBC tooling that creates your user with SQL commands.
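If you go the provider route, a minimal sketch is below. This assumes the community brainly/redshift provider; the hostname and credentials are placeholders, and the exact resource/attribute names should be checked against that provider's documentation:

```
terraform {
  required_providers {
    redshift = {
      source = "brainly/redshift"
    }
  }
}

# The provider connects directly to the cluster endpoint (placeholder values).
provider "redshift" {
  host     = "my-cluster.xxxxxx.us-east-1.redshift.amazonaws.com"
  username = "admin_user"
  password = "admin_password"
}

# An additional, non-admin database user.
resource "redshift_user" "app_user" {
  name     = "app_user"
  password = "Ch4ngeMe1!"
}
```

Because the provider speaks to the cluster itself, the cluster must already exist and be reachable from wherever Terraform runs, which usually means a second apply (or a separate root module) after the clusters are created.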

Unable to create new EKS with terraform

I'm having problems creating a new EKS cluster (version 1.22) in a dev environment.
I'm using the module in Terraform registry, trimming some parts since it's only for testing purposes (we just want to test the version 1.22).
I'm using a VPC that was created for testing EKS's, and 2 public subnets and 2 private subnets.
This is my main.tf:
module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "18.21.0"

  cluster_name    = "EKSv2-update-test"
  cluster_version = "1.22"

  cluster_endpoint_private_access = true
  cluster_endpoint_public_access  = true

  cluster_addons = {
    coredns = {
      resolve_conflicts = "OVERWRITE"
    }
    kube-proxy = {}
    vpc-cni = {
      resolve_conflicts = "OVERWRITE"
    }
  }

  vpc_id     = "vpc-xxx" # eks-vpc
  subnet_ids = ["subnet-priv-1-xxx", "subnet-priv-2-xxx", "subnet-pub-1-xxx", "subnet-pub-2-xxx"]
}
Terraform apply times out after 20 min (it just hangs on module.eks.aws_eks_addon.this["coredns"]: Still creating... [20m0s elapsed])
and this is the error
│ Error: unexpected EKS Add-On (EKSv2-update-test:coredns) state returned during creation: timeout while waiting for state to become 'ACTIVE' (last state: 'DEGRADED', timeout: 20m0s)
│ [WARNING] Running terraform apply again will remove the kubernetes add-on and attempt to create it again effectively purging previous add-on configuration
│
│ with module.eks.aws_eks_addon.this["coredns"],
│ on .terraform/modules/eks/main.tf line 305, in resource "aws_eks_addon" "this":
│ 305: resource "aws_eks_addon" "this" {
The EKS gets created, but this is clearly not the way to go.
Regarding coredns, what am I missing?
Thanks
A minimum of 2 cluster nodes is required for the coredns add-on to meet the requirements of its replica set. The add-on stays DEGRADED until its pods can be scheduled somewhere.
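Since the trimmed module call above defines no nodes at all, one way to fix it is to add a managed node group to the same module block. A sketch, with illustrative names and sizes:

```
module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "18.21.0"

  # ... cluster settings as in the question ...

  # A managed node group so coredns replicas have nodes to schedule on.
  eks_managed_node_groups = {
    default = {
      min_size       = 2
      max_size       = 3
      desired_size   = 2
      instance_types = ["t3.medium"] # instance type is an assumption
    }
  }
}
```

With at least two nodes joined to the cluster, the coredns add-on should reach ACTIVE instead of timing out.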

error modifying Lambda Function configuration : ValidationException with Lambda and VPC

I am building a Lambda in Terraform using its AWS module, and my code is as below:
module "lambda_function" {
  # * Lambda module configs
  source  = "terraform-aws-modules/lambda/aws"
  version = "3.0.0"

  # * Lambda Configs
  function_name                     = "${var.function_name}-${var.env}"
  description                       = "My Project"
  handler                           = local.constants.lambda.HANDLER
  runtime                           = local.constants.lambda.VERSION
  memory_size                       = 128
  cloudwatch_logs_retention_in_days = 14
  source_path                       = "./function/"
  timeout                           = local.constants.lambda.TIMEOUT
  create_async_event_config         = true
  maximum_retry_attempts            = local.constants.lambda.RETRIES_ATTEMPT

  layers = [
    data.aws_lambda_layer_version.layer_requests.arn
  ]

  environment_variables = {
    AWS_ACCOUNT        = var.env
    SLACK_HOOK_CHANNEL = var.SLACK_HOOK_CHANNEL
  }

  tags = {
    Name = "${var.function_name}-${var.env}"
  }

  trusted_entities = local.constants.lambda.TRUSTED_ENTITIES
}
This code works fine and the Lambda gets deployed. Now I need to put the Lambda in the VPC. When I add the code below to the module block, I get this error:
error modifying Lambda Function (lambda_name) configuration : ValidationException:
│ status code: 400, request id: de2641f6-1125-4c83-87fa-3fe32dee7b06
│
│ with module.lambda_function.aws_lambda_function.this[0],
│ on .terraform/modules/lambda_function/main.tf line 22, in resource "aws_lambda_function" "this":
│ 22: resource "aws_lambda_function" "this" {
The code for the vpc is:
# * VPC configurations
vpc_subnet_ids = ["10.21.0.0/26", "10.21.0.64/26", "10.21.0.128/26"]
vpc_security_group_ids = ["sg-ffffffffff"] # Using a dummy value here
attach_network_policy = true
If I use the same values in the AWS console and deploy the Lambda in the VPC, it works fine.
Can someone please help?
You have to provide valid subnet ids, not CIDR ranges. So instead of
vpc_subnet_ids = ["10.21.0.0/26", "10.21.0.64/26", "10.21.0.128/26"]
it should be
vpc_subnet_ids = ["subnet-asfid1", "subnet-asfid2", "subnet-as4id1"]
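Rather than hardcoding the IDs, they can also be looked up from the VPC. A sketch using the aws_subnets data source; the VPC ID and tag filter are assumptions for illustration:

```
# Look up private subnet IDs in the target VPC instead of hardcoding them.
data "aws_subnets" "private" {
  filter {
    name   = "vpc-id"
    values = ["vpc-0123456789abcdef0"] # placeholder VPC ID
  }

  tags = {
    Tier = "private" # assumes subnets are tagged this way
  }
}
```

Then the module argument becomes vpc_subnet_ids = data.aws_subnets.private.ids.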

Terraform Error: Argument or block definition required when I run TF plan

I have 2 RDS instances being created, and when running terraform plan I am getting an error about an unsupported block type:
Error: Unsupported block type
on rds.tf line 85, in module "rds":
85: resource "random_string" "rds_password_dr" {
Blocks of type "resource" are not expected here.
Error: Unsupported block type
on rds.tf line 95, in module "rds":
95: module "rds_dr" {
Blocks of type "module" are not expected here.
This is my code in my rds.tf file:
# PostgreSQL RDS App Instance
module "rds" {
  source         = "git#github.com:************"
  name           = var.rds_name_app
  engine         = var.rds_engine_app
  engine_version = var.rds_engine_version_app
  family         = var.rds_family_app
  instance_class = var.rds_instance_class_app
  # WARNING: 'terraform taint random_string.rds_password' must be run prior to recreating the DB if it is destroyed
  password = random_string.rds_password.result
  port     = var.rds_port_app
"
"
# PostgreSQL RDS DR Password
resource "random_string" "rds_password_dr" {
  length           = 16
  override_special = "!&*-_=+[]{}<>:?"
  keepers = {
    rds_id = "${var.rds_name_dr}-${var.environment}-${var.rds_engine_dr}"
  }
}
# PostgreSQL RDS DR Instance
module "rds_dr" {
  source         = "git#github.com:notarize/terraform-aws-rds.git?ref=v0.0.1"
  name           = var.rds_name_dr
  engine         = var.rds_engine_dr
  engine_version = var.rds_engine_version_dr
  family         = var.rds_family_dr
  instance_class = var.rds_instance_class_dr
  # WARNING: 'terraform taint random_string.rds_password' must be run prior to recreating the DB if it is destroyed
  password = random_string.rds_password.result
  port     = var.rds_port_dr
"
"
I don't know why I am getting this. Can someone please help me?
You haven't closed the module blocks (module "rds" and module "rds_dr"). You also have a couple of stray double-quotes at the end of both module blocks.
Remove the double-quotes and close each block with }.
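A sketch of the corrected structure, with the quote lines removed and each block closed (arguments elided where unchanged):

```
# PostgreSQL RDS App Instance
module "rds" {
  source = "git#github.com:************"
  name   = var.rds_name_app
  # ... remaining arguments as in the question ...
} # <- this closing brace was missing

# PostgreSQL RDS DR Password
resource "random_string" "rds_password_dr" {
  length           = 16
  override_special = "!&*-_=+[]{}<>:?"
  keepers = {
    rds_id = "${var.rds_name_dr}-${var.environment}-${var.rds_engine_dr}"
  }
}

# PostgreSQL RDS DR Instance
module "rds_dr" {
  source = "git#github.com:notarize/terraform-aws-rds.git?ref=v0.0.1"
  name   = var.rds_name_dr
  # ... remaining arguments as in the question ...
} # <- this closing brace was missing
```

Because the first module block was never closed, the parser treated the resource and module blocks that followed as nested inside it, which is exactly why the errors point at lines 85 and 95 with "Blocks of type ... are not expected here."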

Get endpoint for Terraform with aws_elasticache_replication_group

I have what I think is a simple Terraform config for AWS ElastiCache with Redis:
resource "aws_elasticache_replication_group" "my_replication_group" {
  replication_group_id          = "my-rep-group"
  replication_group_description = "eln00b"
  node_type                     = "cache.m4.large"
  port                          = 6379
  parameter_group_name          = "default.redis5.0.cluster.on"
  snapshot_retention_limit      = 1
  snapshot_window               = "00:00-05:00"
  subnet_group_name             = "${aws_elasticache_subnet_group.my_subnet_group.name}"
  automatic_failover_enabled    = true

  cluster_mode {
    num_node_groups         = 1
    replicas_per_node_group = 1
  }
}
I tried to define the endpoint output using:
output "my_cache" {
value = "${aws_elasticache_replication_group.my_replication_group.primary_endpoint_address}"
}
When I run an apply through terragrunt I get:
Error: Error running plan: 1 error(s) occurred:
module.mod.output.my_cache: Resource 'aws_elasticache_replication_group.my_replication_group' does not have attribute 'primary_endpoint_address' for variable 'aws_elasticache_replication_group.my_replication_group.primary_endpoint_address'
What am I doing wrong here?
The primary_endpoint_address attribute is only available for non-cluster-mode Redis replication groups, as mentioned in the docs:
primary_endpoint_address - (Redis only) The address of the endpoint for the primary node in the replication group, if the cluster mode is disabled.
When using cluster mode you should use configuration_endpoint_address instead to connect to the Redis cluster.
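So for the cluster-mode configuration in the question, the output would look like this:

```
output "my_cache" {
  value = "${aws_elasticache_replication_group.my_replication_group.configuration_endpoint_address}"
}
```

Clients then connect to the configuration endpoint, and cluster-aware Redis clients discover the individual node group endpoints from there.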