How to use terraform workspace interpolation to separate resource creation? - amazon-web-services

Let's suppose I have dev, uat, and prod environments. I want some modules to be deployed in the dev environment but not in the other environments.
I want to put a condition based on the current workspace but can't figure out how. Any recommendation would be appreciated.
I tried to use $(terraform.workspace) to select the 'dev' environment, but it wasn't working:
count = $(terraform.workspace) == "dev" ? 1 : 0
which resulted in:
This character is not used within the language.

You don't need to use the $ sign:
count = terraform.workspace == "dev" ? 1 : 0

There are two different styles, and two ways to express the logic, for writing this condition.
Different styles
If terraform.workspace is equal to "dev", then create one instance, else zero instances.
count = "${terraform.workspace}" == "dev" ? 1 : 0
count = terraform.workspace == "dev" ? 1 : 0
Another logic
If terraform.workspace is not equal to "dev", then create zero instances, else one instance.
count = "${terraform.workspace}" != "dev" ? 0 : 1
count = terraform.workspace != "dev" ? 0 : 1
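For context, here is a minimal sketch of how the condition fits into a full resource block; the aws_instance arguments and names are illustrative placeholders, not values from the question:

resource "aws_instance" "dev_only_box" {
  # Created only when the selected workspace is "dev".
  count = terraform.workspace == "dev" ? 1 : 0

  ami           = "ami-0123456789abcdef0" # placeholder AMI
  instance_type = "t3.micro"

  tags = {
    Name      = "dev-only-box"
    Workspace = terraform.workspace
  }
}

Elsewhere in the configuration the instance then has to be referenced by index, e.g. aws_instance.dev_only_box[0].id, or through a splat such as aws_instance.dev_only_box[*].id so the reference remains valid when count is 0.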

Related

Terraform separate input variables via IF statement according to values of another input variable

I have two Elasticsearch services managed with Terraform, but one is version 6.8 while the other is 7.10. The problem is that I had to define the ebs_options block because of the instance size that I am using. However, when I run the terraform plan command after adding this, I get the following output:
Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
  ~ update in-place

Terraform will perform the following actions:

  # module.aws-opensearch.aws_elasticsearch_domain.elastic-domains[1] will be updated in-place
  ~ resource "aws_elasticsearch_domain" "elastic-domains" {
        id   = "arn:aws:es:eu-central-1:xxx:domain/new-elastic"
        tags = {
            "Environment" = "test"
            "Name"        = "new-elastic"
            "Namespace"   = "test"
        }
        # (9 unchanged attributes hidden)

      ~ ebs_options {
          - iops = 3000 -> null
            # (4 unchanged attributes hidden)
        }

        # (13 unchanged blocks hidden)
    }

Plan: 0 to add, 1 to change, 0 to destroy.
Even though I apply it, I get the same output every time I run the terraform apply command.
When I researched this a bit, I found that Elasticsearch 7.10 uses gp3 storage, while version 6.8 uses gp2. There are some differences in the defaults between the two; iops is one of them.
How can I overcome this problem? Since both domains are defined under a single module, I cannot set the value separately for each.
I have terraform configuration below:
main.tf
resource "aws_elasticsearch_domain" "elastic-domains" {
count = length(var.domain_names)
domain_name = var.domain_names[count.index].domain_name
elasticsearch_version = var.domain_names[count.index].elasticsearch_version
...
ebs_options {
ebs_enabled = true
volume_size = 50
}
}
variables.tf
variable "domain_names" {
  type = list(object({
    domain_name           = string
    elasticsearch_version = string
  }))
}
terraform.tfvars
domain_names = [
  {
    domain_name           = "elastic"
    elasticsearch_version = "6.8"
  },
  {
    domain_name           = "new-elastic"
    elasticsearch_version = "7.10"
  }
]
You can conditionally set iops to null depending on the version, e.g.:
ebs_options {
  ebs_enabled = true
  volume_size = 50
  iops        = startswith(var.domain_names[count.index].elasticsearch_version, "7") ? 3000 : null
}
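Note that startswith was added in Terraform 1.3 and expects a string, so this assumes a reasonably recent Terraform version and that elasticsearch_version is declared as a string. On older versions, one hedged alternative is to key the iops value off the major version with a small lookup map; the value 3000 is an assumption matching the gp3 baseline described above:

locals {
  # Expected iops per major Elasticsearch version: gp3-backed 7.x domains
  # default to 3000 iops, gp2-backed 6.x domains report none.
  iops_by_major_version = {
    "6" = null
    "7" = 3000
  }
}

Then, inside the resource:

ebs_options {
  ebs_enabled = true
  volume_size = 50
  iops        = local.iops_by_major_version[split(".", var.domain_names[count.index].elasticsearch_version)[0]]
}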

Terraform - AWS - TypeError: planResultMessage.search is not a function

I've been scratching my head over this one for longer than I'd like to admit, but I'm throwing in the towel...
I have a large Terraform package, and during the Terraform plan I get this error:
Terraform Plan (Error) Log
Exception Error in plan - TypeError: planResultMessage.search is not a function
I do not use planResultMessage.search anywhere in my code, so my guess is that it is a Terraform error?
What I do know is that the set of resources it is deploying is a bunch of YAML documents that I am using to create SSM documents. They are loaded like this:
member_data.tf
data "template_file" "member_createmultiregiontrail" {
template = file("${path.module}/member-runbooks/member-asr-CreateCloudTrailMultiRegionTrail.yml")
}
data "template_file" "member_createlogmetricsfilteralarm" {
template = file("${path.module}/member-runbooks/member-asr-CreateLogMetricFilterAndAlarm.yml")
}
asr-member.tf
resource "aws_ssm_document" "asr_document_cloudtrail_multiregion" {
provider = aws.customer
count = var.enabled == true && var.child_account == true ? 1 : 0
name = "ASR-CreateCloudTrailMultiRegionTrail"
document_format = "YAML"
document_type = "Automation"
content = data.template_file.member_createmultiregiontrail.template
}
resource "aws_ssm_document" "asr_document_logs_metricsfilter_alarm" {
provider = aws.customer
count = var.enabled == true && var.child_account == true ? 1 : 0
name = "ASR-CreateLogMetricFilterAndAlarm"
document_format = "YAML"
document_type = "Automation"
content = data.template_file.member_createlogmetricsfilteralarm.template
}
As an example, I think the cause might be in these document files, because the Terraform error appears in the middle of the contents of these documents; it's always at a random location in one of them.
Example:
This one fell into the document for SecHub's AFSBP Redshift 6 control, but at the beginning of that section's contents it acknowledges that the resource will be deployed:
# module.aws-securityhub-master.aws_ssm_document.AFSBP_Redshift_6[0] will be created
I have tried loading the contents directly, using yamlencode, using simply file(), loading them into locals, pulling a file from locals, and now I'm on data sources.
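For reference, loading a runbook's contents directly with file() (one of the attempts mentioned above) would look roughly like this; this is a sketch rather than the original configuration:

resource "aws_ssm_document" "asr_document_cloudtrail_multiregion" {
  provider        = aws.customer
  count           = var.enabled == true && var.child_account == true ? 1 : 0
  name            = "ASR-CreateCloudTrailMultiRegionTrail"
  document_format = "YAML"
  document_type   = "Automation"

  # Read the runbook YAML as-is instead of routing it through a template_file data source.
  content = file("${path.module}/member-runbooks/member-asr-CreateCloudTrailMultiRegionTrail.yml")
}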
If anyone can offer any help, it would be greatly appreciated.
DISCLAIMER:
This Terraform build out is a deconstruction of Amazon's SHARR solution:
https://aws.amazon.com/solutions/implementations/automated-security-response-on-aws/
You can see the various YAML build-outs here, based on which security control:
https://github.com/aws-solutions/aws-security-hub-automated-response-and-remediation/tree/main/source/playbooks
The two that I specifically called out in my data sources are:
https://github.com/aws-solutions/aws-security-hub-automated-response-and-remediation/blob/main/source/remediation_runbooks/CreateCloudTrailMultiRegionTrail.yaml
and
https://github.com/aws-solutions/aws-security-hub-automated-response-and-remediation/blob/main/source/remediation_runbooks/CreateLogMetricFilterAndAlarm.yaml
and the AFSBP yaml can be found here (just in case it matters):
https://github.com/aws-solutions/aws-security-hub-automated-response-and-remediation/blob/main/source/playbooks/AFSBP/ssmdocs/AFSBP_Redshift.6.yaml
Thank you in advance!
This turned out to be a buffer overflow issue. Expanding the resources allocated to the deployment solved it.

Conditionally provision a gcp vm instance with terraform

I would like to condition the provisioning of a resource (gcp vm instance) on a variable, for example:
resource "${var.param > 0 ? "google_compute_instance" : "null_resource"}" "cluster" {
# ...
}
but the above is not valid syntax:
Error: Invalid resource type name
A name must start with a letter or underscore and may contain only letters, digits, underscores, and dashes.
Error: Invalid string literal
Template sequences are not allowed in this string. To include a literal "$", double it (as "$$") to escape it.
Is there a way to accomplish the same? Ideally using terraform alone.
You can use count for that:
resource "google_compute_instance" {
count = var.param > 0 ? 1 : 0
}
resource "cluster" {
count = var.param > 0 ? 0 : 1
}

How to specify either empty snapshot_identifier or a datasource value

I am trying to make a common module that builds an RDS cluster; however, I want to be able to choose whether it is built from a snapshot or from scratch.
I used a count to choose whether to perform the data source lookup or not, which works. However, if it is set to 0 and the lookup doesn't run, the resource will fail because it doesn't know what data.aws_db_cluster_snapshot.latest_cluster_snapshot is. Is there a way around this that I can't quite think of myself?
Datasource:
data "aws_db_cluster_snapshot" "latest_cluster_snapshot" {
count = "${var.enable_restore == "true" ? 1 : 0}"
db_cluster_identifier = "${var.snapshot_to_restore_from}"
most_recent = true
}
Resource:
resource "aws_rds_cluster" "aurora_cluster" {
...
snapshot_identifier = "${var.enable_restore == "false" ? "" : data.aws_db_cluster_snapshot.latest_cluster_snapshot.id}"
...
}
Versions:
Terraform v0.11.10
provider.aws v2.33.0
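A common workaround on Terraform 0.11, sketched here as an assumption rather than a confirmed answer from this thread, is to reference the counted data source through a splat and collapse it with join; the splat is an empty list when count is 0, so the expression stays valid in both cases:

resource "aws_rds_cluster" "aurora_cluster" {
  ...
  # Empty string when enable_restore is "false", otherwise the id of the most recent snapshot.
  snapshot_identifier = "${var.enable_restore == "true" ? join("", data.aws_db_cluster_snapshot.latest_cluster_snapshot.*.id) : ""}"
  ...
}

On Terraform 0.12 and later, the same idea can usually be expressed more directly, for example with try() or one() around the indexed reference.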

Terraform: order of creating array of resources

I have two ec2 instances defined in terraform using the count method.
resource "aws_instance" "example" {
count = "2"
ami = "ami-2d39803a"
instance_type = "t2.micro"
tags {
Name = "example-${count.index}"
}
}
How can I enforce that they are launched one after the other, e.g. that the second instance is created only once the first one finishes?
Attempt 1:
depends_on = [aws_instance.example[0]]
result:
Error: aws_instance.example: resource depends on non-existent resource 'aws_instance.example[0]'
Attempt 2:
tags {
  Name   = "example-${count.index}"
  Active = "${count.index == "1" ? "${aws_instance.example.1.arn}" : "this"}"
}
result:
Error: aws_instance.example[0]: aws_instance.example[0]: self reference not allowed: "aws_instance.example.0.arn"
Which leads me to believe the interpolation is calculated after the instance configurations are complete, and thus it doesn't see that there isn't in fact a circular dependency.
Any ideas?
Thanks
Use terraform apply -parallelism=1 to limit the number of concurrent operations to 1 at a time.
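If you need a strict ordering between the two instances rather than reduced parallelism across the whole run, one alternative (a sketch in current Terraform syntax, not part of the original answer) is to split them into separate resources and chain them with depends_on:

resource "aws_instance" "example_first" {
  ami           = "ami-2d39803a"
  instance_type = "t2.micro"

  tags = {
    Name = "example-0"
  }
}

resource "aws_instance" "example_second" {
  ami           = "ami-2d39803a"
  instance_type = "t2.micro"

  tags = {
    Name = "example-1"
  }

  # Create this instance only after the first one has been created.
  depends_on = [aws_instance.example_first]
}

Note that -parallelism=1 slows down every operation in the run, while depends_on only constrains the resources it is attached to.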