Terraform: Passing variables while importing module - amazon-web-services

After reading https://developer.hashicorp.com/terraform/language/values/variables#assigning-values-to-root-module-variables, I was certain that there are three ways to set a variable's value.
Recently I came across code in one of our projects that passes variables while instantiating a module.
module "robot_shell" {
  source                              = "./modules/xx_shell"
  xx_resource_name_prefix             = local.resource_name_prefix
  cloudwatch_log_group_retention_days = 30
  s3_expiration_days                  = 30
}
A file in the module ./modules/xx_shell:
resource "aws_s3_bucket_lifecycle_configuration" "dest" {
  bucket = aws_s3_bucket.dest.bucket

  rule {
    id     = "expire"
    status = "Enabled"

    expiration {
      days = var.s3_expiration_days
    }
  }
}
The variable definition in variables.tf:
variable "s3_expiration_days" {
  type        = number
  description = "S3 bucket objects expiration days https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/s3_bucket_lifecycle_configuration#days"
}
Why does the Terraform documentation not talk about this? Or is this an old way that is not used anymore?

It's the normal way to pass variables to modules, exactly as you described. This is covered in the Terraform docs under:
Add module configuration
The link you provided in the question is about passing variables to the root/parent module when you run plan/apply. When you want to pass variables to child modules defined with a module block, you pass them through module arguments.
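As a minimal sketch of the relationship: every argument in a module block must correspond to a variable declared inside the child module (the names here are illustrative, not from your project):

```hcl
# modules/example/variables.tf -- declaration inside the child module
variable "retention_days" {
  type    = number
  default = 14
}

# root module -- each argument assigns a value to the matching variable
module "example" {
  source         = "./modules/example"
  retention_days = 30
}
```

An argument with no matching variable declaration in the child module is an error, which is the flip side of the same mechanism.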

Related

GCP Terraform resource policy in compute module issues

I am trying to add a start-stop schedule to our VM instances in our cloud repository (it is a Terraform/Terragrunt setup).
The example presented on the official site is this:
So since we use Terragrunt as a wrapper my module looks like this:
And for reference my variable block is this:
When I push the code it errors on step 0 in CloudBuild with the following error:
Error: Reference to undeclared input variable

  on main.tf line 116, in resource "google_compute_resource_policy" "hourly":
  116: time_zone = var.time_zone

An input variable with the name "time_zone" has not been declared. This variable can be declared with a variable "time_zone" {} block.
I have tried placing this variable in different positions of the block but I keep getting the same error. Has anyone got any ideas?
This is now resolved. I want to thank #kornshell93 for pointing me in the right direction.
I ended up using the block as suggested, but creating a new module and hitting that from a separate section within my VM instance block, linking to the project as a dependency. The previous method, via the main compute instance module, kept failing on all other VM instances, almost as if it expected this block on all of them.
resource "google_compute_resource_policy" "hourly" {
  name        = var.instance_schedule_policy.name
  region      = var.region
  project     = var.project
  description = "Start and stop instances"

  instance_schedule_policy {
    vm_start_schedule {
      schedule = var.instance_schedule_policy.vm_start_schedule
    }
    vm_stop_schedule {
      schedule = var.instance_schedule_policy.vm_stop_schedule
    }
    time_zone = var.instance_schedule_policy.time_zone
  }
}
And the VM instance block (Terragrunt inputs):
inputs = {
  # instance start/stop schedules
  project = dependency.project.outputs.project_id
  region  = "europe-west2"

  instance_schedule_policy = {
    name              = "start-stop"
    vm_start_schedule = "30 07 * * *"
    vm_stop_schedule  = "00 18 * * *"
    time_zone         = "GMT"
  }
}
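Regarding the original "undeclared input variable" error: the module consuming these inputs must also declare the variable. A minimal sketch of such a declaration, with the object shape inferred from the inputs above (treat the exact attributes as an assumption):

```hcl
variable "instance_schedule_policy" {
  description = "Start/stop schedule settings for the VM instances"
  type = object({
    name              = string
    vm_start_schedule = string
    vm_stop_schedule  = string
    time_zone         = string
  })
}
```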

Create multiple GCP storage buckets using terraform

I have used Terraform scripts to create resources in GCP. The scripts are working fine. But my question is: how do I create multiple storage buckets using a single script?
I have two files for creating the storage bucket-
main.tf, which has the Terraform code to create the buckets.
variables.tf which has the actual variables like storage bucket name, project_id, etc, which looks like this:
variable "storage_class" { default = "STANDARD" }
variable "name"          { default = "internal-demo-bucket-1" }
variable "location"      { default = "asia-southeast1" }
How can I provide more than one bucket name in the variable name? I tried to provide multiple names in an array but the build failed.
I don't know all your requirements; however, suppose you need to create a few buckets with different names, while all other bucket characteristics are constant for every bucket in the set under discussion.
I would create a variable, i.e. bucket_name_set in a variables.tf file:
variable "bucket_name_set" {
  description = "A set of GCS bucket names..."
  type        = list(string)
}
Then, in the terraform.tfvars file, I would provide unique names for the buckets:
bucket_name_set = [
  "some-bucket-name-001",
  "some-bucket-name-002",
  "some-bucket-name-003",
]
Now, for example, in the main.tf file I can describe the resources:
resource "google_storage_bucket" "my_bucket_set" {
  project  = "some project id should be here"
  for_each = toset(var.bucket_name_set)

  name          = each.value # note: each.key and each.value are the same for a set
  location      = "some region should be here"
  storage_class = "STANDARD"
  force_destroy = true

  uniform_bucket_level_access = true
}
Terraform description is here: The for_each Meta-Argument
Terraform description for the GCS bucket is here: google_storage_bucket
Terraform description for input variables is here: Input Variables
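If you later need to refer to the created buckets elsewhere, the for_each instances can be exposed as a map keyed by bucket name. A short sketch (the output name is illustrative):

```hcl
output "bucket_self_links" {
  description = "Self-links of the created buckets, keyed by bucket name"
  value = {
    for name, bucket in google_storage_bucket.my_bucket_set : name => bucket.self_link
  }
}
```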
Have you considered using Terraform-provided modules? It becomes very easy if you use the GCS module for bucket creation. It has an option to specify how many buckets you need to create, and even the subfolders. I am including the module below for your reference:
https://registry.terraform.io/modules/terraform-google-modules/cloud-storage/google/latest

Terraform Module dependency not working ( version 0.12)

I am trying to pass an output value from one Terraform module to another Terraform module, but I am facing the issue below.
My use case is this: in the first module I create an IAM role, and in the second module I need to use the IAM role created above (also, if the role is not created in the first module, the second module should create the role itself; please consider this a requirement).
module "createiamrole" {
  source = "./modules/createiamrole"
}
// this module creates a new role if a role is not supplied from the above module (the default value of iam_role is "" set in variables.tf).
module "checkiamrole" {
  source              = "./modules/checkiamrole"
  iam_role_depends_on = module.createiamrole.iam_role_name
  iam_role            = module.createiamrole.iam_role_name
}
outputs.tf for storing iam_role_name from the first module:
output "iam_role_name" {
  description = "name for IAM role"
  value       = aws_iam_role.createiamrole[0].name
}
The resource code of module checkiamrole for which I am getting the error:
resource "aws_iam_role" "newrole" {
  count              = var.iam_role == "" ? 1 : 0
  name               = "my-new-iamrole"
  assume_role_policy = data.aws_iam_policy_document.iampolicy[0].json
  tags               = var.tags
  depends_on         = [var.iam_role_depends_on]
}
Error:
Error: Invalid count argument

  count = var.iam_role == "" ? 1 : 0

The "count" value depends on resource attributes that cannot be determined
until apply, so Terraform cannot predict how many instances will be created.
To work around this, use the -target argument to first apply only the
resources that the count depends on.
My query is: how do I implement the module dependency, and how do I pass an output value from the dependent module to the module that requires it?
Your count, as the error message says, can't depend on attributes of other resources. The condition for the count must be known before you run your code. So you have to create a new variable, e.g. var.create_role, which you specify during your apply. Based on this value, the modules will create or not create the role in question.
The other alternative, again as the error message says, is to first deploy module1 with -target, and then module2.
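A sketch of the var.create_role approach (the variable name is as suggested above; the rest mirrors the question's resource and is illustrative):

```hcl
variable "create_role" {
  description = "Whether this module should create the IAM role itself"
  type        = bool
  default     = false
}

resource "aws_iam_role" "newrole" {
  # count is now driven by a plain input value, which is known at plan time,
  # so Terraform can predict how many instances will be created
  count              = var.create_role ? 1 : 0
  name               = "my-new-iamrole"
  assume_role_policy = data.aws_iam_policy_document.iampolicy[0].json
  tags               = var.tags
}
```

The caller then decides, e.g. with `-var="create_role=true"` on the command line or in a tfvars file, instead of Terraform inferring the decision from an unknown resource attribute.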

How to send lifecycle_rules to a s3 module in terraform

I have a terraform module that creates a s3 bucket. I want the module to be able to accept lifecycle rules.
resource "aws_s3_bucket" "somebucket" {
  bucket = "my-versioning-bucket"
  acl    = "private"

  lifecycle_rule {
    prefix  = "config/"
    enabled = true

    noncurrent_version_transition {
      days          = 30
      storage_class = "STANDARD_IA"
    }
  }
}
I want to be able to send the above lifecycle_rule block when I call the module. I tried to send it through a variable but it did not work. I have done some research but no luck. Any help is highly appreciated.
Try using an output: in one module, expose the desired value as an output, e.g.:
output "lifecycle_rule" {
  value = aws_s3_bucket.somebucket.id
}
and call this value in your other module, like:
module "somename" {
  source         = "/somewhere"
  lifecycle_rule = module.amodule-name-where-output-is-applied.lifecycle_rule
  ...
You would need to play around with this. Just give it a try; these are my guesses, as far as I understand Terraform and your question.
The link below can also help you:
Terraform: Output a field from a module
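A common pattern for this kind of requirement (not taken from the answer above, just a sketch of what you might try) is to accept the rules as a module variable and expand them with a dynamic block; the variable name and the field names inside it are assumptions:

```hcl
# In the module: accept a list of rule definitions
variable "lifecycle_rules" {
  description = "Lifecycle rules to apply to the bucket"
  type        = any
  default     = []
}

resource "aws_s3_bucket" "somebucket" {
  bucket = "my-versioning-bucket"
  acl    = "private"

  # One lifecycle_rule block is generated per element of var.lifecycle_rules;
  # the field names (prefix, enabled, ...) are illustrative
  dynamic "lifecycle_rule" {
    for_each = var.lifecycle_rules
    content {
      prefix  = lifecycle_rule.value.prefix
      enabled = lifecycle_rule.value.enabled

      noncurrent_version_transition {
        days          = lifecycle_rule.value.transition_days
        storage_class = lifecycle_rule.value.storage_class
      }
    }
  }
}
```

The caller then passes a list of objects matching that shape as the lifecycle_rules argument of the module block, rather than trying to pass a literal configuration block.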

How to specify dead letter dependency using modules?

I have the following core module based on this official module:
module "sqs" {
  source = "github.com/terraform-aws-modules/terraform-aws-sqs?ref=0d48cbdb6bf924a278d3f7fa326a2a1c864447e2"
  name   = "${var.site_env}-sqs-${var.service_name}"
}
I'd like to create two queues: xyz and xyz_dead. xyz sends its dead letter messages to xyz_dead.
module "xyz_queue" {
  source         = "../helpers/sqs"
  service_name   = "xyz"
  redrive_policy = <<POLICY
{
  "deadLetterTargetArn" : "${data.TODO.TODO.arn}",
  "maxReceiveCount" : 5
}
POLICY
  site_env = "${var.site_env}"
}
module "xyz_dead_queue" {
  source       = "../helpers/sqs"
  service_name = "xyz_dead"
  site_env     = "${var.site_env}"
}
How do I specify the deadLetterTargetArn dependency?
If I do:
data "aws_sqs_queue" "dead_queue" {
  filter {
    name   = "tag:Name"
    values = ["${var.site_env}-sqs-xyz_dead"]
  }
}
and set deadLetterTargetArn to "${data.aws_sqs_queue.dead_queue.arn}", then I get this error:
Error: data.aws_sqs_queue.thumbnail_requests_queue_dead: "name": required field is not set
Error: data.aws_sqs_queue.thumbnail_requests_queue_dead: : invalid or unknown key: filter
The best way to do this is to use the outputted ARN from the module:
module "xyz_queue" {
  source         = "../helpers/sqs"
  service_name   = "xyz"
  site_env       = "${var.site_env}"
  redrive_policy = <<POLICY
{
  "deadLetterTargetArn" : "${module.xyz_dead_queue.this_sqs_queue_arn}",
  "maxReceiveCount" : 5
}
POLICY
}
module "xyz_dead_queue" {
  source       = "../helpers/sqs"
  service_name = "xyz_dead"
  site_env     = "${var.site_env}"
}
NB: I've also changed the indentation of your HEREDOC here because you normally need to remove the indentation with these.
This will pass the ARN of the SQS queue directly from the xyz_dead_queue module to the xyz_queue.
As for the errors you were getting, the aws_sqs_queue data source takes only a name argument, not a filter block like some of the other data sources do.
If you wanted to use the aws_sqs_queue data source then you'd just want to use:
data "aws_sqs_queue" "dead_queue" {
  name = "${var.site_env}-sqs-${var.service_name}"
}
That said, if you are creating two things at the same time then you are going to have issues using a data source to refer to one of those things unless you create the first resource first. This is because data sources run before resources so if neither queue yet exists your data source would run and not find the dead letter queue and thus fail. If the dead letter queue did exist then it would be okay. In general though you're best off avoiding using data sources like this and only use them to refer to things being created in a separate terraform apply (or perhaps even created outside of Terraform).
You are also much better off simply passing the outputs of resources or modules to other resources/modules and allowing Terraform to correctly build a dependency tree for them as well.
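For this to work, the ../helpers/sqs wrapper module has to re-export the wrapped module's ARN output. A short sketch of that (the this_sqs_queue_arn output name is assumed to match what the terraform-aws-sqs module exposed at that pinned version):

```hcl
# ../helpers/sqs/outputs.tf -- re-export the queue ARN from the wrapped module
output "this_sqs_queue_arn" {
  description = "ARN of the SQS queue created by the wrapped module"
  value       = module.sqs.this_sqs_queue_arn
}
```

With that output in place, module.xyz_dead_queue.this_sqs_queue_arn resolves in the root configuration and Terraform orders the two module calls correctly on its own.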