Terraform module dependency not working (version 0.12) - amazon-web-services

I am trying to pass an output value from one Terraform module to another, but I am running into the issue below.
My use case: in the first module I create an IAM role, and in the second module I need to use that role. (Also, if the role is not created in the first module, the second module should create the role itself; please consider that a requirement.)
module "createiamrole"{
source = "./modules/createiamrole"
}
// this module creates new role, if role is not supplied from above module (default value of iam_role is "" set in variables.tf).
module "checkiamrole"{
source = "./modules/checkiamrole"
iam_role_depends_on = module.createiamrole.iam_role_name
iam_role = "${module.createiamrole.iam_role_name}"
}
outputs.tf for storing iam_role_name from the first module:
output "iam_role_name" {
description = "name for IAM role"
value = aws_iam_role.createiamrole[0].name
}
The resource in module checkiamrole that triggers the error:
resource "aws_iam_role" "newrole" {
count = var.iam_role == "" ? 1 : 0
name = "my-new-iamrole"
assume_role_policy = data.aws_iam_policy_document.iampolicy[0].json
tags = var.tags
depends_on = [var.iam_role_depends_on]
}
Error
Invalid count argument
count = var.iam_role == "" ? 1 : 0
The "count" value depends on resource attributes that cannot be determined
until apply, so Terraform cannot predict how many instances will be created.
To work around this, use the -target argument to first apply only the
resources that the count depends on.
My question: how do I implement this module dependency, and how do I pass an output value from the dependent module to the module that requires it?

Your count, as the error message says, can't depend on any other resources: the condition for the count must be known before Terraform plans your code. So you have to create a new variable, e.g. var.create_role, which you specify for your apply. Based on this value, the module will either create the role or not.
The other alternative, again as the error message says, is to first deploy module1 and then module2, e.g. terraform apply -target=module.createiamrole followed by a full terraform apply.
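For illustration, a minimal sketch of the flag-based approach inside module checkiamrole (the create_role variable is hypothetical, not from the question):

variable "create_role" {
  description = "Set to true to create the IAM role inside this module"
  type        = bool
  default     = false
}

resource "aws_iam_role" "newrole" {
  # var.create_role is set directly by the caller, so count is known at
  # plan time and no longer depends on another resource's attributes.
  count              = var.create_role ? 1 : 0
  name               = "my-new-iamrole"
  assume_role_policy = data.aws_iam_policy_document.iampolicy[0].json
  tags               = var.tags
}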


Create multiple GCP storage buckets using terraform

I have used Terraform scripts to create resources in GCP, and they work fine. My question is: how do I create multiple storage buckets using a single script?
I have two files for creating the storage bucket:
main.tf, which has the Terraform code to create the buckets.
variables.tf, which has the actual variables like the storage bucket name, project_id, etc., and looks like this:
variable "storage_class" { default = "STANDARD" }
variable "name" { default = "internal-demo-bucket-1"}
variable "location" { default = "asia-southeast1" }
How can I provide more than one bucket name in the variable name? I tried to provide multiple names in an array but the build failed.
I don't know all your requirements; however, suppose you need to create a few buckets with different names, while all other bucket characteristics are constant for every bucket in the set under discussion.
I would create a variable, e.g. bucket_name_set, in a variables.tf file:
variable "bucket_name_set" {
description = "A set of GCS bucket names..."
type = list(string)
}
Then, in the terraform.tfvars file, I would provide unique names for the buckets:
bucket_name_set = [
  "some-bucket-name-001",
  "some-bucket-name-002",
  "some-bucket-name-003",
]
Now, for example, in the main.tf file I can describe the resources:
resource "google_storage_bucket" "my_bucket_set" {
project = "some project id should be here"
for_each = toset(var.bucket_name_set)
name = each.value # note: each.key and each.value are the same for a set
location = "some region should be here"
storage_class = "STANDARD"
force_destroy = true
uniform_bucket_level_access = true
}
Terraform description is here: The for_each Meta-Argument
Terraform description for the GCS bucket is here: google_storage_bucket
Terraform description for input variables is here: Input Variables
Have you considered using Terraform-provided modules? It becomes very easy if you use the GCS module for bucket creation. It has an option to specify how many buckets you need to create, and even the subfolders. I am including the module below for your reference:
https://registry.terraform.io/modules/terraform-google-modules/cloud-storage/google/latest
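A rough sketch of calling that module (the input names follow the module's documented interface, but check the registry page for the exact variables of the version you pin; the project id is a placeholder):

module "gcs_buckets" {
  source     = "terraform-google-modules/cloud-storage/google"
  version    = "~> 3.4"
  project_id = "my-project-id" # hypothetical project id
  prefix     = "internal-demo"
  location   = "asia-southeast1"
  names      = ["bucket-1", "bucket-2", "bucket-3"] # one bucket per entry
}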

terraform 12 count data_template not working

I'm upgrading to Terraform 12 and have an ASG module that references a root repository. As part of this, it uses a data.template_file resource to attach user data to the ASG, which is then put into log files on the instances. The module call looks as follows:
module "cef_fleet" {
source = "git::ssh://git#github.com/asg-repo.git?ref=terraform12"
user_data_rendered = data.template_file.init.rendered
instance_type = var.instance_type
ami = var.ami
etc ...
which, as you can see, references the data source:
data "template_file" "init" {
count = signum(var.cluster_size_max)
template = file("${path.module}/template-files/init.sh")
vars = {
account_name = var.account_name
aws_account_number = var.aws_account_number
This works fine in Terraform 11, but when I changed to Terraform 12 and try to apply I get this error:
Because data.template_file.init has "count" set, its attributes must be
accessed on specific instances.
For example, to correlate with indices of a referring resource, use:
data.template_file.init[count.index]
If I change this in my module call to user_data_rendered = data.template_file.init[count.index],
I then get this error:
The "count" object can be used only in "resource" and "data" blocks, and only
when the "count" argument is set.
I don't know what to do here. If I leave the .rendered in, then it doesn't seem to recognise the [count.index], as I get the first error again. Has anyone any advice on what I need to do?
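A common fix for this pattern (a sketch, not from the original thread): since count = signum(var.cluster_size_max) is either 0 or 1, index the single data source instance at the call site; count.index is only valid inside a block that itself sets count.

module "cef_fleet" {
  source = "git::ssh://git@github.com/asg-repo.git?ref=terraform12"

  # Index the single instance explicitly...
  user_data_rendered = data.template_file.init[0].rendered

  # ...or, if the count can legitimately be 0, guard against an empty list:
  # user_data_rendered = length(data.template_file.init) > 0 ? data.template_file.init[0].rendered : ""
}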

How do I get Terraform to throw a specific error message depending on what account the user is in?

I have a terraform module that does domain delegation. For several variables there is some validation against a hard-coded value to check that a user is using valid inputs, for example:
resource "null_resource" "validate_region" {
count = contains(local.regions, var.region) == true ? 0 : "Please provide a valid AWS region. E.g. (us-west-2)"
}
with local.regions being hard-coded and var.region being a user-set variable. The above code works in that when a user sets the variable wrong, it throws an error like this:
Error: Incorrect value type
on .terraform/foo/main.tf line 46, in resource "null_resource" "validate_region":
46: count = contains(local.regions, var.region) == true ? 0 : "Please provide a valid AWS region. E.g. (us-west-2)"
Invalid expression value: a number is required.
I now need to validate that the AWS account the user is currently using is the correct one. In this case it's up to the user to set the account ID of the correct account in their variables; my code needs to pull the account ID of the account that's running the module and compare it against the user's variable. I've tried something like this:
data "aws_caller_identity" "account" {}
resource "null_resource" "validate_account" {
count = data.aws_caller_identity.account.account_id == var.primary_account_id ? 0 : "Please check that you are using the AWS creds for the primary account for this domain."
}
data "aws_route53_zone" "primary" {
name = local.primary_name
}
with various syntax changes on the data.aws_caller_identity.account.account_id == var.primary_account_id ? 0 part in an effort to get the logic to work, but no luck. I would like it to throw an error like the region validation does, where it shows the error message I wrote. Instead (depending on the syntax), it works as expected for the correct account but throws an Error: no matching Route53Zone found for the incorrect account, or it throws a completely different error, presumably because the syntax is screwing things up.
How do I get this to work? Is it possible?
What I do is put a conditional in a locals block and call file() with the error message I want to display:
variable "stage" {
type = string
desciption = "The stage to run the deployment in"
}
locals {
stage_validation = var.stage == "prod" || var.stage == "dev"
? var.stage
: file("[Error] this module should only be ran for stages ['prod' or 'dev' ]")
}
The output of setting the stage variable to anything other than 'dev' or 'prod' is as below:
╷
│ Error: Invalid function argument
│
│ on main.tf line 10, in locals:
│ 10: stage_validation = var.stage == "prod" || var.stage == "dev"
│ ? var.stage
│ : file("[Error] this module should only be ran for stages ['prod' or 'dev' ]")
│
│ Invalid value for "path" parameter: no file exists at This module should only be run for stages ['prod' or 'dev']; this function works only
│ with files that are distributed as part of the configuration source code, so if this file will be created by a resource in this
│ configuration you must instead obtain this result from an attribute of that resource.
╵
This is helpful because it allows you to write an error message that will be shown to the person trying to run the code.
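As an aside, Terraform 0.13 and later have built-in custom validation rules for input variables, which produce the same kind of targeted error message without the file() trick; a minimal sketch:

variable "stage" {
  type        = string
  description = "The stage to run the deployment in"

  # Terraform rejects the plan with error_message if the condition is false.
  validation {
    condition     = contains(["prod", "dev"], var.stage)
    error_message = "This module should only be run for stages 'prod' or 'dev'."
  }
}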
An alternative option here that should simplify what you are doing is to set the region and account constraints up so that Terraform will automatically use the correct region and fail if the credentials are not for the correct account.
You can define this in the aws provider block. An example might look like this:
provider "aws" {
region = "eu-west-1"
allowed_account_ids = ["123456789012"]
}
Now if you attempt to use credentials for a different AWS account then Terraform will fail during the plan stage:
Error: AWS Account ID not allowed: 234567890123
I figured out that this block:
data "aws_route53_zone" "primary" {
name = local.primary_name
}
was running before the account validation resource block. Add in a depends_on like so:
data "aws_route53_zone" "primary" {
name = local.primary_name
depends_on = [null_resource.validate_account,
]
}
And it's all good.

terraform count dependent on data from target environment

I'm getting the following error when trying to initially plan or apply a resource that uses data values from the AWS environment in a count.
$ terraform plan
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.
------------------------------------------------------------------------
Error: Invalid count argument
on main.tf line 24, in resource "aws_efs_mount_target" "target":
24: count = length(data.aws_subnet_ids.subnets.ids)
The "count" value depends on resource attributes that cannot be determined
until apply, so Terraform cannot predict how many instances will be created.
To work around this, use the -target argument to first apply only the
resources that the count depends on.
$ terraform --version
Terraform v0.12.9
+ provider.aws v2.30.0
I tried using the -target option, but it doesn't seem to work on data sources:
$ terraform apply -target aws_subnet_ids.subnets
Apply complete! Resources: 0 added, 0 changed, 0 destroyed.
The only solution I found that works is:
1. remove the resource
2. apply the project
3. add the resource back
4. apply again
Here is a terraform config I created for testing.
provider "aws" {
version = "~> 2.0"
}
locals {
project_id = "it_broke_like_3_collar_watch"
}
terraform {
required_version = ">= 0.12"
}
resource aws_default_vpc default {
}
data aws_subnet_ids subnets {
vpc_id = aws_default_vpc.default.id
}
resource aws_efs_file_system efs {
creation_token = local.project_id
encrypted = true
}
resource aws_efs_mount_target target {
depends_on = [ aws_efs_file_system.efs ]
count = length(data.aws_subnet_ids.subnets.ids)
file_system_id = aws_efs_file_system.efs.id
subnet_id = tolist(data.aws_subnet_ids.subnets.ids)[count.index]
}
I finally figured out the answer after researching the answer by Dude0001.
Short answer: use the aws_vpc data source with the default argument instead of the aws_default_vpc resource. Here is a working sample with comments on the changes.
locals {
  project_id = "it_broke_like_3_collar_watch"
}

terraform {
  required_version = ">= 0.12"
}

// Delete this --> resource aws_default_vpc default {}
// Add this:
data "aws_vpc" "default" {
  default = true
}

data "aws_subnet_ids" "subnets" {
  // Update this from aws_default_vpc.default.id
  vpc_id = data.aws_vpc.default.id
}

resource "aws_efs_file_system" "efs" {
  creation_token = local.project_id
  encrypted      = true
}

resource "aws_efs_mount_target" "target" {
  depends_on     = [aws_efs_file_system.efs]
  count          = length(data.aws_subnet_ids.subnets.ids)
  file_system_id = aws_efs_file_system.efs.id
  subnet_id      = tolist(data.aws_subnet_ids.subnets.ids)[count.index]
}
What I couldn't figure out was why my workaround of removing aws_efs_mount_target on the first apply worked. It's because after the first apply the aws_default_vpc was loaded into the state file.
So an alternate solution without making change to the original tf file would be to use the target option on the first apply:
$ terraform apply --target aws_default_vpc.default
However, I don't like this, as it requires a special case on first deployment, which is unusual among the Terraform deployments I've worked with.
The aws_default_vpc isn't a resource Terraform can create or destroy. It is the default VPC that AWS automatically creates for your account in each region and that is protected from being destroyed. You can only (and need to) adopt it into management and into your Terraform state. This allows you to manage and inspect it when you run plan or apply. Otherwise, Terraform doesn't know what the resource is or what state it is in, and it cannot create a new one for you, as it is a special type of protected resource as described above.
With that said, get the default VPC ID from the region you are deploying to in your account, then import it into your Terraform state. Terraform should then be able to inspect it and count the subnets.
For example
terraform import aws_default_vpc.default vpc-xxxxxx
https://www.terraform.io/docs/providers/aws/r/default_vpc.html
Using the data element for this looks a little odd to me as well. Can you change your TF script to get the count directly through the aws_default_vpc resource?

How to specify dead letter dependency using modules?

I have the following core module based off this official module:
module "sqs" {
source = "github.com/terraform-aws-modules/terraform-aws-sqs?ref=0d48cbdb6bf924a278d3f7fa326a2a1c864447e2"
name = "${var.site_env}-sqs-${var.service_name}"
}
I'd like to create two queues: xyz and xyz_dead. xyz sends its dead letter messages to xyz_dead.
module "xyz_queue" {
source = "../helpers/sqs"
service_name = "xyz"
redrive_policy = <<POLICY {
"deadLetterTargetArn" : "${data.TODO.TODO.arn}",
"maxReceiveCount" : 5
}
POLICY
site_env = "${var.site_env}"
}
module "xyz_dead_queue" {
source = "../helpers/sqs"
service_name = "xyz_dead"
site_env = "${var.site_env}"
}
How do I specify the deadLetterTargetArn dependency?
If I do:
data "aws_sqs_queue" "dead_queue" {
filter {
name = "tag:Name"
values = ["${var.site_env}-sqs-xyz_dead"]
}
}
and set deadLetterTargetArn to "${data.aws_sqs_queue.dead_queue.arn}", then I get this error:
Error: data.aws_sqs_queue.thumbnail_requests_queue_dead: "name": required field is not set
Error: data.aws_sqs_queue.thumbnail_requests_queue_dead: : invalid or unknown key: filter
The best way to do this is to use the outputted ARN from the module:
module "xyz_queue" {
source = "../helpers/sqs"
service_name = "xyz"
site_env = "${var.site_env}"
redrive_policy = <<POLICY
{
"deadLetterTargetArn" : "${module.xyz_dead_queue.this_sqs_queue_arn}",
"maxReceiveCount" : 5
}
POLICY
}
module "xyz_dead_queue" {
source = "../helpers/sqs"
service_name = "xyz_dead"
site_env = "${var.site_env}"
}
NB: I've also changed the indentation of your HEREDOC here because you normally need to remove the indentation with these.
This will pass the ARN of the SQS queue directly from the xyz_dead_queue module to the xyz_queue.
As for the errors you were getting, the aws_sqs_queue data source takes only a name argument, not a filter block like some of the other data sources do.
If you wanted to use the aws_sqs_queue data source then you'd just want to use:
data "aws_sqs_queue" "dead_queue" {
name = "${var.site_env}-sqs-${var.service_name}"
}
That said, if you are creating two things at the same time, you are going to have issues using a data source to refer to one of them unless you create the first resource first. Data sources are read before resources are created, so if neither queue exists yet, the data source would run, fail to find the dead-letter queue, and error out. If the dead-letter queue already existed it would be okay. In general, though, you're best off avoiding data sources like this and only using them to refer to things created in a separate terraform apply (or perhaps even created outside of Terraform).
You are also much better off simply passing the outputs of resources or modules to other resources/modules and allowing Terraform to correctly build a dependency tree for them as well.
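For that pattern to work, the ../helpers/sqs wrapper has to re-export the ARN from the module it wraps; a sketch (the this_sqs_queue_arn output name comes from the pinned upstream module shown above, while the wrapper's own output is hypothetical):

# outputs.tf of ../helpers/sqs
output "this_sqs_queue_arn" {
  description = "ARN of the SQS queue created by this wrapper"
  value       = module.sqs.this_sqs_queue_arn
}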