How to deal with Terraform modules that depend on other modules - google-cloud-platform

I have this problem. I am trying to create a network and subnetworks in GCP and I am using modules to do so.
So my directory structure looks like below:
modules/
  network/
    main.tf
    variables.tf
  subnetworks/
    main.tf
    variables.tf
main.tf
terraform.tfvars
variables.tf
The folders inside modules are where I have put the modules, as the names suggest.
And main.tf inside the network looks like this:
# module to create the network
resource "google_compute_network" "network" {
  name                    = var.network_name
  auto_create_subnetworks = "false"
}
And the main.tf inside the subnetworks looks like this:
resource "google_compute_subnetwork" "public-subnetwork" {
network = // how to refer the network name here?
...
}
In normal scenarios, when we have a single Terraform file for every resource (when we don't use modules), it would look like this:
# create vpc
resource "google_compute_network" "kubernetes-vpc" {
  name                    = "kubernetes-vpc"
  auto_create_subnetworks = "false"
}

resource "google_compute_subnetwork" "master-sub" {
  network = google_compute_network.kubernetes-vpc.name
  ...
}
We can directly reference google_compute_network.kubernetes-vpc.name for the value of network when creating the google_compute_subnetwork. But now that I am using modules, how can I achieve this?
Thanks.

You can create an outputs.tf file in the network module.
Inside the outputs.tf file you can declare an output like this:
output "google_compute_network_name" {
description = "The name of the network"
value = google_compute_network.network.name
}
Now inside the subnetworks module you can use a standard input variable to receive the value of the network name:
resource "google_compute_subnetwork" "public-subnetwork" {
// receive network name as variable
network = var.network_name
...
}
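For this to work, the subnetworks module also has to declare that input variable. A minimal sketch of its variables.tf, assuming the name network_name used above (the description text is just illustrative):

variable "network_name" {
  description = "Name of the VPC network the subnetwork belongs to"
  type        = string
}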
And where you use the network and subnetworks modules in main.tf in the root folder (I assume), you can pass the output of the network module to the subnetwork module.
Example:
module "root_network" {
source = "./modules/network"
}
module "subnetwork" {
source = "./modules/subnetworks"
// input variable for subnetwork from the output of the network
network_name = module.root_network.google_compute_network_name
}
If you want to read more, see the Terraform documentation on output values.

Related

Create multiple GCP storage buckets using terraform

I have used Terraform scripts to create resources in GCP. The scripts are working fine. But my question is: how do I create multiple storage buckets using a single script?
I have two files for creating the storage buckets:
main.tf, which has the Terraform code to create the buckets.
variables.tf, which has the actual variables like the storage bucket name, project_id, etc., and looks like this:
variable "storage_class" { default = "STANDARD" }
variable "name" { default = "internal-demo-bucket-1"}
variable "location" { default = "asia-southeast1" }
How can I provide more than one bucket name in the variable name? I tried to provide multiple names in an array but the build failed.
I don't know all your requirements; however, suppose you need to create a few buckets with different names, while all other bucket characteristics are constant for every bucket in the set under discussion.
I would create a variable, e.g. bucket_name_set, in a variables.tf file:
variable "bucket_name_set" {
description = "A set of GCS bucket names..."
type = list(string)
}
Then, in the terraform.tfvars file, I would provide unique names for the buckets:
bucket_name_set = [
  "some-bucket-name-001",
  "some-bucket-name-002",
  "some-bucket-name-003",
]
Now, for example, in the main.tf file I can describe the resources:
resource "google_storage_bucket" "my_bucket_set" {
project = "some project id should be here"
for_each = toset(var.bucket_name_set)
name = each.value # note: each.key and each.value are the same for a set
location = "some region should be here"
storage_class = "STANDARD"
force_destroy = true
uniform_bucket_level_access = true
}
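If you later need to refer to the buckets created this way, the resource is a map keyed by the same bucket names, so you can loop over it again. A small sketch, assuming the resource above, that exposes each bucket's URL as an output:

output "bucket_urls" {
  description = "Bucket URL keyed by bucket name"
  value       = { for name, bucket in google_storage_bucket.my_bucket_set : name => bucket.url }
}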
Terraform description is here: The for_each Meta-Argument
Terraform description for the GCS bucket is here: google_storage_bucket
Terraform description for input variables is here: Input Variables
Have you considered using the modules provided on the Terraform Registry? It becomes very easy if you use the GCS module for bucket creation. It has an option to specify how many buckets you need to create and even the subfolders. I am including the module below for your reference (a rough usage sketch follows the link):
https://registry.terraform.io/modules/terraform-google-modules/cloud-storage/google/latest
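A rough sketch of how that registry module is typically called (the version constraint, project id and bucket names here are assumptions; check the registry page for the module's current inputs):

module "gcs_buckets" {
  source     = "terraform-google-modules/cloud-storage/google"
  version    = "~> 3.0"            # assumption: pin whichever version is current on the registry
  project_id = "my-project-id"     # hypothetical project id
  prefix     = "internal-demo"
  names      = ["bucket-1", "bucket-2", "bucket-3"]
}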

How to reference an id value from another main.tf file residing in a different module in Terraform

Is there a way to reference an id value from another object located in a different main.tf file in a different module?
If the two resources or objects are located in the same file you just do this
resource "aws_s3_bucket" "log_bucket" {
bucket = "my-tf-log-bucket"
acl = "log-delivery-write"
}
resource "aws_s3_bucket" "b" {
bucket = "my-tf-test-bucket"
acl = "private"
logging {
target_bucket = aws_s3_bucket.log_bucket.id
target_prefix = "log/"
}
}
You can assign target_bucket in the logging block to the id of the log_bucket resource.
Now suppose I have two folders named log_module and s3_module with their respective main.tf files.
The main.tf inside the log_module contains
resource "aws_s3_bucket" "log_bucket" {
bucket = "my-tf-log-bucket"
acl = "log-delivery-write"
}
and the main.tf inside the s3_module contains
resource "aws_s3_bucket" "b" {
bucket = "my-tf-test-bucket"
acl = "private"
logging {
target_bucket = "target-bucket"
target_prefix = "log/"
}
}
How would I assign the id for the bucket in resource "aws_s3_bucket" "log_bucket"
to the target_bucket in the main.tf for the s3_module?
You can use Terraform output values to achieve this functionality.
In your log_module directory, you can create a new file named outputs.tf and define a new output like so:
output "bucket_id" {
value = aws_s3_bucket.log_bucket.id
}
In your s3_module, you would need to create a variables file (e.g., variables.tf) which would be used to assign a value to the target_bucket for the aws_s3_bucket resource.
For example:
variable "target_bucket" {
description = "The name of the bucket that will receive the log objects"
type = string
}
Then you would modify the main.tf file in your s3_module directory like so:
resource "aws_s3_bucket" "b" {
bucket = "my-tf-test-bucket"
acl = "private"
logging {
target_bucket = var.target_bucket
target_prefix = "log/"
}
}
Where the value of target_bucket is derived from var.target_bucket.
You would then have to create a main.tf file in the root of your repository like so:
module "logging" {
source = "/path/to/log_module/directory"
// Any inputs to the module defined here
}
module "s3" {
source = "/path/to/s3_module/directory"
target_bucket = module.logging.bucket_id
}
The main.tf file in the root of the repository creates an implicit dependency between the s3 and logging modules: the s3 module becomes dependent on the logging module because the value of target_bucket uses the output of the logging module, which is the ID of the S3 bucket.
If you're talking about two modules that are contained within a parent configuration (i.e. everything is all within the same state), then #jasonwalsh has the right answer.
If you're talking about two completely separate Terraform configurations with different state files though (i.e. you run terraform apply on them separately), you'll want to use the remote state data source, combined with outputs in the module that is outputting values.
This allows one Terraform configuration to read output values from an entirely separate Terraform configuration, without them being children of the same parent configuration.
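A sketch of that remote state approach, assuming the logging configuration stores its state in an S3 backend and exposes the bucket_id output shown earlier (the state bucket name and key are hypothetical, and the outputs attribute syntax is for Terraform 0.12+):

data "terraform_remote_state" "logging" {
  backend = "s3"

  config = {
    bucket = "my-terraform-state-bucket"   # hypothetical state bucket
    key    = "logging/terraform.tfstate"   # hypothetical state key
    region = "us-east-1"
  }
}

resource "aws_s3_bucket" "b" {
  bucket = "my-tf-test-bucket"
  acl    = "private"

  logging {
    target_bucket = data.terraform_remote_state.logging.outputs.bucket_id
    target_prefix = "log/"
  }
}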

Terraform modules and providers

I have a module that defines a provider as follows
provider "aws" {
region = "${var.region}"
shared_credentials_file = "${module.global_variables.shared_credentials_file}"
profile = "${var.profile}"
}
and an EC2 instance as follows:
resource "aws_instance" "node" {
ami = "${lookup(var.ami, var.region)}"
key_name = "ib-us-east-2-production"
instance_type = "${var.instance_type}"
count = "${var.count}"
security_groups = "${var.security_groups}"
tags {
Name = "${var.name}"
}
root_block_device {
volume_size = 100
}
In the Terraform script that calls this module, I would now like to create an ELB and point it at the instance with something along the lines of
resource "aws_elb" "node_elb" {
name = "${var.name}-elb"
.........
However, Terraform keeps prompting me for the AWS region that is already defined in the module. The only way around this is to copy the provider block into the file calling the module. Is there a cleaner way to approach this?
The only way around this is to copy the provider block into the file calling the module.
The provider block should actually be in your file calling the module and you can remove it from your module.
From the docs:
For convenience in simple configurations, a child module automatically inherits default (un-aliased) provider configurations from its parent. This means that explicit provider blocks appear only in the root module, and downstream modules can simply declare resources for that provider and have them automatically associated with the root provider configurations.
https://www.terraform.io/docs/configuration/modules.html#implicit-provider-inheritance
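In other words, the calling configuration ends up looking roughly like this; a sketch, assuming the module lives at ./modules/node and no longer contains its own provider block (the availability zone and listener values are illustrative):

provider "aws" {
  region                  = "${var.region}"
  shared_credentials_file = "${var.shared_credentials_file}"
  profile                 = "${var.profile}"
}

module "node" {
  # inherits the default aws provider configured above;
  # pass the module's other inputs (ami, instance_type, name, ...) as before
  source = "./modules/node"
  name   = "${var.name}"
}

resource "aws_elb" "node_elb" {
  name               = "${var.name}-elb"
  availability_zones = ["us-east-2a"]

  listener {
    instance_port     = 80
    instance_protocol = "http"
    lb_port           = 80
    lb_protocol       = "http"
  }
}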

Replicate infrastructure using Terraform module throws error for same name IAM policy

I have created basic infrastructure as below and I'm trying to see if modules work for me to replicate infrastructure on AWS using Terraform.
variable "access_key" {}
variable "secret_key" {}
provider "aws" {
access_key = "${var.access_key}"
secret_key = "${var.secret_key}"
alias = "us-east-1"
region = "us-east-1"
}
variable "company" {}
module "test1" {
source = "./modules"
}
module "test2" {
source = "./modules"
}
And my module is as follows:
resource "aws_iam_policy" "api_dbaccess_policy" {
name = "lambda_dbaccess_policy"
policy = "${file("${path.module}/api-dynamodb-policy.json")}"
}
But when I use the same module twice in my main.tf, it gives me an error about the same-named IAM policy. How should I handle such a scenario?
I want to use the same main.tf for the prod/stage/dev environments. How do I achieve that?
My actual module looks like the code in this question.
How do I make use of modules and be able to name module resources dynamically? e.g. stage_iam_policy / prod_iam_policy etc. Is this the right approach?
You're naming the IAM policy the same regardless of where you use the module. IAM policies are uniquely identified by their name rather than by some random ID (unlike EC2 instances, which are identified as i-...), so you can't have two IAM policies with the same name in the same AWS account.
Instead you must add some extra uniqueness to the name such as by using a parameter to the module appended to the name with something like this:
module "test1" {
source = "./modules"
enviroment = "foo"
}
module "test1" {
source = "./modules"
enviroment = "bar"
}
and in your module you'd have the following:
variable "enviroment" {}
resource "aws_iam_policy" "api_dbaccess_policy" {
name = "lambda_dbaccess_policy_${var.enviroment}"
policy = "${file("${path.module}/api-dynamodb-policy.json")}"
}
Alternatively, if you don't have something useful to key off, such as a name or environment, you could just use some randomness:
resource "random_pet" "random" {}
resource "aws_iam_policy" "api_dbaccess_policy" {
name = "lambda_dbaccess_policy_${random_pet.random.id}"
policy = "${file("${path.module}/api-dynamodb-policy.json")}"
}

'Not a valid output for module' when using output variable with terraform

I'm trying to setup some IaC for a new project using Hashicorp Terraform on AWS. I'm using modules because I want to be able to reuse stuff across multiple environments (staging, prod, dev, etc.)
I'm struggling to understand where I have to set an output variable within a module, and how I then use that in another module. Any pointers to this would be greatly appreciated!
I need to use some things created in my VPC module (subnet IDs) when creating EC2 machines. My understanding is that you can't reference something from one module in another, so I am trying to use an output variable from the VPC module.
I have the following in my site main.tf
module "myapp-vpc" {
source = "dev/vpc"
aws_region = "${var.aws_region}"
}
module "myapp-ec2" {
source = "dev/ec2"
aws_region = "${var.aws_region}"
subnet_id = "${module.vpc.subnetid"}
}
dev/vpc simply sets some values and uses my vpc module:
module "vpc" {
source = "../../modules/vpc"
aws_region = "${var.aws_region}"
vpc-cidr = "10.1.0.0/16"
public-subnet-cidr = "10.1.1.0/24"
private-subnet-cidr = "10.1.2.0/24"
}
In my vpc main.tf, I have the following at the very end, after the aws_vpc and aws_subnet resources (showing subnet resource):
resource "aws_subnet" "public" {
vpc_id = "${aws_vpc.main.id}"
map_public_ip_on_launch = true
availability_zone = "${var.aws_region}a"
cidr_block = "${var.public-subnet-cidr}"
}
output "subnetid" {
value = "${aws_subnet.public.id}"
}
When I run terraform plan I get the following error message:
Error: module 'vpc': "subnetid" is not a valid output for module "vpc"
Outputs need to be passed up through each module explicitly each time.
For example if you wanted to output a variable to the screen from a module nested below another module you would need something like this:
child-module.tf
output "child_foo" {
value = "foobar"
}
parent-module.tf
module "child" {
source = "path/to/child"
}
output "parent_foo" {
value = "${module.child.child_foo}"
}
main.tf
module "parent" {
source = "path/to/parent"
}
output "main_foo" {
value = "${module.parent.parent_foo}"
}
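Applied to the question's layout, that means the intermediate dev/vpc configuration needs its own output that re-exports the inner module's value, and the site main.tf then references it through the name the outer module block was given there (myapp-vpc, not vpc). A sketch:

# dev/vpc/outputs.tf -- pass the inner module's output up one level
output "subnetid" {
  value = "${module.vpc.subnetid}"
}

# site main.tf -- reference the output through the outer module's name
module "myapp-ec2" {
  source     = "dev/ec2"
  aws_region = "${var.aws_region}"
  subnet_id  = "${module.myapp-vpc.subnetid}"
}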