I am creating a Terraform module for AWS VPC creation.
Here is my directory structure
➢ tree -L 3
.
├── main.tf
├── modules
│ ├── subnets
│ │ ├── main.tf
│ │ ├── outputs.tf
│ │ └── variables.tf
│ └── vpc
│ ├── main.tf
│ ├── outputs.tf
│ └── variables.tf
└── variables.tf
3 directories, 12 files
In the subnets module, I want to grab the vpc id of the vpc (sub)module.
In modules/vpc/outputs.tf I use:
output "my_vpc_id" {
  value = "${aws_vpc.my_vpc.id}"
}
Will this be enough for me to do the following in modules/subnets/main.tf?
resource "aws_subnet" "env_vpc_sn" {
  ...
  vpc_id = "${aws_vpc.my_vpc.id}"
}
Your main.tf (or wherever you use the subnet module) would need to pass this in from the output of the VPC module, and your subnet module needs to take it as a required variable.
To access a module's output you need to reference it as module.<MODULE NAME>.<OUTPUT NAME>:
In a parent module, outputs of child modules are available in expressions as module.<MODULE NAME>.<OUTPUT NAME>. For example, if a child module named web_server declared an output named instance_ip_addr, you could access that value as module.web_server.instance_ip_addr.
So your main.tf would look something like this:
module "vpc" {
  # ...
}
module "subnets" {
  vpc_id = "${module.vpc.my_vpc_id}"
  # ...
}
and subnets/variables.tf would look like this:
variable "vpc_id" {}
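Inside the subnets module itself you would then reference that variable instead of reaching into the VPC module's resources directly. A minimal sketch (the cidr_block value is a hypothetical placeholder):

```hcl
resource "aws_subnet" "env_vpc_sn" {
  # The VPC id now comes in through the module variable,
  # not from a resource that lives in another module.
  vpc_id     = var.vpc_id
  cidr_block = "10.0.1.0/24" # hypothetical value
}
```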
Related
My index.html is in the same level of where my terraform resource file is located
terraform
└── site
    ├── index.html
    ├── s3.tf
    └── variables.tf
My config looks like this
resource "aws_s3_bucket_object" "index" {
  key    = "index.html"
  acl    = "public-read"
  bucket = aws_s3_bucket.my_example_site.id
  source = "index.html"
  etag   = filemd5("index.html")
}
When I apply, I get this error
│ Error: Error in function call
│
│ on modules/terraform/site/s3.tf line 15, in resource "aws_s3_bucket_object" "index":
│ 15: etag = filemd5("index.html")
│
│ Call to function "filemd5" failed: open index.html: no such file or directory.
You need a proper reference to your path. You're running terraform from your root, not from the module, so relative paths are resolved from there. As the comments suggest, you need to use ${path.module} to tell Terraform where the file actually is.
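A sketch of the corrected resource, assuming index.html sits next to the module's .tf files:

```hcl
resource "aws_s3_bucket_object" "index" {
  key    = "index.html"
  acl    = "public-read"
  bucket = aws_s3_bucket.my_example_site.id
  # path.module resolves to the directory containing this .tf file,
  # so the reference works no matter where terraform is run from.
  source = "${path.module}/index.html"
  etag   = filemd5("${path.module}/index.html")
}
```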
I am pretty new to Terraform and am trying to create a separate backend for each AWS region under each environment (dev, stg, prod) of my application. So I am using separate .config files to configure each backend, and separate .tfvars files to create resources in the relevant environment/region.
I am detailing everything below:
Folder structure:
.
├── config
│ ├── stg-useast1.config
│ ├── stg-uswest2.config
│ ├── prod-useast1.config
│ └── prod-uswest2.config
├── vars
│ ├── stg-useast1.tfvars
│ ├── stg-uswest2.tfvars
│ ├── prod-useast1.tfvars
│ └── prod-uswest2.tfvars
└── modules
├── backend.tf
├── main.tf
├── variables.tf
└── module-ecs
├── main.tf
└── variables.tf
File contents necessary for this question are showing below (just one region):
./config/stg-useast1.config
profile = "myapp-stg"
region = "us-east-1"
bucket = "myapp-tfstate-stg-useast1"
key = "myapp-stg-useast1.tfstate"
dynamodb_table = "myapp-tfstate-stg-useast1-lock"
./vars/stg-useast1.tfvars
environment = "stg"
app_name = "myapp-ecs"
aws_region = "us-east-1"
aws_profile = "myapp-stg"
./modules/backend.tf
terraform {
  backend "s3" {
  }
}
./modules/main.tf
provider "aws" {
  region                   = var.aws_region
  shared_credentials_files = ["~/.aws/credentials"]
  profile                  = var.aws_profile
}
module "aws-ecr" {
  source = "./ecs"
}
./modules/variables.tf
variable "app_name" {}
variable "environment" {}
variable "aws_region" {}
variable "aws_profile" {}
./modules/module-ecs/main.tf
resource "aws_ecr_repository" "aws-ecr" {
  name = "${var.app_name}-${var.environment}-ecr"
  tags = {
    Name        = "${var.app_name}-ecr"
    Environment = var.environment
  }
}
./modules/module-ecs/variables.tf
variable "app_name" {
  default = "myapp-ecs"
}
variable "environment" {
  default = "stg"
}
variable "aws_region" {
  default = "us-east-1"
}
variable "aws_profile" {
  default = "myapp-stg"
}
The terraform init went well.
$ terraform init --backend-config=../config/stg-useast1.config
Initializing modules...
Initializing the backend...
Initializing provider plugins...
- Reusing previous version of hashicorp/aws from the dependency lock file
- Using previously-installed hashicorp/aws v4.31.0
Terraform has been successfully initialized!
I ran terraform plan as following:
terraform plan --var-file=../vars/stg-useast1.tfvars
But it did not use the values from this .tfvars file. I had to supply them in ./modules/module-ecs/variables.tf as default = <value> for each variable.
How can I make use of the .tfvars file with terraform plan command?
Any restructuring suggestion is welcomed.
The local module you've created doesn't inherit variables. You will need to pass them in. For example:
module "aws-ecr" {
  source      = "./ecs" # in your example it looks like the folder is actually module-ecs?
  app_name    = var.app_name
  environment = var.environment
  aws_region  = var.aws_region
  aws_profile = var.aws_profile
}
I have a file in the same directory as the test.
Since I'm using Bazel as the build system (on Linux), I don't have to create a main function and initialize the tests myself (in fact, I don't know where the main function resides when using Bazel).
#include <string>
#include <fstream>
//Other include files
#include "gtest/gtest.h"
TEST(read_file_test, read_file) {
  std::string input_file("file_path");
  graph gr;
  graph_loader loader;
  ASSERT_TRUE(loader.load(gr, input_file));
}
The BUILD file for the tests:
cc_test(
    name = "loaderLib_test",
    srcs = ["loaderLib_test.cpp"],
    deps = [
        "//src/lib/loader:loaderLib",
        "@com_google_googletest//:gtest_main",
    ],
    data = glob(["resources/tests/input_graphs/graph_topology/**"]),
)
The WORKSPACE file:
load("@bazel_tools//tools/build_defs/repo:git.bzl", "git_repository")
git_repository(
    name = "com_google_googletest",
    remote = "https://github.com/google/googletest",
    tag = "release-1.8.1",
)
The directory structure:
.
├── README.md
├── WORKSPACE
├── bazel-bin -> /home/mmaghous/.cache/bazel/_bazel_mmaghous/35542ec7cbabc2e6f7475e3870a798d1/execroot/__main__/bazel-out/k8-fastbuild/bin
├── bazel-graph_processor -> /home/mmaghous/.cache/bazel/_bazel_mmaghous/35542ec7cbabc2e6f7475e3870a798d1/execroot/__main__
├── bazel-out -> /home/mmaghous/.cache/bazel/_bazel_mmaghous/35542ec7cbabc2e6f7475e3870a798d1/execroot/__main__/bazel-out
├── bazel-testlogs -> /home/mmaghous/.cache/bazel/_bazel_mmaghous/35542ec7cbabc2e6f7475e3870a798d1/execroot/__main__/bazel-out/k8-fastbuild/testlogs
├── resources
│ └── tests
│ └── input_graphs
│ ├── graph_data
│ │ └── G1_data.txt
│ └── graph_topology
│ ├── G1_notFully_notWeakly.txt
│ ├── G2_notFully_Weakly.txt
│ ├── G3_fully_weakly.txt
│ ├── G4_fully_weakly.txt
│ ├── G5_notFully_weakly.txt
│ ├── G6_notFully_weakly.txt
│ └── G7_notFully_notWeakly.txt
├── src
│ ├── lib
│ │ ├── algorithms
│ │ │ ├── BUILD
│ │ │ └── graph_algorithms.hpp
│ │ ├── graph
│ │ │ ├── BUILD
│ │ │ └── graph.hpp
│ │ └── loader
│ │ ├── BUILD
│ │ └── graph_loader.hpp
│ └── main
│ ├── BUILD
│ └── main.cpp
├── tests
│ ├── BUILD
│ ├── algorithmsLib_test.cpp
│ ├── graphLib_test.cpp
│ └── loaderLib_test.cpp
└── todo.md
So how should I reference the file if it's in the same folder as the test or any other folder?
BTW: Using the full path from the root of my file systems works fine.
The problem is that the glob is relative to the BUILD file, just as when you refer to a file in srcs. The glob expands to an empty list because no file matches. The easiest way to see this is to put a full path in data explicitly instead of the glob; Bazel will then complain.
You have two options: either move the resources folder inside the tests folder, or create a BUILD file in the resources folder that defines a filegroup exposing the txt files, and then reference that filegroup target from the test's data attribute.
https://docs.bazel.build/versions/master/be/general.html#filegroup
In resources/BUILD
filegroup(
    name = "resources",
    srcs = glob(["tests/input_graphs/graph_topology/**"]),
    visibility = ["//visibility:public"],
)
In tests/BUILD
cc_test(
    name = "loaderLib_test",
    srcs = ["loaderLib_test.cpp"],
    deps = [
        "//src/lib/loader:loaderLib",
        "@com_google_googletest//:gtest_main",
    ],
    data = ["//resources"],
)
Is there a way of using output values of a module that is located in another folder? Imagine the following environment:
tm-project/
├── lambda
│ └── vpctm-manager.js
├── networking
│ ├── init.tf
│ ├── terraform.tfvars
│ ├── variables.tf
│ └── vpc-tst.tf
├── prd
│ ├── init.tf
│ ├── instances.tf
│ ├── terraform.tfvars
│ └── variables.tf
└── security
└── init.tf
I want to create EC2 instances and place them in a subnet that is declared in networking folder. So, I was wondering if by any chance I could access the outputs of the module I used in networking/vpc-tst.tf as the inputs of my prd/instances.tf.
Thanks in advance.
You can use an outputs.tf file to define the outputs of a Terraform module. Each output has a name, as in the example below.
output "vpc_id" {
  value = "${aws_vpc.default.id}"
}
These can then be referenced within your prd/instances.tf by combining the module name with the output name you defined in your file. For example, if you have a module named vpc that contains this output, you could use it like below.
module "vpc" {
  # ...
}
resource "aws_security_group" "my_sg" {
  vpc_id = module.vpc.vpc_id
}
I have the following project structure to build Lambda functions on AWS using Terraform :
.
├── aws.tf
├── dev.tfvars
├── global_variables.tf -> ../shared/global_variables.tf
├── main.tf
├── module
│ ├── data_source.tf
│ ├── main.tf
│ ├── output.tf
│ ├── role.tf
│ ├── security_groups.tf
│ ├── sources
│ │ ├── function1.zip
│ │ └── function2.zip
│ └── variables.tf
└── vars.tf
In the main.tf file I have this code that will create 2 different Lambda functions:
module "function1" {
  source        = "./module"
  function_name = "function1"
  source_code   = "function1.zip"
  runtime       = "${var.runtime}"
  memory_size   = "${var.memory_size}"
  timeout       = "${var.timeout}"
  aws_region    = "${var.aws_region}"
  vpc_id        = "${var.vpc_id}"
}
module "function2" {
  source        = "./module"
  function_name = "function2"
  source_code   = "function2.zip"
  runtime       = "${var.runtime}"
  memory_size   = "${var.memory_size}"
  timeout       = "${var.timeout}"
  aws_region    = "${var.aws_region}"
  vpc_id        = "${var.vpc_id}"
}
The problem is that on deployment Terraform creates all resources twice. For the Lambda functions that's fine, that's the purpose, but for the security groups and roles it's not what I want.
For example, this security group is created twice:
resource "aws_security_group" "lambda-sg" {
  vpc_id = "${data.aws_vpc.main_vpc.id}"
  name   = "sacem-${var.project}-sg-lambda-${var.function_name}-${var.environment}"
  egress {
    protocol    = "-1"
    from_port   = 0
    to_port     = 0
    cidr_blocks = ["0.0.0.0/0"]
  }
  ingress {
    protocol    = "-1"
    from_port   = 0
    to_port     = 0
    cidr_blocks = "${var.authorized_ip}"
  }
  # To solve dependency errors when updating the security groups
  lifecycle {
    create_before_destroy = true
    ignore_changes        = ["tags.DateTimeTag"]
  }
  tags = "${merge(var.resource_tagging, map("Name", "${var.project}-sg-lambda-${var.function_name}-${var.environment}"))}"
}
So it's clear that the problem is the structure of the project. Could you help me solve it?
Thanks.
If you create the SecurityGroup within the module, it'll be created once per module inclusion.
I believe that some of the variable values for the sg name change when you include the module, right? Therefore, the sg name will be unique for both modules and can be created twice without errors.
If you chose a static name, Terraform would throw an error when trying to create the sg from module 2, as the resource would already exist (created by module 1).
You could thus define the sg resource outside of the module itself to create it only once.
You can then pass the id of the created sg as variable to the module inclusion and use it there for other resources.
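A sketch of that layout, assuming the module is changed to accept a security_group_id variable (the variable and resource names below are illustrative, not from the original code):

```hcl
# Defined once, in the root configuration rather than inside the module.
resource "aws_security_group" "lambda_sg" {
  vpc_id = data.aws_vpc.main_vpc.id
  name   = "shared-lambda-sg" # illustrative name
}

module "function1" {
  source            = "./module"
  function_name     = "function1"
  security_group_id = aws_security_group.lambda_sg.id
}

module "function2" {
  source            = "./module"
  function_name     = "function2"
  security_group_id = aws_security_group.lambda_sg.id
}
```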