I have the following project structure to build Lambda functions on AWS using Terraform:
.
├── aws.tf
├── dev.tfvars
├── global_variables.tf -> ../shared/global_variables.tf
├── main.tf
├── module
│ ├── data_source.tf
│ ├── main.tf
│ ├── output.tf
│ ├── role.tf
│ ├── security_groups.tf
│ ├── sources
│ │ ├── function1.zip
│ │ └── function2.zip
│ └── variables.tf
└── vars.tf
In the main.tf file I have this code that creates two different Lambda functions:
module "function1" {
source = "./module"
function_name = "function1"
source_code = "function1.zip"
runtime = "${var.runtime}"
memory_size = "${var.memory_size}"
timeout = "${var.timeout}"
aws_region = "${var.aws_region}"
vpc_id = "${var.vpc_id}"
}
module "function2" {
source = "./module"
function_name = "function2"
source_code = "function2.zip"
runtime = "${var.runtime}"
memory_size = "${var.memory_size}"
timeout = "${var.timeout}"
aws_region = "${var.aws_region}"
vpc_id = "${var.vpc_id}"
}
The problem is that on deployment Terraform creates all resources twice. For the Lambda functions that's fine, that's the purpose, but for the security groups and roles it's not what I want.
For example, this security group is created twice:
resource "aws_security_group" "lambda-sg" {
vpc_id = "${data.aws_vpc.main_vpc.id}"
name = "sacem-${var.project}-sg-lambda-${var.function_name}-${var.environment}"
egress {
protocol = "-1"
from_port = 0
to_port = 0
cidr_blocks = ["0.0.0.0/0"]
}
ingress {
protocol = "-1"
from_port = 0
to_port = 0
cidr_blocks = "${var.authorized_ip}"
}
# To avoid dependency errors when updating the security groups
lifecycle {
create_before_destroy = true
ignore_changes = ["tags.DateTimeTag"]
}
tags = "${merge(var.resource_tagging, map("Name", "${var.project}-sg-lambda-${var.function_name}-${var.environment}"))}"
}
So it's clear that the problem is the structure of the project. Could you help me solve that?
Thanks.
If you create the security group within the module, it'll be created once per module inclusion.
I believe that some of the variable values for the sg name change when you include the module, right? Therefore, the sg name will be unique for both modules and can be created twice without errors.
If you'd choose a static name, Terraform would throw an error when trying to create the sg from module 2 as the resource already exists (as created by module 1).
You could thus define the sg resource outside of the module itself to create it only once.
You can then pass the id of the created sg as variable to the module inclusion and use it there for other resources.
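A minimal sketch of that restructuring (the variable name security_group_id is an assumption; it would be declared in the module's variables.tf and used wherever the module previously referenced its own aws_security_group resource):

```hcl
# Root main.tf: create the security group once, outside the module
resource "aws_security_group" "lambda_sg" {
  vpc_id = "${data.aws_vpc.main_vpc.id}"
  name   = "${var.project}-sg-lambda-${var.environment}"
  # ... egress/ingress rules moved out of the module ...
}

module "function1" {
  source            = "./module"
  function_name     = "function1"
  source_code       = "function1.zip"
  security_group_id = "${aws_security_group.lambda_sg.id}"
  # ... other variables as before ...
}
```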
I am pretty new to Terraform and am trying to create a separate backend for each AWS region under each environment (dev, stg, prod) of my application. So I am using separate .config files to configure each backend, and separate .tfvars files to create resources in the relevant environment/region.
I am detailing everything below:
Folder structure:
.
├── config
│ ├── stg-useast1.config
│ ├── stg-uswest2.config
│ ├── prod-useast1.config
│ └── prod-uswest2.config
├── vars
│ ├── stg-useast1.tfvars
│ ├── stg-uswest2.tfvars
│ ├── prod-useast1.tfvars
│ └── prod-uswest2.tfvars
└── modules
├── backend.tf
├── main.tf
├── variables.tf
└── module-ecs
├── main.tf
└── variables.tf
File contents necessary for this question are shown below (just one region):
./config/stg-useast1.config
profile = "myapp-stg"
region = "us-east-1"
bucket = "myapp-tfstate-stg-useast1"
key = "myapp-stg-useast1.tfstate"
dynamodb_table = "myapp-tfstate-stg-useast1-lock"
./vars/stg-useast1.tfvars
environment = "stg"
app_name = "myapp-ecs"
aws_region = "us-east-1"
aws_profile = "myapp-stg"
./modules/backend.tf
terraform {
backend "s3" {
}
}
./modules/main.tf
provider "aws" {
region = var.aws_region
shared_credentials_files = ["~/.aws/credentials"]
profile = var.aws_profile
}
module "aws-ecr" {
source = "./ecs"
}
./modules/variables.tf
variable "app_name" {}
variable "environment" {}
variable "aws_region" {}
variable "aws_profile" {}
./modules/module-ecs/main.tf
resource "aws_ecr_repository" "aws-ecr" {
name = "${var.app_name}-${var.environment}-ecr"
tags = {
Name = "${var.app_name}-ecr"
Environment = var.environment
}
}
./modules/module-ecs/variables.tf
variable "app_name" {
default = "myapp-ecs"
}
variable "environment" {
default = "stg"
}
variable "aws_region" {
default = "us-east-1"
}
variable "aws_profile" {
default = "myapp-stg"
}
The terraform init went well.
$ terraform init --backend-config=../config/stg-useast1.config
Initializing modules...
Initializing the backend...
Initializing provider plugins...
- Reusing previous version of hashicorp/aws from the dependency lock file
- Using previously-installed hashicorp/aws v4.31.0
Terraform has been successfully initialized!
I ran terraform plan as follows:
terraform plan --var-file=../vars/stg-useast1.tfvars
But it did not use the values from this .tfvars file. I had to supply them in ./modules/module-ecs/variables.tf as default = <value> for each variable.
How can I make use of the .tfvars file with terraform plan command?
Any restructuring suggestion is welcomed.
The local module you've created doesn't inherit variables. You will need to pass them in. For example:
module "aws-ecr" {
source = "./ecs" # in your example it looks like the folder is module-ecs?
app_name = var.app_name
environment = var.environment
aws_region = var.aws_region
aws_profile = var.aws_profile
}
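With the variables passed in explicitly, the declarations in ./modules/module-ecs/variables.tf no longer need defaults (a sketch; the values now flow from the .tfvars file through the root module):

```hcl
# modules/module-ecs/variables.tf: declarations only, no defaults,
# so every value must be supplied by the calling module
variable "app_name" {}
variable "environment" {}
variable "aws_region" {}
variable "aws_profile" {}
```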
I'm completely new to Databricks and am trying to deploy an E2 workspace using the sample Terraform code provided by Databricks. I've just started with the VPC part:
data "aws_availability_zones" "available" {}
module "vpc" {
source = "terraform-aws-modules/vpc/aws"
# version = "3.2.0"
name = local.prefix
cidr = var.cidr_block
azs = data.aws_availability_zones.available.names
enable_dns_hostnames = true
enable_nat_gateway = true
single_nat_gateway = true
create_igw = true
private_subnets = [cidrsubnet(var.cidr_block, 3, 1),
cidrsubnet(var.cidr_block, 3, 2)]
manage_default_security_group = true
default_security_group_name = "${local.prefix}-sg"
default_security_group_egress = [{
cidr_blocks = "0.0.0.0/0"
}]
default_security_group_ingress = [{
description = "Allow all internal TCP and UDP"
self = true
}]
}
When I run terraform plan I get this error:
│ Error: Error in function call
│
│ on .terraform/modules/vpc/main.tf line 1090, in resource "aws_nat_gateway" "this":
│ 1090: subnet_id = element(
│ 1091: aws_subnet.public.*.id,
│ 1092: var.single_nat_gateway ? 0 : count.index,
│ 1093: )
│ ├────────────────
│ │ aws_subnet.public is empty tuple
│ │ count.index is 0
│ │ var.single_nat_gateway is true
│
│ Call to function "element" failed: cannot use element function with an empty list.
Would really appreciate any pointers on what is going wrong here.
You set create_igw = true because you want an internet gateway, but you haven't specified public_subnets. You must define public_subnets if you have an IGW; in particular, the NAT gateway the module tries to create has to be placed in a public subnet, which is why aws_subnet.public is an empty tuple in the error.
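A minimal sketch of the fix, reusing cidr_block from the question (subnet index 0 is an assumption; use any index the private subnets don't already occupy):

```hcl
module "vpc" {
  source = "terraform-aws-modules/vpc/aws"

  # ... arguments as in the question ...

  # Public subnet to host the IGW and the single NAT gateway
  public_subnets = [cidrsubnet(var.cidr_block, 3, 0)]
}
```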
I'm trying to provision GCP resources through Terraform, but it's timing out while also throwing errors saying that resources already exist (I've looked in GCP and through the CLI, and the resources do not exist).
Error: Error waiting to create Image: Error waiting for Creating Image: timeout while waiting for state to become 'DONE' (last state: 'RUNNING', timeout: 15m0s)
│
│ with google_compute_image.student-image,
│ on main.tf line 29, in resource "google_compute_image" "student-image":
│ 29: resource "google_compute_image" "student-image" {
│
╵
╷
│ Error: Error creating Firewall: googleapi: Error 409: The resource 'projects/**-****-**********-******/global/firewalls/*****-*********-*******-*****-firewall' already exists, alreadyExists
│
│ with google_compute_firewall.default,
│ on main.tf line 46, in resource "google_compute_firewall" "default":
│ 46: resource "google_compute_firewall" "default" {
Some (perhaps salient) details:
I have previously provisioned these resources successfully using this same approach.
My billing account has since changed.
At another point, it was saying that the machine image existed (which, if it does, I can't see either in the console or the CLI).
I welcome any insights/suggestions.
EDIT
Including HCL; variables are defined in variables.tf and terraform.tfvars
provider "google" {
region = var.region
}
resource "google_compute_image" "student-image" {
name = var.google_compute_image_name
project = var.project
raw_disk {
source = var.google_compute_image_source
}
timeouts {
create = "15m"
update = "15m"
delete = "15m"
}
}
resource "google_compute_firewall" "default" {
name = "cloud-computing-project-image-firewall"
network = "default"
project = var.project
allow {
protocol = "tcp"
# 22: SSH
# 80: HTTP
ports = [
"22",
"80",
]
}
source_ranges = ["0.0.0.0/0"]
}
module "vm" {
source = "./vm"
name = "workspace-vm"
project = var.project
image = google_compute_image.student-image.self_link
machine_type = "n1-standard-1"
}
There is a vm subdirectory with main.tf:
resource "google_compute_instance" "student_instance" {
name = var.name
machine_type = var.machine_type
zone = var.zone
project = var.project
boot_disk {
initialize_params {
image = var.image
size = var.disk_size
}
}
network_interface {
network = "default"
access_config {
}
}
labels = {
project = "machine-learning-on-the-cloud"
}
}
...and variables.tf:
variable "name" {}
variable "project" {}
variable "zone" {
default = "us-east1-b"
}
variable "image" {}
variable "machine_type" {}
variable "disk_size" {
default = 20
}
It sounds like the resources were provisioned with Terraform but perhaps someone changed or deleted them manually, so your state file and what actually exists no longer match. terraform refresh might solve your problem.
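If a refresh doesn't reconcile things, the 409 can also be cleared by adopting the existing firewall into state (a sketch; the project ID is a placeholder for your own):

```shell
# Re-sync the state file with what actually exists
terraform refresh

# Or import the already-existing firewall into state
terraform import google_compute_firewall.default \
  projects/YOUR_PROJECT_ID/global/firewalls/cloud-computing-project-image-firewall
```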
I used this module to create a security group inside a VPC. One of the outputs is the security_group_id, but I'm getting this error:
│ Error: Unsupported attribute
│
│ on ecs.tf line 39, in resource "aws_ecs_service" "hello_world":
│ 39: security_groups = [module.app_security_group.security_group_id]
│ ├────────────────
│ │ module.app_security_group is a object, known only after apply
│
│ This object does not have an attribute named "security_group_id".
I need the security group for an ECS service:
resource "aws_ecs_service" "hello_world" {
name = "hello-world-service"
cluster = aws_ecs_cluster.container_service_cluster.id
task_definition = aws_ecs_task_definition.hello_world.arn
desired_count = 1
launch_type = "FARGATE"
network_configuration {
security_groups = [module.app_security_group.security_group_id]
subnets = module.vpc.private_subnets
}
load_balancer {
target_group_arn = aws_lb_target_group.loadbalancer_target_group.id
container_name = "hello-world-app"
container_port = 3000
}
depends_on = [aws_lb_listener.loadbalancer_listener, module.app_security_group]
}
I understand that I can only know the security group ID after it is created. That's why I added the depends_on part to the ECS stanza, but it kept returning the same error.
Update
I specified count as 1 on the app_security_group module and this is the error I'm getting now.
│ Error: Unsupported attribute
│
│ on ecs.tf line 39, in resource "aws_ecs_service" "hello_world":
│ 39: security_groups = module.app_security_group.security_group_id
│ ├────────────────
│ │ module.app_security_group is a list of object, known only after apply
│
│ Can't access attributes on a list of objects. Did you mean to access an attribute for a specific element of the list, or across all elements of the list?
Update II
This is the module declaration:
module "app_security_group" {
source = "terraform-aws-modules/security-group/aws//modules/web"
version = "3.17.0"
name = "${var.project}-web-sg"
description = "Security group for web-servers with HTTP ports open within VPC"
vpc_id = module.vpc.vpc_id
# ingress_cidr_blocks = module.vpc.public_subnets_cidr_blocks
ingress_cidr_blocks = ["0.0.0.0/0"]
}
I took a look at that module. The problem is that version 3.17.0 of the module simply does not have a security_group_id output. You are using a really old version.
The latest version is 4.7.0, and that is the one you would want to upgrade to. In fact, any version from 4.0.0 onwards has the security_group_id output, so you need at least 4.0.0.
Since you are using count, try the following:
network_configuration {
security_groups = [module.app_security_group[0].security_group_id]
subnets = module.vpc.private_subnets
}
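For the version upgrade itself, the module declaration only needs its version pinned higher (a sketch; the other arguments are unchanged from the question):

```hcl
module "app_security_group" {
  source  = "terraform-aws-modules/security-group/aws//modules/web"
  version = "4.7.0" # any release >= 4.0.0 exposes security_group_id

  name        = "${var.project}-web-sg"
  description = "Security group for web-servers with HTTP ports open within VPC"
  vpc_id      = module.vpc.vpc_id

  ingress_cidr_blocks = ["0.0.0.0/0"]
}
```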
I am creating a Terraform module for AWS VPC creation.
Here is my directory structure
➢ tree -L 3
.
├── main.tf
├── modules
│ ├── subnets
│ │ ├── main.tf
│ │ ├── outputs.tf
│ │ └── variables.tf
│ └── vpc
│ ├── main.tf
│ ├── outputs.tf
│ └── variables.tf
└── variables.tf
3 directories, 12 files
In the subnets module, I want to grab the vpc id of the vpc (sub)module.
In modules/vpc/outputs.tf I use:
output "my_vpc_id" {
value = "${aws_vpc.my_vpc.id}"
}
Will this be enough for me to do the following in modules/subnets/main.tf?
resource "aws_subnet" "env_vpc_sn" {
...
vpc_id = "${aws_vpc.my_vpc.id}"
}
Your main.tf (or wherever you use the subnet module) would need to pass this in from the output of the VPC module, and your subnet module needs to take it as a required variable.
To access a module's output you need to reference it as module.<MODULE NAME>.<OUTPUT NAME>:
In a parent module, outputs of child modules are available in expressions as module.<MODULE NAME>.<OUTPUT NAME>. For example, if a child module named web_server declared an output named instance_ip_addr, you could access that value as module.web_server.instance_ip_addr.
So your main.tf would look something like this:
module "vpc" {
# ...
}
module "subnets" {
vpc_id = "${module.vpc.my_vpc_id}"
# ...
}
and subnets/variables.tf would look like this:
variable "vpc_id" {}
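Inside the module, the subnet resource then uses that variable instead of referencing the VPC resource directly (a sketch; the remaining subnet arguments are omitted):

```hcl
resource "aws_subnet" "env_vpc_sn" {
  vpc_id = "${var.vpc_id}"
  # ... cidr_block, availability_zone, etc. ...
}
```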