I have a module that defines a provider as follows:
provider "aws" {
region = "${var.region}"
shared_credentials_file = "${module.global_variables.shared_credentials_file}"
profile = "${var.profile}"
}
and an EC2 instance as follows:
resource "aws_instance" "node" {
ami = "${lookup(var.ami, var.region)}"
key_name = "ib-us-east-2-production"
instance_type = "${var.instance_type}"
count = "${var.count}"
security_groups = "${var.security_groups}"
tags {
Name = "${var.name}"
}
root_block_device {
volume_size = 100
}
In the terraform script that calls this module, I would now like to create an ELB and point it at the instance, with something along the lines of:
resource "aws_elb" "node_elb" {
name = "${var.name}-elb"
.........
However, Terraform keeps prompting me for the AWS region that is already defined in the module. The only way around this is to copy the provider block into the file calling the module. Is there a cleaner way to approach this?
The only way around this is to copy the provider block into the file calling the module.
The provider block should actually be in the file calling the module, and you can remove it from your module.
From the docs:
For convenience in simple configurations, a child module automatically inherits default (un-aliased) provider configurations from its parent. This means that explicit provider blocks appear only in the root module, and downstream modules can simply declare resources for that provider and have them automatically associated with the root provider configurations.
https://www.terraform.io/docs/configuration/modules.html#implicit-provider-inheritance
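For illustration, a minimal sketch of that layout (the module path ./modules/node and the instance_ids output are assumptions, not from the original code): the provider block lives only in the root configuration, and the module's own provider block is removed.
# root main.tf -- the only place a provider block appears
provider "aws" {
  region                  = "${var.region}"
  shared_credentials_file = "${var.shared_credentials_file}"
  profile                 = "${var.profile}"
}

# the child module inherits the default aws provider automatically
module "node" {
  source = "./modules/node" # hypothetical path
}

# assumes ./modules/node declares:
#   output "instance_ids" { value = ["${aws_instance.node.*.id}"] }
resource "aws_elb" "node_elb" {
  name               = "${var.name}-elb"
  availability_zones = ["us-east-2a"] # assumption; use subnets in a VPC setup
  instances          = ["${module.node.instance_ids}"]

  listener {
    instance_port     = 80
    instance_protocol = "http"
    lb_port           = 80
    lb_protocol       = "http"
  }
}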
Related
I have a service deployed on GCP compute engine. It consists of a compute engine instance template, instance group, instance group manager, and load balancer + associated forwarding rules etc.
We're forced into using compute engine rather than Cloud Run or some other serverless offering due to the need for docker-in-docker for the service in question.
The deployment is managed by terraform. I have a config that looks something like this:
data "google_compute_image" "debian_image" {
family = "debian-11"
project = "debian-cloud"
}
resource "google_compute_instance_template" "my_service_template" {
name = "my_service"
machine_type = "n1-standard-1"
disk {
source_image = data.google_compute_image.debian_image.self_link
auto_delete = true
boot = true
}
...
metadata_startup_script = data.local_file.startup_script.content
metadata = {
MY_ENV_VAR = var.whatever
}
}
resource "google_compute_region_instance_group_manager" "my_service_mig" {
version {
instance_template = google_compute_instance_template.my_service_template.id
name = "primary"
}
...
}
resource "google_compute_region_backend_service" "my_service_backend" {
...
backend {
group = google_compute_region_instance_group_manager.my_service_mig.instance_group
}
}
resource "google_compute_forwarding_rule" "my_service_frontend" {
depends_on = [
google_compute_region_instance_group_manager.my_service_mig,
]
name = "my_service_ilb"
backend_service = google_compute_region_backend_service.my_service_backend.id
...
}
I'm running into issues where Terraform is unable to perform any kind of update to this service without running into conflicts. It seems that instance templates are immutable in GCP, and doing anything like updating the startup script, adding an env var, or similar forces it to be deleted and re-created.
Terraform prints info like this in that situation:
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  ~ update in-place
-/+ destroy and then create replacement

Terraform will perform the following actions:

  # module.connectors_compute_engine.google_compute_instance_template.airbyte_translation_instance1 must be replaced
-/+ resource "google_compute_instance_template" "my_service_template" {
      ~ id       = "projects/project/..." -> (known after apply)
      ~ metadata = { # forces replacement
          + "TEST" = "test"
            # (1 unchanged element hidden)
        }
The only solution I've found for getting out of this situation is to delete the entire service and all associated entities, from the load balancer down to the instance template, and re-create them.
Is there some way to avoid this situation so that I'm able to change the instance template without having to manually update all the terraform config twice? At this point I'm even fine if it ends up creating some downtime for the service in question rather than a full rolling update or something, since that's what's happening now anyway.
I ran into this issue as well.
However, according to:
https://registry.terraform.io/providers/hashicorp/google/latest/docs/resources/compute_instance_template#using-with-instance-group-manager
Instance Templates cannot be updated after creation with the Google Cloud Platform API. In order to update an Instance Template, Terraform will destroy the existing resource and create a replacement. In order to effectively use an Instance Template resource with an Instance Group Manager resource, it's recommended to specify create_before_destroy in a lifecycle block. Either omit the Instance Template name attribute, or specify a partial name with name_prefix.
I would also test a plan with this lifecycle meta-argument:
+ lifecycle {
+   prevent_destroy = true
+ }
}
Or, more realistically in your specific case, drop the fixed name in favour of name_prefix as the docs quoted above recommend, and add create_before_destroy:
resource "google_compute_instance_template" "my_service_template" {
  name_prefix  = "my-service-"
  ...

+ lifecycle {
+   create_before_destroy = true
+ }
}
So run terraform plan with either create_before_destroy or prevent_destroy = true on google_compute_instance_template before terraform apply to see the results. Note that prevent_destroy makes Terraform error out on any plan that would replace the template (useful for spotting what forces replacement), while create_before_destroy lets the replacement happen in a safe order.
Ultimately, you can remove google_compute_instance_template.my_service_template from the state file and import it back.
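For example (a sketch; the project and template name segments of the import ID are placeholders, check the provider docs for the exact format):
terraform state rm google_compute_instance_template.my_service_template
terraform import google_compute_instance_template.my_service_template projects/YOUR_PROJECT/global/instanceTemplates/YOUR_TEMPLATE_NAME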
Some suggested workarounds in this thread:
terraform lifecycle prevent destroy
I have a question about how to use modules in Terraform.
See my code below:
module "aws_vpc"{
source = "../modules/vpc"
vpc_cidr_block = "192.168.0.0/16"
name_cidr = "ec2-eks"
name_subnet = "ec2-eks-subnet"
subnet_cidr = ["192.168.1.0/25"]
}
module "ec2-eks" {
source = "../modules/ec2"
ami_id = "ami-07c8bc5c1ce9598c3"
subnet_id = module.aws_vpc.aws_subnet[0]
count_server = 1
}
output "aws_vpc" {
value = module.aws_vpc.aws_subnet[0]
}
I'm creating a VPC and, as the next step, want to attach an EC2 instance to the subnet I created. But Terraform attaches it to the default VPC.
What do I need to do to attach the EC2 instance to my VPC (subnet)?
Thank you for your answers.
What do I need to do to attach the EC2 instance to my VPC (subnet)?
aws_instance has a subnet_id attribute. Thus, to place your instance in a given subnet, you have to set the subnet_id.
Since you are using a module to create your aws_vpc, the module will likely output the subnet IDs as well. Given the lack of details about the module, it's difficult to provide an exact answer, but in a general scenario you would do something along these lines (example):
resource "aws_instance" "web" {
ami = data.aws_ami.ubuntu.id
instance_type = "t3.micro"
subnet_id = module.aws_vpc.subnet_id
tags = {
Name = "HelloWorld"
}
}
Obviously, the above depends on the implementation of your module.
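For the module.aws_vpc.aws_subnet[0] reference from the question to work, the VPC module would need to declare an output along these lines (a sketch; the aws_subnet resource name inside the module is an assumption):
# inside ../modules/vpc
output "aws_subnet" {
  # the list of subnet IDs created by the module
  value = aws_subnet.this.*.id
}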
Thank you.
I've now got the resources successfully created in AWS. I had forgotten to set the subnet_id parameter in the ec2 module.
I'm getting the following error when trying to initially plan or apply a resource that uses data values from the AWS environment in a count.
$ terraform plan
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.
------------------------------------------------------------------------
Error: Invalid count argument
on main.tf line 24, in resource "aws_efs_mount_target" "target":
24: count = length(data.aws_subnet_ids.subnets.ids)
The "count" value depends on resource attributes that cannot be determined
until apply, so Terraform cannot predict how many instances will be created.
To work around this, use the -target argument to first apply only the
resources that the count depends on.
$ terraform --version
Terraform v0.12.9
+ provider.aws v2.30.0
I tried using the target option, but it doesn't seem to work on data sources.
$ terraform apply -target aws_subnet_ids.subnets
Apply complete! Resources: 0 added, 0 changed, 0 destroyed.
The only solution I found that works is:
1. remove the resource
2. apply the project
3. add the resource back
4. apply again
Here is a terraform config I created for testing.
provider "aws" {
version = "~> 2.0"
}
locals {
project_id = "it_broke_like_3_collar_watch"
}
terraform {
required_version = ">= 0.12"
}
resource aws_default_vpc default {
}
data aws_subnet_ids subnets {
vpc_id = aws_default_vpc.default.id
}
resource aws_efs_file_system efs {
creation_token = local.project_id
encrypted = true
}
resource aws_efs_mount_target target {
depends_on = [ aws_efs_file_system.efs ]
count = length(data.aws_subnet_ids.subnets.ids)
file_system_id = aws_efs_file_system.efs.id
subnet_id = tolist(data.aws_subnet_ids.subnets.ids)[count.index]
}
Finally figured out the answer after researching the answer by Dude0001.
Short answer: use the aws_vpc data source with the default argument instead of the aws_default_vpc resource. Here is the working sample with comments on the changes.
locals {
  project_id = "it_broke_like_3_collar_watch"
}

terraform {
  required_version = ">= 0.12"
}

// Delete this --> resource "aws_default_vpc" "default" {}

// Add this
data "aws_vpc" "default" {
  default = true
}

data "aws_subnet_ids" "subnets" {
  // Update this from aws_default_vpc.default.id
  vpc_id = data.aws_vpc.default.id
}

resource "aws_efs_file_system" "efs" {
  creation_token = local.project_id
  encrypted      = true
}

resource "aws_efs_mount_target" "target" {
  depends_on = [aws_efs_file_system.efs]

  count          = length(data.aws_subnet_ids.subnets.ids)
  file_system_id = aws_efs_file_system.efs.id
  subnet_id      = tolist(data.aws_subnet_ids.subnets.ids)[count.index]
}
What I couldn't figure out was why my work around of removing aws_efs_mount_target on the first apply worked. It's because after the first apply the aws_default_vpc was loaded into the state file.
So an alternate solution, without making changes to the original tf file, would be to use the target option on the first apply:
$ terraform apply --target aws_default_vpc.default
However, I don't like this as it requires a special case on the first deployment, which is unusual for the terraform deployments I've worked with.
The aws_default_vpc isn't a resource TF can create or destroy. It is the default VPC for your account in each region, which AWS creates automatically for you and which is protected from being destroyed. You can only (and need to) adopt it into management and your TF state. This will allow you to begin managing it and to inspect it when you run plan or apply. Otherwise, TF doesn't know what the resource is or what state it is in, and it cannot create a new one for you, as it is a special type of protected resource as described above.
With that said, go get the default VPC ID for the region you are deploying to in your account, then import it into your TF state. It should then be able to inspect and count the number of subnets.
For example
terraform import aws_default_vpc.default vpc-xxxxxx
https://www.terraform.io/docs/providers/aws/r/default_vpc.html
Using the data element for this looks a little odd to me as well. Can you change your TF script to get the count directly through the aws_default_vpc resource?
I have the following deploy.tf file:
provider "aws" {
region = "us-east-1"
}
provider "aws" {
alias = "us_west_1"
region = "us-west-2"
}
resource "aws_us_east_1" "my_test" {
# provider = "aws.us_east_1"
count = 1
ami = "ami-0820..."
instance_type = "t2.micro"
}
resource "aws_us_west_1" "my_test" {
provider = "aws.us_west_1"
count = 1
ami = "ami-0d74..."
instance_type = "t2.micro"
}
I am trying to use it to deploy 2 servers, one in each region. I keep getting errors like:
aws_us_east_1.narc_test: Provider doesn't support resource: aws_us_east_1
I have tried setting aliases for both provider blocks, and referring to the correct region in a number of different ways. I've read up on multi-region support, and some answers suggest this can be accomplished with modules; however, this is a simple test, and I'd like to keep it simple. Is this currently possible?
Yes, it can be used to create resources in different regions, even inside just one file. There is no need to use modules for your test scenario.
Your error is probably caused by a typo. If you want to launch an EC2 instance, the resource you want to create is aws_instance, not aws_us_west_1 or aws_us_east_1.
Sure enough, Terraform does not know this kind of resource since it simply does not exist. Change it to aws_instance and you should be good to go! Additionally, you should name the two resources differently to avoid a naming collision from using my_test for both, as in the sketch below.
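A corrected sketch (the truncated AMI IDs from the question are kept as placeholders):
provider "aws" {
  region = "us-east-1"
}

provider "aws" {
  alias  = "us_west_1"
  region = "us-west-2"
}

resource "aws_instance" "my_test_east" {
  # uses the default (us-east-1) provider
  count         = 1
  ami           = "ami-0820..." # truncated in the question
  instance_type = "t2.micro"
}

resource "aws_instance" "my_test_west" {
  provider      = "aws.us_west_1"
  count         = 1
  ami           = "ami-0d74..." # truncated in the question
  instance_type = "t2.micro"
}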
Step 1
Add region aliases in the main.tf file where you are going to execute terraform plan:
provider "aws" {
region = "eu-west-1"
alias = "main"
}
provider "aws" {
region = "us-east-1"
alias = "useast1"
}
Step 2
Add a providers block inside your module definition block:
module "lambda_edge_rule" {
providers = {
aws = aws.useast1
}
source = "../../../terraform_modules/lambda"
tags = var.tags
}
Step 3
Define the required providers inside your module (source = "../../../terraform_modules/lambda"):
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 2.7.0"
    }
  }
}

resource "aws_lambda_function" "lambda" {
  function_name = "blablabla"
  ...
}
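With this wiring, the child module receives the useast1-aliased configuration as its default aws provider, so every aws_* resource it declares (such as the Lambda above) is created in us-east-1 without any further changes inside the module.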
Note: Terraform v1.0.5 as of this writing.
I have created basic infrastructure as below, and I'm trying to see if modules will work for me to replicate infrastructure on AWS using Terraform.
variable "access_key" {}
variable "secret_key" {}
provider "aws" {
access_key = "${var.access_key}"
secret_key = "${var.secret_key}"
alias = "us-east-1"
region = "us-east-1"
}
variable "company" {}
module "test1" {
source = "./modules"
}
module "test2" {
source = "./modules"
}
And my module is as follows:
resource "aws_iam_policy" "api_dbaccess_policy" {
name = "lambda_dbaccess_policy"
policy = "${file("${path.module}/api-dynamodb-policy.json")}"
}
But when I use the same module twice in my main.tf, it gives me an error about the identically named IAM policy. How should I handle such a scenario?
I want to use the same main.tf for prod/stage/dev environments. How do I achieve that?
My actual module looks like the code in this question.
How do I make use of modules and name module resources dynamically, e.g. stage_iam_policy / prod_iam_policy etc.? Is this the right approach?
You're naming the IAM policy the same regardless of where you use the module. IAM policies are uniquely identified by their name rather than some random ID (unlike EC2 instances, which are identified as i-...), so you can't have two IAM policies with the same name in the same AWS account.
Instead you must add some extra uniqueness to the name, for example by appending a module parameter, something like this:
module "test1" {
source = "./modules"
enviroment = "foo"
}
module "test1" {
source = "./modules"
enviroment = "bar"
}
and in your module you'd have the following:
variable "environment" {}

resource "aws_iam_policy" "api_dbaccess_policy" {
  name   = "lambda_dbaccess_policy_${var.environment}"
  policy = "${file("${path.module}/api-dynamodb-policy.json")}"
}
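With the two module blocks above, the policies come out as lambda_dbaccess_policy_foo and lambda_dbaccess_policy_bar, so the names no longer collide.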
Alternatively, if you don't have a useful qualifier such as a name or environment, you could just straight up use some randomness:
resource "random_pet" "random" {}
resource "aws_iam_policy" "api_dbaccess_policy" {
name = "lambda_dbaccess_policy_${random_pet.random.id}"
policy = "${file("${path.module}/api-dynamodb-policy.json")}"
}
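Note that random_pet comes from the separate hashicorp/random provider, which terraform init installs alongside the aws provider.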