I have modules in another directory. I want to add a backend.tf and set the provider data from Linux environment variables, but Terraform gives an error. My structure looks like this:
main.tf
└── vpc
    ├── backend.tf
    ├── export.sh
    ├── vars.tf
    └── vpc.tf
## main.tf
module "my_vpc" {
  source           = "../../vpc"
  instance_tenancy = "default"
}
## backend.tf
terraform {
  backend "s3" {
    region  = "${var.aws_region}"
    bucket  = "${var.TERRAFORM_BUCKET}-vpc"
    profile = "${var.ORGANISATION}"
    key     = "${var.ORGANISATION}"
  }
}

provider "aws" {
  profile = "${var.ORGANISATION}"
  region  = "${var.aws_region}"
}
I've exported the ORGANISATION, REGION and TERRAFORM_BUCKET variables from the terminal, but I get this error:
Error: module "my_vpc": missing required argument "aws_region"
Error: module "my_vpc": missing required argument "TERRAFORM_BUCKET"
Error: module "my_vpc": missing required argument "ORGANISATION"
How can I solve this issue? Note: I want backend.tf to pick its values up from environment variables (dynamic and default variables). Please help!
The value for variables in a Terraform script can be provided in a couple of different ways (see Input Variable Configuration):

Via a .tfvars file => Variable Files
Via the command line
Via environment variables => Environment Variables
Since you are trying to provide them via environment variables, you should follow the required naming pattern:
$ export TF_VAR_terraform_bucket=bucket_name
$ export TF_VAR_organisation=org_name
Then, when you run terraform plan or terraform apply, Terraform will load those variables.
If you don't have the aws_region variable defined as an environment variable, you will need to put it in a .tfvars file and use terraform plan -var-file=config.tfvars, or pass it on the command line with terraform plan -var 'aws_region=us-east-1'.
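For example, a config.tfvars along these lines would work (the values are placeholders):

aws_region       = "us-east-1"
terraform_bucket = "bucket_name"
organisation     = "org_name"

$ terraform plan -var-file=config.tfvars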
This is all assuming that you have the variables defined in your vars.tf file:
variable "organisation" {
  type = "string"
}

variable "terraform_bucket" {
  type = "string"
}

variable "aws_region" {
  type = "string"
}
*** Edit 1
Thinking through your question: if the variables are needed inside the module, then you will need to update your call to the module to pass those variables in. I cannot tell from the formatting of your structure whether backend.tf, vars.tf and vpc.tf are inside the vpc folder or not.
module "my_vpc" {
  source           = "../../vpc"
  instance_tenancy = "default"

  bucket  = "${var.TERRAFORM_BUCKET}-vpc"
  profile = "${var.ORGANISATION}"
  key     = "${var.ORGANISATION}"
}
This is what the docs say about variables in backend config.
Only one backend may be specified and the configuration may not
contain interpolations. Terraform will validate this
This might help - #17288
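Since interpolation isn't allowed in the backend block, a common workaround (my sketch, not part of the quoted docs) is partial backend configuration: leave the dynamic values out of backend.tf entirely and supply them at init time from the environment variables you already exported:

terraform {
  backend "s3" {}
}

$ terraform init \
    -backend-config="region=$REGION" \
    -backend-config="bucket=${TERRAFORM_BUCKET}-vpc" \
    -backend-config="profile=$ORGANISATION" \
    -backend-config="key=$ORGANISATION"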
Related
I had an old Terraform configuration that worked perfectly. In short, I had a static website application I needed to deploy using Cloudfront & S3. Then I needed to deploy another application in the same way, but on another sub-domain.
For ease of helping, you can check the full source code here:
Old Terraform configuration: https://github.com/tal-rofe/tf-old
New Terraform configuration: https://github.com/tal-rofe/tf-new
So, my domain is example.io, and in the old configuration I had only a static application deployed, on app.example.com.
But as I need another application, it's going to be deployed on docs.example.com.
To avoid a lot of code duplication, I decided to create a local module for deploying a generic application onto Cloudfront & S3.
After doing so, it seems like terraform apply and terraform plan succeed, but not really, as no resources were changed at all: Apply complete! Resources: 0 added, 0 changed, 0 destroyed.
Not only are there no changes, but I also get an old output:
cloudfront_distribution_id = "blabla"
eks_kubeconfig = <sensitive>
This cloudfront_distribution_id output was the correct output using the old configuration. I expect to get these new outputs, as configured:
output "frontend_cloudfront_distribution_id" {
  description = "The distribution ID of deployed Cloudfront frontend"
  value       = module.frontend-static.cloudfront_distribution_id
}

output "docs_cloudfront_distribution_id" {
  description = "The distribution ID of deployed Cloudfront docs"
  value       = module.docs-static.cloudfront_distribution_id
}

output "eks_kubeconfig" {
  description = "EKS Kubeconfig content"
  value       = module.eks-kubeconfig.kubeconfig
  sensitive   = true
}
I'm using GitHub actions to apply my Terraform configuration with these steps:
- name: Terraform setup
  uses: hashicorp/setup-terraform@v2
  with:
    terraform_wrapper: false

- name: Terraform core init
  env:
    TERRAFORM_BACKEND_S3_BUCKET: ${{ secrets.TERRAFORM_BACKEND_S3_BUCKET }}
    TERRAFORM_BACKEND_DYNAMODB_TABLE: ${{ secrets.TERRAFORM_BACKEND_DYNAMODB_TABLE }}
  run: |
    terraform -chdir="./terraform/core" init \
      -backend-config="bucket=$TERRAFORM_BACKEND_S3_BUCKET" \
      -backend-config="dynamodb_table=$TERRAFORM_BACKEND_DYNAMODB_TABLE" \
      -backend-config="region=$AWS_REGION"

- name: Terraform core plan
  run: terraform -chdir="./terraform/core" plan -no-color -out state.tfplan

- name: Terraform core apply
  run: terraform -chdir="./terraform/core" apply state.tfplan
I used the same steps in my old & new Terraform configurations.
I want to re-use the logic written in my static-app module twice, so basically I want to be able to create a static application just by using the module I've configured.
You cannot define the outputs in the root module and expect them to work, because you are already using a different module inside your static-app module (i.e., you are nesting modules). Since you are using the registry module there (denoted by source = "terraform-aws-modules/cloudfront/aws"), you are limited to the outputs that module provides, and hence can only define those outputs at the module level, not the root level.

I see you are referencing the EKS output that works, but the difference is that this particular module is not nested and is called directly (from your repo):
module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "19.5.1"
  ...
}
The way I would suggest fixing this is to call the Cloudfront module from the root module (i.e., core in your example):
module "frontend-static" {
  source  = "terraform-aws-modules/cloudfront/aws"
  version = "3.1.0"

  # ... rest of the configuration ...
}

module "docs-static" {
  source  = "terraform-aws-modules/cloudfront/aws"
  version = "3.1.0"

  # ... rest of the configuration ...
}
The outputs you currently have defined in your repo with new configuration (tf-new) should work out-of-the-box with this change. Alternatively, you could write your own module and then you can control which outputs you will have.
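If you do keep a wrapper module instead, the wrapper has to re-export anything the root wants to read. A minimal sketch, assuming the wrapper calls the registry module as module "cloudfront" (file and module names here are hypothetical):

# static-app/outputs.tf
output "cloudfront_distribution_id" {
  description = "Pass-through of the nested Cloudfront module's distribution ID"
  value       = module.cloudfront.cloudfront_distribution_id
}

The root-level outputs can then reference module.frontend-static.cloudfront_distribution_id as before.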
Can someone please tell me what I'm doing wrong here? I am trying to import a newly-restored RDS instance into Terraform state.
I have an application module that calls a core module from the company's repository:
\applications\rds\main.tf
module "rds" {
  source = "git::https://mysource-control/public/core-modules//aws/rds/rds?ref=v3.5.0"
}
Then I have a root, environment-specific module that calls the application module:
\env\us-west-2\qa\rds\main.tf
module "rds" {
  source = "../../../../applications/rds"
}
On import I am getting this error:
C:\....\env\us-west-2\qa\rds>terraform import module.rds.module.rds qa-db-instane
Error: Invalid address

  on line 1:
   1: module.rds.module.rds

A resource instance address is required here. The module path must be
followed by a resource instance specification.

For information on valid syntax, see:
https://www.terraform.io/docs/cli/state/resource-addressing.html
In the official documentation there is an example for importing RDS instances https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/db_instance#import
You'll notice they mention aws_db_instance.default, which is the Terraform resource type followed by whatever name you've given it.
In your case, if the underlying resource in your nested module is called, for example's sake, my_db, you'll be able to run:
terraform import module.rds.module.rds.aws_db_instance.my_db qa-db-instance
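If you're not sure of the resource name inside the nested module, one way to find the full address (my suggestion, beyond the linked docs) is to run a plan before the import; the address appears verbatim in the diff:

$ terraform plan | grep aws_db_instance
  # module.rds.module.rds.aws_db_instance.my_db will be created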
I am beginning to use Terraform with GCP modules; I am a novice with Terraform.
I downloaded a GCP module, "Google VM", and put it in my Terraform configuration to use it.
I created a main.tf file outside of the Google VM module, and I call this module the way the example provided in its "examples" folder does.
I copied/pasted this example into my main.tf file, with a var.tfvars file where I put my variables.
When I run terraform plan I get errors like:
Error: Reference to undeclared input variable

  on main.tf line 31, in module "instance_template":
  31:   project_id = var.project_id

An input variable with the name "project_id" has not been declared. This
variable can be declared with a variable "project_id" {} block.
I think I have forgotten something... Do variables have to be declared in the module itself and not just in the root module (tfvars)?
Thanks for your help.
Versioning:
Terraform v0.15.0
on linux_amd64
+ provider registry.terraform.io/hashicorp/aws v3.37.0
+ provider registry.terraform.io/hashicorp/random v3.1.0
I have the following folder structure:
terraform/
├── main.tf
└── terraform.tfvars
├── core
│ ├── main.tf
│ └── variables.tf
The files of concern here are:
terraform/main.tf
terraform/terraform.tfvars
core/main.tf
core/variables.tf
The high-level overview is that I'm trying to use TF to create a VPC in AWS using the terraform-aws-vpc module, so my root module calls the core module, which calls the community vpc module to do the build. The reason it's a child calling a child is that once I get this VPC step working, the "core" module is going to be responsible for building other infrastructure assets in the account. The idea is to have the root module call a child module for each important piece of the AWS account's assets.
The problem I'm running into here is one of variables.
In core/, I have the following variables defined in core/variables.tf:
variable "vpc_facts" {
  type = map
}
That variable is referenced for information for the vpc module in core/main.tf:
module "vpc" {
  source = "terraform-aws-modules/vpc/aws"

  name = var.vpc_facts.name
  cidr = var.vpc_facts.cidr
}
I define these values, which I hope will be passed into the module, in terraform/terraform.tfvars:
vpc_facts = {
  name = "demo-vpc"
  cidr = "192.168.0.0/16"
}
I then call the core module from terraform/main.tf
module "core" {
  source = "./core/"
}
I am trying to call terraform plan from my root folder of terraform/ - I have included the core as a module within terraform/main.tf
My thought process here is terraform/main.tf -> gathers terraform/terraform.tfvars -> core module -> uses terraform/terraform.tfvars as input variables for vpc module
However, that doesn't seem to be happening. I keep getting this error when running terraform plan from the root module folder:
Releasing state lock. This may take a few moments...
╷
│ Warning: Value for undeclared variable
│
│ The root module does not declare a variable named "vpc_facts" but a value was found in file "terraform.tfvars". If you meant to use
│ this value, add a "variable" block to the configuration.
│
│ To silence these warnings, use TF_VAR_... environment variables to provide certain "global" settings to all configurations in your
│ organization. To reduce the verbosity of these warnings, use the -compact-warnings option.
╵
╷
│ Error: Missing required argument
│
│ on main.tf line 57, in module "core":
│ 57: module "core" {
│
│ The argument "vpc_facts" is required, but no definition was found.
I would guess this should already be defined since I'm defining it in both tfvars and core/variables, but when I actually try to define it in the root main.tf:
module "core" {
  source    = "./core/"
  vpc_facts = var.vpc_facts
}
I get a different error:
╷
│ Error: Reference to undeclared input variable
│
│ on main.tf line 60, in module "core":
│ 60: vpc_facts = var.vpc_facts
│
│ An input variable with the name "vpc_facts" has not been declared. This variable can be declared with a variable "vpc_facts" {}
│ block.
but it is declared in core/variables.tf and given a value in terraform/terraform.tfvars.
What am I missing here? Does this mean I need to repeatedly define variables in both child modules and the root module? I would think that if a root module is calling a child module, it's a flat structure in terms of variables, and that child module can see child/variables.tf
Does this mean I need to repeatedly define variables in both child modules and the root module?
Yes, that is exactly what it means. All modules are self-contained, and a child module does not inherit variables from its parent. You have to explicitly declare the variables in the child module, and then set them in the parent module where you call it.
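Applied to your layout, that means declaring vpc_facts in the root module as well and passing it down explicitly; a minimal sketch based on the files you showed:

# terraform/variables.tf (new file in the root module)
variable "vpc_facts" {
  type = map
}

# terraform/main.tf
module "core" {
  source    = "./core/"
  vpc_facts = var.vpc_facts
}

With that in place, terraform.tfvars populates the root variable, and the root passes it to core.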
I have a set of Terraform files, and in particular one variables.tf file which holds variables like my AWS access key, AWS access token, etc. I now want to automate the resource creation on AWS using GitLab CI/CD.
My plan is the following:
Write a .gitlab-ci.yml file
Have the terraform calls in the .gitlab-ci.yml file
I know that I can have secret environment variables in GitLab, but I'm not sure how to push those variables into my Terraform variables.tf file, which currently looks like this:
# AWS Config

variable "aws_access_key" {
  default = "YOUR_ADMIN_ACCESS_KEY"
}

variable "aws_secret_key" {
  default = "YOUR_ADMIN_SECRET_KEY"
}

variable "aws_region" {
  default = "us-west-2"
}
In my .gitlab-ci.yml, I have access to the secrets like this:
- 'AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY}'
- 'AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID}'
- 'AWS_DEFAULT_REGION=${AWS_DEFAULT_REGION}'
How can I pipe these into my Terraform scripts? Any ideas? I would need to read the secrets from GitLab's environment and pass them on to the Terraform scripts!
Which executor are you using for your GitLab runners?
You don't necessarily need to use the Docker executor but can use a runner installed on a bare-metal machine or in a VM.
If you install the gettext package on the respective machine/VM as well, you can use the same method as I described in Referencing gitlab secrets in Terraform for the Docker executor.
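That package provides envsubst, so the rough idea (a sketch; the template file name is made up) is to keep a template whose placeholders are shell-style references, e.g. aws_access_key = "${AWS_ACCESS_KEY_ID}", and render it in the job:

# render the template with the runner's environment, then hand it to Terraform
envsubst < terraform.tfvars.tpl > terraform.tfvars
terraform plan -var-file=terraform.tfvars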
Another possibility could be that you set
job:
  stage: ...
  variables:
    TF_VAR_SECRET1: ${GITLAB_SECRET}
or
job:
  stage: ...
  script:
    - export TF_VAR_SECRET1=${GITLAB_SECRET}
in your CI job configuration and interpolate these. Please see Getting an Environment Variable in Terraform configuration? as well
Bear in mind that Terraform requires a TF_VAR_ prefix on environment variables, and the part after the prefix must match the declared variable name exactly (it is case-sensitive). So with the variables.tf above, you actually need something like this in .gitlab-ci.yml:
- 'TF_VAR_aws_access_key=${AWS_ACCESS_KEY_ID}'
- 'TF_VAR_aws_secret_key=${AWS_SECRET_ACCESS_KEY}'
- 'TF_VAR_aws_region=${AWS_DEFAULT_REGION}'
Which also means you could just set the variable in the pipeline with that prefix as well and not need this extra mapping step.
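Putting it together, a minimal job could look like this (a sketch; the job name and stage are placeholders):

deploy:
  stage: deploy
  variables:
    TF_VAR_aws_access_key: ${AWS_ACCESS_KEY_ID}
    TF_VAR_aws_secret_key: ${AWS_SECRET_ACCESS_KEY}
    TF_VAR_aws_region: ${AWS_DEFAULT_REGION}
  script:
    - terraform init
    - terraform plan -out plan.tfplan
    - terraform apply plan.tfplan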
I see you actually did discover this per your comment; I'm still posting this answer since I missed your comment the first time, and it would have saved me an hour of work.