I've created a VPC peering connection in Terraform between a new VPC created in Terraform and an existing one that was created in the AWS console.
I need to import the existing VPC's route table so I can add a new route referencing the vpc_peering_connection_id. However, I'm running into errors trying to import and would gladly take some guidance here.
This is the command I'm running in the CLI from the root directory of the project:
terraform import aws_route_table.main rtb-12345
This is what I get back:
Error: resource address "aws_route_table.main" does not exist in the configuration.
Before importing this resource, please create its configuration in the root module. For example:
resource "aws_route_table" "main" {
# (resource arguments)
}
It's a pretty large project, but this is the basic project structure:
-root directory
  -aws
    -main.tf
    -route_table
      -main.tf
  -main.tf
I do have the configuration the error is asking for, in the route_table folder's main.tf file:
resource "aws_route_table" "main" {
  vpc_id = data.aws_vpc.default.id
}
I am also referencing the aws folder in the root directory's main.tf file:
module "aws" {
  source = "./aws"
}
and then referencing the route_table folder in the aws directory's main.tf file:
module "route_table" {
  source = "./route_table"
}
I'm not sure what I'm doing wrong here or how exactly to import the route table successfully. I would appreciate any help, thanks!
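Since aws_route_table.main lives inside nested modules (the root module calls aws, which calls route_table), the import address must include the full module path, as the later answers in this thread illustrate. A sketch, assuming the module names above:
terraform import module.aws.module.route_table.aws_route_table.main rtb-12345
The bare address aws_route_table.main only matches a resource defined directly in the root module, which is why Terraform reports that it does not exist in the configuration.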
I'm pretty new to Terraform, so my apologies if this question has an obvious answer I'm missing.
I am trying to create a Terraform configuration file for an existing organization. I am able to provision everything I have in the main.tf outlined below, except for the Shared folder that already exists within this organization.
Related GitHub issues:
The folder operation violates display name uniqueness within the parent.
Generic error message when folder rename matches existing folder
Here are the steps I followed:
Manually create a Shared folder within the organization administration UI.
Manually create a Terraform admin project <redacted-project-name> at the root of the Shared folder.
Manually create a service account named terraform#<redacted-project-name> from the terraform admin project
Create, download and securely store a key for the terraform#<redacted-project-name> service account.
Enable APIs : cloudresourcemanager.googleapis.com, cloudbilling.googleapis.com, iam.googleapis.com, serviceusage.googleapis.com within the terraform admin project
Set the service account's permissions to roles/owner, roles/resourcemanager.organizationAdmin, roles/resourcemanager.folderAdmin and roles/resourcemanager.projectCreator.
Create the main.tf
terraform {
  required_providers {
    google = {
      source  = "hashicorp/google"
      version = "3.85.0"
    }
  }
}

provider "google" {
  credentials = file(var.credentials_file)
  region      = var.region
  zone        = var.zone
}

data "google_organization" "org" {
  organization = var.organization.id
}

resource "google_folder" "shared" {
  display_name = "Shared"
  parent       = data.google_organization.org.name
}

resource "google_folder" "ddm" {
  display_name = "Data and Digital Marketing"
  parent       = data.google_organization.org.name
}

resource "google_folder" "dtl" {
  display_name = "DTL"
  parent       = google_folder.ddm.name
}
The error I receive:
Error: Error creating folder 'Shared' in 'organizations/<redacted-org-id>': Error waiting for creating folder: Error code 9, message: Folder reservation failed for parent [organizations/<redacted-org-id>], folder [] due to constraint: The folder operation violates display name uniqueness within the parent.
How do I include existing resources within the Terraform config file?
For (organization) folders (such as the example above)
For the billing account
For projects, i.e., am I supposed to declare or import the Terraform admin project within the main.tf?
For service accounts: how do I handle existing keys and permissions of the account that is running the terraform apply?
For existing policies and enabled APIs
In order to include already-existing resources in the Terraform configuration, use the terraform import command.
For Folders
In the Terraform documentation for google_folder:
# Both syntaxes are valid
$ terraform import google_folder.department1 1234567
$ terraform import google_folder.department1 folders/1234567
So for the example above,
Fetch the folder ID using gcloud alpha resource-manager folders list --organization=<redacted_org_id>, providing the organization ID.
Save the folder ID somewhere and, if not already done, declare the folder as a resource within the main.tf:
resource "google_folder" "shared" {
display_name = "Shared"
parent = data.google_organization.org.name
}
Run the command terraform import google_folder.shared folders/<redacted_folder_id>. You should get an output like google_folder.shared: Import prepared!
Check that your infrastructure matches the configuration by running terraform plan; you should see:
No changes. Your infrastructure matches the configuration.
Terraform has compared your real infrastructure against your configuration
and found no differences, so no changes are needed.
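The other resources in the question follow the same pattern: declare a matching resource block, then import by the provider-specific ID. For example, a sketch for the Terraform admin project (the resource label admin and the attributes are assumptions; they must match the real project):
resource "google_project" "admin" {
  name       = "<redacted-project-name>" # assumed display name
  project_id = "<redacted-project-name>" # the real project ID goes here
  folder_id  = google_folder.shared.name
}
terraform import google_project.admin <redacted-project-name>
google_project is imported by its project ID, as described in the provider docs.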
I am working on a Python and AWS course on Coursera, but instead of creating S3, API Gateway and the other resources with boto3, I am using Terraform. Everything has been going fine so far, but I am now facing an issue with my Lambda directory structure: every Lambda has its own directory, and I have to cd into each directory to apply changes using terraform apply.
Example
Below is my Terraform code for one of the Lambda functions. <<validate.tf>>
provider "aws" {
region = "us-east-2"
}
terraform {
required_version = "> 0.14"
required_providers {
aws = "~> 3.0"
}
backend "s3" {
bucket = "nyeisterraformstatedata2"
key = "api_gateway/lambda_function/terraform_api_gateway_lambda_validate.tfstate"
region = "us-east-2"
dynamodb_table = "terraform-up-and-running-locks-2"
encrypt = true
}
}
data "archive_file" "zip_file" {
type = "zip"
source_dir = "${path.module}/lambda_dependency_and_function"
output_path = "${path.module}/lambda_dependency_and_function.zip"
}
resource "aws_lambda_function" "get_average_rating_lambda" {
filename = "lambda_dependency_and_function.zip"
function_name = "validate"
role = data.aws_iam_role.lambda_role_name.arn
handler = "validate.lambda_handler"
# The filebase64sha256() function is available in Terraform 0.11.12 and later
# For Terraform 0.11.11 and earlier, use the base64sha256() function and the file() function:
# source_code_hash = "${base64sha256(file("lambda_function_payload.zip"))}"
source_code_hash = filebase64sha256(data.archive_file.zip_file.output_path)
runtime = "python3.8"
depends_on = [data.archive_file.zip_file]
}
<<variable.tf>>
data "aws_iam_role" "lambda_role_name" {
name = "common_lambda_role_s3_api_gateway_2"
}
Based on the comment below, I created a main.tf with the following code:
provider "aws" {
region = "us-east-2"
}
module "test" {
source = "../validate"
}
But when I try to import with the import command, it gives me an error that I cannot figure out how to solve:
terraform import module.test.aws_lambda_function.test1 get_average_rating_lambda
Warning: Backend configuration ignored
│
│ on ../validate/validate.tf line 10, in terraform:
│ 10: backend "s3" {
│
│ Any selected backend applies to the entire configuration, so Terraform expects provider configurations only in the root module.
│
│ This is a warning rather than an error because it's sometimes convenient to temporarily call a root module as a child module for testing purposes, but this backend configuration block will have no
│ effect.
╵
Error: resource address "module.test.aws_lambda_function.test1" does not exist in the configuration.
Before importing this resource, please create its configuration in module.test. For example:
resource "aws_lambda_function" "test1" {
# (resource arguments)
}
So my question is: is there a way for Terraform to tell which files have changed and apply them in one go, rather than one by one? Since I am new to Terraform, if anyone thinks this is the wrong way to structure the project, please let me know. Thank you!
What you could do is create a new directory with a main.tf file and make it a project that contains your whole cloud environment. Each of your existing folders can then be imported as a module; if each folder is already a working Terraform project, it can be used as a module without changes.
You would then use the terraform import command to bring each resource into the new root state, in a fashion similar to terraform import module.test.aws_lambda_function.get_average_rating_lambda validate for the Lambda above (the address must name the module, the resource type and the resource label from the configuration), and likewise for any other managed resources.
Then, instead of a state file for each Lambda, you would have a single state file for your whole environment. From there, Terraform is smart enough to detect the individual changes and update accordingly.
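A minimal sketch of what that consolidated root main.tf could look like, assuming the validate folder above is reused unchanged (the module name and the single state key are assumptions; the backend block inside the child folder is ignored with the warning shown above, so it can eventually be removed):
provider "aws" {
  region = "us-east-2"
}

terraform {
  backend "s3" {
    bucket         = "nyeisterraformstatedata2"
    key            = "environment/terraform.tfstate" # assumed single state key
    region         = "us-east-2"
    dynamodb_table = "terraform-up-and-running-locks-2"
    encrypt        = true
  }
}

module "validate" {
  source = "../validate"
}

# each remaining lambda folder becomes another module block
With that in place, the existing function is imported under its module address (for aws_lambda_function the import ID is the function name):
terraform import module.validate.aws_lambda_function.get_average_rating_lambda validate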
I have an AWS account that already has ECR and IAM set up. I am now creating a new environment using Terraform modules, but I could not find a way to import the existing IAM and ECR resources into my modules. When I run the command terraform import aws_ecr_repository.c2m_an c2m_an, I get this error:
Error: resource address "aws_ecr_repository.c2m_cs" does not exist in the configuration.
Before importing this resource, please create its configuration in the root module. For example:
resource "aws_ecr_repository" "c2m_cs" {
# (resource arguments)
}
My ECR module definition is as follows:
resource "aws_ecr_repository" "c2m_cs" {
name = var.c2m_cs#"c2m_cs"
}
output "c2m_cs" {
value = "terraform import aws_ecr_repository.c2m_cs c2m_cs"
}
And in the main.tf file within my environment folder, I have the module definition as follows:
module "ecr" {
source = "../modules/ecr"
c2m_cs = module.ecr.c2m_cs
}
The correct way to import a resource into a module is exemplified in the docs:
terraform import module.foo.aws_instance.bar i-abcd1234
Thus in your case, with the module labeled ecr and the resource labeled c2m_cs, it would be:
terraform import module.ecr.aws_ecr_repository.c2m_cs c2m_cs
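The address always spells out the full path, module.<module_name>.<resource_type>.<resource_name>, followed by the provider-specific ID (for aws_ecr_repository, the repository name). After the import, a plan should come back clean if the configuration matches the real repository:
terraform plan -target=module.ecr
(-target only narrows the plan output here; a plain terraform plan works just as well.)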
I have an existing, manually created Fargate cluster named "test-cluster" in us-west-1.
In my Terraform configuration file I created:
resource "aws_ecs_cluster" "mycluster" {
}
Then I ran the terraform import command:
terraform import aws_ecs_cluster.mycluster test-cluster
and received this error message:
Error: Cannot import non-existent remote object
While attempting to import an existing object to aws_ecs_cluster.cluster, the
provider detected that no object exists with the given id. Only pre-existing
objects can be imported; check that the id is correct and that it is
associated with the provider's configured region or endpoint, or use
"terraform apply" to create a new remote object for this resource.
I've also run aws configure, setting the correct region.
Based on the comments, the issue was caused by using the wrong account in Terraform and/or the AWS console.
The solution was to use the correct account, i.e., to make sure the credentials Terraform uses point at the account and region where the cluster actually exists.
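A quick way to check which account and region your credentials actually resolve to (standard AWS CLI commands, run with the same profile Terraform uses):
aws sts get-caller-identity              # prints the account ID and ARN of the active credentials
aws ecs list-clusters --region us-west-1 # "test-cluster" should appear here if account and region are right
If the cluster does not show up in that list, Terraform's provider is looking at a different account or region than the one the cluster was created in.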
We're using terraform to spin up our infrastructure within AWS and we have 3 separate environments: Dev, Stage and Prod
Dev: requires public, private_1a, privatedb and privatedb2 subnets
Stage & Prod: require public, private_1a, private_1b, privatedb and privatedb2 subnets
I have main.tf, variables.tf, dev.tfvars, stage.tfvars and prod.tfvars. I'm trying to understand how I can use the main.tf file I currently use for the dev environment to also create the resources required for stage and prod via the .tfvars files:
terraform apply -var-file=dev.tfvars
terraform apply -var-file=stage.tfvars (this should create subnet private_1b in addition to the other subnets)
terraform apply -var-file=prod.tfvars (this should create subnet private_1b in addition to the other subnets)
Please let me know if you need further clarification.
Thanks,
What you are trying to do is indeed the correct approach. You will also have to make use of Terraform workspaces.
Terraform starts with a single workspace named "default". This workspace is special both because it is the default and also because it cannot ever be deleted. If you've never explicitly used workspaces, then you've only ever worked on the "default" workspace.
Workspaces are managed with the terraform workspace set of commands. To create a new workspace and switch to it, you can use terraform workspace new; to switch environments you can use terraform workspace select; etc.
In essence, this means you will have a workspace for each environment.
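Concretely, the one-time setup for the environments below is just:
terraform workspace new dev
terraform workspace new production
terraform workspace list   # the active workspace is marked with an asterisk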
Let's see some examples.
I have the following files:
main.tf
variables.tf
dev.tfvars
production.tfvars
main.tf
This file contains the VPC module (it can be any resource, of course). We reference the variables via var.:
module "vpc" {
source = "modules/vpc"
cidr_block = "${var.vpc_cidr_block}"
subnets_private = "${var.vpc_subnets_private}"
subnets_public = "${var.vpc_subnets_public}"
}
variables.tf
This file contains all our variables. Note that we do not assign defaults here; this makes sure we are 100% certain that the values come from the .tfvars files.
variable "vpc_cidr_block" {}
variable "vpc_subnets_private" {
type = "list"
}
variable "vpc_subnets_public" {
type = "list"
}
That's basically it. Our .tfvars files will look like this:
dev.tfvars
vpc_cidr_block = "10.40.0.0/16"
vpc_subnets_private = ["10.40.0.0/19", "10.40.64.0/19", "10.40.128.0/19"]
vpc_subnets_public = ["10.40.32.0/20", "10.40.96.0/20", "10.40.160.0/20"]
production.tfvars
vpc_cidr_block = "10.30.0.0/16"
vpc_subnets_private = ["10.30.0.0/19", "10.30.64.0/19", "10.30.128.0/19"]
vpc_subnets_public = ["10.30.32.0/20", "10.30.96.0/20", "10.30.160.0/20"]
If I wanted to run Terraform for my dev environment, these are the commands I would use (assuming the workspaces are already created; see the Terraform workspace docs):
Select the dev environment: terraform workspace select dev
Run a plan to see the changes: terraform plan -var-file=dev.tfvars -out=plan.out
Apply the changes: terraform apply plan.out
You can replicate this for as many environments as you like.
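To cover the subnet difference from the question (stage and prod needing private_1b on top of dev's subnets), the module can create one subnet per list element, so an environment gains a subnet simply by having one more CIDR in its .tfvars list. A rough sketch of the inside of modules/vpc under that assumption (the aws_vpc.main reference and resource names are hypothetical):
resource "aws_subnet" "private" {
  count      = "${length(var.subnets_private)}"               # one subnet per CIDR in the list
  vpc_id     = "${aws_vpc.main.id}"                           # assumed VPC resource inside the module
  cidr_block = "${element(var.subnets_private, count.index)}"
}
With this, stage.tfvars and prod.tfvars simply list one more private CIDR than dev.tfvars, and nothing in main.tf has to change per environment.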