terraform import an existing AWS resource to a module - amazon-web-services

I have an AWS account that already has ECR and IAM resources set up. I am now creating a new environment using Terraform modules, but I could not find a way to import the existing IAM and ECR resources into my modules. When I run the command terraform import aws_ecr_repository.c2m_cs c2m_cs, I get the following error:
Error: resource address "aws_ecr_repository.c2m_cs" does not exist in the configuration.
Before importing this resource, please create its configuration in the root module. For example:
resource "aws_ecr_repository" "c2m_cs" {
# (resource arguments)
}
My ECR module definition is as follows:
resource "aws_ecr_repository" "c2m_cs" {
  name = var.c2m_cs # "c2m_cs"
}

output "c2m_cs" {
  value = "terraform import aws_ecr_repository.c2m_cs c2m_cs"
}
And in my main.tf file within my environment folder, I have a module definition as follows:
module "ecr" {
  source = "../modules/ecr"
  c2m_cs = module.ecr.c2m_cs
}

The correct way to import a resource into a module is shown in the docs:
terraform import module.foo.aws_instance.bar i-abcd1234
The address must include the module label from your main.tf ("ecr") and the resource label from the module ("c2m_cs"). Thus in your case it would be:
terraform import module.ecr.aws_ecr_repository.c2m_cs c2m_cs
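Note that the module call in main.tf also has a problem: it feeds the module's own output back into itself (c2m_cs = module.ecr.c2m_cs), which is a circular reference. A sketch of a working call, assuming the c2m_cs variable simply holds the repository name:

```
module "ecr" {
  source = "../modules/ecr"
  c2m_cs = "c2m_cs" # pass a literal name, not the module's own output
}
```

With the call fixed, an import against the module.ecr address can find the resource.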


Terraform import resource within nested directories

I've created a VPC peering connection in Terraform between a new VPC created in Terraform and an existing one that was created in the AWS console.
I need to import the existing VPC's route table so that I can add a new route referencing the vpc_peering_connection_id. However, I'm running into different errors trying to import and would appreciate some guidance here.
This is the command I'm running in the CLI from the root directory of the project:
terraform import aws_route_table.main rtb-12345
this is what I get back-
Error: resource address "aws_route_table.main" does not exist in the configuration.
Before importing this resource, please create its configuration in the root module. For example:
resource "aws_route_table" "main" {
# (resource arguments)
}
It's a pretty large project but this is the basic project structure-
- root directory
  - aws
    - main.tf
    - route_table
      - main.tf
  - main.tf
I do have the configuration the error is asking for in the route_table folder main.tf file as:
resource "aws_route_table" "main" {
  vpc_id = data.aws_vpc.default.id
}
I am also referencing the aws folder in the root directory > main.tf file
module "aws" {
  source = "./aws"
}
and then referencing the route_table folder in the aws directory > main.tf file
module "route_table" {
  source = "./route_table"
}
I'm not sure what I'm doing wrong here or how exactly to import the route table successfully. I would appreciate any help, thanks!
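With nested modules, the module path compounds in the resource address. Given the structure above (route_table is called from inside the aws module, which is called from the root), a sketch of the import command, assuming the module labels shown in the question:

```
terraform import module.aws.module.route_table.aws_route_table.main rtb-12345
```

Running the plain aws_route_table.main address from the root fails precisely because the resource lives two module levels down.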

Import terraform AWS VPC subnet having CIDR in resource name

I need to import AWS VPC subnets into Terraform using the import command. When I run the terraform plan command, I get the output below:
module.test-vpc.aws_subnet.play["10.76.175.0/24"]
How do I import this resource, given that its address contains the ["10.76.175.0/24"] CIDR block? Below is the command I tried, which fails with the error Error: Invalid number literal:
terraform import module.test-vpc.aws_subnet.play[10.76.175.0/24] sub-xyz
I tried the commands below, which reported a successful import, but Terraform does not recognise the resources when I run terraform plan again:
terraform import module.test-vpc.aws_subnet.play sub-xyz
terraform import module.test-vpc.aws_subnet.play[0] sub-xyz
The module probably uses for_each, so the right command should be
terraform import module.test-vpc.aws_subnet.play["10.76.175.0/24"] sub-xyz
or
terraform import 'module.test-vpc.aws_subnet.play["10.76.175.0/24"]' sub-xyz
with quotes, because you reference a for_each resource by its key.
It is also possible to reference the resources by a number representing their order in the map, but this is not recommended because it is hard to tell whether you are doing the right import.
So, doing the commands
terraform import module.test-vpc.aws_subnet.play sub-xyz
terraform import module.test-vpc.aws_subnet.play[0] sub-xyz
you have already imported the resource, just under the wrong address, so you no longer see it in the plan. You can remove the resource from the state with
terraform state rm 'module.test-vpc.aws_subnet.play[0]'
and then re-import it under the correct key.

How to reference an existing organization folder, or other resources, in Terraform (For GCP)

I'm pretty new to Terraform, my apologies if this question has an obvious answer I'm missing.
I am trying to create a Terraform configuration file for an existing organization. I am able to provision everything I have in the main.tf outlined below, except for the Shared folder that already exists within this organization.
Related GitHub issues:
The folder operation violates display name uniqueness within the parent.
Generic error message when folder rename matches existing folder
Here are the steps I followed:
Manually create a Shared folder within the organization administration UI.
Manually create a Terraform admin project <redacted-project-name> at the root of the Shared folder.
Manually create a service account named terraform#<redacted-project-name> from the terraform admin project
Create, download and securely store a key for the terraform#<redacted-project-name> service account.
Enable APIs : cloudresourcemanager.googleapis.com, cloudbilling.googleapis.com, iam.googleapis.com, serviceusage.googleapis.com within the terraform admin project
Set permissions of the service account to roles/owner, roles/resourcemanager.organizationAdmin, roles/resourcemanager.folderAdmin and roles/resourcemanager.projectCreator.
Create the main.tf
terraform {
  required_providers {
    google = {
      source  = "hashicorp/google"
      version = "3.85.0"
    }
  }
}

provider "google" {
  credentials = file(var.credentials_file)
  region      = var.region
  zone        = var.zone
}

data "google_organization" "org" {
  organization = var.organization.id
}

resource "google_folder" "shared" {
  display_name = "Shared"
  parent       = data.google_organization.org.name
}

resource "google_folder" "ddm" {
  display_name = "Data and Digital Marketing"
  parent       = data.google_organization.org.name
}

resource "google_folder" "dtl" {
  display_name = "DTL"
  parent       = google_folder.ddm.name
}
The error I receive :
Error: Error creating folder 'Shared' in 'organizations/<redacted-org-id>': Error waiting for creating folder: Error code 9, message: Folder reservation failed for parent [organizations/<redacted-org-id>], folder [] due to constraint: The folder operation violates display name uniqueness within the parent.
How do I include existing resources within the terraform config file?
For (organization) folders (such as the example above)
For the billing account
For projects, i.e. Am I supposed to declare or import the terraform admin project within the main.tf?
For service accounts, how to handle existing keys and permissions of the account that is running the terraform apply
For existing policies and enabling APIs
In order to include already-existing resources in your Terraform configuration, use the terraform import command.
For Folders
From the Terraform documentation for google_folder:
# Both syntaxes are valid
$ terraform import google_folder.department1 1234567
$ terraform import google_folder.department1 folders/1234567
So for the example above,
Fetch the folder ID using gcloud alpha resource-manager folders list --organization=<redacted_org_id>, providing the organization ID.
Save the folder ID somewhere and, if not already done, declare the folder as a resource within the main.tf:
resource "google_folder" "shared" {
  display_name = "Shared"
  parent       = data.google_organization.org.name
}
Run the command : terraform import google_folder.shared folders/<redacted_folder_id>. You should get an output like google_folder.shared: Import prepared!
Check that your infrastructure matches the configuration via terraform plan:
No changes. Your infrastructure matches the configuration.
Terraform has compared your real infrastructure against your configuration
and found no differences, so no changes are needed.
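The question also asks about the pre-existing Terraform admin project; the same declare-then-import pattern applies. A sketch, assuming a google_project resource is declared for it (the display name and the numeric folder ID are placeholder assumptions):

```
resource "google_project" "terraform_admin" {
  name       = "Terraform Admin"       # display name, an assumption
  project_id = "<redacted-project-name>"
  folder_id  = "<redacted_folder_id>"  # numeric ID of the Shared folder
}
```

Then import it by its project ID:
terraform import google_project.terraform_admin <redacted-project-name>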

AWS project structure using terraform

I am working on a Python and AWS course on Coursera. However, instead of creating S3, API Gateway and other resources using boto3, I am using Terraform. Everything has been going fine so far, but I am facing an issue. Below is my lambda directory structure.
Every lambda has a different directory structure and I have to cd in each directory to apply changes using terraform apply.
Example
Below is my lambda code in terraform for one of the lambda function.<<validate.tf>>
provider "aws" {
  region = "us-east-2"
}

terraform {
  required_version = "> 0.14"
  required_providers {
    aws = "~> 3.0"
  }
  backend "s3" {
    bucket         = "nyeisterraformstatedata2"
    key            = "api_gateway/lambda_function/terraform_api_gateway_lambda_validate.tfstate"
    region         = "us-east-2"
    dynamodb_table = "terraform-up-and-running-locks-2"
    encrypt        = true
  }
}
data "archive_file" "zip_file" {
  type        = "zip"
  source_dir  = "${path.module}/lambda_dependency_and_function"
  output_path = "${path.module}/lambda_dependency_and_function.zip"
}

resource "aws_lambda_function" "get_average_rating_lambda" {
  filename      = "lambda_dependency_and_function.zip"
  function_name = "validate"
  role          = data.aws_iam_role.lambda_role_name.arn
  handler       = "validate.lambda_handler"
  # The filebase64sha256() function is available in Terraform 0.11.12 and later
  # For Terraform 0.11.11 and earlier, use the base64sha256() and file() functions:
  # source_code_hash = "${base64sha256(file("lambda_function_payload.zip"))}"
  source_code_hash = filebase64sha256(data.archive_file.zip_file.output_path)
  runtime          = "python3.8"
  depends_on       = [data.archive_file.zip_file]
}
<<variable.tf>>
data "aws_iam_role" "lambda_role_name" {
  name = "common_lambda_role_s3_api_gateway_2"
}
Based on the comment below, I created a main.tf with the following code:
provider "aws" {
  region = "us-east-2"
}

module "test" {
  source = "../validate"
}
but when I try to import the function using the terraform import command, it gives me an error and I am not able to figure out how to solve it:
terraform import module.test.aws_lambda_function.test1 get_average_rating_lambda
Warning: Backend configuration ignored
│
│ on ../validate/validate.tf line 10, in terraform:
│ 10: backend "s3" {
│
│ Any selected backend applies to the entire configuration, so Terraform expects provider configurations only in the root module.
│
│ This is a warning rather than an error because it's sometimes convenient to temporarily call a root module as a child module for testing purposes, but this backend configuration block will have no
│ effect.
╵
Error: resource address "module.test.aws_lambda_function.test1" does not exist in the configuration.
Before importing this resource, please create its configuration in module.test. For example:
resource "aws_lambda_function" "test1" {
# (resource arguments)
}
So my question is: is there a way for Terraform to tell which files have changed and apply them all in one go, rather than one by one? Since I am new to Terraform, if anyone thinks this is the wrong way to structure the project, please do let me know. Thank you.
What you could do is create a new directory with a main.tf file and make it a project that contains your whole cloud environment. Each of these existing folders could be imported as a module. If each of your folders is a running terraform project, it can already be imported as a module without changing it.
You would then use the terraform import command to import each of the resources, with an address of the form module.<module_label>.aws_lambda_function.<resource_label> and the function name as the import ID, for each lambda and any other managed resources. (The error above occurred because the module declares the resource label get_average_rating_lambda, not test1.)
Then, instead of a state file for each lambda, you would have a state file for your whole environment. From there, terraform is smart enough to detect the individual changes and update accordingly.
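A minimal sketch of such a root project, assuming the validate folder from the question and the resource label get_average_rating_lambda declared inside it:

```
# main.tf in the new root directory
provider "aws" {
  region = "us-east-2"
}

module "validate" {
  source = "../validate"
}
```

The import address then uses the resource label actually declared in the module, with the function name as the import ID:
terraform import module.validate.aws_lambda_function.get_average_rating_lambda validate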

Create Resources via terraform

I created an AWS environment using Terraform.
After that, some resources (SES, SNS, Lambda) were created through the console; they were not provisioned by Terraform.
I am now writing the Terraform code for these resources (SES, SNS, Lambda) that were created through the console.
Since I already have these resources running in my account, is it possible to bring them under Terraform without removing them?
Or, if not, how should I proceed in this case?
Welcome to the world of IaC, you're in for a treat. :)
You can import all resources that were created without Terraform (using a CLI or provisioned manually, i.e. resources which are not part of the tf state) into your Terraform state. Once these resources are imported, you can start managing their lifecycle using Terraform.
Define the resource in your .tf files
Import existing resources
As an example:
In order to import an existing non terraform managed lambda, you first define the resource for it in your .tf files:
main.tf:
resource "aws_lambda_function" "test_lambda" {
  filename      = "lambda_function_payload.zip"
  function_name = "lambda_function_name"
  role          = "${aws_iam_role.iam_for_lambda.arn}"
  handler       = "exports.test"
  # The filebase64sha256() function is available in Terraform 0.11.12 and later
  # For Terraform 0.11.11 and earlier, use the base64sha256() and file() functions:
  # source_code_hash = "${base64sha256(file("lambda_function_payload.zip"))}"
  source_code_hash = "${filebase64sha256("lambda_function_payload.zip")}"
  runtime = "nodejs12.x"
  environment {
    variables = {
      foo = "bar"
    }
  }
}
Then you can execute terraform import, in order to import the existing lambda:
terraform import aws_lambda_function.test_lambda my_test_lambda_function
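After the import, it is worth confirming that the configuration matches what was imported; any attributes that differ (for example the environment variables in the sketch above) will show up as planned changes:

```
terraform plan
```

If the configuration and the real lambda agree, you should see "No changes. Your infrastructure matches the configuration."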