Create Resources via Terraform

I created an AWS environment using Terraform.
After that, some resources (SES, SNS, Lambda) were created through the console, so they were not provisioned by Terraform.
I'm now writing the Terraform code for those resources (SES, SNS, Lambda) that were created through the console.
Since these resources are already running in my account, is it possible to bring them under Terraform's management without removing them?
Or otherwise, how should I proceed in this case?

Welcome to the world of IaC, you're in for a treat. :)
You can import all resources that were created without Terraform (provisioned via the CLI or manually, i.e. resources that are not part of the Terraform state) into your Terraform state. Once these resources are imported, you can start managing their lifecycle with Terraform:
1. Define the resource in your .tf files
2. Import the existing resources
As an example, in order to import an existing Lambda that is not managed by Terraform, you first define the resource for it in your .tf files:
main.tf:
resource "aws_lambda_function" "test_lambda" {
filename = "lambda_function_payload.zip"
function_name = "lambda_function_name"
role = "${aws_iam_role.iam_for_lambda.arn}"
handler = "exports.test"
# The filebase64sha256() function is available in Terraform 0.11.12 and later
# For Terraform 0.11.11 and earlier, use the base64sha256() function and the file() function:
# source_code_hash = "${base64sha256(file("lambda_function_payload.zip"))}"
source_code_hash = "${filebase64sha256("lambda_function_payload.zip")}"
runtime = "nodejs12.x"
environment {
variables = {
foo = "bar"
}
}
}
Then you can execute terraform import in order to import the existing Lambda:
terraform import aws_lambda_function.test_lambda my_test_lambda_function
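The same flow works for the other resources mentioned in the question, assuming you have defined matching resource blocks first. As a sketch (the resource names alerts and mail, the account ID, the topic name, and the domain are placeholders for your own values):

# Import an existing SNS topic by its ARN
terraform import aws_sns_topic.alerts arn:aws:sns:us-east-1:123456789012:my-topic

# Import an existing SES domain identity by its domain name
terraform import aws_ses_domain_identity.mail example.com

Once everything is imported, run terraform plan; if your configuration matches the live resources, it should report no changes.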

Related

Importing aws_iam_policy multiple times

I have created a resource stub for importing an IAM customer managed policy, as below.
resource "aws_iam_policy" "customer_managed_policy" {
name = var.customer_managed_policy_name
policy = "{}"
}
The import command used is:
$ terraform import -var 'customer_managed_policy_name=ec2-readonly' aws_iam_policy.customer_managed_policy arn:aws:iam::<account ID>:policy/ec2-readonly
This works fine the first time. But if I want to make it dynamic in order to import any number of policies, I don't know how to do it.
The aws_iam_policy resource takes the policy name and the policy data/JSON as attributes, so multiple resources can be created using for_each or a list; but in the import command I need to pass each policy's ARN, which is different for every policy.
I think there is a misunderstanding of how Terraform works.
Terraform maps 1 resource to 1 item in state, and the state file is used to manage all created resources.
To import X resources, X resources must exist in your Terraform configuration, so that X can be mapped to state.
Two simple ways to achieve this are count and for_each, which map X resources to state and therefore make it possible to import X resources.
Now, it is important to notice that after you import a resource, if your Terraform configuration is not equal to the imported resource, then once you run terraform apply, Terraform will update all imported resources to match your configuration file.
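A minimal sketch of the for_each variant (the variable and its default policy names are hypothetical):

variable "policy_names" {
  type    = set(string)
  default = ["ec2-readonly", "s3-readonly"]
}

resource "aws_iam_policy" "customer_managed_policy" {
  for_each = var.policy_names

  name   = each.key
  policy = "{}" # placeholder; must be updated to match the real document, or apply will overwrite it
}

Each instance can then be imported by its key, for example:

terraform import 'aws_iam_policy.customer_managed_policy["ec2-readonly"]' arn:aws:iam::<account ID>:policy/ec2-readonly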

How to import an AWS Managed Lambda Layer in Terraform

AWS manages a layer called AWSDataWrangler-Python38. How do I import it into my Terraform code? I tried using the aws_lambda_layer_version resource:
resource "aws_lambda_layer_version" "lambda_layer" {
layer_name = "AWSDataWrangler-Python39"
compatible_runtimes = ["python3.9"]
}
It throws an error telling me to specify filename, but there is no file for this layer, since it is managed by AWS and is not a custom layer.
You cannot import a resource that is not managed by you.
Since this is a layer managed by AWS, there is a public list of all the ARNs available for this layer: https://aws-data-wrangler.readthedocs.io/en/stable/layers.html
If you want to use this layer for a Lambda in your Terraform code, you will have to take an ARN from this list and simply hard-code it (or provide it externally with a variable). For example:
resource "aws_lambda_function" "lambda" {
function_name = "MyFunction"
...
layers = [
"arn:aws:lambda:${var.region}:336392948345:layer:AWSDataWrangler-Python39:6"
]
}
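If you prefer to provide the ARN externally, a small sketch with a variable (the variable name is hypothetical):

variable "wrangler_layer_arn" {
  type        = string
  description = "ARN of the AWS-managed AWSDataWrangler layer version"
}

resource "aws_lambda_function" "lambda" {
  function_name = "MyFunction"
  ...

  layers = [var.wrangler_layer_arn]
}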

How do I delete a specific module using Terraform?

I have two modules with 30+ resources in each. I want to destroy all the resources in one particular region and nothing in the other region. How do I destroy the complete module, instead of destroying each resource individually, using Terraform?
module "mumbai" {
source = "./site-to-site-vpn-setup"
providers = { aws = aws.mumbai }
}
module "seoul" {
source = "./site-to-site-vpn-setup"
providers = { aws = aws.seoul }
}
You could just remove the relevant module (or comment it out) and then run terraform plan/apply. Because Terraform is infrastructure as code, when you change anything in the code, it will reflect those changes in your infrastructure.
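For example, commenting out the Mumbai module and then applying would destroy everything that module created:

# module "mumbai" {
#   source    = "./site-to-site-vpn-setup"
#   providers = { aws = aws.mumbai }
# }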
You can specify the target when running terraform destroy.
For example, if you want to delete only the Mumbai module:
terraform destroy -target module.mumbai
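If you want to preview exactly what would be removed first, a destroy plan can be targeted the same way:

terraform plan -destroy -target=module.mumbai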

Terraform Replacing Bucket Object Instead of Versioning

I'm setting up some Terraform to manage a Lambda and an S3 bucket, with versioning on the contents of the bucket. Creating the first version of the infrastructure is fine. When releasing a second version, Terraform replaces the zip file instead of creating a new version.
I've tried adding versioning to the S3 bucket in the Terraform configuration and moving the api-version to a variable string.
data "archive_file" "lambda_zip" {
type = "zip"
source_file = "main.js"
output_path = "main.zip"
}
resource "aws_s3_bucket" "lambda_bucket" {
bucket = "s3-bucket-for-tft-project"
versioning {
enabled = true
}
}
resource "aws_s3_bucket_object" "lambda_zip_file" {
bucket = "${aws_s3_bucket.lambda_bucket.bucket}"
key = "v${var.api-version}-${data.archive_file.lambda_zip.output_path}"
source = "${data.archive_file.lambda_zip.output_path}"
}
resource "aws_lambda_function" "lambda_function" {
s3_bucket = "${aws_s3_bucket.lambda_bucket.bucket}"
s3_key = "${aws_s3_bucket_object.lambda_zip_file.key}"
function_name = "lambda_test_with_s3_version"
role = "${aws_iam_role.lambda_exec.arn}"
handler = "main.handler"
runtime = "nodejs8.10"
}
I would expect the output to be another zip file, with the Lambda now pointing at the new version, and the ability to change back to the old version if var.api-version were changed.
Terraform isn't designed for creating this sort of "artifact" object where each new version should be separate from the ones before it.
The data.archive_file data source was added to Terraform in the early days of AWS Lambda when the only way to pass values from Terraform into a Lambda function was to retrieve the intended zip artifact, amend it to include additional files containing those settings, and then write that to Lambda.
Now that AWS Lambda supports environment variables, that pattern is no longer recommended. Instead, deployment artifacts should be created by some separate build process outside of Terraform and recorded somewhere that Terraform can discover them. For example, you could use SSM Parameter Store to record your current desired version and then have Terraform read that to decide which artifact to retrieve:
data "aws_ssm_parameter" "lambda_artifact" {
name = "lambda_artifact"
}
locals {
# Let's assume that this SSM parameter contains a JSON
# string describing which artifact to use, like this
# {
# "bucket": "s3-bucket-for-tft-project",
# "key": "v2.0.0/example.zip"
# }
lambda_artifact = jsondecode(data.aws_ssm_parameter.lambda_artifact)
}
resource "aws_lambda_function" "lambda_function" {
s3_bucket = local.lambda_artifact.bucket
s3_key = local.lambda_artifact.key
function_name = "lambda_test_with_s3_version"
role = aws_iam_role.lambda_exec.arn
handler = "main.handler"
runtime = "nodejs8.10"
}
This build/deploy separation allows for three different actions, whereas doing it all in Terraform only allows for one:
1. To release a new version, you can run your build process (in a CI system, perhaps) and have it push the resulting artifact to S3 and record it as the latest version in the SSM parameter, and then trigger a Terraform run to deploy it (see the sketch after this list).
2. To change other aspects of the infrastructure without deploying a new function version, just run Terraform without changing the SSM parameter, and Terraform will leave the Lambda function untouched.
3. If you find that a new release is defective, you can write the location of an older artifact into the SSM parameter and run Terraform to deploy that previous version.
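As a sketch of the release step in the first action, a CI job could record a new artifact with the AWS CLI (the parameter name and artifact location are taken from the example above):

aws ssm put-parameter \
  --name "lambda_artifact" \
  --type "String" \
  --overwrite \
  --value '{"bucket":"s3-bucket-for-tft-project","key":"v2.0.0/example.zip"}'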
A more complete description of this approach is in the Terraform guide Serverless Applications with AWS Lambda and API Gateway, which uses a Lambda web application as an example but can be applied to many other AWS Lambda use-cases too. Using SSM is just an example; any data that Terraform can retrieve using a data source can be used as an intermediary to decouple the build and deploy steps from one another.
This general idea can apply to all sorts of code build artifacts as well as Lambda zip files. For example: custom AMIs created with HashiCorp Packer, Docker images created using docker build. Separating the build process, the version selection mechanism, and the deployment process gives a degree of workflow flexibility that can support both the happy path and any exceptional paths taken during incidents.

Importing terraform aws_iam_policy

I'm trying to import a Terraform aws_iam_policy that gets automatically added by automation I don't own. The import seems to work, but once I run terraform plan I get the following error:
* aws_iam_policy.mypolicy1: "policy": required field is not set
I'm running terraform import as follows:
terraform import aws_iam_policy.mypolicy1 <myarn>
Here is my relevant Terraform config:
resource "aws_iam_policy" "mypolicy1" {
}
resource "aws_iam_role_policy_attachment" "mypolicy1_attachment`" {
role = "${aws_iam_role.myrole1.name}"
policy_arn = "${aws_iam_policy.mypolicy1.arn}"
}
resource "aws_iam_role" "myrole1" {
name = "myrole1"
assume_role_policy = "${file("../policies/ecs-role.json")}"
}
I double-checked that the terraform.tfstate includes the policy I'm trying to import. Is there something else I'm missing here?
You still need to provide the required fields in the Terraform configuration for the plan to work.
If you remove the aws_iam_policy resource from your configuration and run a plan after importing the policy you should see that Terraform wants to destroy the policy because it is in the state file but not in the configuration.
Simply set up your aws_iam_policy resource to match the imported policy, and then a plan should show no changes.
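For example, after importing you might fill in the resource so it matches the live policy (a sketch; the local JSON file holding a copy of the policy document is hypothetical):

resource "aws_iam_policy" "mypolicy1" {
  name   = "mypolicy1"
  policy = "${file("../policies/mypolicy1.json")}"
}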
I finally found a relatively elegant and universal workaround to address Amazon's poor implementation of the IAM policy import capability. The solution does NOT require you to reverse-engineer Amazon's, or anybody else's, implementation of the aws_iam_policy resource that you want to import.
There are two steps:
1. Create an aws_iam_policy resource definition that has a lifecycle argument with an ignore_changes list. There are three fields in the aws_iam_policy resource that will trigger a replacement: policy, description, and path. Add these three fields to the ignore_changes list.
2. Import the external IAM policy and attach it to the resource definition that you created in your resource file.
Resource file (ex: static-resources.tf):
resource "aws_iam_policy" "MyLambdaVPCAccessExecutionRole" {
lifecycle {
prevent_destroy = true
ignore_changes = [policy, path, description]
}
policy = jsonencode({})
}
Import statement: using the ARN of the IAM policy that you want to import, import the policy and attach it to your resource definition:
terraform import aws_iam_policy.MyLambdaVPCAccessExecutionRole arn:aws:iam::aws:policy/service-role/AWSLambdaVPCAccessExecutionRole
The magic is in the fields that you add to the ignore_changes list, plus a placeholder for the required policy argument. Since policy is a required field, Terraform won't let you proceed without it, even though it is one of the fields you told Terraform to ignore changes to.
Note: If you use modules, you will need to add "module.<module name>." to the front of your resource reference. For example:
terraform import module.static.aws_iam_policy.MyLambdaVPCAccessExecutionRole arn:aws:iam::aws:policy/service-role/AWSLambdaVPCAccessExecutionRole