Sharing resources between Terraform workspaces

I have an infrastructure I'm deploying using Terraform in AWS. This infrastructure can be deployed to different environments, for which I'm using workspaces.
Most of the components in the deployment should be created separately for each workspace, but I have several key components that I wish to share between them, primarily:
- IAM roles and permissions
- The API Gateway: all workspaces should use the same API Gateway, but each workspace should deploy to different paths and methods
For example:
resource "aws_iam_role" "lambda_iam_role" {
name = "LambdaGeneralRole"
policy = <...>
}
resource "aws_lambda_function" "my_lambda" {
function_name = "lambda-${terraform.workspace}"
role = "${aws_iam_role.lambda_iam_role.arn}"
}
The first resource is an IAM role that should be shared by all instances of that Lambda and shouldn't be recreated more than once.
The second resource is a Lambda function whose name depends on the current workspace, so each workspace will deploy and keep track of the state of a different Lambda.
How can I share resources, and their state, between different Terraform workspaces?

For the shared resources, I create them in a separate template and then refer to them using terraform_remote_state in any template that needs information about them.
What follows is how I implement this; there are probably other ways to do it. YMMV.
In the shared-services template (where you would put your IAM role), I use a Terraform backend to store that template's output data in Consul. You also need to output any information you want to use in other templates.
shared_services template
terraform {
  backend "consul" {
    address = "consul.aa.example.com:8500"
    path    = "terraform/shared_services"
  }
}

resource "aws_iam_role" "lambda_iam_role" {
  name   = "LambdaGeneralRole"
  policy = <...>
}

output "lambda_iam_role_arn" {
  value = "${aws_iam_role.lambda_iam_role.arn}"
}
A "backend" in Terraform determines how state is loaded and how an operation such as apply is executed. This abstraction enables non-local file state storage, remote execution, etc.
In the individual template you read from the backend as a data source using terraform_remote_state and can then use that data in the template.
terraform_remote_state:
Retrieves state metadata from a remote backend
individual template
data "terraform_remote_state" "shared_services" {
backend = "consul"
config {
address = "consul.aa.example.com:8500"
path = "terraform/shared_services"
}
}
# This is where you use the terraform_remote_state data source
resource "aws_lambda_function" "my_lambda" {
  function_name = "lambda-${terraform.workspace}"
  role          = "${data.terraform_remote_state.shared_services.lambda_iam_role_arn}"
}
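Note: on Terraform 0.12 and later, terraform_remote_state takes config as an argument (config = { ... }) rather than a block, and exposes the remote outputs through an outputs attribute. A minimal sketch of the same data source and reference in the newer syntax:

data "terraform_remote_state" "shared_services" {
  backend = "consul"

  config = {
    address = "consul.aa.example.com:8500"
    path    = "terraform/shared_services"
  }
}

resource "aws_lambda_function" "my_lambda" {
  function_name = "lambda-${terraform.workspace}"
  role          = data.terraform_remote_state.shared_services.outputs.lambda_iam_role_arn
}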
References:
https://www.terraform.io/docs/state/remote.html
https://www.terraform.io/docs/backends/
https://www.terraform.io/docs/providers/terraform/d/remote_state.html

Resources like aws_iam_role that have a name attribute will not create a new instance if the name value matches an already provisioned resource.
So the following will create a single aws_iam_role named LambdaGeneralRole.
resource "aws_iam_role" "lambda_iam_role" {
name = "LambdaGeneralRole"
policy = <...>
}
...
resource "aws_iam_role" "lambda_iam_role_reuse_existing_if_name_is_LambdaGeneralRole" {
name = "LambdaGeneralRole"
policy = <...>
}
Similarly, the aws provider will effectively create one S3 bucket named my-store given the following:
resource "aws_s3_bucket" "store-1" {
bucket = "my-store"
acl = "public-read"
force_destroy = true
}
...
resource "aws_s3_bucket" "store-2" {
bucket = "my-store"
acl = "public-read"
force_destroy = true
}
This behaviour holds even if the resources were defined in different workspaces, each with its own separate Terraform state.
To get the most out of this approach, define the shared resources in a separate configuration. That way, you don't risk destroying a shared resource when running terraform destroy.
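One extra safeguard worth adding (my addition, not required by the approach): mark the shared resource with prevent_destroy, so an accidental terraform destroy in the shared configuration fails instead of deleting it:

resource "aws_iam_role" "lambda_iam_role" {
  name   = "LambdaGeneralRole"
  policy = <...>

  # Terraform will refuse to plan/apply a destroy of this resource.
  lifecycle {
    prevent_destroy = true
  }
}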

Related

Terraform Reference Created S3 Bucket for Remote Backend

I'm trying to setup a remote Terraform backend to S3. I was able to create the bucket, but I used bucket_prefix instead of bucket to define my bucket name. I did this to ensure code re-usability within my org.
My issue is that I've been having trouble referencing the new bucket in my Terraform backend config. I know I can hard-code the name of the bucket I created, but I would like to reference the bucket the way I reference other resources in Terraform.
Would this be possible?
I've included my code below:
# configure terraform to use s3 as the backend
terraform {
  backend "s3" {
    bucket = "aws_s3_bucket.my-bucket.id"
    key    = "terraform/terraform.tfstate"
    region = "ca-central-1"
  }
}
AWS S3 Resource definition
resource "aws_s3_bucket" "my-bucket" {
bucket_prefix = var.bucket_prefix
acl = var.acl
lifecycle {
prevent_destroy = true
}
versioning {
enabled = var.versioning
}
server_side_encryption_configuration {
rule {
apply_server_side_encryption_by_default {
sse_algorithm = var.sse_algorithm
}
}
}
}
Terraform needs a valid backend configuration when initialization happens (terraform init), meaning you must have an existing bucket before you can provision any resources (before the first terraform apply).
If you run terraform init with a bucket name that does not exist, you get this error:
The referenced S3 bucket must have been previously created. If the S3 bucket
was created within the last minute, please wait for a minute or two and try
again.
This is self-explanatory: it is not really possible to have the S3 bucket used for the backend also defined as a Terraform resource in the same configuration. While you can certainly use terraform import to bring an existing bucket into the state, I would NOT recommend importing the backend bucket.
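One common pattern (a sketch, assuming you create the state bucket out-of-band or in a tiny separate bootstrap configuration first) is to leave the bucket name out of the backend block and supply it at init time with -backend-config:

# backend.tf: partial backend configuration; the bucket is supplied at init time
terraform {
  backend "s3" {
    key    = "terraform/terraform.tfstate"
    region = "ca-central-1"
  }
}

Then, once the bucket exists (my-actual-state-bucket is a placeholder name):

terraform init -backend-config="bucket=my-actual-state-bucket"

This keeps the configuration reusable across your org without hard-coding a bucket name, which is what the bucket_prefix approach was aiming for.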

Variably create 1 or more AWS Buckets across multiple regions

I'm looking for recommendations and help with an issue I'm having with setting up and managing bucket and bucket-policy creation for multiple environments, and for multiple regions within a single environment.
I have 4 AWS accounts (dev, stg, prod1, and prod2, which is a copy of prod1). In prod1 we have two Kubernetes clusters, aws-us-prod1 and aws-eu-prod1. These two clusters are completely independent of one another and merely serve customers in their respective regions.
I have applications running on these two clusters (aws-us-prod1 and aws-eu-prod1) that need to write content to an S3 bucket, but the two clusters share an AWS account (prod1).
I'm trying to write some Terraform resource automation to manage this, and I haven't been able to variably control which region a bucket gets put in. The latest docs show a region attribute, but it doesn't work, because a bucket's region is actually determined by the aws provider's own region attribute.
What I’d like to do is something like this:
variable "buckets" {
type = map(string) # e.g. buckets='{"a-us-prod1": "us-west-2", "a-eu-prod1":"eu-west-2"}'
}
resource "aws_s3_bucket" "my_buckets" {
for_each = var.buckets
bucket = each.key
region = each.value
}
resource "aws_s3_bucket_policy" "my_buckets_policy" {
for_each = aws_s3_bucket.my_buckets
bucket = each.value.id
policy = ...
}
I've tried using multiple providers with aliases, but you can't programmatically select a provider based on the value of a variable you are iterating over. What's the proper way to organize this project and its resources to accomplish this?
These issues I've come across are related:
https://github.com/hashicorp/terraform/issues/3656
https://github.com/terraform-providers/terraform-provider-aws/issues/5999
The region attribute was removed from aws_s3_bucket in terraform-provider-aws v3.0.0 (July 31, 2020). Before that, you could set the region for a bucket and it would be respected, and the bucket would be created in the selected region. However, that was not how any other resource is managed; it was probably only there because S3 is globally scoped and the bucket ARN contains no region. All other services use the region of the provider itself (as it should be).
I would recommend creating a provider for each region you want to support, splitting var.buckets by region, and then creating one resource "aws_s3_bucket" block per region:
variable "buckets" {
type = map(string) # e.g. buckets='{"a-us-prod1": "us-west-2", "a-eu-prod1":"eu-west-2"}'
}
provider "aws" {
region = "eu-west-2"
alias = "eu-west-2"
}
locals {
eu_west_2_buckets = [for name, region in var.buckets: name if region == "eu-west-2"]
}
resource "aws_s3_bucket" "eu_west_2_buckets" {
count = length(local.eu_west_2_buckets)
bucket = eu_west_2_buckets[count.index]
provider = aws.eu-west-2
}
If you want to only rollout the buckets that match the current deployment region you can do that by simply changing the bucket filtering logic:
variable "buckets" {
type = map(string) # e.g. buckets='{"a-us-prod1": "us-west-2", "a-eu-prod1":"eu-west-2"}'
}
locals {
buckets = [for name, region in var.buckets: name if region == data.aws_region.current.name]
}
data "aws_region" "current" { }
resource "aws_s3_bucket" "buckets" {
count = length(local.buckets)
bucket = buckets[count.index]
}

Applying Terraform On Two Different AWS Accounts In the Same Plan

We're currently porting some of our CloudFormation templates to Terraform. In one of these templates we use a custom resource with a Lambda function.
The purpose of the function is to assume a role in our main AWS account, where Route 53 DNS is managed, and add a newly generated CloudFront DNS record there.
I am wondering if there's a way to do this in Terraform, all within the same plan:
- create the CloudFront resource, ALB, etc. in the dev/qa/prod accounts
- add the Route 53 record set to the main account
Can I choose an IAM role when creating a resource? Or choose the account where the resource should be created?
The only reference I have found is here
You can configure multiple providers (one per account in your case) and create an alias for each. Then you need to specify the provider for each resource. Example:
provider "aws" {
region = "eu-west-1"
profile = "profile1"
alias = "account1"
}
provider "aws" {
region = "eu-west-1"
profile = "profile2"
alias = "account2"
}
resource "aws_lambda_function" "function1" {
provider = "aws.account1" // will be created in account 1
...
}
resource "aws_lambda_function" "function2" {
provider = "aws.account2" // will be created in account 2
...
}
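For the Route 53 part of the question, a minimal sketch (the profile, zone ID variable, record name, and CloudFront distribution reference are all hypothetical; you can also use an assume_role block in the provider instead of a profile):

provider "aws" {
  region  = "us-east-1"
  profile = "main-account" // hypothetical profile for the main account
  alias   = "main"
}

resource "aws_route53_record" "cdn" {
  provider = "aws.main" // record is created in the main account
  zone_id  = "${var.main_zone_id}" // hypothetical hosted zone ID variable
  name     = "app.example.com"
  type     = "A"

  alias {
    // assumes aws_cloudfront_distribution.cdn is defined elsewhere in the same plan
    name                   = "${aws_cloudfront_distribution.cdn.domain_name}"
    zone_id                = "${aws_cloudfront_distribution.cdn.hosted_zone_id}"
    evaluate_target_health = false
  }
}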

Reference variables from another Terraform plan

I have created a setup with a main and a disaster-recovery website architecture in AWS using Terraform.
The main website is in region1 and the disaster recovery site is in region2. These are written as separate plans, in different directories.
For region1, I created one directory which contains only the main website Terraform script to launch the main website infrastructure.
For region2, I created another directory which contains only the disaster recovery website Terraform script to launch the disaster recovery website infrastructure.
In my main website script, I need some values from the disaster recovery website, such as the VPC peering connection ID, DMS endpoint ARNs, etc.
How can I reference these variables from the disaster recovery website directory to the main website directory?
One option is to use the terraform_remote_state data source to fetch outputs from the other state file like this:
vpc/main.tf
resource "aws_vpc" "foo" {
cidr_block = "10.0.0.0/16"
}
output "vpc_id" {
value = "${aws_vpc.foo.id}"
}
route/main.tf
data "terraform_remote_state" "vpc" {
backend = "s3"
config {
bucket = "mybucket"
key = "path/to/my/key"
region = "us-east-1"
}
}
resource "aws_route_table" "rt" {
vpc_id = "${data.terraform_remote_state.vpc.vpc_id}"
}
However, it's nearly always better to just use the native data sources of the provider as long as they exist for the resource you need.
So in your case you would use data sources such as the aws_vpc_peering_connection data source to establish cross-VPC routing, with something like this:
data "aws_vpc_peering_connection" "pc" {
vpc_id = "${data.aws_vpc.foo.id}"
peer_cidr_block = "10.0.0.0/16"
}
resource "aws_route_table" "rt" {
vpc_id = "${aws_vpc.foo.id}"
}
resource "aws_route" "r" {
route_table_id = "${aws_route_table.rt.id}"
destination_cidr_block = "${data.aws_vpc_peering_connection.pc.peer_cidr_block}"
vpc_peering_connection_id = "${data.aws_vpc_peering_connection.pc.id}"
}
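Note that this example assumes the local VPC is available both as a resource (aws_vpc.foo) and as a data source (data.aws_vpc.foo); a minimal sketch of the latter (the variable is hypothetical):

data "aws_vpc" "foo" {
  # Hypothetical: look the VPC up by an ID passed in as a variable.
  id = "${var.vpc_id}"
}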
You'll need to do similar things for any other IDs or things you need to reference in your DR region.
It's worth noting that there aren't any data sources for the DMS resources, so you would either need to use the terraform_remote_state data source to fetch any IDs (such as the source and target endpoint ARNs needed to set up the aws_dms_replication_task), or you could structure things so that all of the DMS work happens in the DR region; then you only need to refer to the other region's VPC ID, database names, and potentially KMS key IDs, which can all be done via data sources.

How to refer to a target in another folder in Terraform

The terraform structure of my project is:
iam/policies/policy1.tf
iam/roles/roles1.tf
policy1.tf includes several policies, e.g.:
resource "aws_iam_policy" "policy1A" {
name = "..."
policy = "${data.template_file.policy1ATempl.rendered}"
}
roles1.tf includes several roles, e.g.:
resource "aws_iam_role" "role1" {
name = ....
assume_role_policy = ....
}
Now I want to attach policy1A to role1. Since policy1 and role1 are not in the same folder, how do I attach them?
resource "aws_iam_policy_attachment" "attach1" {
name = "attachment1"
roles = ??? (sth. like ["${aws_iam_role.role1.name}"])
policy_arn = ??? (sth. like "${aws_iam_policy.Policy1.arn}")
}
You can't make references like this across directories because Terraform only works in one directory at a time. If you really do need to separate these into different folders (which I would recommend against here), then normally you would use data sources to retrieve information about resources that have already been created.
In this case, though, the aws_iam_policy data source is currently pretty much useless because it requires the ARN of the policy instead of the path or name. Instead, you can construct the ARN yourself if you know the policy name, as it follows a well-known pattern. You obviously know the name of the role(s) you want to attach the policy to, and you know the name of the policy, so that covers those things.
data "aws_caller_identity" "current" {}
resource "aws_iam_policy_attachment" "attach1" {
name = "attachment1"
roles = "role1"
policy_arn = "arn:aws:iam::${data.aws_caller_identity.current.account_id}:policy/policy1A"
}
The example above also briefly shows how to use data sources: it fetches the AWS account ID using the aws_caller_identity data source and dynamically uses that to build the ARN of the policy you are trying to attach.
I would echo the warning in the docs against using the aws_iam_policy_attachment resource unless you are absolutely certain of what you're doing, because it will detach the policy from anything else it happens to be attached to. If you want to take a managed policy and attach it to N roles, I would instead recommend using aws_iam_role_policy_attachment and its like, as sketched below.
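A minimal sketch of the per-role attachment, reusing the role and policy names from the question:

data "aws_caller_identity" "current" {}

resource "aws_iam_role_policy_attachment" "attach1" {
  role       = "role1"
  policy_arn = "arn:aws:iam::${data.aws_caller_identity.current.account_id}:policy/policy1A"
}

Unlike aws_iam_policy_attachment, this manages only the single role-to-policy link, so it won't detach the policy from anything else.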