Work with multiple environments/variables in Terraform

We're using Terraform to spin up our infrastructure within AWS, and we have 3 separate environments: Dev, Stage and Prod.
Dev: requires public, private1a, privatedb and privatedb2 subnets
Stage & Prod: require public, private_1a, private_1b, privatedb and privatedb2 subnets
I have main.tf, variables.tf, dev.tfvars, stage.tfvars and prod.tfvars. I'm trying to understand how I can use the main.tf file that I currently use for the dev environment to also create the resources required for stage and prod via the .tfvars files:
terraform apply -var-file=dev.tfvars
terraform apply -var-file=stage.tfvars (this should create subnet private_1b in addition to the other subnets)
terraform apply -var-file=prod.tfvars (this should create subnet private_1b in addition to the other subnets)
Please let me know if you need further clarification.
Thanks,

What you are trying to do is indeed the correct approach. You will also have to make use of terraform workspaces.
Terraform starts with a single workspace named "default". This
workspace is special both because it is the default and also because
it cannot ever be deleted. If you've never explicitly used workspaces,
then you've only ever worked on the "default" workspace.
Workspaces are managed with the terraform workspace set of commands.
To create a new workspace and switch to it, you can use terraform
workspace new; to switch environments you can use terraform workspace
select; etc.
In essence this means you will have a workspace for each environment you have.
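Creating one workspace per environment could look like this (the names are just suggestions):
terraform workspace new dev
terraform workspace new stage
terraform workspace new prod
terraform workspace list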
Let's see some examples.
I have the following files:
main.tf
variables.tf
dev.tfvars
production.tfvars
main.tf
This file contains the VPC module (it can be any resource, of course). We reference the variables via the var. prefix:
module "vpc" {
source = "modules/vpc"
cidr_block = "${var.vpc_cidr_block}"
subnets_private = "${var.vpc_subnets_private}"
subnets_public = "${var.vpc_subnets_public}"
}
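The module internals aren't shown here, but because the subnets arrive as lists, the module can create one subnet per CIDR with count, so an environment that supplies more CIDRs simply gets more subnets. A minimal sketch of what modules/vpc might contain (the aws_vpc.this resource and its layout are assumptions, not the original module):
resource "aws_vpc" "this" {
  cidr_block = var.cidr_block
}

resource "aws_subnet" "private" {
  # One subnet per CIDR supplied by the environment's .tfvars file
  count      = length(var.subnets_private)
  vpc_id     = aws_vpc.this.id
  cidr_block = var.subnets_private[count.index]
}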
variables.tf
This file contains all our variables. Please note that we do not assign defaults here; this makes sure we are 100% certain that we are using the values from the .tfvars files.
variable "vpc_cidr_block" {}
variable "vpc_subnets_private" {
type = "list"
}
variable "vpc_subnets_public" {
type = "list"
}
That's basically it. Our .tfvars files will look like this:
dev.tfvars
vpc_cidr_block = "10.40.0.0/16"
vpc_subnets_private = ["10.40.0.0/19", "10.40.64.0/19", "10.40.128.0/19"]
vpc_subnets_public = ["10.40.32.0/20", "10.40.96.0/20", "10.40.160.0/20"]
production.tfvars
vpc_cidr_block = "10.30.0.0/16"
vpc_subnets_private = ["10.30.0.0/19", "10.30.64.0/19", "10.30.128.0/19"]
vpc_subnets_public = ["10.30.32.0/20", "10.30.96.0/20", "10.30.160.0/20"]
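Because the subnets are plain lists, an environment that needs an extra subnet (such as the question's private_1b) just adds one more CIDR to its list; main.tf stays untouched. A hypothetical stage.tfvars (the CIDR values are made up):
vpc_cidr_block = "10.50.0.0/16"
vpc_subnets_private = ["10.50.0.0/19", "10.50.64.0/19", "10.50.128.0/19", "10.50.192.0/19"]
vpc_subnets_public = ["10.50.32.0/20", "10.50.96.0/20", "10.50.160.0/20"]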
If I wanted to run Terraform for my dev environment, these are the commands I would use (assuming the workspaces are already created; see the Terraform workspace docs):
Select the dev environment: terraform workspace select dev
Run a plan to see the changes: terraform plan -var-file=dev.tfvars -out=plan.out
Apply the changes: terraform apply plan.out
You can replicate this for as many environments as you like.
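For example, the production run would follow the same pattern (assuming a production workspace exists):
terraform workspace select production
terraform plan -var-file=production.tfvars -out=plan.out
terraform apply plan.out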

Related

Terraform: deploy different resources per environment

I'm new to Terraform, so I'm sure this is an easy question.
I'm trying to deploy into GCP using Terraform.
I have 2 different environments, both in the same GCP project:
nonlive
live
I have alerts for each environment, and this is what I intend to achieve:
if I deploy into an environment, Terraform must create/update the resources for that environment but not touch the resources of the other environments.
I'm trying to use modules and conditions, similar to this:
module "enviroment_live" {
source = "./live"
module_create = (var.environment=="live")
}
resource "google_monitoring_alert_policy" "alert_policy_live" {
count = var.module_create ? 1 : 0
display_name = "Alert CPU LPProxy Live"
Problem:
When I deploy to the live environment, Terraform deletes the alerts for the nonlive environment, and vice versa.
Is it possible to update the resources of one environment without deleting those of the other?
Regards
As Marko E suggested, the solution was to use workspaces:
Terraform workspaces
The steps are:
Create a workspace for each environment.
On deploy (CI/CD), select the workspace before plan/apply:
terraform workspace select $ENVIRONMENT
Use conditions (as explained above) to create/configure the resources.
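A variant of the same idea is to key the condition off the built-in terraform.workspace value instead of passing a variable; a minimal sketch (other required arguments elided, as above):
resource "google_monitoring_alert_policy" "alert_policy_live" {
  count        = terraform.workspace == "live" ? 1 : 0
  display_name = "Alert CPU LPProxy Live"
  ...
}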

How to choose from multiple providers when running terraform apply?

I have some terraform code which creates resources in AWS.
For production, we have a CICD pipeline that runs against the real AWS. For local development I use localstack. So there are two providers like this:
# AWS config
provider "aws" {
  region = ...
}

# Localstack config
provider "aws" {
  region                      = ...
  access_key                  = "mock_access_key"
  secret_key                  = "mock_secret_key"
  skip_credentials_validation = true

  endpoints {
    s3 = "http://localhost:4566"
    ...
  }
}
My current solution is to comment out whichever provider isn't being used. This isn't ideal because we need to remember to uncomment the localstack config when developing locally and the real AWS config when pushing code; forgetting to do so can be a big issue.
I also tried creating an alias for the localstack config (alias = "localstack") and a couple of variables:
variable "provider_id" {
  type = map(string)
  default = {
    localdev = "aws.localstack"
    prod     = "aws"
  }
}

variable "aws_account" {
  type    = string
  default = ""
}
Then the resources can lookup the provider like this:
provider = lookup(var.provider_id, var.aws_account)
Then the idea is that we can pass the variable aws_account=... when running terraform apply so the resources pick the right provider. However, this doesn't work and returns the error below (I presume because the provider argument expects a reference like aws.localstack, not a string):
╷
│ Error: Invalid provider configuration reference
│
│ on s3.tf line 2, in resource "aws_s3_bucket" "bucket":
│ 2: provider = lookup(var.provider_id, var.aws_account)
│
│ The provider argument requires a provider type name, optionally followed by a period and then a configuration alias.
╵
And even if it did work it wouldn't be ideal because we'd have to remember to add the provider lookup to every resource block.
I was wondering what is the best way to work with multiple providers like this? Ideally a simple solution such as passing a variable or parameter when running terraform apply. I don't mind if it requires a refactor of the terraform (I only learnt this tool last week).
The typical way to deal with that is through terraform workspaces:
In particular, organizations commonly want to create a strong separation between multiple deployments of the same infrastructure serving different development stages (e.g. staging vs. production) or different internal teams.
So you could try having two workspaces, one for localstack and the second for real aws.
You can achieve this via the alias workflow, but it requires code changes to include the alias in resource sections. Please refer to this doc:
https://www.terraform.io/docs/language/modules/develop/providers.html
Though, as you mentioned, one config is for the prod/root account and one is for local development; in this case the recommended and tested approach is to create two workspaces. Separating workspaces will give you a lot of advantages:
1. The code is separate, so there is no chance of messing with it accidentally.
2. Debugging will be much easier.
3. You are free to experiment with your code without fear.
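A sketch of what that two-workspace flow could look like (the workspace and .tfvars file names here are hypothetical):
terraform workspace new localstack
terraform workspace new production

# Local development against localstack
terraform workspace select localstack
terraform apply -var-file=localstack.tfvars

# CI/CD against real AWS
terraform workspace select production
terraform apply -var-file=production.tfvars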

Why does terraform plan or terraform apply use a different workspace than it should?

I added two terraform workspaces. (Terraform v0.14.6)
In my folder I have three files with .tfvars extension:
terraform.tfvars
abc.tfvars
123.tfvars
When I run terraform workspace list I see three workspaces, and workspace abc is marked with *:
  default
* abc
  123
The problem is that if I run terraform plan or terraform apply, Terraform uses the settings from the default workspace and from terraform.tfvars, not from abc.tfvars.
I do get the question: Do you want to perform these actions in workspace "abc"? But after that Terraform still uses terraform.tfvars, not abc.tfvars. Why?
terraform.tfvars
AWS_ACCESS_KEY = "xxxx"
AWS_SECRET_KEY = "xxxx"
LinkedAccount = "9xxxx"
abc.tfvars
AWS_ACCESS_KEY = "xxxx"
AWS_SECRET_KEY = "xxxx"
LinkedAccount = "8xxxx"
If it is difficult to say why, I can also post my main.tf and variables.tf files here. Maybe it is a simple issue to resolve and someone will see where my mistake is. Let me know.
Workspaces only separate state; they do not change which variable files are loaded, and terraform.tfvars is always loaded automatically. You will need to tell Terraform which variable file to load via the -var-file switch, like this:
terraform plan -var-file=abc.tfvars
Terraform Documentation: Input Variables
It's also worth reading this GitHub issue: Feature: Conditionally load tfvars/tf file based on Workspace #15966
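If you name each .tfvars file after its workspace, a small shell sketch like this avoids having to remember the switch (terraform workspace show prints the current workspace name):
terraform plan -var-file="$(terraform workspace show).tfvars"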

Terraform Dependencies between accounts [duplicate]

I've been looking for a way to deploy to multiple AWS accounts simultaneously in Terraform and coming up dry. AWS has the concept of doing this with Stacks, but I'm not sure if there is a way to do this in Terraform? If so, what would be some solutions?
You can read more about the CloudFormation solution here.
You can define multiple provider aliases which can be used to run actions in different regions or even different AWS accounts.
So to perform some actions in your default region (or be prompted for it if not defined in environment variables or ~/.aws/config) and also in US East 1 you'd have something like this:
provider "aws" {
# ...
}
# Cloudfront ACM certs must exist in US-East-1
provider "aws" {
alias = "cloudfront-acm-certs"
region = "us-east-1"
}
You'd then refer to them like so:
data "aws_acm_certificate" "ssl_certificate" {
provider = aws.cloudfront-acm-certs
...
}
resource "aws_cloudfront_distribution" "cloudfront" {
...
viewer_certificate {
acm_certificate_arn = data.aws_acm_certificate.ssl_certificate.arn
...
}
}
So if you want to do things across multiple accounts at the same time then you could assume a role in the other account with something like this:
provider "aws" {
# ...
}
# Assume a role in the DNS account so we can add records in the zone that lives there
provider "aws" {
alias = "dns"
assume_role {
role_arn = "arn:aws:iam::ACCOUNT_ID:role/ROLE_NAME"
session_name = "SESSION_NAME"
external_id = "EXTERNAL_ID"
}
}
And refer to it like so:
data "aws_route53_zone" "selected" {
provider = aws.dns
name = "test.com."
}
resource "aws_route53_record" "www" {
provider = aws.dns
zone_id = data.aws_route53_zone.selected.zone_id
name = "www.${data.aws_route53_zone.selected.name"
...
}
Alternatively, you can provide credentials for the different AWS accounts in a number of other ways, such as hardcoding them in the provider, using different Terraform variables or AWS SDK specific environment variables, or using a configured profile.
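For instance, the configured-profile option could look like this; the profile name dns-account is hypothetical and would have to exist in your ~/.aws/config:
provider "aws" {
  alias   = "dns"
  region  = "us-east-1"   # assumed region
  profile = "dns-account" # hypothetical profile from ~/.aws/config
}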
I would recommend also combining your solution with Terraform workspaces:
Named workspaces allow conveniently switching between multiple
instances of a single configuration within its single backend. They
are convenient in a number of situations, but cannot solve all
problems.
A common use for multiple workspaces is to create a parallel, distinct
copy of a set of infrastructure in order to test a set of changes
before modifying the main production infrastructure. For example, a
developer working on a complex set of infrastructure changes might
create a new temporary workspace in order to freely experiment with
changes without affecting the default workspace.
Non-default workspaces are often related to feature branches in
version control. The default workspace might correspond to the
"master" or "trunk" branch, which describes the intended state of
production infrastructure. When a feature branch is created to develop
a change, the developer of that feature might create a corresponding
workspace and deploy into it a temporary "copy" of the main
infrastructure so that changes can be tested without affecting the
production infrastructure. Once the change is merged and deployed to
the default workspace, the test infrastructure can be destroyed and
the temporary workspace deleted.
AWS S3 is in the list of the supported backends.
It is very easy to use (similar to working with git branches) and to combine with the selected AWS account:
terraform workspace list
  dev
* prod
  staging
A few references regarding configuring the AWS provider to work with multiple accounts:
https://terragrunt.gruntwork.io/docs/features/work-with-multiple-aws-accounts/
https://assets.ctfassets.net/hqu2g0tau160/5Od5r9RbuEYueaeeycUIcK/b5a355e684de0a842d6a3a483a7dc7d3/devopscon-V2.1.pdf
