I am developing the infrastructure as code (IaC) I want to have in AWS with Terraform. To test it, I am using an EC2 instance.
This code has to be deployable across multiple accounts and **multiple regions (environments) per developer**. This is an example:
account-999:
  developer1: us-east-2
  developer2: us-west-1
  developerN: us-east-1

account-666:
  Staging: us-east-1
  Production: eu-west-2
I've created two .tfvars files, account-999.env.tfvars and account-666.env.tfvars, with the following content: profile="account-999" and profile="account-666" respectively.
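In other words, each file contains just that single assignment:

# account-999.env.tfvars
profile = "account-999"

# account-666.env.tfvars
profile = "account-666"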
This is my main.tf, which contains the aws provider and the EC2 instance:
provider "aws" {
version = "~> 2.0"
region = "us-east-1"
profile = var.profile
}
data "aws_ami" "ubuntu" {
most_recent = true
filter {
name = "name"
values = ["ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-*"]
}
filter {
name = "virtualization-type"
values = ["hvm"]
}
owners = ["099720109477"]
}
resource "aws_instance" "web" {
ami = data.aws_ami.ubuntu.id
instance_type = "t3.micro"
tags = {
Name = "HelloWorld"
}
}
And the variable.tf file:
variable "profile" {
type=string
}
variable "region" {
description = "Region by developer"
type = map
default = {
developer1 = "us-west-2"
developer2 = "us-east-2"
developerN = "ap-southeast-1"
}
}
But I'm not sure if I'm managing this well. For example, the region variable only contains the values for the account-999 account. How can I solve that?
On the other hand, with this structure, would it be possible to implement modules?
You could use a provider alias to accomplish this. More info about provider aliases can be found in the Terraform documentation on provider configuration.
provider "aws" {
region = "us-east-1"
}
provider "aws" {
alias = "west"
region = "us-west-2"
}
resource "aws_instance" "foo" {
provider = aws.west
# ...
}
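Tying that back to the region map in the question, you could declare one aliased provider per developer and index the map with that developer's key (a sketch only; the keys come from the region variable above):

provider "aws" {
  alias   = "developer1"
  profile = var.profile
  region  = var.region["developer1"]
}

resource "aws_instance" "web_developer1" {
  provider = aws.developer1
  # same arguments as the original aws_instance, but note that any
  # region-specific data sources (like the AMI lookup) also need
  # provider = aws.developer1
}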
Another way to look at it is by using Terraform workspaces. Here is an example:
terraform workspace new account-999
terraform workspace new account-666
Then this is an example of your aws credentials file:
[account-999]
aws_access_key_id=xxx
aws_secret_access_key=xxx
[account-666]
aws_access_key_id=xxx
aws_secret_access_key=xxx
A reference to that account can be used within the provider block:
provider "aws" {
region = "us-east-1"
profile = "${terraform.workspace}"
}
You could even combine both methods!
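For instance, a minimal sketch of combining them (assuming one workspace per account, named after the profiles above):

provider "aws" {
  profile = terraform.workspace # e.g. "account-999" or "account-666"
  region  = "us-east-1"
}

provider "aws" {
  alias   = "west"
  profile = terraform.workspace
  region  = "us-west-2"
}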
Looking at this example:
https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/elasticache_global_replication_group
The secondary region is referencing the aws.other_region variable; however, the aws provider spec does not have an 'other_region' field.
When I try to set that manually to 'us-west-1', for example, it fails with a 'failed to install provider' error.
The other_region provider can be defined like this:
provider "aws" {
region = "us-west-2"
}
provider "aws" {
alias = "other_region"
region = "us-west-1"
}
Then you can use it in resources like this:
resource "aws_elasticache_replication_group" "secondary" {
provider = aws.other_region
replication_group_id = "example-secondary"
}
terraform plan shows the correct result when run locally, but it does not create the resources defined in the module when run in GitHub Actions. The other resources in the root main.tf (S3) are created fine.
Root project:
terraform {
  backend "s3" {
    bucket = "sd-tfstorage"
    key    = "terraform/backend"
    region = "us-east-1"
  }
}

locals {
  env_name         = "sandbox"
  aws_region       = "us-east-1"
  k8s_cluster_name = "ms-cluster"
}
# Network Configuration
module "aws-network" {
  source = "github.com/<name>/module-aws-network"

  env_name              = local.env_name
  vpc_name              = "msur-VPC"
  cluster_name          = local.k8s_cluster_name
  aws_region            = local.aws_region
  main_vpc_cidr         = "10.10.0.0/16"
  public_subnet_a_cidr  = "10.10.0.0/18"
  public_subnet_b_cidr  = "10.10.64.0/18"
  private_subnet_a_cidr = "10.10.128.0/18"
  private_subnet_b_cidr = "10.10.192.0/18"
}
# EKS Configuration
# GitOps Configuration
Module:
provider "aws" {
region = var.aws_region
}
locals {
vpc_name = "${var.env_name} ${var.vpc_name}"
cluster_name = "${var.cluster_name}-${var.env_name}"
}
## AWS VPC definition
resource "aws_vpc" "main" {
cidr_block = var.main_vpc_cidr
enable_dns_support = true
enable_dns_hostnames = true
tags = {
"Name" = local.vpc_name,
"kubernetes.io/cluster/${local.cluster_name}" = "shared",
}
}
When you run it locally, you are using your default AWS profile to plan it.
Have you set up your GitHub Actions environment with the correct AWS access (for example, repository secrets exposed as the AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY environment variables) to perform those actions?
I would like to store a Terraform state file in one AWS account and deploy infrastructure into another. Is it possible to provide different sets of credentials for the backend and the AWS provider using environment variables (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY)? Or maybe provide credentials to one with environment variables and to the other through a shared_credentials_file?
main.tf
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "=3.74.3"
    }
  }

  backend "s3" {
    encrypt = true
    bucket  = "bucket-name"
    region  = "us-east-1"
    key     = "terraform.tfstate"
  }
}
variable "region" {
default = "us-east-1"
}
provider "aws" {
region = "${var.region}"
}
resource "aws_vpc" "test" {
cidr_block = "10.0.0.0/16"
}
Yes, the AWS profile/access key configuration used by the S3 backend is separate from the AWS profile/access key configuration used by the AWS provider. By default they both look in the same place, but you could configure the backend to use a different profile so that it connects to a different AWS account.
Yes, and you can even keep them in separate files in the same folder to avoid confusion.
backend.tf
terraform {
  backend "s3" {
    profile        = "profile-1"
    region         = "eu-west-1"
    bucket         = "your-bucket"
    key            = "terraform-state/terraform.tfstate"
    dynamodb_table = "terraform-locks"
    encrypt        = true
  }
}
main.tf
provider "aws" {
profile = "profile-2"
region = "us-east-1"
}
resource .......
This way, the state file will be stored in the account behind profile-1, and all the resources will be created in the account behind profile-2.
I tried to create a simple example in an AWS environment. In the beginning, I export two values:
export AWS_ACCESS_KEY_ID= something
export AWS_SECRET_ACCESS_KEY= something
After that, I wrote this simple code.
provider "aws" {
region = "us-east-1"
access_key = AWS_ACCESS_KEY_ID
secret_key = AWS_SECRET_ACCESS_KEY
}
resource "aws_instance" "example" {
ami = "ami-40d28157"
instance_type = "t2.micro"
tags = {
Name = "terraform-example"
}
}
When I put the actual values in instead of the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY parameters, everything works OK, but with the code as provided I see the following error:
on main.tf line 4, in provider "aws":
4: secret_key = AWS_SECRET_ACCESS_KEY
A reference to a resource type must be followed by at least one attribute
access, specifying the resource name.
Any ideas on how to solve this problem?
You don't have to do anything. As explained in the AWS provider authentication documentation, Terraform will automatically use the credentials in this order:
Static credentials
Environment variables
Shared credentials/configuration file
CodeBuild, ECS, and EKS Roles
EC2 Instance Metadata Service (IMDS and IMDSv2)
So once you export your keys (make sure to export them correctly, with no spaces around the =):
export AWS_ACCESS_KEY_ID="something"
export AWS_SECRET_ACCESS_KEY="something"
in your config file you would just use (exemplified in the docs):
provider "aws" {
region = "us-east-1"
}
resource "aws_instance" "example" {
ami = "ami-40d28157"
instance_type = "t2.micro"
tags = {
Name = "terraform-example"
}
}
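If you do want to pass the keys explicitly in the configuration instead of relying on the environment, the usual pattern is to declare input variables and reference those; a minimal sketch (the variable names here are illustrative, and the values can be supplied via TF_VAR_aws_access_key / TF_VAR_aws_secret_key or a .tfvars file):

variable "aws_access_key" {
  type      = string
  sensitive = true # requires Terraform 0.14+
}

variable "aws_secret_key" {
  type      = string
  sensitive = true
}

provider "aws" {
  region     = "us-east-1"
  access_key = var.aws_access_key # e.g. set with: export TF_VAR_aws_access_key="something"
  secret_key = var.aws_secret_key
}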
In my application I am using an AWS Auto Scaling group managed with Terraform. I launch an Auto Scaling group giving it a number of instances in a region. But since only 20 instances are allowed per region, I want to launch an Auto Scaling group that will create instances across multiple regions so that I can launch more. I had this configuration:
# ---------------------------------------------------------------------------------------------------------------------
# THESE TEMPLATES REQUIRE TERRAFORM VERSION 0.8 AND ABOVE
# ---------------------------------------------------------------------------------------------------------------------
terraform {
  required_version = ">= 0.9.3"
}
provider "aws" {
access_key = "${var.aws_access_key}"
secret_key = "${var.aws_secret_key}"
region = "us-east-1"
}
provider "aws" {
alias = "us-west-1"
region = "us-west-1"
}
provider "aws" {
alias = "us-west-2"
region = "us-west-2"
}
provider "aws" {
alias = "eu-west-1"
region = "eu-west-1"
}
provider "aws" {
alias = "eu-central-1"
region = "eu-central-1"
}
provider "aws" {
alias = "ap-southeast-1"
region = "ap-southeast-1"
}
provider "aws" {
alias = "ap-southeast-2"
region = "ap-southeast-2"
}
provider "aws" {
alias = "ap-northeast-1"
region = "ap-northeast-1"
}
provider "aws" {
alias = "sa-east-1"
region = "sa-east-1"
}
resource "aws_launch_configuration" "launch_configuration" {
name_prefix = "${var.asg_name}-"
image_id = "${var.ami_id}"
instance_type = "${var.instance_type}"
associate_public_ip_address = true
key_name = "${var.key_name}"
security_groups = ["${var.security_group_id}"]
user_data = "${data.template_file.user_data_client.rendered}"
lifecycle {
create_before_destroy = true
}
}
# ---------------------------------------------------------------------------------------------------------------------
# CREATE AN AUTO SCALING GROUP (ASG)
# ---------------------------------------------------------------------------------------------------------------------
resource "aws_autoscaling_group" "autoscaling_group" {
name = "${var.asg_name}"
max_size = "${var.max_size}"
min_size = "${var.min_size}"
desired_capacity = "${var.desired_capacity}"
launch_configuration = "${aws_launch_configuration.launch_configuration.name}"
vpc_zone_identifier = ["${data.aws_subnet_ids.default.ids}"]
lifecycle {
create_before_destroy = true
}
tag {
key = "Environment"
value = "production"
propagate_at_launch = true
}
tag {
key = "Name"
value = "clj-${var.job_id}-instance"
propagate_at_launch = true
}
}
# ---------------------------------------------------------------------------------------------------------------------
# THE USER DATA SCRIPT THAT WILL RUN ON EACH CLIENT NODE WHEN IT'S BOOTING
# ---------------------------------------------------------------------------------------------------------------------
data "template_file" "user_data_client" {
template = "${file("./user-data-client.sh")}"
vars {
company_location_job_id = "${var.job_id}"
docker_login_username = "${var.docker_login_username}"
docker_login_password = "${var.docker_login_password}"
}
}
# ---------------------------------------------------------------------------------------------------------------------
# DEPLOY THE CLUSTER IN THE DEFAULT VPC AND SUBNETS
# Using the default VPC and subnets makes this example easy to run and test, but it means Instances are
# accessible from the public Internet. In a production deployment, we strongly recommend deploying into a custom VPC
# and private subnets.
# ---------------------------------------------------------------------------------------------------------------------
data "aws_subnet_ids" "default" {
vpc_id = "${var.vpc_id}"
}
But this configuration does not work: it only launches instances in a single region and throws an error once they reach 20.
How can we create instances across multiple regions with an Auto Scaling group?
You correctly instantiate multiple aliased providers, but are not using any of them.
If you really need to create resources in different regions from one configuration, you must pass the alias of the provider to the resource:
resource "aws_autoscaling_group" "autoscaling_group_eu-central-1" {
provider = "aws.eu-central-1"
}
And repeat this block as many times as needed (or, better, extract it into a module and pass the providers to the module, as sketched below).
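For the module route, a rough sketch (the module name and path are illustrative; configuration_aliases requires Terraform 0.13+):

# Inside the child module (e.g. modules/asg/versions.tf): declare the alias it expects
terraform {
  required_providers {
    aws = {
      source                = "hashicorp/aws"
      configuration_aliases = [aws.target]
    }
  }
}

# In the root configuration: call the module once per region,
# passing a different aliased provider each time
module "asg_eu_central_1" {
  source = "./modules/asg"

  providers = {
    aws.target = aws.eu-central-1
  }
}

module "asg_us_west_2" {
  source = "./modules/asg"

  providers = {
    aws.target = aws.us-west-2
  }
}

Resources inside the module then reference provider = aws.target.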
But, as mentioned in a comment, if all you want to achieve is to have more than 20 instances, you can increase your limit by opening a ticket with AWS support.