I just created a new AWS account using the Terraform aws_organizations_account resource. What I am now trying to do is create resources in that new account. I guess I need the account_id of the new AWS account to do that, so I stored it in a new output variable, but beyond that I have no idea how to create an aws_s3_bucket, for example.
provider.tf
provider "aws" {
region = "us-east-1"
}
main.tf
resource "aws_organizations_account" "account" {
name = "tmp"
email = "first.last+tmp#company.com"
role_name = "myOrganizationRole"
parent_id = "xxxxx"
}
## what I am trying to create inside that tmp account
resource "aws_s3_bucket" "bucket" {}
outputs.tf
output "account_id" {
value = aws_organizations_account.account.id
sensitive = true
}
You can't do this the way you want. You need an entire account-creation pipeline for that. Roughly, the pipeline would have two main stages:
1. Create your AWS Org and member accounts.
2. Assume a role in each member account, and run your TF code against those accounts to create resources (see the sketch below).
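As a minimal sketch of stage 2, assuming the org role name from the question (note the provider references the account resource, so the account generally has to exist before the provider can assume the role, e.g. via two applies or a targeted apply):
provider "aws" {
  alias  = "member"
  region = "us-east-1"

  assume_role {
    # Role created by AWS Organizations in the new account (the role_name above)
    role_arn = "arn:aws:iam::${aws_organizations_account.account.id}:role/myOrganizationRole"
  }
}

resource "aws_s3_bucket" "bucket" {
  provider = aws.member
  bucket   = "tmp-account-example-bucket" # hypothetical bucket name
}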
There are many ways of doing this, and there are many resources on the topic. Some of them are:
How to Build an AWS Multi-Account Strategy with Centralized Identity Management
Setting up an AWS organization from scratch with Terraform
Terraform on AWS: Multi-Account Setup and Other Advanced Tips
Apart from those, there is also AWS Control Tower, which can be helpful in setting up initial multi-account infrastructure.
Based on this comment I wanted to give @Davos the opportunity to supply his answer to this question:
can you point at a good example of this (cross) account deployment setup? I am using the .aws/config and ./aws/credentials entries of another account, and specifying AWS_PROFILE=dev_admin for example, but resource owners are still showing as the main org's Management Account #. I've had no luck with the provider "profile" either...
I'm not aware of any comprehensive tutorial for cross-account deployment.
The AWS Terraform provider has options such as profile, with which we can specify which profile from our ~/.aws/config file should be used. Moreover, the provider can have an assume_role block, in which case a certain role will be assumed to create resources, although this is only necessary if we want to use the same user and assume a role in another account.
We can have multiple providers in the same project. Each provider can use credentials for a different user in a different account. Each resource can specify which provider to use, so it will be provisioned in that specific account.
Bringing this all together, we can have the following example:
~/.aws/credentials file:
[default]
aws_access_key_id=ACCESS_KEY
aws_secret_access_key=SECRET_KEY
[user1]
aws_access_key_id=ACCESS_KEY
aws_secret_access_key=SECRET_KEY
~/.aws/config file:
[default]
region=us-west-1
output=json
[profile user1]
region=us-east-1
output=text
Terraform code:
# Default provider: uses the credentials of the default profile and provisions resources in the default account
provider "aws" {
  region = "us-west-1"
}

# Provider for another account: uses the credentials of profile user1 and provisions resources in the secondary account
provider "aws" {
  alias   = "account1"
  region  = "us-east-1"
  profile = "user1"
}

# No provider is explicitly specified, so this uses the default provider
# and is deployed in the default account
resource "aws_vpc" "default_vpc" {
  cidr_block = "10.0.0.0/16"
}

# Provider is explicitly specified, so this goes into the secondary account
resource "aws_vpc" "another_vpc" {
  provider   = aws.account1
  cidr_block = "10.0.0.0/16"
}
Obviously, the state will be kept in a single place, which can be a bucket in any of the accounts.
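For example, a minimal sketch of such a shared backend, assuming a pre-existing state bucket in one of the accounts (names are hypothetical):
terraform {
  backend "s3" {
    bucket  = "my-terraform-state" # hypothetical bucket name
    key     = "multi-account/terraform.tfstate"
    region  = "us-west-1"
    profile = "default" # credentials for state storage, independent of the resource providers
  }
}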
I have just moved to a multi-account setup using Control Tower and am having a 'mare using Terraform to deploy resources in different accounts.
My (simplified) account structure is:
|--Master
|  |--management (backends etc)
|  |--images (s3, ecr)
|  |--dev
|  |--test
As a simplified experiment I am trying to create an ECR repository in the images account. So I think I need to create a policy to enable role switching and to provide permissions within the target account. For now I am being heavy-handed and just trying to switch to admin access. The AWSAdministratorAccess role is created by Control Tower on configuration.
provider "aws" {
region = "us-west-2"
version = "~> 3.1"
}
data "aws_iam_group" "admins" { // want to attach policy to admins to switch role
group_name = "administrators"
}
// Images account
resource "aws_iam_policy" "images-admin" {
name = "Assume-Role-Images_Admin"
description = "Allow assuming AWSAdministratorAccess role on Images account"
policy = <<EOP
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"sts:AssumeRole"
],
"Resource": "arn:aws:iam::<Images_Account_ID>:role/AWSAdministratorAccess"
}
]
}
EOP
}
resource "aws_iam_group_policy_attachment" "assume-role-images-admin" {
group = data.aws_iam_group.admins.group_name
policy_arn = aws_iam_policy.images-admin.arn
}
Having deployed this stack, I then attempt to deploy another stack which creates a resource in the images account.
provider "aws" {
region = var.region
version = "~>3.1"
}
provider "aws" {
alias = "images"
region = var.region
version = "~> 3.1"
assume_role {
role_arn = "arn:aws:iam::<Images_Account_ID>:role/AWSAdministratorAccess"
}
}
resource "aws_ecr_repository" "boot-images" {
provider = aws.images
name = "boot-images"
}
On deployment I got:
> Error: error configuring Terraform AWS Provider: IAM Role (arn:aws:iam::*********:role/AWSAdministratorAccess) cannot be assumed.
There are a number of possible causes of this - the most common are:
* The credentials used in order to assume the role are invalid
* The credentials do not have appropriate permission to assume the role
* The role ARN is not valid
Error: NoCredentialProviders: no valid providers in chain. Deprecated.
For verbose messaging see aws.Config.CredentialsChainVerboseErrors
First one: the creds provided are from the master account, which always worked in a single-account environment.
Second: that's what I think has been achieved by attaching the policy.
Third: less sure on this, but AWSAdministratorAccess definitely exists in the account; I think the ARN format is correct, and while AWS Single Sign-On refers to it as a Permission Set, the console also describes it as a role.
I found Deploying to multiple AWS accounts with Terraform? which was helpful but I am missing something here.
I am also at a loss as to how to extend this idea to deploying an S3 remote backend into my "management" account.
Terraform version 0.12.29
Turns out there were a couple of issues here:
Profile
The credentials profile was incorrect. Setting the correct creds in environment variables let me run a simple test when just using the creds file had failed. There is still an issue here I don't understand, as updating the creds file also failed, but I have a system that works.
AWS-created roles
While my assumption was correct that the Permission Sets are defined as roles, they have a trust relationship which was not extended to my master admin user (my bad), AND it cannot be amended because it was created automatically by AWS and is locked down.
Manually grant permissions
So while I can grant permissions to a group to assume a role programmatically via Terraform, I need to manually create a role in the target account which extends trust, and hence permissions, to the master account.
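A minimal sketch of such a role, defined and applied in the target account, assuming the <Master_Account_ID> placeholder and a hypothetical role name:
resource "aws_iam_role" "cross_account_admin" {
  name = "CrossAccountAdmin" # hypothetical name

  # Trust policy extending trust (and hence assumability) to the master account
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Principal = { AWS = "arn:aws:iam::<Master_Account_ID>:root" }
      Action    = "sts:AssumeRole"
    }]
  })
}

resource "aws_iam_role_policy_attachment" "cross_account_admin" {
  role       = aws_iam_role.cross_account_admin.name
  policy_arn = "arn:aws:iam::aws:policy/AdministratorAccess"
}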
In my own experience, and considering you already have a working AWS infrastructure, I'd rule out Control Tower and look into doing the same things with CloudFormation StackSets. They let you target OUs or individual accounts.
Control Tower has been recommended to me several times, but with an AWS ecosystem of more than 25 accounts running production workloads, I am very reluctant to even try. It's great when starting from scratch, I guess, but not when you already have a decent number of workloads and accounts in AWS.
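StackSets can themselves be driven from Terraform. A rough sketch, assuming a service-managed stack set and a hypothetical OU id:
resource "aws_cloudformation_stack_set" "baseline" {
  name             = "org-wide-baseline" # hypothetical name
  permission_model = "SERVICE_MANAGED"   # roles are managed by AWS Organizations

  auto_deployment {
    enabled = true
  }

  # Trivial example template: one S3 bucket per target account
  template_body = jsonencode({
    Resources = {
      ExampleBucket = { Type = "AWS::S3::Bucket" }
    }
  })
}

resource "aws_cloudformation_stack_set_instance" "targets" {
  stack_set_name = aws_cloudformation_stack_set.baseline.name
  region         = "us-west-2"

  deployment_targets {
    organizational_unit_ids = ["ou-xxxx-xxxxxxxx"] # hypothetical OU id
  }
}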
I've been looking for a way to be able to deploy to multiple AWS accounts simultaneously in Terraform and coming up dry. AWS has the concept of doing this with Stacks but I'm not sure if there is a way to do this in Terraform? If so what would be some solutions?
You can read more about the CloudFormation solution here.
You can define multiple provider aliases which can be used to run actions in different regions or even different AWS accounts.
So to perform some actions in your default region (or be prompted for it if not defined in environment variables or ~/.aws/config) and also in US East 1 you'd have something like this:
provider "aws" {
# ...
}
# Cloudfront ACM certs must exist in US-East-1
provider "aws" {
alias = "cloudfront-acm-certs"
region = "us-east-1"
}
You'd then refer to them like so:
data "aws_acm_certificate" "ssl_certificate" {
provider = aws.cloudfront-acm-certs
...
}
resource "aws_cloudfront_distribution" "cloudfront" {
...
viewer_certificate {
acm_certificate_arn = data.aws_acm_certificate.ssl_certificate.arn
...
}
}
So if you want to do things across multiple accounts at the same time then you could assume a role in the other account with something like this:
provider "aws" {
# ...
}
# Assume a role in the DNS account so we can add records in the zone that lives there
provider "aws" {
alias = "dns"
assume_role {
role_arn = "arn:aws:iam::ACCOUNT_ID:role/ROLE_NAME"
session_name = "SESSION_NAME"
external_id = "EXTERNAL_ID"
}
}
And refer to it like so:
data "aws_route53_zone" "selected" {
provider = aws.dns
name = "test.com."
}
resource "aws_route53_record" "www" {
provider = aws.dns
zone_id = data.aws_route53_zone.selected.zone_id
name = "www.${data.aws_route53_zone.selected.name"
...
}
Alternatively, you can provide credentials for different AWS accounts in a number of other ways, such as hardcoding them in the provider, using different Terraform variables, using AWS-SDK-specific environment variables, or using a configured profile.
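For instance, a sketch of the variables approach (the variable names are hypothetical, and static keys should be handled with care since they end up in state):
variable "secondary_access_key" { type = string }
variable "secondary_secret_key" { type = string }

provider "aws" {
  alias      = "static"
  region     = "us-east-1"
  access_key = var.secondary_access_key # hypothetical variables; avoid committing real keys
  secret_key = var.secondary_secret_key
}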
I would recommend also combining your solution with Terraform workspaces:
Named workspaces allow conveniently switching between multiple instances of a single configuration within its single backend. They are convenient in a number of situations, but cannot solve all problems.

A common use for multiple workspaces is to create a parallel, distinct copy of a set of infrastructure in order to test a set of changes before modifying the main production infrastructure. For example, a developer working on a complex set of infrastructure changes might create a new temporary workspace in order to freely experiment with changes without affecting the default workspace.

Non-default workspaces are often related to feature branches in version control. The default workspace might correspond to the "master" or "trunk" branch, which describes the intended state of production infrastructure. When a feature branch is created to develop a change, the developer of that feature might create a corresponding workspace and deploy into it a temporary "copy" of the main infrastructure so that changes can be tested without affecting the production infrastructure. Once the change is merged and deployed to the default workspace, the test infrastructure can be destroyed and the temporary workspace deleted.
AWS S3 is in the list of the supported backends.
It is very easy to use (similar to working with git branches) and to combine with the selected AWS account.
terraform workspace list
dev
* prod
staging
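For example, a sketch of combining the current workspace with per-account profiles (the map values are hypothetical profile names from ~/.aws/config):
locals {
  workspace_profiles = {
    dev     = "dev_admin"
    staging = "staging_admin"
    prod    = "prod_admin"
  }
}

provider "aws" {
  region  = "us-east-1"
  profile = local.workspace_profiles[terraform.workspace]
}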
A few references regarding configuring the AWS provider to work with multiple accounts:
https://terragrunt.gruntwork.io/docs/features/work-with-multiple-aws-accounts/
https://assets.ctfassets.net/hqu2g0tau160/5Od5r9RbuEYueaeeycUIcK/b5a355e684de0a842d6a3a483a7dc7d3/devopscon-V2.1.pdf
I have been using access/secret keys with Terraform to create/manage our infrastructure in AWS. However, I am trying to switch to using an IAM role instead. I should be able to use a role in my account and assume a role in another account, and be able to run plan, apply, etc. to build infra in the other account. Any ideas? Please suggest.
So far, I am testing with https://www.terraform.io/docs/providers/aws/, but for some reason, it is not working for me or the instructions are not clear to me.
Get the full ARN of the role you want to assume. In your provider config, use the assume_role block with that ARN: https://www.terraform.io/docs/providers/aws/index.html#assume_role
provider "aws" {
  region = "<whatever region>"

  assume_role {
    role_arn = "arn:aws:iam::ACCOUNT_ID:role/ROLE_NAME"
  }
}
We use a non-Terraform script to set up our credentials using an IAM role and assume role (something like https://github.com/Integralist/Shell-Scripts/blob/master/aws-cli-assumerole.sh). For use with Okta, we use https://github.com/redventures/oktad.
We get the temporary credentials and token, save them in ~/.aws/credentials as the respective dev/prod etc. profile, and then point our respective Terraform provider configuration at it like this:
provider "aws" {
  region                  = "${var.region}"
  shared_credentials_file = "${pathexpand("~/.aws/credentials")}"
  profile                 = "${var.dev_profile}"
}