I have a Terraform script that builds infrastructure in my main AWS account. Under that account I have sub-organizations (member accounts). I need to run my TF script to build infrastructure in one of those sub-accounts. How can I do it?
The best practice to do so is to create a "TerraformRole" in your sub account, which can be assumed by the "TerraformRole" from your master AWS account.
You then define the AWS provider to assume this role.
provider "aws" {
version = "~> 2.33.0"
region = var.region
assume_role {
role_arn = "arn:aws:iam::${var.account_id}:role/${var.terraform_role_name}"
}
}
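For completeness, a hedged sketch of what that role could look like on the sub-account side; the role name, the management account ID, and the AdministratorAccess attachment are placeholders, not something stated in the answer above:
resource "aws_iam_role" "terraform_role" {
  name = "TerraformRole"

  # Trust policy: allow the management (master) account to assume this role.
  # The account ID is a placeholder.
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect = "Allow"
      Action = "sts:AssumeRole"
      Principal = {
        AWS = "arn:aws:iam::111111111111:root"
      }
    }]
  })
}

# Placeholder permissions; scope this down to whatever Terraform actually manages.
resource "aws_iam_role_policy_attachment" "terraform_role_permissions" {
  role       = aws_iam_role.terraform_role.name
  policy_arn = "arn:aws:iam::aws:policy/AdministratorAccess"
}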
Based on this comment, I wanted to give @Davos the opportunity to supply his answer to this question:
can you point at a good example of this (cross) account deployment setup? I am using the ~/.aws/config and ~/.aws/credentials entries of another account, and specifying AWS_PROFILE=dev_admin for example, but resource owners are still showing as the main org's Management Account ID. I've had no luck with the provider "profile" either...
I'm not aware of any kind of comprehensive tutorial for cross-account deployment.
The AWS Terraform provider has options such as profile, which specifies which profile from our ~/.aws/config file should be used. The provider can also have an assume_role block, in which case that role is assumed when creating resources; this is only necessary if we want to keep using the same user but assume a role in another account.
We can have multiple providers in the same project. Each provider can use credentials for different users in different accounts. Each resource can specify which provider to use, so it will be provisioned in that specific account.
Bringing this all together, we can have the following example:
~/.aws/credentials file:
[default]
aws_access_key_id=ACCESS_KEY
aws_secret_access_key=SECRET_KEY
[user1]
aws_access_key_id=ACCESS_KEY
aws_secret_access_key=SECRET_KEY
~/.aws/config file:
[default]
region=us-west-1
output=json
[profile user1]
region=us-east-1
output=text
Terraform code:
# Default provider, it will use the credentials for the default profile and it will provision resources in the default account
provider "aws" {
  region = "us-west-1"
}

# Provider for another account, it will use the credentials for profile user1 and it will provision resources in the secondary account
provider "aws" {
  alias   = "account1"
  region  = "us-east-1"
  profile = "user1"
}

# No provider is explicitly specified, this will use the default provider
# It will be deployed in the default account
resource "aws_vpc" "default_vpc" {
  cidr_block = "10.0.0.0/16"
}

# Provider is explicitly specified, so this will go into the secondary account
resource "aws_vpc" "another_vpc" {
  provider   = aws.account1
  cidr_block = "10.0.0.0/16"
}
Obviously, the state will be kept in a single place, which can be a bucket in any of the accounts.
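A minimal sketch of such a backend block, where the bucket name, key, and region are placeholders; the bucket can sit in either account, as long as the credentials used by the backend can read and write it:
terraform {
  backend "s3" {
    bucket  = "my-terraform-state-bucket"       # placeholder
    key     = "multi-account/terraform.tfstate" # placeholder
    region  = "us-west-1"
    profile = "default" # credentials used only for the state itself
  }
}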
I just created a new AWS account using Terraform's aws_organizations_account resource. What I am now trying to do is create resources in that new account. I guess I need the account_id of the new AWS account to do that, so I stored it in an output variable, but beyond that I have no idea how to create, for example, an aws_s3_bucket inside it.
provider.tf
provider "aws" {
region = "us-east-1"
}
main.tf
resource "aws_organizations_account" "account" {
name = "tmp"
email = "first.last+tmp#company.com"
role_name = "myOrganizationRole"
parent_id = "xxxxx"
}
## what I am trying to create inside that tmp account
resource "aws_s3_bucket" "bucket" {}
outputs.tf
output "account_id" {
value = aws_organizations_account.account.id
sensitive = true
}
You can't do this the way you want. You need an entire account-creation pipeline for that. Roughly, the pipeline would have two main stages:
Create your AWS Org and member accounts.
Assume a role from the member accounts, and run your TF code against those accounts to create resources (a sketch follows below).
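A rough sketch of that second stage, reusing the aws_organizations_account resource and the myOrganizationRole role name from the question (the alias, region, and bucket name are illustrative); in practice the account creation and the member-account resources usually live in separate Terraform runs:
# Aliased provider that assumes the role created in the new member account.
provider "aws" {
  alias  = "member"
  region = "us-east-1"

  assume_role {
    role_arn = "arn:aws:iam::${aws_organizations_account.account.id}:role/myOrganizationRole"
  }
}

# The bucket now lands in the member account instead of the management account.
resource "aws_s3_bucket" "bucket" {
  provider = aws.member
  bucket   = "example-bucket-in-member-account" # placeholder name
}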
There are many ways of doing this, and also there are many resources on this topic. Some of them are:
How to Build an AWS Multi-Account Strategy with Centralized Identity Management
Setting up an AWS organization from scratch with Terraform
Terraform on AWS: Multi-Account Setup and Other Advanced Tips
Apart from those, there is also AWS Control Tower, which can be helpful in setting up initial multi-account infrastructure.
I provision AWS resources with Terraform from a Python script that calls terraform via the shell:
os.system('terraform apply')
The only way I found to get terraform to authenticate, after enabling MFA and assuming a role, is to export these environment variables:
os.system('export ASSUMED_ROLE="<>:botocore-session-123"; '
          'export AWS_ACCESS_KEY_ID="vfdgdsfg"; '
          'export AWS_SECRET_ACCESS_KEY="fgbdzf"; '
          'export AWS_SESSION_TOKEN="fsrfserfgs"; '
          'export AWS_SECURITY_TOKEN="fsrfserfgs"; terraform apply')
This worked OK until I configured s3 as the backend: the terraform action is performed, but before the state can be stored in the bucket I get the standard (very confusing) exception:
Error: error configuring S3 Backend: Error creating AWS session: AssumeRoleTokenProviderNotSetError: assume role with MFA enabled, but AssumeRoleTokenProvider session option not set.
I read this excellent answer explaining that for security and other reasons backend configuration is separate.
Since I don't want to add actual secret keys to source code (as suggested by the post), I tried adding a reference to the profile; when that failed, I added the actual keys just to see if it would work, which it didn't.
My working theory is that behind the scenes terraform starts another process which doesn't access or inherit the credential environment variables.
How do I use the s3 backend with an MFA-assumed role?
You must point the backend to the desired profile; in my case, the same profile used for the provisioning itself.
Here is a minimal POC:
terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
    }
  }

  backend "s3" {
    bucket  = "unique-terraform-state-dev"
    key     = "test"
    region  = "us-east-2"
    profile = "the_role_assumed_in_aws_credentials"
  }
}

provider "aws" {
  version = "~> 3.0"
  region  = var.region
}

resource "aws_s3_bucket" "s3_bucket" {
  bucket = var.bucket_name
}
As a reminder, this is run from a shell that has these environment variables set:
os.system('export ASSUMED_ROLE="<>:botocore-session-123"; '
          'export AWS_ACCESS_KEY_ID="vfdgdsfg"; '
          'export AWS_SECRET_ACCESS_KEY="fgbdzf"; '
          'export AWS_SESSION_TOKEN="fsrfserfgs"; '
          'export AWS_SECURITY_TOKEN="fsrfserfgs"; terraform apply')
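As a side note, if you would rather not rely on a shared profile for the backend, the S3 backend at that Terraform version also accepts its own role_arn; a hedged variant of the block above, with a placeholder ARN:
terraform {
  backend "s3" {
    bucket   = "unique-terraform-state-dev"
    key      = "test"
    region   = "us-east-2"
    # Instead of `profile`, let the backend assume the role itself.
    role_arn = "arn:aws:iam::123456789012:role/terraform-state-role" # placeholder
  }
}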
I would like to use AWS Assume Roles with Terraform Cloud / Enterprise.
In Terraform Open Source, you would typically just do an Assume Role, using the ~/.aws/credentials profile on the CLI for the initial authentication and then performing the Assume Role:
provider "aws" {
assume_role {
role_arn = "arn:aws:iam::ACCOUNT_ID:role/ROLE_NAME"
session_name = "SESSION_NAME"
external_id = "EXTERNAL_ID"
}
}
The issue is that with Terraform Enterprise or Cloud you cannot reference a profile, as the immutable infrastructure will not have that file on disk.
Terraform Cloud/Enterprise needs an Access Key ID and Secret Access Key set as variables so that its infrastructure can perform the Terraform run via its pipeline and authenticate to whatever AWS account you would like to provision into.
So the question is:
How can I perform an AWS Assume Role, leveraging the Access Key ID, and Secret Access Key, of the AWS account with the "Action": "sts:AssumeRole", Policy?
I would think the below would work; however, Terraform is doing the initial authentication via the AWS credential profile's creds for the account which has the sts:AssumeRole policy.
Can Terraform look at the access_key and secret_key to determine which AWS account to use when trying to assume the role, rather than using the AWS credential profile?
provider "aws" {
region = var.aws_region
access_key = var.access_key_id
secret_key = var.secret_access_key
assume_role {
role_arn = "arn:aws:iam::566264069176:role/RemoteAdmin"
#role_arn = "arn:aws:iam::<awsaccount>:role/<rolename>" # Do a replace in "file_update_automation.ps1"
session_name = "RemoteAdminRole"
}
}
In order for Terraform Cloud/Enterprise to get new Assume Role session tokens, it would need to use the access_key and secret_key, not an AWS credentials profile, to tell it which AWS account has the sts:AssumeRole permission linking to the member AWS account to be provisioned.
Thank you
This can be achieved if you have a Business plan enabled and implement self-hosted Terraform agents in your infrastructure. See the video.
I used the exact same provider configuration, minus the explicit adding of the access keys.
The access keys were added in the Terraform Cloud workspace as environment variables.
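In other words, roughly this (a sketch based on the provider block from the question; the keys come from the workspace's AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY environment variables rather than from the configuration):
provider "aws" {
  region = var.aws_region

  # No access_key / secret_key here: the provider picks them up from the
  # AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables set
  # on the Terraform Cloud workspace.
  assume_role {
    role_arn     = "arn:aws:iam::566264069176:role/RemoteAdmin"
    session_name = "RemoteAdminRole"
  }
}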
This is definitely possible with Terraform Enterprise (TFE) if your TFE infrastructure is also hosted in AWS and the instance profile is trusted by the role you are trying to assume.
For Terraform Cloud (TFC) it is a different story: today there is no way to create a trust between TFC and an IAM role, but we can leverage the AWS SDK's ability to pick up credentials from environment variables. You have 2 options:
Create AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables on the workspace and set them (remember to mark the secret access key as sensitive). The provider will pick these up from the environment and work the same as it does locally.
If all workspaces need to use the same access and secret keys, you can set the env variables on a variable set, which will apply to all workspaces (a sketch of managing these variables as code follows below).
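If you prefer to manage those workspace variables as code rather than through the UI, here is a hedged sketch using the hashicorp/tfe provider; the workspace and organization names are placeholders, and a similar approach works for variable sets:
data "tfe_workspace" "this" {
  name         = "my-workspace" # placeholder
  organization = "my-org"       # placeholder
}

resource "tfe_variable" "aws_access_key_id" {
  key          = "AWS_ACCESS_KEY_ID"
  value        = var.aws_access_key_id
  category     = "env"
  workspace_id = data.tfe_workspace.this.id
}

resource "tfe_variable" "aws_secret_access_key" {
  key          = "AWS_SECRET_ACCESS_KEY"
  value        = var.aws_secret_access_key
  category     = "env"
  sensitive    = true
  workspace_id = data.tfe_workspace.this.id
}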
I have been using access/secret keys with Terraform to create/manage our infrastructure in AWS. However, I am trying to switch to using an IAM role instead. I should be able to use a role in my account, assume a role in another account, and run plan, apply, etc. to build infra in that other account. Any ideas? Please suggest.
So far, I am testing with https://www.terraform.io/docs/providers/aws/, but for some reason, it is not working for me or the instructions are not clear to me.
Get the full ARN for the role you want to assume. In your provider config use the 'assume_role' block with the ARN: https://www.terraform.io/docs/providers/aws/index.html#assume_role
provider "aws"
region = "<whatever region>"
assume_role {
role_arn = "arn:aws:iam::ACCOUNT_ID:role/ROLE_NAME"
}
}
We use a non-Terraform script to set up our credentials using an IAM role and assume role (something like https://github.com/Integralist/Shell-Scripts/blob/master/aws-cli-assumerole.sh). For use with Okta, we use https://github.com/redventures/oktad
We get the temporary credentials and token, save them in ~/.aws/credentials as the respective dev/prod etc. profile, and then point the respective Terraform provider configuration at it like this:
provider "aws" {
region = "${var.region}"
shared_credentials_file = "${pathexpand("~/.aws/credentials")}"
profile = "${var.dev_profile}"
}