I have just moved to a multi-account setup using Control Tower and am having a nightmare using Terraform to deploy resources in different accounts.
My (simplified) account structure is:
|--Master
|--management (backends etc)
|--images (s3, ecr)
|--dev
|--test
As a simplified experiment I am trying to create an ECR repository in the images account. I think I need to create a policy that enables role switching and grants permissions within the target account. For now I am being heavy-handed and just trying to switch to admin access. The AWSAdministratorAccess role is created by Control Tower during configuration.
provider "aws" {
region = "us-west-2"
version = "~> 3.1"
}
data "aws_iam_group" "admins" { // want to attach policy to admins to switch role
group_name = "administrators"
}
// Images account
resource "aws_iam_policy" "images-admin" {
name = "Assume-Role-Images_Admin"
description = "Allow assuming AWSAdministratorAccess role on Images account"
policy = <<EOP
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"sts:AssumeRole"
],
"Resource": "arn:aws:iam::<Images_Account_ID>:role/AWSAdministratorAccess"
}
]
}
EOP
}
resource "aws_iam_group_policy_attachment" "assume-role-images-admin" {
group = data.aws_iam_group.admins.group_name
policy_arn = aws_iam_policy.images-admin.arn
}
Having deployed this stack I then attempt to deploy another stack which creates a resource in the images account.
provider "aws" {
region = var.region
version = "~>3.1"
}
provider "aws" {
alias = "images"
region = var.region
version = "~> 3.1"
assume_role {
role_arn = "arn:aws:iam::<Images_Account_ID>:role/AWSAdministratorAccess"
}
}
resource "aws_ecr_repository" "boot-images" {
provider = aws.images
name = "boot-images"
}
On deployment I got:
> Error: error configuring Terraform AWS Provider: IAM Role (arn:aws:iam::*********:role/AWSAdministratorAccess) cannot be assumed.
There are a number of possible causes of this - the most common are:
* The credentials used in order to assume the role are invalid
* The credentials do not have appropriate permission to assume the role
* The role ARN is not valid
Error: NoCredentialProviders: no valid providers in chain. Deprecated.
For verbose messaging see aws.Config.CredentialsChainVerboseErrors
First one: the credentials provided are from the master account, which always worked in a single-account environment.
Second: that is what I thought attaching the policy achieved.
Third: I am less sure on this, but AWSAdministratorAccess definitely exists in the account, and I think the ARN format is correct; while AWS Single Sign-On refers to it as a Permission Set, the console also describes it as a role.
I found "Deploying to multiple AWS accounts with Terraform?", which was helpful, but I am still missing something here.
I am also at a loss as to how to extend this idea to deploying an S3 remote backend into my "management" account.
Terraform version 0.12.29
Turns out there were a couple of issues here:
Profile
The credentials profile was incorrect. Setting the correct credentials in environment variables let me run a simple test where using just the credentials file had failed. There is still something here I don't understand, since updating the credentials file also failed, but I now have a system that works.
AWS-created roles
While my assumption was correct that the Permission Sets are defined as roles, they have a trust relationship which was not extended to my master admin user (my bad), AND it cannot be amended, because the role was created automatically by AWS and is locked down.
Manually grant permissions
So while I can programmatically grant a group permission to assume a role via Terraform, I need to manually create a role in the target account which extends trust, and hence permissions, to the master account.
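For reference, the role created manually in the target account looks roughly like this if expressed in Terraform. This is a sketch only: the role name is an assumption, `<Master_Account_ID>` is a placeholder, and in practice I created it through the console.

```hcl
# Sketch: a cross-account admin role in the Images account that trusts
# the Master account. Role name is hypothetical; <Master_Account_ID> is
# a placeholder for your real account ID.
resource "aws_iam_role" "cross-account-admin" {
  name = "CrossAccountAdmin"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Action    = "sts:AssumeRole"
      Principal = { AWS = "arn:aws:iam::<Master_Account_ID>:root" }
    }]
  })
}

# Heavy-handed, as in the question: attach the AWS-managed admin policy.
resource "aws_iam_role_policy_attachment" "cross-account-admin" {
  role       = aws_iam_role.cross-account-admin.name
  policy_arn = "arn:aws:iam::aws:policy/AdministratorAccess"
}
```

The `assume_role` block in the second stack's provider would then point at this role's ARN instead of the locked-down SSO one.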
In my own experience, and considering you already have a working AWS infrastructure, I'd rule out Control Tower and look into doing the same things with CloudFormation StackSets. They let you target OUs or individual accounts.
Control Tower has been recommended to me several times, but with an AWS ecosystem of more than 25 accounts running production workloads, I am very reluctant to even try it. It's great when starting from scratch, I guess, but not when you already have a decent number of workloads and accounts in AWS.
Related
I just created a new AWS account using the Terraform aws_organizations_account resource. What I am now trying to do is create resources in that new account. I guess I need the account_id of the new AWS account to do that, so I stored it in an output variable, but beyond that I have no idea how to create an aws_s3_bucket, for example.
provider.tf
provider "aws" {
region = "us-east-1"
}
main.tf
resource "aws_organizations_account" "account" {
name = "tmp"
email = "first.last+tmp#company.com"
role_name = "myOrganizationRole"
parent_id = "xxxxx"
}
## what I am trying to create inside that tmp account
resource "aws_s3_bucket" "bucket" {}
outputs.tf
output "account_id" {
value = aws_organizations_account.account.id
sensitive = true
}
You can't do this the way you want. You need an entire account-creation pipeline for that. Roughly, the pipeline would have two main stages:
Create your AWS Organization and member accounts.
Assume a role in each member account, and run your TF code for that account to create resources.
There are many ways of doing this, and also there are many resources on this topic. Some of them are:
How to Build an AWS Multi-Account Strategy with Centralized Identity Management
Setting up an AWS organization from scratch with Terraform
Terraform on AWS: Multi-Account Setup and Other Advanced Tips
Apart from those, there is also AWS Control Tower, which can be helpful in setting up initial multi-account infrastructure.
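As a rough sketch of the second stage, a second provider can assume the role that AWS Organizations creates in the new account ("myOrganizationRole" here, taken from the question's role_name). Note that referencing a resource attribute inside a provider block only works once the account already exists, which is why this is usually split into separate pipeline stages or workspaces; region and bucket name below are assumptions.

```hcl
# Sketch: assume the organization-created role in the new member account.
# "myOrganizationRole" matches the role_name on the aws_organizations_account
# resource from the question.
provider "aws" {
  alias  = "member"
  region = "us-east-1"

  assume_role {
    role_arn = "arn:aws:iam::${aws_organizations_account.account.id}:role/myOrganizationRole"
  }
}

# The bucket is then created inside the member account.
resource "aws_s3_bucket" "bucket" {
  provider = aws.member
  bucket   = "my-tmp-account-bucket" # placeholder name
}
```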
I have no idea what to set here. The whole policy, binding and member stuff is very confusing IMHO. Are any of these roles? Anyway...
Trying to access the secret manager from a cloud function. The cloud function is setup using Terraform:
module "mds_reporting_cloud_function" {
source = "terraform-google-modules/scheduled-function/google"
version = "2.0.0"
project_id = var.function_gcp_project
job_name = var.function_name
job_description = var.function_description
job_schedule = var.function_cron_schedule
function_entry_point = "main"
function_source_directory = "${path.module}/../../../../src"
function_name = var.function_name
region = var.function_gcp_region
bucket_name = var.function_name
function_description = var.function_description
function_environment_variables = var.function_environment_variables
function_runtime = "python38"
topic_name = var.function_name
}
resource "google_cloudfunctions_function_iam_binding" "binding" {
project = var.function_gcp_project
region = var.function_gcp_region
cloud_function = var.function_name
role = "roles/secretmanager.secretAccessor"
members = [
"serviceAccount:${var.function_gcp_project}#appspot.gserviceaccount.com"
]
}
My understanding is, that if no service account for the cloud function is specified it will use the default App Engine service account.
The binding should 'bind' the role to the existing IAM policy of the App Engine service account.
However, it throws this error:
Error:
Error applying IAM policy for cloudfunctions cloudfunction "projects/alpine-proton-280612/locations/europe-west3/functions/mds-reporting-cloud-function":
Error setting IAM policy for cloudfunctions cloudfunction "projects/alpine-proton-280612/locations/europe-west3/functions/mds-reporting-cloud-function":
googleapi: Error 400: Role roles/secretmanager.secretAccessor is not supported for this resource.
Not sure what to do.
The best solution is to grant the Cloud Functions service account access to the secret on the secret itself, and nowhere else. For that, use the Secret Manager IAM Terraform resource:
resource "google_secret_manager_secret_iam_binding" "binding" {
project = var.function_gcp_project
secret_id = google_secret_manager_secret.your-secret.secret_id
# If your secret is not created by terraform, use this format for the id projects/{{project}}/secrets/{{secret_id}}
role = "roles/secretmanager.secretAccessor"
members = [
"serviceAccount:${var.function_gcp_project}#appspot.gserviceaccount.com"
]
}
Important note:
You can also grant this role at the project level, but it's less secure because the function would have access to all the secrets of the project.
You are using the App Engine (and Cloud Functions) default service account. That is also not very secure: any App Engine service, and any Cloud Function without a custom service account, uses this default service account and would therefore be able to access the secrets. Prefer a custom service account for your Cloud Functions.
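A minimal sketch of that last suggestion, with assumed names throughout (the account_id and the secret reference are placeholders; how you wire the new account into the function depends on your module version's service-account variable):

```hcl
# Sketch: a dedicated service account for the function, granted access
# only to the one secret it needs. All names are hypothetical.
resource "google_service_account" "function_sa" {
  project      = var.function_gcp_project
  account_id   = "mds-reporting-fn"
  display_name = "MDS reporting Cloud Function"
}

resource "google_secret_manager_secret_iam_member" "function_sa_access" {
  project   = var.function_gcp_project
  secret_id = google_secret_manager_secret.your-secret.secret_id
  role      = "roles/secretmanager.secretAccessor"
  member    = "serviceAccount:${google_service_account.function_sa.email}"
}
```

You would then set this account's email as the function's runtime service account instead of relying on the App Engine default.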
The second comment of John is very important. Terraform has several levels of write-and-replace for IAM roles. Keep in mind (this holds for all *_iam_* Terraform resources):
Policy replaces all the members of all the roles on the whole resource (on a project, this can be dramatic!)
Binding replaces all the members of one specific role
Member only adds one member to one specific role, and deletes nothing
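For example, to add a single account to a role without touching any existing grants, the "member" level looks like this (secret reference and service-account email are assumptions):

```hcl
# "member" level: adds this one principal to the role; removes nothing.
resource "google_secret_manager_secret_iam_member" "extra_reader" {
  secret_id = google_secret_manager_secret.your-secret.secret_id
  role      = "roles/secretmanager.secretAccessor"
  member    = "serviceAccount:another-sa@your-project.iam.gserviceaccount.com"
}
```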
I have the following multi-account setup with AWS SSO:
An account called "infrastructure-owner". Under this account, there is a role called "SomeAccessLevel" where I can click to sign-in the web console.
Another account called "infrastructure-consumer". Under this account there is the same role called "SomeAccessLevel" where I can click to sign-in the web console. There may be other roles.
Account "infrastructure-owner" owns resources (for example S3 buckets, DynamoDB tables, or VPNs) typically with read/write access. This account is somewhat protected and rarely used. Account "infrastructure-consumer" merely have read access to resources in "infrastructure-owner". This account is used often by multiple people/services. For example, production data pipelines run in "infrastructure-consumer" and have read-only rights to S3 buckets in "infrastructure-owner". However, from time to time, new data may be included manually in these S3 buckets via sign-in "infrastructure-owner".
I would like to provision this infrastructure with Terraform. I am unable to provide permissions for "infrastructure-consumer" to access resources from "infrastructure-owner". I've read dozens of blog posts on AWS multi-account / SSO / Terraform but I still cannot do it. At this point, I cannot even do it manually in the web console.
Please realize that "SomeAccessLevel" is a role created by AWS that I cannot modify (typically called AWSReservedSSO_YOURNAMEHERE_RANDOMSTRING). Also, I cannot give permissions to particular users, since these users may not be owned by "infrastructure-consumer". Also, users access this account via SSO using a role.
The following Terraform code is an example DynamoDB table created in the "infrastructure-owner" that I would like to read in the "infrastructure-consumer" account (any role):
# Terraform config
terraform {
  required_version = ">= 1.0.0"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.44"
    }
  }

  backend "remote" {
    hostname     = "app.terraform.io"
    organization = "YOUR_ORGANIZATION_HERE"

    workspaces {
      name = "YOUR_TF_WORKSPACE_NAME_HERE" # linked to "infrastructure-owner"
    }
  }
}

# Local provider
provider "aws" {
  profile = "YOUR_AWS_PROFILE_NAME_HERE" # linked to "infrastructure-owner"
  region  = "eu-central-1"
}

# Example resource that I would like to access from other accounts like "infrastructure-consumer"
resource "aws_dynamodb_table" "my-database" {
  # Basic
  name         = "my-database"
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "uuid"

  # Key
  attribute {
    name = "uuid"
    type = "S"
  }
}

# YOUR CODE TO ALLOW "infrastructure-consumer" TO READ THE TABLE.
It could also happen that there is a better architecture for this use case. I am trying to follow general practices for AWS multi-account for production environments, and Terraform for provisioning them.
Thank you!
I assume you mean AWS accounts and not IAM accounts (users).
I remember that roles assumed via AWS SSO have something called permission sets, which are no more than policies listing the API actions allowed or denied while assuming a role. I don't know exactly how AWS SSO influences role trust in AWS, but you could have a role in the infrastructure-owner account that trusts anything in the infrastructure-consumer account, i.e. trusting "arn:aws:iam::${var.infrastructure-consumer's account}:root".
To achieve that with Terraform, you would run it in your management account (the SSO administrator's account) and make that trust happen.
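A rough sketch of that idea, with assumed names (the role name and the var.consumer_account_id variable are placeholders): a role in infrastructure-owner that trusts the consumer account and allows read-only access to the table from the question.

```hcl
# Sketch: role in infrastructure-owner, assumable from anything in the
# infrastructure-consumer account. Names are hypothetical.
resource "aws_iam_role" "table-reader" {
  name = "InfraConsumerTableReader"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Action    = "sts:AssumeRole"
      Principal = { AWS = "arn:aws:iam::${var.consumer_account_id}:root" }
    }]
  })
}

# Read-only access to the example DynamoDB table.
resource "aws_iam_role_policy" "table-reader" {
  name = "read-my-database"
  role = aws_iam_role.table-reader.id

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect   = "Allow"
      Action   = ["dynamodb:GetItem", "dynamodb:Query", "dynamodb:Scan"]
      Resource = aws_dynamodb_table.my-database.arn
    }]
  })
}
```

Principals in infrastructure-consumer would still need an identity-side policy (or SSO permission set) allowing sts:AssumeRole on this role's ARN.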
We are working on a requirement where we want terraform apply, running on an AWS EC2 instance, to use an IAM role instead of credentials (access key/secret key) in the aws provider to create Route 53 records.
NOTE: the IAM role attached to the instance has a policy granting it Route 53 full access.
When we use the syntax below in terraform.tf, it works fine. We are able to create the route.
SYNTAX:
*provider "aws" {
access_key = "${var.aws_accesskey}
secret_key = "${var.aws_secretkey}
region = "us-east-1"
}
resource "aws_route53_record {}*
But, we want the terraform script to run with IAM Role and not with credentials. (Do not want to maintain credentials file)
STEPS TRIED:
1. Removed the provider block from the terraform.tf file and ran the build.
SYNTAX:
resource "aws_route53_record" "example" {}
2. Got the below error:
Provider.aws: InvalidClientTokenId.
3. Went through the official Terraform documentation on using an IAM role. It says to use the metadata API, but there is no working sample. (https://www.terraform.io/docs/providers/aws/index.html)
I am new to Terraform, so pardon me if this is a basic question. Can someone help with a working code sample to achieve this?
You need to supply the instance profile ARN in the "provider" block, not the role, like so:
provider "aws" {
  profile = "arn:aws:iam::<your account>:instance-profile/<your role name>"
}
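In my experience, when an instance profile is attached to the EC2 instance, the AWS provider can also pick up credentials from the instance metadata service with no credential configuration at all, as long as no stale keys are set in environment variables or the credentials file. A minimal sketch, assuming the instance profile is already attached:

```hcl
# Sketch: no access keys and no profile. With an instance profile attached,
# the provider falls back to instance metadata credentials automatically.
provider "aws" {
  region = "us-east-1"
}
```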
The 'role_arn' key mentioned in the answer above is actually invalid directly in the 'provider' context.
To use an IAM role, insert the following line inside an assume_role block within your provider:
role_arn = "arn:aws:iam::<your account>:role/SQS-Role-demo"
I have been using access/secret keys with Terraform to create/manage our infrastructure in AWS. However, I am trying to switch to using an IAM role instead. I should be able to use a role in my account, assume a role in another account, and run plan, apply etc. to build infra in that other account. Any ideas? Please suggest.
So far I am testing with https://www.terraform.io/docs/providers/aws/, but for some reason it is not working for me, or the instructions are not clear to me.
Get the full ARN of the role you want to assume. In your provider config, use the 'assume_role' block with that ARN: https://www.terraform.io/docs/providers/aws/index.html#assume_role
provider "aws" {
  region = "<whatever region>"

  assume_role {
    role_arn = "arn:aws:iam::ACCOUNT_ID:role/ROLE_NAME"
  }
}
We use a non-Terraform script to set up our credentials using an IAM role and assume-role (something like https://github.com/Integralist/Shell-Scripts/blob/master/aws-cli-assumerole.sh). For use with Okta, we use https://github.com/redventures/oktad.
We get the temporary credentials and token, save them in ~/.aws/credentials under the respective dev/prod etc. profile, and then point the respective Terraform provider configuration at it like this:
provider "aws" {
region = "${var.region}"
shared_credentials_file = "${pathexpand("~/.aws/credentials")}"
profile = "${var.dev_profile}"
}
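For reference, the credentials file such a helper script ends up writing looks roughly like this (profile name and values are placeholders; temporary credentials always include a session token):

```ini
; ~/.aws/credentials -- written by the assume-role helper script
[dev]
aws_access_key_id     = <temporary access key>
aws_secret_access_key = <temporary secret key>
aws_session_token     = <session token>
```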