Let me give some context to the issue.
I'm trying to create a terraform script that deploys an AWS Organization with some accounts and also some resources in those accounts.
So, the issue is that I can't seem to figure out how to create resources in multiple accounts at runtime, meaning I'd like to create resources in accounts that were created by the same script.
The "workflow" would be something like this:
Script creates AWS Organization
Same script creates AWS Organizations account
Same script creates an S3 bucket on the account created
Is this something that is possible to do? I know one can "impersonate" users by doing something like the following.
provider "aws" {
alias = "dns"
assume_role {
role_arn = "arn:aws:iam::ACCOUNT_ID:role/ROLE_NAME"
session_name = "SESSION_NAME"
external_id = "EXTERNAL_ID"
}
}
But is this information something I can get, after creating the account, as some sort of output from the aws_organizations_account Terraform resource?
Maybe there is another way of doing this and I just need some reading material.
You can do this, but you may want to separate some of these things out to minimise the blast radius, so it's not all in a single terraform apply or terraform destroy.
As a quick example you could do something like the following:
resource "aws_organizations_organization" "org" {
aws_service_access_principals = [
"cloudtrail.amazonaws.com",
"config.amazonaws.com",
]
feature_set = "ALL"
}
resource "aws_organizations_account" "new_account" {
name = "my_new_account"
email = "john#doe.org"
depends_on = [
aws_organizations_organization.org,
]
}
provider "aws" {
alias = "new_account"
assume_role {
role_arn = "arn:aws:iam::${aws_organizations_account.new_account.id}:role/OrganizationAccountAccessRole"
session_name = "new_account_creation"
}
}
resource "aws_s3_bucket" "bucket" {
provider = aws.new_account
bucket = "new-account-bucket-${aws_organizations_account.new_account.id}"
acl = "private"
}
The above uses the default OrganizationAccountAccessRole IAM role that is created in the child account to then create the S3 bucket in that account.
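To answer the output question: the new account's ID is an attribute of aws_organizations_account, so you can build and expose the assumable role ARN yourself. A minimal sketch (the output name is just illustrative):

output "new_account_access_role_arn" {
  # OrganizationAccountAccessRole is the default admin role that AWS Organizations
  # creates in accounts created through the organization
  value = "arn:aws:iam::${aws_organizations_account.new_account.id}:role/OrganizationAccountAccessRole"
}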
I would like to store a Terraform state file in one AWS account and deploy infrastructure into another. Is it possible to provide a different set of credentials for the backend and the AWS provider using environment variables (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY)? Or maybe provide credentials to one with environment variables and to the other through shared_credentials_file?
main.tf
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "=3.74.3"
    }
  }

  backend "s3" {
    encrypt = true
    bucket  = "bucket-name"
    region  = "us-east-1"
    key     = "terraform.tfstate"
  }
}

variable "region" {
  default = "us-east-1"
}

provider "aws" {
  region = var.region
}

resource "aws_vpc" "test" {
  cidr_block = "10.0.0.0/16"
}
Yes, the AWS profile/access keys configuration used by the S3 backend is separate from the AWS profile/access keys configuration used by the AWS provider. By default they both look in the same place, but you can configure the backend to use a different profile so that it connects to a different AWS account.
Yes, and you can even keep them in separate files in the same folder to avoid confusion:
backend.tf
terraform {
  backend "s3" {
    profile        = "profile-1"
    region         = "eu-west-1"
    bucket         = "your-bucket"
    key            = "terraform-state/terraform.tfstate"
    dynamodb_table = "terraform-locks"
    encrypt        = true
  }
}
main.tf
provider "aws" {
profile = "profile-2"
region = "us-east-1"
}
resource .......
This way, the state file will be stored using profile-1, and all the resources will be created using profile-2.
As I'm new to Terraform, I'd like to ask for your help, since I've been stuck for almost a day.
When trying to apply IaC to deploy an Nginx service onto ECS (EC2 launch type) on AWS, I'm facing the following problem:
Error: Error creating IAM Role nginx-iam_role: MalformedPolicyDocument: Has prohibited field Resource status code: 400, request id: 0f1696f4-d86b-4ad1-ba3b-9453f3beff2b
I have already checked the documentation and the syntax is fine. What else could be wrong?
Below is the code snippet creating the IAM infra:
provider "aws" {
region = "us-east-2"
}
data "aws_iam_policy_document" "nginx-doc-policy" {
statement {
sid = "1"
actions = [
"ec2:*"
]
resources = ["*"]
}
}
resource "aws_iam_role" "nginx-iam_role" {
name = "nginx-iam_role"
path = "/"
assume_role_policy = "${data.aws_iam_policy_document.nginx-doc-policy.json}"
}
resource "aws_iam_group_policy" "nginx-group-policy" {
name = "my_developer_policy"
group = "${aws_iam_group.nginx-iam-group.name}"
policy = "${data.aws_iam_policy_document.nginx-doc-policy.json}"
}
resource "aws_iam_group" "nginx-iam-group" {
name = "nginx-iam-group"
path = "/"
}
resource "aws_iam_user" "nginx-user" {
name = "nginx-user"
path = "/"
}
resource "aws_iam_user_group_membership" "nginx-membership" {
user = "${aws_iam_user.nginx-user.name}"
groups = ["${aws_iam_group.nginx-iam-group.name}"]
}
If you guys need the remaining code: https://github.com/atilasantos/iac-terraform-nginx.git
You are trying to use the aws_iam_policy_document.nginx-doc-policy policy as an assume_role_policy, which does not work because an assume role policy needs to define a principal that you trust and want to grant access to assume the role you are creating.
An assume role policy could look like this if you want to grant EC2 instances access to the role via instance profiles. At the end you can attach your initial policy document to the role as an inline policy via a new resource:
data "aws_iam_policy_document" "instance-assume-role-policy" {
statement {
actions = ["sts:AssumeRole"]
principals {
type = "Service"
identifiers = ["ec2.amazonaws.com"]
}
}
}
resource "aws_iam_role" "nginx-iam_role" {
name = "nginx-iam_role"
path = "/"
assume_role_policy = data.aws_iam_policy_document.instance-assume-role-policy.json
}
resource "aws_iam_role_policy" "role_policy" {
name = "role policy"
role = aws_iam_role.nginx-iam_role.id
policy = data.aws_iam_policy_document.nginx-doc-policy.json
}
Instead of attaching the policy as an inline policy, you can also create a standalone IAM policy and attach it to the various IAM resources (e.g. aws_iam_policy and aws_iam_role_policy_attachment for roles), as sketched below.
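A minimal sketch of that alternative, reusing the policy document and role from above (the resource names here are just illustrative):

resource "aws_iam_policy" "nginx-policy" {
  name   = "nginx-policy"
  policy = data.aws_iam_policy_document.nginx-doc-policy.json
}

resource "aws_iam_role_policy_attachment" "nginx-policy-attachment" {
  # attaches the standalone managed policy to the role instead of inlining it
  role       = aws_iam_role.nginx-iam_role.name
  policy_arn = aws_iam_policy.nginx-policy.arn
}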
We created a bunch of open-source IAM modules (and others) to make IAM handling easier: Find them here on github. But there are more modules out there that you can try.
I would like to manage AWS S3 buckets with terraform and noticed that there's a region parameter for the resource.
I have an AWS provider that is configured for one region, and would like to use that provider to create S3 buckets in multiple regions if possible. My S3 buckets have a lot of common configuration that I don't want to repeat, so I have a local module to do all the repetitive stuff.
In mod-s3-bucket/main.tf, I have something like:
variable "bucket_region" {}
variable "bucket_name" {}

resource "aws_s3_bucket" "s3_bucket" {
  region = var.bucket_region
  bucket = var.bucket_name
}
And then in main.tf in the parent directory (tf root):
provider "aws" {
region = "us-east-1"
}
module "somebucket" {
source = "mod-s3-bucket"
bucket_region = "us-east-1"
bucket_name = "useast1-bucket"
}
module "anotherbucket" {
source = "mod-s3-bucket"
bucket_region = "us-east-2"
bucket_name = "useast2-bucket"
}
When I run a terraform apply with that, both buckets get created in us-east-1 - is this expected behaviour? My understanding is that region should make the buckets get created in different regions.
Further to that, if I run a terraform plan after bucket creation, I see the following:
~ region = "us-east-1" -> "us-east-2"
on one of the buckets, but after an apply the region has not changed.
I know I can easily solve this by using a 2nd, aliased AWS provider, but am asking specifically about how the region parameter is meant to work for an aws_s3_bucket resource (https://www.terraform.io/docs/providers/aws/r/s3_bucket.html#region)
terraform v0.12.24
aws v2.64.0
I think you'll need to do something like the docs show in this example for Replication Configuration: https://www.terraform.io/docs/providers/aws/r/s3_bucket.html#using-replication-configuration
# /root/main.tf
provider "aws" {
  region = "us-east-1"
}

provider "aws" {
  alias  = "us-east-2"
  region = "us-east-2"
}

module "somebucket" {
  source        = "mod-s3-bucket"
  bucket_region = "us-east-1"
  bucket_name   = "useast1-bucket"
}

module "anotherbucket" {
  source        = "mod-s3-bucket"
  provider      = "aws.us-east-2"
  bucket_region = "us-east-2"
  bucket_name   = "useast2-bucket"
}

# /mod-s3-bucket/main.tf
variable "provider" {
  type    = string
  default = "aws"
}

variable "bucket_region" {}
variable "bucket_name" {}

resource "aws_s3_bucket" "s3_bucket" {
  provider = var.provider
  region   = var.bucket_region
  bucket   = var.bucket_name
}
I've never explicitly set the provider like that in a resource, though, but based on the docs it might work.
The region attribute in the S3 bucket resource isn't handled as expected; there is a bug for this:
https://github.com/terraform-providers/terraform-provider-aws/issues/592
The multiple-provider approach is needed.
Terraform informs you if you try to set the region directly in the resource:
╷
│ Error: Value for unconfigurable attribute
│
│ with aws_s3_bucket.my_bucket,
│ on s3.tf line 10, in resource "aws_s3_bucket" "my_bucket":
│ 28: region = "us-east-1"
│
│ Can't configure a value for "region": its value will be decided automatically based on the result of applying this configuration.
Terraform uses the configuration of the provider, where the region is set, for managing resources. Alternatively, as already mentioned, you can use multiple configurations for the same provider by making use of the alias meta-argument (see the sketch after the quoted docs).
You can optionally define multiple configurations for the same provider, and select which one to use on a per-resource or per-module basis. The primary reason for this is to support multiple regions for a cloud platform; other examples include targeting multiple Docker hosts, multiple Consul hosts, etc.
...
A provider block without an alias argument is the default configuration for that provider. Resources that don't set the provider meta-argument will use the default provider configuration that matches the first word of the resource type name. (link)
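Applied to the bucket module from the question, a minimal sketch (assuming Terraform 0.13+ and that the local module path is ./mod-s3-bucket) would drop the region argument from the aws_s3_bucket resource and instead hand an aliased provider configuration to each module call:

provider "aws" {
  region = "us-east-1"
}

provider "aws" {
  alias  = "us_east_2"
  region = "us-east-2"
}

module "anotherbucket" {
  source = "./mod-s3-bucket"

  # the module's default "aws" provider becomes the us-east-2 configuration,
  # so its aws_s3_bucket is created in us-east-2
  providers = {
    aws = aws.us_east_2
  }

  bucket_name = "useast2-bucket"
}

The module itself needs no other provider-related changes besides removing bucket_region: an un-aliased provider passed through providers simply replaces the module's default aws configuration.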
I am new to Terraform and still learning.
I know there is a way to import your existing infrastructure into Terraform and have the state file created. But as of now I have multiple AWS accounts, with multiple regions in those accounts, which have multiple VPCs.
My task is to create VPC flow logs through Terraform.
Is it possible?
If it is, could you please help me or point me to how to get this done?
It is possible, albeit a little messy. You will need to create a provider block (with a unique alias) for each account/region combination you have (you can use profiles, but I think roles are best), and select those providers in your resources appropriately.
provider "aws" {
alias = "acct1uswest2"
region = "us-west-2"
assume_role {
role_arn = "arn:aws:iam::ACCOUNT_ID:role/ROLE_NAME"
}
}
provider "aws" {
alias = "acct2useast1"
region = "us-east-1"
assume_role {
role_arn = "arn:aws:iam::ACCOUNT_ID:role/ROLE_NAME"
}
}
resource "aws_flow_log" "flow1" {
provider = aws.acct1uswest2
vpc_id = "vpc id in account 1" # you mentioned the vpc already exists, so you can either import the vpc and reference it's .id attribute here, or just put the id here as a string
...
}
resource "aws_flow_log" "flow2" {
provider = aws.acct2useast1
vpc_id = "vpc id in account 2"
...
}
Suggest you read up on importing resources (and the implications) here
More on multiple providers here
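If you'd rather not import the existing VPCs just for their IDs, a hedged alternative is a data source lookup per account (the Name tag here is only an assumption about how the VPC is tagged):

data "aws_vpc" "acct1_vpc" {
  provider = aws.acct1uswest2

  # filter on a tag the existing VPC is assumed to carry
  tags = {
    Name = "main"
  }
}

aws_flow_log.flow1 could then reference data.aws_vpc.acct1_vpc.id instead of a hard-coded string.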
I'm trying to create data roles in three environments in AWS using Terraform.
One is a role in the root account. This role is used to log in to AWS and can assume the data roles in production and staging. This works fine. It is using a separate module.
I have problems when trying to create the roles in prod and staging from a module.
My module's main.tf looks like this:
resource "aws_iam_role" "this" {
name = "${var.name}"
description = "${format("%s (managed by Terraform)", var.policy_description)}"
assume_role_policy = "${length(var.custom_principals) == 0 ? data.aws_iam_policy_document.assume_role.json : data.aws_iam_policy_document.assume_role_custom_principals.json}"
}
resource "aws_iam_policy" "this" {
name = "${var.name}"
description = "${format("%s (managed by Terraform)", var.policy_description)}"
policy = "${var.policy}"
}
data "aws_iam_policy_document" "assume_role" {
statement {
actions = ["sts:AssumeRole"]
principals {
type = "AWS"
identifiers = ["arn:aws:iam::${var.account_id}:root"]
}
}
}
data "aws_iam_policy_document" "assume_role_custom_principals" {
statement {
actions = ["sts:AssumeRole"]
principals {
type = "AWS"
identifiers = [
"${var.custom_principals}",
]
}
}
}
resource "aws_iam_role_policy_attachment" "this" {
role = "${aws_iam_role.this.name}"
policy_arn = "${aws_iam_policy.this.arn}"
}
I also have the following in output.tf:
output "role_name" {
value = "${aws_iam_role.this.name}"
}
Next I try to use the module to create two roles in prod and staging.
main.tf:
module "data_role" {
source = "../tf_data_role"
account_id = "${var.account_id}"
name = "data"
policy_description = "Role for data engineers"
custom_principals = [
"arn:aws:iam::${var.master_account_id}:root",
]
policy = "${data.aws_iam_policy_document.data_access.json}"
}
Then I'm trying to attach AWS managed policies like this:
resource "aws_iam_role_policy_attachment" "data_readonly_access" {
role = "${module.data_role.role_name}"
policy_arn = "arn:aws:iam::aws:policy/ReadOnlyAccess"
}
resource "aws_iam_role_policy_attachment" "data_redshift_full_access" {
role = "${module.data_role.role_name}"
policy_arn = "arn:aws:iam::aws:policy/AmazonRedshiftFullAccess"
}
The problem I encounter here is that when I run this, the two policies above are attached in the root account instead of in staging. How can I fix this so the policies are attached in staging?
I'll assume from your question that staging is its own AWS account, separate from your root account. From the Terraform docs:
You can define multiple configurations for the same provider in order to support multiple regions, multiple hosts, etc.
This also applies to creating resources in multiple AWS accounts. To create Terraform resources in two AWS accounts, follow these steps.
In your entrypoint main.tf, define aws providers for the accounts you'll be targeting:
# your normal provider targeting your root account
provider "aws" {
  version = "1.40"
  region  = "us-east-1"
}

provider "aws" {
  version = "1.40"
  region  = "us-east-1"
  alias   = "staging" # define custom alias

  # either use an assumed role or allowed_account_ids to target another account
  assume_role {
    role_arn = "arn:aws:iam::STAGINGACCOUNTNUMBER:role/Staging"
  }
}
(Note: the role ARN must already exist and your current AWS credentials must have permission to assume it.)
To use them in your module, call your module like this:
module "data_role" {
source = "../tf_data_role"
providers = {
aws.staging = "aws.staging"
aws = "aws"
}
account_id = "${var.account_id}"
name = "data"
... remainder of module
}
and define the providers within your module like this
provider "aws" {
alias = "staging"
}
provider "aws" {}
Now when you are declaring resources within your module, you can dictate which AWS provider (and hence which account) to create the resources in, e.g.:
resource "aws_iam_role" "this" {
provider = "aws.staging" # this aws_iam_role will be created in your staging account
name = "${var.name}"
description = "${format("%s (managed by Terraform)", var.policy_description)}"
assume_role_policy = "${length(var.custom_principals) == 0 ? data.aws_iam_policy_document.assume_role.json : data.aws_iam_policy_document.assume_role_custom_principals.json}"
}
resource "aws_iam_policy" "this" {
# no explicit provider is set here so it will use the "default" (un-aliased) aws provider and create this aws_iam_policy in your root account
name = "${var.name}"
description = "${format("%s (managed by Terraform)", var.policy_description)}"
policy = "${var.policy}"
}