I am getting the following error while setting up a Kinesis Data Firehose event destination for Amazon SES event publishing using Terraform. It seems that Terraform creates the IAM role but then throws the error while creating the Firehose event destination with that IAM role.
However, I am able to attach the same IAM role (the one created by Terraform) to the Firehose event destination from the AWS console.
If I manually create the same IAM role in the AWS console and then pass the ARN of that role to Terraform, it works. However, if I try to create the role using Terraform and then create the event destination, it doesn't work. Can someone please help me with this?
Error creating SES configuration set event destination: InvalidFirehoseDestination: Could not assume IAM role <arn:aws:iam::<AWS account name >:role/<AWS IAM ROLE NAME>>.
data "aws_iam_policy_document" "ses_configuration_set_assume_role" {
statement {
effect = "Allow"
actions = ["sts:AssumeRole"]
principals {
type = "Service"
identifiers = ["ses.amazonaws.com"]
}
}
}
data "aws_iam_policy_document" "ses_firehose_destination_policy" {
statement {
effect = "Allow"
actions = [
"firehose:PutRecord",
"firehose:PutRecordBatch"
]
resources = [
"<ARN OF AWS FIREHOSE DELIVERY STREAM >"
]
}
}
resource "aws_iam_policy" "ses_firehose_destination_iam_policy" {
name = "SesfirehosedestinationPolicy"
policy = data.aws_iam_policy_document.ses_firehose_destination_policy.json
}
resource "aws_iam_role" "ses_firehose_destination_role" {
name = "SesfirehosedestinationRole"
assume_role_policy = data.aws_iam_policy_document.ses_configuration_set_assume_role.json
}
resource "aws_iam_role_policy_attachment" "ses_firehose_destination_role_att" {
role = aws_iam_role.ses_firehose_destination_role.name
policy_arn = aws_iam_policy.ses_firehose_destination_iam_policy.arn
}
resource "aws_ses_configuration_set" "create_ses_configuration_set" {
name = var.ses_config_set_name
}
resource "aws_ses_event_destination" "ses_firehose_destination" {
name = "event-destination-kinesis"
configuration_set_name = aws_ses_configuration_set.create_ses_configuration_set.name
enabled = true
matching_types = ["send", "reject", "bounce", "complaint", "delivery", "renderingFailure"]
depends_on = [aws_iam_role.ses_firehose_destination_role]
kinesis_destination {
stream_arn = "<ARN OF AWS FIREHOSE DELIVERY STREAM>"
role_arn = aws_iam_role.ses_firehose_destination_role.arn
}
}
You might need to look at your Firehose data source. If the source is a Kinesis Data Stream, it will not work; it only works when the Kinesis Firehose uses Direct PUT (or another non-stream source). I ran into this issue while setting up my Kinesis Firehose to Datadog as well. I hope that this helps.
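For reference, a delivery stream that SES can publish to would look roughly like the sketch below; the stream name and the role and bucket references are assumptions, and the key point is simply the absence of a kinesis_source_configuration block (i.e. the stream is a Direct PUT source):
resource "aws_kinesis_firehose_delivery_stream" "ses_events" {
  name        = "ses-events"  # hypothetical name
  destination = "extended_s3" # any destination works; what matters is that there is no kinesis_source_configuration block
  extended_s3_configuration {
    role_arn   = aws_iam_role.firehose_delivery_role.arn # hypothetical role that lets Firehose write to the bucket
    bucket_arn = aws_s3_bucket.ses_event_archive.arn     # hypothetical bucket
  }
}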
I found the same issue and was able to resolve it with a slight workaround.
The issue is likely due to the time it takes AWS to propagate the IAM role to all regions. As IAM roles are global, the role is created first in a 'random' region and then propagated to the other regions. So if it is not created first in your region it may take some time to propagate, and you will get this error if the SES event destination is created before the IAM role has propagated.
It does not help to add a depends_on clause, as Terraform (correctly?) thinks the IAM role has been created; it just has not been propagated to your region yet.
The solution that worked for me was to create an IAM role that grants the SES service access to the "sts:AssumeRole" action and grants the "firehose:PutRecordBatch" action on the Firehose. When applying the Terraform, I did a targeted apply first for only this role, waited a minute (to allow the IAM role to propagate), and then did a normal terraform apply to complete.
In your example the command will look something like this:
terraform apply --target aws_iam_role_policy_attachment.ses_firehose_destination_role_att
terraform apply
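An alternative to the two-step apply (not from the answer above, just a sketch) is to add an explicit delay with the hashicorp/time provider and have the event destination depend on it; the 30-second duration is an assumption:
# Give IAM time to propagate before SES tries to assume the role
resource "time_sleep" "wait_for_iam_propagation" {
  depends_on      = [aws_iam_role_policy_attachment.ses_firehose_destination_role_att]
  create_duration = "30s" # assumed value, adjust as needed
}
The aws_ses_event_destination resource would then use depends_on = [time_sleep.wait_for_iam_propagation] instead of depending on the role directly.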
I've looked at this question and this one but I'm not able to deploy a role into a child account which allows an ECS task running in the parent account to AssumeRole into it.
Terraform code:
data "aws_iam_policy_document" "cross-account-assume-role-child" {
statement {
actions = ["sts:AssumeRole"]
principals {
type = "AWS"
identifiers = [
"arn:aws:sts::${var.master_account_ID}:assumed-role/${var.cross_account_role_name}"
]
}
}
}
When I run Terraform, the plan succeeds, but the apply fails with the following error:
Error: failed creating IAM Role (ECS-cross-account-child-role):
MalformedPolicyDocument: Invalid principal in policy:
"AWS":"arn:aws:sts::<AWS Account ID>:assumed-role/ECS-cross-account-master-role"
I get the same error if I try to manually update the policy like above in the AWS console so this isn't due to terraform.
What am I doing wrong?
The ARN you need to specify in the policy is that of the IAM role, not of the assumed-role credentials:
arn:aws:iam::${var.master_account_ID}:role/${var.cross_account_role_name}
i.e. iam and role instead of sts and assumed-role.
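Applied to the policy document from the question, that gives something like:
data "aws_iam_policy_document" "cross-account-assume-role-child" {
  statement {
    actions = ["sts:AssumeRole"]
    principals {
      type = "AWS"
      identifiers = [
        # the IAM role ARN, not the assumed-role STS ARN
        "arn:aws:iam::${var.master_account_ID}:role/${var.cross_account_role_name}"
      ]
    }
  }
}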
My objective is to set up an IAM role that a certain IAM user can assume. After the creation of the role, I would like to come back later and modify it by adding an external ID to establish the trust relationship. Let me illustrate with an example:
Let's say I want to create a role:
resource "aws_iam_role" "happy_role" {
name = "happy-role"
assume_role_policy = data.aws_iam_policy_document.happy_assume_rule_policy.json
}
Let's also assume that happy_assume_role_policy looks something like:
data "aws_iam_policy_document" "happy_assume_role_policy" {
statement {
effect = "Allow"
actions = ["sts:AssumeRole"]
principals {
type = "AWS"
identifiers = [var.some_iam_user_arn]
}
}
}
Now, I will use the created role to create an external integration. But once I am done creating that integration, I want to go back to the role I originally created and modify its assume role policy. So now I want to add a condition to the assume role policy to make it look like:
data "aws_iam_policy_document" "happy_assume_role_policy" {
statement {
effect = "Allow"
actions = ["sts:AssumeRole"]
principals {
type = "AWS"
identifiers = [var.snowflake_iam_user_arn]
}
condition {
test = "StringEquals"
values = [some_integration.integration.external_id]
variable = "sts:ExternalId"
}
}
}
In other words, my workflow should be like:
Create role without assume conditions
Create an integration with that role
Take the ID from the created integration and go back to the created role and add a condition on it
Edit:
By "integration" I mean something like this. Once an Integration is created, there is an outputted ID, and then I need to take that ID and feed it back to the Assume Role I originally created. That should happen everytime I add a new integration.
I first tried to create two IAM roles, one for managing the integration creation and another for managing the integration itself. That ran without circular-reference errors; however, I was not able to establish a connection from the storage to the database, as it needs to be the same IAM role that creates and manages the integration.
This is what I ended up doing (still not what I'd call an accepted way to do it, IMO). I created the role (with targeting) like:
resource "aws_iam_role" "happy_role" {
name = "happy-role"
assume_role_policy = data.aws_iam_policy_document.basic_policy.json
}
I used a basic assume role policy (without conditions). Then, on the next run, I applied without targeting and it worked.
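In this example the two-pass apply would look something like:
terraform apply --target aws_iam_role.happy_role
terraform apply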
I followed the approach mentioned here: How to create a Snowflake Storage Integration with AWS S3 with Terraform?
As part of the Storage Integration creation, just provide the role ARN, constructed manually before the role resource is created; Terraform won't complain. Then create the role with the assume policy referring to the External ID and User ARN exported by the Storage Integration.
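A rough sketch of that ordering, assuming the Snowflake provider's snowflake_storage_integration resource and its exported storage_aws_iam_user_arn / storage_aws_external_id attributes (the names, variables, and bucket location here are assumptions, so check your provider's documentation):
locals {
  # Role ARN constructed by hand; the role itself does not exist yet
  snowflake_role_arn = "arn:aws:iam::${var.account_id}:role/snowflake-integration-role"
}
resource "snowflake_storage_integration" "this" {
  name                      = "S3_INTEGRATION"
  type                      = "EXTERNAL_STAGE"
  storage_provider          = "S3"
  storage_allowed_locations = ["s3://my-bucket/some/path/"] # assumed location
  storage_aws_role_arn      = local.snowflake_role_arn      # hand-built ARN; Terraform won't complain that the role doesn't exist yet
}
data "aws_iam_policy_document" "snowflake_assume_role" {
  statement {
    actions = ["sts:AssumeRole"]
    principals {
      type        = "AWS"
      identifiers = [snowflake_storage_integration.this.storage_aws_iam_user_arn]
    }
    condition {
      test     = "StringEquals"
      variable = "sts:ExternalId"
      values   = [snowflake_storage_integration.this.storage_aws_external_id]
    }
  }
}
resource "aws_iam_role" "snowflake_integration_role" {
  name               = "snowflake-integration-role" # must match the hand-built ARN above
  assume_role_policy = data.aws_iam_policy_document.snowflake_assume_role.json
}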
I currently have two (maybe conflicting) S3 bucket policies, which show a permanent difference in Terraform. Before I show parts of the code, I will try to give an overview of the structure.
I am currently using a module, which:
Takes IAM Role & an S3 Bucket as inputs
Attaches S3 Bucket policy to the inputted role
Attaches S3 Bucket (allowing VPC) policy to the inputted S3 bucket
I have created some code (a snippet, not the full code) to illustrate what this looks like for the module.
The policies look like:
# S3 Policy to be attached to the ROLE
data "aws_iam_policy_document" "foo_iam_s3_policy" {
statement {
effect = "Allow"
resources = ["${data. s3_bucket.s3_bucket.arn}/*"]
actions = ["s3:GetObject", "s3:GetObjectVersion"]
}
statement {
effect = "Allow"
resources = [data.s3_bucket.s3_bucket.arn]
actions = ["s3:*"]
}
}
# VPC Policy to be attached to the BUCKET
data "aws_iam_policy_document" "foo_vpc_policy" {
statement {
sid = "VPCAllow"
effect = "Allow"
resources = [data.s3_bucket.s3_bucket.arn, "${data.s3_bucket.s3_bucket.arn}/*"]
actions = ["s3:GetObject", "s3:GetObjectVersion"]
condition {
test = "StringEquals"
variable = "aws:SourceVpc"
values = [var.foo_vpc]
}
principals {
type = "*"
identifiers = ["*"]
}
}
}
The policy attachments look like:
# Turn policy into a resource to be able to use ARN
resource "aws_iam_policy" "foo_iam_policy_s3" {
name = "foo-s3-${var.s3_bucket_name}"
description = "IAM policy for foo on s3"
policy = data.aws_iam_policy_document.foo_iam_s3_policy.json
}
# Attaches s3 bucket policy to IAM Role
resource "aws_iam_role_policy_attachment" "foo_attach_s3_policy" {
role = data.aws_iam_role.foo_role.name
policy_arn = aws_iam_policy.foo_iam_policy_s3.arn
}
# Attach foo vpc policy to bucket
resource "s3_bucket_policy" "foo_vpc_policy" {
bucket = data.s3_bucket.s3_bucket.id
policy = data.aws_iam_policy_document.foo_vpc_policy.json
}
Now let's step outside the module, to where the S3 bucket (the one I mentioned will be passed into the module) is created and where another policy needs to be attached to it (the S3 bucket). So outside of the module, we:
Provide an S3 bucket to the aforementioned module as input (alongside the IAM Role)
Create a policy to allow some IAM Role to put objects in the aforementioned bucket
Attach the created policy to the bucket
The policy looks like:
# Create policy to allow bar to put objects in the bucket
data "aws_iam_policy_document" "bucket_policy_bar" {
statement {
sid = "Bar IAM access"
effect = "Allow"
resources = [module.s3_bucket.bucket_arn, "${module. s3_bucket.bucket_arn}/*"]
actions = ["s3:PutObject", "s3:GetObject", "s3:ListBucket"]
principals {
type = "AWS"
identifiers = [var.bar_iam]
}
}
}
And its attachment looks like:
# Attach Bar bucket policy
resource "s3_bucket_policy" "attach_s3_bucket_bar_policy" {
bucket = module.s3_bucket.bucket_name
policy = data.aws_iam_policy_document.bucket_policy_bar.json
}
(For more context: basically, foo is a database that needs the VPC bucket policy and the S3 policy attached to its role in order to operate on the bucket, and bar is an external service that needs to write data to the bucket.)
What is going wrong
When I try to plan/apply, Terraform shows that there is always a change, and shows an overwrite between the S3 bucket policy of bar (bucket_policy_bar) and the VPC policy attached inside the module (foo_vpc_policy).
In fact the error I am getting kind of sounds like what is described here:
The usage of this resource conflicts with the
aws_iam_policy_attachment resource and will permanently show a
difference if both are defined.
But I am attaching policies to S3 and not to a role, so I am not sure if this warning applies to my case.
Why are my policies conflicting? And how can I avoid this conflict?
EDIT:
For clarification, I have a single S3 bucket to which I need to attach two policies: one that allows VPC access (foo_vpc_policy, which gets created inside the module) and another (bucket_policy_bar) that allows an IAM role to put objects in the bucket.
there is always a change
That is correct. aws_s3_bucket_policy sets a new policy on the bucket; it does not add new statements to the existing one.
Since you are invoking aws_s3_bucket_policy twice for the same bucket, first in the module.s3_bucket module and then in the parent module (I guess), the parent module will simply attempt to set a new policy on the bucket. When you run terraform plan/apply again, Terraform will detect that the policy defined in module.s3_bucket is different and will try to update it. So you basically end up in a circle, where each apply changes the bucket policy to a new one.
I'm not aware of a Terraform resource that would allow you to update (i.e. add new statements to) an existing bucket policy, so I would try to refactor your design so that you invoke aws_s3_bucket_policy only once, with all the statements that you require.
Thanks to the tip from Marcin, I was able to resolve the issue by making the attachment of the policy inside the module optional, like so:
# Attach foo vpc policy to bucket
resource "s3_bucket_policy" "foo_vpc_policy" {
count = var.attach_vpc_policy ? 1 : 0 # Only attach VPC Policy if required
bucket = data.s3_bucket.s3_bucket.id
policy = data.aws_iam_policy_document.foo_vpc_policy.json
}
In all cases, the policy has been added as an output of the module:
# Outputting only the statement, as it will later be merged with other policies
output "foo_vpc_policy_json" {
description = "VPC Allow policy json (to be later merged with other policies that relate to the bucket outside of the module)"
value = data.aws_iam_policy_document.foo_vpc_policy.json
}
For the cases where attaching the policy needed to be deferred (to attach it together with another policy), I inlined the policy via source_json:
data "aws_iam_policy_document" "bucket_policy_bar" {
# Adding the VPC Policy JSON as a base for this Policy (docs: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/iam_policy_document)
source_json = module.foor_.foo_vpc_policy_json # here we add the statement that has
statement {
sid = "Bar IAM access"
effect = "Allow"
resources = [module.s3_bucket_data.bucket_arn, "${module.s3_bucket_data.bucket_arn}/*"]
actions = ["s3:PutObject", "s3:GetObject", "s3:ListBucket"]
principals {
type = "AWS"
identifiers = [var.bar_iam]
}
}
}
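The merged document is then attached once, outside the module, with a single bucket policy resource, along these lines (the bucket_name output is an assumption):
# Single bucket policy containing both the VPC statement (pulled in via source_json) and the Bar statement
resource "aws_s3_bucket_policy" "attach_s3_bucket_bar_policy" {
  bucket = module.s3_bucket_data.bucket_name
  policy = data.aws_iam_policy_document.bucket_policy_bar.json
}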
I have just moved to a multi-account setup using Control Tower and am having a nightmare using Terraform to deploy resources in different accounts.
My (simplified) account structure is:
|--Master
|--management (backends etc)
|--images (s3, ecr)
|--dev
|--test
As a simplified experiment, I am trying to create an ECR repository in the images account. I think I need to create a policy to enable role switching and provide permissions within the target account. For now I am being heavy-handed and just trying to switch to Admin access. The AWSAdministratorAccess role is created by Control Tower during configuration.
provider "aws" {
region = "us-west-2"
version = "~> 3.1"
}
data "aws_iam_group" "admins" { // want to attach policy to admins to switch role
group_name = "administrators"
}
// Images account
resource "aws_iam_policy" "images-admin" {
name = "Assume-Role-Images_Admin"
description = "Allow assuming AWSAdministratorAccess role on Images account"
policy = <<EOP
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"sts:AssumeRole"
],
"Resource": "arn:aws:iam::<Images_Account_ID>:role/AWSAdministratorAccess"
}
]
}
EOP
}
resource "aws_iam_group_policy_attachment" "assume-role-images-admin" {
group = data.aws_iam_group.admins.group_name
policy_arn = aws_iam_policy.images-admin.arn
}
Having deployed this stack I then attempt to deploy another stack which creates a resource in the images account.
provider "aws" {
region = var.region
version = "~>3.1"
}
provider "aws" {
alias = "images"
region = var.region
version = "~> 3.1"
assume_role {
role_arn = "arn:aws:iam::<Images_Account_ID>:role/AWSAdministratorAccess"
}
}
resource "aws_ecr_repository" "boot-images" {
provider = aws.images
name = "boot-images"
}
On deployment I got:
> Error: error configuring Terraform AWS Provider: IAM Role (arn:aws:iam::*********:role/AWSAdministratorAccess) cannot be assumed.
There are a number of possible causes of this - the most common are:
* The credentials used in order to assume the role are invalid
* The credentials do not have appropriate permission to assume the role
* The role ARN is not valid
Error: NoCredentialProviders: no valid providers in chain. Deprecated.
For verbose messaging see aws.Config.CredentialsChainVerboseErrors
First one: the creds provided are from the master account, which always worked in a single-account environment.
Second: that's what I think has been achieved by attaching the policy.
Third: less sure on this, but AWSAdministratorAccess definitely exists in the account, I think the ARN format is correct, and while AWS Single Sign-On refers to it as a Permission Set, the console also describes it as a role.
I found Deploying to multiple AWS accounts with Terraform? which was helpful, but I am missing something here.
I am also at a loss as to how to extend this idea to deploying an S3 remote backend into my "management" account.
Terraform version 0.12.29
Turns out there were a couple of issues here:
Profile
The credentials profile was incorrect. Setting the correct creds in environment variables let me run a simple test that had failed when just using the creds file. There is still an issue here I don't understand, as updating the creds file also failed, but I have a system that works.
AWS created roles
While my assumption was correct that the Permission Sets are defined as roles, they have a trust relationship which was not extended to my master admin user (my bad), AND it cannot be amended because it was created automatically by AWS and is locked down.
Manually grant permissions
So while I can grant a group permission to assume a role programmatically via Terraform, I need to manually create a role in the target account which extends trust, and hence permissions, to the master account.
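The role in the target (images) account needs a trust policy along these lines; whether it is created by hand in the console or with Terraform using credentials for that account, the role name and provider alias below are assumptions:
data "aws_iam_policy_document" "trust_master_account" {
  statement {
    actions = ["sts:AssumeRole"]
    principals {
      type        = "AWS"
      identifiers = ["arn:aws:iam::<Master_Account_ID>:root"] # or a specific user/role in the master account
    }
  }
}
resource "aws_iam_role" "images_admin" {
  provider           = aws.images                   # hypothetical provider alias pointing at the images account
  name               = "TerraformCrossAccountAdmin" # hypothetical name
  assume_role_policy = data.aws_iam_policy_document.trust_master_account.json
}
The role also needs whatever permission policies you want attached (e.g. AdministratorAccess) for the master account credentials to actually do anything after assuming it.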
In my own experience, and considering you already have a working AWS infrastructure, I'd rule out and move away from Control Tower and look into doing the same things with CloudFormation StackSets. They let you target OUs or individual accounts.
Control Tower has been recommended to me several times, but with an AWS ecosystem of more than 25 accounts running production workloads, I am very reluctant to even try it. It's great when starting from scratch, I guess, but not when you already have a decent number of workloads and accounts in AWS.
I'm trying to deal with some Terraform scripts that are held across different git repos.
One of the repos is used for provisioning EKS infrastructure and the other holds generic tf scripts.
In the generic repo, I've created a custom IAM policy. I want this to be attached to the IAM role when the EKS infrastructure repo is applied. Is there a way to do this?
resource "aws_iam_role_policy_attachment" "eks-worker-node-data-access" {
policy_arn = "arn:aws:iam::aws:policy/base-data-capture"
role = "${module.eks-control-plane.worker_node_role_name}"
}
The policy_arn field is set to the name of the policy that would be created; however, this fails as it hasn't been imported...
What I'd like to do is something like
resource "aws_iam_role_policy_attachment" "eks-worker-node-data-access" {
source = "my-git-repo.policy"
policy_arn = "arn:aws:iam::aws:policy/base-data-capture"
role = "${module.eks-control-plane.worker_node_role_name}"
}
I suppose the obvious answer would be to just have the policy created in the same tf folder, but all other policies have been created in the generic repo.
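One possible pattern (a sketch, not a confirmed answer) is to look the policy up with a data source once the generic repo has created it; note that a customer-managed policy lives under your account ID, not the aws namespace used above, and the policy name here is assumed:
# Look up the customer-managed policy created by the generic repo
data "aws_caller_identity" "current" {}
data "aws_iam_policy" "base_data_capture" {
  arn = "arn:aws:iam::${data.aws_caller_identity.current.account_id}:policy/base-data-capture"
}
resource "aws_iam_role_policy_attachment" "eks-worker-node-data-access" {
  policy_arn = data.aws_iam_policy.base_data_capture.arn
  role       = module.eks-control-plane.worker_node_role_name
}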