Create multiple IAM roles with different policies in Terraform

Good day everyone. I am new to Terraform and stuck on a problem. I want to create multiple AWS IAM roles from input variables and assign a different policy to each role. I am trying to do this with a for_each loop, but I cannot work out how to supply the different policies. My current approach uses a map variable. Here is my test code:
terraform {
required_providers {
aws = {
source = "hashicorp/aws"
}
}
}
provider "aws" {
region = "region"
shared_credentials_file = "path_to_creds"
profile = "profile_name"
}
variable "roles" {
type = map
default = {
# These bucket names should be different in each policy
"TestRole1" = "module.s3_bucket_raw.s3_bucket_arn, module.s3_bucket_bronze.s3_bucket_arn"
"TestRole2" = "module.s3_bucket_bronze.s3_bucket_arn, module.s3_bucket_silver.s3_bucket_arn"
}
}
resource "aws_iam_role" "example" {
for_each = var.roles
name = "${each.key}"
assume_role_policy = jsonencode({
Version = "2012-10-17"
Statement = [
{
Action = [
"s3:PutObject",
"s3:GetObject",
"s3:ListBucket",
"s3:GetBucketAcl",
"s3:DeleteObject",
"s3:GetBucketLocation"
]
Effect = "Allow"
Resource = [
each.value,
"${each.value}/*"
]
}
]
})
}
As you can see, in the Resource block of the policy the bucket reference will be each.value, which expands to
module.s3_bucket_raw.s3_bucket_arn, module.s3_bucket_bronze.s3_bucket_arn
That part is fine, but I also want to produce
module.s3_bucket_raw.s3_bucket_arn/*, module.s3_bucket_bronze.s3_bucket_arn/*
which is not possible with my approach, because "${each.value}/*" translates into
module.s3_bucket_raw.s3_bucket_arn, module.s3_bucket_bronze.s3_bucket_arn/*
with the /* appended only to the last bucket. I hope an expert can spare a few minutes for me; thank you all in anticipation.

I would first either update the variable definition to store the buckets as a list or, if you can't change the variable definition, add a local that converts it to a nicer object before creating the resource. This variable would be nicer to work with:
variable "roles" {
type = map(list(string))
default = {
"TestRole1" = ["module.s3_bucket_raw.s3_bucket_arn", "module.s3_bucket_bronze.s3_bucket_arn"]
"TestRole2" = ["module.s3_bucket_bronze.s3_bucket_arn", "module.s3_bucket_silver.s3_bucket_arn"]
}
}
Then you can use a for loop in the resource like this:
resource "aws_iam_role" "example" {
for_each = var.roles
name = "${each.key}"
assume_role_policy = jsonencode({
Version = "2012-10-17"
Statement = [
{
Action = [
"s3:PutObject",
"s3:GetObject",
"s3:ListBucket",
"s3:GetBucketAcl",
"s3:DeleteObject",
"s3:GetBucketLocation"
]
Effect = "Allow"
Resource = flatten([for bucket in each.value: [bucket, "${bucket}/*"]]),
}]
})
}
I prefer the aws_iam_policy_document data source for complex policies but I'll leave that to you.
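For reference, here is a minimal sketch of the same statement built with that data source, assuming the list-valued roles variable above and Terraform 0.13+ (so for_each works on data sources); the name example is illustrative:
data "aws_iam_policy_document" "example" {
  for_each = var.roles

  statement {
    effect = "Allow"
    actions = [
      "s3:PutObject",
      "s3:GetObject",
      "s3:ListBucket",
      "s3:GetBucketAcl",
      "s3:DeleteObject",
      "s3:GetBucketLocation",
    ]
    # Same trick as above: each bucket ARN plus its object paths.
    resources = flatten([for bucket in each.value : [bucket, "${bucket}/*"]])
  }
}
Each role could then attach data.aws_iam_policy_document.example[each.key].json through an aws_iam_role_policy or aws_iam_policy resource.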

Pass variables into Terraform IAM Policy Document

I'm using Terraform v0.12.9 and I'm trying to conditionally create bucket policies with the custom policy stored as an IAM policy document.
data "aws_iam_policy_document" "my_policy" {
statement {
sid = "IPALLOW"
effect = "Deny"
actions = ["s3:*"]
resources = [
"arn:aws:s3:::${var.my_bucket}/*",
"arn:aws:s3:::${var.my_bucket}"
]
principals {
type = "AWS"
identifiers = ["*"]
}
condition {
test = "NotIpAddress"
variable = "aws:SourceIp"
values = [
"${concat(var.ip_one,
var.ip_two,
var.ip_three)}"
]
}
}
}
resource "aws_s3_bucket_policy" "my-bucket-policy" {
count = length(var.buckets)
bucket = element(values(var.buckets[count.index]), 0)
policy = (
element(values(var.buckets[count.index]), 0) == "bar_bucket" ?
data.aws_iam_policy_document.my_policy.json : <<POLICY
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "HTTP",
"Effect": "Deny",
"Principal": "*",
"Action": "*",
REST OF POLICY REDACTED
POLICY
)
}
However, this fails and throws an error:
Inappropriate value for attribute "values": element 0: string required.
But when I hardcode the ip addresses in the policy like this,
["100.0.0.100","100.0.0.101","100.0.0.102/24","100.0.0.103/24","100.0.0.104"]
It works.
Is there a way to pass the IP addresses as a variable in the IAM policy document?
I've tried jsonencode on the values, but it adds a bunch of \ characters, which doesn't work.
These are what the variables look like.
variable "ip_one" {
type = list(string)
default = [
"100.0.0.100",
"100.0.0.101"
]
}
variable "ip_two" {
type = list(string)
default = [
"100.0.0.102/24",
"100.0.0.103/24"
]
}
variable "ip_three" {
type = list(string)
default = [
"100.0.0.104"
]
}
As I see it, there are a couple of problems:
The IPs have to have the subnet mask as well, i.e., you cannot mix IP addresses with and without subnet mask. For example, in ip_one you are using 100.0.0.100 while in ip_two you have 100.0.0.102/24.
The variables are already lists of strings, so concat already gives you the list that values expects; wrapping it in "${ ... }" inside another pair of brackets nests that list inside a list, so element 0 ends up being a list rather than a string.
What I would suggest is using a local variable for any value manipulation and then using the local variable in the values argument. So you could do the following:
locals {
ips = concat(var.ip_one, var.ip_two, var.ip_three)
}
data "aws_iam_policy_document" "my_policy" {
statement {
sid = "IPALLOW"
effect = "Deny"
actions = ["s3:*"]
resources = [
"arn:aws:s3:::${var.my_bucket}/*",
"arn:aws:s3:::${var.my_bucket}"
]
principals {
type = "AWS"
identifiers = ["*"]
}
condition {
test = "NotIpAddress"
variable = "aws:SourceIp"
values = local.ips
}
}
}
Also, if you want to limit access to only a set of IPs or subnets, you should definitely consider adding the subnet mask to the IPs in ip_one and ip_three. If you do not, access may be allowed from IP addresses you did not intend to allow.
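If it helps, here is a minimal sketch of that normalization done inside the same local, under the assumption that bare addresses should be treated as /32 (it would replace the ips definition above):
locals {
  # Hypothetical normalization: give every entry an explicit mask,
  # turning bare addresses into /32 CIDRs.
  ips = [
    for ip in concat(var.ip_one, var.ip_two, var.ip_three) :
    length(split("/", ip)) > 1 ? ip : "${ip}/32"
  ]
}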
Can you try something like
values = [
join(",", concat(var.ip_one,
var.ip_two,
var.ip_three))
]

how to configure s3 bucket to allow aws application load balancer (not classic) to use it? currently throws 'access denied'

I have an application load balancer and I'm trying to enable logging, terraform code below:
resource "aws_s3_bucket" "lb-logs" {
bucket = "yeo-messaging-${var.environment}-lb-logs"
}
resource "aws_s3_bucket_acl" "lb-logs-acl" {
bucket = aws_s3_bucket.lb-logs.id
acl = "private"
}
resource "aws_lb" "main" {
name = "main"
internal = false
load_balancer_type = "application"
security_groups = [aws_security_group.public.id]
enable_deletion_protection = false
subnets = [aws_subnet.public.id, aws_subnet.public-backup.id]
access_logs {
bucket = aws_s3_bucket.lb-logs.bucket
prefix = "main-lb"
enabled = true
}
}
unfortunately I can't apply this due to:
Error: failure configuring LB attributes: InvalidConfigurationRequest: Access Denied for bucket: xxx-lb-logs. Please check S3bucket permission
│ status code: 400, request id: xx
I've seen a few SO threads and documentation, but unfortunately it all applies to the classic load balancer, particularly the data source that lets you get the service account of the load balancer.
I have found some policy info on how to apply the right permissions to a service account, but I can't seem to find how to apply the service account to the LB itself.
Example:
data "aws_iam_policy_document" "allow-lb" {
statement {
principals {
type = "AWS"
identifiers = [data.aws_elb_service_account.main.arn]
}
actions = [
"s3:GetObject",
"s3:ListBucket",
"s3:PutObject"
]
resources = [
aws_s3_bucket.lb-logs.arn,
"${aws_s3_bucket.lb-logs.arn}/*",
]
}
}
resource "aws_s3_bucket_policy" "allow-lb" {
bucket = aws_s3_bucket.lb-logs.id
policy = data.aws_iam_policy_document.allow-lb.json
}
But this is all moot because data.aws_elb_service_account.main.arn is only for classic LB.
EDIT:
Full code with attempt from answer below:
resource "aws_s3_bucket" "lb-logs" {
bucket = "yeo-messaging-${var.environment}-lb-logs"
}
resource "aws_s3_bucket_acl" "lb-logs-acl" {
bucket = aws_s3_bucket.lb-logs.id
acl = "private"
}
data "aws_iam_policy_document" "allow-lb" {
statement {
principals {
type = "Service"
identifiers = ["logdelivery.elb.amazonaws.com"]
}
actions = [
"s3:PutObject"
]
resources = [
"${aws_s3_bucket.lb-logs.arn}/*"
]
condition {
test = "StringEquals"
variable = "s3:x-amz-acl"
values = [
"bucket-owner-full-control"
]
}
}
}
resource "aws_s3_bucket_policy" "allow-lb" {
bucket = aws_s3_bucket.lb-logs.id
policy = data.aws_iam_policy_document.allow-lb.json
}
resource "aws_lb" "main" {
name = "main"
internal = false
load_balancer_type = "application"
security_groups = [aws_security_group.public.id]
enable_deletion_protection = false
subnets = [aws_subnet.public.id, aws_subnet.public-backup.id]
access_logs {
bucket = aws_s3_bucket.lb-logs.bucket
prefix = "main-lb"
enabled = true
}
}
The bucket policy you need to use is provided in the official documentation for access logs on Application Load Balancers.
{
"Effect": "Allow",
"Principal": {
"Service": "logdelivery.elb.amazonaws.com"
},
"Action": "s3:PutObject",
"Resource": "arn:aws:s3:::bucket-name/prefix/AWSLogs/your-aws-account-id/*",
"Condition": {
"StringEquals": {
"s3:x-amz-acl": "bucket-owner-full-control"
}
}
}
Notice that bucket-name, prefix, and your-aws-account-id need to be replaced in that policy with your actual values.
In Terraform:
data "aws_iam_policy_document" "allow-lb" {
statement {
principals {
type = "Service"
identifiers = ["logdelivery.elb.amazonaws.com"]
}
actions = [
"s3:PutObject"
]
resources = [
"${aws_s3_bucket.lb-logs.arn}/*"
]
condition {
test = "StringEquals"
variable = "s3:x-amz-acl"
values = [
"bucket-owner-full-control"
]
}
}
}
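If you want to scope the statement to the documented log path rather than the whole bucket, a hedged option (assuming the main-lb prefix used above) is to build that path explicitly:
# Hypothetical narrowing of the resources list above to the documented
# bucket-name/prefix/AWSLogs/account-id/* path.
data "aws_caller_identity" "current" {}

locals {
  lb_log_objects = "${aws_s3_bucket.lb-logs.arn}/main-lb/AWSLogs/${data.aws_caller_identity.current.account_id}/*"
}
Then resources = [local.lb_log_objects] would replace the broader "${aws_s3_bucket.lb-logs.arn}/*" entry.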

how to create an iam role with policy that grants access to the SQS created

I created two SQS queues and their dead-letter queues with the code in my main.tf, which calls the SQS/main.tf module. I would like to destroy and create them again, but this time I also want to call IAM/iam_role.tf to create one IAM role together with the policy documents. I don't know how to specify this in my main.tf so that the resources section of the data policy document contains both of the created queues, meaning "CloudTrail_SQS_Data_Event" and "cloudTrail_SQS_Management_Event", and so that the S3 resource ARNs give the role access to the two buckets used for the SQS queues, meaning "cloudtrail-management-event-logs" and "aws-cloudtrail143-sqs-logs".
SQS/main.tf
resource "aws_sqs_queue" "CloudTrail_SQS"{
name = var.sqs_queue_name
redrive_policy = jsonencode({
deadLetterTargetArn = aws_sqs_queue.CloudTrail_SQS_DLQ.arn
maxReceiveCount = 4
})
}
resource "aws_sqs_queue" "CloudTrail_SQS_DLQ"{
name = var.dead_queue_name
IAM/iam_role.tf
resource "aws_iam_role" "access_role" {
name = var.role_name
description = var.description
assume_role_policy = data.aws_iam_policy_document.trust_relationship.json
}
trust policy
data "aws_iam_policy_document" "trust_relationship" {
statement {
sid = "AllowAssumeRole"
actions = ["sts:AssumeRole"]
principals {
type = "AWS"
identifiers = [var.account_id]
}
condition {
test = "StringEquals"
variable = "sts:ExternalId"
values = [var.external_id]
}
}
}
data "aws_iam_policy_document" "policy_document"{
statement{
actions = [
"sqs:GetQueueUrl",
"sqs:ReceiveMessage",
"sqs:SendMessage"
]
effect = "Allow"
resources = aws_sqs_queue.CloudTrail_SQS.arn
}
statement {
actions = ["sqs:ListQueues"]
effect = "Allow"
resources = ["*"]
}
statement {
actions = ["s3:GetObject", "s3:GetBucketLocation"]
resources = [
"arn:aws:s3:::${var.cloudtrail_event_log_bucket_name}/*"
]
effect = "Allow"
}
statement {
actions = ["s3:ListBucket"]
resources = [
"arn:aws:s3:::${var.cloudtrail_event_log_bucket_name}"
]
effect = "Allow"
}
statement {
actions = ["kms:Decrypt", "kms:GenerateDataKey","kms:DescribeKey" ]
effect = "Allow"
resources = [var.kms_key_arn]
}
}
main.tf
module "data_events"{
source = "../SQS"
cloudtrail_event_log_bucket_name = "aws-cloudtrail143-sqs-logs"
sqs_queue_name = "CloudTrail_SQS_Data_Event"
dead_queue_name = "CloudTrail_DLQ_Data_Event"
}
module "management_events"{
source = "../SQS"
cloudtrail_event_log_bucket_name = "cloudtrail-management-event-logs"
sqs_queue_name = "cloudTrail_SQS_Management_Event"
dead_queue_name = "cloudTrail_DLQ_Management_Event"
}
The role would be created as shown below. But your question has so many mistakes and so much missing information that it's impossible to provide full, working code, so treat the code below as a template you need to adjust for your use.
resource "aws_iam_role" "access_role" {
name = var.role_name
description = var.description
assume_role_policy = jsonencode({
Version = "2012-10-17"
Statement = [
{
Action = "sts:AssumeRole"
Effect = "Allow"
Sid = ""
Principal = {
Service = "ec2.amazonaws.com"
}
},
]
})
inline_policy {
name = "allow-access-to-s3-sqs"
policy = data.aws_iam_policy_document.policy_document.json
}
}
data "aws_iam_policy_document" "policy_document"{
statement{
actions = [
"sqs:GetQueueUrl",
"sqs:ReceiveMessage",
"sqs:SendMessage"
]
effect = "Allow"
resources = [
module.data_events.sqs.arn,
module.management_events.sqs.arn,
]
}
statement {
actions = ["sqs:ListQueues"]
effect = "Allow"
resources = ["*"]
}
statement {
actions = ["s3:GetObject", "s3:GetBucketLocation"]
resources = [
"arn:aws:s3:::aws-cloudtrail143-sqs-logs/*"
"arn:aws:s3:::cloudtrail-management-event-logs/*"
]
effect = "Allow"
}
statement {
actions = ["s3:ListBucket"]
resources = [
"arn:aws:s3:::aws-cloudtrail143-sqs-logs",
"arn:aws:s3:::cloudtrail-management-event-logs"
]
effect = "Allow"
}
statement {
actions = ["kms:Decrypt", "kms:GenerateDataKey","kms:DescribeKey" ]
effect = "Allow"
resources = [var.kms_key_arn]
}
}
You can use Terraform outputs and data sources for this. In this case, add outputs for the queue ARNs to the SQS module, then read them in the IAM folder (as module outputs or via data sources) and use them in the policy.
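As a minimal sketch of that wiring, assuming the module.data_events.sqs.arn reference style used in the answer above, the SQS module could expose the queue as an output (the file name outputs.tf is illustrative):
# SQS/outputs.tf
output "sqs" {
  description = "The CloudTrail queue created by this module"
  value       = aws_sqs_queue.CloudTrail_SQS
}
The root module can then pass module.data_events.sqs.arn and module.management_events.sqs.arn into the IAM policy document.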

What's the correct terraform syntax to allow an external AWS role to subscribe and read from AWS SNS topic?

I want to create a policy so that a specific AWS role (not in the same account), let's say arn:aws:iam::123123123123:role/sns-read-role, can subscribe and receive messages from my SNS topic in AWS.
Based on the aws_sns_topic_policy example in the official Terraform docs, it would be:
resource "aws_sns_topic" "test" {
name = "my-topic-with-policy"
}
resource "aws_sns_topic_policy" "default" {
arn = aws_sns_topic.test.arn
policy = data.aws_iam_policy_document.sns_topic_policy.json
}
data "aws_iam_policy_document" "sns_topic_policy" {
statement {
actions = [
"SNS:Subscribe",
"SNS:Receive"
]
condition {
test = "StringEquals"
variable = "AWS:SourceOwner"
values = [
123123123123
]
}
effect = "Allow"
principals {
type = "AWS"
identifiers = ["*"]
}
resources = [
aws_sns_topic.test.arn
]
}
}
But this would translate to arn:aws:iam::123123123123:root and filter only on the account ID.
From the AWS documentation on JSON policy elements: Principal, I understand the AWS syntax is
"Principal": { "AWS": "arn:aws:iam::AWS-account-ID:role/role-name" }
Adding the role in the condition like this
condition {
test = "StringEquals"
variable = "AWS:SourceOwner"
values = [
arn:aws:iam::123123123123:role/sns-read-role
]
}
does not work.
It would make sense to add the role to the principal like this
principals {
type = "AWS"
identifiers = ["arn:aws:iam::123123123123:role/sns-read-role"]
}
When I try to subscribe, I get an AuthorizationError: "Couldn't subscribe to topic..."
Do I need the condition together with the principal? Why even bother with the condition if you can use the principal in the first place?
After some experimenting, I found that I don't need the condition. This works for me:
resource "aws_sns_topic" "test" {
name = "my-topic-with-policy"
}
resource "aws_sns_topic_policy" "default" {
arn = aws_sns_topic.test.arn
policy = data.aws_iam_policy_document.sns_topic_policy.json
}
data "aws_iam_policy_document" "sns_topic_policy" {
statement {
actions = [
"SNS:Subscribe",
"SNS:Receive"
]
effect = "Allow"
principals {
type = "AWS"
identifiers = [
"arn:aws:iam::123123123123:role/sns-read-role"
]
}
resources = [
aws_sns_topic.test.arn
]
}
}
In case you want to use parameters for your module:
principals {
type = "AWS"
identifiers = [
"${var.account_arn}:role/${var.role}"
]
}
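For completeness, the corresponding variables might be declared like this (the names account_arn and role simply mirror the snippet above and are assumptions about that module's interface):
variable "account_arn" {
  type        = string
  description = "IAM ARN prefix of the trusted account, e.g. arn:aws:iam::123123123123"
}

variable "role" {
  type        = string
  description = "Name of the role allowed to subscribe, e.g. sns-read-role"
}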

How to loop through a list of s3 buckets and create and attach a number of policies for each bucket?

I am learning about Terraform modules, and my objective is to build a module that takes in a collection of S3 buckets and then creates and applies some IAM policies to them.
What I have tried so far is to use some sort of for loop, generating the policies and attaching them to the buckets. For reference, my code looks something like this:
data "aws_iam_policy_document" "foo_iam_policy" {
statement {
sid = ""
effect = "Allow"
resources = [
for arn in var.s3_buckets_arn :
"${arn}/*"
]
actions = [
"s3:GetObject",
"s3:GetObjectVersion",
]
}
statement {
sid = ""
effect = "Allow"
resources = var.s3_buckets_arn
actions = ["s3:*"]
}
}
resource "aws_iam_policy" "foo_iam_policy" {
name = "foo-iam-policy"
path = "/"
description = "IAM policy for foo to access S3"
policy = data.aws_iam_policy_document.foo_iam_policy.json
}
data "aws_iam_policy_document" "foo_assume_rule_policy" {
statement {
effect = "Allow"
actions = [
"sts:AssumeRole"]
principals {
type = "AWS"
identifiers = [
var.foo_iam_user_arn]
}
condition {
test = "StringEquals"
values = var.foo_external_ids
variable = "sts:ExternalId"
}
}
}
resource "aws_iam_role" "foo_role" {
name = "foo-role"
assume_role_policy = data.aws_iam_policy_document.foo_assume_rule_policy.json
}
resource "aws_iam_role_policy_attachment" "foo_attach_s3_policy" {
role = aws_iam_role.foo_role.name
policy_arn = aws_iam_policy.foo_iam_policy.arn
}
data "aws_iam_policy_document" "foo_policy_source" {
for_each = toset(var.s3_buckets_arn)
// arn = each.key
statement {
sid = "VPCAllow"
effect = "Allow"
resources = [
each.key,
"${each.key}/*",
]
actions = [
"s3:*"]
condition {
test = "StringEquals"
variable = "aws:SourceVpc"
values = [
"vpc-01010101"]
}
principals {
type = "*"
identifiers = [
"*"]
}
}
}
I don't know if what I have tried makes much sense, or if there is a better way to loop through buckets and generate policies. My question is: what is the best practice for such cases where one wants to provide a list of buckets and loop through them to attach policies?
On a side note, I have encountered an error with my approach:
The "for_each" value depends on resource attributes that cannot be determined (Terraform)
To attach a bucket policy to a bucket, you should use aws_s3_bucket_policy, not aws_iam_policy_document. Also, if the buckets already exist, it would probably be better to fetch their data first using the aws_s3_bucket data source:
data "aws_s3_bucket" "selected" {
# s3_buckets_names is easier to use than s3_buckets_arns
for_each = toset(var.s3_buckets_names)
bucket = each.value
}
Then, you can iterate over the selected buckets and add your policy to it:
resource "aws_s3_bucket_policy" "bucket_policie" {
for_each = data.aws_s3_bucket.selected
bucket = each.key
policy = "your policy document"
}
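To fill in the placeholder policy above, a hedged sketch could reuse the VPC-conditioned statement from the question as a per-bucket document (the name per_bucket is illustrative):
data "aws_iam_policy_document" "per_bucket" {
  for_each = data.aws_s3_bucket.selected

  statement {
    sid       = "VPCAllow"
    effect    = "Allow"
    actions   = ["s3:*"]
    # Each selected bucket plus its object paths.
    resources = [each.value.arn, "${each.value.arn}/*"]

    condition {
      test     = "StringEquals"
      variable = "aws:SourceVpc"
      values   = ["vpc-01010101"]
    }

    principals {
      type        = "*"
      identifiers = ["*"]
    }
  }
}
and then set policy = data.aws_iam_policy_document.per_bucket[each.key].json in the aws_s3_bucket_policy resource above.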