InvalidParameterValueException: Cannot access stream - amazon-web-services

I am trying to create a dynamodb table and lambda trigger using Terraform. This is how I define the table, role policy and lambda trigger:
resource "aws_dynamodb_table" "filenames" {
  name             = local.dynamodb_table_filenames
  billing_mode     = "PROVISIONED"
  read_capacity    = 1000
  write_capacity   = 1000
  hash_key         = "filename"
  stream_enabled   = true
  stream_view_type = "NEW_IMAGE"
  #range_key       = ""

  attribute {
    name = "filename"
    type = "S"
  }

  tags = var.tags
}
resource "aws_iam_role_policy" "dynamodb_policy" {
  policy = jsonencode(
    {
      Version : "2012-10-17",
      Statement : [
        {
          Action : [
            "dynamodb:GetItem",
            "dynamodb:PutItem",
            "dynamodb:UpdateItem",
            "dynamodb:Query",
            "dynamodb:GetRecords",
            "dynamodb:GetShardIterator",
            "dynamodb:DescribeStream",
            "dynamodb:ListShards",
            "dynamodb:ListStreams",
          ],
          Effect : "Allow",
          Resource : aws_dynamodb_table.filenames.arn
        }
      ]
    }
  )
  role = aws_iam_role.processing_lambda_role.id
}
resource "aws_lambda_event_source_mapping" "allow_dynamodb_table_to_trigger_lambda" {
  event_source_arn  = aws_dynamodb_table.filenames.stream_arn
  function_name     = aws_lambda_function.trigger_stepfunction_lambda.arn
  starting_position = "LATEST"
}
I am getting this error even though I have already added the relevant policies to the role:
error creating Lambda Event Source Mapping (arn:aws:dynamodb:eu-central-12:table/tablename/stream): InvalidParameterValueException: Cannot access stream arn:aws:dynamodb:eu-central-1:299093934558:table/4tablename/stream. Please ensure the role can perform the GetRecords, GetShardIterator, DescribeStream, ListShards, and ListStreams Actions on your stream in IAM.
How can I fix this?

The stream actions apply to streams, not to tables. The ARN for a stream has the form:
arn:${Partition}:dynamodb:${Region}:${Account}:table/${TableName}/stream/${StreamLabel}
Thus, you should use (or something equivalent):
Resource: "${aws_dynamodb_table.filenames.arn}/stream/*"
or, more generally:
Resource: "${aws_dynamodb_table.filenames.arn}/*"
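For completeness, a minimal sketch of the corrected policy (reusing the resource names from the question), splitting item-level actions on the table ARN from stream actions on the stream ARN:

```hcl
resource "aws_iam_role_policy" "dynamodb_policy" {
  role = aws_iam_role.processing_lambda_role.id
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        # Item-level actions target the table itself
        Action   = ["dynamodb:GetItem", "dynamodb:PutItem", "dynamodb:UpdateItem", "dynamodb:Query"]
        Effect   = "Allow"
        Resource = aws_dynamodb_table.filenames.arn
      },
      {
        # Stream actions target the stream sub-resource of the table
        Action   = ["dynamodb:GetRecords", "dynamodb:GetShardIterator", "dynamodb:DescribeStream", "dynamodb:ListShards", "dynamodb:ListStreams"]
        Effect   = "Allow"
        Resource = "${aws_dynamodb_table.filenames.arn}/stream/*"
      }
    ]
  })
}
```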

Related

Terraform: How to get array/list of ARNs for array of resources

How do I get a list/array of ARNs for a resource created with for_each?
# DynamoDB
resource "aws_dynamodb_table" "terraform_state" {
  for_each       = var.aws_shared_accounts
  name           = "${each.key}-terraform-state"
  read_capacity  = 5
  write_capacity = 5
  hash_key       = "LockID"

  attribute {
    name = "LockID"
    type = "S"
  }
}

resource "aws_iam_role" "infra_github_role" {
  name = "TerraformBackendRole"

  inline_policy {
    name = "TerrafomBackendPolicy"
    policy = jsonencode({
      Version = "2012-10-17"
      Statement = [
        {
          Action = "*"
          Effect = "Allow"
          Resource = [
            concat(
              [aws_s3_bucket.terraform_bucket.arn, "${aws_s3_bucket.terraform_bucket.arn}/*"],
              aws_dynamodb_table.terraform_state[*].arn
            )
          ]
        },
      ]
    })
  }
}
Without the aws_iam_role, I can run terraform plan and see the resources:
# aws_dynamodb_table.terraform-state["main"] will be created
+ resource "aws_dynamodb_table" "terraform-state" {
# aws_dynamodb_table.terraform-state["main-dev"] will be created
+ resource "aws_dynamodb_table" "terraform-state" {
The error I am receiving after adding the aws_iam_role:
Error: Unsupported attribute
on 03-iam.tf line 16, in resource "aws_iam_role" "infra_github_role":
16: aws_dynamodb_table.terraform_state[*].arn
Since you used for_each, to get the ARN values you should do:
values(aws_dynamodb_table.terraform_state)[*].arn
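Applied to the inline policy above, the Resource expression would look like this (note that concat already returns a list, so the extra surrounding brackets from the original can be dropped to avoid a nested list):

```hcl
Resource = concat(
  [aws_s3_bucket.terraform_bucket.arn, "${aws_s3_bucket.terraform_bucket.arn}/*"],
  values(aws_dynamodb_table.terraform_state)[*].arn
)
```

values() turns the for_each map of table objects into a list, after which the [*].arn splat works as expected.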

how to configure s3 bucket to allow aws application load balancer (not classic) to use it? currently throws 'access denied'

I have an application load balancer and I'm trying to enable logging, terraform code below:
resource "aws_s3_bucket" "lb-logs" {
  bucket = "yeo-messaging-${var.environment}-lb-logs"
}

resource "aws_s3_bucket_acl" "lb-logs-acl" {
  bucket = aws_s3_bucket.lb-logs.id
  acl    = "private"
}

resource "aws_lb" "main" {
  name                       = "main"
  internal                   = false
  load_balancer_type         = "application"
  security_groups            = [aws_security_group.public.id]
  enable_deletion_protection = false
  subnets                    = [aws_subnet.public.id, aws_subnet.public-backup.id]

  access_logs {
    bucket  = aws_s3_bucket.lb-logs.bucket
    prefix  = "main-lb"
    enabled = true
  }
}
unfortunately I can't apply this due to:
Error: failure configuring LB attributes: InvalidConfigurationRequest: Access Denied for bucket: xxx-lb-logs. Please check S3bucket permission
│ status code: 400, request id: xx
I've seen a few SO threads and documentation, but unfortunately it all applies to the classic load balancer, particularly the 'data' source that lets you get the service account of the load balancer.
I have found some policy info on how to apply the right permissions to a service account, but I can't seem to find how to apply the service account to the LB itself.
Example:
data "aws_iam_policy_document" "allow-lb" {
  statement {
    principals {
      type        = "AWS"
      identifiers = [data.aws_elb_service_account.main.arn]
    }
    actions = [
      "s3:GetObject",
      "s3:ListBucket",
      "s3:PutObject"
    ]
    resources = [
      aws_s3_bucket.lb-logs.arn,
      "${aws_s3_bucket.lb-logs.arn}/*",
    ]
  }
}

resource "aws_s3_bucket_policy" "allow-lb" {
  bucket = aws_s3_bucket.lb-logs.id
  policy = data.aws_iam_policy_document.allow-lb.json
}
But this is all moot because data.aws_elb_service_account.main.arn is only for classic LB.
EDIT:
Full code with attempt from answer below:
resource "aws_s3_bucket" "lb-logs" {
  bucket = "yeo-messaging-${var.environment}-lb-logs"
}

resource "aws_s3_bucket_acl" "lb-logs-acl" {
  bucket = aws_s3_bucket.lb-logs.id
  acl    = "private"
}

data "aws_iam_policy_document" "allow-lb" {
  statement {
    principals {
      type        = "Service"
      identifiers = ["logdelivery.elb.amazonaws.com"]
    }
    actions = [
      "s3:PutObject"
    ]
    resources = [
      "${aws_s3_bucket.lb-logs.arn}/*"
    ]
    condition {
      test     = "StringEquals"
      variable = "s3:x-amz-acl"
      values = [
        "bucket-owner-full-control"
      ]
    }
  }
}

resource "aws_s3_bucket_policy" "allow-lb" {
  bucket = aws_s3_bucket.lb-logs.id
  policy = data.aws_iam_policy_document.allow-lb.json
}

resource "aws_lb" "main" {
  name                       = "main"
  internal                   = false
  load_balancer_type         = "application"
  security_groups            = [aws_security_group.public.id]
  enable_deletion_protection = false
  subnets                    = [aws_subnet.public.id, aws_subnet.public-backup.id]

  access_logs {
    bucket  = aws_s3_bucket.lb-logs.bucket
    prefix  = "main-lb"
    enabled = true
  }
}
The bucket policy you need to use is provided in the official documentation for access logs on Application Load Balancers.
{
  "Effect": "Allow",
  "Principal": {
    "Service": "logdelivery.elb.amazonaws.com"
  },
  "Action": "s3:PutObject",
  "Resource": "arn:aws:s3:::bucket-name/prefix/AWSLogs/your-aws-account-id/*",
  "Condition": {
    "StringEquals": {
      "s3:x-amz-acl": "bucket-owner-full-control"
    }
  }
}
Notice that bucket-name, prefix, and your-aws-account-id need to be replaced in that policy with your actual values.
In Terraform:
data "aws_iam_policy_document" "allow-lb" {
  statement {
    principals {
      type        = "Service"
      identifiers = ["logdelivery.elb.amazonaws.com"]
    }
    actions = [
      "s3:PutObject"
    ]
    resources = [
      "${aws_s3_bucket.lb-logs.arn}/*"
    ]
    condition {
      test     = "StringEquals"
      variable = "s3:x-amz-acl"
      values = [
        "bucket-owner-full-control"
      ]
    }
  }
}
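If you want the Terraform version to match the tighter scoping of the JSON policy above (prefix and account id instead of /*), one way, assuming the main-lb prefix from the question, is to pull the account id from the aws_caller_identity data source:

```hcl
data "aws_caller_identity" "current" {}

data "aws_iam_policy_document" "allow-lb" {
  statement {
    principals {
      type        = "Service"
      identifiers = ["logdelivery.elb.amazonaws.com"]
    }
    actions = ["s3:PutObject"]
    # Restrict writes to the exact key prefix the ALB will use
    resources = [
      "${aws_s3_bucket.lb-logs.arn}/main-lb/AWSLogs/${data.aws_caller_identity.current.account_id}/*"
    ]
    condition {
      test     = "StringEquals"
      variable = "s3:x-amz-acl"
      values   = ["bucket-owner-full-control"]
    }
  }
}
```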

How do I capture AWS Backup failures in terraform when Windows VSS fails?

I'm using AWS Backup to back up several EC2 instances. I have Terraform that seems to report correctly when there is a backup failure, but I am also interested in the case where the disks have backed up correctly but Windows VSS fails. Ultimately, the failed events are going to be published to Opsgenie. Is there a way to accomplish this? I have tried capturing all events with the 'aws_backup_vault_notifications' resource, and I have tried a filter as described in this AWS blog: https://aws.amazon.com/premiumsupport/knowledge-center/aws-backup-failed-job-notification/
I have included most of my terraform below, minus the opsgenie module; I can get successful or fully failing events published to Opsgenie just fine if I include those events:
locals {
  backup_vault_events = toset(["BACKUP_JOB_FAILED", "COPY_JOB_FAILED"])
}

resource "aws_backup_region_settings" "legacy" {
  resource_type_opt_in_preference = {
    "Aurora"          = false
    "DynamoDB"        = false
    "EFS"             = false
    "FSx"             = false
    "RDS"             = false
    "Storage Gateway" = false
    "EBS"             = true
    "EC2"             = true
    "DocumentDB"      = false
    "Neptune"         = false
    "VirtualMachine"  = false
  }
}
resource "aws_backup_vault" "legacy" {
  name        = "Legacy${var.environment_tag}"
  kms_key_arn = aws_kms_key.key.arn
}

resource "aws_iam_role" "legacy_backup" {
  name                 = "AWSBackupService"
  permissions_boundary = data.aws_iam_policy.role_permissions_boundary.arn
  assume_role_policy   = <<POLICY
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": ["sts:AssumeRole"],
      "Effect": "Allow",
      "Principal": {
        "Service": ["backup.amazonaws.com"]
      }
    }
  ]
}
POLICY
}

resource "aws_iam_role_policy_attachment" "legacy_backup" {
  policy_arn = "arn:aws:iam::aws:policy/service-role/AWSBackupServiceRolePolicyForBackup"
  role       = aws_iam_role.legacy_backup.name
}
###############################################################################
## Second Region Backup
###############################################################################
resource "aws_backup_vault" "secondary" {
  provider    = aws.secondary
  name        = "Legacy${var.environment_tag}SecondaryRegion"
  kms_key_arn = aws_kms_replica_key.secondary_region.arn
  tags = merge(
    local.tags, {
      name = "Legacy${var.environment_tag}SecondaryRegion"
    }
  )
}

data "aws_iam_policy_document" "backups" {
  policy_id = "__default_policy_ID"
  statement {
    actions = [
      "SNS:Publish",
    ]
    effect = "Allow"
    principals {
      type        = "Service"
      identifiers = ["backup.amazonaws.com"]
    }
    resources = [
      aws_sns_topic.backup_alerts.arn
    ]
    sid = "__default_statement_ID"
  }
}

###############################################################################
# SNS
###############################################################################
resource "aws_sns_topic_policy" "backup_alerts" {
  arn    = aws_sns_topic.backup_alerts.arn
  policy = data.aws_iam_policy_document.backups.json
}

resource "aws_backup_vault_notifications" "backup_alerts" {
  backup_vault_name   = aws_backup_vault.legacy.id
  sns_topic_arn       = aws_sns_topic.backup_alerts.arn
  backup_vault_events = local.backup_vault_events
}

resource "aws_sns_topic_subscription" "backup_alerts_opsgenie_target" {
  topic_arn                       = aws_sns_topic.backup_alerts.arn
  protocol                        = "https"
  endpoint                        = module.opsgenie_team.sns_integration_sns_endpoint
  confirmation_timeout_in_minutes = 1
  endpoint_auto_confirms          = true
}

how to create an iam role with policy that grants access to the SQS created

I created 2 SQS queues and their dead-letter queues with the code in my main.tf calling the SQS/main.tf module. I would like to destroy and create them again, but this time I want to call IAM/iam_role.tf as well, to create one IAM role together with the policy documents. I don't know how to specify that in my main.tf so that the resources section of the policy document has both SQS queues created, meaning "CloudTrail_SQS_Data_Event" and "cloudTrail_SQS_Management_Event", and the S3 resource ARNs give the role access to the 2 different buckets used for the SQS queues, meaning "cloudtrail-management-event-logs" and "aws-cloudtrail143-sqs-logs".
SQS/main.tf
resource "aws_sqs_queue" "CloudTrail_SQS" {
  name = var.sqs_queue_name
  redrive_policy = jsonencode({
    deadLetterTargetArn = aws_sqs_queue.CloudTrail_SQS_DLQ.arn
    maxReceiveCount     = 4
  })
}

resource "aws_sqs_queue" "CloudTrail_SQS_DLQ" {
  name = var.dead_queue_name
}
IAM/iam_role.tf
resource "aws_iam_role" "access_role" {
  name               = var.role_name
  description        = var.description
  assume_role_policy = data.aws_iam_policy_document.trust_relationship.json
}
trust policy
data "aws_iam_policy_document" "trust_relationship" {
  statement {
    sid     = "AllowAssumeRole"
    actions = ["sts:AssumeRole"]
    principals {
      type        = "AWS"
      identifiers = [var.account_id]
    }
    condition {
      test     = "StringEquals"
      variable = "sts:ExternalId"
      values   = [var.external_id]
    }
  }
}
data "aws_iam_policy_document" "policy_document" {
  statement {
    actions = [
      "sqs:GetQueueUrl",
      "sqs:ReceiveMessage",
      "sqs:SendMessage"
    ]
    effect    = "Allow"
    resources = aws_sqs_queue.CloudTrail_SQS.arn
  }
  statement {
    actions   = ["sqs:ListQueues"]
    effect    = "Allow"
    resources = ["*"]
  }
  statement {
    actions = ["s3:GetObject", "s3:GetBucketLocation"]
    resources = [
      "arn:aws:s3:::${var.cloudtrail_event_log_bucket_name}/*"
    ]
    effect = "Allow"
  }
  statement {
    actions = ["s3:ListBucket"]
    resources = [
      "arn:aws:s3:::${var.cloudtrail_event_log_bucket_name}"
    ]
    effect = "Allow"
  }
  statement {
    actions   = ["kms:Decrypt", "kms:GenerateDataKey", "kms:DescribeKey"]
    effect    = "Allow"
    resources = [var.kms_key_arn]
  }
}
main.tf
module "data_events" {
  source                           = "../SQS"
  cloudtrail_event_log_bucket_name = "aws-cloudtrail143-sqs-logs"
  sqs_queue_name                   = "CloudTrail_SQS_Data_Event"
  dead_queue_name                  = "CloudTrail_DLQ_Data_Event"
}

module "management_events" {
  source                           = "../SQS"
  cloudtrail_event_log_bucket_name = "cloudtrail-management-event-logs"
  sqs_queue_name                   = "cloudTrail_SQS_Management_Event"
  dead_queue_name                  = "cloudTrail_DLQ_Management_Event"
}
The role would be created as shown below. But your question has many mistakes and missing information, so it's impossible to provide full, working code. The code below should therefore be treated as a template which you need to adjust for your use.
resource "aws_iam_role" "access_role" {
  name        = var.role_name
  description = var.description
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Action = "sts:AssumeRole"
        Effect = "Allow"
        Sid    = ""
        Principal = {
          Service = "ec2.amazonaws.com"
        }
      },
    ]
  })

  inline_policy {
    name   = "allow-access-to-s3-sqs"
    policy = data.aws_iam_policy_document.policy_document.json
  }
}

data "aws_iam_policy_document" "policy_document" {
  statement {
    actions = [
      "sqs:GetQueueUrl",
      "sqs:ReceiveMessage",
      "sqs:SendMessage"
    ]
    effect = "Allow"
    resources = [
      module.data_events.sqs.arn,
      module.management_events.sqs.arn,
    ]
  }
  statement {
    actions   = ["sqs:ListQueues"]
    effect    = "Allow"
    resources = ["*"]
  }
  statement {
    actions = ["s3:GetObject", "s3:GetBucketLocation"]
    resources = [
      "arn:aws:s3:::aws-cloudtrail143-sqs-logs/*",
      "arn:aws:s3:::cloudtrail-management-event-logs/*"
    ]
    effect = "Allow"
  }
  statement {
    actions = ["s3:ListBucket"]
    resources = [
      "arn:aws:s3:::aws-cloudtrail143-sqs-logs",
      "arn:aws:s3:::cloudtrail-management-event-logs"
    ]
    effect = "Allow"
  }
  statement {
    actions   = ["kms:Decrypt", "kms:GenerateDataKey", "kms:DescribeKey"]
    effect    = "Allow"
    resources = [var.kms_key_arn]
  }
}
You can use Terraform's outputs and data sources.
In this case, you should declare outputs in the SQS folder, read them in the IAM folder, and use them there.
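A sketch of that wiring, assuming an output named sqs_arn is added to the SQS module and the IAM module takes a hypothetical sqs_arns variable:

```hcl
# SQS/outputs.tf — expose the queue ARN from the module
output "sqs_arn" {
  value = aws_sqs_queue.CloudTrail_SQS.arn
}

# main.tf — pass both queue ARNs into the IAM module
module "iam_role" {
  source = "../IAM"
  sqs_arns = [
    module.data_events.sqs_arn,
    module.management_events.sqs_arn,
  ]
}
```

Inside the IAM module, the SQS statement would then use resources = var.sqs_arns.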

What's the correct terraform syntax to allow an external AWS role to subscribe and read from AWS SNS topic?

I want to create a policy so a specific AWS role (not in the same account), let's say arn:aws:iam::123123123123:role/sns-read-role, can subscribe and receive messages from my SNS topic in AWS.
From the official terraform docs about aws_sns_topic_policy, an example would be:
resource "aws_sns_topic" "test" {
  name = "my-topic-with-policy"
}

resource "aws_sns_topic_policy" "default" {
  arn    = aws_sns_topic.test.arn
  policy = data.aws_iam_policy_document.sns_topic_policy.json
}

data "aws_iam_policy_document" "sns_topic_policy" {
  statement {
    actions = [
      "SNS:Subscribe",
      "SNS:Receive"
    ]
    condition {
      test     = "StringEquals"
      variable = "AWS:SourceOwner"
      values = [
        "123123123123"
      ]
    }
    effect = "Allow"
    principals {
      type        = "AWS"
      identifiers = ["*"]
    }
    resources = [
      aws_sns_topic.test.arn
    ]
  }
}
But this would translate to arn:aws:iam::123123123123:root and filter only on account-id.
From AWS JSON policy elements: Principal I understand the AWS syntax is
"Principal": { "AWS": "arn:aws:iam::AWS-account-ID:role/role-name" }
Adding the role in the condition like this
condition {
  test     = "StringEquals"
  variable = "AWS:SourceOwner"
  values = [
    "arn:aws:iam::123123123123:role/sns-read-role"
  ]
}
does not work.
It would make sense to add the role to the principal like this
principals {
  type        = "AWS"
  identifiers = ["arn:aws:iam::123123123123:role/sns-read-role"]
}
When I try to subscribe, I get an AuthorizationError: "Couldn't subscribe to topic..."
Do I need the condition together with the principal? Why even bother with the condition if you can use the principal in the first place?
After some experimenting, I found that I don't need the condition. This works for me:
resource "aws_sns_topic" "test" {
  name = "my-topic-with-policy"
}

resource "aws_sns_topic_policy" "default" {
  arn    = aws_sns_topic.test.arn
  policy = data.aws_iam_policy_document.sns_topic_policy.json
}

data "aws_iam_policy_document" "sns_topic_policy" {
  statement {
    actions = [
      "SNS:Subscribe",
      "SNS:Receive"
    ]
    effect = "Allow"
    principals {
      type = "AWS"
      identifiers = [
        "arn:aws:iam::123123123123:role/sns-read-role"
      ]
    }
    resources = [
      aws_sns_topic.test.arn
    ]
  }
}
In case you want to use parameters for your module:
principals {
  type = "AWS"
  identifiers = [
    "${var.account_arn}:role/${var.role}"
  ]
}