Terraform AWS EventBridge input transformer not applying

When trying to use the EventBridge input_transformer, I do not receive the transformed object; instead, the original event is delivered to SQS unchanged.
I currently have the following setup running:
locals {
  rule-arn = "arn:aws:events:${data.aws_arn.event-rule.region}:${data.aws_arn.event-rule.account}:rule/${aws_cloudwatch_event_rule.notification.name}"
}
resource "aws_sqs_queue" "test-queue" {
  name = "test-queue"
}
resource "aws_cloudwatch_event_rule" "notification" {
  name           = "test-notification"
  event_bus_name = aws_cloudwatch_event_bus.events.name
  description    = "Listens to all events in the TEST.Notification namespace"
  event_pattern = jsonencode({
    source = [{ "prefix" : "TEST.Notification" }],
  })
}
resource "aws_cloudwatch_event_target" "developer-notification" {
  rule      = aws_cloudwatch_event_rule.notification.name
  target_id = "SendToSQS"
  arn       = aws_sqs_queue.test-queue.arn
  input_transformer {
    input_paths = {
      "detailType" = "$.detail-type",
    }
    input_template = jsonencode(
      {
        "detailType" : "<detailType>"
      }
    )
  }
}
resource "aws_sqs_queue_policy" "test-queue" {
queue_url = aws_sqs_queue.test-queue.id
policy = <<POLICY
{
"Version": "2012-10-17",
"Id": "",
"Statement": [
{
"Sid": "Allow EventBridge to SQS",
"Effect": "Allow",
"Principal": {
"Service": "events.amazonaws.com"
},
"Action": "*",
"Resource": "${aws_sqs_queue.test-queue.arn}",
"Condition": {
"ArnEquals": {
"aws:SourceArn": "${aws_cloudwatch_event_rule.notification.arn}"
}
}
}
]
}
POLICY
}
I am running Terraform version:
Terraform v1.2.3
on darwin_arm64
I have seen some talk about having to do things like
"\"<detailType>\""
in order to make it work, but I've had no luck with that either, so for brevity/readability I've removed all the workarounds I've seen people use. My thinking is that there's something more basic I am missing here.
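For reference, the escaping trick referred to above is usually shown roughly like the following, with the template written as a literal string rather than through jsonencode so the quoting around the placeholder is explicit (this is only an illustration of that suggestion, not a confirmed fix):
input_transformer {
  input_paths = {
    detailType = "$.detail-type"
  }
  # Literal JSON template; EventBridge substitutes <detailType> at delivery time.
  input_template = <<EOF
{"detailType": "<detailType>"}
EOF
}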
Does someone know what I am doing wrong?

Related

Passing List to IAM Group Policy Template file

I am creating a policy based on tags. The environment value differs for each resource, so I have to add the values as multiple entries. I am unable to pass them as a list; I tried join, split, and a for loop, and none of them work. Please help. The code below just adds the value as "beta,test", which will not work as expected.
main.tf
locals {
  workspaceValues = terraform.workspace == "dev" ? ["alpha", "dev"] : terraform.workspace == "test" ? ["beta", "test"] : ["prod", "staging"]
}
resource "aws_iam_group_policy" "inline_policy" {
  name   = "${terraform.workspace}_policy"
  group  = aws_iam_group.backend_admin.name
  policy = templatefile("policy.tpl", { env = join(",", local.workspaceValues), region = "${data.aws_region.current.name}", account_id = "${data.aws_caller_identity.current.account_id}" })
}
policy.tpl:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "ec2:*",
      "Resource": "*",
      "Condition": {
        "ForAllValues:StringLikeIfExists": {
          "aws:ResourceTag/Environment": "${env}"
        }
      }
    }
  ]
}
You can use jsonencode to properly render the Terraform list as a JSON list in your template:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "ec2:*",
      "Resource": "*",
      "Condition": {
        "ForAllValues:StringLikeIfExists": {
          "aws:ResourceTag/Environment": ${jsonencode(env)}
        }
      }
    }
  ]
}
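With the test workspace, for example, env is ["beta", "test"], so the rendered condition becomes a proper JSON array instead of the single "beta,test" string:
"aws:ResourceTag/Environment": ["beta","test"]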
For that you would call the template as follows:
resource "aws_iam_group_policy" "inline_policy" {
name = "${terraform.workspace}_policy"
group = aws_iam_group.backend_admin.name
policy = templatefile("policy.tpl", {
env = local.workspaceValues
})
}
You are not using region or account_id in your template, so there is no reason to pass them in.
An alternative solution that avoids this altogether is to build the policy with the aws_iam_policy_document data source and then pass it to the aws_iam_policy resource, like the following:
data "aws_iam_policy_document" "example" {
statement {
effect = "Allow"
actions = [
"ec2:*"
]
resources = [
"*"
]
condition {
test = "ForAllValues:StringLikeIfExists"
variable = "aws:ResourceTag/Environment"
values = local.workspaceValues
}
}
}
resource "aws_iam_policy" "example" {
name = "example_policy"
path = "/"
policy = data.aws_iam_policy_document.example.json
}
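Note that this variant creates a standalone managed policy. If you would rather keep the inline group policy from the question, the same rendered document can be fed to aws_iam_group_policy instead (a sketch, reusing the question's group resource):
resource "aws_iam_group_policy" "inline_policy" {
  name   = "${terraform.workspace}_policy"
  group  = aws_iam_group.backend_admin.name
  policy = data.aws_iam_policy_document.example.json
}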

How to tie bucket names to resources created by calling a module

The code below works when I create the resources, but I would like to tie each SQS queue that is created to a different S3 bucket. For example, I want CloudTrail_SQS_Management_Event/CloudTrail_DLQ_Management_Event to use a bucket called "management_sqs_bucket", CloudTrail_SQS_Data_Event/CloudTrail_DLQ_Data_Event to use a bucket called "data_sqs_bucket", and the bucket names to be reflected accordingly in the queue policies.
SQS/variables.tf
variable "sqs_queue_name"{
description = "The name of different SQS to be created"
type = string
}
variable "dead_queue_name"{
description = "The name of different Dead Queues to be created"
type = string
}
variable "max_receive_count" {
type = number
}
SQS/iam.tf
data "aws_iam_policy_document" "policy_document"{
statement{
actions = [
"sqs:DeleteMessage",
"sqs:GetQueueUrl",
"sqs:ReceiveMessage",
"sqs:SendMessage",
"sqs:SetQueueAttributes"
]
effect = "Allow"
resources = values(aws_sqs_queue.sqs)[*].arn
}
resource "aws_sqs_queue_policy" "Cloudtrail_SQS_Policy" {
queue_url = aws_sqs_queue.CloudTrail_SQS.id
policy = <<POLICY
{
"Version": "2012-10-17",
"Id": "sqspolicy",
"Statement": [
{
"Sid": "AllowSQSInvocation",
"Effect": "Allow",
"Principal": {"AWS":"*"},
"Action": "sqs:*",
"Resource": "${aws_sqs_queue.CloudTrail_SQS.arn}",
"Condition": {
"ArnEquals": {
"aws:SourceArn": "arn:aws:s3:::${var.cloudtrail_event_log_bucket_name}"
}
}
}
]
}
POLICY
}
resource "aws_sqs_queue_policy" "CloudTrail_SQS_DLQ"{
queue_url = aws_sqs_queue.CloudTrail_SQS_DLQ.id
policy = <<POLICY
{
"Version": "2012-10-17",
"Id": "sqspolicy",
"Statement": [
{
"Sid": "DLQ Policy",
"Effect": "Allow",
"Principal": {"AWS":"*"},
"Action": "sqs:*",
"Resource": "${aws_sqs_queue.CloudTrail_SQS_DLQ.arn}",
"Condition": {
"ArnEquals": {
"aws:SourceArn": "arn:aws:s3:::${var.cloudtrail_event_log_bucket_name}"
}
}
}
]
}
POLICY
}
SQS/main.tf
resource "aws_sqs_queue" "sqs" {
name = var.sqs_queue_name
redrive_policy = jsonencode({
deadLetterTargetArn = aws_sqs_queue.dlq.arn
maxReceiveCount = var.max_receive_count
})
}
resource "aws_sqs_queue" "dlq" {
name = var.dead_queue_name
}
SQS/output.tf
output "sqs_queue_id"{
value = values(aws_sqs_queue.sqs)[*].id
description = "The URL for the created Amazon SQS queue."
}
output "sqs_queue_arn" {
value = values(aws_sqs_queue.sqs)[*].arn
description = "The ARN of the SQS queue."
}
variable.tf
variable "queue_names" {
default = [
{
sqs_name = "CloudTrail_SQS_Management_Event"
dlq_name = "CloudTrail_DLQ_Management_Event"
},
{
sqs_name = "CloudTrail_SQS_Data_Event"
dlq_name = "CloudTrail_DLQ_Data_Event"
}
]
}
module "sqs_queue" {
source = "../SQS"
for_each = {
for sqs, dlq in var.queue_names : sqs => dlq
}
sqs_queue_name = each.value.sqs_name
dead_queue_name = each.value.dlq_name
max_receive_count = var.max_receive_count
}
From what I understand, I believe this is what you want to do:
variables.tf:
variable "queue_names" {
default = [
{
sqs_name = "CloudTrail_SQS_Management_Event"
dlq_name = "CloudTrail_DLQ_Management_Event"
bucket_name = "management_sqs_bucket"
},
{
sqs_name = "CloudTrail_SQS_Data_Event"
dlq_name = "CloudTrail_DLQ_Data_Event"
bucket_name = "data_sqs_bucket"
}
]
}
main.tf:
module "my_sqs" {
source = "./my_sqs"
for_each = {
for q in var.queue_names : q.sqs_name => q
}
sqs_queue_name = each.value.sqs_name
dead_queue_name = each.value.dlq_name
max_receive_count = 4
cloudtrail_event_log_bucket_name = each.value.bucket_name
}
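This assumes the module also declares a matching input variable for the bucket name (it is referenced in the module's iam.tf but not shown in the question); roughly, with the description wording being my guess:
variable "cloudtrail_event_log_bucket_name" {
  description = "Name of the S3 bucket allowed to send notifications to the queues"
  type        = string
}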
Also, I see some code duplication in aws_sqs_queue_policy, because you have both the SQS queue and the DLQ. This can be refactored to something like the following:
iam.tf:
data "aws_iam_policy_document" "policy_document" {
statement {
actions = [
"sqs:DeleteMessage",
"sqs:GetQueueUrl",
"sqs:ReceiveMessage",
"sqs:SendMessage",
"sqs:SetQueueAttributes"
]
effect = "Allow"
resources = aws_sqs_queue.sqs[*].arn
}
}
locals {
queue_data = [
{
id = aws_sqs_queue.sqs.id
arn = aws_sqs_queue.sqs.arn
},
{
id = aws_sqs_queue.dlq.id
arn = aws_sqs_queue.dlq.arn
}
]
}
resource "aws_sqs_queue_policy" "sqs_policy" {
# This can be achieved with for_each similar to what we have in main.tf, but I did not want to complicate it
count = length(aws_sqs_queue.sqs[*])
queue_url = local.queue_data[count.index].id
policy = <<POLICY
{
"Version": "2012-10-17",
"Id": "sqspolicy",
"Statement": [
{
"Sid": "AllowSQSInvocation",
"Effect": "Allow",
"Principal": {"AWS":"*"},
"Action": "sqs:*",
"Resource": "${local.queue_data[count.index].arn}",
"Condition": {
"ArnEquals": {
"aws:SourceArn": "arn:aws:s3:::${var.cloudtrail_event_log_bucket_name}"
}
}
}
]
}
POLICY
}
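The for_each variant mentioned in the comment above might look roughly like this (a sketch; the "sqs" and "dlq" map keys are arbitrary labels chosen for illustration):
resource "aws_sqs_queue_policy" "sqs_policy" {
  for_each  = { sqs = aws_sqs_queue.sqs, dlq = aws_sqs_queue.dlq }
  queue_url = each.value.id
  policy    = <<POLICY
{
  "Version": "2012-10-17",
  "Id": "sqspolicy",
  "Statement": [
    {
      "Sid": "AllowSQSInvocation",
      "Effect": "Allow",
      "Principal": {"AWS":"*"},
      "Action": "sqs:*",
      "Resource": "${each.value.arn}",
      "Condition": {
        "ArnEquals": {
          "aws:SourceArn": "arn:aws:s3:::${var.cloudtrail_event_log_bucket_name}"
        }
      }
    }
  ]
}
POLICY
}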

WebSocket connection on AWS always results in TOO MANY REQUESTS even with one request

So I have this Terraform that seems to deploy a WebSocket API to AWS, but once deployed, when I connect, I consistently get "429 Too Many Requests" errors.
Using Terraform 0.13.4.
I've turned up the request limits manually in the console, but every time I run wscat -c {MYENDPOINT} I get a 429.
I can't find anything online or in the manuals about this.
Here is the Terraform. Can anyone see if I'm missing something in my routes or integrations?
Here is the response I keep getting from the logs:
(VH_SDESljoEF7tg=) Gateway response body: { "message": "Too Many Requests", "connectionId": "VH_SDd21joECIeg=", "requestId": "VH_SDESljoEF7tg=" }
and
(VH_SDESljoEF7tg=) Key throttle limit exceeded for RestApi k27g2ypii6, Stage test, Resource $connect, HttpMethod GET. Limit: 42.00 Burst: 0
resource "aws_apigatewayv2_api" "websocket-api" {
name = "websocket-api"
protocol_type = "WEBSOCKET"
}
resource "aws_apigatewayv2_integration" "chatRoomConnectIntegration" {
api_id = aws_apigatewayv2_api.websocket-api.id
integration_type = "AWS_PROXY"
integration_uri = aws_lambda_function.ChatRoomConnectFunction.invoke_arn
integration_method = "POST"
}
resource "aws_apigatewayv2_route" "connectRoute" {
api_id = aws_apigatewayv2_api.websocket-api.id
route_key = "$connect"
target = "integrations/${aws_apigatewayv2_integration.chatRoomConnectIntegration.id}"
}
resource "aws_apigatewayv2_deployment" "deploy" {
api_id = aws_apigatewayv2_api.websocket-api.id
description = "testing deployment"
triggers = {
redeployment = sha1(join(",", list(
jsonencode(aws_apigatewayv2_integration.chatRoomConnectIntegration),
jsonencode(aws_apigatewayv2_route.connectRoute),
)))
}
lifecycle {
create_before_destroy = true
}
}
resource "aws_apigatewayv2_stage" "test-stage" {
api_id = aws_apigatewayv2_api.websocket-api.id
name = "test"
access_log_settings {
destination_arn = aws_cloudwatch_log_group.access_logs.arn
format = "$context.identity.sourceIp - - [$context.requestTime] \"$context.httpMethod $context.routeKey $context.protocol\" $context.status $context.responseLength $context.requestId $context.integrationErrorMessage"
}
default_route_settings {
data_trace_enabled = true
logging_level = "INFO"
throttling_rate_limit = 42
}
route_settings {
route_key = "$connect"
data_trace_enabled = true
logging_level = "INFO"
throttling_rate_limit = 42
}
}
resource "aws_api_gateway_account" "api_gateway_accesslogs" {
cloudwatch_role_arn = aws_iam_role.cloudwatch.arn
}
resource "aws_iam_role" "cloudwatch" {
name = "api_gateway_cloudwatch_global"
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "",
"Effect": "Allow",
"Principal": {
"Service": "apigateway.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
}
EOF
}
resource "aws_iam_role_policy" "cloudwatch" {
name = "default"
role = aws_iam_role.cloudwatch.id
policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"logs:CreateLogGroup",
"logs:CreateLogStream",
"logs:DescribeLogGroups",
"logs:DescribeLogStreams",
"logs:PutLogEvents",
"logs:GetLogEvents",
"logs:FilterLogEvents"
],
"Resource": "*"
}
]
}
EOF
}
resource "aws_lambda_permission" "allow_api_gateway" {
action = "lambda:InvokeFunction"
function_name = aws_lambda_function.ChatRoomConnectFunction.arn
statement_id = "AllowExecutionFromApiGateway"
principal = "apigateway.amazonaws.com"
source_arn = "${aws_apigatewayv2_api.websocket-api.execution_arn}/*/*/*"
}
output "endpoint" {
value = aws_apigatewayv2_stage.test-stage.invoke_url
}
I can't explain the cause of the throttling, but I added this block to my aws_apigatewayv2_stage resource, triggered a new deployment, and now I'm able to connect using wscat:
default_route_settings {
  throttling_rate_limit  = 100
  throttling_burst_limit = 50
}
(relevant docs here)
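The log line above ("Limit: 42.00 Burst: 0") suggests the burst limit was effectively zero until it was set explicitly, so setting both values together seems to be the key point. If you also keep the per-route route_settings block for $connect, the same two settings can be applied there as well (a sketch; the numbers are arbitrary):
route_settings {
  route_key              = "$connect"
  throttling_rate_limit  = 100
  throttling_burst_limit = 50
}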

S3 Cross region replication using Terraform

I was using Terraform to set up S3 buckets (in different regions) and configure replication between them.
It was working properly until I added KMS to it.
I created two KMS keys, one for the source and one for the destination.
Now, while applying the replication configuration, there is an option to pass a destination key for the destination bucket, but I am not sure how to apply a key at the source.
Any help would be appreciated.
provider "aws" {
alias = "east"
region = "us-east-1"
}
resource "aws_s3_bucket" "destination-bucket" {
bucket = ""destination-bucket"
provider = "aws.east"
acl = "private"
region = "us-east-1"
versioning {
enabled = true
}
server_side_encryption_configuration {
rule {
apply_server_side_encryption_by_default {
kms_master_key_id = "${var.kms_cmk_dest_arn}"
sse_algorithm = "aws:kms"
}
}
}
}
resource "aws_s3_bucket" "source-bucket" {
bucket = "source-bucket"
acl = "private"
versioning {
enabled = true
}
server_side_encryption_configuration {
rule {
apply_server_side_encryption_by_default {
kms_master_key_id = "${var.kms_cmk_arn}"
sse_algorithm = "aws:kms"
}
}
}
replication_configuration {
role = "${aws_iam_role.replication.arn}"
rules {
status = "Enabled"
destination {
bucket = "${aws_s3_bucket.source-bucket.arn}"
storage_class = "STANDARD"
replica_kms_key_id = "${var.kms_cmk_dest_arn}"
}
source_selection_criteria {
sse_kms_encrypted_objects {
enabled = true
}
}
}
}
}
resource "aws_iam_role" "replication" {
name = "cdd-iam-role-replication"
permissions_boundary = "arn:aws:iam::${var.account_id}:policy/ServiceRoleBoundary"
assume_role_policy = <<POLICY
{
"Version": "2012-10-17",
"Statement": [
{
"Action": "sts:AssumeRole",
"Principal": {
"Service": "s3.amazonaws.com"
},
"Effect": "Allow",
"Sid": ""
}
]
}
POLICY
}
resource "aws_iam_role_policy" "replication" {
name = "cdd-iam-role-policy-replication"
role = "${aws_iam_role.replication.id}"
policy = <<POLICY
{
"Version": "2012-10-17",
"Statement": [
{
"Action": [
"s3:GetReplicationConfiguration",
"s3:ListBucket"
],
"Effect": "Allow",
"Resource": [
"${aws_s3_bucket.source-bucket.arn}"
]
},
{
"Action": [
"s3:GetObjectVersion",
"s3:GetObjectVersionAcl"
],
"Effect": "Allow",
"Resource": [
"${aws_s3_bucket.source-bucket.arn}/*"
]
},
{
"Action": [
"s3:ReplicateObject",
"s3:ReplicateDelete"
],
"Effect": "Allow",
"Resource": "${aws_s3_bucket.destination-bucket.arn}/*"
}
]
}
POLICY
}
In case you're using a Customer Managed Key(CMK) for S3 encryption, you need extra configuration.
AWS S3 Documentation mentions that the CMK owner must grant the source bucket owner permission to use the CMK.
https://docs.aws.amazon.com/AmazonS3/latest/dev/replication-config-for-kms-objects.html#replication-kms-cross-acct-scenario
Also, a good article to summarize the S3 cross region replication configuration:
https://medium.com/@devopslearning/100-days-of-devops-day-44-s3-cross-region-replication-crr-8c58ae8c68d4
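In practice that usually means the replication role also needs KMS permissions on both keys, roughly like the following extra statements in the replication role policy (a sketch based on the linked documentation, reusing the question's kms_cmk_arn/kms_cmk_dest_arn variables; adjust to your setup):
{
  "Action": ["kms:Decrypt"],
  "Effect": "Allow",
  "Resource": "${var.kms_cmk_arn}"
},
{
  "Action": ["kms:Encrypt"],
  "Effect": "Allow",
  "Resource": "${var.kms_cmk_dest_arn}"
}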
If I understand you correctly, you've got two S3 Buckets in two different regions within the same account.
One way I've done this in the past is to plan/apply the KMS keys to both regions first.
Then on a separate plan/apply, I used Terraform's data sources:
data "aws_kms_key" "source_credentials_encryption_key" {
key_id = "alias/source-encryption-key"
}
data "aws_kms_key" "destination_credentials_encryption_key" {
provider = aws.usEast
key_id = "alias/destination-encryption-key"
}
And used the data source for the replication configuration like so:
replication_configuration {
  role = aws_iam_role.replication_role.arn
  rules {
    status = "Enabled"
    destination {
      bucket             = aws_s3_bucket.destination_bucket.arn
      storage_class      = "STANDARD"
      replica_kms_key_id = data.aws_kms_key.destination_credentials_encryption_key.arn
    }
    source_selection_criteria {
      sse_kms_encrypted_objects {
        enabled = true
      }
    }
  }
}
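For completeness, the aliased provider referenced by the destination key data source would need to be declared somewhere in the configuration, along the lines of (names here are assumptions matching the snippet above):
provider "aws" {
  alias  = "usEast"
  region = "us-east-1"
}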

Terraform Definition of Cognito Identity Pool Auth/Unauth Roles

I've been trying to create a Terraform script for creating a Cognito user pool and identity pool with a linked auth and unauth role, but I can't find a good example of doing this. Here is what I have so far:
cognito.tf:
resource "aws_cognito_user_pool" "pool" {
name = "Sample User Pool"
admin_create_user_config {
allow_admin_create_user_only = false
}
/* More stuff here, not included*/
}
resource "aws_cognito_user_pool_client" "client" {
name = "client"
user_pool_id = "${aws_cognito_user_pool.pool.id}"
generate_secret = true
explicit_auth_flows = ["ADMIN_NO_SRP_AUTH"]
}
resource "aws_cognito_identity_pool" "main" {
identity_pool_name = "SampleIdentityPool"
allow_unauthenticated_identities = false
cognito_identity_providers {
client_id = "${aws_cognito_user_pool_client.id}"
provider_name = ""
server_side_token_check = true
}
}
So, I want to attach an auth role and an unauth role to this, but I'm still trying to get my head around how to define and link IAM roles in Terraform. Here is what I have so far:
resource "aws_cognito_identity_pool_roles_attachment" "main" {
identity_pool_id = "${aws_cognito_identity_pool.main.id}"
roles {
"authenticated" = <<EOF
{
actions = ["sts:AssumeRoleWithWebIdentity"]
principals {
type = "Federated"
identifiers = ["cognito-identity.amazonaws.com"]
}
condition {
test = "StringEquals"
variable = "cognito-identity.amazonaws.com:aud"
values = ["${aws_cognito_identity_pool.main.id}"]
}
condition {
test = "ForAnyValue:StringLike"
variable = "cognito-identity.amazonaws.com:amr"
values = ["authenticated"]
}
}
EOF
"unauthenticated" = <<EOF
{
actions = ["sts:AssumeRoleWithWebIdentity"]
principals {
type = "Federated"
identifiers = ["cognito-identity.amazonaws.com"]
}
condition {
test = "StringEquals"
variable = "cognito-identity.amazonaws.com:aud"
values = ["${aws_cognito_identity_pool.main.id}"]
}
}
EOF
}
}
This, however, doesn't work. It creates the pools and the client correctly, but doesn't attach anything to the auth/unauth roles. I can't figure out what I'm missing, and I can't find any examples of how to do this correctly other than by using the AWS console. Any help on working this out correctly in Terraform would be much appreciated!
After messing around with this for a few days, I finally figured it out. I was merely getting confused between "Assume Role Policy" and "Policy". Once I had that sorted out, it worked. Here is (roughly) what I have now. I'll put it here in hopes that it will save someone trying to figure this out for the first time a lot of grief.
For User Pool:
resource "aws_cognito_user_pool" "pool" {
name = "Sample Pool"
/* ... Lots more attributes */
}
For User Pool Client:
resource "aws_cognito_user_pool_client" "client" {
name = "client"
user_pool_id = aws_cognito_user_pool.pool.id
generate_secret = true
explicit_auth_flows = ["ADMIN_NO_SRP_AUTH"]
}
For Identity Pool:
resource "aws_cognito_identity_pool" "main" {
identity_pool_name = "SampleIdentities"
allow_unauthenticated_identities = false
cognito_identity_providers {
client_id = aws_cognito_user_pool_client.client.id
provider_name = aws_cognito_user_pool.pool.endpoint
server_side_token_check = true
}
}
Attach Roles to Identity Pool:
resource "aws_cognito_identity_pool_roles_attachment" "main" {
identity_pool_id = aws_cognito_identity_pool.main.id
roles = {
authenticated = aws_iam_role.auth_iam_role.arn
unauthenticated = aws_iam_role.unauth_iam_role.arn
}
}
And, finally, the roles and policies:
resource "aws_iam_role" "auth_iam_role" {
name = "auth_iam_role"
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Action": "sts:AssumeRole",
"Principal": {
"Federated": "cognito-identity.amazonaws.com"
},
"Effect": "Allow",
"Sid": ""
}
]
}
EOF
}
resource "aws_iam_role" "unauth_iam_role" {
name = "unauth_iam_role"
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Action": "sts:AssumeRole",
"Principal": {
"Federated": "cognito-identity.amazonaws.com"
},
"Effect": "Allow",
"Sid": ""
}
]
}
EOF
}
resource "aws_iam_role_policy" "web_iam_unauth_role_policy" {
name = "web_iam_unauth_role_policy"
role = aws_iam_role.unauth_iam_role.id
policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "",
"Action": "*",
"Effect": "Deny",
"Resource": "*"
}
]
}
EOF
}
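If you also want the trust policies scoped to this specific identity pool (the way the console-generated roles are, and as the conditions in the question intended), the authenticated role's assume role policy can be written roughly like this (a sketch using the web-identity action and the identity pool created above; the unauthenticated role would look the same with "unauthenticated" in the amr condition):
resource "aws_iam_role" "auth_iam_role" {
  name = "auth_iam_role"
  assume_role_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "cognito-identity.amazonaws.com"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "cognito-identity.amazonaws.com:aud": "${aws_cognito_identity_pool.main.id}"
        },
        "ForAnyValue:StringLike": {
          "cognito-identity.amazonaws.com:amr": "authenticated"
        }
      }
    }
  ]
}
EOF
}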
Note: Edited for updated Terraform language changes that no longer require "${...}" around references.