I have an SQS Terraform module in which I defined the queue name as below:
main_queue_name = "app-sqs-env-${var.env_name}"
The env_name is defined in a separate file, and I am able to create a queue with the desired name.
Now I want to create an SNS topic and want the queue to be subscribed to this topic.
When I create the SNS topic using sns_topic_name = "app-sns-env-${var.env_name}", I am able to create the topic as expected.
How do I define the sqs_endpoint in the SNS module? I want to use ${var.env_name} in the endpoint definition, since we pass different names for different environments.
In order to subscribe an SQS queue to an SNS topic, we have to do the following:
# Create some locals for the SQS and SNS names
locals {
  sqs-name = "app-sqs-env-${var.env_name}"
  sns-name = "app-sns-env-${var.env_name}"
}
# Use the caller identity so we can reference the account ID
data "aws_caller_identity" "current" {}

# Create a topic policy that allows the SQS queue to subscribe to the topic
data "aws_iam_policy_document" "sns-topic-policy" {
  statement {
    actions = [
      "SNS:Subscribe",
      "SNS:Receive",
    ]

    condition {
      test     = "StringLike"
      variable = "SNS:Endpoint"
      # In order to avoid a circular dependency, we construct the ARN ourselves
      values = [
        "arn:aws:sqs:${var.region}:${data.aws_caller_identity.current.account_id}:${local.sqs-name}",
      ]
    }

    effect = "Allow"

    principals {
      type        = "AWS"
      identifiers = ["*"]
    }

    resources = [
      "arn:aws:sns:${var.region}:${data.aws_caller_identity.current.account_id}:${local.sns-name}",
    ]

    sid = "sid-101"
  }
}
# Create a queue policy that allows the SNS topic to publish messages to the SQS queue
data "aws_iam_policy_document" "sqs-queue-policy" {
  policy_id = "arn:aws:sqs:${var.region}:${data.aws_caller_identity.current.account_id}:${local.sqs-name}/SQSDefaultPolicy"

  statement {
    sid    = "example-sns-topic"
    effect = "Allow"

    principals {
      type        = "AWS"
      identifiers = ["*"]
    }

    actions = [
      "SQS:SendMessage",
    ]

    resources = [
      "arn:aws:sqs:${var.region}:${data.aws_caller_identity.current.account_id}:${local.sqs-name}",
    ]

    condition {
      test     = "ArnEquals"
      variable = "aws:SourceArn"
      values = [
        "arn:aws:sns:${var.region}:${data.aws_caller_identity.current.account_id}:${local.sns-name}",
      ]
    }
  }
}
# Create the SNS topic and assign the topic policy to it
resource "aws_sns_topic" "sns-topic" {
  name         = local.sns-name
  display_name = local.sns-name
  policy       = data.aws_iam_policy_document.sns-topic-policy.json
}

# Create the SQS queue and assign the queue policy to it
resource "aws_sqs_queue" "sqs-queue" {
  name   = local.sqs-name
  policy = data.aws_iam_policy_document.sqs-queue-policy.json
}
# Subscribe the SQS queue to the SNS topic
resource "aws_sns_topic_subscription" "sns-topic" {
  topic_arn = aws_sns_topic.sns-topic.arn
  protocol  = "sqs"
  endpoint  = aws_sqs_queue.sqs-queue.arn
}
I hope the code and the comments above make sense. There is an example in the Terraform documentation for aws_sns_topic_subscription which is considerably more complex, but also usable.
I am very new to Terraform, so still finding my way around at the moment.
I need to add SES permissions to a Lambda function, for sending emails.
I thought it would be as simple as adding the DynamoDB permissions, but SES seems to use a different format (aws_ses_identity_policy instead of aws_iam_policy_attachment), and as a result, on the todo problem line below, I can't seem to just use .arn to link the policy to the role.
Is there a different way of doing this? Am I looking at older versions of the library? Any help would be appreciated. Thanks.
### DynamoDB
…
resource "aws_iam_policy" "DynamoDBCrudPolicy" {
name = "DynamoDBCrudPolicy"
policy = data.aws_iam_policy_document.dynamodbPolicyDocument.json
}
### SES
data "aws_iam_policy_document" "sesPolicyDocument" {
statement {
actions = ["SES:SendEmail", "SES:SendRawEmail"]
resources = [aws_ses_domain_identity.SESPolicy.arn]
principals {
identifiers = ["*"]
type = "AWS"
}
}
}
resource "aws_ses_domain_identity" "SESPolicyDomainID" {
domain = "example.com"
}
resource "aws_ses_identity_policy" "SESPolicy" {
identity = aws_ses_domain_identity.SESPolicyDomainID.arn
name = "SESPolicy"
policy = data.aws_iam_policy_document.sesPolicyDocument.json
}
## Attach Policies to Role
resource "aws_iam_policy_attachment" "DynamoDBCrudPolicy_iam_policy_attachment" {
name = "DynamoDBCrudPolicy_iam_policy_attachment"
roles = [ aws_iam_role.DomainRole.name ]
policy_arn = aws_iam_policy.DynamoDBCrudPolicy.arn
}
resource "aws_iam_policy_attachment" "SES_iam_policy_attachment" {
name = "SESPolicy_iam_policy_attachment"
roles = [ aws_iam_role.DomainRole.name ]
# Todo problem here
policy_arn = aws_ses_identity_policy.SESPolicy.arn
}
I am trying to build a simple EventBridge -> SNS -> AWS Chatbot setup to notify a Slack channel of any ECS deployment events. Below is my code:
resource "aws_cloudwatch_event_rule" "ecs_deployment" {
name = "${var.namespace}-${var.environment}-infra-ecs-deployment"
description = "This rule sends notification on the all app ECS Fargate deployments with respect to the environment."
event_pattern = <<EOF
{
"source": ["aws.ecs"],
"detail-type": ["ECS Deployment State Change"],
"detail": {
"clusterArn": [
{
"prefix": "arn:aws:ecs:<REGION>:<ACCOUNT>:cluster/${var.namespace}-${var.environment}-"
}
]
}
}
EOF
tags = {
Environment = "${var.environment}"
Origin = "terraform"
}
}
resource "aws_cloudwatch_event_target" "ecs_deployment" {
rule = aws_cloudwatch_event_rule.ecs_deployment.name
target_id = "${var.namespace}-${var.environment}-infra-ecs-deployment"
arn = aws_sns_topic.ecs_deployment.arn
}
resource "aws_sns_topic" "ecs_deployment" {
name = "${var.namespace}-${var.environment}-infra-ecs-deployment"
display_name = "${var.namespace} ${var.environment}"
}
resource "aws_sns_topic_policy" "default" {
arn = aws_sns_topic.ecs_deployment.arn
policy = data.aws_iam_policy_document.sns_topic_policy.json
}
data "aws_iam_policy_document" "sns_topic_policy" {
statement {
effect = "Allow"
actions = ["SNS:Publish"]
principals {
type = "Service"
identifiers = ["events.amazonaws.com"]
}
resources = [aws_sns_topic.ecs_deployment.arn]
}
}
Based on the above code, Terraform creates the EventBridge rule with an SNS target. From there, I create the AWS Chatbot in the console and subscribe it to the SNS topic.
The problem is, when I remove the detail block, it works. But what I want is to filter the events down to those coming from clusters with the mentioned prefix.
Is this possible? Or did I do it the wrong way?
Any help is appreciated.
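For reference, the region and account ID in the ARN prefix can be derived from data sources rather than hard-coded; a minimal sketch (the local name is hypothetical):

# Sketch: derive the region and account ID instead of hard-coding them
data "aws_caller_identity" "current" {}
data "aws_region" "current" {}

locals {
  ecs_cluster_arn_prefix = "arn:aws:ecs:${data.aws_region.current.name}:${data.aws_caller_identity.current.account_id}:cluster/${var.namespace}-${var.environment}-"
}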
I'm trying to create, via Terraform, a Lambda function that is triggered by Kinesis and whose on-failure destination is an SQS queue.
I created the Lambda and configured the source and destination.
When I send a message to the Kinesis stream, the Lambda is triggered but no messages are sent to the DLQ.
What am I missing?
My Lambda event source mapping:
resource "aws_lambda_event_source_mapping" "csp_management_service_integration_stream_mapping" {
event_source_arn = local.kinesis_csp_management_service_integration_stream_arn
function_name = module.csp_management_service_integration_lambda.lambda_arn
batch_size = var.shared_kinesis_configuration.batch_size
bisect_batch_on_function_error = var.shared_kinesis_configuration.bisect_batch_on_function_error
starting_position = var.shared_kinesis_configuration.starting_position
maximum_retry_attempts = var.shared_kinesis_configuration.maximum_retry_attempts
maximum_record_age_in_seconds = var.shared_kinesis_configuration.maximum_record_age_in_seconds
function_response_types = var.shared_kinesis_configuration.function_response_types
destination_config {
on_failure {
destination_arn = local.shared_default_sqs_error_handling_dlq_arn
}
}
}
resource "aws_iam_policy" "shared_deadletter_sqs_queue_policy" {
name = "shared-deadletter-sqs-queue-policy"
path = "/"
policy = jsonencode({
Version = "2012-10-17"
Statement = [
{
Action = [
"sqs:SendMessage",
]
Effect = "Allow"
Resource = [
local.shared_default_sqs_error_handling_dlq_arn
]
},
]
})
}
You should take a look at the Lambda DestinationDeliveryFailures metric in CloudWatch to see whether you have a permission error.
I think you are facing a permission issue; try attaching a role to your Lambda function with access to the SQS DLQ.
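For example, a minimal sketch of attaching the policy from the question to the function's execution role (the role name below is assumed, since the question doesn't show it):

# Attach the DLQ send policy to the Lambda's execution role
# aws_iam_role.csp_lambda_execution_role is an assumed name
resource "aws_iam_role_policy_attachment" "shared_deadletter_sqs_queue_policy_attachment" {
  role       = aws_iam_role.csp_lambda_execution_role.name
  policy_arn = aws_iam_policy.shared_deadletter_sqs_queue_policy.arn
}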
Is your DLQ encrypted with KMS? You will need to provide permissions on the KMS key too, in addition to the SQS permissions.
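For instance, a sketch of the extra policy, assuming a customer-managed key whose ARN is exposed as local.shared_dlq_kms_key_arn (an assumed name); it would need to be attached to the execution role like the SQS policy:

# Sketch: KMS permissions the function's role needs to write to an encrypted DLQ
resource "aws_iam_policy" "shared_deadletter_kms_policy" {
  name = "shared-deadletter-kms-policy"

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Action   = ["kms:GenerateDataKey", "kms:Decrypt"]
        Effect   = "Allow"
        Resource = [local.shared_dlq_kms_key_arn] # assumed local
      },
    ]
  })
}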
How is Lambda reporting failure?
I have the following setup in Google Cloud:
an application 'generator', which publishes messages to a Google Cloud PubSub topic.
an application 'worker', which consumes each message.
Any invalid PubSub messages should end up in a 'dead letter' topic.
However, whenever I configure this via Terraform, the Google Cloud console mentions that I do not have the 'subscriber' and 'publisher' roles attached to my project's Pub/Sub service account.
I have the following Terraform configuration, which seems correct AFAIK:
resource "google_project_service_identity" "pubsub_sa" {
provider = google-beta
project = var.project_id
service = "pubsub.googleapis.com"
}
/* ... topic and dead-letter topic config here ... */
data "google_iam_policy" "project_pubsub_publishers" {
binding {
role = "roles/pubsub.publisher"
members = [
"serviceAccount:${google_service_account.project_generator_serviceaccount.email}",
"serviceAccount:${google_service_account.project_worker_serviceaccount.email}",
"serviceAccount:${google_project_service_identity.pubsub_sa.email}",
]
}
}
resource "google_pubsub_topic_iam_policy" "project_request_publishers" {
project = var.project_id
topic = google_pubsub_topic.generator_request_pubsub.name
policy_data = data.google_iam_policy.project_pubsub_publishers.policy_data
}
data "google_iam_policy" "project_pubsub_subscribers" {
binding {
role = "roles/pubsub.subscriber"
members = [
"serviceAccount:${google_service_account.project_generator_serviceaccount.email}",
"serviceAccount:${google_service_account.project_worker_serviceaccount.email}",
"serviceAccount:${google_project_service_identity.pubsub_sa.email}",
]
}
}
resource "google_pubsub_topic_iam_policy" "project_request_subscribers" {
topic = google_pubsub_topic.generator_request_pubsub.name
project = var.project_id
policy_data = data.google_iam_policy.project_pubsub_subscribers.policy_data
}
Clicking 'Add' in the web GUI and then running terraform plan shows the following changes:
Terraform will perform the following actions:

  # module.gcloud.google_pubsub_topic_iam_policy.project_invalid_request_publishers will be updated in-place
  ~ resource "google_pubsub_topic_iam_policy" "project_invalid_request_publishers" {
        id          = "projects/MY-GCLOUD-PROJECTID/topics/generator-request-pubsub-invalid"
      ~ policy_data = jsonencode(
          ~ {
              ~ bindings = [
                  ~ {
                      ~ members = [
                          + "serviceAccount:cicd-generator-sa@MY-GCLOUD-PROJECTID.iam.gserviceaccount.com",
                          + "serviceAccount:cicd-worker-sa@MY-GCLOUD-PROJECTID.iam.gserviceaccount.com",
                            "serviceAccount:service-251572179467@gcp-sa-pubsub.iam.gserviceaccount.com",
                        ]
                        # (1 unchanged element hidden)
                    },
                  - {
                      - members = [
                          - "serviceAccount:cicd-generator-sa@MY-GCLOUD-PROJECTID.iam.gserviceaccount.com",
                          - "serviceAccount:cicd-worker-sa@MY-GCLOUD-PROJECTID.iam.gserviceaccount.com",
                          - "serviceAccount:service-251572179467@gcp-sa-pubsub.iam.gserviceaccount.com",
                        ]
                      - role = "roles/pubsub.subscriber"
                    },
                ]
            }
        )
        # (3 unchanged attributes hidden)
    }

  # module.gcloud.google_pubsub_topic_iam_policy.project_invalid_request_subscribers will be updated in-place
  ~ resource "google_pubsub_topic_iam_policy" "project_invalid_request_subscribers" {
        id          = "projects/MY-GCLOUD-PROJECTID/topics/generator-request-pubsub-invalid"
      ~ policy_data = jsonencode(
          ~ {
              ~ bindings = [
                  - {
                      - members = [
                          - "serviceAccount:service-251572179467@gcp-sa-pubsub.iam.gserviceaccount.com",
                        ]
                      - role = "roles/pubsub.publisher"
                    },
                    {
                        members = [
                            "serviceAccount:cicd-generator-sa@MY-GCLOUD-PROJECTID.iam.gserviceaccount.com",
                            "serviceAccount:cicd-worker-sa@MY-GCLOUD-PROJECTID.iam.gserviceaccount.com",
                            "serviceAccount:service-251572179467@gcp-sa-pubsub.iam.gserviceaccount.com",
                        ]
                        role = "roles/pubsub.subscriber"
                    },
                ]
            }
        )
        # (3 unchanged attributes hidden)
    }

  # module.gcloud.google_pubsub_topic_iam_policy.project_request_subscribers will be updated in-place
  ~ resource "google_pubsub_topic_iam_policy" "project_request_subscribers" {
        id          = "projects/MY-GCLOUD-PROJECTID/topics/generator-request-pubsub"
      ~ policy_data = jsonencode(
          ~ {
              ~ bindings = [
                  ~ {
                      ~ role = "roles/pubsub.publisher" -> "roles/pubsub.subscriber"
                        # (1 unchanged element hidden)
                    },
                ]
            }
        )
        # (3 unchanged attributes hidden)
    }
But I'm not sure what I'm doing wrong here. Any ideas?
As per the documentation, it seems that you need to first actually set the configuration for a 'dead-letter topic' in GCP:
Setting a dead-letter topic
which (among some other information) states:
To create a subscription and set a dead-letter topic, use the gcloud pubsub subscriptions create command:
gcloud pubsub subscriptions create subscription-id \
  --topic=topic-id \
  --dead-letter-topic=dead-letter-topic-id \
  [--max-delivery-attempts=max-delivery-attempts] \
  [--dead-letter-topic-project=dead-letter-topic-project]
To update a subscription and set a dead-letter topic, use the gcloud pubsub subscriptions update command:
gcloud pubsub subscriptions update subscription-id \
  --dead-letter-topic=dead-letter-topic-id \
  [--max-delivery-attempts=max-delivery-attempts] \
  [--dead-letter-topic-project=dead-letter-topic-project]
Granting forwarding permissions
To forward undeliverable messages to a dead-letter topic, Pub/Sub must have permission to do the following:
Publish messages to the topic.
Acknowledge the messages, which removes them from the subscription.
Pub/Sub creates and maintains a service account for each project: service-project-number@gcp-sa-pubsub.iam.gserviceaccount.com. You can grant forwarding permissions by assigning publisher and subscriber roles to this service account. If you configured the subscription using Cloud Console, the roles are granted automatically.
Assigning Pub/Sub the publisher role
To grant Pub/Sub permission to publish messages to a dead-letter topic, run the following command:
PUBSUB_SERVICE_ACCOUNT="service-${project-number}#gcp-sa-pubsub.iam.gserviceaccount.com"
gcloud pubsub topics add-iam-policy-binding dead-letter-topic-id \
--member="serviceAccount:$PUBSUB_SERVICE_ACCOUNT"\
--role="roles/pubsub.publisher"
Assigning Pub/Sub the subscriber role
To grant Pub/Sub permission to acknowledge forwarded undeliverable messages, run the following command:
PUBSUB_SERVICE_ACCOUNT="service-${project-number}#gcp-sa-pubsub.iam.gserviceaccount.com"
gcloud pubsub subscriptions add-iam-policy-binding subscription-id \
--member="serviceAccount:$PUBSUB_SERVICE_ACCOUNT"\
--role="roles/pubsub.subscriber"
Hope this is helpful for you.
Regards.
Jaime is right, you need to add those IAM policies to
"service-${project-number}#gcp-sa-pubsub.iam.gserviceaccount.com"
It is a specific service account hidden from the main list. You can find it in the console under IAM by ticking the checkbox in the top right corner, "Include Google-provided role grants".
You also need to add a google_pubsub_topic_iam_policy.
Here is a working Terraform example:
data "google_project" "current" {}
data "google_iam_policy" "publisher" {
binding {
role = "roles/pubsub.publisher"
members = [
"serviceAccount:service-${data.google_project.current.number}#gcp-sa-pubsub.iam.gserviceaccount.com",
]
}
}
resource "google_pubsub_topic_iam_policy" "policy" {
project = var.project
topic = google_pubsub_topic.yourTopic.name
policy_data = data.google_iam_policy.publisher.policy_data
}
data "google_iam_policy" "subscriber" {
binding {
role = "roles/pubsub.subscriber"
members = [
"serviceAccount:service-${data.google_project.current.number}#gcp-sa-pubsub.iam.gserviceaccount.com",
]
}
}
resource "google_pubsub_subscription_iam_policy" "policy" {
subscription = google_pubsub_subscription.yourSubscription.name
policy_data = data.google_iam_policy.subscriber.policy_data
}
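The dead-letter wiring itself can also be expressed in Terraform instead of the gcloud commands quoted above; a sketch (the subscription name and the dead-letter topic resource are assumptions):

resource "google_pubsub_subscription" "yourSubscription" {
  name  = "your-subscription" # assumed name
  topic = google_pubsub_topic.yourTopic.name

  dead_letter_policy {
    # google_pubsub_topic.yourDeadLetterTopic is an assumed resource name
    dead_letter_topic     = google_pubsub_topic.yourDeadLetterTopic.id
    max_delivery_attempts = 5
  }
}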
I had quite a hard time setting up automation with Beanstalk and CodePipeline...
I finally got it running; the main issue was the S3 CloudWatch event needed to trigger the start of the CodePipeline. I had missed the CloudTrail part, which is necessary, and I couldn't find that in any documentation.
So the current setup is:
An S3 file gets uploaded -> a CloudWatch Event triggers the CodePipeline -> CodePipeline deploys to the Elastic Beanstalk env.
As I said, to get the CloudWatch Event trigger you need a CloudTrail trail like:
resource "aws_cloudtrail" "example" {
# ... other configuration ...
name = "codepipeline-source-trail" #"codepipeline-${var.project_name}-trail"
is_multi_region_trail = true
s3_bucket_name = "codepipeline-cloudtrail-placeholder-bucket-eu-west-1"
event_selector {
read_write_type = "WriteOnly"
include_management_events = true
data_resource {
type = "AWS::S3::Object"
values = ["${data.aws_s3_bucket.bamboo-deploy-bucket.arn}/${var.project_name}/file.zip"]
}
}
}
But this only creates a new trail. The problem is that AWS allows a maximum of 5 trails per region. In the AWS console you can add multiple data events to one trail, but I couldn't manage to do this in Terraform. I tried to use the same name, but that just raises an error:
"Error creating CloudTrail: TrailAlreadyExistsException: Trail codepipeline-source-trail already exists for customer: XXXX"
I tried my best to explain my problem; not sure if it is understandable.
In a nutshell: I want to add an S3 data event to an existing CloudTrail trail with Terraform.
Thanks for any help,
Daniel
"As I said, to get the CloudWatch Event trigger you need a CloudTrail trail like:"
You do not need multiple CloudTrail trails to invoke a CloudWatch Event; you can create service-specific rules as well.
See: Create a CloudWatch Events rule for an Amazon S3 source (console).
Use a CloudWatch event rule to invoke CodePipeline as a target. Let's say you created this event rule:
{
  "source": [
    "aws.s3"
  ],
  "detail-type": [
    "AWS API Call via CloudTrail"
  ],
  "detail": {
    "eventSource": [
      "s3.amazonaws.com"
    ],
    "eventName": [
      "PutObject"
    ]
  }
}
You add CodePipeline as a target for this rule, and eventually CodePipeline deploys to the Elastic Beanstalk env.
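In Terraform, that wiring might look roughly like the sketch below (the rule name, pipeline resource, and events role are assumptions; the role must allow events.amazonaws.com to call codepipeline:StartPipelineExecution):

resource "aws_cloudwatch_event_rule" "s3_source" {
  name = "codepipeline-s3-source" # assumed name

  event_pattern = jsonencode({
    source        = ["aws.s3"]
    "detail-type" = ["AWS API Call via CloudTrail"]
    detail = {
      eventSource = ["s3.amazonaws.com"]
      eventName   = ["PutObject"]
    }
  })
}

resource "aws_cloudwatch_event_target" "codepipeline" {
  rule     = aws_cloudwatch_event_rule.s3_source.name
  arn      = aws_codepipeline.example.arn   # assumed pipeline resource
  role_arn = aws_iam_role.events_invoke.arn # assumed role that lets EventBridge start the pipeline
}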
Have you tried adding multiple data_resource blocks to your current trail, instead of adding a new trail with the same name?
resource "aws_cloudtrail" "example" {
# ... other configuration ...
name = "codepipeline-source-trail" #"codepipeline-${var.project_name}-trail"
is_multi_region_trail = true
s3_bucket_name = "codepipeline-cloudtrail-placeholder-bucket-eu-west-1"
event_selector {
read_write_type = "WriteOnly"
include_management_events = true
data_resource {
type = "AWS::S3::Object"
values = ["${data.aws_s3_bucket.bamboo-deploy-bucket.arn}/${var.project_A}/file.zip"]
}
data_resource {
type = "AWS::S3::Object"
values = ["${data.aws_s3_bucket.bamboo-deploy-bucket.arn}/${var.project_B}/fileB.zip"]
}
}
}
You should be able to add up to 250 data resources (across all event selectors in a trail) and up to 5 event selectors to your current trail; see the CloudTrail quota limits.