I have the following setup in Google Cloud:
application 'generator', which publishes messages to a Google Cloud Pub/Sub topic.
application 'worker', which consumes each message.
Any invalid Pub/Sub messages should end up in a 'dead letter' topic attached to the main topic.
However, whenever I configure this via Terraform, the Google Cloud console warns that the 'subscriber' and 'publisher' roles are not attached to my project's Pub/Sub service account.
I have the following Terraform configuration, which seems correct AFAIK:
resource "google_project_service_identity" "pubsub_sa" {
provider = google-beta
project = var.project_id
service = "pubsub.googleapis.com"
}
/* ... topic and dead-letter topic config here ... */
data "google_iam_policy" "project_pubsub_publishers" {
binding {
role = "roles/pubsub.publisher"
members = [
"serviceAccount:${google_service_account.project_generator_serviceaccount.email}",
"serviceAccount:${google_service_account.project_worker_serviceaccount.email}",
"serviceAccount:${google_project_service_identity.pubsub_sa.email}",
]
}
}
resource "google_pubsub_topic_iam_policy" "project_request_publishers" {
project = var.project_id
topic = google_pubsub_topic.generator_request_pubsub.name
policy_data = data.google_iam_policy.project_pubsub_publishers.policy_data
}
data "google_iam_policy" "project_pubsub_subscribers" {
binding {
role = "roles/pubsub.subscriber"
members = [
"serviceAccount:${google_service_account.project_generator_serviceaccount.email}",
"serviceAccount:${google_service_account.project_worker_serviceaccount.email}",
"serviceAccount:${google_project_service_identity.pubsub_sa.email}",
]
}
}
resource "google_pubsub_topic_iam_policy" "project_request_subscribers" {
topic = google_pubsub_topic.generator_request_pubsub.name
project = var.project_id
policy_data = data.google_iam_policy.project_pubsub_subscribers.policy_data
}
Clicking 'Add' in the web GUI and then running terraform plan shows the following changes:
Terraform will perform the following actions:
# module.gcloud.google_pubsub_topic_iam_policy.project_invalid_request_publishers will be updated in-place
~ resource "google_pubsub_topic_iam_policy" "project_invalid_request_publishers" {
id = "projects/MY-GCLOUD-PROJECTID/topics/generator-request-pubsub-invalid"
~ policy_data = jsonencode(
~ {
~ bindings = [
~ {
~ members = [
+ "serviceAccount:cicd-generator-sa#MY-GCLOUD-PROJECTID.iam.gserviceaccount.com",
+ "serviceAccount:cicd-worker-sa#MY-GCLOUD-PROJECTID.iam.gserviceaccount.com",
"serviceAccount:service-251572179467#gcp-sa-pubsub.iam.gserviceaccount.com",
]
# (1 unchanged element hidden)
},
- {
- members = [
- "serviceAccount:cicd-generator-sa#MY-GCLOUD-PROJECTID.iam.gserviceaccount.com",
- "serviceAccount:cicd-worker-sa#MY-GCLOUD-PROJECTID.iam.gserviceaccount.com",
- "serviceAccount:service-251572179467#gcp-sa-pubsub.iam.gserviceaccount.com",
]
- role = "roles/pubsub.subscriber"
},
]
}
)
# (3 unchanged attributes hidden)
}
# module.gcloud.google_pubsub_topic_iam_policy.project_invalid_request_subscribers will be updated in-place
~ resource "google_pubsub_topic_iam_policy" "project_invalid_request_subscribers" {
id = "projects/MY-GCLOUD-PROJECTID/topics/generator-request-pubsub-invalid"
~ policy_data = jsonencode(
~ {
~ bindings = [
- {
- members = [
- "serviceAccount:service-251572179467#gcp-sa-pubsub.iam.gserviceaccount.com",
]
- role = "roles/pubsub.publisher"
},
{
members = [
"serviceAccount:cicd-generator-sa#MY-GCLOUD-PROJECTID.iam.gserviceaccount.com",
"serviceAccount:cicd-worker-sa#MY-GCLOUD-PROJECTID.iam.gserviceaccount.com",
"serviceAccount:service-251572179467#gcp-sa-pubsub.iam.gserviceaccount.com",
]
role = "roles/pubsub.subscriber"
},
]
}
)
# (3 unchanged attributes hidden)
}
# module.gcloud.google_pubsub_topic_iam_policy.project_request_subscribers will be updated in-place
~ resource "google_pubsub_topic_iam_policy" "project_request_subscribers" {
id = "projects/MY-GCLOUD-PROJECTID/topics/generator-request-pubsub"
~ policy_data = jsonencode(
~ {
~ bindings = [
~ {
~ role = "roles/pubsub.publisher" -> "roles/pubsub.subscriber"
# (1 unchanged element hidden)
},
]
}
)
# (3 unchanged attributes hidden)
}
But I'm not sure what I'm doing wrong here. Any ideas?
As per the documentation, it seems that you first need to actually configure a 'dead-letter topic' on the subscription in GCP.
Setting a dead-letter topic
Which (among other information) states that:
To create a subscription and set a dead-letter topic, use the gcloud pubsub subscriptions create command:
gcloud pubsub subscriptions create subscription-id \
--topic=topic-id \
--dead-letter-topic=dead-letter-topic-id \
[--max-delivery-attempts=max-delivery-attempts] \
[--dead-letter-topic-project=dead-letter-topic-project]
To update a subscription and set a dead-letter topic, use the gcloud pubsub subscriptions update command:
gcloud pubsub subscriptions update subscription-id \
--dead-letter-topic=dead-letter-topic-id \
[--max-delivery-attempts=max-delivery-attempts] \
[--dead-letter-topic-project=dead-letter-topic-project]
Granting forwarding permissions
To forward undeliverable messages to a dead-letter topic, Pub/Sub must have permission to do the following:
Publish messages to the topic.
Acknowledge the messages, which removes them from the subscription.
Pub/Sub creates and maintains a service account for each project: service-project-number@gcp-sa-pubsub.iam.gserviceaccount.com. You can grant forwarding permissions by assigning publisher and subscriber roles to this service account. If you configured the subscription using Cloud Console, the roles are granted automatically.
Assigning Pub/Sub the publisher role
To grant Pub/Sub permission to publish messages to a dead-letter topic, run the following command:
PUBSUB_SERVICE_ACCOUNT="service-${PROJECT_NUMBER}@gcp-sa-pubsub.iam.gserviceaccount.com"
gcloud pubsub topics add-iam-policy-binding dead-letter-topic-id \
--member="serviceAccount:$PUBSUB_SERVICE_ACCOUNT" \
--role="roles/pubsub.publisher"
Assigning Pub/Sub the subscriber role
To grant Pub/Sub permission to acknowledge forwarded undeliverable messages, run the following command:
PUBSUB_SERVICE_ACCOUNT="service-${PROJECT_NUMBER}@gcp-sa-pubsub.iam.gserviceaccount.com"
gcloud pubsub subscriptions add-iam-policy-binding subscription-id \
--member="serviceAccount:$PUBSUB_SERVICE_ACCOUNT" \
--role="roles/pubsub.subscriber"
Hope this is helpful for you.
Regards.
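Since the question manages everything with Terraform, the gcloud subscription commands above translate roughly to the following. This is only a minimal sketch: the subscription name, the dead-letter topic resource name (generator_request_pubsub_invalid) and the max_delivery_attempts value are assumptions, while the main topic reference reuses the question's generator_request_pubsub.
resource "google_pubsub_subscription" "worker" {
  name  = "worker-subscription"
  topic = google_pubsub_topic.generator_request_pubsub.name

  dead_letter_policy {
    # Must be the fully qualified topic id, i.e. projects/<project>/topics/<topic>
    dead_letter_topic     = google_pubsub_topic.generator_request_pubsub_invalid.id
    max_delivery_attempts = 5
  }
}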
Jaime is right, you need to add those IAM policies to
"service-${project-number}#gcp-sa-pubsub.iam.gserviceaccount.com"
It is a Google-managed service account that is hidden from the main list. You can find it in the console under IAM by checking the box in the top right corner, "Include Google-provided role grants".
You also need to add a google_pubsub_topic_iam_policy.
Here is a working Terraform example:
data "google_project" "current" {}
data "google_iam_policy" "publisher" {
binding {
role = "roles/pubsub.publisher"
members = [
"serviceAccount:service-${data.google_project.current.number}#gcp-sa-pubsub.iam.gserviceaccount.com",
]
}
}
resource "google_pubsub_topic_iam_policy" "policy" {
project = var.project
topic = google_pubsub_topic.yourTopic.name
policy_data = data.google_iam_policy.publisher.policy_data
}
data "google_iam_policy" "subscriber" {
binding {
role = "roles/pubsub.subscriber"
members = [
"serviceAccount:service-${data.google_project.current.number}#gcp-sa-pubsub.iam.gserviceaccount.com",
]
}
}
resource "google_pubsub_subscription_iam_policy" "policy" {
subscription = google_pubsub_subscription.yourSubscription.name
policy_data = data.google_iam_policy.subscriber.policy_data
}
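Note that google_pubsub_topic_iam_policy is authoritative: two of them on the same topic (as with project_request_publishers and project_request_subscribers in the question) will keep overwriting each other's policy_data, which is exactly the flip-flopping the plan output shows. If additive bindings are preferred, a non-authoritative sketch reusing the names from the example above could look like this:
resource "google_pubsub_topic_iam_member" "dead_letter_publisher" {
  project = var.project
  topic   = google_pubsub_topic.yourTopic.name
  role    = "roles/pubsub.publisher"
  member  = "serviceAccount:service-${data.google_project.current.number}@gcp-sa-pubsub.iam.gserviceaccount.com"
}

resource "google_pubsub_subscription_iam_member" "dead_letter_subscriber" {
  subscription = google_pubsub_subscription.yourSubscription.name
  role         = "roles/pubsub.subscriber"
  member       = "serviceAccount:service-${data.google_project.current.number}@gcp-sa-pubsub.iam.gserviceaccount.com"
}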
Related
I am very new to Terraform, so I'm still finding my way around at the moment.
I need to add SES permissions to a Lambda function so it can send emails.
I thought it would be as simple as adding the DynamoDB permissions, but SES seems to use a different resource, aws_ses_identity_policy, instead of aws_iam_policy_attachment. As a result, on the 'Todo problem' line below, I can't simply use .arn to link the policy to the role.
Is there a different way of doing this? Am I looking at older versions of the library? Any help would be appreciated. Thanks.
### DynamoDB
…
resource "aws_iam_policy" "DynamoDBCrudPolicy" {
name = "DynamoDBCrudPolicy"
policy = data.aws_iam_policy_document.dynamodbPolicyDocument.json
}
### SES
data "aws_iam_policy_document" "sesPolicyDocument" {
statement {
actions = ["SES:SendEmail", "SES:SendRawEmail"]
resources = [aws_ses_domain_identity.SESPolicyDomainID.arn]
principals {
identifiers = ["*"]
type = "AWS"
}
}
}
resource "aws_ses_domain_identity" "SESPolicyDomainID" {
domain = "example.com"
}
resource "aws_ses_identity_policy" "SESPolicy" {
identity = aws_ses_domain_identity.SESPolicyDomainID.arn
name = "SESPolicy"
policy = data.aws_iam_policy_document.sesPolicyDocument.json
}
## Attach Policies to Role
resource "aws_iam_policy_attachment" "DynamoDBCrudPolicy_iam_policy_attachment" {
name = "DynamoDBCrudPolicy_iam_policy_attachment"
roles = [ aws_iam_role.DomainRole.name ]
policy_arn = aws_iam_policy.DynamoDBCrudPolicy.arn
}
resource "aws_iam_policy_attachment" "SES_iam_policy_attachment" {
name = "SESPolicy_iam_policy_attachment"
roles = [ aws_iam_role.DomainRole.name ]
# Todo problem here
policy_arn = aws_ses_identity_policy.SESPolicy.arn
}
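For comparison, SES send permissions for a Lambda execution role are usually granted with a regular IAM policy attached to the role; aws_ses_identity_policy is a resource policy on the SES identity itself, not an IAM policy, so it cannot be attached to a role. A minimal sketch under that assumption (the names below are illustrative):
data "aws_iam_policy_document" "sesSendPolicyDocument" {
  statement {
    actions = ["ses:SendEmail", "ses:SendRawEmail"]
    # "*" keeps the sketch simple; this can often be scoped down to the
    # domain identity ARN instead.
    resources = ["*"]
  }
}

resource "aws_iam_policy" "SESSendPolicy" {
  name   = "SESSendPolicy"
  policy = data.aws_iam_policy_document.sesSendPolicyDocument.json
}

resource "aws_iam_role_policy_attachment" "SES_send_iam_policy_attachment" {
  role       = aws_iam_role.DomainRole.name
  policy_arn = aws_iam_policy.SESSendPolicy.arn
}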
I am trying to build a simple EventBridge -> SNS -> AWS Chatbot pipeline to notify a Slack channel of any ECS deployment events. Below is my code:
resource "aws_cloudwatch_event_rule" "ecs_deployment" {
name = "${var.namespace}-${var.environment}-infra-ecs-deployment"
description = "This rule sends notification on the all app ECS Fargate deployments with respect to the environment."
event_pattern = <<EOF
{
"source": ["aws.ecs"],
"detail-type": ["ECS Deployment State Change"],
"detail": {
"clusterArn": [
{
"prefix": "arn:aws:ecs:<REGION>:<ACCOUNT>:cluster/${var.namespace}-${var.environment}-"
}
]
}
}
EOF
tags = {
Environment = "${var.environment}"
Origin = "terraform"
}
}
resource "aws_cloudwatch_event_target" "ecs_deployment" {
rule = aws_cloudwatch_event_rule.ecs_deployment.name
target_id = "${var.namespace}-${var.environment}-infra-ecs-deployment"
arn = aws_sns_topic.ecs_deployment.arn
}
resource "aws_sns_topic" "ecs_deployment" {
name = "${var.namespace}-${var.environment}-infra-ecs-deployment"
display_name = "${var.namespace} ${var.environment}"
}
resource "aws_sns_topic_policy" "default" {
arn = aws_sns_topic.ecs_deployment.arn
policy = data.aws_iam_policy_document.sns_topic_policy.json
}
data "aws_iam_policy_document" "sns_topic_policy" {
statement {
effect = "Allow"
actions = ["SNS:Publish"]
principals {
type = "Service"
identifiers = ["events.amazonaws.com"]
}
resources = [aws_sns_topic.ecs_deployment.arn]
}
}
Based on the above code, Terraform will create an AWS EventBridge rule with an SNS target. From there, I create the AWS Chatbot in the console and subscribe it to the SNS topic.
The problem is that when I remove the detail block, it works. But what I want is to filter the events so that they only come from clusters with the mentioned prefix.
Is this possible? Or did I do it the wrong way?
Any help is appreciated.
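For what it's worth, the same rule can also be expressed with jsonencode instead of a heredoc, which avoids quoting and interpolation pitfalls. This is just a sketch using the same <REGION>/<ACCOUNT> placeholders as above:
resource "aws_cloudwatch_event_rule" "ecs_deployment_jsonencode" {
  name        = "${var.namespace}-${var.environment}-infra-ecs-deployment"
  description = "Notify on ECS Fargate deployment events for this environment."

  # jsonencode builds the same event pattern as the heredoc version.
  event_pattern = jsonencode({
    source        = ["aws.ecs"]
    "detail-type" = ["ECS Deployment State Change"]
    detail = {
      clusterArn = [
        { prefix = "arn:aws:ecs:<REGION>:<ACCOUNT>:cluster/${var.namespace}-${var.environment}-" }
      ]
    }
  })
}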
I'm trying to create, via Terraform, a Lambda function that is triggered by Kinesis and whose on-failure destination is an AWS SQS queue.
I created the Lambda and configured the source and destination.
When I send a message to the Kinesis stream, the Lambda is triggered but does not send messages to the DLQ.
What am I missing?
My Lambda event source mapping:
resource "aws_lambda_event_source_mapping" "csp_management_service_integration_stream_mapping" {
event_source_arn = local.kinesis_csp_management_service_integration_stream_arn
function_name = module.csp_management_service_integration_lambda.lambda_arn
batch_size = var.shared_kinesis_configuration.batch_size
bisect_batch_on_function_error = var.shared_kinesis_configuration.bisect_batch_on_function_error
starting_position = var.shared_kinesis_configuration.starting_position
maximum_retry_attempts = var.shared_kinesis_configuration.maximum_retry_attempts
maximum_record_age_in_seconds = var.shared_kinesis_configuration.maximum_record_age_in_seconds
function_response_types = var.shared_kinesis_configuration.function_response_types
destination_config {
on_failure {
destination_arn = local.shared_default_sqs_error_handling_dlq_arn
}
}
}
resource "aws_iam_policy" "shared_deadletter_sqs_queue_policy" {
name = "shared-deadletter-sqs-queue-policy"
path = "/"
policy = jsonencode({
Version = "2012-10-17"
Statement = [
{
Action = [
"sqs:SendMessage",
]
Effect = "Allow"
Resource = [
local.shared_default_sqs_error_handling_dlq_arn
]
},
]
})
}
You should take a look at the following metric to see if you have a permission error.
I think you are facing a permissions issue; try attaching a role to your Lambda function with access to the SQS DLQ.
Is your DLQ encrypted with KMS? You will need to provide permissions for the KMS key too, in addition to the SQS permissions.
How is Lambda reporting failure?
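Building on the permission suggestions above: the aws_iam_policy in the question is created but never attached to the Lambda's execution role. A minimal sketch of the missing attachment, where the role resource name is an assumption and should be replaced with whatever role the Lambda actually uses:
resource "aws_iam_role_policy_attachment" "lambda_dlq_send" {
  # Assumed role resource name; point this at your Lambda's execution role.
  role       = aws_iam_role.csp_management_service_integration_lambda_role.name
  policy_arn = aws_iam_policy.shared_deadletter_sqs_queue_policy.arn
}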
I have an SQS Terraform module in which I defined the queue name as below:
main_queue_name = "app-sqs-env-${var.env_name}"
by defining env_name in a separate file, and I am able to create a queue with the desired name.
Now I want to create an SNS topic and want the queue to be subscribed to this topic.
When I create the SNS topic using sns_topic_name = "app-sns-env-${var.env_name}", I am able to create the topic as expected.
How do I define the sqs_endpoint in the SNS module? I want to use ${var.env_name} in this endpoint definition, as we pass different names for different environments.
In order to be able to subscribe an SQS queue to an SNS topic we have to do the following:
# Create some locals for SQS and SNS names
locals {
sqs-name = "app-sqs-env-${var.env-name}"
sns-name = "app-sns-env-${var.env-name}"
}
# Inject caller ID for being able to use the account ID
data "aws_caller_identity" "current" {}
# Create a topic policy. This will allow for the SQS queue to be able to subscribe to the topic
data "aws_iam_policy_document" "sns-topic-policy" {
statement {
actions = [
"SNS:Subscribe",
"SNS:Receive",
]
condition {
test = "StringLike"
variable = "SNS:Endpoint"
# In order to avoid circular dependencies, we must create the ARN ourselves
values = [
"arn:aws:sqs:${var.region}:${data.aws_caller_identity.current.account_id}:${local.sqs-name}",
]
}
effect = "Allow"
principals {
type = "AWS"
identifiers = ["*"]
}
resources = [
"arn:aws:sns:${var.region}:${data.aws_caller_identity.current.account_id}:${local.sns-name}"
]
sid = "sid-101"
}
}
# Create a queue policy. This allows for the SNS topic to be able to publish messages to the SQS queue
data "aws_iam_policy_document" "sqs-queue-policy" {
policy_id = "arn:aws:sqs:${var.region}:${data.aws_caller_identity.current.account_id}:${local.sqs-name}/SQSDefaultPolicy"
statement {
sid = "example-sns-topic"
effect = "Allow"
principals {
type = "AWS"
identifiers = ["*"]
}
actions = [
"SQS:SendMessage",
]
resources = [
"arn:aws:sqs:${var.region}:${data.aws_caller_identity.current.account_id}:${local.sqs-name}"
]
condition {
test = "ArnEquals"
variable = "aws:SourceArn"
values = [
"arn:aws:sns:${var.region}:${data.aws_caller_identity.current.account_id}:${local.sns-name}"
]
}
}
}
# Create the SNS topic and assign the topic policy to it
resource "aws_sns_topic" "sns-topic" {
name = local.sns-name
display_name = local.sns-name
policy = data.aws_iam_policy_document.sns-topic-policy.json
}
# Create the SQS queue and assign the queue policy to it
resource "aws_sqs_queue" "sqs-queue" {
name = local.sqs-name
policy = data.aws_iam_policy_document.sqs-queue-policy.json
}
# Subscribe the SQS queue to the SNS topic
resource "aws_sns_topic_subscription" "sns-topic" {
topic_arn = aws_sns_topic.sns-topic.arn
protocol = "sqs"
endpoint = aws_sqs_queue.sqs-queue.arn
}
I hope the code and the comments above make sense. There is an example in the Terraform documentation for aws_sns_topic_subscription which is way more complex, but also usable.
I'm trying to create an Elasticsearch cluster using Terraform.
Using Terraform 0.11.13.
Can someone please point out why I'm not able to create log groups? What is the Resource Access Policy? Is it the same as the data "aws_iam_policy_document" I'm creating?
Note: I'm using elasticsearch_version = "7.9"
code:
resource "aws_cloudwatch_log_group" "search_test_log_group" {
name = "/aws/aes/domains/test-es7/index-logs"
}
resource "aws_elasticsearch_domain" "amp_search_test_es7" {
domain_name = "es7"
elasticsearch_version = "7.9"
.....
log_publishing_options {
cloudwatch_log_group_arn = "${aws_cloudwatch_log_group.search_test_log_group.arn}"
log_type = "INDEX_SLOW_LOGS"
enabled = true
}
access_policies = "${data.aws_iam_policy_document.elasticsearch_policy.json}"
}
data "aws_iam_policy_document" "elasticsearch_policy" {
version = "2012-10-17"
statement {
effect = "Allow"
principals {
identifiers = ["*"]
type = "AWS"
}
actions = ["es:*"]
resources = ["arn:aws:es:us-east-1:xxx:domain/test_es7/*"]
}
statement {
effect = "Allow"
principals {
identifiers = ["es.amazonaws.com"]
type = "Service"
}
actions = [
"logs:PutLogEvents",
"logs:PutLogEventsBatch",
"logs:CreateLogStream",
]
resources = ["arn:aws:logs:*"]
}
}
I'm getting this error
aws_elasticsearch_domain.test_es7: Error creating ElasticSearch domain: ValidationException: The Resource Access Policy specified for the CloudWatch Logs log group /aws/aes/domains/test-es7/index-logs does not grant sufficient permissions for Amazon Elasticsearch Service to create a log stream. Please check the Resource Access Policy.
For ElasticSearch (ES) to be able to write to CloudWatch (CW) Logs, you have to provide a resource-based policy on your CW logs.
This is achieved using aws_cloudwatch_log_resource_policy which is missing from your code.
In fact, the TF docs have a ready-to-use example of how to do it for ES, so you should be able to just copy and paste it.
ES access policies are different from CW log policies, as they determine who can do what on your ES domain. Thus, you would have to adjust that part of your code to meet your requirements.
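For reference, a sketch in the spirit of that documentation example, written in the 0.11 interpolation style used in the question (the policy name is arbitrary; the statement mirrors the es.amazonaws.com statement already present in elasticsearch_policy, but here it is applied to CloudWatch Logs as a resource policy):
data "aws_iam_policy_document" "elasticsearch_log_publishing_policy" {
  statement {
    effect = "Allow"
    actions = [
      "logs:CreateLogStream",
      "logs:PutLogEvents",
      "logs:PutLogEventsBatch",
    ]
    resources = ["arn:aws:logs:*"]
    principals {
      identifiers = ["es.amazonaws.com"]
      type        = "Service"
    }
  }
}

resource "aws_cloudwatch_log_resource_policy" "elasticsearch_log_publishing_policy" {
  policy_name     = "elasticsearch-log-publishing-policy"
  policy_document = "${data.aws_iam_policy_document.elasticsearch_log_publishing_policy.json}"
}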