I have a working AWS project that I'm trying to implement in Terraform.
One of the steps requires a Lambda function to query Athena and return the results to SQS (I am using the terraform-aws-modules/lambda module instead of the plain aws_lambda_function resource). Here is the code:
data "archive_file" "go_package" {
type = "zip"
source_file = "./report_to_SQS_go/main"
output_path = "./report_to_SQS_go/main.zip"
}
resource "aws_sqs_queue" "emails_queue" {
name = "sendEmails_tf"
}
module "lambda_report_to_sqs" {
source = "terraform-aws-modules/lambda/aws"
function_name = "report_to_SQS_Go_tf"
handler = "main"
runtime = "go1.x"
create_package = false
local_existing_package = "./report_to_SQS_go/main.zip"
attach_policy_json = true
policy_json = jsonencode({
Version = "2012-10-17"
Statement = [
{
Effect : "Allow"
Action : [
"dynamodb:*",
"lambda:*",
"logs:*",
"athena:*",
"cloudwatch:*",
"s3:*",
"sqs:*"
]
Resource : ["*"]
}
]
})
destination_on_success = aws_sqs_queue.emails_queue.arn
timeout = 200
memory_size = 1024
}
The code works fine and produces the desired output; however, the problem is that SQS doesn't show up as a destination (although the queue itself appears in SQS normally and can send/receive messages).
I don't think permissions are the problem, because I can add SQS destinations manually from the console successfully.
The variable destination_on_success is only used if you also set create_async_event_config to true. Below is extracted from https://github.com/terraform-aws-modules/terraform-aws-lambda/blob/master
variables.tf
############################
# Lambda Async Event Config
############################
variable "create_async_event_config" {
  description = "Controls whether async event configuration for Lambda Function/Alias should be created"
  type        = bool
  default     = false
}

variable "create_current_version_async_event_config" {
  description = "Whether to allow async event configuration on current version of Lambda Function (this will revoke permissions from previous version because Terraform manages only current resources)"
  type        = bool
  default     = true
}

.....

variable "destination_on_failure" {
  description = "Amazon Resource Name (ARN) of the destination resource for failed asynchronous invocations"
  type        = string
  default     = null
}

variable "destination_on_success" {
  description = "Amazon Resource Name (ARN) of the destination resource for successful asynchronous invocations"
  type        = string
  default     = null
}
main.tf
resource "aws_lambda_function_event_invoke_config" "this" {
for_each = { for k, v in local.qualifiers : k => v if v != null && local.create && var.create_function && !var.create_layer && var.create_async_event_config }
function_name = aws_lambda_function.this[0].function_name
qualifier = each.key == "current_version" ? aws_lambda_function.this[0].version : null
maximum_event_age_in_seconds = var.maximum_event_age_in_seconds
maximum_retry_attempts = var.maximum_retry_attempts
dynamic "destination_config" {
for_each = var.destination_on_failure != null || var.destination_on_success != null ? [true] : []
content {
dynamic "on_failure" {
for_each = var.destination_on_failure != null ? [true] : []
content {
destination = var.destination_on_failure
}
}
dynamic "on_success" {
for_each = var.destination_on_success != null ? [true] : []
content {
destination = var.destination_on_success
}
}
}
}
}
So destination_on_success is only used in this resource, and this resource is only created if several conditions are met, the key one being that var.create_async_event_config must be true.
You can see the example for this here https://github.com/terraform-aws-modules/terraform-aws-lambda/blob/be6cf9701071bf807cd7864fbcc751ed2552e434/examples/async/main.tf
module "lambda_function" {
source = "../../"
function_name = "${random_pet.this.id}-lambda-async"
handler = "index.lambda_handler"
runtime = "python3.8"
architectures = ["arm64"]
source_path = "${path.module}/../fixtures/python3.8-app1"
create_async_event_config = true
attach_async_event_policy = true
maximum_event_age_in_seconds = 100
maximum_retry_attempts = 1
destination_on_failure = aws_sns_topic.async.arn
destination_on_success = aws_sqs_queue.async.arn
}
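Applied to your configuration, a minimal sketch of the fix, keeping your existing settings, would be to enable the async event config in the module call (note that destinations only apply to asynchronous invocations of the function):
module "lambda_report_to_sqs" {
  source = "terraform-aws-modules/lambda/aws"

  function_name = "report_to_SQS_Go_tf"
  handler       = "main"
  runtime       = "go1.x"

  create_package         = false
  local_existing_package = "./report_to_SQS_go/main.zip"

  # Without these two flags the aws_lambda_function_event_invoke_config
  # resource above is never created, so the destination is silently ignored.
  create_async_event_config = true
  attach_async_event_policy = true
  destination_on_success    = aws_sqs_queue.emails_queue.arn

  # ... policy_json, timeout and memory_size as in your original code
}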
We are currently in the process of editing our core S3 module to adapt to the new v4.0 changes released in the Terraform AWS provider.
Existing main.tf
resource "aws_s3_bucket" "bucket" {
  bucket        = local.bucket_name
  tags          = local.tags
  force_destroy = var.force_destroy

  dynamic "logging" {
    for_each = local.logging

    content {
      target_bucket = logging.value["target_bucket"]
      target_prefix = logging.value["target_prefix"]
    }
  }
  ....
}
I am trying to convert this to use the resource aws_s3_bucket_logging as below
resource "aws_s3_bucket" "bucket" {
bucket = local.bucket_name
tags = local.tags
force_destroy = var.force_destroy
hosted_zone_id = var.hosted_zone_id
}
resource "aws_s3_bucket_logging" "logging" {
bucket = aws_s3_bucket.bucket.id
dynamic "logging" {
for_each = local.logging
content {
target_bucket = logging.value["target_bucket"]
target_prefix = logging.value["target_prefix"]
}
}
locals.tf
locals {
  logging = var.log_bucket == null ? [] : [
    {
      target_bucket = var.log_bucket
      target_prefix = var.log_prefix
    }
  ]
  ....
}
variables.tf
variable "log_bucket" {
  type        = string
  default     = null
  description = "The name of the bucket that will receive the log objects."
}

variable "log_prefix" {
  type        = string
  default     = null
  description = "To specify a key prefix for log objects."
}
And I receive the error
Error: Unsupported block type
Blocks of type "logging" are not expected here.
Any help is greatly appreciated. TA
If you check the TF docs for aws_s3_bucket_logging, you will find that aws_s3_bucket_logging does not have any block or attribute called logging. Please have a look at the docs and follow the examples and documentation there.
Though, is it the right usage of the resource if I remove the locals and simplify it as below? All I am trying to do is pass null as the default value.
resource "aws_s3_bucket_logging" "logging" {
bucket = aws_s3_bucket.bucket.id
target_bucket = var.log_bucket
target_prefix = var.log_prefix
}
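That matches the resource's schema, so the logging block error goes away. However, with var.log_bucket left at its null default, Terraform will likely complain that the required target_bucket argument is missing. If the intent of the old locals logic was to configure logging only when a log bucket is supplied, a sketch of a conditional version (assuming the same log_bucket and log_prefix variables) would be:
resource "aws_s3_bucket_logging" "logging" {
  # Create the logging configuration only when a log bucket is supplied
  count = var.log_bucket == null ? 0 : 1

  bucket        = aws_s3_bucket.bucket.id
  target_bucket = var.log_bucket
  target_prefix = var.log_prefix
}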
I want to create two Amazon SNS topics with the same aws_iam_policy_document, aws_sns_topic_policy & time_sleep configs.
This is my terraform, my_sns_topic.tf:
resource "aws_sns_topic" "topic_a" {
name = "topic-a"
}
resource "aws_sns_topic" "topic_b" {
name = "topic-b"
}
data "aws_iam_policy_document" "topic_notification" {
version = "2008-10-17"
statement {
sid = "__default_statement_ID"
actions = [
"SNS:Publish"
]
# Cut off some lines for simplification.
## NEW LINE ADDED
statement {
sid = "allow_snowflake_subscription"
principals {
type = "AWS"
identifiers = [var.storage_aws_iam_user_arn]
}
actions = ["SNS:Subscribe"]
resources = [aws_sns_topic.topic_a.arn] # Troubles with this line
}
}
resource "aws_sns_topic_policy" "topic_policy_notification" {
arn = aws_sns_topic.topic_a.arn
policy = data.aws_iam_policy_document.topic_policy_notification.json
}
resource "time_sleep" "topic_wait_10s" {
depends_on = [aws_sns_topic.topic_a]
create_duration = "10s"
}
As you can see here, I set up the configuration only for topic-a. I want to loop over this so it applies to topic-b as well.
It would be better to use for_each over a set of topic names instead of creating the "a" and "b" topics separately:
variable "topics" {
default = ["a", "b"]
}
resource "aws_sns_topic" "topic" {
for_each = toset(var.topics)
name = "topic-${each.key}"
}
data "aws_iam_policy_document" "topic_notification" {
version = "2008-10-17"
statement {
sid = "__default_statement_ID"
actions = [
"SNS:Publish"
]
# Cut off some lines for simplification.
}
resource "aws_sns_topic_policy" "topic_policy_notification" {
for_each = toset(var.topics)
arn = aws_sns_topic.topic[each.key].arn
policy = data.aws_iam_policy_document.topic_policy_notification.json
}
resource "time_sleep" "topic_wait_10s" {
for_each = toset(var.topics)
depends_on = [aws_sns_topic.topic[each.key]]
create_duration = "10s"
}
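If the statement you flagged as troublesome needs to reference each topic's own ARN rather than always topic-a, the policy document itself can also be keyed per topic. A sketch, assuming the same topics variable and the snowflake statement from your question:
data "aws_iam_policy_document" "topic_notification" {
  for_each = toset(var.topics)
  version  = "2008-10-17"

  statement {
    sid       = "allow_snowflake_subscription"
    actions   = ["SNS:Subscribe"]
    resources = [aws_sns_topic.topic[each.key].arn] # each topic's own ARN

    principals {
      type        = "AWS"
      identifiers = [var.storage_aws_iam_user_arn]
    }
  }
}

resource "aws_sns_topic_policy" "topic_policy_notification" {
  for_each = toset(var.topics)

  arn    = aws_sns_topic.topic[each.key].arn
  policy = data.aws_iam_policy_document.topic_notification[each.key].json
}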
I have a lambda that I trigger with an EventBridge.
I have allowed_triggers in my lambda_function:
allowed_triggers = {
  "RunDaily" = {
    principal  = "events.amazonaws.com"
    source_arn = module.eventbridge.eventbridge_rule_arns["crons"]
  }
}
And I have an eventbridge module:
module "eventbridge" {
source = "terraform-aws-modules/eventbridge/aws"
version = "1.14.0"
create_bus = false
create_role = false
create_rules = true
rules = {
crons = {
description = "deafault"
schedule_expression = "rate(1 day)"
}
}
targets = {
crons = [
{
arn = module.lambda_function.lambda_function_arn
input = jsonencode({ "job" : "crons" })
}
]
}
}
Now, this works great, as the rule is created and attached properly.
But when I want to change the name of the rule along with its description, Terraform picks up only the description change:
module "eventbridge" {
...
rules = {
crons = {
description = "My custom cron rule"
schedule_expression = "rate(1 day)"
}
}
targets = {
crons = [
{
name = "my-custom-cron-rule-name"
arn = module.lambda_function.lambda_function_arn
input = jsonencode({ "job" : "crons" })
}
]
}
}
Plan:
Terraform will perform the following actions:

  # module.eventbridge.aws_cloudwatch_event_rule.this["crons"] will be updated in-place
  ~ resource "aws_cloudwatch_event_rule" "this" {
      ~ description = "deafault" -> "My custom cron rule"
        id          = "crons-rule"
        name        = "crons-rule"
        tags        = {
            "Name" = "crons-rule"
        }
        # (5 unchanged attributes hidden)
    }

Plan: 0 to add, 1 to change, 0 to destroy.
Question: How do I change the name attribute for an eventbridge rule?
As per the module definition [1], the aws_cloudwatch_event_rule name is derived from the key of the rules block, i.e.:
rules = {
  crons = {
    description         = "My custom cron rule"
    schedule_expression = "rate(1 day)"
  }
}
Based on the lines from the GitHub repo, the name is formed with:
locals {
  eventbridge_rules = flatten([
    for index, rule in var.rules :
    merge(rule, {
      "name" = index
      "Name" = "${replace(index, "_", "-")}-rule"
    })
  ])
  ... # rest of locals goes here
}
If you take a look at your definition and this part of the code, you can see that the name will be crons-rule, which is visible in both the name and tags.Name arguments:
name = "crons-rule"
tags = {
"Name" = "crons-rule"
}
So in order to change the name of the rule, you would have to change the key of the rules block, i.e.:
rules = {
  very-nice-new-crons = { # <----- here is where the change should be made
    description         = "My custom cron rule"
    schedule_expression = "rate(1 day)"
  }
}
You can verify this by looking at [2]:
resource "aws_cloudwatch_event_rule" "this" {
for_each = { for k, v in local.eventbridge_rules : v.name => v if var.create && var.create_rules }
name = each.value.Name
...
tags = merge(var.tags, {
Name = each.value.Name
})
}
EDIT: As pointed out, there are two more changes that need to be made after the name is changed:
The allowed_triggers of the Lambda function should now use the new key to reference the event rule that is allowed to trigger it. It has to be changed from
source_arn = module.eventbridge.eventbridge_rule_arns["crons"]
to
source_arn = module.eventbridge.eventbridge_rule_arns["very-nice-new-crons"]
The same name change has to be used in the targets block as well, i.e., the crons key in the targets has to be replaced with the same key name as in the rules block:
targets = {
  very-nice-new-crons = [
    {
      name  = "my-custom-cron-rule-name"
      arn   = module.lambda_function.lambda_function_arn
      input = jsonencode({ "job" : "crons" })
    }
  ]
}
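Putting the edit together, a sketch of the full updated module call with the new key used in both rules and targets (everything else unchanged):
module "eventbridge" {
  source  = "terraform-aws-modules/eventbridge/aws"
  version = "1.14.0"

  create_bus   = false
  create_role  = false
  create_rules = true

  rules = {
    # The key drives the rule name: this produces "very-nice-new-crons-rule"
    very-nice-new-crons = {
      description         = "My custom cron rule"
      schedule_expression = "rate(1 day)"
    }
  }

  targets = {
    very-nice-new-crons = [
      {
        name  = "my-custom-cron-rule-name"
        arn   = module.lambda_function.lambda_function_arn
        input = jsonencode({ "job" : "crons" })
      }
    ]
  }
}
Keep in mind that changing the for_each key means Terraform will destroy the old crons-rule instance and create a new one under the new name, rather than renaming it in place, and the allowed_triggers lookup must use the new key as shown above.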
[1] https://github.com/terraform-aws-modules/terraform-aws-eventbridge/blob/master/main.tf#L2-L6
[2] https://github.com/terraform-aws-modules/terraform-aws-eventbridge/blob/master/main.tf#L44
I have been having some trouble trying to get the fulfillment_activity codehook to work so that I can use Lambda Functions for the backend. For some reason, I am getting this error message from Terraform.
Error: error waiting for Lex Bot (helloBot) create: unexpected state 'FAILED', wanted target 'NOT_BUILT, READY, READY_BASIC_TESTING'. last error: Intent 'sample_intent' has an invalid message version defined for its fulfillment.
Here is my Terraform config:
# AWS Lex Bot
resource "aws_lex_bot" "helloBot" {
  depends_on = [aws_lex_intent.sample_intent]

  locale                      = "en-US"
  name                        = "helloBot"
  process_behavior            = "BUILD"
  voice_id                    = "Salli"
  create_version              = true
  idle_session_ttl_in_seconds = 300
  child_directed              = false

  abort_statement {
    message {
      content      = "Abort Abort!"
      content_type = "PlainText"
    }
  }

  clarification_prompt {
    max_attempts = 2

    message {
      content      = "No Idea What You're Saying!"
      content_type = "PlainText"
    }
  }

  intent {
    intent_name    = "sampleIntentName"
    intent_version = aws_lex_intent.sample_intent.version
  }
}

resource "aws_lambda_permission" "lex_sample_intent_lambda" {
  statement_id  = "AllowExecutionFromAmazonLex"
  action        = "lambda:InvokeFunction"
  function_name = "someLambdaFunctionName"
  principal     = "lex.amazonaws.com"
  # https://docs.aws.amazon.com/lex/latest/dg/gs-cli-update-lambda.html
  source_arn = "arn:aws:lex:myRegion:accountId:intent:sampleIntentName:*"
}

# AWS Lex Intents
data "aws_lambda_function" "existing" {
  function_name = "someLambdaFunctionName"
  qualifier     = "dev"
}

resource "aws_lex_intent" "sample_intent" {
  create_version = true
  name           = "sampleIntentName"

  fulfillment_activity {
    type = "CodeHook"

    code_hook {
      message_version = "1.0"
      uri             = data.aws_lambda_function.existing.qualified_arn
    }
  }

  sample_utterances = [
    "hi",
    "hello"
  ]
}
I looked at the cli documentation and it appears that we are supposed to use "1.0" for the message version.
It turns out the Terraform configuration above is correct. The problem was the data type of the message version when it was passed in as a variable: it was declared as a number instead of a string.
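For illustration, a sketch assuming a hypothetical message_version variable is used in place of the hard-coded value; declaring it as a string ensures the provider receives "1.0" rather than the number 1:
variable "message_version" {
  # Must be a string: with type = number the value would be coerced to 1,
  # which Lex rejects as an invalid message version.
  type    = string
  default = "1.0"
}

resource "aws_lex_intent" "sample_intent" {
  create_version = true
  name           = "sampleIntentName"

  fulfillment_activity {
    type = "CodeHook"

    code_hook {
      message_version = var.message_version # passed through as the string "1.0"
      uri             = data.aws_lambda_function.existing.qualified_arn
    }
  }

  sample_utterances = [
    "hi",
    "hello"
  ]
}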
I'm using Terraform to create a Cognito user pool. I'd like to use a Lambda function to send a custom message when a user signs up. When I attempt to sign up on the client, I get an error saying "CustomMessage invocation failed due to error AccessDeniedException." I've used Lambda permissions before, but I can't find any examples of this configuration. How do I give the Lambda function permission? The following is my current configuration.
resource "aws_cognito_user_pool" "main" {
name = "${var.user_pool_name}_${var.stage}"
username_attributes = [ "email" ]
schema {
attribute_data_type = "String"
mutable = true
name = "name"
required = true
}
schema {
attribute_data_type = "String"
mutable = true
name = "email"
required = true
}
password_policy {
minimum_length = "8"
require_lowercase = true
require_numbers = true
require_symbols = true
require_uppercase = true
}
mfa_configuration = "OFF"
lambda_config {
custom_message = aws_lambda_function.custom_message.arn
post_confirmation = aws_lambda_function.post_confirmation.arn
}
}
...
resource "aws_lambda_permission" "get_blog" {
statement_id = "AllowExecutionFromCognito"
action = "lambda:InvokeFunction"
function_name = aws_lambda_function.custom_message.function_name
principal = "cognito-idp.amazonaws.com"
source_arn = "${aws_cognito_user_pool.main.arn}/*/*"
depends_on = [ aws_lambda_function.custom_message ]
}
...
resource "aws_lambda_function" "custom_message" {
filename = "${var.custom_message_path}/${var.custom_message_file_name}.zip"
function_name = var.custom_message_file_name
role = aws_iam_role.custom_message.arn
handler = "${var.custom_message_file_name}.handler"
source_code_hash = filebase64sha256("${var.custom_message_path}/${var.custom_message_file_name}.zip")
runtime = "nodejs12.x"
timeout = 10
layers = [ var.node_layer_arn ]
environment {
variables = {
TABLE_NAME = var.table_name
RESOURCENAME = "blogAuthCustomMessage"
REGION = "us-west-2"
}
}
tags = {
Name = var.developer
}
depends_on = [
data.archive_file.custom_message,
]
}
Based on OP's feedback in the comment section, changing the source_arn property in aws_lambda_permission.get_blog to aws_cognito_user_pool.main.arn works.
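A minimal sketch of the corrected permission, using the bare user pool ARN instead of the "/*/*" suffix:
resource "aws_lambda_permission" "get_blog" {
  statement_id  = "AllowExecutionFromCognito"
  action        = "lambda:InvokeFunction"
  function_name = aws_lambda_function.custom_message.function_name
  principal     = "cognito-idp.amazonaws.com"
  # Cognito invokes user pool triggers with the bare user pool ARN as the
  # source, so the permission's source_arn must match it exactly.
  source_arn = aws_cognito_user_pool.main.arn
  depends_on = [aws_lambda_function.custom_message]
}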