How do I add a trigger to an AWS Lambda function using Terraform?
The desired trigger is S3, ObjectCreated (all).
My Terraform source code around the Lambda is:
module "s3-object-created-lambda" {
source = "../../../../../modules/lambda"
s3_bucket = "${var.s3_lambda_bucket}"
s3_key = "${var.s3_lambda_key}"
name = "${var.lambda_some_name}"
handler = "code.handler"
env = {
lambda_name = "${var.lambda_base_name}"
lambda_version = "${var.lambda_version}"
}
}
I am trying to figure out how to add the trigger; via the AWS console it is super simple.
After some reading of:
https://www.terraform.io/docs/providers/aws/r/s3_bucket_notification.html
the solution is:
resource "aws_s3_bucket_notification" "bucket_notification" {
bucket = "${data.terraform_remote_state.stack.bucket_id}"
lambda_function {
lambda_function_arn = "${module.some_lambda.lambda_arn}"
events = ["s3:ObjectCreated:*"]
filter_prefix = "${var.cluster_name}/somepath/"
filter_suffix = ".txt"
}
}
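Note that S3 also needs permission to invoke the function, or the notification will not be delivered. A minimal sketch (the resource name and statement ID are made up, and it assumes bucket_id holds the bucket name):
resource "aws_lambda_permission" "allow_s3_invoke" {
  statement_id  = "AllowExecutionFromS3"
  action        = "lambda:InvokeFunction"
  function_name = "${module.some_lambda.lambda_arn}"
  principal     = "s3.amazonaws.com"
  # assumes bucket_id is the bucket name
  source_arn    = "arn:aws:s3:::${data.terraform_remote_state.stack.bucket_id}"
}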
I have created two Lambda functions. Now I want to pass all the CloudWatch logs from the first Lambda to the second Lambda, so I have created a new log group and a subscription filter to forward the logs to the second Lambda.
I am not sure whether this configuration needs any additional resources.
resource "aws_lambda_function" "audit-logs" {
filename = var.audit_filename
function_name = var.audit_function
source_code_hash = filebase64sha256(var.audit_filename)
role = module.lambda_role.arn
handler = "cloudwatch.lambda_handler"
runtime = "python3.9"
timeout = 200
description = "audit logs"
depends_on = [module.lambda_role, module.security_group]
}
resource "aws_cloudwatch_log_group" "splunk_cloudwatch_loggroup" {
name = "/aws/lambda/audit_logs"
}
resource "aws_lambda_permission" "allow_cloudwatch_for_splunk" {
statement_id = "AllowExecutionFromCloudWatch"
action = "lambda:InvokeFunction"
function_name = aws_lambda_function.splunk-logs.arn
principal = "logs.amazonaws.com" #"logs.region.amazonaws.com"
source_arn = "${aws_cloudwatch_log_group.splunk_cloudwatch_loggroup.arn}:*"
}
resource "aws_cloudwatch_log_subscription_filter" "splunk_cloudwatch_trigger" {
depends_on = [aws_lambda_permission.allow_cloudwatch_for_splunk]
destination_arn = aws_lambda_function.splunk-logs.arn
filter_pattern = ""
log_group_name = aws_cloudwatch_log_group.splunk_cloudwatch_loggroup.name
name = "splunk_filter"
}
# splunk logs lambda function
resource "aws_lambda_function" "splunk-logs" {
filename = var.splunk_filename
function_name = var.splunk_function
source_code_hash = filebase64sha256(var.splunk_filename)
role = module.lambda_role.arn
handler = "${var.splunk_handler}.handler"
runtime = "python3.9"
timeout = 200
description = "audit logs"
depends_on = [module.lambda_role, module.security_group]
}
How can I pass all the logs from the first Lambda into the newly created log group? Any help?
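One detail worth checking: a Lambda function writes its logs to the log group /aws/lambda/<function_name> by default, so the subscription filter only sees the first function's logs if the group it subscribes to carries that exact name. A minimal sketch, assuming the goal is to subscribe to the audit Lambda's own log group:
# Assumes var.audit_function is the first Lambda's function_name,
# so its default log group is the one the subscription filter watches.
resource "aws_cloudwatch_log_group" "splunk_cloudwatch_loggroup" {
  name = "/aws/lambda/${var.audit_function}"
}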
I have created two modules, one for SQS and another for SSM. My SQS module creates four queues, and I am trying to create corresponding Parameter Store entries for their URL and ARN. I am importing the SSM module inside my SQS module so that it creates the parameters right after the SQS creation is done.
This is what my SQS module looks like:
resource "aws_sqs_queue" "FirstStandardSQSQueue" {
name = "sqs-${var.stage}-one"
message_retention_seconds = var.SQSMsgRetention
redrive_policy = jsonencode({
deadLetterTargetArn = aws_sqs_queue.FirstDeadLetterQueue.arn
maxReceiveCount = 2
})
}
resource "aws_sqs_queue" "FirstDeadLetterQueue" {
name = "sqs-dead-letter-${var.stage}-one"
receive_wait_time_seconds = var.SQSRecvMsgWaitTime
}
resource "aws_sqs_queue" "SecondStandardSQSQueue" {
name = "sqs-${var.stage}-two"
message_retention_seconds = var.SQSMsgRetention
redrive_policy = jsonencode({
deadLetterTargetArn = aws_sqs_queue.SecondDeadLetterQueue.arn
maxReceiveCount = 3
})
}
resource "aws_sqs_queue" "SecondDeadLetterQueue" {
name = "sqs-dead-letter-${var.stage}-two"
receive_wait_time_seconds = var.SQSRecvMsgWaitTime
}
module "ssm" {
source = "../ssm/"
// How do i create a dynamic map???
for_each = var.Queues
name = "/new/${var.stage}/${each.key}"
type = "SecureString"
value = "${each.value}"
}
My SSM module looks like this:
resource "aws_ssm_parameter" "main" {
name = var.name
type = var.type
value = var.value
}
I am trying to create either a map or some other way to dynamically pass values into my SSM module using for_each. I tried setting up something like this:
variable "Queues" {
type = map
default = {
"FirstStandardSQSQueueUrl" = aws_sqs_queue.FirstStandardSQSQueue.url
"FirstStandardSQSQueueArn" = aws_sqs_queue.FirstStandardSQSQueue.arn
"SecondStandardSQSQueueUrl" = aws_sqs_queue.SecondStandardSQSQueue.url
"SecondStandardSQSQueueArn" = aws_sqs_queue.SecondStandardSQSQueue.arn
}
}
But this is invalid, and I keep running into:
Error: Variables not allowed line 40: Variables may not be used here.
Can someone suggest a better/right way to do it? Thank you.
As the error message says, you can't use dynamic expressions in a variable's default value; locals should be used instead.
locals {
  Queues = {
    "FirstStandardSQSQueueUrl"  = aws_sqs_queue.FirstStandardSQSQueue.url
    "FirstStandardSQSQueueArn"  = aws_sqs_queue.FirstStandardSQSQueue.arn
    "SecondStandardSQSQueueUrl" = aws_sqs_queue.SecondStandardSQSQueue.url
    "SecondStandardSQSQueueArn" = aws_sqs_queue.SecondStandardSQSQueue.arn
  }
}
Then you refer to the value as local.Queues in other parts of your code.
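For example, the ssm module call from the question would become (note that for_each on module blocks requires Terraform 0.13 or newer):
module "ssm" {
  source   = "../ssm/"
  for_each = local.Queues

  name  = "/new/${var.stage}/${each.key}"
  type  = "SecureString"
  value = each.value
}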
I am trying to create an EventBridge rule that will run my Lambda function every 30 minutes.
I based my code on this answer I found here on SO: Use terraform to set up a lambda function triggered by a scheduled event source
Here is my Terraform code:
monitoring/main.tf:
...
module "cloudwatch_event_rule" {
source = "./cloudwatch_event_rule"
extra_tags = local.extra_tags
}
module "lambda_function" {
source = "./lambda_functions"
extra_tags = local.extra_tags
alb_names = var.alb_names
slack_webhook_url = var.slack_webhook_url
environment_tag = local.environment_tag
}
module "cloudwatch_event_target" {
source = "./cloudwatch_event_target"
lambda_function_arn = module.lambda_function.detect_bad_rejects_on_alb_lambda_arn
cloudwatch_event_rule_name = module.cloudwatch_event_rule.cloudwatch_event_rule_name
extra_tags = local.extra_tags
}
monitoring/lambda_functions/main.tf:
resource "aws_lambda_function" "detect_bad_rejects_on_alb" {
filename = var.filename
function_name = var.function_name
role = aws_iam_role.detect_bad_reject_on_alb.arn
handler = var.handler
source_code_hash = filebase64sha256(var.filename)
runtime = var.runtime
timeout = var.timeout
environment {
...
}
}
monitoring/cloudwatch_event_rule/main.tf
resource "aws_cloudwatch_event_rule" "event_rule" {
name = var.rule_name
description = var.description
schedule_expression = var.schedule_expression
tags = ...
}
monitoring/cloudwatch_event_rule/variables.tf
...
variable "schedule_expression" {
type = string
default = "rate(30 minutes)"
}
...
monitoring/cloudwatch_event_target/main.tf
resource "aws_cloudwatch_event_target" "event_target" {
arn = var.lambda_function_arn
rule = var.cloudwatch_event_rule_name
input = var.input
}
This ends up creating the Lambda function and the EventBridge rule, with my Lambda function as its target and the schedule expression "rate(30 minutes)", but the Lambda function is never executed. What am I doing wrong?
From what you posted, it seems that you are not adding permission for invocations. Your code does not show the creation of an aws_lambda_permission with the proper rules, so you should add such a permission so that EventBridge can invoke your function (example):
resource "aws_lambda_permission" "event-invoke" {
statement_id = "AllowExecutionFromCloudWatch"
action = "lambda:InvokeFunction"
function_name = var.function_name
principal = "events.amazonaws.com"
source_arn = module.cloudwatch_event_rule.cloudwatch_event_rule_arn
}
Make sure source_arn correctly points to the ARN of your event rule.
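If the cloudwatch_event_rule module does not already expose that ARN, an output along these lines (the file location is an assumption) makes module.cloudwatch_event_rule.cloudwatch_event_rule_arn available:
# monitoring/cloudwatch_event_rule/outputs.tf (assumed location)
output "cloudwatch_event_rule_arn" {
  value = aws_cloudwatch_event_rule.event_rule.arn
}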
In my main.tf I have the following:
data "template_file" "lambda_script_temp_file" {
template = "${file("../../../fn/lambda_script.py")}"
}
data "template_file" "library_temp_file" {
template = "${file("../../../library.py")}"
}
data "template_file" "init_temp_file" {
template = "${file("../../../__init__.py")}"
}
data "archive_file" "lambda_resources_zip" {
type = "zip"
output_path = "${path.module}/lambda_resources.zip"
source {
content = "${data.template_file.lambda_script_temp_file.rendered}"
filename = "lambda_script.py"
}
source {
content = "${data.template_file.library_temp_file.rendered}"
filename = "library.py"
}
source {
content = "${data.template_file.init_temp_file.rendered}"
filename = "__init__.py"
}
}
resource "aws_lambda_function" "MyLambdaFunction" {
filename = "${data.archive_file.lambda_resources_zip.output_path}"
function_name = "awesome_lambda"
role = "${var.my_role_arn}"
handler = "lambda_script.lambda_handler"
runtime = "python3.6"
timeout = "300"
}
The problem is that when I modify one of the source files, say lambda_script.py, on the next terraform apply the archive file (lambda_resources_zip) gets updated, but the Lambda function's code does not (the new archive is not uploaded).
I know that I could avoid this by first running terraform destroy, but that is not an option for my use case.
*I'm using Terraform v0.11.10
I resolved the issue by adding the following line to the resource definition:
source_code_hash = "${data.archive_file.lambda_resources_zip.output_base64sha256}"
When the source files are modified, the hash value changes, which triggers the function's code to be updated.
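Putting it together, the resource from the question becomes:
resource "aws_lambda_function" "MyLambdaFunction" {
  filename         = "${data.archive_file.lambda_resources_zip.output_path}"
  source_code_hash = "${data.archive_file.lambda_resources_zip.output_base64sha256}"
  function_name    = "awesome_lambda"
  role             = "${var.my_role_arn}"
  handler          = "lambda_script.lambda_handler"
  runtime          = "python3.6"
  timeout          = "300"
}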
This worked for me.
Add this line to the S3 object for the Lambda bucket; it tells Terraform whether any changes were made to the archive:
source_hash = "${data.archive_file.source.output_base64sha256}"
Then add this to the Lambda:
source_code_hash = "${data.archive_file.source.output_base64sha256}"
So your code should look like this:
resource "aws_s3_object" "lambda_object" {
source_hash = "${data.archive_file.source.output_base64sha256}"
bucket = "${aws_s3_bucket.s3.bucket}"
key = "${var.key}"
source = data.archive_file.source.output_path
}
resource "aws_lambda_function" "lambda_" {
function_name = "lambda_name"
source_code_hash = "${data.archive_file.source.output_base64sha256}"
.......
.......
}
Worked for me. Best wishes.
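For context, when the function is deployed from the uploaded object, the elided part of that lambda resource would typically reference the bucket and key rather than a local file; a sketch, with the remaining arguments assumed:
resource "aws_lambda_function" "lambda_" {
  function_name    = "lambda_name"
  source_code_hash = data.archive_file.source.output_base64sha256
  s3_bucket        = aws_s3_bucket.s3.bucket        # assumed: deploying from the uploaded object
  s3_key           = aws_s3_object.lambda_object.key
  # role, handler, runtime, etc. as in your existing configuration
}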
I deploy a Lambda using Terraform as follows, but I have the following questions:
1) I want null_resource.lambda to run always, or whenever stop_ec2.py is changed, so that stop_ec2_upload.zip is never out of date. What should I write in triggers {}?
2) How do I make aws_lambda_function.stop_ec2 upload the new stop_ec2_upload.zip to the cloud when stop_ec2_upload.zip has changed?
Right now I have to destroy aws_lambda_function.stop_ec2 and then create it again. Is there anything I can write in the code so that when I run terraform apply, 1) and 2) happen automatically?
resource "null_resource" "lambda" {
triggers {
#what should I write here?
}
provisioner "local-exec" {
command = "mkdir -p lambda_func && cd lambda_py && zip
../lambda_func/stop_ec2_upload.zip stop_ec2.py && cd .."
}
}
resource "aws_lambda_function" "stop_ec2" {
depends_on = ["null_resource.lambda"]
function_name = "stopEC2"
handler = "stop_ec2.handler"
runtime = "python3.6"
filename = "lambda_func/stop_ec2_upload.zip"
source_code_hash =
"${base64sha256(file("lambda_func/stop_ec2_upload.zip"))}"
role = "..."
}
I read the link provided by Chandan and figured it out.
Here is my code, and it works perfectly.
In fact, with archive_file and source_code_hash I do not need a trigger: whenever I create or modify stop_ec2.py and run Terraform, the file is re-zipped and uploaded to the cloud.
data "archive_file" "stop_ec2" {
type = "zip"
source_file = "src_dir/stop_ec2.py"
output_path = "dest_dir/stop_ec2_upload.zip"
}
resource "aws_lambda_function" "stop_ec2" {
function_name = "stopEC2"
handler = "stop_ec2.handler"
runtime = "python3.6"
filename = "dest_dir/stop_ec2_upload.zip"
source_code_hash = data.archive_file.stop_ec2.output_base64sha256
role = "..."
}
These might help:
triggers {
  main         = "${base64sha256(file("source/main.py"))}"
  requirements = "${base64sha256(file("source/requirements.txt"))}"
}

triggers = {
  source_file = "${sha1Folder("${path.module}/source")}"
}
REF: https://github.com/hashicorp/terraform/issues/8344