Error when creating S3 bucket notification in Terraform - amazon-web-services

I'm having an issue when creating a bucket notification to trigger a Lambda function. The error:
Error putting S3 notification configuration: InvalidArgument: Unable to validate the following destination configurations
status code: 400
I've read that similar problems can be caused by the order in which the resources are created, or by missing Lambda permissions. However, I tried adding depends_on to my code, as well as applying the template a couple of times and waiting in between. I'm using the least restrictive Lambda policy. I also tried the exact sample code from the Terraform documentation, but that gives me an entirely different error.
The exact same setup works fine when created in the console.
Here's the problematic part of my code:
resource "aws_lambda_function" "writeUsersToDB" {
  filename         = "writeUsersToDB.zip"
  function_name    = "writeUsersToDB"
  role             = "arn:aws:iam::0000000:role/AWSLambdaFullAccess"
  handler          = "main.lambda_handler"
  memory_size      = 256
  timeout          = 900
  source_code_hash = filebase64sha256("writeUsersToDB.zip")
  runtime          = "python3.8"
  layers           = ["arn:aws:lambda:eu-west-2:0000000:layer:pandas-pandas-schema-numpy:1"]

  environment { variables = local.parameters }
}

resource "aws_s3_bucket_notification" "event" {
  bucket = aws_s3_bucket.user_data.id

  lambda_function {
    lambda_function_arn = aws_lambda_function.writeUsersToDB.arn
    events              = ["s3:ObjectCreated:*"]
    filter_suffix       = ".csv"
  }

  depends_on = [aws_lambda_function.writeUsersToDB]
}

resource "aws_s3_bucket" "user_data" {
  bucket = "nameofthebucket"
}

You are missing an aws_lambda_permission resource, which is what allows S3 to invoke the function:
resource "aws_lambda_permission" "example" {
  statement_id  = "AllowExecutionFromS3Bucket"
  action        = "lambda:InvokeFunction"
  function_name = aws_lambda_function.writeUsersToDB.function_name
  principal     = "s3.amazonaws.com"
  source_arn    = aws_s3_bucket.user_data.arn
}

Related

Terraform - Error putting S3 notification configuration: InvalidArgument: Unable to validate the following destination configurations

I'm trying to configure a Lambda event notification in S3 using Terraform v0.11.8. This is what my Terraform looks like:
###########################################
#### S3 bucket
###########################################
resource "aws_s3_bucket" "ledger_summary_backups" {
  bucket = "${var.environment_id}-ledgersummary-backups"
  acl    = "private"
  tags   = local.common_tags
}

###########################################
###### Lambda Functions
###########################################
resource "aws_s3_bucket_notification" "bucket_notification" {
  bucket = aws_s3_bucket.ledger_summary_backups.id

  lambda_function {
    lambda_function_arn = aws_lambda_function.account_restore_ledgersummary_from_s3.arn
    events              = ["s3:ObjectCreated:*"]
    filter_prefix       = "AWSDynamoDB/"
    filter_suffix       = ".gz"
  }

  depends_on = [aws_lambda_permission.allow_bucket]
}

resource "aws_lambda_permission" "allow_bucket" {
  statement_id  = "AllowS3Invoke"
  action        = "lambda:InvokeFunction"
  function_name = aws_lambda_function.account_restore_ledgersummary_from_s3.arn
  principal     = "s3.amazonaws.com"
  source_arn    = aws_s3_bucket.ledger_summary_backups.arn
}

resource "aws_lambda_function" "account_restore_ledgersummary_from_s3" {
  function_name    = "${var.environment_id}-AccountService-${var.account_ledgersummary_restore_event_handler["namespace"]}"
  description      = "Event Handler for ${var.account_ledgersummary_restore_event_handler["name"]}"
  runtime          = "python3.7"
  memory_size      = 256
  handler          = "RestoreDynamoDbFromS3.lambda_handler"
  role             = aws_iam_role.account_s3_to_dynamodb_lambda_role.arn
  timeout          = var.account_ledgersummary_restore_event_handler["lambda_timeout"]
  filename         = data.archive_file.RestoreDynamoDbFromS3.output_path
  source_code_hash = filebase64sha256(data.archive_file.RestoreDynamoDbFromS3.output_path)

  vpc_config {
    security_group_ids = slice(list(aws_security_group.inbound_core_security_group.id, data.terraform_remote_state.environment_state.outputs.default_vpc_security_group), local.sg_list_start, 2)
    subnet_ids         = data.terraform_remote_state.environment_state.outputs.private_subnets
  }

  environment {
    variables = {
      ENVIRONMENT = var.environment_id
    }
  }
}
The IAM role I've attached to the Lambda function has the AmazonS3FullAccess and AWSOpsWorksCloudWatchLogs policies attached. I'm able to add the event in the AWS Console, but Terraform throws the error below:
Error: Error putting S3 notification configuration: InvalidArgument: Unable to validate the following destination configurations
  status code: 400, request id: 3Y8F88E77CX8NZ2N, host id: q88f+go45dalh7+eiYSErkkeDbI0nv+9j7AAecvBWSJoBjZc8hvh2LVeaqo5aGIJv4+aoKwUlgk=

  on dynamodb-upgrade.tf line 150, in resource "aws_s3_bucket_notification" "bucket_notification":
 150: resource aws_s3_bucket_notification bucket_notification {

Failed to apply changes to configuration for workspace mahbis01: Cake.Core.CakeException: Terraform: Process returned an error (exit code 1).
  at Cake.Core.Tooling.Tool`1.ProcessExitCode(Int32 exitCode)
  at Cake.Core.Tooling.Tool`1.Run(TSettings settings, ProcessArgumentBuilder arguments, ProcessSettings processSettings, Action`1 postAction)
  at Cake.Terraform.TerraformApplyRunner.Run(TerraformApplySettings settings)
  at Submission#0.ApplyConfiguration(String env)
An error occurred when executing task 'Deploy'.
Error: One or more errors occurred.
  Terraform: Process returned an error (exit code 1).
System.Exception: Unexpected exit code 1 returned from tool Cake.exe
  at Microsoft.TeamFoundation.DistributedTask.Task.Internal.InvokeToolCmdlet.ProcessRecord()
  at System.Management.Automation.CommandProcessor.ProcessRecord()
PowerShell script completed with 1 errors.
What am I missing in Terraform?
Answer -
I added a bucket policy to my S3 bucket and added a dependency on the Lambda function in the bucket notification:
resource "aws_s3_bucket" "ledger_summary_backups" {
  bucket = "${var.environment_id}-ledgersummary-backups"
  acl    = "private"

  policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::${data.aws_caller_identity.current.account_id}:root"
      },
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::${var.environment_id}-agl-event-files/*"
    }
  ]
}
EOF

  tags = local.common_tags
}

resource "aws_s3_bucket_notification" "bucket_notification" {
  bucket = aws_s3_bucket.ledger_summary_backups.id

  lambda_function {
    lambda_function_arn = aws_lambda_function.account_restore_ledgersummary_from_s3.arn
    events              = ["s3:ObjectCreated:*"]
    filter_prefix       = "AWSDynamoDB/"
    filter_suffix       = ".gz"
  }

  depends_on = [
    aws_lambda_permission.allow_bucket,
    aws_lambda_function.account_restore_ledgersummary_from_s3
  ]
}
There is a race between the S3 notification and the Lambda permission. Even though I put depends_on on the S3 notification pointing at the lambda permission, I got the same error. I solved the problem by adding a null_resource like this: it waits a little right after the Lambda permission is created, and only then creates the bucket notification.
resource "null_resource" "wait_for_lambda_trigger" {
  depends_on = [aws_lambda_permission.s3_trigger]

  provisioner "local-exec" {
    command = "sleep 3m"
  }
}

resource "aws_s3_bucket_notification" "bucket_create_notification" {
  bucket     = aws_s3_bucket.aws_capstone_bucket.id
  depends_on = [null_resource.wait_for_lambda_trigger]

  lambda_function {
    lambda_function_arn = aws_lambda_function.s3_to_dynamo_Lambda.arn
    events              = ["s3:ObjectCreated:*", "s3:ObjectRemoved:*"]
    filter_prefix       = "media/"
  }
}
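As a lighter-weight alternative to shelling out with local-exec, the hashicorp/time provider offers a time_sleep resource that models the same delay declaratively. A sketch, reusing the resource names from the snippet above (s3_trigger, aws_capstone_bucket, s3_to_dynamo_Lambda) and a shorter, assumed-sufficient delay:

```hcl
resource "time_sleep" "wait_for_lambda_trigger" {
  depends_on      = [aws_lambda_permission.s3_trigger]
  create_duration = "30s" # assumed delay; tune for your account
}

resource "aws_s3_bucket_notification" "bucket_create_notification" {
  bucket     = aws_s3_bucket.aws_capstone_bucket.id
  depends_on = [time_sleep.wait_for_lambda_trigger]

  lambda_function {
    lambda_function_arn = aws_lambda_function.s3_to_dynamo_Lambda.arn
    events              = ["s3:ObjectCreated:*", "s3:ObjectRemoved:*"]
    filter_prefix       = "media/"
  }
}
```

Unlike null_resource with local-exec, this works on any runner (no sleep binary needed) and the delay is visible in the plan; it does require declaring the hashicorp/time provider.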
So typically you want the S3 notification to be the last thing deployed. Try making the S3 notification depend on the Lambda as well, so that the Lambda is guaranteed to be deployed before the notification.

How to refresh AWS Lambda permission for API Gateway using Terraform?

I am deploying a REST API Gateway using Terraform. A couple of endpoints invoke a Lambda function to return a response. Whenever I deploy the api-gw with Terraform, the Lambda permission doesn't seem to refresh: I have to manually open the api-gw in the AWS console and re-add the Lambda function, after which it prompts me to allow the invoke action. How can I refresh the permission without these manual steps? I am using the snippet below for the api-gw deployment and Lambda permissions:
resource "aws_api_gateway_deployment" "deploy" {
  rest_api_id = aws_api_gateway_rest_api.apigw.id
  stage_name  = ""

  variables = {
    deployed_at = timestamp()
  }

  lifecycle {
    create_before_destroy = true
  }
}

resource "aws_lambda_permission" "customers_lambda_permission" {
  statement_id  = "AllowDemoAPIInvokeProjectGet"
  action        = "lambda:InvokeFunction"
  function_name = local.lambda_name
  principal     = "apigateway.amazonaws.com"
  source_arn    = "${aws_api_gateway_rest_api.apigw.execution_arn}/*/GET/api/customers"
}
Your aws_api_gateway_deployment resource should depend on the aws_api_gateway_integration so that the lambda integration is created before deployment.
resource "aws_api_gateway_deployment" "deploy" {
  ...
  depends_on = [
    aws_api_gateway_integration.example1,
    aws_api_gateway_integration.example2
  ]
}
or use the triggers attribute:
resource "aws_api_gateway_deployment" "deploy" {
  ...
  triggers = {
    redeployment = sha1(jsonencode([
      aws_api_gateway_resource.example1.id,
      aws_api_gateway_method.example1.id,
      aws_api_gateway_integration.example1.id,
    ]))
  }
}

Terraform multiple cloudwatch events trigger same lambda function

I have set up the following in Terraform: two event rules, start_event at 8am and stop_event at 6pm.
# Create cloudwatch event rules
resource "aws_cloudwatch_event_rule" "stop_ec2_event_rule" {
  name                = "stop-ec2-event-rule"
  description         = "Stop EC2 instance at a specified time each day"
  schedule_expression = var.cloudwatch_schedule_stop
}

resource "aws_cloudwatch_event_rule" "start_ec2_event_rule" {
  name                = "start-ec2-event-rule"
  description         = "Start EC2 instance at a specified time each day"
  schedule_expression = var.cloudwatch_schedule_start
}
Each event passes an action to the lambda
resource "aws_cloudwatch_event_target" "stop_ec2_event_rule_target" {
  rule      = aws_cloudwatch_event_rule.stop_ec2_event_rule.name
  target_id = "TriggerLambdaFunction"
  arn       = aws_lambda_function.lambda_rscheduler.arn
  input     = "{\"environment\":\"${var.environment}\", \"action\":\"stop\"}"
}

resource "aws_cloudwatch_event_target" "start_ec2_event_rule_target" {
  rule      = aws_cloudwatch_event_rule.start_ec2_event_rule.name
  target_id = "TriggerLambdaFunction"
  arn       = aws_lambda_function.lambda_rscheduler.arn
  input     = "{\"environment\":\"${var.environment}\", \"action\":\"start\"}"
}
This works:
resource "aws_lambda_permission" "allow_cloudwatch" {
  statement_id  = "AllowExecutionFromCloudWatch"
  action        = "lambda:InvokeFunction"
  function_name = aws_lambda_function.lambda_rscheduler.function_name
  principal     = "events.amazonaws.com"
  source_arn    = aws_cloudwatch_event_rule.stop_ec2_event_rule.arn
}
The issue I'm facing is that I cannot get Terraform to associate the start_event with the Lambda function. In the AWS console I can manually add the CloudWatch start_event trigger to the Lambda function.
If I add the start_event resource:
resource "aws_lambda_permission" "allow_cloudwatch" {
  statement_id  = "AllowExecutionFromCloudWatch"
  action        = "lambda:InvokeFunction"
  function_name = aws_lambda_function.lambda_rscheduler.function_name
  principal     = "events.amazonaws.com"
  source_arn    = aws_cloudwatch_event_rule.start_ec2_event_rule.arn
}
it complains that the statement_id is duplicated.
I need something like Terraform's aws_lambda_event_source_mapping, but that only lets Lambda functions consume events from Kinesis, DynamoDB and SQS, not from CloudWatch Events.
How can I tell Terraform to associate multiple CloudWatch events with the same Lambda function, when I can do it manually from the AWS console?
statement_id is not compulsory, so you can safely omit it from your aws_lambda_permission and Terraform will generate a unique id for you automatically. You can also use count or for_each to save some typing for aws_lambda_permission.
For example, using for_each you could define aws_lambda_permission as:
resource "aws_lambda_permission" "allow_cloudwatch" {
  for_each = {for idx, v in [
    aws_cloudwatch_event_rule.stop_ec2_event_rule,
    aws_cloudwatch_event_rule.start_ec2_event_rule
  ]: idx => v}

  action        = "lambda:InvokeFunction"
  function_name = aws_lambda_function.lambda_rscheduler.function_name
  principal     = "events.amazonaws.com"
  source_arn    = each.value.arn
}
Analogous versions could be written for aws_cloudwatch_event_rule and aws_cloudwatch_event_target, so that your code is based on for_each or count without copy-and-paste repetition.
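For instance, the whole rule/target/permission trio could hang off a single map. This is only a sketch: the local map keys ("start", "stop") and resource names are made up for illustration, while the variables and the lambda_rscheduler function come from the question:

```hcl
locals {
  ec2_schedules = {
    start = var.cloudwatch_schedule_start
    stop  = var.cloudwatch_schedule_stop
  }
}

resource "aws_cloudwatch_event_rule" "ec2" {
  for_each            = local.ec2_schedules
  name                = "${each.key}-ec2-event-rule"
  schedule_expression = each.value
}

resource "aws_cloudwatch_event_target" "ec2" {
  # Iterating over a for_each resource yields a map keyed the same way.
  for_each = aws_cloudwatch_event_rule.ec2
  rule     = each.value.name
  arn      = aws_lambda_function.lambda_rscheduler.arn
  input    = jsonencode({ environment = var.environment, action = each.key })
}

resource "aws_lambda_permission" "ec2" {
  for_each      = aws_cloudwatch_event_rule.ec2
  action        = "lambda:InvokeFunction"
  function_name = aws_lambda_function.lambda_rscheduler.function_name
  principal     = "events.amazonaws.com"
  source_arn    = each.value.arn
}
```

Adding a third schedule then means adding one entry to local.ec2_schedules rather than three new resource blocks.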

Terraform: S3 trigger code is failing with status code 400

In Terraform, I'm trying to use an S3 bucket as a trigger for my Lambda and to grant the necessary permissions. For this use case I create the S3 resource and reference the Lambda function in the trigger logic. But the code fails with the error below. Please help me resolve this issue.
#########################################
# Creating Lambda resource
###########################################
resource "aws_lambda_function" "test_lambda" {
  filename      = "output/welcome.zip"
  function_name = var.function_name
  role          = var.role_name
  handler       = var.handler_name
  runtime       = var.run_time
}

######################################################
# Creating s3 resource for invoking to lambda function
######################################################
resource "aws_s3_bucket" "bucket" {
  bucket = "source-bucktet-testing"
}

#####################################################################
# Adding S3 bucket as trigger to my lambda and giving the permissions
#####################################################################
resource "aws_s3_bucket_notification" "aws-lambda-trigger" {
  bucket = aws_s3_bucket.bucket.id

  lambda_function {
    lambda_function_arn = aws_lambda_function.test_lambda.arn
    events              = ["s3:ObjectCreated:*"]
    filter_prefix       = "file-prefix"
    filter_suffix       = "file-extension"
  }
}

resource "aws_lambda_permission" "test" {
  statement_id  = "AllowS3Invoke"
  action        = "lambda:InvokeFunction"
  function_name = aws_lambda_function.test_lambda.function_name
  principal     = "s3.amazonaws.com"
  source_arn    = "arn:aws:s3:::aws_s3_bucket.bucket.id"
}
Error message:
Error: Error putting S3 notification configuration: InvalidArgument: Unable to validate the following destination configurations
status code: 400, request id: 8D16EE1EF8FC0E63, host id: PlzqurwmHo3hDJdr0nUhOGuJKnghOBCtMImZ+8fEFX3JPjKV2M47UZuJ5Z26FalKxmoF1Xl8lag=
Your source_arn in aws_lambda_permission is incorrect. It should be:
source_arn = aws_s3_bucket.bucket.arn
At present your source_arn is literally the string "arn:aws:s3:::aws_s3_bucket.bucket.id" (the reference is inside the quotes, so it is never resolved), which is not a valid bucket ARN.
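With that one-line change, the permission block from the question becomes (everything else unchanged):

```hcl
resource "aws_lambda_permission" "test" {
  statement_id  = "AllowS3Invoke"
  action        = "lambda:InvokeFunction"
  function_name = aws_lambda_function.test_lambda.function_name
  principal     = "s3.amazonaws.com"
  source_arn    = aws_s3_bucket.bucket.arn # resolves to the bucket's real ARN
}
```

Because the ARN in the permission now matches the bucket, S3 can validate the destination and the PutBucketNotificationConfiguration call succeeds.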

S3 failing to trigger aws lambda with no error

Hi, I want to trigger a Lambda function from an S3 bucket on any *.csv file upload.
My Lambda works fine and I can run it, but when I upload a CSV to S3 the Lambda is not triggered.
Below is the code for my S3 bucket notification:
resource "aws_s3_bucket" "myfirst-s3-bucket" {
  bucket = "myfirst-s3-bucket"
}

resource "aws_s3_bucket_notification" "bucket_notification" {
  bucket = "${aws_s3_bucket.myfirst-s3-bucket.id}"

  lambda_function {
    lambda_function_arn = "${aws_lambda_function.lambda.arn}"
    events              = ["s3:ObjectCreated:*"]
    filter_suffix       = ".jpg"
  }
}

resource "aws_lambda_permission" "perme_bucket" {
  statement_id  = "AllowExecutionFromCloudWatch"
  action        = "lambda:InvokeFunction"
  function_name = "${aws_lambda_function.lambda.arn}"
  principal     = "s3.amazonaws.com"
  source_arn    = "${aws_s3_bucket.myfirst-s3-bucket.arn}"
}
The functional problem is the filter_suffix: it is set to ".jpg", but you upload .csv files, so the notification never matches. Change it to ".csv".
I would also rename the statement_id from AllowExecutionFromCloudWatch to AllowExecutionFromS3Bucket, since the trigger comes from S3; note the statement_id is only a label, so this is for clarity rather than behaviour.
resource "aws_lambda_permission" "perme_bucket" {
  statement_id  = "AllowExecutionFromS3Bucket"
  action        = "lambda:InvokeFunction"
  function_name = "${aws_lambda_function.lambda.arn}"
  principal     = "s3.amazonaws.com"
  source_arn    = "${aws_s3_bucket.myfirst-s3-bucket.arn}"
}