I deploy a Lambda function using Terraform as follows, but I have the following questions:
1) I want null_resource.lambda to run always, or whenever stop_ec2.py changes, so that stop_ec2_upload.zip is never out of date. What should I write in triggers {}?
2) How do I make aws_lambda_function.stop_ec2 upload the new stop_ec2_upload.zip to the cloud when stop_ec2_upload.zip changes?
Right now I have to destroy aws_lambda_function.stop_ec2 and then create it again. Is there anything I can write in the code so that when I run terraform apply, 1) and 2) happen automatically?
resource "null_resource" "lambda" {
triggers {
#what should I write here?
}
provisioner "local-exec" {
command = "mkdir -p lambda_func && cd lambda_py && zip
../lambda_func/stop_ec2_upload.zip stop_ec2.py && cd .."
}
}
resource "aws_lambda_function" "stop_ec2" {
depends_on = ["null_resource.lambda"]
function_name = "stopEC2"
handler = "stop_ec2.handler"
runtime = "python3.6"
filename = "lambda_func/stop_ec2_upload.zip"
source_code_hash =
"${base64sha256(file("lambda_func/stop_ec2_upload.zip"))}"
role = "..."
}
I read the link provided by Chandan and figured it out.
Here is my code, and it works perfectly.
In fact, with archive_file and source_code_hash, I do not need a trigger at all. Whenever I create or modify stop_ec2.py and run terraform apply, the file is re-zipped and uploaded to the cloud.
data "archive_file" "stop_ec2" {
type = "zip"
source_file = "src_dir/stop_ec2.py"
output_path = "dest_dir/stop_ec2_upload.zip"
}
resource "aws_lambda_function" "stop_ec2" {
function_name = "stopEC2"
handler = "stop_ec2.handler"
runtime = "python3.6"
filename = "dest_dir/stop_ec2_upload.zip"
source_code_hash = data.archive_file.stop_ec2.output_base64sha256
role = "..."
}
These might help:
triggers {
  main         = "${base64sha256(file("source/main.py"))}"
  requirements = "${base64sha256(file("source/requirements.txt"))}"
}

triggers = {
  source_file = "${sha1Folder("${path.module}/source")}"
}
REF: https://github.com/hashicorp/terraform/issues/8344
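Note that sha1Folder is not a built-in Terraform function; it is a proposal from the issue linked above. On Terraform 0.12 and later you can get a folder-wide hash from the built-in fileset and filesha1 functions instead; a minimal sketch:

triggers = {
  # hash every file under source/ so that any change re-runs the provisioner
  source_hash = sha1(join("", [for f in fileset("${path.module}/source", "**") : filesha1("${path.module}/source/${f}")]))
}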
I am trying to create an EventBridge rule that will run my Lambda function every 30 minutes.
I based my code on this answer I found here on SO: Use terraform to set up a lambda function triggered by a scheduled event source.
Here is my terraform code:
monitoring/main.tf:
...
module "cloudwatch_event_rule" {
source = "./cloudwatch_event_rule"
extra_tags = local.extra_tags
}
module "lambda_function" {
source = "./lambda_functions"
extra_tags = local.extra_tags
alb_names = var.alb_names
slack_webhook_url = var.slack_webhook_url
environment_tag = local.environment_tag
}
module "cloudwatch_event_target" {
source = "./cloudwatch_event_target"
lambda_function_arn = module.lambda_function.detect_bad_rejects_on_alb_lambda_arn
cloudwatch_event_rule_name = module.cloudwatch_event_rule.cloudwatch_event_rule_name
extra_tags = local.extra_tags
}
monitoring/lambda_functions/main.tf:
resource "aws_lambda_function" "detect_bad_rejects_on_alb" {
filename = var.filename
function_name = var.function_name
role = aws_iam_role.detect_bad_reject_on_alb.arn
handler = var.handler
source_code_hash = filebase64sha256(var.filename)
runtime = var.runtime
timeout = var.timeout
environment {
...
}
}
monitoring/cloudwatch_event_rule/main.tf:
resource "aws_cloudwatch_event_rule" "event_rule" {
name = var.rule_name
description = var.description
schedule_expression = var.schedule_expression
tags = ...
}
monitoring/cloudwatch_event_rule/variables.tf:
...
variable "schedule_expression" {
  type    = string
  default = "rate(30 minutes)"
}
...
monitoring/cloudwatch_event_target/main.tf:
resource "aws_cloudwatch_event_target" "event_target" {
  arn   = var.lambda_function_arn
  rule  = var.cloudwatch_event_rule_name
  input = var.input
}
This ends up creating the Lambda function and the EventBridge rule with my Lambda function as its target, with the schedule expression "rate(30 minutes)", but the Lambda function is never executed. What am I doing wrong?
From what you posted, it seems that you are not adding permissions for invocation. Your code does not show the creation of an aws_lambda_permission resource with the proper rules, so you should add one so that EventBridge can invoke your function (example):
resource "aws_lambda_permission" "event-invoke" {
statement_id = "AllowExecutionFromCloudWatch"
action = "lambda:InvokeFunction"
function_name = var.function_name
principal = "events.amazonaws.com"
source_arn = module.cloudwatch_event_rule.cloudwatch_event_rule_arn
}
Make sure source_arn correctly points to the ARN of your event rule.
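Since the rule lives in its own module, its ARN also has to be exposed as a module output for the source_arn reference above to resolve; a minimal sketch, assuming the layout from the question (the outputs.tf placement is illustrative):

# monitoring/cloudwatch_event_rule/outputs.tf
output "cloudwatch_event_rule_arn" {
  value = aws_cloudwatch_event_rule.event_rule.arn
}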
I am trying to create an AWS lambda Function using terraform.
My terraform directory looks like
terraform
  iam-policies
    main.tf
  lambda
    files/
    main.tf
  main.tf
I have my Lambda function stored inside /terraform/lambda/files/lambda_function.py.
Whenever I run terraform apply, I have a "null_resource" that executes some commands on my local machine to zip the Python file:
variable "pythonfile" {
description = "lambda function python filename"
type = "string"
}
resource "null_resource" "lambda_preconditions" {
triggers {
always_run = "${uuid()}"
}
provisioner "local-exec" {
command = "rm -rf ${path.module}/files/zips"
}
provisioner "local-exec" {
command = "mkdir -p ${path.module}/files/zips"
}
provisioner "local-exec" {
command = "cp -R ${path.module}/files/${var.pythonfile} ${path.module}/files/zips/lambda_function.py"
}
provisioner "local-exec" {
command = "cd ${path.module}/files/zips && zip -r lambda.zip ."
}
}
My "aws_lambda_function" resource looks like this.
resource "aws_lambda_function" "lambda_function" {
filename = "${path.module}/files/zips/lambda.zip"
function_name = "${format("%s-%s-%s-lambda-function", var.name, var.environment, var.function_name)}"
role = "${aws_iam_role.iam_for_lambda.arn}"
handler = "lambda_function.lambda_handler"
source_code_hash = "${base64sha256(format("%s/files/zips/lambda.zip", path.module))}", length(path.cwd) + 1, -1)}")}"
runtime = "${var.function_runtime}"
timeout = "${var.function_timeout}"
memory_size = "${var.function_memory}"
environment {
variables = {
region = "${var.region}"
name = "${var.name}"
environment = "${var.environment}"
}
}
vpc_config {
subnet_ids = ["${var.subnet_ids}"]
security_group_ids = ["${aws_security_group.lambda_sg.id}"]
}
depends_on = [
"null_resource.lambda_preconditions"
]
}
Problem:
Whenever I change the lambda_function.py file and run terraform apply again, everything works fine, but the actual code in the Lambda function does not change.
Also, if I delete all the Terraform state files and apply again, the new change is propagated without any problem.
What could be the possible reason for this?
Instead of using null_resource, I used the archive_file data source, which creates the zip file automatically when changes are detected. Then I referenced the archive_file data source in the Lambda resource's source_code_hash attribute.
archive_file data source
data "archive_file" "lambda_zip" {
type = "zip"
output_path = "${path.module}/files/zips/lambda.zip"
source {
content = "${file("${path.module}/files/ebs_cleanup_lambda.py")}"
filename = "lambda_function.py"
}
}
The lambda resource
resource "aws_lambda_function" "lambda_function" {
filename = "${path.module}/files/zips/lambda.zip"
function_name = "${format("%s-%s-%s-lambda-function", var.name, var.environment, var.function_name)}"
role = "${aws_iam_role.iam_for_lambda.arn}"
handler = "lambda_function.lambda_handler"
source_code_hash = "${data.archive_file.lambda_zip.output_base64sha256}"
runtime = "${var.function_runtime}"
timeout = "${var.function_timeout}"
memory_size = "${var.function_memory}"
environment {
variables = {
region = "${var.region}"
name = "${var.name}"
environment = "${var.environment}"
}
}
vpc_config {
subnet_ids = ["${var.subnet_ids}"]
security_group_ids = ["${aws_security_group.lambda_sg.id}"]
}
}
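With this in place, any change to ebs_cleanup_lambda.py changes output_base64sha256, so the next terraform apply re-uploads the zip automatically; no destroy or manual state changes are needed.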
In my main.tf I have the following:
data "template_file" "lambda_script_temp_file" {
template = "${file("../../../fn/lambda_script.py")}"
}
data "template_file" "library_temp_file" {
template = "${file("../../../library.py")}"
}
data "template_file" "init_temp_file" {
template = "${file("../../../__init__.py")}"
}
data "archive_file" "lambda_resources_zip" {
type = "zip"
output_path = "${path.module}/lambda_resources.zip"
source {
content = "${data.template_file.lambda_script_temp_file.rendered}"
filename = "lambda_script.py"
}
source {
content = "${data.template_file.library_temp_file.rendered}"
filename = "library.py"
}
source {
content = "${data.template_file.init_temp_file.rendered}"
filename = "__init__.py"
}
}
resource "aws_lambda_function" "MyLambdaFunction" {
filename = "${data.archive_file.lambda_resources_zip.output_path}"
function_name = "awesome_lambda"
role = "${var.my_role_arn}"
handler = "lambda_script.lambda_handler"
runtime = "python3.6"
timeout = "300"
}
The problem is that when I modify one of the source files, say lambda_script.py, then upon a new terraform apply, even though the archive file (lambda_resources_zip) gets updated, the Lambda function's script does not get updated (the new archive file does not get uploaded).
I know that in order to avoid this I could first run terraform destroy, but that is not an option for my use case.
*I'm using Terraform v0.11.10
I resolved the issue by adding the following line to the resource definition:
source_code_hash = "${data.archive_file.lambda_resources_zip.output_base64sha256}"
When the source files are modified, the hash value changes, which triggers the function code to be updated.
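For context, here is the resource from the question with the fix applied:

resource "aws_lambda_function" "MyLambdaFunction" {
  filename      = "${data.archive_file.lambda_resources_zip.output_path}"
  function_name = "awesome_lambda"
  role          = "${var.my_role_arn}"
  handler       = "lambda_script.lambda_handler"
  runtime       = "python3.6"
  timeout       = "300"

  # re-upload the code whenever the zip contents change
  source_code_hash = "${data.archive_file.lambda_resources_zip.output_base64sha256}"
}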
This worked for me:
Add this line
source_hash = "${data.archive_file.source.output_base64sha256}"
to the S3 object that holds the Lambda package; it will tell Terraform whether any changes were made to the archive.
Then add this to the Lambda function:
source_code_hash = "${data.archive_file.source.output_base64sha256}"
So your code should look like this:
resource "aws_s3_object" "lambda_object" {
source_hash = "${data.archive_file.source.output_base64sha256}"
bucket = "${aws_s3_bucket.s3.bucket}"
key = "${var.key}"
source = data.archive_file.source.output_path
}
resource "aws_lambda_function" "lambda_" {
function_name = "lambda_name"
source_code_hash = "${data.archive_file.source.output_base64sha256}"
.......
.......
}
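(The snippets above assume an archive_file data source named "source"; a minimal sketch of what it might look like, with illustrative paths:)

data "archive_file" "source" {
  type        = "zip"
  source_dir  = "${path.module}/src"        # illustrative path
  output_path = "${path.module}/lambda.zip" # illustrative path
}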
Worked for me. Best wishes.
I want the data "template_file" in the Terraform code below to execute after the provisioner "file" (basically an Ansible playbook) is copied to the EC2 instance. I am not able to successfully use depends_on in this scenario. Can someone please help me with how I can achieve this? Below is the sample code snippet.
resource "aws_eip" "opendj-source-ami-eip" {
instance = "${aws_instance.opendj-source-ami-server.id}"
vpc = true
connection {
host = "${aws_eip.opendj-source-ami-eip.public_ip}"
user = "ubuntu"
timeout = "3m"
agent = false
private_key = "${file(var.private_key)}"
}
provisioner "file" {
source = "./${var.copy_password_file}"
destination = "/home/ubuntu/${var.copy_password_file}"
}
provisioner "file" {
source = "./${var.ansible_playbook}"
destination = "/home/ubuntu/${var.ansible_playbook}"
}
}
data "template_file" "run-ansible-playbooks" {
template = <<-EOF
#!/bin/bash
ansible-playbook /home/ubuntu/${var.copy_password_file} && ansible-playbook /home/ubuntu/${var.ansible_playbook}
EOF
#depends_on = ["<< not sure what to put here>>"]
}
depends_on applies to a resource (or data source) as a whole, so the format in your case would look like:
data "template_file" "run-ansible-playbooks" {
template = <<-EOF
#!/bin/bash
ansible-playbook /home/ubuntu/${var.copy_password_file} && ansible-playbook /home/ubuntu/${var.ansible_playbook}
EOF
depends_on = ["aws_eip.opendj-source-ami-eip"]
}
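Note that on Terraform 0.12 and later, depends_on takes bare resource references rather than quoted strings:

depends_on = [aws_eip.opendj-source-ami-eip]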
How do I add a trigger to an AWS Lambda function using Terraform?
The desired trigger is S3, object created (all).
My Terraform source code around the Lambda is:
module "s3-object-created-lambda" {
source = "../../../../../modules/lambda"
s3_bucket = "${var.s3_lambda_bucket}"
s3_key = "${var.s3_lambda_key}"
name = "${var.lambda_some_name}"
handler = "code.handler"
env = {
lambda_name = "${var.lambda_base_name}"
lambda_version = "${var.lambda_version}"
}
}
I'm trying to figure out how to add the trigger.
Via the AWS console it is super simple.
After some reading of https://www.terraform.io/docs/providers/aws/r/s3_bucket_notification.html, the solution is:
resource "aws_s3_bucket_notification" "bucket_notification" {
bucket = "${data.terraform_remote_state.stack.bucket_id}"
lambda_function {
lambda_function_arn = "${module.some_lambda.lambda_arn}"
events = ["s3:ObjectCreated:*"]
filter_prefix = "${var.cluster_name}/somepath/"
filter_suffix = ".txt"
}
}
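Note that S3 also needs permission to invoke the function, otherwise the notification is created but the Lambda never fires. A minimal sketch, reusing the module output from above (the bucket_arn output here is illustrative):

resource "aws_lambda_permission" "allow_s3_invoke" {
  statement_id  = "AllowExecutionFromS3Bucket"
  action        = "lambda:InvokeFunction"
  function_name = "${module.some_lambda.lambda_arn}"
  principal     = "s3.amazonaws.com"
  source_arn    = "${data.terraform_remote_state.stack.bucket_arn}" # illustrative output
}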