I am working on an AWS stack with some Lambdas and an S3 bucket (sample code below). How do I generate the zip file for a Lambda via Terraform? I have seen different styles, and the right one probably depends on the Terraform version as well.
resource "aws_lambda_function" "my_lambda" {
  filename         = "my_lambda_func.zip"
  source_code_hash = filebase64sha256("my_lambda_func.zip")
  # ...
}
Using the archive_file data source is the most common approach. You can zip individual files or entire folders, depending on how your Lambda function is developed.
To give a more up-to-date, use-case-based answer: with version 2.3.0 of the hashicorp/archive provider, you can apply the following:
data "archive_file" "dynamodb_stream_lambda_function" {
  type        = "zip"
  source_file = "../../lambda-dynamodb-streams/index.js"
  output_path = "lambda_function.zip"
}

resource "aws_lambda_function" "my_dynamodb_stream_lambda" {
  function_name    = "my-dynamodb-stream-lambda"
  role             = aws_iam_role.my_stream_lambda_role.arn
  handler          = "index.handler"
  filename         = data.archive_file.dynamodb_stream_lambda_function.output_path
  source_code_hash = data.archive_file.dynamodb_stream_lambda_function.output_base64sha256
  runtime          = "nodejs16.x"
}
I have a Terraform configuration that creates a Lambda resource and uses the source_code_hash property to detect changes to the zip. Alongside the zip, I also upload a separate file to S3 containing the SHA256 hash of the zip.
The first deploy works, but the running Lambda is not updated after I update the zip, and in the build log I just see "Still creating..."
How can I see the value of the source_code_hash property? I only see + source_code_hash = (known after apply) in both the plan and the apply output, so I can't tell whether the value is being updated or not.
My code is below:
data "aws_s3_object" "source_hash" {
  bucket = "dap-bucket-2"
  key    = "lambda.zip.sha256"
}

resource "aws_lambda_function" "lambda" {
  function_name    = "lambda_function_name"
  s3_bucket        = "dap-bucket-2"
  s3_key           = "lambda.zip"
  handler          = "template.handleRequest"
  runtime          = "java11"
  role             = aws_iam_role.lambda_exec.arn
  source_code_hash = "${data.aws_s3_object.source_hash.body}"
  publish          = true
}
For S3 objects, you would usually use the etag (note that the etag is the MD5 hash of the object's content, except for multipart uploads and SSE-KMS-encrypted objects, where it is not a plain content hash):
source_code_hash = data.aws_s3_object.source_hash.etag
I want to update the code of a Lambda function. I previously created the Lambda function in the AWS console, and now I am writing Terraform code to update the existing function, but I receive an error.
I tried using this code block:
data "archive_file" "stop_ec2" {
  type        = "zip"
  source_file = "src_dir/stop_ec2.py"
  output_path = "dest_dir/stop_ec2_upload.zip"
}

data "aws_lambda_function" "existing" {
  function_name    = MyPocLambda
  role             = aws_iam_role.iam_for_lambda.arn
  filename         = "dest_dir/stop_ec2_upload.zip"
  source_code_hash = "${data.archive_file.stop_ec2.output_base64sha256}"
}
My error says filename is an unsupported argument ("filename is not expected here").
Is it possible to update a Lambda function using a Terraform data source?
No: data sources are read-only. You have to import your Lambda function into Terraform first. Then you will be able to modify it using an aws_lambda_function resource, not a data source.
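A minimal sketch of that flow, assuming the function is named MyPocLambda (the handler and runtime below are placeholders you would replace with your function's real values):

```hcl
# Define a resource (not a data source) matching the existing function:
resource "aws_lambda_function" "stop_ec2" {
  function_name    = "MyPocLambda"
  role             = aws_iam_role.iam_for_lambda.arn
  handler          = "stop_ec2.handler" # assumption: adjust to your real handler
  runtime          = "python3.9"        # assumption: adjust to your real runtime
  filename         = "dest_dir/stop_ec2_upload.zip"
  source_code_hash = data.archive_file.stop_ec2.output_base64sha256
}

# Then import the existing function into state before applying:
#   terraform import aws_lambda_function.stop_ec2 MyPocLambda
```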
I have a Terraform project that creates multiple Cloud Functions.
I know that if I change the name of the google_storage_bucket_object related to a function, Terraform will see the difference in the zip name and redeploy that Cloud Function.
My question is: is there a way to obtain the same behaviour, but only for the Cloud Functions whose code has actually changed?
resource "google_storage_bucket_object" "zip_file" {
  # Append file MD5 to force the bucket object to be recreated
  name   = "${local.filename}#${data.archive_file.source.output_md5}"
  bucket = var.bucket.name
  source = data.archive_file.source.output_path
}

# Create Java Cloud Function
resource "google_cloudfunctions_function" "java_function" {
  name                  = var.function_name
  runtime               = var.runtime
  available_memory_mb   = var.memory
  source_archive_bucket = var.bucket.name
  source_archive_object = google_storage_bucket_object.zip_file.name
  timeout               = 120
  entry_point           = var.function_entry_point

  event_trigger {
    event_type = var.event_trigger.event_type
    resource   = var.event_trigger.resource
  }

  environment_variables = {
    PROJECT_ID           = var.env_project_id
    SECRET_MAIL_PASSWORD = var.env_mail_password
  }

  timeouts {
    create = "60m"
  }
}
Because the MD5 is appended, every change to the archive gives every Cloud Function a different zip file name, so Terraform redeploys all of them; without the MD5, Terraform does not see any changes to deploy at all.
If I have changed code only inside one function, how can I tell Terraform to redeploy only that one (for example, by changing only its zip file name)?
I hope my question is clear, and I want to thank everyone who tries to help me!
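One approach (a sketch, assuming each function's sources live in their own directory, e.g. functions/<name>/, and that var.function_names lists the function names) is to give each function its own archive_file, so the MD5 in the object name only changes when that function's own code changes:

```hcl
# Hypothetical layout: one source directory per function.
data "archive_file" "source" {
  for_each    = toset(var.function_names) # assumption: list of function names
  type        = "zip"
  source_dir  = "${path.module}/functions/${each.key}"
  output_path = "${path.module}/build/${each.key}.zip"
}

resource "google_storage_bucket_object" "zip_file" {
  for_each = data.archive_file.source
  # The MD5 here is per-function, so only changed functions get a new object name.
  name     = "${each.key}.zip#${each.value.output_md5}"
  bucket   = var.bucket.name
  source   = each.value.output_path
}
```

Each google_cloudfunctions_function would then reference its own google_storage_bucket_object.zip_file[each.key].name, so an unchanged function keeps its old object name and is not redeployed.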
I have some Terraform code like this:
resource "aws_s3_bucket_object" "file1" {
  key    = "someobject1"
  bucket = "${aws_s3_bucket.examplebucket.id}"
  source = "./src/index.php"
}

resource "aws_s3_bucket_object" "file2" {
  key    = "someobject2"
  bucket = "${aws_s3_bucket.examplebucket.id}"
  source = "./src/main.php"
}

# same code here, 10 more files
# ...
Is there a simpler way to do this?
Terraform supports loops via the count meta-argument on resources and data sources.
So, for a slightly simpler example, if you wanted to loop over a well-known list of files you could do something like the following:
locals {
  files = [
    "index.php",
    "main.php",
  ]
}

resource "aws_s3_bucket_object" "files" {
  count  = "${length(local.files)}"
  key    = "${local.files[count.index]}"
  bucket = "${aws_s3_bucket.examplebucket.id}"
  source = "./src/${local.files[count.index]}"
}
Unfortunately Terraform's AWS provider doesn't have support for the equivalent of aws s3 sync or aws s3 cp --recursive although there is an issue tracking the feature request.
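On newer Terraform you can get closer to a recursive copy by enumerating the files at plan time with fileset() and for_each (a sketch; requires Terraform 0.12.8+, and the directory/glob pattern here is an assumption):

```hcl
resource "aws_s3_bucket_object" "files" {
  for_each = fileset("${path.module}/src", "**/*.php")
  bucket   = aws_s3_bucket.examplebucket.id
  key      = each.value
  source   = "${path.module}/src/${each.value}"
  etag     = filemd5("${path.module}/src/${each.value}")
}
```

The etag argument makes Terraform re-upload an object when its file content changes, not just when the file list changes.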
I want to use Terraform for deployment of my lambda functions. I did something like:
provider "aws" {
  region = "ap-southeast-1"
}

data "archive_file" "lambda_zip" {
  type        = "zip"
  source_dir  = "src"
  output_path = "build/lambdas.zip"
}

resource "aws_lambda_function" "test_terraform_function" {
  filename         = "build/lambdas.zip"
  function_name    = "test_terraform_function"
  handler          = "test.handler"
  runtime          = "nodejs8.10"
  role             = "arn:aws:iam::000000000:role/xxx-lambda-basic"
  memory_size      = 128
  timeout          = 5
  source_code_hash = "${data.archive_file.lambda_zip.output_base64sha256}"

  tags = {
    "Cost Center" = "Consulting"
    Developer     = "Jiew Meng"
  }
}
I find that when there is no change to test.js, terraform correctly detects no change
No changes. Infrastructure is up-to-date.
When I do change the test.js file, terraform does detect a change:
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  ~ update in-place

Terraform will perform the following actions:

  ~ aws_lambda_function.test_terraform_function
      last_modified:    "2018-12-20T07:47:16.888+0000" => <computed>
      source_code_hash: "KpnhsytFF0yul6iESDCXiD2jl/LI9dv56SIJnwEi/hY=" => "JWIYsT8SszUjKEe1aVDY/ZWBVfrZYhhb1GrJL26rYdI="
It does build the new zip, but it does not seem to update the function with the new package. It seems that since the filename has not changed, it does not upload the new code. How can I fix this behaviour?
EDIT: Following some of the answers here, I tried:
Using null_resource
Using S3 bucket/object with etag
And it does not update ... Why is that?
I ran into the same issue, and what solved it for me was publishing the Lambda function automatically using the publish argument. To do so, simply set publish = true in your aws_lambda_function resource.
Note that your function will be versioned after this, and each change will create a new version. Therefore, you should make sure to use the qualified_arn attribute reference if you refer to the function anywhere else in your Terraform code.
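Applied to the resource from the question, a minimal sketch:

```hcl
resource "aws_lambda_function" "test_terraform_function" {
  # ... other arguments as in the question ...
  source_code_hash = data.archive_file.lambda_zip.output_base64sha256
  publish          = true
}

# Elsewhere, refer to the published version via:
#   aws_lambda_function.test_terraform_function.qualified_arn
```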
There is a workaround to trigger the resource to be refreshed, assuming the target Lambda source files are src/main.py and src/handler.py. If you have more files to manage, add them one by one:
resource "null_resource" "lambda" {
  triggers {
    main    = "${base64sha256(file("src/main.py"))}"
    handler = "${base64sha256(file("src/handler.py"))}"
  }
}

data "archive_file" "lambda_zip" {
  type        = "zip"
  source_dir  = "src"
  output_path = "build/lambdas.zip"
  depends_on  = ["null_resource.lambda"]
}
Let me know if this works for you.
There are two things you need to take care of:
- upload the zip file to S3 if its content has changed
- update the Lambda function if the zip file content has changed
I can see you are taking care of the latter with source_code_hash, but I don't see how you handle the former. It could look like this:
resource "aws_s3_bucket_object" "zip" {
  bucket = "${aws_s3_bucket.zip.bucket}"
  key    = "myzip.zip"
  source = "${path.module}/myzip.zip"
  etag   = "${md5(file("${path.module}/myzip.zip"))}"
}
etag is the most important option here.
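One caveat: on Terraform 0.12+, file() only supports UTF-8 text and will fail on a binary zip, so use filemd5() instead (same behaviour, sketched below):

```hcl
resource "aws_s3_bucket_object" "zip" {
  bucket = aws_s3_bucket.zip.bucket
  key    = "myzip.zip"
  source = "${path.module}/myzip.zip"
  # filemd5() hashes the file directly, so binary content is fine.
  etag   = filemd5("${path.module}/myzip.zip")
}
```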
I created this module to help ease some of the issues around deploying Lambda with Terraform: https://registry.terraform.io/modules/rojopolis/lambda-python-archive/aws/0.1.4
It may be useful in this scenario. Basically, it replaces the archive_file data source with a specialized Lambda archive data source to better manage a stable source-code hash, etc.