I have a Terraform configuration for creating a Lambda function and am using the source_code_hash property to detect changes to the zip. Along with the zip, I also upload a separate file to S3 containing the SHA256 hash of the zip.
I am able to deploy once, but the problem is that the running Lambda is not updated after I update the zip, and in the build log I keep seeing "Still creating..."
How can I see the value of the source_code_hash property? I just see + source_code_hash = (known after apply) in both the plan and the apply output, so I don't know whether the value is being updated or not.
My code is below:
data "aws_s3_object" "source_hash" {
bucket = "dap-bucket-2"
key = "lambda.zip.sha256"
}
resource "aws_lambda_function" "lambda" {
function_name = "lambda_function_name"
s3_bucket = "dap-bucket-2"
s3_key = "lambda.zip"
handler = "template.handleRequest"
runtime = "java11"
role = aws_iam_role.lambda_exec.arn
source_code_hash = "${data.aws_s3_object.source_hash.body}"
publish = true
}
For S3 objects, usually you would use etag:
source_code_hash = data.aws_s3_object.source_hash.etag
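If you just want to inspect the value Terraform is using, a minimal sketch (assuming the data source from the question) is to expose it as an output so it is printed after apply:

output "lambda_source_hash" {
  # hypothetical output name; prints the hash read from S3 so you can
  # compare it with the hash of the zip you uploaded
  value = data.aws_s3_object.source_hash.etag
}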
Related
I want to update my code on a Lambda function.
I previously created the Lambda function in the AWS console.
I wrote Terraform code to update my existing function, but I received an error.
I tried using this code block:
data "archive_file" "stop_ec2" {
type = "zip"
source_file = "src_dir/stop_ec2.py"
output_path = "dest_dir/stop_ec2_upload.zip"
}
data "aws_lambda_function" "existing" {
function_name = MyPocLambda
role = aws_iam_role.iam_for_lambda.arn
filename = "dest_dir/stop_ec2_upload.zip"
source_code_hash ="${data.archive_file.stop_ec2.output_base64sha256}"
}
My error says filename is an unsupported argument ("filename is not expected here").
Is it possible to update a Lambda function using a Terraform data source?
You have to import your Lambda function into Terraform first. Then you will be able to modify it using Terraform code with the aws_lambda_function resource, not the data source.
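A rough sketch of that flow (assuming the function is named MyPocLambda; the handler and runtime below are placeholders to adjust to your real values):

resource "aws_lambda_function" "existing" {
  function_name    = "MyPocLambda"
  role             = aws_iam_role.iam_for_lambda.arn
  handler          = "stop_ec2.handler"   # assumption: replace with your actual handler
  runtime          = "python3.9"          # assumption: replace with your actual runtime
  filename         = "dest_dir/stop_ec2_upload.zip"
  source_code_hash = data.archive_file.stop_ec2.output_base64sha256
}

Then run terraform import aws_lambda_function.existing MyPocLambda once, after which terraform plan/apply can update the existing function's code.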
I am working on an AWS stack that has some Lambdas and an S3 bucket (sample code below). How do I generate the zip file for a Lambda via Terraform? I have seen different styles, and it probably depends on the version of Terraform as well.
resource "aws_lambda_function" "my_lambda" {
filename = "my_lambda_func.zip"
source_code_hash = filebase64sha256("my_lambda_func.zip")
Using archive_file would be most common. You can zip individual files or entire folders, depending on how your Lambda function is developed.
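For example, a folder-based sketch (assuming your sources live in a src directory next to the configuration) zips the entire directory instead of a single file:

data "archive_file" "lambda_zip" {
  type        = "zip"
  source_dir  = "${path.module}/src"   # assumption: zip the whole folder
  output_path = "${path.module}/build/lambda.zip"
}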
To give a more up-to-date and use-case-based answer: for terraform version 2.3.0, you can apply the following:
data "archive_file" "dynamodb_stream_lambda_function" {
type = "zip"
source_file = "../../lambda-dynamodb-streams/index.js"
output_path = "lambda_function.zip"
}
resource "aws_lambda_function" "my_dynamodb_stream_lambda" {
function_name = "my-dynamodb-stream-lambda"
role = aws_iam_role.my_stream_lambda_role.arn
handler = "index.handler"
filename = data.archive_file.dynamodb_stream_lambda_function.output_path
source_code_hash = data.archive_file.dynamodb_stream_lambda_function.output_base64sha256
runtime = "nodejs16.x"
}
I'm converting some CloudFormation into Terraform that creates a Lambda and then sets up Provisioned Concurrency and Application Auto Scaling for the Lambda. When Terraform runs the aws_appautoscaling_target resource, it fails with the following message:
Error: Error creating application autoscaling target: ValidationException: Unsupported service namespace, resource type or scalable dimension
I haven't found too many examples of the aws_appautoscaling_target resource being used with Lambdas. Is this no longer supported? For reference, I'm running Terraform version 1.0.11 and I'm using AWS provider version 3.66.0. I'm posting my Terraform below. Thanks.
data "archive_file" "foo_create_dist_pkg" {
source_dir = var.lambda_file_location
output_path = "foo.zip"
type = "zip"
}
resource "aws_lambda_function" "foo" {
function_name = "foo"
description = "foo lambda"
handler = "foo.main"
runtime = "python3.8"
publish = true
role = "arn:aws:iam::${local.account_id}:role/serverless-role"
memory_size = 256
timeout = 900
depends_on = [data.archive_file.foo_create_dist_pkg]
source_code_hash = data.archive_file.foo_create_dist_pkg.output_base64sha256
filename = data.archive_file.foo_create_dist_pkg.output_path
}
resource "aws_lambda_provisioned_concurrency_config" "foo_provisioned_concurrency" {
function_name = aws_lambda_function.foo.function_name
provisioned_concurrent_executions = 15
qualifier = aws_lambda_function.foo.version
}
resource "aws_appautoscaling_target" "autoscale_foo" {
max_capacity = var.PCMax
min_capacity = var.PCMin
resource_id = "function:${aws_lambda_function.foo.function_name}"
scalable_dimension = "lambda:function:ProvisionedConcurrency"
service_namespace = "lambda"
}
You need to publish your Lambda to get a new version. This can be done by setting publish = true in the aws_lambda_function resource. This gives your function a numeric version, which can be used in the aws_appautoscaling_target:
resource "aws_appautoscaling_target" "autoscale_foo" {
max_capacity = var.PCMax
min_capacity = var.PCMin
resource_id = "function:${aws_lambda_function.foo.function_name}:${aws_lambda_function.foo.version}"
scalable_dimension = "lambda:function:ProvisionedConcurrency"
service_namespace = "lambda"
}
Alternatively, you can create an aws_lambda_alias and use that in the aws_appautoscaling_target instead of the Lambda version. Nevertheless, this would also require the function to be published.
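A sketch of that alias-based variant (the alias name here is just an example) could look like this; the scaling target then references the alias instead of the numeric version:

resource "aws_lambda_alias" "foo_live" {
  name             = "live"   # assumption: any alias name works
  function_name    = aws_lambda_function.foo.function_name
  function_version = aws_lambda_function.foo.version
}

resource "aws_appautoscaling_target" "autoscale_foo" {
  max_capacity       = var.PCMax
  min_capacity       = var.PCMin
  resource_id        = "function:${aws_lambda_function.foo.function_name}:${aws_lambda_alias.foo_live.name}"
  scalable_dimension = "lambda:function:ProvisionedConcurrency"
  service_namespace  = "lambda"
}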
I have built the following terraform code:
data "archive_file" "lambda_dependencies_bundle" {
depends_on = [
null_resource.lambda_dependencies
]
output_path = "${local.function_build_folder_path}/build/${local.function_s3_object_key}.zip"
excludes = ["${local.function_build_folder_path}/build/*"]
source_dir = local.function_build_folder_path
type = "zip"
}
resource "aws_s3_bucket" "lambda_dependencies_bucket" {
bucket = local.function_s3_bucket
acl = "private"
}
resource "aws_s3_bucket_object" "lambda_dependencies_upload" {
bucket = aws_s3_bucket.lambda_dependencies_bucket.id
key = "${local.function_s3_object_key}.zip"
source = data.archive_file.lambda_dependencies_bundle.output_path
}
The null_resource.lambda_dependencies is triggered by a file change and just builds all of my code into local.function_build_folder_path.
Every time the null_resource changes, the archive_file.lambda_dependencies_bundle rebuilds (correct behavior!).
But contrary to what I expected, the aws_s3_bucket_object.lambda_dependencies_upload is not triggered by the rebuild of the archive_file.
How can I achieve a re-upload of my archive_file on a rebuild?
I would add etag, which triggers an update of the S3 object whenever its value changes:
resource "aws_s3_bucket_object" "lambda_dependencies_upload" {
bucket = aws_s3_bucket.lambda_dependencies_bucket.id
key = "${local.function_s3_object_key}.zip"
source = data.archive_file.lambda_dependencies_bundle.output_path
etag = data.archive_file.lambda_dependencies_bundle.output_md5
}
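With the etag in place, any change to the archive's content changes output_md5, which forces the aws_s3_bucket_object to be updated and the new zip to be re-uploaded on the next apply.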
I want to use Terraform for deployment of my lambda functions. I did something like:
provider "aws" {
region = "ap-southeast-1"
}
data "archive_file" "lambda_zip" {
type = "zip"
source_dir = "src"
output_path = "build/lambdas.zip"
}
resource "aws_lambda_function" "test_terraform_function" {
filename = "build/lambdas.zip"
function_name = "test_terraform_function"
handler = "test.handler"
runtime = "nodejs8.10"
role = "arn:aws:iam::000000000:role/xxx-lambda-basic"
memory_size = 128
timeout = 5
source_code_hash = "${data.archive_file.lambda_zip.output_base64sha256}"
tags = {
"Cost Center" = "Consulting"
Developer = "Jiew Meng"
}
}
I find that when there is no change to test.js, terraform correctly detects no change
No changes. Infrastructure is up-to-date.
When I do change the test.js file, terraform does detect a change:
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
~ update in-place
Terraform will perform the following actions:
~ aws_lambda_function.test_terraform_function
last_modified: "2018-12-20T07:47:16.888+0000" => <computed>
source_code_hash: "KpnhsytFF0yul6iESDCXiD2jl/LI9dv56SIJnwEi/hY=" => "JWIYsT8SszUjKEe1aVDY/ZWBVfrZYhhb1GrJL26rYdI="
It does build the new zip; however, it does not seem to update the function with the new ZIP. It seems to think that since the filename has not changed, it does not need to upload ... How can I fix this behaviour?
=====
Following some of the answers here, I tried:
Using null_resource
Using S3 bucket/object with etag
And it does not update ... Why is that?
I ran into the same issue and what solved it for me was publishing the Lambda functions automatically using the publish argument. To do so simply set publish = true in your aws_lambda_function resource.
Note that your function will be versioned after this and each change will create a new version. Therefore you should make sure to use the qualified_arn attribute reference if you refer to the function anywhere else in your Terraform code.
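A minimal sketch of that combination (keeping the rest of the arguments from the question unchanged):

resource "aws_lambda_function" "test_terraform_function" {
  # ... same arguments as in the question ...
  publish          = true
  source_code_hash = "${data.archive_file.lambda_zip.output_base64sha256}"
}

# Wherever you reference the function elsewhere (permissions, triggers,
# API Gateway integrations), use the versioned ARN:
# aws_lambda_function.test_terraform_function.qualified_arn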
There is a workaround to force the resource to be refreshed if the target Lambda files are src/main.py and src/handler.py; if you have more files to manage, add them one by one.
resource "null_resource" "lambda" {
triggers {
main = "${base64sha256(file("src/main.py"))}"
handler = "${base64sha256(file("src/handler.py"))}"
}
}
data "archive_file" "lambda_zip" {
type = "zip"
source_dir = "src"
output_path = "build/lambdas.zip"
depends_on = ["null_resource.lambda"]
}
Let me know if this works for you.
There are 2 things you need to take care of:
upload the zip file to S3 if its content has changed
update the Lambda function if the zip file content has changed
I can see you are taking care of the latter with source_code_hash. I don't see how you handle the former. It could look like this:
resource "aws_s3_bucket_object" "zip" {
bucket = "${aws_s3_bucket.zip.bucket}"
key = "myzip.zip"
source = "${path.module}/myzip.zip"
etag = "${md5(file("${path.module}/myzip.zip"))}"
}
etag is the most important option here.
I created this module to help ease some of the issues around deploying Lambda with Terraform: https://registry.terraform.io/modules/rojopolis/lambda-python-archive/aws/0.1.4
It may be useful in this scenario. Basically, it replaces the "archive_file" data source with a specialized Lambda archive data source to better manage a stable source code hash, etc.