Using GitHub Release .zip file for Lambda Function - amazon-web-services

I am trying to use Terraform to spin up a Lambda function that uses source code from a GitHub release package. The location of the package is:
https://github.com/DataDog/datadog-serverless-functions/releases
This will allow me to manually create the AWS Datadog Forwarder without using their CloudFormation template (we want to control as much of the process as possible).
I'm not entirely sure how to pull down that zip file for the Lambda function to use:
resource "aws_lambda_function" "test_lambda" {
filename = "lambda_function_payload.zip"
function_name = "datadog-forwarder"
role = aws_iam_role.datadog_forwarder_role.arn
source_code_hash = filebase64sha256("lambda_function_payload.zip")
runtime = "python3.7"
environment {
variables = {
DD_API_KEY_SECRET_ARN = aws_secretsmanager_secret_version.dd_api_key.arn
#This stops the Forwarder from generating enhanced metrics itself, but it will still forward custom metrics from other lambdas.
DD_ENHANCED_METRICS = false
DD_S3_BUCKET_NAME = aws_s3_bucket.datadog_forwarder.name
}
}
}
I know that the source_code_hash file name will change and the filename of the lambda function will change as well. Any help would be appreciated.

There is no built-in functionality to download files from the internet in Terraform, but you can do it relatively easily using an external data source. For that you would create a bash script that uses curl to download your zip, and then opens it up, inspects it, or does any other processing you need. The data source also returns data that you can use when creating your function; see the sketch below.
An alternative is to use a null_resource with local-exec to curl your zip file. But local-exec is less versatile than using the external data source.
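For illustration, a minimal sketch of the external data source approach might look like the following. The helper script download_forwarder.sh and the filename key it returns are hypothetical; the external data source only requires the program to print a JSON object of string values to stdout:
data "external" "forwarder_zip" {
  # Hypothetical bash script that curls the release zip from GitHub and
  # prints something like {"filename": "aws-dd-forwarder.zip"} to stdout.
  program = ["bash", "${path.module}/download_forwarder.sh"]
}

resource "aws_lambda_function" "test_lambda" {
  filename         = data.external.forwarder_zip.result.filename
  source_code_hash = filebase64sha256(data.external.forwarder_zip.result.filename)
  # ... remaining arguments as in the question ...
}
The same returned filename drives both filename and source_code_hash, so the function is updated whenever a new release is downloaded.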

There is a way to specify a zip file for an AWS Lambda. Check out the example configuration in https://github.com/hashicorp/terraform-provider-aws/blob/main/examples/lambda.
It uses a data source of type archive_file
data "archive_file" "zip" {
type = "zip"
source_file = "hello_lambda.py"
output_path = "hello_lambda.zip"
}
to set the filename and source_code_hash for the aws_lambda_function resource:
resource "aws_lambda_function" "lambda" {
function_name = "hello_lambda"
filename = data.archive_file.zip.output_path
source_code_hash = data.archive_file.zip.output_base64sha256
.....
}
See the example files for complete details.
Under the hood, the Terraform AWS provider calls the CreateFunction API (https://docs.aws.amazon.com/lambda/latest/dg/API_CreateFunction.html), which allows you to specify a zip file.

Related

Is it possible to update the source code of a GCP Cloud Function in Terraform?

I use Terraform to manage resources of Google Cloud Functions. But while the initial deployment of the cloud function worked, further deployments with changed cloud function source code (the source archive sourcecode.zip) were not rolled out when I ran terraform apply after updating the source archive.
The storage bucket object gets updated but this does not trigger an update/redeployment of the cloud function resource.
Is this an error of the provider?
Is there a way to redeploy a function in terraform when the code changes?
The simplified source code I am using:
resource "google_storage_bucket" "cloud_function_source_bucket" {
name = "${local.project}-function-bucket"
location = local.region
uniform_bucket_level_access = true
}
resource "google_storage_bucket_object" "function_source_archive" {
name = "sourcecode.zip"
bucket = google_storage_bucket.cloud_function_source_bucket.name
source = "./../../../sourcecode.zip"
}
resource "google_cloudfunctions_function" "test_function" {
name = "test_func"
runtime = "python39"
region = local.region
project = local.project
available_memory_mb = 256
source_archive_bucket = google_storage_bucket.cloud_function_source_bucket.name
source_archive_object = google_storage_bucket_object.function_source_archive.name
trigger_http = true
entry_point = "trigger_endpoint"
service_account_email = google_service_account.function_service_account.email
vpc_connector = "projects/${local.project}/locations/${local.region}/connectors/serverless-main"
vpc_connector_egress_settings = "ALL_TRAFFIC"
ingress_settings = "ALLOW_ALL"
}
You can append an MD5 or SHA256 checksum of the zip's content to the bucket object's name. That will trigger recreation of the cloud function whenever the source code changes.
${data.archive_file.function_src.output_md5}
data "archive_file" "function_src" {
type = "zip"
source_dir = "SOURCECODE_PATH/sourcecode"
output_path = "./SAVING/PATH/sourcecode.zip"
}
resource "google_storage_bucket_object" "function_source_archive" {
name = "sourcecode.${data.archive_file.function_src.output_md5}.zip"
bucket = google_storage_bucket.cloud_function_source_bucket.name
source = data.archive_file.function_src.output_path
}
You can read more about terraform archive here - terraform archive_file
You might consider that a defect; personally, I am not so sure about it.
Terraform follows certain logic when an apply command is executed.
The question to think about: how does Terraform know that the source code of the cloud function has changed and that the cloud function needs to be redeployed? Terraform does not "read" the cloud function source code and does not compare it with the previous version. It only reads the Terraform configuration files. And if nothing has changed in those files (in comparison to the state file and the resources that exist in the GCP project), there is nothing to redeploy.
Therefore, something has to change, for example the name of the archive file. In that case, Terraform finds out that the cloud function has to be redeployed (because the state file has the old name of the archive object), and the cloud function is redeployed.
An example of that code with a more detailed explanation was provided some time ago: don't pay attention to whether the question itself works, just read the answer.

Terraform: update an existing Lambda function Environment Variable that was created earlier in the same script

I want to update a Lambda function Environment Variable after it is created in the same script.
I want to preserve the ARN; I would just like to update an environment variable after it is created. In my situation, I had to set up the API Gateway configuration to get the URL, and I want to add that URL as an environment variable. So I need the Lambda to set up the deployment, and I need the URL to go back into the integrated Lambda function.
Lambda -> API Gateway -> (API Gateway URL) -> Lambda. Tada!
resource "aws_lambda_function" "lambda" {
filename = "${data.archive_file.zip.output_path}"
source_code_hash = "${data.archive_file.zip.output_base64sha256}"
function_name = "terraformLambdaWebsite"
role = "${aws_iam_role.role.arn}"
handler = "index.handler"
runtime = "nodejs10.x"
tags = {
Environment = "KeepQL"
}
}
Then, after everything is set up, I want to change the Environment variable:
aws_lambda_function.lambda.tags.Environment = "KeepQL2"
I had hoped that Terraform was smart enough to realize that it had already created that Lambda function and, since the hash had not changed, would just determine what was different and update that variable.
Much Thanks
First of all, you are not updating the Lambda function's ENV variables. ENV variables are defined as in the example below:
resource "aws_lambda_function" "test_lambda" {
filename = "lambda_function_payload.zip"
function_name = "lambda_function_name"
role = "${aws_iam_role.iam_for_lambda.arn}"
handler = "exports.test"
source_code_hash = "${filebase64sha256("lambda_function_payload.zip")}"
runtime = "nodejs12.x"
environment {
variables = {
foo = "bar"
}
}
}
What you are doing is updating the tag variable, not an ENV variable. That said, if you change anything in the Lambda config, you need to redeploy the Lambda, which will keep the ARN the same; just the latest version will be updated. So make sure to refer to the ARN of the latest version of the Lambda.
Also, in this flow Lambda -> API Gateway -> (API Gateway URL) -> Lambda, is the Lambda the same one?
If you really need to access the host (API Gateway) URL in the Lambda, I think you need to extract it from the event, i.e. the Event -> headers -> host value, not from an ENV variable. Check the event.json file at this link.
Thanks
Ashish

Why is Lambda still using the old version of the zip file in the S3 bucket?

I'm using Terraform to create a Lambda and an S3 bucket; the binaries that the Lambda uses are stored in this bucket. After I changed the content of the binary file (I overwrote the file in the bucket), it seems like the Lambda is still using the old version of the file. How can I tell Lambda to use the latest version of the file in the bucket?
The Lambda function will not be updated just because the file in the S3 bucket has changed.
This is similar to the CloudFormation behaviour documented for AWS::Lambda::Function Code:
Changes to a deployment package in Amazon S3 are not detected automatically during stack updates. To update the function code, change the object key or version in the template.
In my understanding, we would not know what Lambda code we were executing if the Lambda function automatically picked up changes in the S3 bucket. For proper release management, we should be able to explicitly release a new Lambda deployment (and it is better to avoid using $LATEST and publish versions instead).
You need to trigger the Lambda function update.
One way is to use the source_code_hash attribute of the aws_lambda_function Terraform resource, but the file needs to be local, not in S3, and needs to be changed before running terraform apply.
Or change the S3 object location and set the new path in the s3_key attribute of the aws_lambda_function. For example, for each new release create a new S3 folder "v1", "v2", "v3", ... and use the new folder (as well as updating the alias, preferably).
resource "aws_lambda_function" "authorizer" {
function_name = "${var.lambda_authorizer_name}"
source_code_hash = "${data.archive_file.lambda_authorizer.output_sha}" # <---
s3_bucket = "${aws_s3_bucket.package.bucket}"
s3_key = "${aws_s3_bucket_object.lambda_authorizer_package.id}" # <---
Or enable S3 bucket versioning and change the s3_object_version attribute of the aws_lambda_function, either using version_id from aws_s3_bucket_object, or after changing the file in S3 and checking the version id.
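For example, a rough sketch of that approach (the object key and local paths are illustrative, and the bucket must have versioning enabled):
resource "aws_s3_bucket_object" "lambda_authorizer_package" {
  bucket = aws_s3_bucket.package.bucket
  key    = "authorizer/lambda.zip"              # illustrative key
  source = "${path.module}/lambda.zip"
  etag   = filemd5("${path.module}/lambda.zip") # re-upload (new object version) when the file changes
}

resource "aws_lambda_function" "authorizer" {
  # ... other arguments as above ...
  s3_bucket         = aws_s3_bucket.package.bucket
  s3_key            = aws_s3_bucket_object.lambda_authorizer_package.key
  s3_object_version = aws_s3_bucket_object.lambda_authorizer_package.version_id # <--- triggers the code update
}
When the local zip changes, the etag change re-uploads the object, the new version_id flows into s3_object_version, and Terraform updates the function code.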
Any of these will trigger the update by calling the UpdateFunctionCode API, as in resource_aws_lambda_function.go:
func needsFunctionCodeUpdate(d resourceDiffer) bool {
    return d.HasChange("filename") ||
        d.HasChange("source_code_hash") ||
        d.HasChange("s3_bucket") ||
        d.HasChange("s3_key") ||
        d.HasChange("s3_object_version")
}

// UpdateFunctionCode in the API / SDK
func resourceAwsLambdaFunctionUpdate(d *schema.ResourceData, meta interface{}) error {
    conn := meta.(*AWSClient).lambdaconn
    ...
    codeUpdate := needsFunctionCodeUpdate(d)
    if codeUpdate {
        ...
        log.Printf("[DEBUG] Send Update Lambda Function Code request: %#v", codeReq)
        _, err := conn.UpdateFunctionCode(codeReq)
        if err != nil {
            return fmt.Errorf("Error modifying Lambda Function Code %s: %s", d.Id(), err)
        }
        ...
    }
Alternatively, invoke the AWS CLI update-function-code command, which is basically what the Terraform aws_lambda_function code is doing.
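As a rough sketch (the function name, bucket, key, and local path below are illustrative), that CLI call can be wrapped in a null_resource so it re-runs whenever the package changes:
resource "null_resource" "lambda_code_update" {
  # Re-run the provisioner whenever the local package changes.
  triggers = {
    package_hash = filemd5("${path.module}/lambda.zip")
  }

  provisioner "local-exec" {
    command = "aws lambda update-function-code --function-name my-function --s3-bucket my-bucket --s3-key lambda.zip"
  }
}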

Terraform - Upload file to S3 on every apply

I need to upload a folder to an S3 bucket. When I apply for the first time, it just uploads. But I have two problems here:
The uploaded version outputs as null. I would expect some version_id like 1, 2, 3.
When running terraform apply again, it says Apply complete! Resources: 0 added, 0 changed, 0 destroyed. I would expect it to upload every time I run terraform apply and create a new version.
What am I doing wrong? Here is my Terraform config:
resource "aws_s3_bucket" "my_bucket" {
bucket = "my_bucket_name"
versioning {
enabled = true
}
}
resource "aws_s3_bucket_object" "file_upload" {
bucket = "my_bucket"
key = "my_bucket_key"
source = "my_files.zip"
}
output "my_bucket_file_version" {
value = "${aws_s3_bucket_object.file_upload.version_id}"
}
Terraform only makes changes to the remote objects when it detects a difference between the configuration and the remote object attributes. In the configuration as you've written it so far, the configuration includes only the filename. It includes nothing about the content of the file, so Terraform can't react to the file changing.
To make subsequent changes, there are a few options:
You could use a different local filename for each new version.
You could use a different remote object path for each new version.
You can use the object etag to let Terraform recognize when the content has changed, regardless of the local filename or object path.
The final of these seems closest to what you want in this case. To do that, add the etag argument and set it to be an MD5 hash of the file:
resource "aws_s3_bucket_object" "file_upload" {
bucket = "my_bucket"
key = "my_bucket_key"
source = "${path.module}/my_files.zip"
etag = "${filemd5("${path.module}/my_files.zip")}"
}
With that extra argument in place, Terraform will detect when the MD5 hash of the file on disk is different than that stored remotely in S3 and will plan to update the object accordingly.
(I'm not sure what's going on with version_id. It should work as long as versioning is enabled on the bucket.)
The preferred solution is now to use the source_hash property. Note that aws_s3_bucket_object has been replaced by aws_s3_object.
locals {
  object_source = "${path.module}/my_files.zip"
}

resource "aws_s3_object" "file_upload" {
  bucket      = "my_bucket"
  key         = "my_bucket_key"
  source      = local.object_source
  source_hash = filemd5(local.object_source)
}
Note that etag can have issues when encryption is used.
You shouldn't be using Terraform to do this. Terraform is supposed to orchestrate and provision your infrastructure and its configuration, not files. That said, Terraform is not aware of changes to your files; unless you change their names, Terraform will not update the state.
Also, it is better to use local-exec to do that. Something like:
resource "aws_s3_bucket" "my-bucket" {
# ...
provisioner "local-exec" {
command = "aws s3 cp path_to_my_file ${aws_s3_bucket.my-bucket.id}"
}
}

Can you clone an AWS lambda?

Cloning for different environments. Staging/QA/PROD/DEV etc.
Is there a quick and easy way to clone my lambdas, give them a different name, and adjust configurations from there?
You will need to recreate your Lambda functions in the new account. Go to the Lambda function, click on Actions, and export your function.
Download a deployment package (your code and libraries), and/or an AWS Serverless Application Model (SAM) file that defines your function, its event sources, and permissions.
You or others who you share this file with can use AWS CloudFormation to deploy and manage a similar serverless application. Learn more about how to deploy a serverless application with AWS CloudFormation.
This is an example of Terraform code (Infrastructure as Code) which can be used to stamp out the same lambdas in different environments, e.g. dev/prod.
If you look at this bit of code, function_name = "${var.environment}-first_lambda", it will be clear how the name of the function is prefixed with an environment like dev or prod.
This variable can be passed in at terraform execution time, e.g. TF_VAR_environment="dev" terraform apply, defaulted in variables.tf, or passed in using a *.tfvars file (see the example after the code below).
# main.tf
resource "aws_lambda_function" "first_lambda" {
  function_name    = "${var.environment}-first_lambda"
  filename         = "${data.archive_file.first_zip.output_path}"
  source_code_hash = "${data.archive_file.first_zip.output_base64sha256}"
  role             = "${aws_iam_role.iam_for_lambda.arn}"
  handler          = "first_lambda.lambda_handler"
  runtime          = "python3.6"
  timeout          = 15

  environment {
    variables = {
      value_one = "some value_one"
    }
  }
}
# variables.tf
variable "environment" {
  type        = "string"
  description = "The name of the environment within the project"
  default     = "dev"
}