Can you clone an AWS Lambda?

Cloning for different environments: staging/QA/prod/dev etc.
Is there a quick and easy way to clone my Lambdas, give them a different name, and adjust configurations from there?

You will need to recreate your Lambda functions in the new account. Go to the Lambda function, click on Actions, and export your function.
Download a deployment package (your code and libraries), and/or an AWS Serverless Application Model (SAM) file that defines your function, its event sources, and permissions.
You or others who you share this file with can use AWS CloudFormation to deploy and manage a similar serverless application. Learn more about how to deploy a serverless application with AWS CloudFormation.
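Once exported, the package and SAM template can be deployed with the standard CloudFormation CLI flow. A minimal sketch, assuming the exported template is named template.yaml and my-deploy-bucket is a placeholder staging bucket:

aws cloudformation package \
  --template-file template.yaml \
  --s3-bucket my-deploy-bucket \
  --output-template-file packaged.yaml

aws cloudformation deploy \
  --template-file packaged.yaml \
  --stack-name my-lambda-clone \
  --capabilities CAPABILITY_IAM

You can run the deploy step once per environment with a different stack name, then adjust each function's configuration from there.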

This is an example of Terraform code (infrastructure as code) that can be used to stamp out the same Lambda in different environments (dev/prod etc.).
If you look at this bit of code, function_name = "${var.environment}-first_lambda", it is clear how the name of the function is prefixed with the environment, e.g. dev or prod.
This variable can be passed in at terraform execution time, e.g. TF_VAR_environment="dev" terraform apply, defaulted in variables.tf, or passed in using *.tfvars (see the usage sketch after the code).
# main.tf
resource "aws_lambda_function" "first_lambda" {
  function_name    = "${var.environment}-first_lambda"
  filename         = "${data.archive_file.first_zip.output_path}"
  source_code_hash = "${data.archive_file.first_zip.output_base64sha256}"
  role             = "${aws_iam_role.iam_for_lambda.arn}"
  handler          = "first_lambda.lambda_handler"
  runtime          = "python3.6"
  timeout          = 15

  environment {
    variables = {
      value_one = "some value_one"
    }
  }
}

# variables.tf
variable "environment" {
  type        = "string"
  description = "The name of the environment within the project"
  default     = "dev"
}
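For day-to-day use, a sketch of stamping out each environment (file names here are assumptions; each environment should use its own state, e.g. via workspaces or separate backends, so the dev and prod Lambdas coexist rather than replace each other):

# dev.tfvars (hypothetical)
environment = "dev"

# pass the variable inline...
TF_VAR_environment="prod" terraform apply
# ...or via a tfvars file
terraform apply -var-file=dev.tfvars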

Related

Is it possible to update the source code of a GCP Cloud Function in Terraform?

I use Terraform to manage resources of Google Cloud Functions. But while the initial deployment of the cloud function worked, further deployments with changed cloud function source code (the source archive sourcecode.zip) were not redeployed when I used terraform apply after updating the source archive.
The storage bucket object gets updated, but this does not trigger an update/redeployment of the cloud function resource.
Is this an error of the provider?
Is there a way to redeploy a function in terraform when the code changes?
The simplified source code I am using:
resource "google_storage_bucket" "cloud_function_source_bucket" {
  name                        = "${local.project}-function-bucket"
  location                    = local.region
  uniform_bucket_level_access = true
}

resource "google_storage_bucket_object" "function_source_archive" {
  name   = "sourcecode.zip"
  bucket = google_storage_bucket.cloud_function_source_bucket.name
  source = "./../../../sourcecode.zip"
}

resource "google_cloudfunctions_function" "test_function" {
  name                          = "test_func"
  runtime                       = "python39"
  region                        = local.region
  project                       = local.project
  available_memory_mb           = 256
  source_archive_bucket         = google_storage_bucket.cloud_function_source_bucket.name
  source_archive_object         = google_storage_bucket_object.function_source_archive.name
  trigger_http                  = true
  entry_point                   = "trigger_endpoint"
  service_account_email         = google_service_account.function_service_account.email
  vpc_connector                 = "projects/${local.project}/locations/${local.region}/connectors/serverless-main"
  vpc_connector_egress_settings = "ALL_TRAFFIC"
  ingress_settings              = "ALLOW_ALL"
}
You can append the MD5 or SHA256 checksum of the zip's content to the bucket object's name. That will trigger recreation of the cloud function whenever the source code changes:
${data.archive_file.function_src.output_md5}
data "archive_file" "function_src" {
  type        = "zip"
  source_dir  = "SOURCECODE_PATH/sourcecode"
  output_path = "./SAVING/PATH/sourcecode.zip"
}

resource "google_storage_bucket_object" "function_source_archive" {
  name   = "sourcecode.${data.archive_file.function_src.output_md5}.zip"
  bucket = google_storage_bucket.cloud_function_source_bucket.name
  source = data.archive_file.function_src.output_path
}
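For clarity, the function resource itself then needs no change: it keeps referencing the object by name, and because that name now embeds the hash, each code change yields a new object name and a redeploy (a trimmed sketch reusing the question's resource names):

resource "google_cloudfunctions_function" "test_function" {
  name    = "test_func"
  runtime = "python39"
  # ...other arguments as in the question...

  source_archive_bucket = google_storage_bucket.cloud_function_source_bucket.name
  # This value changes with every hash-suffixed upload,
  # which is what triggers the redeployment.
  source_archive_object = google_storage_bucket_object.function_source_archive.name
}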
You can read more about terraform archive here - terraform archive_file
You might consider that a defect; personally, I am not so sure about it.
Terraform has some logic when an "apply" command is executed.
The question to think about: how does Terraform know that the source code of the cloud function has changed and that the cloud function is to be redeployed? Terraform does not "read" the cloud function source code, and does not compare it with the previous version. It only reads the Terraform script files. And if nothing has changed in those files (in comparison to the state file and the resources existing in GCP projects), there is nothing to redeploy.
Therefore, something has to be changed, for example the name of the archive file. In that case, Terraform finds out that the cloud function has to be redeployed (because the state file has the old name of the archive object), and the cloud function is redeployed.
An example of that code, with a more detailed explanation, was provided some time ago: don't take into account whether the question works - just read the answer.

Terraform does not update AWS canary code

I have been changing an AWS canary's code.
After running terraform apply, I see the updates in the new zip file, but in the AWS console the code is the old one.
What have I done wrong?
My terraform code:
resource "aws_synthetics_canary" "canary" {
  depends_on           = [time_sleep.wait_5_minutes]
  name                 = var.name
  artifact_s3_location = "s3://${local.artifacts_bucket_and_path}"
  execution_role_arn   = aws_iam_role.canary_role.arn
  handler              = "apiCanary.handler"
  start_canary         = true
  zip_file             = data.archive_file.source_zip.output_path
  runtime_version      = "syn-nodejs-puppeteer-3.3"

  tags = {
    Description = var.description
    Entity      = var.entity
    Service     = var.service
  }

  run_config {
    timeout_in_seconds = 300
  }

  schedule {
    expression = "rate(${var.rate_in_minutes} ${var.rate_in_minutes == 1 ? "minute" : "minutes"})"
  }
}
I read this but it didn't help me.
I agree with @mjd2, but in the meantime I worked around it by manually hashing the lambda source and embedding that hash into the source file name:
locals {
  source_code      = <whatever your source is>
  source_code_hash = sha256(local.source_code)
}

data "archive_file" "canary_lambda" {
  type        = "zip"
  output_path = "/tmp/canary_lambda_${local.source_code_hash}.zip"

  source {
    content  = local.source_code
    filename = "nodejs/node_modules/heartbeat.js"
  }
}
This way, anytime the source_code is edited a new output filename will be used, triggering a replacement of the archive resource.
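Wiring that into the canary is then a matter of pointing zip_file at the hashed archive (a trimmed sketch reusing the question's arguments):

resource "aws_synthetics_canary" "canary" {
  name                 = var.name
  artifact_s3_location = "s3://${local.artifacts_bucket_and_path}"
  execution_role_arn   = aws_iam_role.canary_role.arn
  handler              = "apiCanary.handler"
  start_canary         = true
  runtime_version      = "syn-nodejs-puppeteer-3.3"

  # The hash embedded in the file name changes whenever the source does,
  # so Terraform sees a new zip_file value and pushes the update.
  zip_file = data.archive_file.canary_lambda.output_path
}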
This could be a permission issue with your deployment role. Your role must have permission to modify the lambda behind the canary in order to apply the new layer that your zip file change creates.
Unfortunately any errors that occur when applying changes to the lambda are not communicated via terraform, or anywhere in the AWS console, but if it fails then your canary will continue to point to an old version of the lambda, without your code changes.
You should be able to see which version of the lambda your canary is using by checking the "Script location" field on the Configuration tab for your canary. Additionally, if you click in to the script location you will be able to see if you have newer, unpublished versions of the lambda layer available with your code changes in it.
To verify if the failure is a permission issue you need to query your canary via the AWS CLI.
Run aws synthetics get-canary --name <your canary name> and check the Status.StateReason.
If there was a permission issue when attempting to apply your change you should see something along the lines of:
<user> is not authorized to perform: lambda:UpdateFunctionConfiguration on resource: <lambda arn>
Based on the above you should be able to add any missing permissions to your deployment role's IAM policy and try your deployment again.
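As an illustrative sketch (the role reference and the exact action list are assumptions; match them to what your deployment actually does), the missing permission could be granted like so:

resource "aws_iam_role_policy" "canary_deploy_lambda_access" {
  name = "canary-deploy-lambda-access" # hypothetical policy name
  role = aws_iam_role.deploy_role.id   # hypothetical deployment role

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect = "Allow"
      Action = [
        "lambda:UpdateFunctionConfiguration",
        "lambda:UpdateFunctionCode",
        "lambda:PublishLayerVersion",
      ]
      # Ideally scope this down to the canary's lambda and layer ARNs.
      Resource = "*"
    }]
  })
}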
Hit the same issue. It seems like the canary itself is a beta project that made it to production, and the terraform resource that manages it also leaves much to be desired. There is no source_code_hash attribute like with lambda, so you need to taint the entire canary resource so it gets recreated with any updated code (see the commands below). AWS Canary as of Nov 2022 is not mature at all. It should support integration with Slack or at least AWS Chatbot out of the box, but it doesn't. Hopefully the AWS team gives it some love, because it's terrible as-is in comparison to New Relic, Dynatrace, and most other monitoring services that support synthetics.
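For reference, the taint workflow looks like this (resource address taken from the code in the question):

terraform taint aws_synthetics_canary.canary
terraform apply

On Terraform 0.15.2+ the same effect is available without tainting, via terraform apply -replace=aws_synthetics_canary.canary.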

Using GitHub Release .zip file for Lambda Function

I am trying to use Terraform to spin up a lambda function that uses source code in a GitHub release package. The location of the package is:
https://github.com/DataDog/datadog-serverless-functions/releases
This will allow me to manually create the AWS DataDog forwarder without using their CloudFormation template (we want to control as much of the process as possible).
I'm not entirely sure how to pull down that zip file for the lambda function to use:
resource "aws_lambda_function" "test_lambda" {
  filename         = "lambda_function_payload.zip"
  function_name    = "datadog-forwarder"
  role             = aws_iam_role.datadog_forwarder_role.arn
  source_code_hash = filebase64sha256("lambda_function_payload.zip")
  runtime          = "python3.7"

  environment {
    variables = {
      DD_API_KEY_SECRET_ARN = aws_secretsmanager_secret_version.dd_api_key.arn
      # This stops the Forwarder from generating enhanced metrics itself,
      # but it will still forward custom metrics from other lambdas.
      DD_ENHANCED_METRICS = false
      DD_S3_BUCKET_NAME   = aws_s3_bucket.datadog_forwarder.bucket
    }
  }
}
I know that the source_code_hash file name will change and the filename of the lambda function will change as well. Any help would be appreciated.
There is no built-in functionality to download files from the internet in Terraform. But you could do that relatively easily using an external data source (see the sketch below). For that you would create a bash script that could use curl to download your zip, open it up, and inspect or do any processing you need. The data source would also return data that you can use for the creation of your function.
An alternative is to use null_resource with a local-exec provisioner to curl your zip file. But local-exec is less versatile than using the external data source.
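A minimal sketch of the external data source approach (download.sh, the variable, and the JSON keys are all assumptions; the script just has to print a flat JSON object on stdout):

variable "forwarder_zip_url" {
  type        = string
  description = "URL of the GitHub release asset to download (placeholder)"
}

# download.sh (hypothetical) could be roughly:
#   #!/usr/bin/env bash
#   set -euo pipefail
#   curl -sL "$1" -o /tmp/forwarder.zip
#   printf '{"path":"/tmp/forwarder.zip","hash":"%s"}' \
#     "$(openssl dgst -sha256 -binary /tmp/forwarder.zip | base64)"
data "external" "forwarder_zip" {
  program = ["bash", "${path.module}/download.sh", var.forwarder_zip_url]
}

resource "aws_lambda_function" "test_lambda" {
  function_name    = "datadog-forwarder"
  role             = aws_iam_role.datadog_forwarder_role.arn
  filename         = data.external.forwarder_zip.result.path
  source_code_hash = data.external.forwarder_zip.result.hash
  handler          = "lambda_function.lambda_handler" # placeholder handler
  runtime          = "python3.7"
}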
There is a way to specify a zip file for an AWS Lambda. Check out the example configuration in https://github.com/hashicorp/terraform-provider-aws/blob/main/examples/lambda.
It uses a data source of type archive_file:
data "archive_file" "zip" {
  type        = "zip"
  source_file = "hello_lambda.py"
  output_path = "hello_lambda.zip"
}
to set the filename and source_code_hash for the aws_lambda_function resource:
resource "aws_lambda_function" "lambda" {
  function_name    = "hello_lambda"
  filename         = data.archive_file.zip.output_path
  source_code_hash = data.archive_file.zip.output_base64sha256
  .....
}
See the example files for complete details.
The Terraform AWS provider is calling the CreateFunction API (https://docs.aws.amazon.com/lambda/latest/dg/API_CreateFunction.html), which allows you to specify a zip file.

Terraform: update an existing Lambda function environment variable that was created earlier

I want to update a Lambda function environment variable after it is created, in the same script.
I want to preserve the ARN; I would just like to update an environment variable after the function is created. In my situation, I had to set up the API Gateway configuration to get the URL, and I add that URL as an environment variable. So, I need the Lambda to set up the deployment, and I need the URL to go back into the integrated Lambda function.
Lambda -> API Gateway -> (API Gateway URL) -> Lambda. Tada!
resource "aws_lambda_function" "lambda" {
  filename         = "${data.archive_file.zip.output_path}"
  source_code_hash = "${data.archive_file.zip.output_base64sha256}"
  function_name    = "terraformLambdaWebsite"
  role             = "${aws_iam_role.role.arn}"
  handler          = "index.handler"
  runtime          = "nodejs10.x"

  tags = {
    Environment = "KeepQL"
  }
}
Then, after everything is set up, I want to change the environment variable.
aws_lambda_function.lambda.tags.Environment = "KeepQL2"
I had hoped that Terraform was smart enough to realize that it had already created that Lambda function, and since the hash had not changed, it would just determine what was different and update that variable.
Much thanks.
First of all, you are not updating the Lambda function's ENV variables. ENV variables are set in the environment block, as in the example below:
resource "aws_lambda_function" "test_lambda" {
  filename         = "lambda_function_payload.zip"
  function_name    = "lambda_function_name"
  role             = "${aws_iam_role.iam_for_lambda.arn}"
  handler          = "exports.test"
  source_code_hash = "${filebase64sha256("lambda_function_payload.zip")}"
  runtime          = "nodejs12.x"

  environment {
    variables = {
      foo = "bar"
    }
  }
}
What you are doing is updating the tag variable, not an ENV variable. That said, if you change anything in the Lambda config, you need to redeploy the Lambda, which keeps the ARN the same; just the latest version will be updated. So make sure to refer to the ARN of the latest version of the Lambda, as sketched below.
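A minimal sketch of pinning to the latest published version (publish and qualified_arn are standard aws_lambda_function features; the rest of the arguments are carried over from the example above):

resource "aws_lambda_function" "test_lambda" {
  filename         = "lambda_function_payload.zip"
  function_name    = "lambda_function_name"
  role             = "${aws_iam_role.iam_for_lambda.arn}"
  handler          = "exports.test"
  source_code_hash = "${filebase64sha256("lambda_function_payload.zip")}"
  runtime          = "nodejs12.x"

  # Publish a numbered version on every code/config change.
  publish = true
}

# qualified_arn carries the version suffix, so anything wired to it
# always refers to the most recently deployed version.
output "latest_version_arn" {
  value = "${aws_lambda_function.test_lambda.qualified_arn}"
}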
Also, in this flow Lambda -> API Gateway -> (API Gateway URL) -> Lambda, is the Lambda the same?
If you really need to access the host (API Gateway) link in the Lambda, I think you need to extract it from the event: the Event->headers->host value, not an ENV variable. Check the event.json file at this link.
Thanks
Ashish

Elastic Beanstalk Application Version in Terraform

I attempted to manage my application versions in my Terraform template by parameterising the name. This was an attempt to have a new application version created by our CI process whenever the contents of the application changed. This way, in Elastic Beanstalk, I could keep a list of historic application versions so that I could roll back etc. This didn't work, as the same application version was constantly updated, and in effect I lost the history of all application versions.
resource "aws_elastic_beanstalk_application_version" "default" {
  name        = "${var.eb-app-name}-${var.build-number}"
  application = "${var.eb-app-name}"
  description = "application version created by terraform"
  bucket      = "${aws_s3_bucket.default.id}"
  key         = "${aws_s3_bucket_object.default.id}"
}
I then tried to parameterise the logical resource reference name, but this isn't supported by Terraform.
resource "aws_elastic_beanstalk_application_version" "${var.build-number}" {
  name        = "${var.eb-app-name}-${var.build-number}"
  application = "${var.eb-app-name}"
  description = "application version created by terraform"
  bucket      = "${aws_s3_bucket.default.id}"
  key         = "${aws_s3_bucket_object.default.id}"
}
Currently my solution is to manage my application versions outside of Terraform, which is disappointing as there are other associated resources such as the S3 bucket and permissions to worry about.
Am I missing something?
As far as Terraform is concerned you are just updating a single EB application version resource there. If you wanted to keep the previous versions around then you might need to try and increment the count of resources that Terraform is managing.
Off the top of my head you could try something like this:
variable "builds" {
  type = "list"
}

resource "aws_elastic_beanstalk_application_version" "default" {
  count       = "${length(var.builds)}"
  name        = "${var.eb-app-name}-${element(var.builds, count.index)}"
  application = "${var.eb-app-name}"
  description = "application version created by terraform"
  bucket      = "${aws_s3_bucket.default.id}"
  key         = "${aws_s3_bucket_object.default.id}"
}
Then if you have a list of builds it should create a new application version for each build.
Of course that could be dynamic in that the variable could instead be a data source that returns a list of all your builds. If a data source doesn't exist for it already you could write a small script that is used as an external data source.
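A minimal sketch of that external data source idea (list_builds.sh and the JSON key are assumptions; the external provider only returns flat string maps, hence the join/split):

# Hypothetical helper: list_builds.sh prints something like
#   {"builds": "101,102,103"}
data "external" "build_list" {
  program = ["bash", "${path.module}/list_builds.sh"]
}

locals {
  builds = split(",", data.external.build_list.result.builds)
}

resource "aws_elastic_beanstalk_application_version" "default" {
  count       = length(local.builds)
  name        = "${var.eb-app-name}-${local.builds[count.index]}"
  application = var.eb-app-name
  description = "application version created by terraform"
  bucket      = aws_s3_bucket.default.id
  key         = aws_s3_bucket_object.default.id
}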