How to set up a Lambda alias with the same event source mapping as the $LATEST/unqualified Lambda function in Terraform

I'm trying to create a Lambda alias for my Lambda function using Terraform. I've been able to successfully create the alias, but the created alias is missing the DynamoDB stream as its trigger.
How the event source is set up:
resource "aws_lambda_event_source_mapping" "db_stream_trigger" {
  batch_size        = 10
  event_source_arn  = "${data.terraform_remote_state.testddb.table_stream_arn}"
  enabled           = true
  function_name     = "${aws_lambda_function.test_lambda.arn}"
  starting_position = "LATEST"
}
How the alias is created:
resource "aws_lambda_alias" "test_lambda_alias" {
  count       = "${var.create_alias ? 1 : 0}"
  depends_on  = ["aws_lambda_function.test_lambda"]
  name        = "test_alias"
  description = "alias for my test lambda"

  function_name    = "${aws_lambda_function.test_lambda.arn}"
  function_version = "${var.current_running_version}"

  routing_config = {
    additional_version_weights = "${map(
      "${aws_lambda_function.test_lambda.version}", "0.5"
    )}"
  }
}
The Lambda works with the DynamoDB stream as a trigger.
The alias for the Lambda is successfully created.
The alias is using the correct version.
The alias is using the correct weight.
The alias is NOT using the DynamoDB stream as the event source.

I had the wrong function_name in the aws_lambda_event_source_mapping resource. I was providing it the main Lambda function's ARN as opposed to the alias's ARN. Once I switched it to the alias's ARN, I was able to successfully divide the traffic from the stream according to the weight!
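A minimal sketch of the corrected mapping; because the alias resource above uses count, Terraform 0.11 references it by index (an assumption based on the question's syntax):
resource "aws_lambda_event_source_mapping" "db_stream_trigger" {
  batch_size       = 10
  event_source_arn = "${data.terraform_remote_state.testddb.table_stream_arn}"
  enabled          = true

  # Point at the alias ARN, not the function ARN, so stream invocations
  # are split across versions according to the alias routing config
  function_name = "${aws_lambda_alias.test_lambda_alias.0.arn}"

  starting_position = "LATEST"
}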
From the AWS docs:
Simplify management of event source mappings – Instead of using Amazon Resource Names (ARNs) for Lambda function in event source mappings, you can use an alias ARN. This approach means that you don't need to update your event source mappings when you promote a new version or roll back to a previous version.
https://docs.aws.amazon.com/lambda/latest/dg/aliases-intro.html

Related

How to set log retention days for a CloudFront function in Terraform?

I have an example CloudFront function:
resource "aws_cloudfront_function" "cool_function" {
  name    = "cool-function"
  runtime = "cloudfront-js-1.0"
  comment = "The cool function"
  publish = true

  code = <<EOT
function handler(event) {
  var headers = event.request.headers;
  if (
    typeof headers.coolheader === "undefined" ||
    headers.coolheader.value !== "That_is_cool_bro"
  ) {
    console.log("That is not cool bro!");
  }
  return event.request;
}
EOT
}
When I create this function, the CloudWatch log group /aws/cloudfront/function/cool-function is created automatically, but its retention policy is Never Expire, and I can't see any parameter in Terraform that allows setting the retention days.
So the question is: is it possible to automatically import the aws_cloudwatch_log_group every time a CloudFront function is created, and change retention_in_days for that resource?
Quite a few AWS services create their log groups implicitly on first use. To prevent that, you need to explicitly create the group before the service has a chance to do it.
For that you need to define the aws_cloudwatch_log_group with the given name yourself, specify the correct retention, and then create an explicit depends_on relation between the function and the log group to ensure the log group is created first. For migration purposes you would now need to import already created log groups into your Terraform state.
resource "aws_cloudfront_function" "cool_function" {
name = "cool-function"
...
depends_on = [
aws_cloudwatch_log_group.logs
]
}
resource "aws_cloudwatch_log_group" "logs" {
name = "/aws/cloudfront/function/cool-function"
retention_in_days = 123
...
}
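For the migration case, an already created log group can be imported into state by its name, for example:
terraform import aws_cloudwatch_log_group.logs /aws/cloudfront/function/cool-function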

Terraform: update an existing Lambda function that was created in the AWS console

I want to update my code on a Lambda function. I previously created the Lambda function in the AWS console, and I'm now writing Terraform code to update the existing function, but I received an error.
I tried using this block of code:
data "archive_file" "stop_ec2" {
type = "zip"
source_file = "src_dir/stop_ec2.py"
output_path = "dest_dir/stop_ec2_upload.zip"
}
data "aws_lambda_function" "existing" {
function_name = MyPocLambda
role = aws_iam_role.iam_for_lambda.arn
filename = "dest_dir/stop_ec2_upload.zip"
source_code_hash ="${data.archive_file.stop_ec2.output_base64sha256}"
}
My error says filename is an unsupported argument ("filename" is not expected here).
Is it possible to update a Lambda function using a Terraform data source?
You have to import your Lambda function into Terraform first. Then you will be able to modify it with Terraform code using the aws_lambda_function resource, not a data source.
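A minimal sketch of that approach, using the function name MyPocLambda from the question; the handler and runtime shown are placeholders that must match the real function's configuration:
resource "aws_lambda_function" "existing" {
  function_name    = "MyPocLambda"
  role             = aws_iam_role.iam_for_lambda.arn
  handler          = "stop_ec2.handler"  # placeholder, must match the existing function
  runtime          = "python3.9"         # placeholder, must match the existing function
  filename         = "dest_dir/stop_ec2_upload.zip"
  source_code_hash = "${data.archive_file.stop_ec2.output_base64sha256}"
}
Then bring the console-created function under Terraform's management by importing it by name:
terraform import aws_lambda_function.existing MyPocLambda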

Pointing API Gateway to specific version of a function or an alias

I have created a Lambda function using Terraform like so:
module "my-lambda" {
  source  = "terraform-aws-modules/lambda/aws"
  version = "~> v1.31.0"

  function_name = "${var.environment_name}-${local.lambda_name}"
  ...
  publish = true
}

#...

module "alias" {
  source = "terraform-aws-modules/lambda/aws//modules/alias"

  name             = "${var.lambda_name}-latest"
  function_name    = module.my-lambda.this_lambda_function_name
  function_version = module.my-lambda.this_lambda_function_version
}
So presently I have a published Lambda version with a number, to which I also added an alias, plus the $LATEST version. The reason I need the published version is that Provisioned Concurrency can only be attached to a published version.
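Attaching Provisioned Concurrency to that published version looks roughly like this (a sketch; the qualifier must be a published version or alias, never $LATEST, and the capacity value is assumed for illustration):
resource "aws_lambda_provisioned_concurrency_config" "this" {
  function_name                     = module.my-lambda.this_lambda_function_name
  qualifier                         = module.my-lambda.this_lambda_function_version
  provisioned_concurrent_executions = 1  # assumed capacity for illustration
}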
My API gateway integration looks like:
module "api_gateway" {
source = "terraform-aws-modules/apigateway-v2/aws"
version = "~> 0.9.0"
name = "${var.environment_name}-my-api"
# ...Abriged...
integrations = {
"GET /mymethod" = {
integration_type = "AWS_PROXY"
integration_http_method = "POST"
payload_format_version = "2.0"
lambda_arn = module.my-lambda.this_lambda_function_invoke_arn
}
}
However, in the console I can see that it is triggering the $LATEST version, not the published one. How can I alter the configuration so that a particular version (or alias) is triggered by the API Gateway integration?
I am not familiar with Terraform, but to invoke an alias from API Gateway you need the Lambda alias ARN in the API Gateway integration.
You are getting the function's ARN from the Lambda module, but you need the ARN of the alias:
lambda_arn = module.my-lambda.this_lambda_function_invoke_arn
The alias ARN is nothing but <Lambda ARN>:<AliasName>.
So try taking the ARN from module.alias, or append the alias name to the Lambda ARN with : in the integrations.
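A minimal sketch of the first option, under the assumption that the alias submodule exposes an invoke ARN output (it was named this_lambda_alias_invoke_arn in the v1.x series of the module; check the outputs of the version you use):
integrations = {
  "GET /mymethod" = {
    integration_type        = "AWS_PROXY"
    integration_http_method = "POST"
    payload_format_version  = "2.0"

    # Invoke ARN of the alias, so the integration targets the alias
    # (and the published version it points to) instead of $LATEST
    lambda_arn = module.alias.this_lambda_alias_invoke_arn  # assumed output name
  }
}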

Use an existing lambda function as datasource in Terraform

I'm trying to use an existing Lambda function as a data source and create an EC2 instance. This Lambda function essentially provides the latest AMI.
I'm looking at this doc:
https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/lambda_invocation
Source Block:
data "aws_lambda_invocation" "example" {
function_name = aws_lambda_function.resource_selector.ResourceSelector
input = <<JSON
{
"key1": "AMIRegexPattern"
}
JSON
}
output "result_entry" {
value = jsondecode(data.aws_lambda_invocation.example.result)["key1"]
}
It throws this error and I'm a little lost:
Error: Reference to undeclared resource

  on create-ec2.tf line 26, in data "aws_lambda_invocation" "example":
  26: function_name = aws_lambda_function.resource_selector.ResourceSelector

A managed resource "aws_lambda_function" "resource_selector" has not been
declared in the root module.
Here are the function details:
Function name: ResourceSelector
Function ARN: arn:aws:lambda:us-east-1:xx50:function:ResourceSelector
Any help on what I am missing? I'm also curious about this line in particular, and whether it is correct:
function_name = aws_lambda_function.resource_selector.ResourceSelector
Thanks
If the Lambda is created outside Terraform, you have to hardcode the name or pass it in via a variable, like so:
function_name = "ResourceSelector"
aws_lambda_function.resource_selector doesn't exist; alternatively, you can manage the Lambda yourself by defining the aws_lambda_function resource, as sketched below.
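A minimal sketch of the corrected data source with the hardcoded name:
data "aws_lambda_invocation" "example" {
  # Function exists outside Terraform, so it is referenced by name
  function_name = "ResourceSelector"

  input = <<JSON
{
  "key1": "AMIRegexPattern"
}
JSON
}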
Also, if you just want to get the latest AMI, you don't need a Lambda at all. Terraform has a data source that can pull that for you: aws_ami.
Example:
data "aws_ami" "example" {
most_recent = true
owners = ["self"]
filter {
name = "name"
values = ["myami-*"]
}
}

Terraform not uploading a new ZIP

I want to use Terraform for deployment of my lambda functions. I did something like:
provider "aws" {
region = "ap-southeast-1"
}
data "archive_file" "lambda_zip" {
type = "zip"
source_dir = "src"
output_path = "build/lambdas.zip"
}
resource "aws_lambda_function" "test_terraform_function" {
filename = "build/lambdas.zip"
function_name = "test_terraform_function"
handler = "test.handler"
runtime = "nodejs8.10"
role = "arn:aws:iam::000000000:role/xxx-lambda-basic"
memory_size = 128
timeout = 5
source_code_hash = "${data.archive_file.lambda_zip.output_base64sha256}"
tags = {
"Cost Center" = "Consulting"
Developer = "Jiew Meng"
}
}
I find that when there is no change to test.js, Terraform correctly detects no change:
No changes. Infrastructure is up-to-date.
When I do change the test.js file, Terraform does detect a change:

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  ~ update in-place

Terraform will perform the following actions:

  ~ aws_lambda_function.test_terraform_function
      last_modified:    "2018-12-20T07:47:16.888+0000" => <computed>
      source_code_hash: "KpnhsytFF0yul6iESDCXiD2jl/LI9dv56SIJnwEi/hY=" => "JWIYsT8SszUjKEe1aVDY/ZWBVfrZYhhb1GrJL26rYdI="
It does create the new ZIP; however, it does not seem to update the function with it. It seems to think that since the filename has not changed, it does not need to upload ... How can I fix this behaviour?
=====
Following some of the answers here, I tried:
Using null_resource
Using an S3 bucket/object with etag
And it still does not update ... Why is that?
I ran into the same issue, and what solved it for me was publishing the Lambda functions automatically using the publish argument. To do so, simply set publish = true in your aws_lambda_function resource.
Note that your function will be versioned after this, and each change will create a new version. Therefore you should make sure that you use the qualified_arn attribute reference if you're referring to the function in any of your other Terraform code.
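A minimal sketch against the question's resource; only the publish argument is added, and other Terraform code then references the qualified ARN:
resource "aws_lambda_function" "test_terraform_function" {
  # ...arguments as in the question...

  # Publish a new numbered version whenever the code changes
  publish = true
}

# Elsewhere, reference the published version explicitly, e.g.:
# aws_lambda_function.test_terraform_function.qualified_arn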
There is a workaround that triggers the resource to be refreshed, assuming the target Lambda file names are src/main.py and src/handler.py. If you have more files to be managed, add them one by one.
resource "null_resource" "lambda" {
triggers {
main = "${base64sha256(file("src/main.py"))}"
handler = "${base64sha256(file("src/handler.py"))}"
}
}
data "archive_file" "lambda_zip" {
type = "zip"
source_dir = "src"
output_path = "build/lambdas.zip"
depends_on = ["null_resource.lambda"]
}
Let me know if this works for you.
There are two things you need to take care of:
upload the zip file to S3 if its content has changed
update the Lambda function if the zip file content has changed
I can see you are taking care of the latter with source_code_hash. I don't see how you handle the former. It could look like this:
resource "aws_s3_bucket_object" "zip" {
bucket = "${aws_s3_bucket.zip.bucket}"
key = "myzip.zip"
source = "${path.module}/myzip.zip"
etag = "${md5(file("${path.module}/myzip.zip"))}"
}
etag is the most important option here.
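To tie the two together, here is a sketch of the function pulling its code from that S3 object (non-S3 arguments elided as in the snippets above):
resource "aws_lambda_function" "test_terraform_function" {
  # Pull the deployment package from S3 instead of a local filename
  s3_bucket = "${aws_s3_bucket_object.zip.bucket}"
  s3_key    = "${aws_s3_bucket_object.zip.key}"

  # Still required so Terraform notices when the code itself changes
  source_code_hash = "${base64sha256(file("${path.module}/myzip.zip"))}"

  # ...function_name, handler, runtime, role as before...
}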
I created this module to help ease some of the issues around deploying Lambda with Terraform: https://registry.terraform.io/modules/rojopolis/lambda-python-archive/aws/0.1.4
It may be useful in this scenario. Basically, it replaces the "archive_file" data source with a specialized Lambda archive data source to better manage a stable source code hash, etc.