I want to update the code of my Lambda function.
I previously created the Lambda function in the AWS console.
I wrote Terraform code to update my existing function, but I received an error.
I tried using this code block:
data "archive_file" "stop_ec2" {
type = "zip"
source_file = "src_dir/stop_ec2.py"
output_path = "dest_dir/stop_ec2_upload.zip"
}
data "aws_lambda_function" "existing" {
function_name = MyPocLambda
role = aws_iam_role.iam_for_lambda.arn
filename = "dest_dir/stop_ec2_upload.zip"
source_code_hash ="${data.archive_file.stop_ec2.output_base64sha256}"
}
My error says filename is an unsupported argument: "An argument named filename is not expected here."
Is it possible to update a Lambda function using a Terraform data source?
You have to import your Lambda function into Terraform first. Then you will be able to modify it in Terraform code using an aws_lambda_function resource, not a data source; data sources are read-only.
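As a minimal sketch of the resource (the handler and runtime here are assumptions; match them to whatever your console-created function actually uses):

resource "aws_lambda_function" "existing" {
  function_name    = "MyPocLambda"
  role             = aws_iam_role.iam_for_lambda.arn
  handler          = "stop_ec2.lambda_handler" # assumed handler name
  runtime          = "python3.9"               # assumed runtime
  filename         = data.archive_file.stop_ec2.output_path
  source_code_hash = data.archive_file.stop_ec2.output_base64sha256
}

Then run terraform import aws_lambda_function.existing MyPocLambda once to bring the existing function under Terraform's control; after that, terraform apply will upload a new zip whenever the hash changes.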
How can I get AWS configuration parameters stored in JSON format on S3 into my Terraform scripts? I want to use those parameters in other resources.
I just want to externalise all the variable parameters of the script.
For example, there is the data source aws_ssm_parameter to get AWS SSM parameters:
data "aws_ssm_parameter" "foo" {
  name = "foo"
}
Similarly, how can we get AWS app configurations in Terraform scripts?
From my understanding, you need to read S3 objects' values and use them in Terraform.
Use a data source, because it's an external resource that we're referencing.
I would use it like this:
data "aws_s3_object" "obj" {
bucket = "foo"
key = "foo.json"
}
output "s3_json_value" {
value = data.aws_s3_object.obj.body
}
To parse the JSON, you can use jsondecode:
locals {
  a_variable = jsondecode(data.aws_s3_object.obj.body)
}

output "Username" {
  value = local.a_variable.name
}
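For this to work, foo.json must contain valid JSON with a name key, e.g. a hypothetical file like:

{
  "name": "some-user",
  "environment": "dev"
}

Note that the data source's body attribute is only populated for objects with a human-readable Content-Type (text/* or application/json), so make sure the object is uploaded with an appropriate content type.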
I am working on an AWS stack that has some Lambdas and an S3 bucket (sample code below). How do I generate the zip file for a Lambda via Terraform? I have seen different styles, and it probably also depends on the version of Terraform.
resource "aws_lambda_function" "my_lambda" {
filename = "my_lambda_func.zip"
source_code_hash = filebase64sha256("my_lambda_func.zip")
Using archive_file would be the most common approach. You can zip individual files or entire folders, depending on how your Lambda function is developed (see the directory example after the code below).
So, to give a more up-to-date and use-case-based answer, you can apply the following:
data "archive_file" "dynamodb_stream_lambda_function" {
type = "zip"
source_file = "../../lambda-dynamodb-streams/index.js"
output_path = "lambda_function.zip"
}
resource "aws_lambda_function" "my_dynamodb_stream_lambda" {
function_name = "my-dynamodb-stream-lambda"
role = aws_iam_role.my_stream_lambda_role.arn
handler = "index.handler"
filename = data.archive_file.dynamodb_stream_lambda_function.output_path
source_code_hash = data.archive_file.dynamodb_stream_lambda_function.output_base64sha256
runtime = "nodejs16.x"
}
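And if your function is split across several files, the same data source can zip an entire directory instead of a single file. A minimal sketch, assuming the sources live in the lambda-dynamodb-streams folder:

data "archive_file" "dynamodb_stream_lambda_dir" {
  type        = "zip"
  source_dir  = "../../lambda-dynamodb-streams"
  output_path = "lambda_function_dir.zip"
}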
I'm trying to use an existing Lambda function as a data source and create an EC2 instance. This Lambda function essentially provides the latest AMI.
I'm looking at this doc:
https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/lambda_invocation
Source Block:
data "aws_lambda_invocation" "example" {
function_name = aws_lambda_function.resource_selector.ResourceSelector
input = <<JSON
{
"key1": "AMIRegexPattern"
}
JSON
}
output "result_entry" {
value = jsondecode(data.aws_lambda_invocation.example.result)["key1"]
}
It throws this error and I'm a little lost:
Error: Reference to undeclared resource
on create-ec2.tf line 26, in data "aws_lambda_invocation" "example":
26: function_name = aws_lambda_function.resource_selector.ResourceSelector
A managed resource "aws_lambda_function" "resource_selector" has not been
declared in the root module.
Here are the function details:
Function Name: ResourceSelector
Function ARN: arn:aws:lambda:us-east-1:xx50:function:ResourceSelector
Any help on what I am missing? I'm also curious about this line in particular, and whether it is correct:
function_name = aws_lambda_function.resource_selector.ResourceSelector
Thanks
If the Lambda is created outside Terraform, you have to hardcode the name or pass it in via a variable, like so:
function_name = "ResourceSelector"
aws_lambda_function.resource_selector doesn't exist in your configuration. Alternatively, you can manage the Lambda in Terraform by defining an aws_lambda_function resource.
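Put together, a minimal sketch of the invocation with the hardcoded name would look like this (jsonencode is used here instead of the heredoc, purely as a tidier equivalent):

data "aws_lambda_invocation" "example" {
  function_name = "ResourceSelector" # name of the function created outside Terraform

  input = jsonencode({
    key1 = "AMIRegexPattern"
  })
}

output "result_entry" {
  value = jsondecode(data.aws_lambda_invocation.example.result)["key1"]
}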
Also, if you just want to get the latest AMI, you don't need a Lambda at all. Terraform has a data source that can pull that for you: aws_ami
Example:
data "aws_ami" "example" {
most_recent = true
owners = ["self"]
filter {
name = "name"
values = ["myami-*"]
}
}
I'm trying to create a Lambda alias for my Lambda function using Terraform. I've been able to successfully create the alias, but the created alias is missing the DynamoDB stream as its trigger.
How the event source is set up:
resource "aws_lambda_event_source_mapping" "db_stream_trigger" {
batch_size = 10
event_source_arn = "${data.terraform_remote_state.testddb.table_stream_arn}"
enabled = true
function_name = "${aws_lambda_function.test_lambda.arn}"
starting_position = "LATEST"
}
How the alias is created:
resource "aws_lambda_alias" "test_lambda_alias" {
count = "${var.create_alias ? 1 : 0}"
depends_on = [ "aws_lambda_function.test_lambda" ]
name = "test_alias"
description = "alias for my test lambda"
function_name = "${aws_lambda_function.test_lambda.arn}"
function_version = "${var.current_running_version}"
routing_config = {
additional_version_weights = "${map(
"${aws_lambda_function.test_lambda.version}", "0.5"
)}"
}
}
The Lambda works with the DynamoDB stream as a trigger.
The alias for the Lambda is successfully created.
The alias is using the correct version.
The alias is using the correct weight.
The alias is NOT using the DynamoDB stream as the event source.
I had the wrong function_name in the aws_lambda_event_source_mapping resource. I was providing it the main Lambda function's ARN as opposed to the alias's ARN. Once I switched it to the alias's ARN, I was able to successfully divide the traffic from the stream according to the weights!
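In other words, the mapping should reference the alias rather than the function. A sketch in the same Terraform 0.11-era syntax as the question (the .0 index is needed because the alias resource uses count):

resource "aws_lambda_event_source_mapping" "db_stream_trigger" {
  batch_size        = 10
  event_source_arn  = "${data.terraform_remote_state.testddb.table_stream_arn}"
  enabled           = true
  # point at the alias ARN, not the bare function ARN
  function_name     = "${aws_lambda_alias.test_lambda_alias.0.arn}"
  starting_position = "LATEST"
}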
From the AWS docs:
Simplify management of event source mappings – Instead of using Amazon Resource Names (ARNs) for Lambda function in event source mappings, you can use an alias ARN. This approach means that you don't need to update your event source mappings when you promote a new version or roll back to a previous version.
https://docs.aws.amazon.com/lambda/latest/dg/aliases-intro.html
I want to use Terraform for the deployment of my Lambda functions. I did something like:
provider "aws" {
region = "ap-southeast-1"
}
data "archive_file" "lambda_zip" {
type = "zip"
source_dir = "src"
output_path = "build/lambdas.zip"
}
resource "aws_lambda_function" "test_terraform_function" {
filename = "build/lambdas.zip"
function_name = "test_terraform_function"
handler = "test.handler"
runtime = "nodejs8.10"
role = "arn:aws:iam::000000000:role/xxx-lambda-basic"
memory_size = 128
timeout = 5
source_code_hash = "${data.archive_file.lambda_zip.output_base64sha256}"
tags = {
"Cost Center" = "Consulting"
Developer = "Jiew Meng"
}
}
I find that when there is no change to test.js, Terraform correctly detects no change:
No changes. Infrastructure is up-to-date.
When I do change the test.js file, Terraform does detect a change:
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
~ update in-place
Terraform will perform the following actions:
~ aws_lambda_function.test_terraform_function
last_modified: "2018-12-20T07:47:16.888+0000" => <computed>
source_code_hash: "KpnhsytFF0yul6iESDCXiD2jl/LI9dv56SIJnwEi/hY=" => "JWIYsT8SszUjKEe1aVDY/ZWBVfrZYhhb1GrJL26rYdI="
It does build the new zip; however, it does not seem to update the function with the new package. It seems to think that since the filename has not changed, it does not need to upload ... How can I fix this behaviour?
=====
Following some of the answers here, I tried:
Using null_resource
Using an S3 bucket/object with etag
And it still does not update ... Why is that?
I ran into the same issue, and what solved it for me was publishing the Lambda function automatically using the publish argument. To do so, simply set publish = true in your aws_lambda_function resource.
Note that your function will be versioned after this, and each change will create a new version. Therefore you should make sure to use the qualified_arn attribute reference if you refer to the function anywhere else in your Terraform code.
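Applied to the resource from the question, it is a one-line addition (a sketch showing only the relevant arguments):

resource "aws_lambda_function" "test_terraform_function" {
  # ... other arguments as in the question ...
  publish          = true
  source_code_hash = "${data.archive_file.lambda_zip.output_base64sha256}"
}

# elsewhere, reference the published version:
# "${aws_lambda_function.test_terraform_function.qualified_arn}"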
There is a workaround to trigger the resource to be refreshed if the target Lambda files are src/main.py and src/handler.py. If you have more files to manage, add them one by one.
resource "null_resource" "lambda" {
triggers {
main = "${base64sha256(file("src/main.py"))}"
handler = "${base64sha256(file("src/handler.py"))}"
}
}
data "archive_file" "lambda_zip" {
type = "zip"
source_dir = "src"
output_path = "build/lambdas.zip"
depends_on = ["null_resource.lambda"]
}
Let me know if this works for you.
There are two things you need to take care of:
upload the zip file to S3 if its content has changed
update the Lambda function if the zip file's content has changed
I can see you are taking care of the latter with source_code_hash. I don't see how you handle the former. It could look like this:
resource "aws_s3_bucket_object" "zip" {
bucket = "${aws_s3_bucket.zip.bucket}"
key = "myzip.zip"
source = "${path.module}/myzip.zip"
etag = "${md5(file("${path.module}/myzip.zip"))}"
}
etag is the most important option here.
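The Lambda function then needs to point at the S3 object instead of a local filename, so that both halves stay in sync. A sketch, reusing the lambda_zip archive data source from the question for the hash (adjust the file names to match your zip):

resource "aws_lambda_function" "test_terraform_function" {
  # ... other arguments as in the question ...
  s3_bucket        = "${aws_s3_bucket_object.zip.bucket}"
  s3_key           = "${aws_s3_bucket_object.zip.key}"
  source_code_hash = "${data.archive_file.lambda_zip.output_base64sha256}"
}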
I created this module to help ease some of the issues around deploying Lambda with Terraform: https://registry.terraform.io/modules/rojopolis/lambda-python-archive/aws/0.1.4
It may be useful in this scenario. Basically, it replaces the archive_file data source with a specialized Lambda archive data source that better manages a stable source code hash, among other things.