I created a .tf file that takes input from the CLI and then uses it as the name for an AWS Lambda function and API Gateway.
Currently, inputting a different name just renames the existing resources.
My goal is that every time I input a new name, a new Lambda and API Gateway should be created. Is that possible?
variable "repo_name" {
type = string
}
resource "aws_lambda_function" "lambda" {
function_name = var.repo_name
handler = "lambda_function.lambda_handler"
runtime = "python3.9"
role = ""
filename = "python.zip"
}
This can be done in a couple of different ways, but the easiest one would be to create a local variable and add new names to it whenever you need another function and API Gateway. That would look something like this:
locals {
  repo_names = ["one", "two", "three"]
}
Now, the second part can be solved with the count [1] or for_each [2] meta-arguments. It's usually a matter of preference, but I would suggest using for_each:
resource "aws_lambda_function" "lambda" {
for_each = toset(local.repo_names)
function_name = each.value
handler = "lambda_function.lambda_handler"
runtime = "python3.9"
role = ""
filename = "python.zip"
}
You would then follow a similar approach for creating the different API Gateway resources. Note that for_each has to be used with sets or maps, hence the use of the toset built-in function [3]. Also, make sure you understand how the each object works [4]. In the case of a set, each.key and each.value are the same, which is not the case when using maps.
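For the API Gateway side, a minimal sketch of the same pattern (the resource names and the "invoke" path part here are illustrative, not from the original question):

resource "aws_api_gateway_rest_api" "api" {
  for_each = toset(local.repo_names)

  name = each.value
}

resource "aws_api_gateway_resource" "invoke" {
  for_each = toset(local.repo_names)

  rest_api_id = aws_api_gateway_rest_api.api[each.key].id
  parent_id   = aws_api_gateway_rest_api.api[each.key].root_resource_id
  path_part   = "invoke" # hypothetical path
}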
[1] https://developer.hashicorp.com/terraform/language/meta-arguments/count
[2] https://developer.hashicorp.com/terraform/language/meta-arguments/for_each
[3] https://developer.hashicorp.com/terraform/language/functions/toset
[4] https://developer.hashicorp.com/terraform/language/meta-arguments/for_each#the-each-object
Related
I've got a Terraform plan that creates a number of resources in a for_each loop, and I need another resource to depend_on those first ones. How can I do it without having to explicitly list them?
Here's the first resource (AWS API Gateway resource):
locals {
  apps = toset(["app1", "app2", "app3"])
}
resource "aws_api_gateway_integration" "lambda" {
for_each = local.apps
rest_api_id = aws_api_gateway_rest_api.primary.id
resource_id = aws_api_gateway_resource.lambda[each.key].id
http_method = aws_api_gateway_method.lambda_post[each.key].http_method
integration_http_method = "POST"
type = "AWS_PROXY"
uri = aws_lambda_function.lambda[each.key].invoke_arn
}
Now I need to wait for all three app integrations before creating an API Gateway deployment:
resource "aws_api_gateway_deployment" "primary" {
rest_api_id = aws_api_gateway_rest_api.primary.id
depends_on = [
aws_api_gateway_integration.lambda["app1"],
aws_api_gateway_integration.lambda["app2"],
aws_api_gateway_integration.lambda["app3"],
]
However, the list of apps keeps growing and I don't want to maintain it manually here. Everywhere else I can simply use for or for_each together with local.apps, but I can't figure out how to dynamically build the list for depends_on. Any ideas?
You have to do it manually. The Terraform docs clearly explain that depends_on must be explicitly defined: "You only need to explicitly specify a dependency [...]"
Dependencies in Terraform are always between static blocks (mainly resource, data, and module blocks) and not between individual instances of those objects.
Therefore, listing individual instances as you did in your example is redundant:
depends_on = [
  aws_api_gateway_integration.lambda["app1"],
  aws_api_gateway_integration.lambda["app2"],
  aws_api_gateway_integration.lambda["app3"],
]
The above is exactly equivalent to declaring a dependency on the resource as a whole:
depends_on = [
  aws_api_gateway_integration.lambda,
]
The for_each argument itself typically has its own dependencies (e.g. local.apps in your example) and so Terraform needs to construct the dependency graph before evaluating for_each. This means that there is only one node in the dependency graph representing the entire resource (including its for_each expression), and the individual instances are not represented in the plan-time dependency graph at all.
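So in this case the deployment can simply depend on the resource as a whole, which covers every instance no matter how large local.apps grows (a minimal sketch reusing the resources from the question):

resource "aws_api_gateway_deployment" "primary" {
  rest_api_id = aws_api_gateway_rest_api.primary.id

  # Depends on every instance created by for_each, present and future.
  depends_on = [
    aws_api_gateway_integration.lambda,
  ]
}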
There is another way:
Create a module that wraps aws_api_gateway_integration with the for_each approach.
Modify the module to support all the parameters you need.
Then you can just call the module to create new objects.
At the end, you can put depends_on on the module, as sketched below:
depends_on = [
  module.logic_app
]
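Roughly, that could look like this (a sketch with a hypothetical module name, path, and inputs; module.logic_app above plays the same role):

module "api_integrations" {
  source = "./modules/api_integration" # hypothetical local module

  apps        = local.apps
  rest_api_id = aws_api_gateway_rest_api.primary.id
}

resource "aws_api_gateway_deployment" "primary" {
  rest_api_id = aws_api_gateway_rest_api.primary.id

  depends_on = [
    module.api_integrations
  ]
}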
I'm importing roles that have already been created in the AWS console, and unfortunately the names are strange. In order to use those roles, I am trying the following.
I have two IAM roles, as follows:
data "aws_iam_role" "reithera-rtcov201" {
name = "exomcloudrosareitherartcov-YRX1M2GJKD6H"
}
data "aws_iam_role" "dompe-rlx0120" {
name = "exomcloudrosadomperlx0120p-1SCGY0RG5JXFF"
}
In this file I have two variables, as follows:
sponsor = ["reithera", "dompe"]
study = ["rtcov201", "rlx0120"]
I'm trying the following, but Terraform doesn't allow the use of $:
data.aws_iam_role.${var.sponsor}-${var.study}.arn
Do you know of any solution for this?
It's not possible. You can't dynamically create references to resources.
Instead of two separate data sources, you should create one:
variable "iam_roles"
default = ["exomcloudrosareitherartcov-YRX1M2GJKD6H", "exomcloudrosadomperlx0120p-1SCGY0RG5JXFF"]
}
and then
data "aws_iam_role" "role" {
for_each = toset(var.iam_roles)
name = each.key
}
and you can refer to them using the role name:
data.aws_iam_role.role["exomcloudrosareitherartcov-YRX1M2GJKD6H"].arn
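If you still want to address the roles by the sponsor-study pair from your variables, one option is a map keyed that way (this local map is an assumption, not part of the original answer):

locals {
  # "<sponsor>-<study>" => actual role name in AWS
  role_names = {
    "reithera-rtcov201" = "exomcloudrosareitherartcov-YRX1M2GJKD6H"
    "dompe-rlx0120"     = "exomcloudrosadomperlx0120p-1SCGY0RG5JXFF"
  }
}

data "aws_iam_role" "role" {
  for_each = local.role_names

  name = each.value
}

Then data.aws_iam_role.role["reithera-rtcov201"].arn resolves without building the reference dynamically.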
I'm trying to use an existing Lambda function as a data source and create an EC2 instance. This Lambda function essentially provides the latest AMI.
I'm looking at this doc:
https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/lambda_invocation
Source Block:
data "aws_lambda_invocation" "example" {
function_name = aws_lambda_function.resource_selector.ResourceSelector
input = <<JSON
{
"key1": "AMIRegexPattern"
}
JSON
}
output "result_entry" {
value = jsondecode(data.aws_lambda_invocation.example.result)["key1"]
}
It throws this error and I'm a little lost:
Error: Reference to undeclared resource
on create-ec2.tf line 26, in data "aws_lambda_invocation" "example":
26: function_name = aws_lambda_function.resource_selector.ResourceSelector
A managed resource "aws_lambda_function" "resource_selector" has not been
declared in the root module.
Here are the function details:
Function Name: ResourceSelector
Function ARN: arn:aws:lambda:us-east-1:xx50:function:ResourceSelector
Any help on what I am missing? I'm also particularly curious about this line, and whether it is correct:
function_name = aws_lambda_function.resource_selector.ResourceSelector
Thanks
If the Lambda is created outside Terraform, you have to hardcode the name or pass it in via a variable, like so:
function_name = "ResourceSelector"
aws_lambda_function.resource_selector doesn't exist. Alternatively, you can manage the Lambda in Terraform by defining an aws_lambda_function resource.
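Put together, the data source could look like this (a sketch; the hardcoded name comes from the function details in the question):

data "aws_lambda_invocation" "example" {
  function_name = "ResourceSelector" # existing function, not managed by Terraform

  input = <<JSON
{
  "key1": "AMIRegexPattern"
}
JSON
}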
Also, if you just want to get the latest AMI, you don't need a Lambda. Terraform has a data source that can pull that for you: aws_ami
Example:
data "aws_ami" "example" {
most_recent = true
owners = ["self"]
filter {
name = "name"
values = ["myami-*"]
}
}
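You could then reference the result from an EC2 instance (a minimal sketch; the instance type is an arbitrary placeholder):

resource "aws_instance" "example" {
  ami           = data.aws_ami.example.id
  instance_type = "t3.micro" # placeholder
}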
I'm trying to create a Lambda alias for my Lambda function using Terraform. I've been able to successfully create the alias, but the created alias is missing the DynamoDB stream as a trigger.
How the event source is set up:
resource "aws_lambda_event_source_mapping" "db_stream_trigger" {
batch_size = 10
event_source_arn = "${data.terraform_remote_state.testddb.table_stream_arn}"
enabled = true
function_name = "${aws_lambda_function.test_lambda.arn}"
starting_position = "LATEST"
}
How the alias is created:
resource "aws_lambda_alias" "test_lambda_alias" {
count = "${var.create_alias ? 1 : 0}"
depends_on = [ "aws_lambda_function.test_lambda" ]
name = "test_alias"
description = "alias for my test lambda"
function_name = "${aws_lambda_function.test_lambda.arn}"
function_version = "${var.current_running_version}"
routing_config = {
additional_version_weights = "${map(
"${aws_lambda_function.test_lambda.version}", "0.5"
)}"
}
}
The Lambda works with the DynamoDB stream as a trigger.
The alias for the Lambda is successfully created.
The alias is using the correct version.
The alias is using the correct weight.
The alias is NOT using the DynamoDB stream as the event source.
I had the wrong function_name in the aws_lambda_event_source_mapping resource. I was providing it the main Lambda function's ARN as opposed to the alias's ARN. Once I switched it to the alias's ARN, I was able to successfully divide the traffic from the stream according to the weight!
From the AWS docs:
Simplify management of event source mappings – Instead of using Amazon Resource Names (ARNs) for Lambda function in event source mappings, you can use an alias ARN. This approach means that you don't need to update your event source mappings when you promote a new version or roll back to a previous version.
https://docs.aws.amazon.com/lambda/latest/dg/aliases-intro.html
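Concretely, the fix is to point the event source mapping at the alias instead of the function (a sketch based on the question's resources; since the alias uses count, the reference needs an index, e.g. .0.arn):

resource "aws_lambda_event_source_mapping" "db_stream_trigger" {
  batch_size        = 10
  event_source_arn  = "${data.terraform_remote_state.testddb.table_stream_arn}"
  enabled           = true
  function_name     = "${aws_lambda_alias.test_lambda_alias.0.arn}" # alias ARN, not the function ARN
  starting_position = "LATEST"
}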
I want to use Terraform for deployment of my lambda functions. I did something like:
provider "aws" {
region = "ap-southeast-1"
}
data "archive_file" "lambda_zip" {
type = "zip"
source_dir = "src"
output_path = "build/lambdas.zip"
}
resource "aws_lambda_function" "test_terraform_function" {
filename = "build/lambdas.zip"
function_name = "test_terraform_function"
handler = "test.handler"
runtime = "nodejs8.10"
role = "arn:aws:iam::000000000:role/xxx-lambda-basic"
memory_size = 128
timeout = 5
source_code_hash = "${data.archive_file.lambda_zip.output_base64sha256}"
tags = {
"Cost Center" = "Consulting"
Developer = "Jiew Meng"
}
}
I find that when there is no change to test.js, Terraform correctly detects no change:
No changes. Infrastructure is up-to-date.
When I do change the test.js file, Terraform does detect a change:
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
~ update in-place
Terraform will perform the following actions:
~ aws_lambda_function.test_terraform_function
last_modified: "2018-12-20T07:47:16.888+0000" => <computed>
source_code_hash: "KpnhsytFF0yul6iESDCXiD2jl/LI9dv56SIJnwEi/hY=" => "JWIYsT8SszUjKEe1aVDY/ZWBVfrZYhhb1GrJL26rYdI="
It does zip up the new archive; however, it does not seem to update the function with the new ZIP. It seems to think that, since the filename has not changed, it does not need to upload ... How can I fix this behaviour?
=====
Following some of the answers here, I tried:
Using null_resource
Using an S3 bucket/object with etag
And it does not update ... Why is that?
I ran into the same issue, and what solved it for me was publishing the Lambda functions automatically using the publish argument. To do so, simply set publish = true in your aws_lambda_function resource.
Note that your function will be versioned after this, and each change will create a new version. Therefore, you should make sure that you use the qualified_arn attribute reference if you're referring to the function anywhere else in your Terraform code.
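A minimal sketch of both pieces, based on the resource from the question:

resource "aws_lambda_function" "test_terraform_function" {
  filename         = "build/lambdas.zip"
  function_name    = "test_terraform_function"
  handler          = "test.handler"
  runtime          = "nodejs8.10"
  role             = "arn:aws:iam::000000000:role/xxx-lambda-basic"
  source_code_hash = "${data.archive_file.lambda_zip.output_base64sha256}"
  publish          = true # publish a new version on every code change
}

output "lambda_qualified_arn" {
  # Version-qualified ARN; use this when referring to the function elsewhere.
  value = "${aws_lambda_function.test_terraform_function.qualified_arn}"
}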
There is a workaround to trigger the resource to be refreshed if the target Lambda file names are src/main.py and src/handler.py. If you have more files to manage, add them one by one.
resource "null_resource" "lambda" {
triggers {
main = "${base64sha256(file("src/main.py"))}"
handler = "${base64sha256(file("src/handler.py"))}"
}
}
data "archive_file" "lambda_zip" {
type = "zip"
source_dir = "src"
output_path = "build/lambdas.zip"
depends_on = ["null_resource.lambda"]
}
Let me know if this works for you.
There are two things you need to take care of:
upload the zip file to S3 if its content has changed
update the Lambda function if the zip file content has changed
I can see you are taking care of the latter with source_code_hash. I don't see how you handle the former. It could look like this:
resource "aws_s3_bucket_object" "zip" {
bucket = "${aws_s3_bucket.zip.bucket}"
key = "myzip.zip"
source = "${path.module}/myzip.zip"
etag = "${md5(file("${path.module}/myzip.zip"))}"
}
etag is the most important option here.
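The Lambda function would then point at the S3 object instead of a local file (a sketch; the bucket and key come from the snippet above, the hash follows the same file()-based pattern, and the remaining arguments mirror the question):

resource "aws_lambda_function" "test_terraform_function" {
  s3_bucket     = "${aws_s3_bucket_object.zip.bucket}"
  s3_key        = "${aws_s3_bucket_object.zip.key}"
  function_name = "test_terraform_function"
  handler       = "test.handler"
  runtime       = "nodejs8.10"
  role          = "arn:aws:iam::000000000:role/xxx-lambda-basic"

  # Keep source_code_hash so the function updates when the zip content changes.
  source_code_hash = "${base64sha256(file("${path.module}/myzip.zip"))}"
}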
I created this module to help ease some of the issues around deploying Lambda with Terraform: https://registry.terraform.io/modules/rojopolis/lambda-python-archive/aws/0.1.4
It may be useful in this scenario. Basically, it replaces the archive_file data source with a specialized Lambda archive data source to better manage a stable source code hash, etc.