Terraform AWS Lambdas redeployment - amazon-web-services

We have a list of Lambda functions that get updated quite often, and new Lambdas are introduced occasionally. The artifacts for the Lambdas are stored in an AWS S3 bucket. We use the Terraform AWS Lambda module to deploy them in the following way:
module "lambdas" {
count = length(local.lambdas_list)
source = "terraform-aws-modules/lambda/aws"
function_name = "${local.lamdas_list[count.index]}"
lambda_role = module.create_network.lambda_role_arn
handler = "com.box.processing.Handler::handleRequest"
runtime = "java11"
memory_size = "1024"
timeout = "10"
vpc_subnet_ids = module.create_network.private_subnet_ids
vpc_security_group_ids = [module.create_test2_server.server_sg]
create_role = false
create_package = false
publish = false
ignore_source_code_hash = true
create_current_version_allowed_triggers = false
s3_existing_package = {
bucket = "lambdas_artifacts_bucket"
key = "${local.lambdas_list[count.index]}/${local.lambdas_list[count.index]}.jar"
}
}
The lambdas_list variable looks like this:
lambdas_list = ["lambda1", "lambda2", "lambda3"]
===============================================================================================
The problem with this approach is that the module is instantiated as many times as there are elements in the lambdas_list variable; in the case above, 3 times.
The biggest problem we have is when somebody creates a new Lambda and asks for it to be deployed. This means we will be expanding the lambdas_list variable in the following way:
lambdas_list = ["lambda1", "lambda2", "lambda3", "lambda4"]
If I do a "terraform apply" after updating the lambdas_list variable in the way shown above, the code will deploy a new lambda (lambda4) and that's exactly what we want. The problem is when the lambdas_list variable is updated in the following way:
lambdas_list = ["lambda1", "lambda4", "lambda2", "lambda3"]
Note that "lambda4" is now at index 1 instead of index 3. In this case, the TF code will fail because the ordering of lambdas is changed and it the situation with lambda deployment looks like this:
INDEX OLD_STATE NEW_STATE
0 lambda1 lambda1
1 lambda2 lambda4
2 lambda3 lambda2
3 lambda3
The TF code will try to replace indexes 1, 2 and 3 with new lambdas. One of the things where it will fail is on the creation of Lambda Log Groups. When creating lambda with index 2 (lambda2), it will say that Log group already exists.
===============================================================================================
Is there a way to easily add to or update the list of Lambdas to be deployed without hitting this problem every time? One important detail is that when a new Lambda is added, we try to keep the list alphabetically ordered, which is what causes this shift in the indexes.
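
For what it's worth, the related answers below point at the usual fix for this: key the instances by name with for_each instead of count, so that list order no longer matters. A minimal sketch of what that could look like for this module call (assuming Terraform 0.13+, which is required for for_each on modules; existing instances would also have to be moved in state, e.g. terraform state mv 'module.lambdas[0]' 'module.lambdas["lambda1"]'):

module "lambdas" {
  for_each = toset(local.lambdas_list)
  source   = "terraform-aws-modules/lambda/aws"

  # Each instance is addressed by its name (module.lambdas["lambda1"], ...),
  # so adding "lambda4" anywhere in the list only creates that one function.
  function_name = each.value
  lambda_role   = module.create_network.lambda_role_arn
  handler       = "com.box.processing.Handler::handleRequest"
  runtime       = "java11"
  memory_size   = "1024"
  timeout       = "10"

  vpc_subnet_ids         = module.create_network.private_subnet_ids
  vpc_security_group_ids = [module.create_test2_server.server_sg]

  create_role                             = false
  create_package                          = false
  publish                                 = false
  ignore_source_code_hash                 = true
  create_current_version_allowed_triggers = false

  s3_existing_package = {
    bucket = "lambdas_artifacts_bucket"
    key    = "${each.value}/${each.value}.jar"
  }
}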

Related

Reference terraform resource by variable

I created a tf file that takes input from the CLI and then uses it as the name for an AWS Lambda and API Gateway.
Currently, inputting a different name just replaces the name in the currently working one.
My goal is that every time I input a new name, a new Lambda and Gateway should be created. Is that possible?
variable "repo_name" {
type = string
}
resource "aws_lambda_function" "lambda" {
function_name = var.repo_name
handler = "lambda_function.lambda_handler"
runtime = "python3.9"
role = ""
filename = "python.zip"
}
This can be done in a couple of different ways, but the easiest one would be to create a local variable and add additional names to it whenever you need a new function and API Gateway. That would look something like this:
locals {
  repo_names = ["one", "two", "three"]
}
Now, the second part can be solved with count [1] or for_each [2] meta-arguments. It's usually a matter of preference, but I would suggest using for_each:
resource "aws_lambda_function" "lambda" {
for_each = toset(local.repo_names)
function_name = each.value
handler = "lambda_function.lambda_handler"
runtime = "python3.9"
role = ""
filename = "python.zip"
}
You would then follow a similar approach for creating the different API Gateway resources. Note that for_each has to be used with sets or maps, hence the use of the toset built-in function [3]. Also, make sure you understand how the each object works [4]. In the case of a set, each.key and each.value are the same, which is not the case when using maps.
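For illustration only (the map below is hypothetical), the difference looks like this:

# With for_each = toset(["one", "two"]), each.key and each.value are both "one"/"two".
# With a map, each.key is the map key and each.value is the map value:
resource "aws_lambda_function" "lambda_from_map" {
  for_each = {
    one = "artifacts/one.zip" # hypothetical key/value pairs
    two = "artifacts/two.zip"
  }

  function_name = each.key   # "one", "two"
  filename      = each.value # "artifacts/one.zip", "artifacts/two.zip"
  handler       = "lambda_function.lambda_handler"
  runtime       = "python3.9"
  role          = ""         # placeholder, as in the question's example
}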
[1] https://developer.hashicorp.com/terraform/language/meta-arguments/count
[2] https://developer.hashicorp.com/terraform/language/meta-arguments/for_each
[3] https://developer.hashicorp.com/terraform/language/functions/toset
[4] https://developer.hashicorp.com/terraform/language/meta-arguments/for_each#the-each-object

Trigger random_id resource recreation on rds instance destroy and recreate

Folks, I am trying to find a way with the Terraform random_id resource to recreate it and provide a new random value when the RDS instance is destroyed and recreated due to a change, say the username on the RDS instance has changed.
I am trying to attach this random value to the final_snapshot_identifier of the aws_db_instance resource so that the snapshot has a unique id every time it gets created when the RDS instance is destroyed.
Current code:
resource "random_id" "snap_id" {
byte_length = 8
}
locals {
inst_id = "test-rds-inst"
inst_snap_id = "${local.inst_id}-snap-${format("%.4s", random_id.snap_id.dec)}"
}
resource "aws_db_instance" "rds" {
.....
identifier = local.inst_id
final_snapshot_identifier = local.inst_snap_id
skip_final_snapshot = false
username = "foo"
apply_immediately = true
.....
}
output "snap_id" {
value = aws_db_instance.rds.final_snapshot_identifier
}
Output after terraform apply:
snap_id = "test-rds-inst-snap-5553"
Use case I am trying out:
#1:
Modify a value in the RDS instance to simulate a destroy & recreate:
Modify username to "foo-tmp"
terraform apply -auto-approve
Output:
snap_id = "test-rds-inst-snap-5553"
I was expecting the random_id to kick in and output a unique id, but it didn't.
Observation:
rds instance in deleting state
snapshot "test-rds-inst-snap-5553" in creating state
rds instance recreated and in available state
snapshot "test-rds-inst-snap-5553" in available state
#2:
Modify a value again in the RDS instance to simulate a destroy & recreate:
Modify username to "foo-new"
terraform apply -auto-approve
I kind of expected the error below, because the snap id didn't get a new value in the prior attempt, but tried anyway..
Observation:
Error: error deleting DB Instance (test-rds-inst): DBSnapshotAlreadyExists: Cannot create the snapshot because a snapshot with the identifier test-rds-inst-snap-5553 already exists.
I am aware of the keepers {} map for the random_id resource, but I am not sure what from the aws_db_instance I need to put in the map so that the random_id resource is recreated and ends up providing a new unique value for the snap_id suffix.
I also feel that using any attribute of the RDS instance in the random_id keepers might cause a circular dependency issue. I may be wrong, but I haven't tried it.
Any suggestions would be helpful. Thanks.
The easiest way to do this would be to use taint on the random_id resource, as per the documentation [1]:
To force a random result to be replaced, the taint command can be used to produce a new result on the next run.
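In this case, that would be something like:

terraform taint random_id.snap_id
terraform apply

On newer Terraform versions, terraform apply -replace="random_id.snap_id" achieves the same in a single step.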
Alternatively, looking at the example from the documentation, you could do something like:
resource "random_id" "snap_id" {
byte_length = 8
keepers {
snapshot_id = var.snapshot_id
}
}
resource "aws_db_instance" "rds" {
.....
identifier = local.inst_id
final_snapshot_identifier = random_id.snap_id.keepers.snapshot_id
skip_final_snapshot = false
username = "foo"
apply_immediately = true
.....
}
This means that until the value of the variable snapshot_id changes, the random_id will generate the same result. I'm not sure if that would work with locals, but you could try replacing var.snapshot_id with local.inst_snap_id. If that works, you could then name the snapshot using built-in functions like formatdate [2] and timestamp [3] to create a snapshot id tied to the time when you ran apply, something like:
locals {
  inst_id      = "test-rds-inst"
  snap_time    = formatdate("YYYYMMDD", timestamp())
  inst_snap_id = "${local.inst_id}-snap-${format("%.4s", random_id.snap_id.dec)}-${local.snap_time}"
}
[1] https://registry.terraform.io/providers/hashicorp/random/latest/docs#resource-keepers
[2] https://www.terraform.io/language/functions/formatdate
[3] https://www.terraform.io/language/functions/timestamp

How to attach AWS Lambda fn to EXISTING vpc using terraform?

TLDR: We deploy Lambda functions using Terraform. A new Lambda requires attachment to an existing VPC. How do I define this network attachment in Terraform? My current solution passes all Terraform steps, but when I inspect my Lambda in the console, it's not attached to any VPC.
I found the article Deploy AWS Lambda to VPC with Terraform insightful, but the example involves adding a new VPC (with subnets, security groups, etc.) as opposed to attaching to an existing VPC with existing subnets, security groups, etc.
Here's my current solution. From my project's main.tf I call a module...
module "lambda" {
source = "git::https://corpsource.io/corp-cloud-platform-team/corpcloudv2/terraform/lambda-modules.git?ref=dev"
lambda_name = var.name
lambda_role = "arn:aws:iam::${var.ACCOUNT}:role/${var.lambda_role}"
lambda_handler = var.handler
lambda_runtime = var.runtime
default_lambda_timeout = var.timeout
ACCOUNT = var.ACCOUNT
vpc_subnet_ids = "${var.SUBNET_IDS}"
vpc_security_group_ids = "${var.SECURITY_GROUP_IDS}"
}
And here is the module:
resource "aws_lambda_function" "lambda_function" {
filename = "lambda_package.zip"
function_name = var.lambda_name
role = var.lambda_role
handler = var.lambda_handler
runtime = var.lambda_runtime
memory_size = 256
timeout = var.default_lambda_timeout
source_code_hash = filebase64sha256("lambda_code/lambda_package.zip")
vpc_config {
subnet_ids = var.vpc_subnet_ids
security_group_ids = var.vpc_security_group_ids
}
}
It passes all Terraform steps without error, and yet doesn't appear to attach my Lambda to a VPC. What am I doing wrong?
Thanks in advance.
Update:
Output of Terraform Plan:
$ terraform plan
Acquiring state lock. This may take a few moments...
module.lambda.aws_lambda_function.lambda_function: Refreshing state... [id=create-vault-entry]
module.lambda_iam.aws_iam_policy.base_policy: Refreshing state... [id=arn:aws:iam::############:policy/create-vault-entry-role]
module.lambda_iam.aws_iam_role.module_role: Refreshing state... [id=create-vault-entry-role]
module.lambda_iam.aws_iam_role_policy_attachment.lambda_attach: Refreshing state... [id=create-vault-entry-role-############################]
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
~ update in-place
Terraform will perform the following actions:
# module.lambda.aws_lambda_function.lambda_function will be updated in-place
~ resource "aws_lambda_function" "lambda_function" {
id = "create-vault-entry"
~ last_modified = "2022-01-11T19:48:18.000+0000" -> (known after apply)
~ source_code_hash = "g/hash/hash=" -> "hash/hash"
tags = {}
# (18 unchanged attributes hidden)
# (2 unchanged blocks hidden)
}
Plan: 0 to add, 1 to change, 0 to destroy.
Warning: Interpolation-only expressions are deprecated
on main.tf line 3, in locals:
3: vault_HOST = "${var.vault_HOST}",
Terraform 0.11 and earlier required all non-constant expressions to be
provided via interpolation syntax, but this pattern is now deprecated. To
silence this warning, remove the "${ sequence from the start and the }"
sequence from the end of this expression, leaving just the inner expression.
Template interpolation syntax is still used to construct strings from
expressions when the template includes multiple interpolation sequences or a
mixture of literal strings and interpolations. This deprecation applies only
to templates that consist entirely of a single interpolation sequence.
(and 5 more similar warnings elsewhere)
You appear to be converting lists to strings. The Lambda VPC subnet_ids and security_group_ids attributes expect a list, not a string. I'm really not sure how your current code is working without any errors being reported.
It looks like you need to change this:
vpc_subnet_ids = "${var.SUBNET_IDS}"
vpc_security_group_ids = "${var.SECURITY_GROUP_IDS}"
To this:
vpc_subnet_ids = var.SUBNET_IDS
vpc_security_group_ids = var.SECURITY_GROUP_IDS
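If those variables aren't already typed, declaring them explicitly as lists (a sketch, reusing the variable names from the question) also lets Terraform catch a string being passed by mistake:

variable "SUBNET_IDS" {
  type = list(string)
}

variable "SECURITY_GROUP_IDS" {
  type = list(string)
}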

Terraform 12 convert count to for_each

I am currently experiencing some challenges with the way count indexes in Terraform. I am seeking some help to convert this to for_each.
# Data source for GitHub repositories. This is used for adding all repos to the teams.
data "github_repositories" "repositories" {
  query = "org:theorg"
}

resource "github_team_repository" "business_analysts" {
  count      = length(data.github_repositories.repositories.names)
  team_id    = github_team.Business_Analysts.id
  repository = element(data.github_repositories.repositories.names, count.index)
  permission = "pull"
}
I have tried the following with no success:
resource "github_team_repository" "business_analysts" {
for_each = toset(data.github_repositories.repositories.names)
team_id = github_team.Business_Analysts.id
repository = "${each.value}"
permission = "pull"
}
I am querying a GitHub organization and receiving a huge list of repositories. I am then using count to add those repositories to a team. Unfortunately, Terraform will error out once a new repository is added or changed. That being said, I think the new for_each function could solve this dilemma for me; however, I am having trouble wrapping my head around how to implement it in this particular scenario. Any help would be appreciated.
False alarm. I had the answer all along, and the issue was attributed to the way I was referencing a variable.....
If someone stumbles upon this, then you want to structure your loop like so:

resource "github_team_repository" "business_analysts" {
  for_each   = toset(data.github_repositories.repositories.names)
  team_id    = github_team.Business_Analysts.id
  repository = each.key
  permission = "pull"
}
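
One caveat worth noting: if the repositories were already created under count, the existing state entries can be moved to the new for_each keys so nothing gets recreated (the repository name below is hypothetical):

terraform state mv 'github_team_repository.business_analysts[0]' 'github_team_repository.business_analysts["some-repo"]'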

Preventing destroy of resources when refactoring Terraform to use indices

When I was just starting to use Terraform, I more or less naively declared resources individually, like this:
resource "aws_cloudwatch_log_group" "image1_log" {
name = "${var.image1}-log-group"
tags = module.tagging.tags
}
resource "aws_cloudwatch_log_group" "image2_log" {
name = "${var.image2}-log-group"
tags = module.tagging.tags
}
resource "aws_cloudwatch_log_stream" "image1_stream" {
name = "${var.image1}-log-stream"
log_group_name = aws_cloudwatch_log_group.image1_log.name
}
resource "aws_cloudwatch_log_stream" "image2_stream" {
name = "${var.image2}-log-stream"
log_group_name = aws_cloudwatch_log_group.image2_log.name
}
Then, 10-20 different log groups later, I realized this wasn't going to work well as infrastructure grew. I decided to define a variable list:
variable "image_names" {
type = list(string)
default = [
"image1",
"image2"
]
}
Then I replaced the resources using indices:
resource "aws_cloudwatch_log_group" "service-log-groups" {
name = "${element(var.image_names, count.index)}-log-group"
count = length(var.image_names)
tags = module.tagging.tags
}
resource "aws_cloudwatch_log_stream" "service-log-streams" {
name = "${element(var.image_names, count.index)}-log-stream"
log_group_name = aws_cloudwatch_log_group.service-log-groups[count.index].name
count = length(var.image_names)
}
The problem here is that when I run terraform apply, I get 4 resources to add, 4 resources to destroy. I tested this with an old log group, and saw that all my logs were wiped (obviously, since the log was destroyed).
The names and other attributes of the log groups/streams are identical- I'm simply refactoring the infrastructure code to be more maintainable. How can I maintain my existing log groups without deleting them yet still refactor my code to use lists?
You'll need to move the existing resources within the Terraform state.
Try running terraform show to get the addresses under which the resources are stored; these will be something like [module.xyz.]aws_cloudwatch_log_group.image1_log ...
You can move it with terraform state mv [module.xyz.]aws_cloudwatch_log_group.image1_log '[module.xyz.]aws_cloudwatch_log_group.service-log-groups[0]'.
You can choose which index to assign to each resource by changing [0] accordingly.
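For the resources in this question (assuming they live in the root module and image1/image2 keep their positions in image_names), that would look something like:

terraform state mv 'aws_cloudwatch_log_group.image1_log' 'aws_cloudwatch_log_group.service-log-groups[0]'
terraform state mv 'aws_cloudwatch_log_group.image2_log' 'aws_cloudwatch_log_group.service-log-groups[1]'
terraform state mv 'aws_cloudwatch_log_stream.image1_stream' 'aws_cloudwatch_log_stream.service-log-streams[0]'
terraform state mv 'aws_cloudwatch_log_stream.image2_stream' 'aws_cloudwatch_log_stream.service-log-streams[1]'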
Delete the old resource definition for each moved resource, as Terraform would otherwise try to create a new group/stream.
Try it with the first move and check with terraform plan whether the resource was moved correctly...
Also check whether you need to choose some index for the image_names list just to be sure, but I think that won't be necessary.