terraform does not detect changes to lambda source files

In my main.tf I have the following:
data "template_file" "lambda_script_temp_file" {
template = "${file("../../../fn/lambda_script.py")}"
}
data "template_file" "library_temp_file" {
template = "${file("../../../library.py")}"
}
data "template_file" "init_temp_file" {
template = "${file("../../../__init__.py")}"
}
data "archive_file" "lambda_resources_zip" {
type = "zip"
output_path = "${path.module}/lambda_resources.zip"
source {
content = "${data.template_file.lambda_script_temp_file.rendered}"
filename = "lambda_script.py"
}
source {
content = "${data.template_file.library_temp_file.rendered}"
filename = "library.py"
}
source {
content = "${data.template_file.init_temp_file.rendered}"
filename = "__init__.py"
}
}
resource "aws_lambda_function" "MyLambdaFunction" {
filename = "${data.archive_file.lambda_resources_zip.output_path}"
function_name = "awesome_lambda"
role = "${var.my_role_arn}"
handler = "lambda_script.lambda_handler"
runtime = "python3.6"
timeout = "300"
}
The problem is that when I modify one of the source files, say lambda_script.py, and run terraform apply again, the archive file (lambda_resources_zip) gets updated, but the Lambda function's code does not: the new archive file never gets uploaded.
I know that in order to avoid this, I could first run terraform destroy but that is not an option for my use case.
I'm using Terraform v0.11.10.

I resolved the issue by adding the following line to the resource definition:
source_code_hash = "${data.archive_file.lambda_resources_zip.output_base64sha256}"
When a source file is modified, the hash value changes, which triggers the function's code to be updated.
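Put together, the resource from the question then looks like this (unchanged apart from the added hash line):

resource "aws_lambda_function" "MyLambdaFunction" {
  filename         = "${data.archive_file.lambda_resources_zip.output_path}"
  # A new hash value forces the updated zip to be uploaded on apply
  source_code_hash = "${data.archive_file.lambda_resources_zip.output_base64sha256}"
  function_name    = "awesome_lambda"
  role             = "${var.my_role_arn}"
  handler          = "lambda_script.lambda_handler"
  runtime          = "python3.6"
  timeout          = "300"
}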

This worked for me. If you upload the zip to S3 first, add this line
source_hash = "${data.archive_file.source.output_base64sha256}"
to the S3 object that holds the Lambda package; this lets Terraform tell when the file has changed.
Then add this to the Lambda function:
source_code_hash = "${data.archive_file.source.output_base64sha256}"
So your code should look like this:
resource "aws_s3_object" "lambda_object" {
source_hash = "${data.archive_file.source.output_base64sha256}"
bucket = "${aws_s3_bucket.s3.bucket}"
key = "${var.key}"
source = data.archive_file.source.output_path
}
resource "aws_lambda_function" "lambda_" {
function_name = "lambda_name"
source_code_hash = "${data.archive_file.source.output_base64sha256}"
.......
.......
}
Worked for me. Best wishes.
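For context, a minimal sketch of how the elided part of that Lambda resource can point at the uploaded object; the handler, runtime, and role values below are placeholders, not taken from the answer:

resource "aws_lambda_function" "lambda_" {
  function_name    = "lambda_name"
  source_code_hash = "${data.archive_file.source.output_base64sha256}"

  # Deploy from the S3 object instead of a local file
  s3_bucket = aws_s3_object.lambda_object.bucket
  s3_key    = aws_s3_object.lambda_object.key

  # Placeholder values for illustration only
  handler = "lambda_script.lambda_handler"
  runtime = "python3.9"
  role    = var.my_role_arn
}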

Related

terraform combine data template file and each.key in a for_each block

Using a module to create multiple IAM roles with a for_each block, I am trying to pass into the policy a rendered data output combined with the key from the for_each loop. The name of the policy is slightly different for each role:
module "sso_roles" {
source = "git::ssh://git#gitlab.com/iam/role?ref=1.1.0"
for_each = local.roles
policy = "${data.template_file}.${each.key}_policy".rendered
role_name = each.key
assume_role_policy_def = data.template_file.testing_role.rendered
}
These are the locals it's looping through:
locals {
  roles = {
    "test_Read_Only" = ["arn:aws:iam::*:role/testReadOnly"]
    "test_OS_Only"   = ["arn:aws:iam::*:role/testSigninOSOnly"]
  }
}
What I need Terraform to see when it's running are these two:
${data.template_file.test_Read_Only_policy.rendered}
${data.template_file.test_OS_Only_policy.rendered}
But there is something not right with the syntax I have. The error I get says "The "data" object must be followed by two attribute names: the data source type and the resource name."
I don't know how to combine each.key into the rendered data template file.
What I would suggest is one of the following:
1) Use the data source with for_each and the same variable.
2) Switch to the templatefile built-in function and pass the value as a variable.
To achieve the first, you would do something like:
module "sso_roles" {
source = "git::ssh://git#gitlab.com/iam/role?ref=1.1.0"
for_each = local.roles
policy = data.template_file.policy[each.key].rendered
role_name = each.key
assume_role_policy_def = data.template_file.testing_role.rendered
}
data "template_file" "policy" {
for_each = local.roles
...
}
The second option is probably a bit more convenient, since it uses the newer and better templatefile function [1]:
module "sso_roles" {
source = "git::ssh://git#gitlab.com/iam/role?ref=1.1.0"
for_each = local.roles
policy = templatefile("${path.module}/path/to/template/file.tpl", {
iam_role = each.value
})
role_name = each.key
assume_role_policy_def = data.template_file.testing_role.rendered
}
With more information about the template file you are using I would be able to adjust the second example.
[1] https://www.terraform.io/language/functions/templatefile
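For illustration only, a purely hypothetical file.tpl that would work with the call above, assuming the policy should allow assuming the role ARNs in each list (the actions and structure are guesses, not taken from the question):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "sts:AssumeRole",
      "Resource": ${jsonencode(iam_role)}
    }
  ]
}

Since iam_role is passed each.value (a list of ARNs), jsonencode renders it as a JSON array.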

How can I get all AWS Lambdas by Tag in Terraform

I have Lambdas with a tag of Name=production.
I would like to get them using Terraform. Looking at aws_lambda_function, I can only get a single Lambda by function_name:
data "aws_lambda_function" "existing" {
function_name = var.function_name
}
You can use the aws_resourcegroupstaggingapi_resources data source to retrieve information about several AWS resources at once, including ones that don't have more specific data sources.
For your use case, considering Name=production, you can use:
data "aws_resourcegroupstaggingapi_resources" "existing" {
resource_type_filters = ["lambda:function"]
tag_filter {
key = "Name"
values = ["production"]
}
}
output "arns" {
value = data.aws_resourcegroupstaggingapi_resources.existing.resource_tag_mapping_list.*.resource_arn
}
Update: as noted in a comment, the code above returns information from resource_tag_mapping_list, which is mostly compliance details and the ARNs of the matched resources. But you can pair it with the regular aws_lambda_function data source, using for_each, to retrieve the full information for each of your Lambda functions:
# continuation of the code above
data "aws_lambda_function" "existing" {
  for_each      = toset(data.aws_resourcegroupstaggingapi_resources.existing.resource_tag_mapping_list.*.resource_arn)
  function_name = each.value
}

output "functions" {
  # with for_each the data source is a map keyed by ARN, so no splat is needed
  value = data.aws_lambda_function.existing
}

# example of information available with this data source
output "functions_runtime" {
  value = { for fn, result in data.aws_lambda_function.existing : fn => result.runtime }
}
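If only the function names are needed rather than full ARNs, a small sketch (assuming the usual arn:aws:lambda:region:account:function:name layout):

output "function_names" {
  # the function name is the seventh colon-separated field of a Lambda ARN
  value = [
    for arn in data.aws_resourcegroupstaggingapi_resources.existing.resource_tag_mapping_list.*.resource_arn :
    element(split(":", arn), 6)
  ]
}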

Terraform - Copy multiple Files to bucket at the same time bucket creation

Hello,
I have a bit of a headache.
I want to create buckets and bulk-copy files into them at the same time. I have multiple folders (one per dataset name) inside a schema folder, each containing JSON files: schema/dataset1, schema/dataset2, schema/dataset3.
The trick is that Terraform generates the bucket name plus a random number to avoid names that are already taken. I have one question:
How do I bulk-copy files into a bucket at the same time the bucket is created?
resource "google_storage_bucket" "map" {
for_each = {for i, v in var.gcs_buckets: i => v}
name = "${each.value.id}_${random_id.suffix[0].hex}"
location = var.default_region
storage_class = "REGIONAL"
uniform_bucket_level_access = true
#If you destroy your bucket, this option will delete all objects inside this bucket
#if not Terrafom will fail that run
force_destroy = true
labels = {
env = var.env_label
}
resource "google_storage_bucket_object" "map" {
for_each = {for i, v in var.json_buckets: i => v}
name = ""
source = "schema/${each.value.dataset_name}/*"
bucket = contains([each.value.bucket_name], each.value.dataset_name)
#bucket = "${google_storage_bucket.map[contains([each.value.bucket_name], each.value.dataset_name)]}"
}
variable "json_buckets" {
type = list(object({
bucket_name = string
dataset_name = string
}))
default = [
{
bucket_name = "schema_table1",
dataset_name = "dataset1",
},
{
bucket_name = "schema_table2",
dataset_name = "dataset2",
},
{
bucket_name = "schema_table2",
dataset_name = "dataset3",
},
]
}
variable "gcs_buckets" {
type = list(object({
id = string
description = string
}))
default = [
{
id = "schema_table1",
description = "schema_table1",
},
]
}
...
Why do you have bucket = contains([each.value.bucket_name], each.value.dataset_name)? The contains function returns a bool, and bucket takes a string input (the name of the bucket).
There is no resource that will allow you to copy multiple objects at once to the bucket. If you need to do this in Terraform, you can use the fileset function to get a list of files in your directory, then use that list in your for_each for the google_storage_bucket_object. It might look something like this (untested):
locals {
  // Create a master list that has all files for all buckets
  all_files = merge([
    // Loop through each bucket/dataset combination
    for bucket_idx, bucket_data in var.json_buckets :
    {
      // For each bucket/dataset combination, get a list of all files in that dataset
      for file in fileset("schema/${bucket_data.dataset_name}/", "**") :
      // And stick it in a map of all bucket/file combinations
      "bucket-${bucket_idx}-${file}" => merge(bucket_data, {
        file_name = file
      })
    }
  ]...)
}

resource "google_storage_bucket_object" "map" {
  for_each = local.all_files

  name   = each.value.file_name
  source = "schema/${each.value.dataset_name}/${each.value.file_name}"
  bucket = each.value.bucket_name
}
WARNING: Do not do this if you have a lot of files to upload. This will create a resource in the Terraform state file for each uploaded file, meaning every time you run terraform plan or terraform apply, it will do an API call to check the status of each uploaded file. It will get very slow very quickly if you have hundreds of files to upload.
If you have a ton of files to upload, consider using an external CLI-based tool to sync the local files with the remote bucket after the bucket is created. You can use a module such as this one to run external CLI commands.
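As an illustration of that CLI-based approach, a minimal sketch using a plain null_resource with a local-exec provisioner instead of a module; the bucket variable and the availability of gsutil are assumptions, not part of the answer above:

resource "null_resource" "sync_schema_files" {
  # Re-run the sync whenever the set of files in the dataset folder changes
  triggers = {
    file_list = join(",", fileset("schema/dataset1/", "**"))
  }

  provisioner "local-exec" {
    # gsutil rsync copies the whole local folder to the bucket in one call;
    # var.target_bucket_name is a hypothetical variable holding the generated bucket name
    command = "gsutil -m rsync -r schema/dataset1 gs://${var.target_bucket_name}"
  }
}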

How to trigger terraform to upload new lambda code

I deploy a Lambda using Terraform as follows, but I have the following questions:
1) I want null_resource.lambda to run always, or whenever stop_ec2.py is changed, so that stop_ec2_upload.zip never goes out of date. What should I write in triggers {}?
2) How do I make aws_lambda_function.stop_ec2 upload the new stop_ec2_upload.zip to the cloud when it changes?
Right now I have to destroy aws_lambda_function.stop_ec2 and then create it again. Is there anything I can write in the code so that when I run terraform apply, 1) and 2) happen automatically?
resource "null_resource" "lambda" {
triggers {
#what should I write here?
}
provisioner "local-exec" {
command = "mkdir -p lambda_func && cd lambda_py && zip
../lambda_func/stop_ec2_upload.zip stop_ec2.py && cd .."
}
}
resource "aws_lambda_function" "stop_ec2" {
depends_on = ["null_resource.lambda"]
function_name = "stopEC2"
handler = "stop_ec2.handler"
runtime = "python3.6"
filename = "lambda_func/stop_ec2_upload.zip"
source_code_hash =
"${base64sha256(file("lambda_func/stop_ec2_upload.zip"))}"
role = "..."
}
I read the link provided by Chandan and figured it out.
Here is my code, and it works perfectly.
In fact, with archive_file and source_code_hash, I do not need a trigger at all. Whenever I create or modify stop_ec2.py and then run Terraform, the file is re-zipped and uploaded to the cloud.
data "archive_file" "stop_ec2" {
type = "zip"
source_file = "src_dir/stop_ec2.py"
output_path = "dest_dir/stop_ec2_upload.zip"
}
resource "aws_lambda_function" "stop_ec2" {
function_name = "stopEC2"
handler = "stop_ec2.handler"
runtime = "python3.6"
filename = "dest_dir/stop_ec2_upload.zip"
source_code_hash = data.archive_file.stop_ec2.output_base64sha256
role = "..."
}
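If the function later grows to more than one source file, archive_file can also zip a whole directory; a sketch, assuming all the Python files live under src_dir/ (not part of the answer above):

data "archive_file" "stop_ec2" {
  type        = "zip"
  source_dir  = "src_dir"                       # zips every file in the directory
  output_path = "dest_dir/stop_ec2_upload.zip"
}

# output_base64sha256 now changes whenever any file under src_dir/ changes,
# so the source_code_hash in the Lambda resource above still triggers a redeploy.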
These might help:

triggers {
  main         = "${base64sha256(file("source/main.py"))}"
  requirements = "${base64sha256(file("source/requirements.txt"))}"
}

or:

triggers = {
  source_file = "${sha1Folder("${path.module}/source")}"
}
REF: https://github.com/hashicorp/terraform/issues/8344

terraform - how to add s3 Object Created trigger for lambda

How do I add a trigger to an AWS Lambda using Terraform?
The desired trigger is S3, "object created (all)".
My Terraform source code around the Lambda is:
module "s3-object-created-lambda" {
source = "../../../../../modules/lambda"
s3_bucket = "${var.s3_lambda_bucket}"
s3_key = "${var.s3_lambda_key}"
name = "${var.lambda_some_name}"
handler = "code.handler"
env = {
lambda_name = "${var.lambda_base_name}"
lambda_version = "${var.lambda_version}"
}
}
I'm trying to figure out how to add the trigger. Via the AWS console it is super simple.
After some reading in:
https://www.terraform.io/docs/providers/aws/r/s3_bucket_notification.html
the solution is:
resource "aws_s3_bucket_notification" "bucket_notification" {
bucket = "${data.terraform_remote_state.stack.bucket_id}"
lambda_function {
lambda_function_arn = "${module.some_lambda.lambda_arn}"
events = ["s3:ObjectCreated:*"]
filter_prefix = "${var.cluster_name}/somepath/"
filter_suffix = ".txt"
}
}
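Note that S3 also needs permission to invoke the function. If the lambda module does not already create it, something along these lines is usually required as well (the statement_id and the bucket ARN construction here are illustrative):

resource "aws_lambda_permission" "allow_s3_invoke" {
  statement_id  = "AllowExecutionFromS3Bucket"
  action        = "lambda:InvokeFunction"
  function_name = "${module.some_lambda.lambda_arn}"
  principal     = "s3.amazonaws.com"
  # S3 bucket ARNs follow the arn:aws:s3:::<bucket-name> pattern
  source_arn    = "arn:aws:s3:::${data.terraform_remote_state.stack.bucket_id}"
}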