I want to update the code of my Lambda function.
I previously created the Lambda function in the AWS console.
I wrote Terraform code to update my existing function, but I received an error.
I tried this block of code:
data "archive_file" "stop_ec2" {
type = "zip"
source_file = "src_dir/stop_ec2.py"
output_path = "dest_dir/stop_ec2_upload.zip"
}
data "aws_lambda_function" "existing" {
function_name = MyPocLambda
role = aws_iam_role.iam_for_lambda.arn
filename = "dest_dir/stop_ec2_upload.zip"
source_code_hash ="${data.archive_file.stop_ec2.output_base64sha256}"
}
The error says filename is an unsupported argument ("filename is not expected here").
Is it possible to update a Lambda function using a Terraform data source?
You have to import your Lambda function into Terraform first. Then you will be able to modify it with Terraform code using the aws_lambda_function resource, not the data source.
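As a minimal sketch of what that resource could look like, reusing the archive_file data source from the question (the handler and runtime below are assumptions, not taken from the original setup):
resource "aws_lambda_function" "existing" {
  function_name    = "MyPocLambda"
  role             = aws_iam_role.iam_for_lambda.arn
  handler          = "stop_ec2.lambda_handler" # assumed handler name
  runtime          = "python3.9"               # assumed runtime
  filename         = data.archive_file.stop_ec2.output_path
  source_code_hash = data.archive_file.stop_ec2.output_base64sha256
}
After adding the resource, bring the console-created function under Terraform with terraform import aws_lambda_function.existing MyPocLambda; subsequent applies will then update the code whenever the zip's hash changes.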
I have a Terraform project that creates multiple Cloud Functions.
I know that if I change the name of the google_storage_bucket_object related to a function, Terraform will see the difference in the zip name and redeploy that Cloud Function.
My question is: is there a way to obtain the same behaviour, but only for the Cloud Functions that have actually changed?
resource "google_storage_bucket_object" "zip_file" {
# Append file MD5 to force bucket to be recreated
name = "${local.filename}#${data.archive_file.source.output_md5}"
bucket = var.bucket.name
source = data.archive_file.source.output_path
}
# Create Java Cloud Function
resource "google_cloudfunctions_function" "java_function" {
name = var.function_name
runtime = var.runtime
available_memory_mb = var.memory
source_archive_bucket = var.bucket.name
source_archive_object = google_storage_bucket_object.zip_file.name
timeout = 120
entry_point = var.function_entry_point
event_trigger {
event_type = var.event_trigger.event_type
resource = var.event_trigger.resource
}
environment_variables = {
PROJECT_ID = var.env_project_id
SECRET_MAIL_PASSWORD = var.env_mail_password
}
timeouts {
create = "60m"
}
}
By appending the MD5, every Cloud Function ends up with a different zip file name, so Terraform redeploys all of them; without the MD5, Terraform does not see any changes to deploy.
If I have changed the code of only one function, how can I tell Terraform to redeploy only that one (for example, by changing only its zip file name)?
I hope my question is clear, and I want to thank everyone who tries to help me!
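For context on the mechanism involved: the object name only changes when data.archive_file.source.output_md5 changes. A small sketch under that assumption (var.function_source_dir and var.function_name are illustrative module inputs): if each module instance zips only its own function's source directory, the MD5, and therefore the zip name, changes only for the function whose code actually changed.
# Inside the per-function module: archive only this function's own sources,
# so output_md5 stays stable unless this function's code changes.
data "archive_file" "source" {
  type        = "zip"
  source_dir  = var.function_source_dir
  output_path = "${path.module}/${var.function_name}.zip"
}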
I am working on an AWS stack with some Lambdas and an S3 bucket (sample code below). How do I generate the zip file for a Lambda via Terraform? I have seen different styles, which probably depend on the version of Terraform as well.
resource "aws_lambda_function" "my_lambda" {
filename = "my_lambda_func.zip"
source_code_hash = filebase64sha256("my_lambda_func.zip")
Using archive_file would be the most common approach. You can zip individual files or entire folders, depending on how your Lambda function is developed.
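For instance, to package an entire folder rather than a single file, archive_file also takes source_dir; a small sketch with illustrative paths:
data "archive_file" "my_lambda_package" {
  type        = "zip"
  source_dir  = "${path.module}/src/my_lambda_func" # directory with the handler and its dependencies
  output_path = "${path.module}/my_lambda_func.zip"
}
The resulting output_path and output_base64sha256 can then be wired into filename and source_code_hash just like in the example that follows.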
To give a more up-to-date, use-case-based answer: with provider version 2.3.0 you can apply the following:
data "archive_file" "dynamodb_stream_lambda_function" {
type = "zip"
source_file = "../../lambda-dynamodb-streams/index.js"
output_path = "lambda_function.zip"
}
resource "aws_lambda_function" "my_dynamodb_stream_lambda" {
function_name = "my-dynamodb-stream-lambda"
role = aws_iam_role.my_stream_lambda_role.arn
handler = "index.handler"
filename = data.archive_file.dynamodb_stream_lambda_function.output_path
source_code_hash = data.archive_file.dynamodb_stream_lambda_function.output_base64sha256
runtime = "nodejs16.x"
}
I have a Terraform module that creates an S3 bucket. I want the module to be able to accept lifecycle rules.
resource "aws_s3_bucket" "somebucket" {
bucket = "my-versioning-bucket"
acl = "private"
lifecycle_rule {
prefix = "config/"
enabled = true
noncurrent_version_transition {
days = 30
storage_class = "STANDARD_IA"
}
}
}
I want to be able to pass the above lifecycle_rule block of configuration when I call the module. I tried to send it through a variable, but it did not work. I have done some research but had no luck. Any help is highly appreciated.
Try using an output: in one module, expose the desired value as an output, e.g.
output "lifecycle_rule" {
value = aws_s3_bucket.somebucket.id
}
and consume this value in your other module, like:
module "somename" {
source = "/somewhere"
lifecycle_rule = module.amodule-name-where-output-is-applied.lifecycle_rule
...
You would need to play around with this a bit.
Just give it a try; this is my best guess as far as I understand Terraform and your question.
The link below can also help you:
Terraform: Output a field from a module
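For reference, another pattern commonly used for this is to pass the rule settings into the module as a structured variable and expand them with a dynamic block. Below is a minimal sketch under that assumption; the variable and attribute names are illustrative, and it uses the older inline lifecycle_rule syntax on aws_s3_bucket shown in the question.
# variables.tf inside the module: a hypothetical list of rule objects
variable "lifecycle_rules" {
  description = "Lifecycle rules to apply to the bucket"
  type = list(object({
    prefix                              = string
    enabled                             = bool
    noncurrent_transition_days          = number
    noncurrent_transition_storage_class = string
  }))
  default = []
}

# main.tf inside the module: expand each object into a lifecycle_rule block
resource "aws_s3_bucket" "somebucket" {
  bucket = "my-versioning-bucket"
  acl    = "private"

  dynamic "lifecycle_rule" {
    for_each = var.lifecycle_rules
    content {
      prefix  = lifecycle_rule.value.prefix
      enabled = lifecycle_rule.value.enabled

      noncurrent_version_transition {
        days          = lifecycle_rule.value.noncurrent_transition_days
        storage_class = lifecycle_rule.value.noncurrent_transition_storage_class
      }
    }
  }
}

# Calling the module with the rule from the question
module "somebucket" {
  source = "./modules/s3-bucket"
  lifecycle_rules = [
    {
      prefix                              = "config/"
      enabled                             = true
      noncurrent_transition_days          = 30
      noncurrent_transition_storage_class = "STANDARD_IA"
    }
  ]
}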
I was able to create a bucket in Amazon S3 using this link.
I used the following code to create a bucket :
resource "aws_s3_bucket" "b" {
bucket = "my_tf_test_bucket"
acl = "private"
}
Now I want to create folders inside the bucket, say Folder1.
I found the link for creating an S3 object, but it has a mandatory parameter source. I am not sure what this value should be, since my intent is to create a folder inside the S3 bucket.
For running Terraform on Mac or Linux, the following will do what you want:
resource "aws_s3_bucket_object" "folder1" {
bucket = "${aws_s3_bucket.b.id}"
acl = "private"
key = "Folder1/"
source = "/dev/null"
}
If you're on Windows you can use an empty file.
While folks will be pedantic about S3 not having folders, there are a number of operations where having an object placeholder for a key prefix (otherwise called a folder) makes life easier, like s3 sync for example.
Actually, there is a canonical way to create it without being OS-dependent: by inspecting the network traffic in the S3 console UI when you create a folder, you can see the content headers it uses, as stated by https://stackoverflow.com/users/1554386/alastair-mccormack.
And S3 does support folders these days, as visible from the UI.
So this is how you can achieve it:
resource "aws_s3_bucket_object" "base_folder" {
bucket = "${aws_s3_bucket.default.id}"
acl = "private"
key = "${var.named_folder}/"
content_type = "application/x-directory"
kms_key_id = "key_arn_if_used"
}
Please note the trailing slash, otherwise it creates an empty file.
The above has been used on Windows to successfully create a folder using Terraform's s3_bucket_object.
The answers here are outdated; it's now definitely possible to create an empty folder in S3 via Terraform, using the aws_s3_object resource as follows:
resource "aws_s3_bucket" "this_bucket" {
bucket = "demo_bucket"
}
resource "aws_s3_object" "object" {
bucket = aws_s3_bucket.this_bucket.id
key = "demo/directory/"
}
If you don't supply a source for the object, then Terraform will create an empty directory.
IMPORTANT: note the trailing slash; this ensures you get a directory and not an empty file.
S3 doesn't support folders. Objects can have prefixes in their key names with slashes that look like folders, but that's just part of the object name. So there's no way to create a folder in Terraform or anything else, because there's no such thing as a folder in S3.
http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingMetadata.html
http://docs.aws.amazon.com/AWSImportExport/latest/DG/ManipulatingS3KeyNames.html
If you want to pretend, you could create a zero-byte object in the bucket named "Folder1/" but that's not required. You can just create objects with key names like "Folder1/File1" and it will work.
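As a small sketch of that approach (the bucket reference and contents are illustrative), giving an object a slash-delimited key is enough for the "Folder1/" prefix to show up as a folder in the console:
resource "aws_s3_object" "file1" {
  bucket  = aws_s3_bucket.b.id
  key     = "Folder1/File1"
  content = "placeholder contents" # or use `source` to upload a local file
}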
Old answer, but if you specify a key containing a folder that doesn't exist yet, Terraform will create the folder automatically for you:
terraform {
  backend "s3" {
    bucket  = "mysql-staging"
    key     = "rds-mysql-state/terraform.tfstate"
    region  = "us-west-2"
    encrypt = true
  }
}
I would like to add to this discussion that you can create a set of empty folders by providing the resource a set of strings:
resource "aws_s3_object" "default_s3_content" {
for_each = var.default_s3_content
bucket = aws_s3_bucket.bucket.id
key = "${each.value}/"
}
where var.default_s3_content is a set of strings:
variable "default_s3_content" {
description = "The default content of the s3 bucket upon creation of the bucket"
type = set(string)
default = ["folder1", "folder2", "folder3", "folder4", "folder5"]
}
v0.12.8 introduces a new fileset() function, which can be used in combination with for_each to support this natively:
NEW FEATURES:
lang/funcs: New fileset function, for finding static local files that
match a glob pattern. (#22523)
A sample usage of this function is as follows (from here):
# Given the file structure from the initial issue:
# my-dir
# |- file_1
# |- dir_a
# | |- file_a_1
# | |- file_a_2
# |- dir_b
# | |- file_b_1
# |- dir_c
# And given the expected behavior of the base_s3_key prefix in the initial issue
resource "aws_s3_bucket_object" "example" {
  for_each = fileset(path.module, "my-dir/**/file_*")

  bucket = aws_s3_bucket.example.id
  key    = replace(each.value, "my-dir", "base_s3_key")
  source = each.value
}
At the time of this writing, v0.12.8 is a day old (released on 2019-09-04), so the documentation at https://www.terraform.io/docs/providers/aws/r/s3_bucket_object.html does not yet reference it. I am not certain whether that is intentional.
As an aside, if you use the above, remember to update/create version.tf in your project like so:
terraform {
  required_version = ">= 0.12.8"
}