How to send lifecycle_rules to an S3 module in Terraform - amazon-web-services

I have a Terraform module that creates an S3 bucket. I want the module to be able to accept lifecycle rules.
resource "aws_s3_bucket" "somebucket" {
bucket = "my-versioning-bucket"
acl = "private"
lifecycle_rule {
prefix = "config/"
enabled = true
noncurrent_version_transition {
days = 30
storage_class = "STANDARD_IA"
}
}
}
I want to be able to send the above lifecycle_rule block of code when I call the module. I tried to send it through a variable, but it did not work. I have done some research but had no luck. Any help is highly appreciated.

Try using an output: in one module, expose the desired value as an output, e.g.
output "lifecycle_rule" {
value = aws_s3_bucket.somebucket.id
}
and reference this value in your other module, like:
module "somename" {
source = "/somewhere"
lifecycle_rule = module.amodule-name-where-output-is-applied.lifecycle_rule
...
You would need to play around with this. Just give it a try; this is my best guess based on my understanding of Terraform and your question.
The link below can also help you:
Terraform: Output a field from a module
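A different approach, in case the output trick doesn't pan out, is to have the module accept the rules as a variable and expand them with a dynamic block. A rough sketch, assuming Terraform 0.12+ and an AWS provider version that still supports the inline lifecycle_rule block (all names below are illustrative, not from the question):

# Inside the module
variable "lifecycle_rules" {
  # Each element is one rule; each rule carries a (possibly empty)
  # noncurrent_version_transition list.
  type    = any
  default = []
}

resource "aws_s3_bucket" "somebucket" {
  bucket = "my-versioning-bucket"
  acl    = "private"

  dynamic "lifecycle_rule" {
    for_each = var.lifecycle_rules
    content {
      prefix  = lifecycle_rule.value.prefix
      enabled = lifecycle_rule.value.enabled

      dynamic "noncurrent_version_transition" {
        for_each = lifecycle_rule.value.noncurrent_version_transition
        content {
          days          = noncurrent_version_transition.value.days
          storage_class = noncurrent_version_transition.value.storage_class
        }
      }
    }
  }
}

# From the caller
module "bucket" {
  source = "./modules/s3"

  lifecycle_rules = [
    {
      prefix  = "config/"
      enabled = true
      noncurrent_version_transition = [
        { days = 30, storage_class = "STANDARD_IA" }
      ]
    }
  ]
}

The caller then describes its rules as plain data, and the module decides how to turn them into lifecycle_rule blocks.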

Related

Terraform - AWS - TypeError: planResultMessage.search is not a function

I've been scratching my head over this one for longer than I'd like to admit, but I'm throwing in the towel...
I have a large Terraform package and in the Terraform Plan, I get this error:
Terraform Plan (Error) Log
Exception Error in plan - TypeError: planResultMessage.search is not a function
I do not use the planResultMessage.search anywhere in my code, so my guess is that it is a Terraform error?
What I do know is that the set of resources it is deploying is a bunch of YAML documents that I am trying to use to create SSM documents. They are being loaded as such:
member_data.tf
data "template_file" "member_createmultiregiontrail" {
template = file("${path.module}/member-runbooks/member-asr-CreateCloudTrailMultiRegionTrail.yml")
}
data "template_file" "member_createlogmetricsfilteralarm" {
template = file("${path.module}/member-runbooks/member-asr-CreateLogMetricFilterAndAlarm.yml")
}
asr-member.tf
resource "aws_ssm_document" "asr_document_cloudtrail_multiregion" {
provider = aws.customer
count = var.enabled == true && var.child_account == true ? 1 : 0
name = "ASR-CreateCloudTrailMultiRegionTrail"
document_format = "YAML"
document_type = "Automation"
content = data.template_file.member_createmultiregiontrail.template
}
resource "aws_ssm_document" "asr_document_logs_metricsfilter_alarm" {
provider = aws.customer
count = var.enabled == true && var.child_account == true ? 1 : 0
name = "ASR-CreateLogMetricFilterAndAlarm"
document_format = "YAML"
document_type = "Automation"
content = data.template_file.member_createlogmetricsfilteralarm.template
}
I think the cause might be in these document files, because the Terraform error appears in the middle of the contents of these documents; it's always at a random location in one of the documents...
Example:
This one fell in the middle of the document for Security Hub's AFSBP Redshift.6 control, but at the beginning of that section it acknowledges that the resource will be created:
# module.aws-securityhub-master.aws_ssm_document.AFSBP_Redshift_6[0] will be created
I have tried loading the contents directly, using yamlencode, using simply "file", loading them into locals, pulling a file from locals, and now I'm on data sources.
If anyone can offer any help, it would be greatly appreciated.
DISCLAIMER:
This Terraform build out is a deconstruction of Amazon's SHARR solution:
https://aws.amazon.com/solutions/implementations/automated-security-response-on-aws/
you can see the various yaml build-outs here based on which security control:
https://github.com/aws-solutions/aws-security-hub-automated-response-and-remediation/tree/main/source/playbooks
The two that I specifically called out in my data sources are:
https://github.com/aws-solutions/aws-security-hub-automated-response-and-remediation/blob/main/source/remediation_runbooks/CreateCloudTrailMultiRegionTrail.yaml
and
https://github.com/aws-solutions/aws-security-hub-automated-response-and-remediation/blob/main/source/remediation_runbooks/CreateLogMetricFilterAndAlarm.yaml
and the AFSBP yaml can be found here (just in case it matters):
https://github.com/aws-solutions/aws-security-hub-automated-response-and-remediation/blob/main/source/playbooks/AFSBP/ssmdocs/AFSBP_Redshift.6.yaml
Thank you in advance!
This turned out to be a buffer overflow issue. Expanding the resources allocated to the deployment solved it.

Terraform: update only one cloud function from a bunch

I have a Terraform project that creates multiple cloud functions.
I know that if I change the name of the google_storage_bucket_object related to the function itself, Terraform will see the change in the zip name and redeploy the cloud function.
My question is: is there a way to obtain the same behaviour, but only for the cloud functions that have changed?
resource "google_storage_bucket_object" "zip_file" {
# Append file MD5 to force bucket to be recreated
name = "${local.filename}#${data.archive_file.source.output_md5}"
bucket = var.bucket.name
source = data.archive_file.source.output_path
}
# Create Java Cloud Function
resource "google_cloudfunctions_function" "java_function" {
name = var.function_name
runtime = var.runtime
available_memory_mb = var.memory
source_archive_bucket = var.bucket.name
source_archive_object = google_storage_bucket_object.zip_file.name
timeout = 120
entry_point = var.function_entry_point
event_trigger {
event_type = var.event_trigger.event_type
resource = var.event_trigger.resource
}
environment_variables = {
PROJECT_ID = var.env_project_id
SECRET_MAIL_PASSWORD = var.env_mail_password
}
timeouts {
create = "60m"
}
}
By appending the MD5, every cloud function ends up with a different zip file name, so Terraform will redeploy all of them; I also found that without the MD5, Terraform will not see any changes to deploy.
If I have changed code only inside one function, how can I tell Terraform to redeploy only that one (for example, by changing only its zip file name)?
I hope my question is clear, and I want to thank everyone who tries to help me!
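One idea worth sketching (not from the thread; the directory layout and variable names below are assumptions): give each function its own archive_file over its own source directory, so the MD5, and therefore the object name, changes only when that function's code changes.

# Hypothetical layout: each function's code lives in its own directory,
# e.g. functions/<function_name>/, and this module is instantiated once per function.
data "archive_file" "source" {
  type        = "zip"
  source_dir  = "${path.root}/functions/${var.function_name}"
  output_path = "${path.module}/build/${var.function_name}.zip"
}

resource "google_storage_bucket_object" "zip_file" {
  # The MD5 now reflects only this function's sources, so the object name
  # (and therefore the function) changes only when this function's code changes.
  name   = "${var.function_name}#${data.archive_file.source.output_md5}"
  bucket = var.bucket.name
  source = data.archive_file.source.output_path
}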

How do I apply a lifecycle rule to an EXISTING s3 bucket in Terraform?

New to Terraform. I'm trying to apply a lifecycle rule to an existing S3 bucket declared as a data source, but I guess I can't do that with a data source; it throws an error. Here's the gist of what I'm trying to achieve:
data "aws_s3_bucket" "test-bucket" {
bucket = "bucket_name"
lifecycle_rule {
id = "Expiration Rule"
enabled = true
prefix = "reports/"
expiration {
days = 30
}
}
}
...and if this were a resource, not a data source, then it would work. How can I apply a lifecycle rule to an S3 bucket declared as a data source? Google-fu has yielded little in the way of results. Thanks!
The best way to solve this is to import your bucket into Terraform state instead of using it as a data source.
To do that, put this in your Terraform code:
resource "aws_s3_bucket" "test-bucket" {
bucket = "bucket_name"
lifecycle_rule {
id = "Expiration Rule"
enabled = true
prefix = "reports/"
expiration {
days = 30
}
}
}
And then run in the terminal:
terraform import aws_s3_bucket.test-bucket bucket_name
This will import the bucket into your state, and now you can make changes or add new things to your bucket using Terraform.
As a last step, just run terraform apply and the lifecycle rule will be added.
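As a side note, separate from the import approach above: with newer AWS provider versions (v4 and later), lifecycle rules are managed by a standalone resource, which can also target a bucket that is only referenced as a data source. A sketch under that assumption:

data "aws_s3_bucket" "test-bucket" {
  bucket = "bucket_name"
}

resource "aws_s3_bucket_lifecycle_configuration" "test-bucket" {
  # Attach the lifecycle configuration to the existing bucket by name
  bucket = data.aws_s3_bucket.test-bucket.id

  rule {
    id     = "Expiration Rule"
    status = "Enabled"

    filter {
      prefix = "reports/"
    }

    expiration {
      days = 30
    }
  }
}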

Terraform not uploading a new ZIP

I want to use Terraform for deployment of my lambda functions. I did something like:
provider "aws" {
region = "ap-southeast-1"
}
data "archive_file" "lambda_zip" {
type = "zip"
source_dir = "src"
output_path = "build/lambdas.zip"
}
resource "aws_lambda_function" "test_terraform_function" {
filename = "build/lambdas.zip"
function_name = "test_terraform_function"
handler = "test.handler"
runtime = "nodejs8.10"
role = "arn:aws:iam::000000000:role/xxx-lambda-basic"
memory_size = 128
timeout = 5
source_code_hash = "${data.archive_file.lambda_zip.output_base64sha256}"
tags = {
"Cost Center" = "Consulting"
Developer = "Jiew Meng"
}
}
I find that when there is no change to test.js, Terraform correctly detects no change:
No changes. Infrastructure is up-to-date.
When I do change the test.js file, Terraform does detect a change:
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
~ update in-place
Terraform will perform the following actions:
~ aws_lambda_function.test_terraform_function
last_modified: "2018-12-20T07:47:16.888+0000" => <computed>
source_code_hash: "KpnhsytFF0yul6iESDCXiD2jl/LI9dv56SIJnwEi/hY=" => "JWIYsT8SszUjKEe1aVDY/ZWBVfrZYhhb1GrJL26rYdI="
It does build the new zip; however, it does not seem to update the function with the new ZIP. It seems that, since the filename has not changed, it does not upload ... How can I fix this behaviour?
=====
Following some of the answers here, I tried:
Using null_resource
Using S3 bucket/object with etag
And it does not update ... Why is that?
I ran into the same issue, and what solved it for me was publishing the Lambda functions automatically using the publish argument. To do so, simply set publish = true in your aws_lambda_function resource.
Note that your function will be versioned after this, and each change will create a new version. Therefore, you should make sure to use the qualified_arn attribute reference if you're referring to the function in any of your other Terraform code.
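A minimal sketch of where that argument goes, reusing the resource from the question (other arguments unchanged):

resource "aws_lambda_function" "test_terraform_function" {
  # ... filename, function_name, handler, runtime, role, etc. as in the question ...
  source_code_hash = "${data.archive_file.lambda_zip.output_base64sha256}"

  # Publish a new version whenever the code hash changes
  publish = true
}

# Elsewhere, refer to the published version rather than the unqualified ARN:
# aws_lambda_function.test_terraform_function.qualified_arn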
There is a workaround to trigger the resource to be refreshed if the target Lambda file names are src/main.py and src/handler.py. If you have more files to manage, add them one by one.
resource "null_resource" "lambda" {
triggers {
main = "${base64sha256(file("src/main.py"))}"
handler = "${base64sha256(file("src/handler.py"))}"
}
}
data "archive_file" "lambda_zip" {
type = "zip"
source_dir = "src"
output_path = "build/lambdas.zip"
depends_on = ["null_resource.lambda"]
}
Let me know if this works for you.
There are 2 things you need to take care of:
upload the zip file to S3 if its content has changed
update the Lambda function if the zip file content has changed
I can see you are taking care of the latter with source_code_hash. I don't see how you handle the former. It could look like this:
resource "aws_s3_bucket_object" "zip" {
bucket = "${aws_s3_bucket.zip.bucket}"
key = "myzip.zip"
source = "${path.module}/myzip.zip"
etag = "${md5(file("${path.module}/myzip.zip"))}"
}
etag is the most important option here.
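To connect that to the Lambda function itself, the function would then pull its code from the S3 object instead of a local filename; a rough sketch, assuming the same file and names as above:

resource "aws_lambda_function" "test_terraform_function" {
  function_name = "test_terraform_function"
  handler       = "test.handler"
  runtime       = "nodejs8.10"
  role          = "arn:aws:iam::000000000:role/xxx-lambda-basic"

  # Pull the code from the S3 object uploaded above
  s3_bucket = "${aws_s3_bucket_object.zip.bucket}"
  s3_key    = "${aws_s3_bucket_object.zip.key}"

  # Still needed so the function itself is updated when the zip content changes
  source_code_hash = "${base64sha256(file("${path.module}/myzip.zip"))}"
}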
I created this module to help ease some of the issues around deploying Lambda with Terraform: https://registry.terraform.io/modules/rojopolis/lambda-python-archive/aws/0.1.4
It may be useful in this scenario. Basically, it replaces the "archive_file" data source with a specialized lambda archive data source to better manage stable source code hash, etc.

How to specify dead letter dependency using modules?

I have the following core module based on this official module:
module "sqs" {
source = "github.com/terraform-aws-modules/terraform-aws-sqs?ref=0d48cbdb6bf924a278d3f7fa326a2a1c864447e2"
name = "${var.site_env}-sqs-${var.service_name}"
}
I'd like to create two queues: xyz and xyz_dead. xyz sends its dead letter messages to xyz_dead.
module "xyz_queue" {
source = "../helpers/sqs"
service_name = "xyz"
redrive_policy = <<POLICY {
"deadLetterTargetArn" : "${data.TODO.TODO.arn}",
"maxReceiveCount" : 5
}
POLICY
site_env = "${var.site_env}"
}
module "xyz_dead_queue" {
source = "../helpers/sqs"
service_name = "xyz_dead"
site_env = "${var.site_env}"
}
How do I specify the deadLetterTargetArn dependency?
If I do:
data "aws_sqs_queue" "dead_queue" {
filter {
name = "tag:Name"
values = ["${var.site_env}-sqs-xyz_dead"]
}
}
and set deadLetterTargetArn to "${data.aws_sqs_queue.dead_queue.arn}", then I get this error:
Error: data.aws_sqs_queue.thumbnail_requests_queue_dead: "name": required field is not set
Error: data.aws_sqs_queue.thumbnail_requests_queue_dead: : invalid or unknown key: filter
The best way to do this is to use the outputted ARN from the module:
module "xyz_queue" {
source = "../helpers/sqs"
service_name = "xyz"
site_env = "${var.site_env}"
redrive_policy = <<POLICY
{
"deadLetterTargetArn" : "${module.xyz_dead_queue.this_sqs_queue_arn}",
"maxReceiveCount" : 5
}
POLICY
}
module "xyz_dead_queue" {
source = "../helpers/sqs"
service_name = "xyz_dead"
site_env = "${var.site_env}"
}
NB: I've also changed the indentation of your HEREDOC here because you normally need to remove the indentation with these.
This will pass the ARN of the SQS queue directly from the xyz_dead_queue module to the xyz_queue.
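For module.xyz_dead_queue.this_sqs_queue_arn to resolve, the ../helpers/sqs wrapper itself has to re-export the queue ARN from the module it wraps; a minimal sketch, assuming the underlying terraform-aws-sqs module exposes this_sqs_queue_arn:

# In ../helpers/sqs (e.g. outputs.tf)
output "this_sqs_queue_arn" {
  value = "${module.sqs.this_sqs_queue_arn}"
}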
As for the errors you were getting, the aws_sqs_queue data source takes only a name argument, not a filter block like some of the other data sources do.
If you wanted to use the aws_sqs_queue data source, then you'd just want to use:
data "aws_sqs_queue" "dead_queue" {
name = "${var.site_env}-sqs-${var.service_name}"
}
That said, if you are creating two things at the same time then you are going to have issues using a data source to refer to one of those things unless you create the first resource first. This is because data sources run before resources so if neither queue yet exists your data source would run and not find the dead letter queue and thus fail. If the dead letter queue did exist then it would be okay. In general though you're best off avoiding using data sources like this and only use them to refer to things being created in a separate terraform apply (or perhaps even created outside of Terraform).
You are also much better off simply passing the outputs of resources or modules to other resources/modules and allowing Terraform to correctly build a dependency tree for them as well.