Terraform aws_s3_bucket_object not triggered by archive_file - amazon-web-services

I have built the following terraform code:
data "archive_file" "lambda_dependencies_bundle" {
depends_on = [
null_resource.lambda_dependencies
]
output_path = "${local.function_build_folder_path}/build/${local.function_s3_object_key}.zip"
excludes = ["${local.function_build_folder_path}/build/*"]
source_dir = local.function_build_folder_path
type = "zip"
}
resource "aws_s3_bucket" "lambda_dependencies_bucket" {
bucket = local.function_s3_bucket
acl = "private"
}
resource "aws_s3_bucket_object" "lambda_dependencies_upload" {
bucket = aws_s3_bucket.lambda_dependencies_bucket.id
key = "${local.function_s3_object_key}.zip"
source = data.archive_file.lambda_dependencies_bundle.output_path
}
The null_resource.lambda_dependencies is triggered by a file change and simply builds all of my code into local.function_build_folder_path.
Every time the null_resource changes, the archive_file.lambda_dependencies_bundle rebuilds (correct behavior!).
But contrary to what I expected, the aws_s3_bucket_object.lambda_dependencies_upload is not triggered by the rebuild of the archive_file.
How can I get the archive re-uploaded whenever it is rebuilt?

I would add etag, which (per the resource documentation) triggers updates when its value changes:
resource "aws_s3_bucket_object" "lambda_dependencies_upload" {
bucket = aws_s3_bucket.lambda_dependencies_bucket.id
key = "${local.function_s3_object_key}.zip"
source = data.archive_file.lambda_dependencies_bundle.output_path
etag = data.archive_file.lambda_dependencies_bundle.output_md5
}
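One caveat: etag is compared against the object's MD5, which does not line up when the bucket uses SSE-KMS encryption. If you are on AWS provider 4.x or later, where aws_s3_bucket_object is superseded by aws_s3_object, a sketch using source_hash instead could look like the following (same names as above; treat the exact arguments as an assumption to verify against your provider version):

resource "aws_s3_object" "lambda_dependencies_upload" {
  bucket = aws_s3_bucket.lambda_dependencies_bucket.id
  key    = "${local.function_s3_object_key}.zip"
  source = data.archive_file.lambda_dependencies_bundle.output_path

  # source_hash is stored only in state; any change to it forces a re-upload,
  # which sidesteps the etag/KMS mismatch.
  source_hash = data.archive_file.lambda_dependencies_bundle.output_md5
}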

Related

Terraform GCP executes resources in wrong order

I have this main.tf file:
provider "google" {
project = var.projNumber
region = var.regName
zone = var.zoneName
}
resource "google_storage_bucket" "bucket_for_python_application" {
name = "python_bucket_exam"
location = var.regName
force_destroy = true
}
resource "google_storage_bucket_object" "file-hello-py" {
name = "src/hello.py"
source = "app-files/src/hello.py"
bucket = "python_bucket_exam"
}
resource "google_storage_bucket_object" "file-main-py" {
name = "main.py"
source = "app-files/main.py"
bucket = "python_bucket_exam"
}
When executed the first time it worked fine, but after terraform destroy and then terraform plan -> terraform apply again, I noticed that Terraform tries to create the objects before actually creating the bucket:
Of course it can't create an object inside something that doesn't exist. Why is that?
You have to create a dependency between your objects and your bucket (see the code below). Otherwise, Terraform won't know that it has to create the bucket first and then the objects. This is related to how Terraform stores resources in a directed graph.
resource "google_storage_bucket_object" "file-hello-py" {
name = "src/hello.py"
source = "app-files/src/hello.py"
bucket = google_storage_bucket.bucket_for_python_application.name
}
resource "google_storage_bucket_object" "file-main-py" {
name = "main.py"
source = "app-files/main.py"
bucket = google_storage_bucket.bucket_for_python_application.name
}
By doing this, you declare an implicit order: bucket, then objects. This is equivalent to using depends_on in your google_storage_bucket_objects, but in that particular case I recommend using a reference to your bucket in your objects, rather than using an explicit depends_on.
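For comparison, the explicit depends_on variant mentioned above would look roughly like this (a sketch; the attribute reference shown above remains the recommended form):

resource "google_storage_bucket_object" "file-hello-py" {
  name   = "src/hello.py"
  source = "app-files/src/hello.py"
  bucket = "python_bucket_exam"

  # Explicit dependency: the bucket is created first even though its name
  # is hard-coded here instead of referenced.
  depends_on = [google_storage_bucket.bucket_for_python_application]
}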

Batch replace of files content in Terraform

I have multiple files under some root directory, let’s call it module/data/.
I need to upload this directory to the corresponding S3 bucket. All this works as expected with:
resource "aws_s3_bucket_object" "k8s-state" {
for_each = fileset("${path.module}/data", "**/*")
bucket = aws_s3_bucket.kops.bucket
key = each.value
source = "${path.module}/data/${each.value}"
etag = filemd5("${path.module}/data/${each.value}")
}
The only thing left is that I need to loop over all files recursively and replace markers (for example !S3!) with values from the Terraform module's variables.
Similar to this, but across all files in directories/subdirectories:
replace(file("${path.module}/launchconfigs/file"), "#S3", aws_s3_bucket.kops.bucket)
So the question in one sentence: how do I loop over files and replace parts of them with variables from Terraform?
An option could be using templates; the code would look like:
provider "aws" {
region = "us-west-1"
}
resource "aws_s3_bucket" "sample_bucket2222" {
bucket = "my-tf-test-bucket2222"
acl = "private"
}
resource "aws_s3_bucket_object" "k8s-state" {
for_each = fileset("${path.module}/data", "**/*")
bucket = aws_s3_bucket.sample_bucket2222.bucket
key = each.value
content = data.template_file.data[each.value].rendered
etag = filemd5("${path.module}/data/${each.value}")
}
data "template_file" "data" {
for_each = fileset("${path.module}/data", "**/*")
template = "${file("${path.module}/data/${each.value}")}"
vars = {
bucket_id = aws_s3_bucket.sample_bucket2222.id
bucket_arn = aws_s3_bucket.sample_bucket2222.arn
}
}
Instead of source, you can see I'm using content to consume the template_file; that is the only difference between that resource and yours.
In your files, the variables can be consumed like:
Hello ${bucket_id}
I have all my test code here:
https://github.com/heldersepu/hs-scripts/tree/master/TerraForm/regional
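As a side note, the template_file data source comes from the separate (now archived) template provider; on Terraform 0.12+ the built-in templatefile() function covers the same ground. A sketch of the same loop using it (same data/ layout and bucket assumed) could be:

resource "aws_s3_bucket_object" "k8s-state" {
  for_each = fileset("${path.module}/data", "**/*")

  bucket = aws_s3_bucket.sample_bucket2222.bucket
  key    = each.value

  # Render each file, substituting the ${bucket_id} and ${bucket_arn} markers.
  content = templatefile("${path.module}/data/${each.value}", {
    bucket_id  = aws_s3_bucket.sample_bucket2222.id
    bucket_arn = aws_s3_bucket.sample_bucket2222.arn
  })
}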

How to add lifecycle rules to an S3 bucket using terraform?

I am using Terraform to create a bucket in S3 and I want to add "folders" and lifecycle rules to it.
I can create the bucket (using an "aws_s3_bucket" resource).
I can create the bucket and define my lifecycle rules within the same "aws_s3_bucket" resource, i.e. at creation time.
I can add "folders" to the bucket (I know they aren't really folders, but they are presented to the client systems as if they were... :-) ), using an "aws_s3_bucket_object" resource, i.e. after bucket creation.
All good...
But when I try to add lifecycle rules AFTER I've created the bucket, I get an error telling me the bucket already exists. (Actually I want to be able to subsequently add folders and corresponding lifecycle rules as and when required.)
Now, I can add lifecycle rules to an existing bucket in the AWS GUI, so I know it is a reasonable thing to want to do.
But is there a way of doing it with Terraform?
Am I missing something?
resource "aws_s3_bucket" "bucket" {
bucket = "${replace(var.tags["Name"],"/_/","-")}"
region = "${var.aws_region}"
#tags = "${merge(var.tags, map("Name", "${var.tags["Name"]}"))}"
tags = "${merge(var.tags, map("Name", "${replace(var.tags["Name"],"/_/","-")}"))}"
}
resource "aws_s3_bucket" "bucket_quarterly" {
bucket = "${aws_s3_bucket.bucket.id}"
#region = "${var.aws_region}"
lifecycle_rule {
id = "quarterly_retention"
prefix = "quarterly/"
enabled = true
expiration {
days = 92
}
}
}
resource "aws_s3_bucket" "bucket_permanent" {
bucket = "${aws_s3_bucket.bucket.id}"
#region = "${var.aws_region}"
lifecycle_rule {
id = "permanent_retention"
enabled = true
prefix = "permanent/"
transition {
days = 1
storage_class = "GLACIER"
}
}
}
resource "aws_s3_bucket_object" "quarterly" {
bucket = "${aws_s3_bucket.bucket.id}"
#bucket = "${var.bucket_id}"
acl = "private"
key = "quarterly"
source = "/dev/null"
}
resource "aws_s3_bucket_object" "permanent" {
bucket = "${aws_s3_bucket.bucket.id}"
#bucket = "${var.bucket_id}"
acl = "private"
key = "permanent"
source = "/dev/null"
}
I expect to have a bucket with 2 lifecycle rules, but I get the following error:
Error: Error applying plan:
2 error(s) occurred:
* module.s3.aws_s3_bucket.bucket_quarterly: 1 error(s) occurred:
* aws_s3_bucket.bucket_quarterly: Error creating S3 bucket: BucketAlreadyOwnedByYou: Your previous request to create the named bucket succeeded and you already own it.
status code: 409, request id: EFE9C62B25341478, host id: hcsCNracNrpTJZ4QdU0AV2wNm/FqhYSEY4KieQ+zSHNsj6AUR69XvPF+0BiW4ZOpfgIoqwFoXkI=
* module.s3.aws_s3_bucket.bucket_permanent: 1 error(s) occurred:
* aws_s3_bucket.bucket_permanent: Error creating S3 bucket: BucketAlreadyOwnedByYou: Your previous request to create the named bucket succeeded and you already own it.
status code: 409, request id: 7DE1B1A36138A614, host id: 8jB6l7d6Hc6CZFgQSLQRMJg4wtvnrSL6Yp5R4RScq+GtuMW+6rkN39bcTUwQhzxeI7jRStgLXSc=
Terraform does not automatically rollback in the face of errors.
Instead, your Terraform state file has been partially updated with
any resources that successfully completed. Please address the error
above and apply again to incrementally change your infrastructure.
Let's first break down what's happening and how we can overcome this issue. Each time you define a resource "aws_s3_bucket", Terraform will attempt to create a bucket with the parameters specified. If you want to attach a lifecycle policy to a bucket, do it where you define the bucket, e.g.:
resource "aws_s3_bucket" "quarterly" {
bucket = "quarterly_bucket_name"
#bucket = "${var.bucket_id}"
acl = "private"
lifecycle_rule {
id = "quarterly_retention"
prefix = "folder/"
enabled = true
expiration {
days = 92
}
}
}
resource "aws_s3_bucket" "permanent" {
bucket = "perm_bucket_name"
acl = "private"
lifecycle_rule {
id = "permanent_retention"
enabled = true
prefix = "permanent/"
transition {
days = 1
storage_class = "GLACIER"
}
}
}
A bucket can have multiple lifecycle_rule blocks on it.
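For instance, the single bucket with two rules that the question asks for is just the two blocks side by side (a sketch reusing the rules above, still in the pre-4.x provider syntax used throughout this answer):

resource "aws_s3_bucket" "bucket" {
  bucket = "my-bucket-name"
  acl    = "private"

  lifecycle_rule {
    id      = "quarterly_retention"
    prefix  = "quarterly/"
    enabled = true

    expiration {
      days = 92
    }
  }

  lifecycle_rule {
    id      = "permanent_retention"
    prefix  = "permanent/"
    enabled = true

    transition {
      days          = 1
      storage_class = "GLACIER"
    }
  }
}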
If you want to define the lifecycle rules as external blocks, you can do it in this way:
// example of what the variable would look like:
variable "lifecycle_rules" {
  type    = "list"
  default = []
}

// example of what the assignment would look like:
lifecycle_rules = [{
  id      = "cleanup"
  prefix  = ""
  enabled = true

  expiration = [{
    days = 1
  }]
}, {...}, {...} etc...]

// example of what the usage would look like:
resource "aws_s3_bucket" "quarterly" {
  bucket = "quarterly_bucket_name"
  #bucket = "${var.bucket_id}"
  acl    = "private"

  lifecycle_rule = ["${var.lifecycle_rules}"]
}
Note: the implementation above of having an external lifecycle policy isn't really the best way to do it, but it was the only way. You pretty much trick Terraform into accepting the list of maps, which happens to be the same type as lifecycle_rule, so it works. Ideally, Terraform should have its own resource block for lifecycle rules, but it doesn't.
Edit: why have separate resource blocks when we now have dynamic blocks! Woohoo
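A sketch of what that dynamic-block version could look like on Terraform 0.12+ (the variable is re-declared here with 0.12 type syntax and a simplified flat shape, expiration_days, rather than the nested list above):

variable "lifecycle_rules" {
  type = list(object({
    id              = string
    prefix          = string
    enabled         = bool
    expiration_days = number
  }))
  default = []
}

resource "aws_s3_bucket" "quarterly" {
  bucket = "quarterly_bucket_name"
  acl    = "private"

  # One lifecycle_rule block is generated per element of var.lifecycle_rules.
  dynamic "lifecycle_rule" {
    for_each = var.lifecycle_rules

    content {
      id      = lifecycle_rule.value.id
      prefix  = lifecycle_rule.value.prefix
      enabled = lifecycle_rule.value.enabled

      expiration {
        days = lifecycle_rule.value.expiration_days
      }
    }
  }
}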
As far as I am aware, you cannot make a lifecycle policy separately.
Someone raised a request for a resource to be created to allow this, but it looks like it is still open: https://github.com/terraform-providers/terraform-provider-aws/issues/6188
As for your error, I believe you're getting it because:
resource "aws_s3_bucket" "bucket"
Creates a bucket with a particular name.
resource "aws_s3_bucket" "bucket_quarterly"
References bucket = "${aws_s3_bucket.bucket.id}" and therefore tries to create a bucket with the same name as the previous resource (which cannot be done as names are unique).
resource "aws_s3_bucket" "bucket_permanent"
Similarly, this resource references bucket = "${aws_s3_bucket.bucket.id}" and therefore tries to create a bucket with the same name as the first resource (which cannot be done as names are unique).
You mentioned "I expect to have a bucket with 2 lifecycle rules", but in your code above you are creating 3 separate S3 buckets (one without a lifecycle, and 2 with a lifecycle) and two objects (folders) that are being placed into the S3 bucket without a lifecycle policy.
Thanks for the info (I like the idea of the list to separate the rules from the resource).
The issue was that I didn't appreciate that you could define lifecycle rules within the resource AND change them subsequently, so I was trying to figure out how to define them separately...
All that's required is to specify them in the resource and do terraform apply, then you can edit it and add/amend/remove lifecycle_rules items and just do terraform apply again to apply the changes.
source "aws_s3_bucket" "my_s3_bucket" {
bucket = local.s3_bucket_name
}
resource "aws_s3_bucket_acl" "my_s3_bucket_acl" {
bucket = aws_s3_bucket.my_s3_bucket.arn
lifecycle {
prevent_destroy = true
}
}
resource "aws_s3_bucket_versioning" "my_s3_bucket_versioning" {
bucket = aws_s3_bucket.my_s3_bucket.arn
versioning_configuration {
status = true
}
}
resource "aws_s3_bucket_server_side_encryption_configuration" "my_s3-bucket_encryption" {
bucket = aws_s3_bucket.my_s3_bucket.arn
rule {
apply_server_side_encryption_by_default {
sse_algorithm = "AES256"
}
}
}
resource "aws_s3_bucket_lifecycle_configuration" "my_s3_bucket_lifecycle_config" {
bucket = aws_s3_bucket.my_s3_bucket.arn
rule {
id = "dev_lifecycle_7_days"
status = true
abort_incomplete_multipart_upload {
days_after_initiation = 30
}
noncurrent_version_expiration {
noncurrent_days = 1
}
transition {
storage_class = "STANDARD_IA"
days = 30
}
expiration {
days = 30
}
}
}

AWS Beanstalk Tomcat and Terraform

I'm trying to set up Tomcat using Beanstalk.
Here's my Terraform code:
(bucket is created beforehand)
# Upload the JAR to bucket
resource "aws_s3_bucket_object" "myjar" {
  bucket = "${aws_s3_bucket.mybucket.id}"
  key    = "src/java-tomcat-v3.zip"
  source = "${path.module}/src/java-tomcat-v3.zip"
  etag   = "${md5(file("${path.module}/src/java-tomcat-v3.zip"))}"
}

# Define app
resource "aws_elastic_beanstalk_application" "tftestapp" {
  name        = "tf-test-name"
  description = "tf-test-desc"
}

# Define beanstalk jar version
resource "aws_elastic_beanstalk_application_version" "myjarversion" {
  name         = "tf-test-version-label"
  application  = "tf-test-name"
  description  = "My description"
  bucket       = "${aws_s3_bucket.mybucket.id}"
  key          = "${aws_s3_bucket_object.myjar.id}"
  force_delete = true
}

# Deploy env
resource "aws_elastic_beanstalk_environment" "tftestenv" {
  name                = "tf-test-name"
  application         = "${aws_elastic_beanstalk_application.tftestapp.name}"
  solution_stack_name = "64bit Amazon Linux 2018.03 v3.0.0 running Tomcat 7 Java 7"

  setting {
    namespace = "aws:autoscaling:asg"
    name      = "MinSize"
    value     = "1"
  }
  ...
}
And I end up with a very strange error, saying it can't find the file on the bucket.
InvalidParameterCombination: Unable to download from S3 location
(Bucket: mybucket Key: src/java-tomcat-v3.zip). Reason: Not Found
Nevertheless, connecting to the web console and accessing my bucket, I can see the zip file is right there...
I don't get it, any help please?
PS: I tried with and without the src/
Cheers
I was recently having this same error on Terraform 0.13.
Differences between 0.13 and older versions:
The documentation appears to be out of date. For instance, under aws_elastic_beanstalk_application_version it shows:
resource "aws_s3_bucket" "default" {
bucket = "tftest.applicationversion.bucket"
}
resource "aws_s3_bucket_object" "default" {
bucket = aws_s3_bucket.default.id
key = "beanstalk/go-v1.zip"
source = "go-v1.zip"
}
resource "aws_elastic_beanstalk_application" "default" {
name = "tf-test-name"
description = "tf-test-desc"
}
resource "aws_elastic_beanstalk_application_version" "default" {
name = "tf-test-version-label"
application = "tf-test-name"
description = "application version created by terraform"
bucket = aws_s3_bucket.default.id
key = aws_s3_bucket_object.default.id
}
If you attempt to use this, Terraform fails on the bucket object because the "source" argument is no longer available within aws_elastic_beanstalk_application_version.
After removing the "source" property, it moved on to the next issue, which was Error: InvalidParameterCombination: Unable to download from S3 location (Bucket: mybucket Key: mybucket/myfile.txt). Reason: Not Found
This error comes from the following Terraform:
resource "aws_s3_bucket" "bucket" {
bucket = "mybucket"
}
resource "aws_s3_bucket_object" "default" {
bucket = aws_s3_bucket.bucket.id
key = "myfile.txt"
}
resource "aws_elastic_beanstalk_application" "default" {
name = "tf-test-name"
description = "tf-test-desc"
}
resource "aws_elastic_beanstalk_application_version" "default" {
name = "tf-test-version-label"
application = "tf-test-name"
description = "application version created by terraform"
bucket = aws_s3_bucket.bucket.id
key = aws_s3_bucket_object.default.id
}
What Terraform ends up doing here is prepending the bucket name to the key. When you run terraform plan you see that bucket = "mybucket" and key = "mybucket/myfile.txt". The problem with this is that Terraform looks in the bucket for the file "mybucket/myfile.txt" when it should ONLY be looking for "myfile.txt".
Solution
What I did was REMOVE the bucket and bucket object resources from the script and place the names in variables, as follows:
variable "sourceCodeS3BucketName" {
type = string
description = "The bucket that contains the engine code."
default = "mybucket"
}
variable "sourceCodeFilename" {
type = string
description = "The code file name."
default = "myfile.txt"
}
resource "aws_elastic_beanstalk_application" "myApp" {
name = "my-beanstalk-app"
description = "My application"
}
resource "aws_elastic_beanstalk_application_version" "v1_0_0" {
name = "my-application-v1_0_0"
application = aws_elastic_beanstalk_application.myApp.name
description = "Application v1.0.0"
bucket = var.sourceCodeS3BucketName
key = var.sourceCodeFilename
}
By directly using the name of the file and the bucket, Terraform does not prepend the bucket name to the key, and it can find the file just fine.
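A lighter-touch alternative worth considering (a sketch, assuming the object's exported key attribute behaves as documented) is to keep the bucket and object resources and reference key instead of id, which avoids the prepending problem:

resource "aws_elastic_beanstalk_application_version" "default" {
  name        = "tf-test-version-label"
  application = aws_elastic_beanstalk_application.default.name
  description = "application version created by terraform"
  bucket      = aws_s3_bucket.bucket.id

  # .key is exactly the object key ("myfile.txt"), with no bucket name prefixed.
  key = aws_s3_bucket_object.default.key
}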

how to create multiple folders inside an existing AWS bucket

How do I create multiple folders inside an existing bucket using Terraform?
example: bucket/folder1/folder2
resource "aws_s3_bucket_object" "folder1" {
bucket = "${aws_s3_bucket.b.id}"
acl = "private"
key = "Folder1/"
source = "/dev/null"
}
While Nate's answer is correct, it would lead to a lot of code duplication. A better solution in my opinion would be to work with a list and loop over it.
Create a variable (variable.tf file) that contains a list of possible folders:
variable "s3_folders" {
type = "list"
description = "The list of S3 folders to create"
default = ["folder1", "folder2", "folder3"]
}
Then alter the piece of code you already have:
resource "aws_s3_bucket_object" "folders" {
count = "${length(var.s3_folders)}"
bucket = "${aws_s3_bucket.b.id}"
acl = "private"
key = "${var.s3_folders[count.index]}/"
source = "/dev/null"
}
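On Terraform 0.12+ the same idea can also be written with for_each, which keys each object by folder name rather than list position (a sketch using the same s3_folders variable):

resource "aws_s3_bucket_object" "folders" {
  for_each = toset(var.s3_folders)

  bucket = aws_s3_bucket.b.id
  acl    = "private"
  key    = "${each.value}/"
  source = "/dev/null"
}

With for_each, removing one folder from the list only destroys that one object, instead of shifting the indexes of every object after it.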
Apply the same logic as you did to create the first directory.
resource "aws_s3_bucket_object" "folder1" {
bucket = "${aws_s3_bucket.b.id}"
acl = "private"
key = "Folder1/Folder2/"
source = "/dev/null"
}
There are no tips for Windows users, but this should work for you.
Slightly easier than using an empty file as the "source":
resource "aws_s3_bucket_object" "output_subdir" {
bucket = "${aws_s3_bucket.file_bucket.id}"
key = "output/"
content_type = "application/x-directory"
}
resource "aws_s3_bucket_object" "input_subdir" {
bucket = "${aws_s3_bucket.file_bucket.id}"
key = "input/"
content_type = "application/x-directory"
}
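The two ideas combine naturally; a sketch that loops over the same s3_folders variable from the earlier answer while using content_type instead of an empty source file:

resource "aws_s3_bucket_object" "folders" {
  for_each = toset(var.s3_folders)

  bucket       = aws_s3_bucket.file_bucket.id
  key          = "${each.value}/"
  content_type = "application/x-directory" # no local placeholder file needed
}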