How to create multiple folders inside an existing AWS bucket

How do I create multiple folders inside an existing bucket using Terraform?
Example: bucket/folder1/folder2
resource "aws_s3_bucket_object" "folder1" {
bucket = "${aws_s3_bucket.b.id}"
acl = "private"
key = "Folder1/"
source = "/dev/null"
}

While Nate's answer is correct, it would lead to a lot of code duplication. A better solution, in my opinion, is to work with a list and loop over it.
Create a variable (in a variable.tf file) that contains the list of folders to create:
variable "s3_folders" {
type = "list"
description = "The list of S3 folders to create"
default = ["folder1", "folder2", "folder3"]
}
Then alter the piece of code you already have:
resource "aws_s3_bucket_object" "folders" {
count = "${length(var.s3_folders)}"
bucket = "${aws_s3_bucket.b.id}"
acl = "private"
key = "${var.s3_folders[count.index]}/"
source = "/dev/null"
}
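As a side note, on Terraform 0.12.6 or later the same loop can be written with for_each, which keys each object by folder name rather than by list index. A minimal sketch, not part of the original answer:
resource "aws_s3_bucket_object" "folders" {
  for_each = toset(var.s3_folders)

  bucket = aws_s3_bucket.b.id
  acl    = "private"
  key    = "${each.value}/" # the trailing slash is what makes S3 display it as a folder
  source = "/dev/null"
}
With for_each, adding or removing a folder from the list no longer shifts the addresses of the remaining objects in state.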

Apply the same logic as you did to create the first directory.
resource "aws_s3_bucket_object" "folder1" {
bucket = "${aws_s3_bucket.b.id}"
acl = "private"
key = "Folder1/Folder2/"
source = "/dev/null"
}
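If you need many nested paths rather than a single hard-coded one, the two ideas above can be combined. Here is a sketch using setproduct() to build every parent/child key; the folder names are hypothetical placeholders:
locals {
  parent_folders = ["Folder1", "Folder2"] # hypothetical
  child_folders  = ["SubA", "SubB"]       # hypothetical

  # every combination, e.g. "Folder1/SubA/"
  nested_keys = [
    for pair in setproduct(local.parent_folders, local.child_folders) :
    "${pair[0]}/${pair[1]}/"
  ]
}
resource "aws_s3_bucket_object" "nested_folders" {
  for_each = toset(local.nested_keys)

  bucket = aws_s3_bucket.b.id
  acl    = "private"
  key    = each.value
  source = "/dev/null"
}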

There are no tips here for Windows users, but this should work for you as well.
Slightly easier than using an empty file as the source:
resource "aws_s3_bucket_object" "output_subdir" {
bucket = "${aws_s3_bucket.file_bucket.id}"
key = "output/"
content_type = "application/x-directory"
}
resource "aws_s3_bucket_object" "input_subdir" {
bucket = "${aws_s3_bucket.file_bucket.id}"
key = "input/"
content_type = "application/x-directory"
}
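If you have more than a couple of prefixes to create, the same trick combines naturally with for_each. A sketch reusing the bucket reference from the answer above:
resource "aws_s3_bucket_object" "subdirs" {
  for_each = toset(["input/", "output/"])

  bucket       = aws_s3_bucket.file_bucket.id
  key          = each.value
  content_type = "application/x-directory"
}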

Related

Combine Each.Value with String text?

Working on an AWS SFTP solution with custom IDP. I have this s3 object block, which is intended to create a folder in s3:
resource "aws_s3_bucket_object" "home_directory" {
for_each = var.idp_users
bucket = aws_s3_bucket.s3.id
key = each.value["HomeDirectory"]
}
And this map variable input for idp_users:
idp_users = {
  secret01 = {
    Password      = "password",
    HomeDirectory = "test-directory-1",
    Role          = "arn:aws:iam::XXXXXXXXXXXX:role/custom_idp_sftp_role",
  },
  secret02 = {
    Password      = "password",
    HomeDirectory = "test-directory-2",
    Role          = "arn:aws:iam::XXXXXXXXXXXX:role/custom_idp_sftp_role",
  }
}
What I need is to simply add a "/" to the end of the HomeDirectory value in the aws_s3_bucket_object block, which will create a folder with that name in the S3 bucket. I know the slash could just be typed into the variable, but in the spirit of automation I want Terraform to append it automatically and save us the hassle. I've monkeyed around with join and concatenation but can't figure out how to simply add a "/" to the end of the HomeDirectory value in the S3 object block. Can anyone provide some insight?
You can do that using string templating:
resource "aws_s3_bucket_object" "home_directory" {
for_each = var.idp_users
bucket = aws_s3_bucket.s3.id
key = "${each.value["HomeDirectory"]}/"
}
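If some HomeDirectory values might already end in a slash, a slightly more defensive variant uses trimsuffix() so the key never ends up with a doubled slash. A sketch, not from the original answer:
resource "aws_s3_bucket_object" "home_directory" {
  for_each = var.idp_users
  bucket   = aws_s3_bucket.s3.id

  # strip any existing trailing "/" before appending exactly one
  key = "${trimsuffix(each.value["HomeDirectory"], "/")}/"
}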

Terraform GCP executes resources in wrong order

I have this main.tf file:
provider "google" {
project = var.projNumber
region = var.regName
zone = var.zoneName
}
resource "google_storage_bucket" "bucket_for_python_application" {
name = "python_bucket_exam"
location = var.regName
force_destroy = true
}
resource "google_storage_bucket_object" "file-hello-py" {
name = "src/hello.py"
source = "app-files/src/hello.py"
bucket = "python_bucket_exam"
}
resource "google_storage_bucket_object" "file-main-py" {
name = "main.py"
source = "app-files/main.py"
bucket = "python_bucket_exam"
}
When executed the first time it worked fine, but after terraform destroy and then terraform plan -> terraform apply again, I noticed that Terraform tries to create the objects before actually creating the bucket.
Of course it can't create an object inside something that doesn't exist. Why is that?
You have to create a dependency between your objects and your bucket (see code below). Otherwise, Terraform won't know that it has to create the bucket first and then the objects. This is related to how Terraform stores resources in a directed graph.
resource "google_storage_bucket_object" "file-hello-py" {
name = "src/hello.py"
source = "app-files/src/hello.py"
bucket = google_storage_bucket.bucket_for_python_application.name
}
resource "google_storage_bucket_object" "file-main-py" {
name = "main.py"
source = "app-files/main.py"
bucket = google_storage_bucket.bucket_for_python_application.name
}
By doing this, you declare an implicit order: bucket first, then objects. This is equivalent to using depends_on in your google_storage_bucket_object resources, but in this particular case I recommend referencing the bucket in your objects rather than using an explicit depends_on.
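For completeness, the explicit depends_on form mentioned above would look like the sketch below; the implicit reference remains the better choice:
resource "google_storage_bucket_object" "file-main-py" {
  name   = "main.py"
  source = "app-files/main.py"
  bucket = "python_bucket_exam"

  # forces Terraform to create the bucket before this object
  depends_on = [google_storage_bucket.bucket_for_python_application]
}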

How to ensure an S3 bucket name is not used with Terraform?

I know the aws_s3_bucket data source can be used to get a reference to an existing bucket, but how could it be used to ensure that a potential new bucket name is unique?
I'm thinking of a loop using random numbers, but how can that be used to search for a bucket name which has not been used?
As discussed in the comments, this behaviour can be achieved with the bucket_prefix functionality.
This code:
resource "aws_s3_bucket" "my_s3_bucket" {
bucket_prefix = "my-stackoverflow-bucket-"
acl = "private"
tags = {
Name = "My bucket"
Environment = "Dev"
}
}
This produces a bucket whose name starts with the given prefix and ends with a unique, automatically generated suffix.
Another solution is to use bucket instead of bucket_prefix, combined with random_uuid, for example:
resource "aws_s3_bucket" "my_s3_bucket" {
bucket = "my-s3-bucket-${random_uuid.uuid.result}"
}
resource "random_uuid" "uuid" {}
This will give you a name like this:
my-s3-bucket-ebb92011-3cd9-503f-0977-7371102405f5
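Keep in mind that S3 bucket names are limited to 63 characters, so a full UUID can push a longer prefix over the limit. A shorter alternative, sketched here, is random_id from the same random provider:
resource "random_id" "suffix" {
  byte_length = 4 # 8 hex characters
}
resource "aws_s3_bucket" "my_s3_bucket" {
  bucket = "my-s3-bucket-${random_id.suffix.hex}"
}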

Batch replace of file contents in Terraform

I have multiple files under some root directory, let’s call it module/data/.
I need to upload this directory to the corresponding S3 bucket. All this works as expected with:
resource "aws_s3_bucket_object" "k8s-state" {
for_each = fileset("${path.module}/data", "**/*")
bucket = aws_s3_bucket.kops.bucket
key = each.value
source = "${path.module}/data/${each.value}"
etag = filemd5("${path.module}/data/${each.value}")
}
The only thing left is that I need to loop over all files recursively and replace markers (for example !S3!) with values from the Terraform module's variables.
Similar to this, but across all files in directories/subdirectories:
replace(file("${path.module}/launchconfigs/file"), "#S3", aws_s3_bucket.kops.bucket)
So the question in one sentence: how do I loop over files and replace parts of them with variables from Terraform?
One option could be to use templates; the code would look like this:
provider "aws" {
region = "us-west-1"
}
resource "aws_s3_bucket" "sample_bucket2222" {
bucket = "my-tf-test-bucket2222"
acl = "private"
}
resource "aws_s3_bucket_object" "k8s-state" {
for_each = fileset("${path.module}/data", "**/*")
bucket = aws_s3_bucket.sample_bucket2222.bucket
key = each.value
content = data.template_file.data[each.value].rendered
etag = filemd5("${path.module}/data/${each.value}")
}
data "template_file" "data" {
for_each = fileset("${path.module}/data", "**/*")
template = "${file("${path.module}/data/${each.value}")}"
vars = {
bucket_id = aws_s3_bucket.sample_bucket2222.id
bucket_arn = aws_s3_bucket.sample_bucket2222.arn
}
}
Instead of source, you can see I'm using content to consume the template_file; that is the only difference between that resource and yours.
In your files, the variables can be consumed like this:
Hello ${bucket_id}
I have all my test code here:
https://github.com/heldersepu/hs-scripts/tree/master/TerraForm/regional
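Note that the template_file data source comes from the separate template provider, which has since been archived. On Terraform 0.12 and later, the built-in templatefile() function gives the same result without the extra data block; a sketch under that assumption:
resource "aws_s3_bucket_object" "k8s-state" {
  for_each = fileset("${path.module}/data", "**/*")

  bucket = aws_s3_bucket.sample_bucket2222.bucket
  key    = each.value

  # render each file with the same variables the template_file block used
  content = templatefile("${path.module}/data/${each.value}", {
    bucket_id  = aws_s3_bucket.sample_bucket2222.id
    bucket_arn = aws_s3_bucket.sample_bucket2222.arn
  })
}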

How to reduce repeated HCL code in Terraform?

I have some Terraform code like this:
resource "aws_s3_bucket_object" "file1" {
key = "someobject1"
bucket = "${aws_s3_bucket.examplebucket.id}"
source = "./src/index.php"
}
resource "aws_s3_bucket_object" "file2" {
key = "someobject2"
bucket = "${aws_s3_bucket.examplebucket.id}"
source = "./src/main.php"
}
# same code here, 10 files more
# ...
Is there a simpler way to do this?
Terraform supports loops via the count meta parameter on resources and data sources.
So, for a slightly simpler example, if you wanted to loop over a well known list of files you could do something like the following:
locals {
  files = [
    "index.php",
    "main.php",
  ]
}
resource "aws_s3_bucket_object" "files" {
  count  = "${length(local.files)}"
  key    = "${local.files[count.index]}"
  bucket = "${aws_s3_bucket.examplebucket.id}"
  source = "./src/${local.files[count.index]}"
}
Unfortunately, Terraform's AWS provider doesn't have support for the equivalent of aws s3 sync or aws s3 cp --recursive, although there is an issue tracking the feature request.
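On Terraform 0.12 and later, for_each combined with fileset() gets reasonably close to a recursive copy, as shown in the earlier answers. Adapted to this question, and assuming the PHP files live under ./src, a sketch might look like this:
resource "aws_s3_bucket_object" "files" {
  for_each = fileset("${path.module}/src", "**/*.php")

  key    = each.value
  bucket = aws_s3_bucket.examplebucket.id
  source = "${path.module}/src/${each.value}"
  etag   = filemd5("${path.module}/src/${each.value}")
}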