Access selective folders in an S3 bucket via an AWS Transfer user - amazon-web-services

I have 3 folders in an S3 bucket and an AWS Transfer user which has access to one folder in that bucket, set up via Terraform:
resource "aws_transfer_user" "foo" {
  server_id           = aws_transfer_server.foo.id
  user_name           = "tftestuser"
  role                = aws_iam_role.foo.arn
  home_directory_type = "LOGICAL"

  home_directory_mappings {
    entry  = "/test.pdf"
    target = "/bucket3/test-path/folder1"
    // target = "/bucket3/test-path/folder2" --> Something like this, accessing folder1 and folder2
  }
}
Now I want it to have access to the 2nd folder as well. Is it possible to add another folder to the user, or will I have to create a new AWS Transfer user?

Try defining multiple home_directory_mappings blocks; Terraform accepts repeated blocks for some arguments, such as ordered_cache_behavior in aws_cloudfront_distribution.
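A minimal sketch of that suggestion, assuming the provider accepts repeated home_directory_mappings blocks (the entry names here are illustrative, and the IAM role must also allow access to both prefixes):

```hcl
resource "aws_transfer_user" "foo" {
  server_id           = aws_transfer_server.foo.id
  user_name           = "tftestuser"
  role                = aws_iam_role.foo.arn
  home_directory_type = "LOGICAL"

  # One mapping block per logical entry the user should see.
  home_directory_mappings {
    entry  = "/folder1"
    target = "/bucket3/test-path/folder1"
  }

  home_directory_mappings {
    entry  = "/folder2"
    target = "/bucket3/test-path/folder2"
  }
}
```

Each mapping needs a distinct entry path, since the entries become the names visible in the user's logical home directory.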

Related

GCP terraform-google-project-factory multiple projects update the service account with new bindings?

I am using the terraform-google-project-factory module to create multiple GCP projects at once. The projects create just fine, and I am using the included option to disable the default GCP compute service account and stand up a new Service Account in each project.
The module has an "sa_role" input where I assign "roles/compute.admin" to the new S.A. However, I would also like to assign some additional IAM roles to that Service Account in the same deployment. The sa_role input seems to take only one role value:
module "project-factory" {
  source   = "terraform-google-modules/project-factory/google"
  version  = "12.0.0"
  for_each = toset(local.project_names)

  random_project_id       = true
  name                    = each.key
  org_id                  = local.organization_id
  billing_account         = local.billing_account
  folder_id               = google_folder.DQS.id
  default_service_account = "disable"
  default_network_tier    = "PREMIUM"
  create_project_sa       = true
  auto_create_network     = false
  project_sa_name         = local.service_account
  sa_role                 = ["roles/compute.admin"]
  activate_apis           = ["compute.googleapis.com", "storage.googleapis.com", "oslogin.googleapis.com"]
}
The output for the Service Account email looks like this:
output "service_account_email" {
  value       = values(module.project-factory)[*].service_account_email
  description = "The email of the default service account"
}
How can I add additional IAM roles to this Service Account in the same main.tf? This Stack Overflow question comes close to what I wish to achieve:
Want to assign multiple Google cloud IAM roles against a service account via terraform
However, I do not know how to reference my Service Account email addresses from outputs.tf to make them available to the members = argument of the data google_iam_policy. My question is: how do I get this to work with data google_iam_policy, or is there a better way to do this?
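One hedged sketch, sidestepping data google_iam_policy (which is authoritative and replaces existing bindings) in favor of additive google_project_iam_member resources. This assumes the project-factory module exposes project_id and service_account_email outputs per instance; the extra role names are placeholders:

```hcl
locals {
  # Placeholder roles - substitute the ones you actually need.
  extra_sa_roles = ["roles/storage.admin", "roles/logging.logWriter"]

  # One element per (project, role) pair.
  project_role_pairs = {
    for pair in setproduct(keys(module.project-factory), local.extra_sa_roles) :
    "${pair[0]}-${pair[1]}" => {
      project = pair[0]
      role    = pair[1]
    }
  }
}

resource "google_project_iam_member" "project_sa_roles" {
  for_each = local.project_role_pairs

  project = module.project-factory[each.value.project].project_id
  role    = each.value.role
  member  = "serviceAccount:${module.project-factory[each.value.project].service_account_email}"
}
```

Referencing the module outputs directly like this avoids going through outputs.tf at all, since outputs are for consumers of your configuration, not for wiring within it.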

How do I get a list of all S3 buckets with a given prefix using Terraform?

I am writing a Terraform script to set up an event notification on multiple S3 buckets whose names start with a given prefix.
For example, I want to set up notifications for buckets starting with finance-data. With the help of the aws_s3_bucket data source, we can reference multiple S3 buckets that already exist and later use them in the aws_s3_bucket_notification resource. Example:
data "aws_s3_bucket" "source_bucket" {
  # Set of buckets on which the event notification will be set;
  # finance-data-1 and finance-data-2 are actual bucket IDs.
  for_each = toset(["finance-data-1", "finance-data-2"])
  bucket   = each.value
}

resource "aws_s3_bucket_notification" "bucket_notification_to_lambda" {
  for_each = data.aws_s3_bucket.source_bucket
  bucket   = each.value.id

  lambda_function {
    lambda_function_arn = aws_lambda_function.s3_event_lambda.arn
    events = [
      "s3:ObjectCreated:*",
      "s3:ObjectRemoved:*"
    ]
  }
}
In the aws_s3_bucket data source, I am not able to find an option to give a prefix for the bucket name; instead I have to enter the bucket ID for every bucket. Is there any way to achieve this?

Is there any way to achieve this?
No, there is not. You have to explicitly specify the buckets that you want.
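Since the list has to be explicit, one way to keep it manageable is to declare it once as a variable and feed it to the data source (a sketch; the variable name is made up):

```hcl
variable "finance_buckets" {
  description = "Existing buckets (sharing the finance-data prefix) to attach notifications to"
  type        = set(string)
  default     = ["finance-data-1", "finance-data-2"]
}

data "aws_s3_bucket" "source_bucket" {
  for_each = var.finance_buckets
  bucket   = each.value
}
```

New buckets then only need to be added in one place, in the variable default or a .tfvars file.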

Upload multiple files to multiple S3 buckets in Terraform

I am very new to Terraform. My requirement is to upload objects to existing S3 buckets. I want to upload one or more objects from my source to one or more buckets using only one resource. Using count and count.index I can create different numbers of resources, but doing so prevents me from using fileset, which helps to recursively upload all the contents of a folder.
The basic code looks like this. It uploads multiple files to a single bucket, but I would like to modify it for multiple uploads to different buckets:
variable "source_file_path" {
  type        = list(string)
  description = "Path from where objects are to be uploaded"
}

variable "bucket_name" {
  type        = list(string)
  description = "Name or ARN of the bucket to put the file in"
}

variable "data_folder" {
  type        = list(string)
  description = "Object path inside the bucket"
}

resource "aws_s3_bucket_object" "upload_object" {
  for_each = fileset(var.source_file_path, "*")
  bucket   = var.bucket_name
  key      = "${var.data_folder}${each.value}"
  source   = "${var.source_file_path}${each.value}"
}
I have created a vars.tfvars file with the following values:
source_file_path = ["source1", "source2"]
bucket_name      = ["bucket1", "bucket2"]
data_folder      = ["path1", "path2"]
So, what I need is, terraform to be able to upload all the files from the source1 to bucket1 s3 bucket by creating path1 inside the bucket. And similarly for source2, bucket2, and path2.
Is this something that can be done in terraform?
From your problem description it sounds like a more intuitive data structure to describe what you want to create would be a map of objects where the keys are bucket names and the values describe the settings for that bucket:
variable "buckets" {
  type = map(object({
    source_file_path = string
    key_prefix       = string
  }))
}
When defining the buckets in your .tfvars file this will now appear as a single definition with a complex type:
buckets = {
  bucket1 = {
    source_file_path = "source1"
    key_prefix       = "path1"
  }
  bucket2 = {
    source_file_path = "source2"
    key_prefix       = "path2"
  }
}
This data structure has one element for each bucket, so it is suitable to use directly as the for_each for a resource describing the buckets:
resource "aws_s3_bucket" "example" {
  for_each = var.buckets

  bucket = each.key
  # ...
}
There is a pre-existing official module hashicorp/dir/template which already encapsulates the work of finding files under a directory prefix, assigning each one a Content-Type based on its filename suffix, and optionally rendering templates. (You can ignore the template feature if you don't need it, by making your directory only contain static files.)
We need one instance of that module per bucket, because each bucket will have its own directory and thus its own set of files, and so we can use for_each chaining to tell Terraform that each instance of this module is related to one bucket:
module "bucket_files" {
  source   = "hashicorp/dir/template"
  for_each = aws_s3_bucket.example

  base_dir = var.buckets[each.key].source_file_path
}
The module documentation shows how to map the result of the module to S3 bucket objects, but that example is for only a single instance of the module. In your case we need an extra step to turn this into a single collection of files across all buckets, which we can do using flatten:
locals {
  bucket_files_flat = flatten([
    for bucket_name, files_module in module.bucket_files : [
      for file_key, file in files_module.files : {
        bucket_name  = bucket_name
        local_key    = file_key
        remote_key   = "${var.buckets[bucket_name].key_prefix}${file_key}"
        source_path  = file.source_path
        content      = file.content
        content_type = file.content_type
        etag         = file.digests.md5
      }
    ]
  ])
}
resource "aws_s3_bucket_object" "example" {
  for_each = {
    for bf in local.bucket_files_flat :
    "s3://${bf.bucket_name}/${bf.remote_key}" => bf
  }

  # Now the rest of this is basically the same as
  # the hashicorp/dir/template S3 example, but using
  # the local.bucket_files_flat structure instead
  # of the module result directly.
  bucket       = each.value.bucket_name
  key          = each.value.remote_key
  content_type = each.value.content_type

  # The template_files module guarantees that only one of these two attributes
  # will be set for each file, depending on whether it is an in-memory template
  # rendering result or a static file on disk.
  source  = each.value.source_path
  content = each.value.content

  # Unless the bucket has encryption enabled, the ETag of each object is an
  # MD5 hash of that object.
  etag = each.value.etag
}
Terraform needs a unique tracking key for each instance of aws_s3_bucket_object.example, and so I just arbitrarily decided to use the s3:// URI convention here, since I expect that's familiar to folks accustomed to working with S3. This means that the resource block will declare instances with addresses like this:
aws_s3_bucket_object.example["s3://bucket1/path1example.txt"]
aws_s3_bucket_object.example["s3://bucket2/path2other_example.txt"]
Because these objects are uniquely identified by their final location in S3, Terraform will treat changes to the files as updates in place, but any change to the location as removing the existing object and adding a new one at the same time.
(I replicated the fact that your example concatenated the path prefix with the filename without any intermediate separator, which is why it appears as path1example.txt above and not path1/example.txt. If you want the slash in there, you can add it to the expression that defines remote_key inside local.bucket_files_flat.)

Terraform overwriting state file on remote backend

Most probably I am doing something wrong or missing something here.
This is what my Terraform template looks like:
locals {
  credentials_file_path = "~/gcp-auth/account.json"
}

terraform {
  backend "gcs" {
    bucket      = "somebucket-tf-state"
    prefix      = "terraform/state/"
    credentials = "~/gcp-auth/account.json"
  }
}

provider "google" {
  region      = "${var.region}"
  credentials = "${file(local.credentials_file_path)}"
}

module "project" {
  source          = "../modules/gcp-project/"
  project_name    = "${var.project_name}"
  billing_account = "${var.billing_account}"
  org_id          = "${var.org_id}"
}
When I run this multiple times with different parameters, it overwrites the previous state file.
This is what I see in the bucket:
Buckets/somebucket-tf-state/terraform/state/default.tfstate
Is there a way I can create different state files per project I run the template for?
If I understand what you're trying to do correctly, then it sounds like what you need is workspaces.
Just do:
# Select the per-project workspace, or create a new one
terraform workspace select $GCE_PROJECT || terraform workspace new $GCE_PROJECT
# Plan and apply as usual
terraform plan -out .terraform/.terraform.plan && terraform apply .terraform/.terraform.plan
# Revert to the default workspace
terraform workspace select default
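With the gcs backend, each workspace then gets its own state object under the configured prefix, so the backend block itself does not need to change per project. A sketch, reusing the bucket and prefix from the question:

```hcl
terraform {
  backend "gcs" {
    bucket = "somebucket-tf-state"
    prefix = "terraform/state"
  }
}

# The backend stores one state object per workspace, e.g.:
#   terraform/state/default.tfstate       (default workspace)
#   terraform/state/my-project.tfstate    (workspace "my-project")
```

The workspace name is appended to the prefix automatically, which is what gives you the separate per-project state files you are after.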
A better option is to use GitOps: create an environment for each branch, and for every environment inject the correct value into the bucket name.

How to create a folder in an Amazon S3 bucket using Terraform

I was able to create a bucket in Amazon S3 using this link.
I used the following code to create the bucket:
resource "aws_s3_bucket" "b" {
  bucket = "my_tf_test_bucket"
  acl    = "private"
}
Now I wanted to create folders inside the bucket, say Folder1.
I found the link for creating an S3 object, but it has a mandatory parameter source. I am not sure what this value has to be, since my intent is to create a folder inside the S3 bucket.
When running Terraform on Mac or Linux, the following will do what you want:
resource "aws_s3_bucket_object" "folder1" {
  bucket = "${aws_s3_bucket.b.id}"
  acl    = "private"
  key    = "Folder1/"
  source = "/dev/null"
}
If you're on windows you can use an empty file.
While folks will be pedantic about S3 not having folders, there are a number of operations where having an object placeholder for a key prefix (otherwise called a folder) makes life easier, like s3 sync for example.
Actually, there is a canonical way to create it without being OS-dependent: by inspecting the network traffic when using the UI to create a folder, you can see the content headers, as stated by Alastair McCormack (https://stackoverflow.com/users/1554386/alastair-mccormack).
And S3 does support folders these days, as visible from the UI.
So this is how you can achieve it:
resource "aws_s3_bucket_object" "base_folder" {
  bucket       = "${aws_s3_bucket.default.id}"
  acl          = "private"
  key          = "${var.named_folder}/"
  content_type = "application/x-directory"
  kms_key_id   = "key_arn_if_used"
}
Please note the trailing slash; otherwise it creates an empty file.
The above has been used on Windows to successfully create a folder using Terraform's s3_bucket_object.
The answers here are outdated; it's now definitely possible to create an empty folder in S3 via Terraform, using the aws_s3_object resource as follows:
resource "aws_s3_bucket" "this_bucket" {
  bucket = "demo_bucket"
}

resource "aws_s3_object" "object" {
  bucket = aws_s3_bucket.this_bucket.id
  key    = "demo/directory/"
}
If you don't supply a source for the object, then Terraform will create an empty directory.
IMPORTANT: note the trailing slash; it ensures you get a directory and not an empty file.
S3 doesn't support folders. Objects can have prefix names with slashes that look like folders, but that's just part of the object name. So there's no way to create a folder in terraform or anything else, because there's no such thing as a folder in S3.
http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingMetadata.html
http://docs.aws.amazon.com/AWSImportExport/latest/DG/ManipulatingS3KeyNames.html
If you want to pretend, you could create a zero-byte object in the bucket named "Folder1/" but that's not required. You can just create objects with key names like "Folder1/File1" and it will work.
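As a sketch of that last point, a plain object whose key simply contains the prefix is enough; no placeholder "folder" object is needed first (the content here is a stand-in):

```hcl
resource "aws_s3_bucket_object" "file1" {
  bucket  = aws_s3_bucket.b.id
  key     = "Folder1/File1"
  content = "example content"
}
```

The S3 console will then display Folder1 as a folder, even though only the Folder1/File1 object exists.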
This is an old answer, but if you specify a key with a folder that doesn't exist yet, Terraform will create the folder automatically for you:
terraform {
  backend "s3" {
    bucket  = "mysql-staging"
    key     = "rds-mysql-state/terraform.tfstate"
    region  = "us-west-2"
    encrypt = true
  }
}
I would like to add to this discussion that you can create a set of empty folders by providing the resource a set of strings:
resource "aws_s3_object" "default_s3_content" {
  for_each = var.default_s3_content
  bucket   = aws_s3_bucket.bucket.id
  key      = "${each.value}/"
}
where var.default_s3_content is a set of strings:
variable "default_s3_content" {
  description = "The default content of the s3 bucket upon creation of the bucket"
  type        = set(string)
  default     = ["folder1", "folder2", "folder3", "folder4", "folder5"]
}
v0.12.8 introduces a new fileset() function, which can be used in combination with for_each to support this natively:
NEW FEATURES:
lang/funcs: New fileset function, for finding static local files that
match a glob pattern. (#22523)
A sample usage of this function is as follows (from here):
# Given the file structure from the initial issue:
# my-dir
# |- file_1
# |- dir_a
# |  |- file_a_1
# |  |- file_a_2
# |- dir_b
# |  |- file_b_1
# |- dir_c
# And given the expected behavior of the base_s3_key prefix in the initial issue
resource "aws_s3_bucket_object" "example" {
  for_each = fileset(path.module, "my-dir/**/file_*")

  bucket = aws_s3_bucket.example.id
  key    = replace(each.value, "my-dir", "base_s3_key")
  source = each.value
}
At the time of this writing, v0.12.8 is a day old (released on 2019-09-04), so the documentation at https://www.terraform.io/docs/providers/aws/r/s3_bucket_object.html does not reference it yet. I am not certain whether that's intentional.
As an aside, if you use the above, remember to update/create version.tf in your project like so:
terraform {
  required_version = ">= 0.12.8"
}