Terraform - Updating S3 Access Control: Question on replacing acl with grant

I have an S3 bucket which is used as Access logging bucket.
Here is my current module and resource TF code for that:
module "access_logging_bucket" {
source = "../../resources/s3_bucket"
environment = "${var.environment}"
region = "${var.region}"
acl = "log-delivery-write"
encryption_key_alias = "alias/ab-data-key"
name = "access-logging"
name_tag = "Access logging bucket"
}
resource "aws_s3_bucket" "default" {
bucket = "ab-${var.environment}-${var.name}-${random_id.bucket_suffix.hex}"
acl = "${var.acl}"
depends_on = [data.template_file.dependencies]
tags = {
name = "${var.name_tag}"
. . .
}
lifecycle {
ignore_changes = [ "server_side_encryption_configuration" ]
}
}
The default value of the acl variable is private (variable "acl" { default = "private" }) in my case, which also matches the default stated in the Terraform aws_s3_bucket attribute reference.
For this bucket it is set to log-delivery-write.
I want to update the bucket to add the following grants and remove acl, since the two conflict with each other:
grant {
  permissions = ["READ_ACP", "WRITE"]
  type        = "Group"
  uri         = "http://acs.amazonaws.com/groups/s3/LogDelivery"
}

grant {
  id          = data.aws_canonical_user_id.current.id
  permissions = ["FULL_CONTROL"]
  type        = "CanonicalUser"
}
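Putting the pieces together, the updated resource would look roughly like this (a sketch assembled from the snippets above; the aws_canonical_user_id data source is assumed to be declared elsewhere in the module):
resource "aws_s3_bucket" "default" {
  bucket     = "ab-${var.environment}-${var.name}-${random_id.bucket_suffix.hex}"
  depends_on = [data.template_file.dependencies]

  # Replaces acl = "log-delivery-write" with explicit grants
  grant {
    permissions = ["READ_ACP", "WRITE"]
    type        = "Group"
    uri         = "http://acs.amazonaws.com/groups/s3/LogDelivery"
  }

  grant {
    id          = data.aws_canonical_user_id.current.id
    permissions = ["FULL_CONTROL"]
    type        = "CanonicalUser"
  }

  tags = {
    name = "${var.name_tag}"
  }

  lifecycle {
    ignore_changes = ["server_side_encryption_configuration"]
  }
}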
My Questions are:
Does removing the acl attribute and adding the above-mentioned grants still maintain the correct access control for the bucket? In other words, is that grant configuration still sufficient for this to serve as an access logging bucket?
If I remove acl from the resource config, the bucket falls back to private, which is the default value. Is that the correct thing to do, or should it be set to null or something similar?
While checking the documentation for the Log Delivery group, I found the following, which leads me to think I can go ahead and replace the acl with the grants I mentioned:
Log Delivery group – Represented by http://acs.amazonaws.com/groups/s3/LogDelivery. WRITE permission on a bucket enables this group to write server access logs (see Amazon S3 server access logging) to the bucket. When using ACLs, a grantee can be an AWS account or one of the predefined Amazon S3 groups.

Based on the grant-log-delivery-permissions-general documentation, I went ahead and ran terraform apply.
On the first run it set the bucket owner permission correctly but removed the S3 log delivery group. So I ran terraform plan again, and it showed the acl/grant differences below. Most likely the first apply updated the acl value, which removed the grant for the log delivery group.
I then re-ran terraform apply and it worked fine, restoring the log delivery group as well.
# module.buckets.module.access_logging_bucket.aws_s3_bucket.default will be updated in-place
~ resource "aws_s3_bucket" "default" {
      acl           = "private"
      bucket        = "ml-mxs-stage-access-logging-9d8e94ff"
      force_destroy = false
      . . .
      tags = {
          "name" = "Access logging bucket"
          . . .
      }

    + grant {
        + permissions = [
            + "READ_ACP",
            + "WRITE",
          ]
        + type        = "Group"
        + uri         = "http://acs.amazonaws.com/groups/s3/LogDelivery"
      }
    + grant {
        + id          = "ID_VALUE"
        + permissions = [
            + "FULL_CONTROL",
          ]
        + type        = "CanonicalUser"
      }
      . . .
  }

Plan: 0 to add, 1 to change, 0 to destroy.

Related

How to create public google bucket with uniform_bucket_level_access enabled?

I want to create a publicly accessible Google Cloud Storage bucket with uniform_bucket_level_access enabled using Terraform. None of the public-bucket examples in the provider's docs include this setting.
When I try to use:
resource "google_storage_bucket_access_control" "public_rule" {
bucket = google_storage_bucket.a_bucket.name
role = "READER"
entity = "allUsers"
}
resource "google_storage_bucket" "a_bucket" {
name = <name>
location = <region>
project = var.project_id
storage_class = "STANDARD"
uniform_bucket_level_access = true
versioning {
enabled = false
}
}
I get the following error:
Error: Error creating BucketAccessControl: googleapi: Error 400: Cannot use ACL API to update bucket policy when uniform bucket-level access is enabled. Read more at https://cloud.google.com/storage/docs/uniform-bucket-level-access, invalid
If I remove the uniform_bucket_level_access line, everything works as expected.
Do I have to use the google_storage_bucket_iam resource to achieve this?
You will have to use google_storage_bucket_iam. I like to use the member one so I don't accidentally clobber other IAM bindings, but you can use whatever your needs dictate.
resource "google_storage_bucket_iam_member" "member" {
bucket = google_storage_bucket.a_bucket.name
role = "roles/storage.objectViewer"
member = "allUsers"
}
EDIT: Use this instead of the google_storage_bucket_access_control resource that you have.
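For completeness, a minimal sketch of the full combination, assuming the same bucket definition as in the question (the bucket name and location are placeholders):
resource "google_storage_bucket" "a_bucket" {
  name                        = "my-public-bucket" # placeholder name
  location                    = "EU"               # placeholder region
  project                     = var.project_id
  storage_class               = "STANDARD"
  uniform_bucket_level_access = true
}

# Grants public read access on objects via IAM instead of ACLs,
# which is compatible with uniform bucket-level access.
resource "google_storage_bucket_iam_member" "public_read" {
  bucket = google_storage_bucket.a_bucket.name
  role   = "roles/storage.objectViewer"
  member = "allUsers"
}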

MissingSecurityHeader error for S3 bucket ACL

I have the following s3 bucket defined:
module "bucket" {
source = "terraform-aws-modules/s3-bucket/aws"
version = "3.1.0"
bucket = local.test-bucket-name
acl = null
grant = [{
type = "CanonicalUser"
permission = "FULL_CONTROL"
id = data.aws_canonical_user_id.current.id
}, {
type = "CanonicalUser"
permission = "FULL_CONTROL"
id = data.aws_cloudfront_log_delivery_canonical_user_id.cloudfront.id
}
]
object_ownership = "BucketOwnerPreferred"
}
But when I try to terraform apply this, I get the error:
Error: error updating S3 bucket ACL (logs,private): MissingSecurityHeader: Your request was missing a required header status code: 400
This error message is not very specific. Am I missing some type of header?
I came across the same issue.
I was trying to update an ACL on a bucket that previously had private set as its ACL, modifying my Terraform code to match ACL entries that someone had created manually via the console.
To get it working, I manually removed one of the ACL entries I was trying to add from the S3 bucket and then re-ran Terraform; it applied without an error.
I see the same error in CloudTrail as well.
It seems you can't change a private ACL to null without also adding an ACL entry.
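For reference, one way to do that manual reset before re-applying is with the AWS CLI, e.g. aws s3api put-bucket-acl --bucket <your-bucket> --acl private (the bucket name is a placeholder), after which terraform apply can recreate the grant entries from the configuration.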

Can I grant a service account access to multiple buckets in a single policy?

I'm coming from AWS and still learning how IAM/Policies work in GCP. In AWS, if I wanted to grant a role access to multiple buckets I would do something like this in Terraform:
data "aws_iam_policy_document" "policy" {
statement {
actions = [
"s3:Get*"
]
resources = [
"${var.bucket1_arn}/*",
"${var.bucket2_arn}/*",
"${var.bucket3_arn}/*",
]
}
}
resource "aws_iam_policy" "policy" {
name = "my-policy"
policy = data.aws_iam_policy_document.policy.json
}
resource "aws_iam_role_policy_attachment" "policy_attachment" {
policy_arn = aws_iam_policy.policy.arn
role = ${var.role_name}
}
I've been trying to figure out how to do it in GCP, but all I've found so far is that I need to attach a policy to each bucket individually, like so:
data "google_iam_policy" "policy" {
binding {
role = "roles/storage.objectViewer"
members = [
"serviceAccount:${service_account}",
]
}
}
resource "google_storage_bucket_iam_policy" "bucket_1" {
bucket = google_storage_bucket.bucket_1.name
policy_data = data.google_iam_policy.policy.policy_data
}
resource "google_storage_bucket_iam_policy" "bucket_2" {
bucket = google_storage_bucket.bucket_2.name
policy_data = data.google_iam_policy.policy.policy_data
}
resource "google_storage_bucket_iam_policy" "bucket_3" {
bucket = google_storage_bucket.bucket_3.name
policy_data = data.google_iam_policy.policy.policy_data
}
Is this the correct way (or best practice?) to grant a service account access to multiple buckets?
Yes, Google IAM is resource-centric (my understanding is that AWS flips this and is identity-centric): you apply policies to resources.
Because the container (i.e. a Project) may contain many buckets, your only alternative is to apply the binding to the Project itself, but then every bucket in the Project would carry the binding.
The approach you're taking yields precision (only the buckets granted the role have it), albeit at the cost of a slightly onerous role-binding phase (something done infrequently).
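If the per-bucket repetition is a concern, a for_each over the bucket names keeps the precision while reducing boilerplate. A sketch, assuming the buckets are managed elsewhere and the variable names are illustrative:
variable "readable_buckets" {
  type    = list(string)
  default = ["bucket-1", "bucket-2", "bucket-3"] # illustrative names
}

# One IAM member binding per bucket in the list
resource "google_storage_bucket_iam_member" "viewer" {
  for_each = toset(var.readable_buckets)

  bucket = each.value
  role   = "roles/storage.objectViewer"
  member = "serviceAccount:${var.service_account}"
}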
DazWikin's answer is right, but on GCP you can cheat: you can use IAM conditions and build something like this:
Grant the account (service or user) the role at the folder or organisation level, which gives it access to all the resources below; for example, grant the Storage Admin role.
Use a condition to restrict the role to only a subset of buckets.
Like this:
resource "google_organization_iam_binding" "Binding" {
members = ["<ACCOUNT_EMAIL>"]
org_id = "<YOUR_ORG_ID>"
role = "roldes/storage.admin"
condition {
expression = 'resource.name.startsWith("projects/_/buckets/<BUCKET1>") || resource.name.startsWith("projects/_/buckets/<BUCKET2>")'
title = "bucket filter"
}
}
It's not very clean, especially when you later need to add new buckets to the list, but it's a workaround for your question.
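If the bucket list changes often, the condition expression can be generated from a list so that adding a bucket is a one-line change. A sketch, with illustrative variable and bucket names:
variable "condition_buckets" {
  type    = list(string)
  default = ["bucket-one", "bucket-two"] # illustrative names
}

locals {
  # Builds: resource.name.startsWith("projects/_/buckets/bucket-one") || ...
  bucket_condition = join(" || ", [
    for b in var.condition_buckets :
    "resource.name.startsWith(\"projects/_/buckets/${b}\")"
  ])
}

resource "google_organization_iam_binding" "binding" {
  members = ["<ACCOUNT_EMAIL>"]
  org_id  = "<YOUR_ORG_ID>"
  role    = "roles/storage.admin"

  condition {
    expression = local.bucket_condition
    title      = "bucket filter"
  }
}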

AWS Macie & Terraform - Select all S3 buckets in account

I am enabling AWS Macie 2 using Terraform, and I am defining a default classification job as follows:
resource "aws_macie2_account" "member" {}
resource "aws_macie2_classification_job" "member" {
job_type = "ONE_TIME"
name = "S3 PHI Discovery default"
s3_job_definition {
bucket_definitions {
account_id = var.account_id
buckets = ["S3 BUCKET NAME 1", "S3 BUCKET NAME 2"]
}
}
depends_on = [aws_macie2_account.member]
}
AWS Macie needs a list of S3 buckets to analyze. I am wondering if there is a way to select all buckets in an account, using a wildcard or some other method. Our production accounts contain hundreds of S3 buckets and hard-coding each value in the s3_job_definition is not feasible.
Any ideas?
The Terraform AWS provider does not support a data source for listing S3 buckets at this time, unfortunately. For things like this (data sources that Terraform doesn't support), the common approach is to use the AWS CLI through an external data source.
These are modules that I like to use for CLI/shell commands:
As a data source (re-runs each time)
As a resource (re-runs only on resource recreate or on a change to a trigger)
Using the data source version, it would look something like:
module "list_buckets" {
source = "Invicton-Labs/shell-data/external"
version = "0.1.6"
// Since the command is the same on both Unix and Windows, it's ok to just
// specify one and not use the `command_windows` input arg
command_unix = "aws s3api list-buckets --output json"
// You want Terraform to fail if it can't get the list of buckets for some reason
fail_on_error = true
// Specify your AWS credentials as environment variables
environment = {
AWS_PROFILE = "myprofilename"
// Alternatively, although not recommended:
// AWS_ACCESS_KEY_ID = "..."
// AWS_SECRET_ACCESS_KEY = "..."
}
}
output "buckets" {
// We specified JSON format for the output, so decode it to get a list
value = jsondecode(module.list_buckets.stdout).Buckets
}
Apply complete! Resources: 0 added, 0 changed, 0 destroyed.

Outputs:

buckets = [
  {
    "CreationDate" = "2021-07-15T18:10:20+00:00"
    "Name" = "bucket-foo"
  },
  {
    "CreationDate" = "2021-07-15T18:11:10+00:00"
    "Name" = "bucket-bar"
  },
]
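To close the loop with the original Macie job, the decoded bucket names could then be fed straight into the classification job. A sketch combining the two snippets above:
resource "aws_macie2_classification_job" "member" {
  job_type = "ONE_TIME"
  name     = "S3 PHI Discovery default"

  s3_job_definition {
    bucket_definitions {
      account_id = var.account_id
      // Pull every bucket name returned by the CLI call
      buckets = [for b in jsondecode(module.list_buckets.stdout).Buckets : b.Name]
    }
  }

  depends_on = [aws_macie2_account.member]
}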

How to add lifecycle rules to an S3 bucket using terraform?

I am using Terraform to create a bucket in S3 and I want to add "folders" and lifecycle rules to it.
I can create the bucket (using an "aws_s3_bucket" resource).
I can create the bucket and define my lifecycle rules within the same "aws_s3_bucket" resource, i.e. at creation time.
I can add "folders" to the bucket (I know they aren't really folders, but they are presented to the client systems as if they were... :-) ), using an "aws_s3_bucket_object" resource, i.e. after bucket creation.
All good...
But when I try to add lifecycle rules AFTER creating the bucket, I get an error telling me the bucket already exists. (Actually I want to be able to subsequently add folders and corresponding lifecycle rules as and when required.)
Now, I can add lifecycle rules to an existing bucket in the AWS GUI, so I know it is a reasonable thing to want to do.
But is there a way of doing it with Terraform?
Am I missing something?
resource "aws_s3_bucket" "bucket" {
bucket = "${replace(var.tags["Name"],"/_/","-")}"
region = "${var.aws_region}"
#tags = "${merge(var.tags, map("Name", "${var.tags["Name"]}"))}"
tags = "${merge(var.tags, map("Name", "${replace(var.tags["Name"],"/_/","-")}"))}"
}
resource "aws_s3_bucket" "bucket_quarterly" {
bucket = "${aws_s3_bucket.bucket.id}"
#region = "${var.aws_region}"
lifecycle_rule {
id = "quarterly_retention"
prefix = "quarterly/"
enabled = true
expiration {
days = 92
}
}
}
resource "aws_s3_bucket" "bucket_permanent" {
bucket = "${aws_s3_bucket.bucket.id}"
#region = "${var.aws_region}"
lifecycle_rule {
id = "permanent_retention"
enabled = true
prefix = "permanent/"
transition {
days = 1
storage_class = "GLACIER"
}
}
}
resource "aws_s3_bucket_object" "quarterly" {
bucket = "${aws_s3_bucket.bucket.id}"
#bucket = "${var.bucket_id}"
acl = "private"
key = "quarterly"
source = "/dev/null"
}
resource "aws_s3_bucket_object" "permanent" {
bucket = "${aws_s3_bucket.bucket.id}"
#bucket = "${var.bucket_id}"
acl = "private"
key = "permanent"
source = "/dev/null"
}
I expect to have a bucket with 2 lifecycle rules, but I get the following error:
Error: Error applying plan:
2 error(s) occurred:
* module.s3.aws_s3_bucket.bucket_quarterly: 1 error(s) occurred:
* aws_s3_bucket.bucket_quarterly: Error creating S3 bucket: BucketAlreadyOwnedByYou: Your previous request to create the named bucket succeeded and you already own it.
status code: 409, request id: EFE9C62B25341478, host id: hcsCNracNrpTJZ4QdU0AV2wNm/FqhYSEY4KieQ+zSHNsj6AUR69XvPF+0BiW4ZOpfgIoqwFoXkI=
* module.s3.aws_s3_bucket.bucket_permanent: 1 error(s) occurred:
* aws_s3_bucket.bucket_permanent: Error creating S3 bucket: BucketAlreadyOwnedByYou: Your previous request to create the named bucket succeeded and you already own it.
status code: 409, request id: 7DE1B1A36138A614, host id: 8jB6l7d6Hc6CZFgQSLQRMJg4wtvnrSL6Yp5R4RScq+GtuMW+6rkN39bcTUwQhzxeI7jRStgLXSc=
Terraform does not automatically rollback in the face of errors.
Instead, your Terraform state file has been partially updated with
any resources that successfully completed. Please address the error
above and apply again to incrementally change your infrastructure.
Let's first break down what's happening and how we can overcome this issue. Each time you define a resource "aws_s3_bucket", Terraform will attempt to create a bucket with the parameters specified. If you want to attach a lifecycle policy to a bucket, do it where you define the bucket, e.g.:
resource "aws_s3_bucket" "quarterly" {
bucket = "quarterly_bucket_name"
#bucket = "${var.bucket_id}"
acl = "private"
lifecycle_rule {
id = "quarterly_retention"
prefix = "folder/"
enabled = true
expiration {
days = 92
}
}
}
resource "aws_s3_bucket" "permanent" {
bucket = "perm_bucket_name"
acl = "private"
lifecycle_rule {
id = "permanent_retention"
enabled = true
prefix = "permanent/"
transition {
days = 1
storage_class = "GLACIER"
}
}
}
A bucket can have multiple lifecycle_rule blocks on it.
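For example, the two retention rules from the question could live on one bucket rather than on separate resources. A sketch, with an illustrative bucket name:
resource "aws_s3_bucket" "bucket" {
  bucket = "example-bucket-name" # illustrative name
  acl    = "private"

  lifecycle_rule {
    id      = "quarterly_retention"
    prefix  = "quarterly/"
    enabled = true

    expiration {
      days = 92
    }
  }

  lifecycle_rule {
    id      = "permanent_retention"
    prefix  = "permanent/"
    enabled = true

    transition {
      days          = 1
      storage_class = "GLACIER"
    }
  }
}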
If you want to define the lifecycle rules as external blocks, you can do it in this way:
// example of what the variable would look like:
variable "lifecycle_rules" {
  type    = "list"
  default = []
}

// example of what the assignment would look like:
lifecycle_rules = [{
  id      = "cleanup"
  prefix  = ""
  enabled = true
  expiration = [{
    days = 1
  }]
}, {...}, {...} etc...]

// example of what the usage would look like
resource "aws_s3_bucket" "quarterly" {
  bucket = "quarterly_bucket_name"
  #bucket = "${var.bucket_id}"
  acl    = "private"

  lifecycle_rule = ["${var.lifecycle_rules}"]
}
Note: the implementation above of having an external lifecycle policy isn't really the best way to do it, but it's the only way. You pretty much trick Terraform into accepting the list of maps, which happens to be the same type as lifecycle_rule, so it works. Ideally, Terraform would have its own resource block for lifecycle rules, but it doesn't.
Edit: why have separate resource blocks when we now have dynamic blocks! Woohoo
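A sketch of what the dynamic-block version might look like, assuming the same lifecycle_rules list variable as above (only expiration rules are handled here, for brevity):
resource "aws_s3_bucket" "quarterly" {
  bucket = "quarterly_bucket_name"
  acl    = "private"

  // Generates one lifecycle_rule block per entry in var.lifecycle_rules
  dynamic "lifecycle_rule" {
    for_each = var.lifecycle_rules

    content {
      id      = lifecycle_rule.value.id
      prefix  = lifecycle_rule.value.prefix
      enabled = lifecycle_rule.value.enabled

      expiration {
        days = lifecycle_rule.value.expiration[0].days
      }
    }
  }
}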
As far as I am aware, you cannot manage a lifecycle policy separately.
Someone raised a feature request for a resource that would allow you to do so, but it looks like it is still open: https://github.com/terraform-providers/terraform-provider-aws/issues/6188
As for your error, I believe the reason you're getting it is:
resource "aws_s3_bucket" "bucket"
Creates a bucket with a particular name.
resource "aws_s3_bucket" "bucket_quarterly"
References bucket = "${aws_s3_bucket.bucket.id}" and therefore tries to create a bucket with the same name as the previous resource (which cannot be done, as bucket names are unique).
resource "aws_s3_bucket" "bucket_permanent"
Similarly, this resource references bucket = "${aws_s3_bucket.bucket.id}" and therefore tries to create a bucket with the same name as the first resource (which, again, cannot be done).
You mentioned "I expect to have a bucket with 2 lifecycle rules", but in your code above you are creating 3 separate S3 buckets (one without a lifecycle and 2 with a lifecycle) and two objects (folders) that are placed into the S3 bucket without a lifecycle policy.
Thanks for the info (I like the idea of the list to separate the rules from the resource).
The issue was that I didn't appreciate that you could define lifecycle rules within the resource AND change them subsequently, so I was trying to figure out how to define them separately...
All that's required is to specify them in the resource and run terraform apply; then you can edit the configuration, add/amend/remove lifecycle_rule blocks, and run terraform apply again to apply the changes.
source "aws_s3_bucket" "my_s3_bucket" {
bucket = local.s3_bucket_name
}
resource "aws_s3_bucket_acl" "my_s3_bucket_acl" {
bucket = aws_s3_bucket.my_s3_bucket.arn
lifecycle {
prevent_destroy = true
}
}
resource "aws_s3_bucket_versioning" "my_s3_bucket_versioning" {
bucket = aws_s3_bucket.my_s3_bucket.arn
versioning_configuration {
status = true
}
}
resource "aws_s3_bucket_server_side_encryption_configuration" "my_s3-bucket_encryption" {
bucket = aws_s3_bucket.my_s3_bucket.arn
rule {
apply_server_side_encryption_by_default {
sse_algorithm = "AES256"
}
}
}
resource "aws_s3_bucket_lifecycle_configuration" "my_s3_bucket_lifecycle_config" {
bucket = aws_s3_bucket.my_s3_bucket.arn
rule {
id = "dev_lifecycle_7_days"
status = true
abort_incomplete_multipart_upload {
days_after_initiation = 30
}
noncurrent_version_expiration {
noncurrent_days = 1
}
transition {
storage_class = "STANDARD_IA"
days = 30
}
expiration {
days = 30
}
}
}