I have the following s3 bucket defined:
module "bucket" {
source = "terraform-aws-modules/s3-bucket/aws"
version = "3.1.0"
bucket = local.test-bucket-name
acl = null
grant = [{
type = "CanonicalUser"
permission = "FULL_CONTROL"
id = data.aws_canonical_user_id.current.id
}, {
type = "CanonicalUser"
permission = "FULL_CONTROL"
id = data.aws_cloudfront_log_delivery_canonical_user_id.cloudfront.id
}
]
object_ownership = "BucketOwnerPreferred"
}
But when I try to terraform apply this, I get the error:
Error: error updating S3 bucket ACL (logs,private): MissingSecurityHeader: Your request was missing a required header
	status code: 400
This error message is not very specific. Am I missing some type of header?
I came across the same issue.
The bucket had previously had its ACL set to private, and I was modifying my Terraform code to match ACL entries that someone had created manually via the GUI.
To get it working, I manually removed one of the ACL entries I was trying to add from the S3 bucket, then re-ran Terraform, and it applied without an error.
I see the same error in CloudTrail as well.
It seems you can't change a private acl to null without also adding an ACL entry.
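If you'd rather not click through the console, the manual reset can also be done with the AWS CLI before re-running apply. This is just a sketch; the bucket name is a placeholder:
aws s3api put-bucket-acl --bucket <bucket-name> --acl private
Resetting to the private canned ACL replaces the whole ACL with an owner-only grant, after which terraform apply can add the grants from the configuration cleanly.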
I want to create a publicly accessible Google Cloud Storage bucket with uniform_bucket_level_access enabled using Terraform. None of the public-bucket examples in the provider's docs include this setting.
When I try to use:
resource "google_storage_bucket_access_control" "public_rule" {
bucket = google_storage_bucket.a_bucket.name
role = "READER"
entity = "allUsers"
}
resource "google_storage_bucket" "a_bucket" {
name = <name>
location = <region>
project = var.project_id
storage_class = "STANDARD"
uniform_bucket_level_access = true
versioning {
enabled = false
}
}
I get the following error:
Error: Error creating BucketAccessControl: googleapi: Error 400: Cannot use ACL API to update bucket policy when uniform bucket-level access is enabled. Read more at https://cloud.google.com/storage/docs/uniform-bucket-level-access, invalid
If I remove the uniform_bucket_level_access line, everything works as expected.
Do I have to use the google_storage_bucket_iam resource to achieve this?
You will have to use google_storage_bucket_iam. I like to use the member one so I don't accidentally clobber other IAM bindings, but you can use whatever your needs dictate.
resource "google_storage_bucket_iam_member" "member" {
bucket = google_storage_bucket.a_bucket.name
role = "roles/storage.objectViewer"
member = "allUsers"
}
EDIT: Use this instead of the google_storage_bucket_access_control resource that you have.
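For completeness, the two pieces fit together like this. Just a sketch reusing the bucket from the question: uniform bucket-level access stays enabled, and public read is granted purely through IAM, since enabling that setting disables the ACL API entirely.

resource "google_storage_bucket" "a_bucket" {
  name          = <name>
  location      = <region>
  project       = var.project_id
  storage_class = "STANDARD"

  uniform_bucket_level_access = true
}

resource "google_storage_bucket_iam_member" "member" {
  bucket = google_storage_bucket.a_bucket.name
  role   = "roles/storage.objectViewer"
  member = "allUsers"
}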
I'm using Terraform to deploy the following:
resource "google_project_iam_custom_role" "brw-user-function-item-registered-role" {
role_id = "brw_user_function_item_registered_role"
title = "brw-user-function-item-registered-role"
description = "Role used by the brw-user-function item-registered"
permissions = [
"storage.objects.create",
"storage.objects.get",
"storage.objects.list"
]
}
resource "google_service_account" "brw-user-function-item-registered-service-account" {
account_id = "brw-user-function-item-reg-svc"
display_name = "brw-user-function-item-registered-service_account"
}
resource "google_project_iam_member" "brw-user-function-item-registered-service-account-binding" {
project = local.project
role = "roles/${google_project_iam_custom_role.brw-user-function-item-registered-role.role_id}"
member = "serviceAccount:${google_service_account.brw-user-function-item-registered-service-account.email}"
depends_on = [
google_project_iam_custom_role.brw-user-function-item-registered-role,
google_service_account.brw-user-function-item-registered-service-account
]
}
However, when I try to deploy this through Terraform, I get the following error:
Request "Create IAM Members roles/brw_user_function_item_registered_role serviceAccount:brw-user-function-item-reg-svc#brw-user.iam.gserviceaccount.com for \"project \\\"BRW-User\\\"\"" returned error: Error retrieving IAM policy for project "BRW-User": googleapi: Error 400: Request contains an invalid argument., badRequest
I'm not sure what is wrong here. I have added the depends_on as well, just to make sure everything is created in the correct order. Could the member attribute be wrong? I tried giving the account_id as well, and I still get the same error.
Only predefined roles have the string roles/ in front of the name.
You are using the string:
role = "roles/${google_project_iam_custom_role.brw-user-function-item-registered-role.role_id}"
Change it to:
role = google_project_iam_custom_role.brw-user-function-item-registered-role.name
Note the removal of roles/, changing role_id to name, and removing string interpolation.
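Put together, the corrected binding would look like this (a sketch reusing the names from the question):

resource "google_project_iam_member" "brw-user-function-item-registered-service-account-binding" {
  project = local.project
  role    = google_project_iam_custom_role.brw-user-function-item-registered-role.name
  member  = "serviceAccount:${google_service_account.brw-user-function-item-registered-service-account.email}"
}

The explicit depends_on is also redundant here, since the attribute references already imply both dependencies.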
In the resource google_project_iam_member, if you are passing a custom role it must be of the format:
[projects|organizations]/{parent-name}/roles/{role-name}
Here is an example:
resource "google_project_iam_member" "access" {
project = var.project_name
role = "projects/${var.project_name}/roles/${google_project_iam_custom_role.customer_access.role_id}"
member = "serviceAccount:${google_service_account.service_account.email}"
}
Also, as a best practice, avoid dashes in resource names (prefer underscores) and try not to make them too long; I've run into issues with long names.
I have an S3 bucket which is used as an access logging bucket.
Here is my current module and resource TF code for that:
module "access_logging_bucket" {
source = "../../resources/s3_bucket"
environment = "${var.environment}"
region = "${var.region}"
acl = "log-delivery-write"
encryption_key_alias = "alias/ab-data-key"
name = "access-logging"
name_tag = "Access logging bucket"
}
resource "aws_s3_bucket" "default" {
bucket = "ab-${var.environment}-${var.name}-${random_id.bucket_suffix.hex}"
acl = "${var.acl}"
depends_on = [data.template_file.dependencies]
tags = {
name = "${var.name_tag}"
. . .
}
lifecycle {
ignore_changes = [ "server_side_encryption_configuration" ]
}
}
In my case the acl variable defaults to private (variable "acl" { default = "private" }), which is also the default stated in the Terraform S3 bucket attribute reference doc.
For this bucket it is set to log-delivery-write.
I want to update it to add the following grants and remove acl, since the two conflict with each other:
grant {
  permissions = ["READ_ACP", "WRITE"]
  type        = "Group"
  uri         = "http://acs.amazonaws.com/groups/s3/LogDelivery"
}

grant {
  id          = data.aws_canonical_user_id.current.id
  permissions = ["FULL_CONTROL"]
  type        = "CanonicalUser"
}
My questions are:
Does removing the acl attribute and adding the above-mentioned grants still maintain the correct access control for the bucket? That is, is that grant configuration still good enough for this to serve as an access logging bucket?
If I remove acl from the resource config, the bucket will become private, which is the default value. Is that the correct thing to do, or should it be set to null or something else?
On checking some documentation for the Log Delivery group, I found the following, which leads me to think I can go ahead and replace the acl with the grants I mentioned:
Log Delivery group – Represented by http://acs.amazonaws.com/groups/s3/LogDelivery. WRITE permission on a bucket enables this group to write server access logs (see Amazon S3 server access logging) to the bucket. When using ACLs, a grantee can be an AWS account or one of the predefined Amazon S3 groups.
Based on the grant-log-delivery-permissions-general documentation, I went ahead and ran terraform apply.
On the first run it set the bucket owner permission correctly but removed the S3 log delivery group. So I ran terraform plan again, and it showed the acl grant differences below. I suspect it first updated the acl value, which removed the grant for the log delivery group.
I then re-ran terraform apply, and it worked fine and restored the log delivery group as well. Here is the plan from the second run:
# module.buckets.module.access_logging_bucket.aws_s3_bucket.default will be updated in-place
~ resource "aws_s3_bucket" "default" {
      acl           = "private"
      bucket        = "ml-mxs-stage-access-logging-9d8e94ff"
      force_destroy = false
      . . .
      tags          = {
          "name" = "Access logging bucket"
          . . .
      }

    + grant {
        + permissions = [
            + "READ_ACP",
            + "WRITE",
          ]
        + type        = "Group"
        + uri         = "http://acs.amazonaws.com/groups/s3/LogDelivery"
      }
    + grant {
        + id          = "ID_VALUE"
        + permissions = [
            + "FULL_CONTROL",
          ]
        + type        = "CanonicalUser"
      }
      . . .
  }

Plan: 0 to add, 1 to change, 0 to destroy.
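For reference, after dropping acl in favor of the grants, the resource ends up looking roughly like this (a sketch; the tags, lifecycle, and other settings from the original config are elided):

resource "aws_s3_bucket" "default" {
  bucket = "ab-${var.environment}-${var.name}-${random_id.bucket_suffix.hex}"

  grant {
    permissions = ["READ_ACP", "WRITE"]
    type        = "Group"
    uri         = "http://acs.amazonaws.com/groups/s3/LogDelivery"
  }

  grant {
    id          = data.aws_canonical_user_id.current.id
    permissions = ["FULL_CONTROL"]
    type        = "CanonicalUser"
  }
}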
New to Terraform. I'm trying to apply a lifecycle rule to an existing S3 bucket declared as a data source, but I guess I can't do that with a data source; it throws an error. Here's the gist of what I'm trying to achieve:
data "aws_s3_bucket" "test-bucket" {
bucket = "bucket_name"
lifecycle_rule {
id = "Expiration Rule"
enabled = true
prefix = "reports/"
expiration {
days = 30
}
}
}
...and if this were a resource, not a data source, it would work. How can I apply a lifecycle rule to an S3 bucket declared as a data source? Google-fu has yielded little in the way of results. Thanks!
The best way to solve this is to import your bucket into the Terraform state instead of using it as a data source.
To do that, put this in your Terraform code:
resource "aws_s3_bucket" "test-bucket" {
bucket = "bucket_name"
lifecycle_rule {
id = "Expiration Rule"
enabled = true
prefix = "reports/"
expiration {
days = 30
}
}
}
Then run in the terminal:
terraform import aws_s3_bucket.test-bucket bucket_name
This will import the bucket into your state, and you can now make changes or add new things to the bucket using Terraform.
As a last step, just run terraform apply and the lifecycle rule will be added.
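If you want to sanity-check the import before applying, you can inspect the attributes Terraform recorded for the bucket (same resource address as above):
terraform state show aws_s3_bucket.test-bucket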
I am trying to create a very simple structure on GCP using Terraform: a compute instance plus a storage bucket. I did some research across the GCP documentation, the Terraform documentation, and SO questions, and I still can't understand what the trick is here. There is one suggestion to use google_project_iam_binding, but reading through some articles it seems to be dangerous (read: an insecure solution). There's also a general answer with only GCP descriptions, not using Terraform terms, which is still a bit confusing. And, concluding from the similar question here, I confirm that the domain name ownership was verified via the Google Console.
So, I ended up with the following:
data "google_iam_policy" "admin" {
binding {
role = "roles/iam.serviceAccountUser"
members = [
"user:myemail#domain.name",
"serviceAccount:${google_service_account.serviceaccount.email}",
]
}
}
resource "google_service_account" "serviceaccount" {
account_id = "sa-1"
}
resource "google_service_account_iam_policy" "admin-acc-iam" {
service_account_id = google_service_account.serviceaccount.name
policy_data = data.google_iam_policy.admin.policy_data
}
resource "google_storage_bucket_iam_policy" "policy" {
bucket = google_storage_bucket.storage_bucket.name
policy_data = data.google_iam_policy.admin.policy_data
}
resource "google_compute_network" "vpc_network" {
name = "vpc-network"
auto_create_subnetworks = "true"
}
resource "google_compute_instance" "instance_1" {
name = "instance-1"
machine_type = "f1-micro"
boot_disk {
initialize_params {
image = "cos-cloud/cos-stable"
}
}
network_interface {
network = google_compute_network.vpc_network.self_link
access_config {
}
}
}
resource "google_storage_bucket" "storage_bucket" {
name = "bucket-1"
location = "US"
force_destroy = true
website {
main_page_suffix = "index.html"
not_found_page = "404.html"
}
cors {
origin = ["http://the.domain.name"]
method = ["GET", "HEAD", "PUT", "POST", "DELETE"]
response_header = ["*"]
max_age_seconds = 3600
}
}
but if I terraform apply, the logs show me errors like these:
Error: Error setting IAM policy for service account 'trololo': googleapi: Error 403: Permission iam.serviceAccounts.setIamPolicy is required to perform this operation on service account trololo., forbidden

  on main.tf line 35, in resource "google_service_account_iam_policy" "admin-acc-iam":
  35: resource "google_service_account_iam_policy" "admin-acc-iam" {

Error: googleapi: Error 403: The bucket you tried to create is a domain name owned by another user., forbidden

  on main.tf line 82, in resource "google_storage_bucket" "storage_bucket":
followed by some useless debug info. What's wrong? Which account is missing which permissions, and how do I assign them securely?
I found the problem. As always, in 90% of cases the issue is sitting in front of the computer.
Here are the steps that helped me understand and resolve the problem:
I read a few more articles, and especially this and this answer were very helpful for understanding the relations between users, service accounts, and permissions.
I understood that running terraform destroy is also very important, since there is no rollback of an unsuccessful deploy of new infrastructure changes (unlike DB migrations, for example); you have to clean up either with destroy or manually.
I completely removed the "user:${var.admin_email}" account from the IAM policy, since it is useless; everything has to be managed by the newly created service account.
I left the main service account with most permissions untouched (the one which was created manually and whose access key I downloaded), since Terraform uses its credentials.
And I changed the IAM policy for the new service account to roles/iam.serviceAccountAdmin instead of roles/iam.serviceAccountUser; thanks @Wojtek_B for the hint.
After this everything works smoothly!
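For anyone landing here later, the last two changes amount to a service-account policy roughly like this. This is a sketch based on the config above; who belongs in members depends on who should administer the account:

data "google_iam_policy" "admin" {
  binding {
    role = "roles/iam.serviceAccountAdmin"
    members = [
      "serviceAccount:${google_service_account.serviceaccount.email}",
    ]
  }
}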