Can I grant a service account access to multiple buckets in a single policy? - google-cloud-platform

I'm coming from AWS and still learning how IAM/Policies work in GCP. In AWS, if I wanted to grant a role access to multiple buckets I would do something like this in terraform:
data "aws_iam_policy_document" "policy" {
statement {
actions = [
"s3:Get*"
]
resources = [
"${var.bucket1_arn}/*",
"${var.bucket2_arn}/*",
"${var.bucket3_arn}/*",
]
}
}
resource "aws_iam_policy" "policy" {
name = "my-policy"
policy = data.aws_iam_policy_document.policy.json
}
resource "aws_iam_role_policy_attachment" "policy_attachment" {
policy_arn = aws_iam_policy.policy.arn
role = ${var.role_name}
}
I've been trying to figure out how to do it in GCP, but all I've found so far is that I need to attach a policy to each bucket individually, like so:
data "google_iam_policy" "policy" {
binding {
role = "roles/storage.objectViewer"
members = [
"serviceAccount:${service_account}",
]
}
}
resource "google_storage_bucket_iam_policy" "bucket_1" {
bucket = google_storage_bucket.bucket_1.name
policy_data = data.google_iam_policy.policy.policy_data
}
resource "google_storage_bucket_iam_policy" "bucket_2" {
bucket = google_storage_bucket.bucket_2.name
policy_data = data.google_iam_policy.policy.policy_data
}
resource "google_storage_bucket_iam_policy" "bucket_3" {
bucket = google_storage_bucket.bucket_3.name
policy_data = data.google_iam_policy.policy.policy_data
}
Is this the correct way (or best practice?) to grant a service account access to multiple buckets?

Yes. Google IAM is resource-centric (my understanding is that AWS flips this and is identity-centric): you apply policies to resources.
Because the container (i.e. a Project) may contain many Buckets, your only alternative is to apply the binding to the Project itself, but then every Bucket in the Project will have the binding.
The approach you're taking yields precision (only the buckets granted the role have it), albeit at the cost of a slightly onerous role-binding phase (something done infrequently).
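If the three near-identical resources bother you, here's a minimal sketch (reusing the question's resource names) that generates one binding per bucket with for_each. Note it uses google_storage_bucket_iam_member, which adds a member to each bucket's policy instead of replacing the whole policy the way google_storage_bucket_iam_policy does:
resource "google_storage_bucket_iam_member" "viewer" {
  # One instance per bucket name; adding a bucket is a one-line change.
  for_each = toset([
    google_storage_bucket.bucket_1.name,
    google_storage_bucket.bucket_2.name,
    google_storage_bucket.bucket_3.name,
  ])

  bucket = each.key
  role   = "roles/storage.objectViewer"
  member = "serviceAccount:${var.service_account}"
}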

DazWilkin's answer is right, but on GCP you can cheat. You can use IAM Conditions and build something like this:
Grant the account (service or user) the role at the folder or organisation level, so that it has access to all the resources below it; for example, grant roles/storage.admin.
Use a condition to enforce the role on only a subset of buckets.
Like so:
resource "google_organization_iam_binding" "Binding" {
members = ["<ACCOUNT_EMAIL>"]
org_id = "<YOUR_ORG_ID>"
role = "roldes/storage.admin"
condition {
expression = 'resource.name.startsWith("projects/_/buckets/<BUCKET1>") || resource.name.startsWith("projects/_/buckets/<BUCKET2>")'
title = "bucket filter"
}
}
It's not so clean, especially to update when you have new buckets to add to the list, but it's a workaround for your problem.
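If you want the bucket list to be easier to maintain, here's a sketch (the allowed_buckets variable is my own invention) that builds the condition expression from a list with a for expression, so adding a bucket really is a one-line change:
variable "allowed_buckets" {
  type    = list(string)
  default = ["bucket-1", "bucket-2"]
}

resource "google_organization_iam_binding" "binding" {
  org_id  = "<YOUR_ORG_ID>"
  role    = "roles/storage.admin"
  members = ["serviceAccount:<ACCOUNT_EMAIL>"]

  condition {
    title = "bucket filter"
    # Builds 'startsWith(...) || startsWith(...)' from the list above.
    expression = join(" || ", [
      for b in var.allowed_buckets :
      "resource.name.startsWith(\"projects/_/buckets/${b}\")"
    ])
  }
}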

Related

How can I add multiple inline policy on AWS permission set using terraform?

I've created 2 policies and tried to attach them as inline policies on an AWS SSO permission set. However, only one of the policies gets applied. How can I apply both policies as inline policies on the SSO permission set?
resource "aws_iam_policy" "DenyAccess_nonUSRegions" {
name = "DenyAccess_nonUSRegions"
description = "DenyAccess_nonUSRegions"
policy = data.aws_iam_policy_document.DenyAccess_nonUSRegions.json
}
resource "aws_iam_policy" "role" {
name = "Deny_Specific_IAM_Actions"
description = "Deny_Specific_IAM_Actions"
policy = data.aws_iam_policy_document.Deny_Specific_IAM_Actions.json
}
resource "aws_ssoadmin_permission_set_inline_policy" "role" {
inline_policy = data.aws_iam_policy_document.Deny_Specific_IAM_Actions.json
instance_arn = aws_ssoadmin_permission_set.permission.instance_arn
permission_set_arn = aws_ssoadmin_permission_set.permission.arn
}
resource "aws_ssoadmin_permission_set_inline_policy" "DenyAccess_nonUSRegions" {
inline_policy = data.aws_iam_policy_document.DenyAccess_nonUSRegions.json
instance_arn = aws_ssoadmin_permission_set.permission.instance_arn
permission_set_arn = aws_ssoadmin_permission_set.permission.arn
}
To apply both policies as inline policies on an AWS SSO permission set, you can use the aws_ssoadmin_permission_set_inline_policy resource to create two separate inline policies, one for each of your existing policies.
For example, create the first inline policy with the aws_ssoadmin_permission_set_inline_policy resource, referencing the DenyAccess_nonUSRegions policy you created:
resource "aws_ssoadmin_permission_set_inline_policy"
"DenyAccess_nonUSRegions" {
inline_policy =
data.aws_iam_policy_document.DenyAccess_nonUSRegions.json
instance_arn =
aws_ssoadmin_permission_set.permission.instance_arn
permission_set_arn = aws_ssoadmin_permission_set.permission.arn
}
Then create the second inline policy, again with the aws_ssoadmin_permission_set_inline_policy resource, referencing the Deny_Specific_IAM_Actions policy:
resource "aws_ssoadmin_permission_set_inline_policy" "role" {
inline_policy =
data.aws_iam_policy_document.Deny_Specific_IAM_Actions.json
instance_arn =
aws_ssoadmin_permission_set.permission.instance_arn
permission_set_arn = aws_ssoadmin_permission_set.permission.arn
}
Note that you should use a different name for each aws_ssoadmin_permission_set_inline_policy resource, as names must be unique within the same permission set.
With these two inline policies in place, both of your existing policies will be applied to the SSO permission set, and users assigned to that permission set will be subject to the restrictions defined in both.
You can have only one inline policy per permission set. So in your case the two resources overwrite each other, and you end up with only one. Either create a single inline policy combining the two that you have, or create two managed policies (not inline) and attach those.
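For example, a minimal sketch of the combining approach (assuming AWS provider v4 or later, which adds source_policy_documents to aws_iam_policy_document):
data "aws_iam_policy_document" "combined" {
  # Merges both documents into a single policy document.
  source_policy_documents = [
    data.aws_iam_policy_document.Deny_Specific_IAM_Actions.json,
    data.aws_iam_policy_document.DenyAccess_nonUSRegions.json,
  ]
}

resource "aws_ssoadmin_permission_set_inline_policy" "combined" {
  inline_policy      = data.aws_iam_policy_document.combined.json
  instance_arn       = aws_ssoadmin_permission_set.permission.instance_arn
  permission_set_arn = aws_ssoadmin_permission_set.permission.arn
}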

How to create public google bucket with uniform_bucket_level_access enabled?

I want to create a publicly accessible Google Cloud Storage bucket with uniform_bucket_level_access enabled using Terraform. None of the examples in the provider's docs for a public bucket include this setting.
When I try to use:
resource "google_storage_bucket_access_control" "public_rule" {
bucket = google_storage_bucket.a_bucket.name
role = "READER"
entity = "allUsers"
}
resource "google_storage_bucket" "a_bucket" {
name = <name>
location = <region>
project = var.project_id
storage_class = "STANDARD"
uniform_bucket_level_access = true
versioning {
enabled = false
}
}
I get the following error:
Error: Error creating BucketAccessControl: googleapi: Error 400: Cannot use ACL API to update bucket policy when uniform bucket-level access is enabled. Read more at https://cloud.google.com/storage/docs/uniform-bucket-level-access, invalid
If I remove the line for uniform access everything works as expected.
Do I have to use a google_storage_bucket_iam resource to achieve this?
You will have to use google_storage_bucket_iam. I like to use the member variant so I don't accidentally clobber other IAM bindings, but use whichever your needs dictate.
resource "google_storage_bucket_iam_member" "member" {
bucket = google_storage_bucket.a_bucket.name
role = "roles/storage.objectViewer"
member = "allUsers"
}
EDIT: Use this instead of the google_storage_bucket_access_control resource that you have.
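Putting it together, a sketch of the corrected configuration (placeholders kept as in the question):
resource "google_storage_bucket" "a_bucket" {
  name                        = <name>
  location                    = <region>
  project                     = var.project_id
  storage_class               = "STANDARD"
  uniform_bucket_level_access = true

  versioning {
    enabled = false
  }
}

# Replaces google_storage_bucket_access_control, which drives the ACL API
# and is rejected once uniform bucket-level access is enabled.
resource "google_storage_bucket_iam_member" "member" {
  bucket = google_storage_bucket.a_bucket.name
  role   = "roles/storage.objectViewer"
  member = "allUsers"
}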

Cloud Function always using default service account (Terraform)

I am creating a Cloud Function resource with Terraform and wanted to override the default service account '@appspot.gserviceaccount.com' with a custom service account that has least privileges.
I've done the following, but once my Terraform resources are created and I check the Cloud Function's permissions tab, it still defaults to the original '@appspot.gserviceaccount.com':
resource "google_service_account" "service_account" {
account_id = "mysa"
display_name = "Service Account"
}
data "google_iam_policy" "cfunction_iam" {
binding {
role = google_project_iam_custom_role.cfunction_role.id
members = [
"serviceAccount:${google_service_account.service_account.email}",
]
}
binding {
role = "roles/cloudfunctions.developer"
members = [
"serviceAccount:${google_service_account.service_account.email}",
]
}
}
resource "google_cloudfunctions_function_iam_policy" "policy" {
project = google_cloudfunctions_function.function.project
region = google_cloudfunctions_function.function.region
cloud_function = google_cloudfunctions_function.function.name
policy_data = data.google_iam_policy.cfunction_iam.policy_data
}
resource "google_project_iam_custom_role" "cfunction_role" {
role_id = "customCFunctionRole"
title = "Custom Cloud Function Role"
description = "More granular permissions other than default #appspot SA"
permissions = [
"storage.objects.create",
"storage.multipartUploads.create",
"storage.objects.get",
"bigquery.tables.create",
"bigquery.tables.list",
"bigquery.tables.updateData",
"logging.logEntries.create",
]
}
Update: I've set the service account parameter within the Cloud Function resource as well:
service_account_email = google_service_account.service_account.email
What am I missing here?
Thanks!
Adding my own answer here: after deleting the previous state and letting Terraform re-create all the resources, it picked up the correct service account as defined in the question.
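For reference, a minimal sketch of where the parameter belongs (the other arguments here are placeholders, not from the question); the identity a function runs as is set on the function resource itself, not through an IAM policy on the function:
resource "google_cloudfunctions_function" "function" {
  name                  = "my-function"
  runtime               = "python39"
  entry_point           = "handler"
  source_archive_bucket = google_storage_bucket.source.name
  source_archive_object = google_storage_bucket_object.archive.name
  trigger_http          = true

  # Overrides the default @appspot.gserviceaccount.com identity.
  service_account_email = google_service_account.service_account.email
}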

Terraform - Updating S3 Access Control: Question on replacing acl with grant

I have an S3 bucket which is used as Access logging bucket.
Here is my current module and resource TF code for that:
module "access_logging_bucket" {
source = "../../resources/s3_bucket"
environment = "${var.environment}"
region = "${var.region}"
acl = "log-delivery-write"
encryption_key_alias = "alias/ab-data-key"
name = "access-logging"
name_tag = "Access logging bucket"
}
resource "aws_s3_bucket" "default" {
bucket = "ab-${var.environment}-${var.name}-${random_id.bucket_suffix.hex}"
acl = "${var.acl}"
depends_on = [data.template_file.dependencies]
tags = {
name = "${var.name_tag}"
. . .
}
lifecycle {
ignore_changes = [ "server_side_encryption_configuration" ]
}
}
The default value of the acl variable is variable "acl" { default = "private" } in my case, as also stated in the Terraform S3 bucket attribute reference doc; for this bucket it is set to log-delivery-write.
I want to update the bucket to add the following grants and remove acl, as the two conflict with each other:
grant {
  permissions = ["READ_ACP", "WRITE"]
  type        = "Group"
  uri         = "http://acs.amazonaws.com/groups/s3/LogDelivery"
}

grant {
  id          = data.aws_canonical_user_id.current.id
  permissions = ["FULL_CONTROL"]
  type        = "CanonicalUser"
}
My questions are:
Does removing the acl attribute and adding the grants above still maintain the correct access control for the bucket, i.e. is that grant configuration still suitable for an access logging bucket?
If I remove acl from the resource config, the bucket falls back to private, the default value. Is that the correct thing to do, or should it be set to null or something?
On checking some documentation for the Log Delivery group, I found the following, which leads me to think I can go ahead with replacing the acl with the grants I mentioned:
Log Delivery group – Represented by http://acs.amazonaws.com/groups/s3/LogDelivery. WRITE permission on a bucket enables this group to write server access logs (see Amazon S3 server access logging) to the bucket. When using ACLs, a grantee can be an AWS account or one of the predefined Amazon S3 groups.
Based on the grant-log-delivery-permissions-general documentation, I went ahead and ran terraform apply.
On the first run it set the bucket owner permission correctly but removed the S3 log delivery group. So I ran terraform plan again, and it showed the acl/grant differences below; most likely the apply first updated the acl value, which removed the grant for the log delivery group.
I then re-ran terraform apply, and it worked fine and restored the log delivery group as well.
# module.buckets.module.access_logging_bucket.aws_s3_bucket.default will be updated in-place
~ resource "aws_s3_bucket" "default" {
      acl           = "private"
      bucket        = "ml-mxs-stage-access-logging-9d8e94ff"
      force_destroy = false
      . . .
      tags = {
          "name" = "Access logging bucket"
          . . .
      }

    + grant {
        + permissions = [
            + "READ_ACP",
            + "WRITE",
          ]
        + type = "Group"
        + uri  = "http://acs.amazonaws.com/groups/s3/LogDelivery"
      }
    + grant {
        + id          = "ID_VALUE"
        + permissions = [
            + "FULL_CONTROL",
          ]
        + type = "CanonicalUser"
      }
      . . .
  }

Plan: 0 to add, 1 to change, 0 to destroy.
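For completeness, a sketch of the final resource after the change (elisions kept as in the original):
resource "aws_s3_bucket" "default" {
  bucket     = "ab-${var.environment}-${var.name}-${random_id.bucket_suffix.hex}"
  depends_on = [data.template_file.dependencies]

  # Replaces acl = "log-delivery-write" with explicit grants.
  grant {
    permissions = ["READ_ACP", "WRITE"]
    type        = "Group"
    uri         = "http://acs.amazonaws.com/groups/s3/LogDelivery"
  }

  grant {
    id          = data.aws_canonical_user_id.current.id
    permissions = ["FULL_CONTROL"]
    type        = "CanonicalUser"
  }
  . . .
}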

Want to assign multiple Google cloud IAM roles to a service account via terraform

I want to assign multiple IAM roles to a single service account through Terraform. I prepared a TF file to do that, but it has an error: with a single role the assignment succeeds, but with multiple IAM roles it fails.
data "google_iam_policy" "auth1" {
binding {
role = "roles/cloudsql.admin"
members = [
"serviceAccount:${google_service_account.service_account_1.email}",
]
role = "roles/secretmanager.secretAccessor"
members = [
"serviceAccount:${google_service_account.service_account_1.email}",
]
role = "roles/datastore.owner"
members = [
"serviceAccount:${google_service_account.service_account_1.email}",
]
role = "roles/storage.admin"
members = [
"serviceAccount:${google_service_account.service_account_1.email}",
]
}
}
How can I assign multiple roles to a single service account?
I did something like this:
resource "google_project_iam_member" "member-role" {
  for_each = toset([
    "roles/cloudsql.admin",
    "roles/secretmanager.secretAccessor",
    "roles/datastore.owner",
    "roles/storage.admin",
  ])

  role    = each.key
  member  = "serviceAccount:${google_service_account.service_account_1.email}"
  project = var.my_project_id
}
According to the documentation:
Each document configuration must have one or more binding blocks, which each accept the following arguments: ....
You have to repeat the binding block, like this:
data "google_iam_policy" "auth1" {
binding {
role = "roles/cloudsql.admin"
members = [
"serviceAccount:${google_service_account.service_account_1.email}",
]
}
binding {
role = "roles/secretmanager.secretAccessor"
members = [
"serviceAccount:${google_service_account.service_account_1.email}",
]
}
binding {
role = "roles/datastore.owner"
members = [
"serviceAccount:${google_service_account.service_account_1.email}",
]
}
binding {
role = "roles/storage.admin"
members = [
"serviceAccount:${google_service_account.service_account_1.email}",
]
}
}
It's the same when you use the gcloud command: you can add only one role at a time to a list of members.
I can't comment or upvote yet, so here's another answer, but @intotecho is right.
I'd say do not create a policy with Terraform unless you really know what you're doing! In GCP, there's only one IAM policy allowed per project. If you apply that policy, only the service accounts in it will have access, no humans. :) Even though we don't want humans to do human things, it's helpful to at least have view access to the GCP project you own.
Especially if you use the model where multiple Terraform workspaces perform IAM operations on the project. If you use policies, it will be similar to how wine is made: a stomping party! The most recently applied policy wins (if the service account Terraform is using is included in that policy; otherwise it will lock itself out!).
It's possible humans get an inherited viewer role from a folder or the org itself, but assigning multiple roles using google_project_iam_member is a much, much better way, and it's how 95% of permissions are done with TF in GCP.