google storage transfer service account does not exist in new project

I am trying to create resources using Terraform in a new GCP project. As part of that I want to grant roles/storage.legacyBucketWriter on a specific bucket to the Google-managed service account which runs Storage Transfer Service jobs (the pattern is project-[project-number]@storage-transfer-service.iam.gserviceaccount.com). I am using the following config:
resource "google_storage_bucket_iam_binding" "publisher_bucket_binding" {
bucket = "${google_storage_bucket.bucket.name}"
members = ["serviceAccount:project-${var.project_number}#storage-transfer-service.iam.gserviceaccount.com"]
role = "roles/storage.legacyBucketWriter"
}
To clarify, I want to do this so that, when I create one-off transfer jobs using the JSON API, they don't fail the prerequisite checks.
When I run terraform apply, I get the following:
Error applying IAM policy for Storage Bucket "bucket":
Error setting IAM policy for Storage Bucket "bucket": googleapi:
Error 400: Invalid argument, invalid
I think this is because the service account in question does not exist yet, as I cannot do this via the console either.
Is there any other service that I need to enable for the service account to be created?

It seems I am able to create/find the service account once I call
https://cloud.google.com/storage/transfer/reference/rest/v1/googleServiceAccounts/get
for my project to get the email address.
Not sure if this is the best way, but it works.

Soroosh's reply is accurate: querying the API documented at https://cloud.google.com/storage-transfer/docs/reference/rest/v1/googleServiceAccounts/ will provision the service account and Terraform will run, but then you have to make that API call as part of your Terraform setup, and ain't nobody got time for that.
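If you want to keep this inside Terraform, a minimal sketch along these lines may help; it assumes the Google provider's google_storage_transfer_project_service_account data source, which performs the same googleServiceAccounts.get lookup, and a var.project_id variable holding the project ID:

data "google_storage_transfer_project_service_account" "default" {
  # Looking up the Storage Transfer Service agent triggers its creation, as described above
  project = var.project_id
}

resource "google_storage_bucket_iam_binding" "publisher_bucket_binding" {
  bucket = google_storage_bucket.bucket.name
  role   = "roles/storage.legacyBucketWriter"
  # The email comes from the data source instead of being hand-built from the project number
  members = ["serviceAccount:${data.google_storage_transfer_project_service_account.default.email}"]
}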

Ensure Google service accounts

In Terraform I enable services like so:
resource "google_project_service" "apigateway" {
service = "apigateway.googleapis.com"
}
Afterwards I make sure that I reference the apigateway service account (service-123@gcp-sa-apigateway.iam.gserviceaccount.com) only after that resource has been created.
Now it does happen sometimes that, when using the email of that service account, I get an error that the service account is not present:
Error 400: Service account service-123@gcp-sa-apigateway.iam.gserviceaccount.com does not exist.
I double-checked in the API Explorer that the API is enabled!
This happens for apigateway in the same way as for other services (e.g. cloudfunctions).
So I am wondering how do I ensure that the service account is created?
Naively I assumed creating google_project_service should do the trick, but that does not seem to be true in every case. Documentation around these Google-managed service accounts seems pretty sparse :(
As John Hanley remarks, you can create this dependency in terraform with depends_on.
As you can see in the following example, the service account will be created first, and the key will not be created until that first resource is done.
resource "google_service_account" "service_account" {
account_id = "terraform-test"
display_name = "Service Account"
}
resource "google_service_account_key" "mykey" {
service_account_id = google_service_account.service_account.id
public_key_type = "TYPE_X509_PEM_FILE"
depends_on = [google_service_account.service_account]
}
Also, if the service account already exists in GCP, only the key resource is executed.
It is important to note that the account you are using for this configuration needs the required IAM permissions to create service accounts.
Found out about google_project_service_identity.
So, since I saw this problem with cloudfunctions, you could create a google_project_service_identity for cloudfunctions and hope for a more detailed error message.
Sadly this is not available for every service, e.g. apigateway.
For apigateway specifically, Google Support confirmed that the undocumented behavior is that the service account gets created lazily when the first resource is created.
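Where the identity resource is available, a minimal sketch looks like this; it assumes the google-beta provider is configured and uses var.project_id as a placeholder:

# google_project_service_identity forces creation of the per-service service agent
# (only available in the google-beta provider, and not for every service)
resource "google_project_service_identity" "cloudfunctions" {
  provider = google-beta
  project  = var.project_id
  service  = "cloudfunctions.googleapis.com"
}

The resource exposes an email attribute, so later IAM bindings can reference google_project_service_identity.cloudfunctions.email instead of a hand-built service agent address.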

adding existing GCP service account to Terraform root module for cloudbuild to build Terraform configuration

Asking the community if it's possible to do the following (I had no luck in finding further information).
I am creating a CI/CD pipeline with GitHub/Cloud Build/Terraform. Cloud Build builds the Terraform configuration upon a GitHub pull request and merge to a new branch. However, I want the Cloud Build (default) service account to be used with least privilege.
The question is: I would like Terraform to pull its permissions from an existing service account with least privilege (to prevent any exploits, etc.) once the Cloud Build trigger fires and initializes the Terraform configuration. In other words, Terraform would use an existing external service account to obtain the permissions it needs for the build.
I tried creating a service account and binding roles to that service account, but I get an error
stating that the service account already exists.
My next step is to use a module, but I think this is also going to create a new service account with replicated roles.
If this is confusing I do apologize; I will help in refining the question to be more concise.
You have 2 solutions:
Use the Cloud Build service account when you execute your Terraform. Your provider looks like this:
provider "google" {
// Useless with Cloud Build
// credentials = file("${var.CREDENTIAL_FILE}}")
project = var.PROJECT_ID
region = "europe-west1"
}
But this solution implies granting several roles to Cloud Build only for the Terraform process. A custom role is a good choice for granting only what is required; a sketch of that follows.
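A minimal sketch of that custom-role approach, reusing var.PROJECT_ID from the provider block and assuming a var.PROJECT_NUMBER variable for the default Cloud Build service account; the permission list is only an illustration and depends on what your Terraform configuration manages:

resource "google_project_iam_custom_role" "terraform_deployer" {
  role_id = "terraformDeployer"
  title   = "Terraform Deployer"
  # Illustrative permissions only: list exactly what your Terraform resources need
  permissions = [
    "storage.buckets.get",
    "storage.objects.get",
    "storage.objects.list",
  ]
}

resource "google_project_iam_member" "cloudbuild_terraform" {
  project = var.PROJECT_ID
  role    = google_project_iam_custom_role.terraform_deployer.id
  member  = "serviceAccount:${var.PROJECT_NUMBER}@cloudbuild.gserviceaccount.com"
}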
The second solution is to use a service account key file. Here again, 2 options:
Cloud Build creates the service account, grants all the roles on it, generates a key, and passes it to Terraform. After the Terraform execution, the service account is deleted by Cloud Build. A good solution, but you have to grant the Cloud Build service account the capability to grant any role to itself and to generate a JSON key file. That's a lot of responsibility!
Use an existing service account and a key generated on it. But you have to secure the key and rotate it regularly. I recommend storing it securely in Secret Manager, but the rotation you have to manage yourself, today. With this process, Cloud Build downloads the key (from Secret Manager) and passes it to Terraform. Here again, the Cloud Build service account has the right to access secrets, which is a critical privilege. The step in Cloud Build is something like this:
steps:
  - name: gcr.io/cloud-builders/gcloud:latest
    entrypoint: "bash"
    args:
      - "-c"
      - |
        gcloud beta secrets versions access --secret=test-secret latest > my-secret-file.txt
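The downloaded key can then be handed to the Google provider; a minimal sketch, assuming the my-secret-file.txt name used in the step above:

provider "google" {
  # Key file written by the Cloud Build step above
  credentials = file("my-secret-file.txt")
  project     = var.PROJECT_ID
  region      = "europe-west1"
}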

Vault GCP Project Level Role Binding

I am trying to apply the role binding below to grant the Storage Admin Role to a GCP roleset in Vault.
resource "//cloudresourcemanager.googleapis.com/projects/{project_id_number}" {
roles = [
"roles/storage.admin"
]
}
I want to grant access at the project level, not on a specific bucket, so that the GCP roleset can read from and write to Google Container Registry.
When I try to create this roleset in Vault, I get this error:
Error writing data to gcp/roleset/my-roleset: Error making API request.
URL: PUT http://127.0.0.1:8200/v1/gcp/roleset/my-roleset
Code: 400. Errors:
* unable to set policy: googleapi: Error 403: The caller does not have permission
My Vault cluster is running in a GKE cluster which has OAuth scopes for all Cloud APIs, I am the project owner, and the service account Vault is using has the following roles:
Cloud KMS CryptoKey Encrypter/Decrypter
Service Account Actor
Service Account Admin
Service Account Key Admin
Service Account Token Creator
Logs Writer
Storage Admin
Storage Object Admin
I have tried giving the service account both Editor and Owner roles, and I still get the same error.
Firstly, am I using the correct resource to create a roleset for the Storage Admin Role at the project level?
Secondly, if so, what could be causing this permission error?
I had previously recreated the cluster and skipped this step:
vault write gcp/config credentials=@credentials.json
Adding the key file fixed this.
There is also a chance that following the steps to create a custom role here and adding that custom role played a part.
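If you manage the Vault configuration itself with Terraform, the same config and roleset can be expressed through the Terraform Vault provider; a minimal sketch, assuming its vault_gcp_secret_backend and vault_gcp_secret_roleset resources and var.project_id as a placeholder:

resource "vault_gcp_secret_backend" "gcp" {
  path        = "gcp"
  # Same key file as in the `vault write gcp/config` step above
  credentials = file("credentials.json")
}

resource "vault_gcp_secret_roleset" "my_roleset" {
  backend     = vault_gcp_secret_backend.gcp.path
  roleset     = "my-roleset"
  secret_type = "service_account_key"
  project     = var.project_id

  binding {
    resource = "//cloudresourcemanager.googleapis.com/projects/${var.project_id}"
    roles    = ["roles/storage.admin"]
  }
}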

GCP IAM roles for sonatype-nexus-community/nexus-blobstore-google-cloud

Trying to build sonatype-nexus-community/nexus-blobstore-google-cloud, but I cannot succeed without the Project Owner IAM role in GCP.
If I understand everything correctly, the Storage Admin IAM role should be sufficient, at least according to the documentation:
https://github.com/sonatype-nexus-community/nexus-blobstore-google-cloud
I also tried Storage Admin + Service Account User + Service Account Token Creator, but could not succeed either.
Integration test fails with a message:
org.sonatype.nexus.blobstore.api.BlobStoreException: BlobId: e0eb4ae2-f425-4598-aa42-fc03fb2e53b2, com.google.cloud.datastore.DatastoreException: Missing or insufficient permissions.
In detail, the integration test creates a blob store, then tries to delete and then undelete it, using two different methods:
def "undelete successfully makes blob accessible"
def "undelete does nothing when dry run is true"
This is where the issue starts. Execution fails on delete:
assert blobStore.delete(blob.id, 'testing')
It's another question how to undelete something in Google Cloud Storage, which does not support undelete, only versioning.
Here is what the documentation says about permissions:
Google Cloud Storage Permissions
Next, you will need to create an account with appropriate permissions.
Of the predefined account roles, Storage Admin will grant the plugin to create any Google Cloud Storage Buckets you require and administer all of the objects within, but it will also have access to manage any other Google Cloud Storage Buckets associated with the project.
If you are using custom roles, the account will need:
(required) storage.objects.*
(required) storage.buckets.get
or storage.buckets.*.
The Storage Admin IAM role covers both storage.objects.* and storage.buckets.*, so I am not sure what causes the issue.
References:
https://cloud.google.com/storage/docs/access-control/iam-roles
https://cloud.google.com/storage/docs/access-control/iam-json
The integration test fails at a blob storage delete attempt:
15:27:10.042 [main] DEBUG o.s.n.b.g.i.GoogleCloudBlobStore - Writing blob 2e22e0e9-1fef-4620-a66e-d672b75ef924 to content/vol-18/chap-33/2e22e0e9-1fef-4620-a66e-d672b75ef924.bytes
15:27:24.430 [main] DEBUG o.s.n.b.g.i.GoogleCloudBlobStore - Soft deleting blob 2e22e0e9-1fef-4620-a66e-d672b75ef924
at org.sonatype.nexus.blobstore.gcloud.internal.GoogleCloudBlobStoreIT.undelete successfully makes blob accessible(GoogleCloudBlobStoreIT.groovy:164)
Caused by: org.sonatype.nexus.blobstore.api.BlobStoreException: BlobId: 2e22e0e9-1fef-4620-a66e-d672b75ef924, com.google.cloud.datastore.DatastoreException: Missing or insufficient permissions., Cause: Missing or insufficient permissions.
at org.sonatype.nexus.blobstore.gcloud.internal.DeletedBlobIndex.add(DeletedBlobIndex.java:55)
at org.sonatype.nexus.blobstore.gcloud.internal.GoogleCloudBlobStore.delete(GoogleCloudBlobStore.java:276)
... 1 more
Could you please help me out if I am overlooking something?
A Datastore database needs to be created, and the Datastore Owner role needs to be added besides Storage Admin, Service Account User, and Service Account Token Creator.
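In Terraform terms, a minimal sketch of those project-level grants could look like this; var.project_id and var.nexus_sa_email are placeholders for your project and the service account the plugin authenticates as:

locals {
  # Roles listed in the answer above
  nexus_roles = [
    "roles/storage.admin",
    "roles/datastore.owner",
    "roles/iam.serviceAccountUser",
    "roles/iam.serviceAccountTokenCreator",
  ]
}

resource "google_project_iam_member" "nexus" {
  for_each = toset(local.nexus_roles)
  project  = var.project_id
  role     = each.value
  member   = "serviceAccount:${var.nexus_sa_email}"
}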

Why do I get access denied when trying to list the content of a bucket in a different project than my dataflow job?

I have two different projects, A and B. In A I have a Google Cloud Function running that triggers on messages on a Pub/Sub topic and creates a Dataflow job. This Dataflow job lists and reads the items from a specific bucket in B, and this is where my problem starts.
I have followed the instructions here: https://cloud.google.com/dataflow/security-and-permissions#accessing-cloud-storage-buckets-across-cloud-platform-projects regarding ACLs, and I can see that my project user has been added as OWNER to the bucket I try to read from.
The error message I get is:
403 Forbidden
{
  "code": 403,
  "errors": [ {
    "domain": "global",
    "message": "Caller does not have storage.objects.list access to bucket bucketName.",
    "reason": "forbidden"
  } ],
  "message": "Caller does not have storage.objects.list access to bucket bucketName."
}
Why doesn't the function have list access when project A has OWNER rights on the bucket in B? Does the Cloud Function run with a different set of credentials than those used in the linked tutorial?
If I trigger it manually from the CLI it works as expected, but then it probably uses my credentials, I guess.
There are 2 things here. Are you listing the files from the Cloud Function and then launching the Dataflow job on those files? If yes, please check that the user/service account under which your Cloud Function is triggered has the correct permissions on the bucket. If not, please make sure both service accounts of project A (cloudservices and compute engine, mentioned at https://cloud.google.com/dataflow/security-and-permissions#accessing-cloud-storage-buckets-across-cloud-platform-projects) have OWNER permissions on the bucket in project B; a sketch of that grant follows.
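A minimal sketch of that grant, assuming var.project_a_number and var.bucket_name_in_b as placeholders; the two members follow the standard email patterns of the Cloud Services and Compute Engine default service accounts, and roles/storage.legacyBucketOwner is the IAM counterpart of the bucket OWNER ACL:

resource "google_storage_bucket_iam_member" "dataflow_cloudservices" {
  bucket = var.bucket_name_in_b
  role   = "roles/storage.legacyBucketOwner"
  member = "serviceAccount:${var.project_a_number}@cloudservices.gserviceaccount.com"
}

resource "google_storage_bucket_iam_member" "dataflow_compute" {
  bucket = var.bucket_name_in_b
  role   = "roles/storage.legacyBucketOwner"
  member = "serviceAccount:${var.project_a_number}-compute@developer.gserviceaccount.com"
}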