I've created Service Account A and granted it the roles Service Account Admin and Service Account Key Admin. I did this work in the GCP Console.
Service Account A's function is to create other service accounts programmatically, using the GCP Java SDK. It successfully creates new service accounts, but when it tries to create a key for the newly created service account, I get the following response:
{
"code": 403,
"errors": [
{
"domain": "global",
"message": "Permission iam.serviceAccountKeys.create is required to perform this operation on service account projects/-/serviceAccounts/<new_service_account_name>#<project_id>.iam.gserviceaccount.com.",
"reason": "forbidden"
}
],
"message": "Permission iam.serviceAccountKeys.create is required to perform this operation on service account projects/-/serviceAccounts/<new_service_account_name>#<project_id>.iam.gserviceaccount.com.",
"status": "PERMISSION_DENIED"
}
I've tried waiting to see if perhaps I tried to create the key too soon after creating the service account, but waiting hours resulted in no change.
Service Account A can successfully create a key for itself, just not for other service accounts it creates.
How do I resolve this?
You have one of three problems:
Service Account A actually does not have the IAM role Service Account Key Admin in the project. Use the CLI command gcloud projects get-iam-policy and double-check.
Your code is using the wrong identity. You believe that you are using the service account, but instead another identity is being loaded by ADC (Application Default Credentials), or you made a mistake in your code.
You assigned the correct role, but on the service account instead of the project. Use the CLI command gcloud iam service-accounts get-iam-policy. If you find the role listed in the output, you assigned the role in the wrong place. Use the CLI command gcloud projects add-iam-policy-binding instead, as in the sketch below.
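For example, a quick CLI check and fix (a sketch; PROJECT_ID and sa-a are placeholders for your project and Service Account A):
# List the project-level roles held by Service Account A
gcloud projects get-iam-policy PROJECT_ID \
    --flatten="bindings[].members" \
    --filter="bindings.members:serviceAccount:sa-a@PROJECT_ID.iam.gserviceaccount.com" \
    --format="table(bindings.role)"
# Grant Service Account Key Admin at the project level if it is missing
gcloud projects add-iam-policy-binding PROJECT_ID \
    --member="serviceAccount:sa-a@PROJECT_ID.iam.gserviceaccount.com" \
    --role="roles/iam.serviceAccountKeyAdmin"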
Note: There is a fourth possibility that can prevent you from creating service account keys: organization policy constraints might be enabled:
Restricting service account usage
Related
So I'm pulling my hair out over this and reaching out here for help. I'm trying to set up a service account with Cloud Translation and Text-to-Speech enabled, but we keep getting this response:
[error] {
"message": "Cloud IAM permission 'cloudtranslate.generalModels.predict' denied. ",
"code": 7,
"status": "PERMISSION_DENIED",
"details": []
}
I have confirmed that the service account has the "cloudtranslate.generalModels.predict" permission and shows the "Cloud Translation API User" role. We've also confirmed that it works with a different service account that my colleague set up in his personal Google Console profile. But we need this set up with an account through our org.
I verified via the IAM Policy Troubleshooter that the service account has the permission, and my organization's admin sees that the service account is granted access through ancestor policies.
So what else can we check?
Edit: OK, it turned out we had a hard-coded value for the resource location, which was set to the wrong project. So of course it was coming back as permission denied.
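For anyone hitting the same thing: the project in the request path must match the project where the role was granted. A minimal sketch of a Translation v3 call (PROJECT_ID is a placeholder; double-check it is the project that holds the IAM binding):
curl -s -X POST \
    -H "Authorization: Bearer $(gcloud auth print-access-token)" \
    -H "Content-Type: application/json" \
    -d '{"contents": ["hello"], "targetLanguageCode": "es"}' \
    "https://translation.googleapis.com/v3/projects/PROJECT_ID/locations/global:translateText"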
A workflow fails to start due to a permission denied error when trying to impersonate a service account from a different project
given:
Projects:
project1
project2
Service Accounts:
sa1@project1 with roles:
Workflows Admin
Cloud Run Admin
Service Account Token Creator
Service Account User
sa2@project2
Workflows:
A workflow1 in project1 (creates a Cloud Run instance with serviceAccountName=sa2@project2)
Result:
{
"body": {
"error": {
"code": 403,
"message": "Permission 'iam.serviceaccounts.actAs' denied on service account sa2#project2 (or it may not exist).",
"status": "PERMISSION_DENIED"
}
},
"code": 403,
"headers": {
"Alt-Svc": "h3=\":443\"; ma=2592000,h3-29=\":443\"; ma=2592000,h3-Q050=\":443\"; ma=2592000,h3-Q046=\":443\"; ma=2592000,h3-Q043=\":443\"; ma=2592000,quic=\":443\"; ma=2592000; v=\"46,43\"",
"Cache-Control": "private",
"Content-Length": "244",
"Content-Type": "application/json; charset=UTF-8",
"Date": "Wed, 14 Sep 2022 10:53:24 GMT",
"Server": "ESF",
"Vary": "Origin",
"X-Content-Type-Options": "nosniff",
"X-Frame-Options": "SAMEORIGIN",
"X-Xss-Protection": "0"
},
"message": "HTTP server responded with error code 403",
"tags": [
"HttpError"
]
}
The error message "Permission 'iam.serviceaccounts.actAs' denied on service account sa2@project2" indicates that the caller needs permission to impersonate a service account in order to attach that service account to a resource. This means that the caller needs the iam.serviceAccounts.actAs permission on the service account.
There are several predefined roles that allow a principal to impersonate a service account:
Service Account User
Service Account Token Creator
Workload Identity User
Alternatively, you can grant a different predefined role, or a custom role, that includes permissions to impersonate service accounts.
Service Account User (roles/iam.serviceAccountUser): This role includes the iam.serviceAccounts.actAs permission, which allows principals to indirectly access all the resources that the service account can access. For example, if a principal has the Service Account User role on a service account, and the service account has the Cloud SQL Admin role (roles/cloudsql.admin) on the project, then the principal can impersonate the service account to create a Cloud SQL instance.
You can try granting the workflow's service account (sa1@project1) the Service Account User role on sa2@project2, since that is the account being attached to the Cloud Run instance.
Refer to the link for more information on impersonating service accounts.
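A sketch of that grant (assuming full service account emails of the form sa1@project1.iam.gserviceaccount.com):
gcloud iam service-accounts add-iam-policy-binding sa2@project2.iam.gserviceaccount.com \
    --member="serviceAccount:sa1@project1.iam.gserviceaccount.com" \
    --role="roles/iam.serviceAccountUser" \
    --project=project2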
My client is a huge corporation.
Therefore the project-level configuration to switch service accounts across projects is disabled (iam.disableCrossProjectServiceAccountUsage is enforced).
This is the root cause of my problem and I cannot change it.
More information is available here:
https://cloud.google.com/iam/docs/impersonating-service-accounts#attaching-different-project
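To confirm that this constraint is what is blocking you, you can describe the effective org policy on the project (a sketch; the command group may vary with your gcloud version):
gcloud resource-manager org-policies describe \
    iam.disableCrossProjectServiceAccountUsage --project=project1 --effective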
My workaround:
I needed this as it seemed the simplest way to access an external BigQuery dataset and project.
Solution:
Export a private key for sa2@project2 and pass it as a secret to the application layer.
Use the key file to impersonate the sa2@project2 service account.
example (using the sqlalchemy-bigquery dialect; the credentials_path argument pointing at the exported key is an assumption about your setup):
from sqlalchemy import create_engine
engine = create_engine('bigquery://project2', location="asia-northeast1", credentials_path='sa2-key.json')
I am getting the following error (please see below) when I run terraform apply.
I am running Terraform 12.x.
GCP Cloud Build runs in a different project than project-abcd (where these accounts are).
My Terraform code tries to execute a gcloud command in a GCP Cloud Build container. It does so by impersonating composer-bq-sa@prj-abcd.iam.gserviceaccount.com.
The service account that Terraform runs as is:
terraform_service_account = "org-terraform@abcd.iam.gserviceaccount.com"
(before impersonating)
This IAM account (org-terraform@abcd.iam.gserviceaccount.com) (NOT a service account) has the following role bindings (9 in total):
(There is no Service Account with that email)
Composer Administrator
Compute Network Admin
Service Account Token Creator
Owner
Access Context Manager Admin
Security Admin
Service Account Admin
Logs Configuration Writer
Security Center Notification Configurations Editor
The service account (composer-bq-sa@prj-abcd.iam.gserviceaccount.com) has as one of its members: org-terraform@abcd.iam.gserviceaccount.com.
When I look at the screen titled "Members with access to this service account" and look at org-terraform@abcd.iam.gserviceaccount.com, I see that it has the following role bindings (ONLY 4):
Service Account Token Creator
Owner
Security Admin
Service Account Admin
Why am I getting the error below even though the IAM account apparently has the right roles and is one of the members of the service account it is impersonating?
ERROR
module.gcloud_composer_bucket_env_var.null_resource.run_command[0] (local-exec): WARNING: This command
is using service account impersonation. All API calls will be executed as [composer-bq-sa@prj-abcd.iam.gserviceaccount.com].
module.gcloud_composer_bucket_env_var.null_resource.run_command[0] (local-exec): ERROR:
(gcloud.composer.environments.update) Failed to impersonate [composer-bq-sa@prj-abcd.iam.gserviceaccount.com]. Make sure the account that's trying to impersonate it has access to the service account itself and the "roles/iam.serviceAccountTokenCreator" role.
Recapping:
In order to grant a user permission to impersonate a service account, follow the instructions listed in this document.
Depending on the use case, you may grant the user one of the following roles:
roles/iam.serviceAccountUser
roles/iam.serviceAccountTokenCreator
roles/iam.workloadIdentityUser
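For the error above, that means granting the Terraform identity Token Creator on the target service account; a sketch using the accounts from the question:
gcloud iam service-accounts add-iam-policy-binding composer-bq-sa@prj-abcd.iam.gserviceaccount.com \
    --member="serviceAccount:org-terraform@abcd.iam.gserviceaccount.com" \
    --role="roles/iam.serviceAccountTokenCreator"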
I am trying to apply the role binding below to grant the Storage Admin Role to a GCP roleset in Vault.
resource "//cloudresourcemanager.googleapis.com/projects/{project_id_number}" {
  roles = [
    "roles/storage.admin"
  ]
}
I want to grant access at the project level, not on a specific bucket, so that the GCP roleset can read from and write to the Google Container Registry.
When I try to create this roleset in Vault, I get this error:
Error writing data to gcp/roleset/my-roleset: Error making API request.
URL: PUT http://127.0.0.1:8200/v1/gcp/roleset/my-roleset
Code: 400. Errors:
* unable to set policy: googleapi: Error 403: The caller does not have permission
My Vault cluster is running in a GKE cluster which has OAuth scopes for all Cloud APIs, I am the project owner, and the service account Vault is using has the following roles:
Cloud KMS CryptoKey Encrypter/Decrypter
Service Account Actor
Service Account Admin
Service Account Key Admin
Service Account Token Creator
Logs Writer
Storage Admin
Storage Object Admin
I have tried giving the service account both Editor and Owner roles, and I still get the same error.
Firstly, am I using the correct resource to create a roleset for the Storage Admin Role at the project level?
Secondly, if so, what could be causing this permission error?
I had previously recreated the cluster and skipped this step:
vault write gcp/config credentials=@credentials.json
Adding the key file fixed this.
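For reference, the working sequence looked roughly like this (a sketch; PROJECT_ID is a placeholder, and the bindings block mirrors the one from the question):
vault write gcp/config credentials=@credentials.json
vault write gcp/roleset/my-roleset \
    project="PROJECT_ID" \
    secret_type="service_account_key" \
    bindings=-<<EOF
resource "//cloudresourcemanager.googleapis.com/projects/PROJECT_ID" {
  roles = ["roles/storage.admin"]
}
EOF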
There is also a chance that following the steps here to create a custom role, and adding that custom role, played a part.
I've exported MySQL Database following the MySQL Export Guide successfully.
Now, I'm trying to import MySQL Database following the MySQL Import Guide.
I've checked the permissions for the service_account_email I'm using, and I have granted both the Cloud SQL Admin and Storage Admin roles.
I was able to successfully activate my service account using this command locally:
gcloud auth activate-service-account <service_account_email> --key-file=<service_account_json_file>
After I ran the command:
gcloud sql import sql <instance> <gstorage_file> --database=<db_name> --async
I got this information:
{
"error": {
"errors": Array[1][
{
"domain": "global",
"reason": "required",
"message": "Login Required",
"locationType": "header",
"location": "Authorization"
}
],
"code": 401,
"message": "Login Required"
}
}
Other Things I've Tried
I also tried using the service_account_email of my SQL instance, which came from:
gcloud sql instances describe <instance_name>
But it seems to produce the same error.
Question
Based on the REST API JSON error I'm given, how do I "login" using the service_account_email so I don't get the 401 error?
The problem is about the permission of the database instance's service account to write to the created bucket. Steps to solve this issue:
1) Go to your Cloud SQL instance and copy the instance's service account (Cloud SQL -> {instance name} -> OVERVIEW -> Service account).
2) After copying the service account, go to the Cloud Storage bucket you want to dump to and grant the desired permission to that account (Storage -> {bucket name} -> Permissions -> Add member).
The Cloud SQL instance is running under a Google service account that is not a part of your project. You will need to grant this service account permissions on the file in Cloud Storage that you want to import. Here is a handy bash snippet that will do that.
SA_NAME=$(gcloud sql instances describe YOUR_DB_INSTANCE_NAME --project=YOUR_PROJECT_ID --format="value(serviceAccountEmailAddress)")
gsutil acl ch -u "${SA_NAME}:R" gs://YOUR_BUCKET_NAME
gsutil acl ch -u "${SA_NAME}:R" gs://YOUR_BUCKET_NAME/whateverDirectory/fileToImport.sql
The first line gets the service account email address.
The next line gives this service account read permissions on the bucket.
The last line gives the service account read permissions on the file.
Google also has some of the worst error reporting around. If you get this error message, it might also be that you entered a path incorrectly. In my case it was the path to my bucket directory. Go figure: I don't have permission to access a bucket that doesn't exist. Technically correct, but hardly useful.
After performing some research, and based on the permission error, these are the steps that I find most useful for troubleshooting the issue:
To more easily test ACLs and permissions, you can:
Create and download a key for the service account in question
Use 'gcloud auth activate-service-account' to obtain credentials for the service account
Use gsutil as usual to see if you can access the object in question
You might need to grant an additional IAM role such as 'roles/storage.admin' to the service account in question; see more information here.
According to the Google docs:
Describe the instance you are importing to:
gcloud sql instances describe INSTANCE_NAME
Copy the serviceAccountEmailAddress field.
Use gsutil iam to grant the storage.objectAdmin IAM role to the service account for the bucket:
gsutil iam ch serviceAccount:SERVICE-ACCOUNT:objectAdmin gs://BUCKET-NAME
Then import the database:
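A sketch of that final step, using the same placeholders plus a hypothetical dump file name:
gcloud sql import sql INSTANCE_NAME gs://BUCKET-NAME/FILE-NAME.sql --database=DATABASE_NAME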