There are several ways my user can get privileges in a Google Cloud Platform project: direct role and permission assignment, acting as service accounts, or membership in different groups.
So given a GCP project, how can I list the active privileges for my user?
Usually, in GCP, these are called "Permissions". For ease of use, permissions are grouped into "Roles".
Each user can have different roles in your project. To get a full list of the accounts having each role in a project, you can use the Resource Manager API to get the IAM policies.
Long story short, make sure that gcloud is properly configured (the command below uses it to supply the project ID and an access token), then run the following, filtering the output according to your needs:
curl -X POST \
"https://cloudresourcemanager.googleapis.com/v1/projects/$(gcloud config get-value project):getIamPolicy" \
-d '{}' \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H 'Content-Type: application/json'
Alternatively, use gcloud directly:
gcloud projects get-iam-policy $PROJ \
--flatten="bindings[].members" \
--format='table(bindings.role)' \
--filter="bindings.members:user:$USER"
USER is an email address (e.g. me@org.com); PROJ is a project ID (e.g. project-123456).
UPDATE: To search across all resources:
gcloud asset search-all-iam-policies --query="policy:$EMAIL"
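Note that gcloud asset commands are backed by the Cloud Asset API, which must be enabled on the project. A minimal sketch, assuming gcloud is already authenticated and EMAIL is a placeholder for the account to search for:

```shell
# Enable the Cloud Asset API (required for `gcloud asset ...` commands)
gcloud services enable cloudasset.googleapis.com

# Search IAM policies across all resources; quoting the query guards
# against the shell interpreting special characters in $EMAIL
EMAIL=me@org.com  # placeholder account
gcloud asset search-all-iam-policies --query="policy:${EMAIL}"
```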
Related
When using os-login is it possible to grant granular access to a particular server?
From the documentation it seems that once you give the roles roles/compute.osLogin or roles/compute.osAdminLogin you get access to everything in the project that has metadata tag for enable-oslogin=true.
I tried setting conditions on the role I am granting, restricting it to the resource name instance-1, but this doesn't seem to work the way I am expecting.
Are my assumptions correct about not being able to grant granular access at a resource level?
You can grant users permissions to specific instances by granting access at the instance level rather than the project level (which covers all of the project's instances). E.g.:
gcloud compute instances add-iam-policy-binding ${INSTANCE} \
--zone=${ZONE} \
--member=${MEMBER} \
--role=${ROLE} \
--project=${PROJECT}
If you grant a User permission at the Project level, the permission is inherited by Instances in the Project. E.g:
gcloud projects add-iam-policy-binding ${PROJECT} \
--member=${MEMBER} \
--role=${ROLE}
You can add OS Login metadata at the Organization, Project, and Instance levels. If you enable it at, e.g., the Project level, OS Login is inherited by all instances in the Project.
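As a sketch of that inheritance (INSTANCE and ZONE are placeholders), OS Login can be enabled at either level with the metadata key mentioned above:

```shell
# Project level: every instance in the project inherits OS Login
gcloud compute project-info add-metadata \
  --metadata enable-oslogin=TRUE

# Instance level: enable OS Login for a single instance only
INSTANCE=instance-1   # placeholder
ZONE=us-central1-a    # placeholder
gcloud compute instances add-metadata "${INSTANCE}" \
  --zone "${ZONE}" \
  --metadata enable-oslogin=TRUE
```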
I'm trying to build a flex-template image using a service account:
gcloud dataflow flex-template build "$TEMPLATE_PATH" \
--image-gcr-path "$TEMPLATE_IMAGE" \
--sdk-language "JAVA" \
--flex-template-base-image JAVA11 \
--metadata-file "metadata.json" \
--jar "target/XXX.jar" \
--env FLEX_TEMPLATE_JAVA_MAIN_CLASS="XXX"
The service account has the following roles:
"roles/appengine.appAdmin",
"roles/bigquery.admin",
"roles/cloudfunctions.admin",
"roles/cloudtasks.admin",
"roles/compute.viewer",
"roles/container.admin",
"roles/dataproc.admin",
"roles/iam.securityAdmin",
"roles/iam.serviceAccountAdmin",
"roles/iam.serviceAccountUser",
"roles/iam.roleAdmin",
"roles/resourcemanager.projectIamAdmin",
"roles/pubsub.admin",
"roles/serviceusage.serviceUsageAdmin",
"roles/servicemanagement.admin",
"roles/spanner.admin",
"roles/storage.admin",
"roles/storage.objectAdmin",
"roles/firebase.admin",
"roles/cloudconfig.admin",
"roles/vpcaccess.admin",
"roles/compute.instanceAdmin.v1",
"roles/dataflow.admin",
"roles/dataflow.serviceAgent"
However, even with the dataflow.admin and dataflow.serviceAgent roles, my service account is still unable to perform this task.
The documentation (https://cloud.google.com/dataflow/docs/guides/templates/using-flex-templates) advises granting the roles/owner role to the service account, but I'm hesitant to do that: this is meant to be part of a CI/CD pipeline, and giving a service account the owner role doesn't really make sense to me unless I'm completely wrong.
Is there any way to circumvent this issue without granting the owner role to the service account?
I just ran into the exact same issue and spent a few hours figuring it out. We use a Terraform service account as well. As you mentioned, there are two main issues: service account access and build logs access.
By default, Cloud Build uses a default service account of the form [project_number]@cloudbuild.gserviceaccount.com, so you need to grant that service account permission to write to the GCS bucket backing the GCR container registry. I granted roles/storage.admin to my service account.
Like you mentioned, again by default, Cloud Build saves the logs at gs://[project_number].cloudbuild-logs.googleusercontent.com. This seems to be a hidden bucket in the project; at least I could not see it. In addition, you can't configure google_storage_bucket_iam_member for it; instead, the recommendation as per this doc is to grant roles/viewer at the project level to the service account running the gcloud dataflow ... command.
I was able to run the command successfully after the above changes.
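For reference, a sketch of the two grants described above. The account names are placeholders, and the GCR backing bucket is typically named artifacts.[PROJECT_ID].appspot.com for gcr.io images:

```shell
PROJECT_ID=my-project                                      # placeholder
CLOUDBUILD_SA="123456789@cloudbuild.gserviceaccount.com"   # placeholder project number
DEPLOY_SA="deployer@my-project.iam.gserviceaccount.com"    # placeholder

# 1. Let the Cloud Build service account write to the GCR backing bucket
gsutil iam ch "serviceAccount:${CLOUDBUILD_SA}:roles/storage.admin" \
  "gs://artifacts.${PROJECT_ID}.appspot.com"

# 2. Let the account running `gcloud dataflow flex-template build`
#    read the hidden build-logs bucket via project-level roles/viewer
gcloud projects add-iam-policy-binding "${PROJECT_ID}" \
  --member="serviceAccount:${DEPLOY_SA}" \
  --role=roles/viewer
```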
tl;dr: Cannot trigger an export with gcloud sql export sql ... on VM which always leads into a PERMISSION_DENIED even though I think that I have set all permissions for its Service Account.
The whole problem actually sounds relatively simple. I want to trigger an export of my Cloud SQL database in my Google Cloud Compute VM at certain times.
What I did so far:
Added the Cloud SQL Admin role (just for the sake of testing) to the VM's service account in the IAM section.
Created and downloaded the service account key and used gcloud auth activate-service-account --key-file cert.json
Ran the following command:
gcloud sql export sql "${SQL_INSTANCE}" "gs://${BUCKET}/${FILENAME}" -d "${DATABASE}"
(this works without a problem with my own, personal account)
The command resulted in the following error:
ERROR: (gcloud.sql.export.sql) PERMISSION_DENIED: Request had insufficient authentication scopes.
What else I tried
I found an article from Google and used the Compute Engine service account instead of creating a Cloud Function service account. The result is sadly the same.
You do not have the roles assigned to the service account that you think you have.
You need one of the following roles assigned to the service account:
roles/owner (Not recommended)
roles/viewer (Not recommended)
roles/cloudsql.admin (Not recommended unless required for other SQL operations)
roles/cloudsql.editor (Not recommended unless required for other SQL operations)
roles/cloudsql.viewer (Recommended)
Go to the Google Cloud Console -> Compute Engine.
Click on your VM instance. Scroll down and find the service account assigned to your VM instance. Copy the service account email address.
Run the following command (replace \ with ^ for Windows in the following command and specify your PROJECT ID (not PROJECT NAME) and the service account email address):
gcloud projects get-iam-policy <PROJECT_ID> \
--flatten="bindings[].members" \
--format="table(bindings.role)" \
--filter="bindings.members:<COMPUTE_ENGINE_SERVICE_ACCOUNT>"
Double-check that the roles you require are present in the output.
To list your projects to obtain the PROJECT ID:
gcloud projects list
Note: Do not assign permissions directly to the service account resource. Instead, add an IAM policy binding at the project level that grants the required role to the service account member:
gcloud projects add-iam-policy-binding <PROJECT_ID> \
--member serviceAccount:<COMPUTE_ENGINE_SERVICE_ACCOUNT> \
--role roles/cloudsql.viewer
I encounter the following warning:
WARNING: You do not appear to have access to project [$PROJECT] or it does not exist.
after running the following commands locally:
Activate and set a service account:
gcloud auth activate-service-account \
$SERVICE_ACCOUNT \
--key-file=key.json
#=>
Activated service account credentials for: [$SERVICE_ACCOUNT]
Set $PROJECT as the active project for the above service account:
gcloud config set project $PROJECT
#=>
Updated property [core/project].
WARNING: You do not appear to have access to project [$PROJECT] or it does not exist.
My own GCP account is associated with the following roles:
App Engine Admin
Cloud Build Editor
Cloud Scheduler Admin
Storage Object Creator
Storage Object Viewer
Why is this service account unable to set $PROJECT? Is there a role or permission I am missing?
The solution to this issue might be to enable the Cloud Resource Manager API for your project in the Google Cloud Console.
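If you prefer the CLI over the Console, the same API can be enabled with gcloud, assuming your account has permission to enable services on the project:

```shell
# Enable the Cloud Resource Manager API for the active project
gcloud services enable cloudresourcemanager.googleapis.com
```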
I believe this is an erroneous warning message. I see the same warning message on my service account despite the fact that the account has permissions on my GCP project and can successfully perform necessary actions.
You might be seeing this error due to an unrelated problem. In my case, I was trying to deploy to AppEngine from a continuous integration environment (Circle CI), but I hadn't enabled the App Engine Admin API. Once I enabled the API, I was able to deploy successfully.
I encountered this error when I started out with Google Cloud Platform.
The issue was that I had configured a non-existent project (my-kube-project) as my default project using the command below:
gcloud config set project my-kube-project
Here's how I solved it:
I had to list my existing projects first:
gcloud projects list
And then I copied the ID of the project that I wanted and ran the command again, this time with:
gcloud config set project gold-magpie-258213
And it worked fine.
Note: You cannot change a project's ID or Number; you can only change its Name.
That's all.
I hope this helps
I was encountering the same error when trying to deploy an app to Google App Engine via a service account configured in CircleCI, and resolved it by attaching the following roles to my service account:
App Engine Deployer
App Engine Service Admin
Cloud Build Editor
Storage Object Creator
Storage Object Viewer
I also had the App Engine Admin API enabled, but not the Cloud Resource Manager API.
The
WARNING: You do not appear to have access to project [$PROJECT_ID] or it does not exist.
warning will appear if there isn't at least one role granted to the service account that contains the resourcemanager.projects.get permission.
In other words, the warning will appear if the result of the following commands is blank:
Gather all roles for a given $SERVICE_ACCOUNT (this works for any account, not just service accounts):
gcloud projects get-iam-policy $PROJECT_ID \
--flatten='bindings[].members' \
--format='table(bindings.role)' \
--filter="bindings.members:${SERVICE_ACCOUNT}"
#=>
ROLE
. . .
For each $ROLE gathered above, either:
gcloud iam roles describe $ROLE \
--flatten='includedPermissions' \
--format='value(includedPermissions)' \
--project=$PROJECT_ID | grep \
--regexp '^resourcemanager.projects.get$'
if the $ROLE is a custom role (projects/$PROJECT_ID/roles/$ROLE), or:
gcloud iam roles describe roles/$ROLE \
--flatten='includedPermissions' \
--format='value(includedPermissions)' | grep \
--regexp '^resourcemanager.projects.get$'
if the $ROLE is a curated role (roles/$ROLE).
Note: the difference between gcloud command formatting for custom and curated roles is what makes listing all permissions associated with all roles associated with a single account difficult.
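If you do want to script it anyway, here's a rough sketch of such a loop (PROJECT_ID and SERVICE_ACCOUNT are placeholders); it branches on the role name to pick the right describe invocation:

```shell
PROJECT_ID=my-project                                    # placeholder
SERVICE_ACCOUNT=sa@my-project.iam.gserviceaccount.com    # placeholder

for role in $(gcloud projects get-iam-policy "$PROJECT_ID" \
    --flatten='bindings[].members' \
    --filter="bindings.members:${SERVICE_ACCOUNT}" \
    --format='value(bindings.role)'); do
  echo "== ${role}"
  case "$role" in
    projects/*)  # custom role: describe by its short ID within the project
      gcloud iam roles describe "${role##*/}" \
        --project="$PROJECT_ID" \
        --format='value(includedPermissions)' ;;
    *)           # curated role: describe by its full roles/NAME form
      gcloud iam roles describe "$role" \
        --format='value(includedPermissions)' ;;
  esac
done
```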
If you have confirmed that none of the roles associated with a service account contain the resourcemanager.projects.get permission, then either:
Update at least one of the custom roles associated with the service account with the resourcemanager.projects.get permission:
gcloud iam roles update $ROLE \
--add-permissions=resourcemanager.projects.get \
--project=$PROJECT_ID
#=>
description: $ROLE_DESCRIPTION
etag: . . .
includedPermissions:
. . .
- resourcemanager.projects.get
. . .
name: projects/$PROJECT_ID/roles/$ROLE
stage: . . .
title: $ROLE_TITLE
Warning: make sure to use the --add-permissions flag here when updating, as the --permissions flag will remove any other permissions the custom role used to have.
Create a custom role:
gcloud iam roles create $ROLE \
--description="$ROLE_DESCRIPTION" \
--permissions=resourcemanager.projects.get \
--project=$PROJECT_ID \
--title='$ROLE_TITLE'
#=>
Created role [$ROLE].
description: $ROLE_DESCRIPTION
etag: . . .
includedPermissions:
- resourcemanager.projects.get
name: projects/$PROJECT_ID/roles/$ROLE
stage: . . .
title: $ROLE_TITLE
and associate it with the service account:
gcloud projects add-iam-policy-binding $PROJECT_ID \
--member=serviceAccount:$SERVICE_ACCOUNT \
--role=projects/$PROJECT_ID/roles/$ROLE
#=>
Updated IAM policy for project [$PROJECT_ID].
auditConfigs:
. . .
Associate the service account with a curated role that already contains the resourcemanager.projects.get permission, which has been discussed above.
If you want to know which curated roles already contain the resourcemanager.projects.get permission and don't want to craft a complex shell loop, it might be easier to open the Roles page in the Cloud Console (IAM & Admin -> Roles) and filter all roles by Permission:resourcemanager.projects.get.
Note: if you are running into issues, be sure to read the documentation on the requirements for granting access to resources.
I am attempting to use an activated service account, scoped to create and delete gcloud container clusters (k8s clusters), using the following commands:
gcloud config configurations create my-svc-account \
--no-activate \
--project myProject
gcloud auth activate-service-account my-svc-account@my-project.iam.gserviceaccount.com \
--key-file=/path/to/keyfile.json \
--configuration my-svc-account
gcloud container clusters create a-new-cluster \
--configuration my-svc-account \
--project my-project \
--zone "my-zone"
I always receive the error:
...ERROR: (gcloud.container.clusters.create) ResponseError: code=400, message=The user does not have access to service account "default".
How do I grant my-svc-account access to the default service account for GKE?
After talking to Google Support, the issue was that the service account did not have the "Service Account User" role. Adding the "Service Account User" role resolves this error.
Add the following role to the service account that performs the operation:
Service Account User
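Concretely, for GKE this usually means letting the caller act as the default Compute Engine service account that the cluster's nodes run as; a sketch with placeholder names:

```shell
PROJECT_NUMBER=123456789                                        # placeholder
GKE_NODE_SA="${PROJECT_NUMBER}-compute@developer.gserviceaccount.com"
CALLER_SA="my-svc-account@my-project.iam.gserviceaccount.com"   # placeholder

# Grant Service Account User (which carries iam.serviceAccounts.actAs)
# on the default node service account to the calling account
gcloud iam service-accounts add-iam-policy-binding "${GKE_NODE_SA}" \
  --member="serviceAccount:${CALLER_SA}" \
  --role=roles/iam.serviceAccountUser
```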
Also see:
https://cloud.google.com/kubernetes-engine/docs/how-to/iam#service_account_user
https://cloud.google.com/iam/docs/service-accounts#the_service_account_user_role
https://cloud.google.com/iam/docs/understanding-roles
For those that ended up here trying to do an Import of Firebase Firestore documents with a command such as:
gcloud beta firestore import --collection-ids='collectionA','collectionB' gs://YOUR_BUCKET
I got around the issue by doing the following:
From the Google Cloud Console Storage bucket browser, add the service account performing the operation to the bucket's members with the Storage Admin role.
Re-attempt the operation.
For security, I revoked the role after the operation completed, but that's optional.
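The same grant-and-revoke cycle can be scripted instead of done through the Console; a sketch with placeholder names:

```shell
SA="importer@my-project.iam.gserviceaccount.com"  # placeholder
BUCKET=gs://YOUR_BUCKET                           # placeholder

# Grant Storage Admin on just this bucket for the duration of the import
gsutil iam ch "serviceAccount:${SA}:roles/storage.admin" "${BUCKET}"

# ... run the `gcloud beta firestore import` here ...

# Revoke the binding afterwards (optional)
gsutil iam ch -d "serviceAccount:${SA}:roles/storage.admin" "${BUCKET}"
```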
iam.serviceAccounts.actAs is the exact permission you need; it is included in the Service Account User role.
I was getting the The user does not have access to service account... error even though I added the Service Account User role as others have suggested. What I was missing was the organization policy that prevented service account impersonation across projects. This is explained in the docs: https://cloud.google.com/iam/docs/impersonating-service-accounts#enabling-cross-project
Adding the Service Account User role to the service account worked for me.