GCP workload identity federation - Github provider - 'Unable to acquire impersonated credentials'

I've followed these instructions to the letter to allow me to use the short-lived token authentication method to access Google Cloud resources from our GitHub Actions workflow.
I've created the service account, workload identity pool and github provider pool using the exact instructions above, but it doesn't appear that the auth step is getting the correct token (or any token at all). The GCP service account has the correct IAM permissions.
On the gcloud compute instances list step, I'm receiving the error:
ERROR: (gcloud.compute.instances.list) There was a problem refreshing your current auth tokens: ('Unable to acquire impersonated credentials: No access token or invalid expiration in response.', '{\n "error": {\n "code": 403,\n "message": "The caller does not have permission",\n "status": "PERMISSION_DENIED"\n }\n}\n')
Please run:
$ gcloud auth login
to obtain new credentials.
My github actions file is as follows:
jobs:
  backup:
    permissions:
      contents: 'read'
      id-token: 'write'
    runs-on: ubuntu-latest
    steps:
      - name: 'Checkout code'
        uses: actions/checkout@v3
      - id: 'auth'
        name: 'Authenticate to Google Cloud'
        uses: 'google-github-actions/auth@v0'
        with:
          workload_identity_provider: 'projects/*******/locations/global/workloadIdentityPools/**REDACTED**/providers/github-provider'
          service_account: '**REDACTED**@**REDACTED**.iam.gserviceaccount.com'
      # Install gcloud; `setup-gcloud` automatically picks up authentication from `auth`.
      - name: 'Set up Cloud SDK'
        uses: 'google-github-actions/setup-gcloud@v0'
      - name: 'Use gcloud CLI'
        run: gcloud compute instances list
I enabled logging for the token exchange and I can see it occurring (with no obvious errors) in the GCP logs. So I'm completely stumped.
Any ideas?

So I later found out what this was. Despite running:
gcloud iam service-accounts add-iam-policy-binding SERVICE_ACCOUNT_EMAIL \
--role=roles/iam.workloadIdentityUser \
--member="MEMBER_EXPRESSION"
as per the docs, the permission had not actually been granted. I went into the console and checked the workload identity pool under the "Connected service accounts" menu (on the left); the service account wasn't listed there, so I added it manually.
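One way to confirm the binding actually landed (a hedged sketch; SERVICE_ACCOUNT_EMAIL and the principalSet path are placeholders) is to dump the service account's own IAM policy and look for the workloadIdentityUser binding:

```shell
# Dump the service account's IAM policy; the binding added by
# add-iam-policy-binding should appear here if it took effect.
gcloud iam service-accounts get-iam-policy "SERVICE_ACCOUNT_EMAIL" \
  --format=json
# Expect a binding resembling:
#   "role": "roles/iam.workloadIdentityUser",
#   "members": [
#     "principalSet://iam.googleapis.com/projects/PROJECT_NUMBER/locations/global/workloadIdentityPools/POOL_ID/attribute.repository/ORG/REPO"
#   ]
```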

In addition to the OP's own answer of the service account not being connected (bound) at all, this can result from the service account binding being constrained using attribute mappings.
In the default setup for GitHub Actions, discussed here on the Google blog for WIF, the provider is set up with a set of attribute mappings:
--attribute-mapping="google.subject=assertion.sub,attribute.repository=assertion.repository"
Those attributes can be used to restrict access when you connect (bind) the service account. In the following, the member uses the repository attribute to constrain the binding so that only actions executing in my-org/my-repo on GitHub will be permitted.
gcloud iam service-accounts add-iam-policy-binding "my-service-account@${PROJECT_ID}.iam.gserviceaccount.com" \
  --project="${PROJECT_ID}" \
  --role="roles/iam.workloadIdentityUser" \
  --member="principalSet://iam.googleapis.com/projects/1234567890/locations/global/workloadIdentityPools/my-pool/attribute.repository/my-org/my-repo"
A cross-repository provider
Of course, you want strong restrictions here: restrict to a repository, or even to a particular branch (so that only main-branch actions have the privilege to deploy to production, for example). No restrictions allows anything, which is absolutely not what you want!
In my case, I set up my WIF provider, then tried to reuse it from another repository, resulting in the error experienced by the OP.
I chose to add the repository_owner attribute mapping (from the list of all attributes available in GitHub's OIDC token; the attribute mappings are editable in the Google Cloud console), then bound my service account to that rather than the repository-specific principal:
--attribute-mapping="google.subject=assertion.sub,attribute.repository_owner=assertion.repository_owner"
and
gcloud iam service-accounts add-iam-policy-binding "my-service-account@${PROJECT_ID}.iam.gserviceaccount.com" \
  --project="${PROJECT_ID}" \
  --role="roles/iam.workloadIdentityUser" \
  --member="principalSet://iam.googleapis.com/projects/1234567890/locations/global/workloadIdentityPools/my-pool/attribute.repository_owner/my-org"
Bingo, it works like a charm now.
Take care to think about your attack surface, though: loosening this constraint too widely creates a real vulnerability.
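If you hit the same error, a quick way to see which attributes your provider actually maps (a sketch; the pool and provider names below are placeholders matching the examples above) is:

```shell
# Show the provider's attribute mapping; bindings can only reference
# attributes that appear here (e.g. attribute.repository_owner).
gcloud iam workload-identity-pools providers describe github-provider \
  --project="${PROJECT_ID}" \
  --location=global \
  --workload-identity-pool=my-pool \
  --format="value(attributeMapping)"
```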

Related

Accessing Resource ID in GCP without gcloud

I want to get resource ids of resources in google cloud platform.
I have enough access to google cloud console to access those resources.
However I can't access through gcloud shell utility and need additional permissions.
$ gcloud alpha monitoring policies list --project kbc-env --filter="userLabels.alerting_db_type:*" --format json > ~/Downloads/kbc-env.json
ERROR: (gcloud.alpha.monitoring.policies.list) User [sjain@kbc.com] does not have permission to access projects instance [kbc-env] (or it may not exist): Caller does not have required permission to use project kbc-env. Grant the caller the roles/serviceusage.serviceUsageConsumer role, or a custom role with the serviceusage.services.use permission, by visiting https://console.developers.google.com/iam-admin/iam/project?project=kbc-env and then retry. Propagation of the new permission may take a few minutes.
- '@type': type.googleapis.com/google.rpc.Help
  links:
  - description: Google developer console IAM admin
    url: https://console.developers.google.com/iam-admin/iam/project?project=kbc-env
- '@type': type.googleapis.com/google.rpc.ErrorInfo
  domain: googleapis.com
  metadata:
    consumer: projects/kbc-env
    service: monitoring.googleapis.com
  reason: USER_PROJECT_DENIED
How can I get resource IDs without gcloud permissions?
To get a resource ID without gcloud:
Step 1:
Go to the GCP console and open the resource directly.
For example, go to Cloud Monitoring, click on Alerting, and then click one of the alerts.
Step 2:
Read the resource ID from the page URL, as shown in the screenshot.

How do I deploy a Google Cloud Function (2nd generation)?

I've previously only used gen-1 Cloud Functions but now plan to move to the 2nd generation, and am just trying to deploy/test a first basic function. I'm taking Google's sample for a storage-triggered function and trying to deploy it, but it keeps failing.
This is what it looks like:
> gcloud functions deploy nodejs-finalize-function --gen2 --runtime=nodejs16 --project myproject --region=europe-west3 --source=. --entry-point=handleImage --trigger-event-filters='type=google.cloud.storage.object.v1.finalized' --trigger-event-filters='bucket=se_my_images'
Preparing function...done.
X Deploying function...
✓ [Build] Logs are available at [https://console.cloud.google.com/cloud-build/builds;region=europe-west3/a8355043-adf0-4485-a510-1d54b7e11111?project=123445666123]
✓ [Service]
✓ [Trigger]
- [ArtifactRegistry] Deleting function artifacts in Artifact Registry...
. [Healthcheck]
. [Triggercheck]
Failed.
ERROR: (gcloud.functions.deploy) OperationError: code=7, message=Creating trigger failed for projects/myproject/locations/europe-west3/triggers/nodejs-finalize-function-898863: The Cloud Storage service account for your bucket is unable to publish to Cloud Pub/Sub topics in the specified project.
To use GCS CloudEvent triggers, the GCS service account requires the Pub/Sub Publisher (roles/pubsub.publisher) IAM role in the specified project. (See https://cloud.google.com/eventarc/docs/run/quickstart-storage#before-you-begin)
The error looks easy enough to understand, but I have added the Pub/Sub Publisher role to all my service accounts (the ones listed below) and I still keep getting the same error.
>gcloud iam service-accounts list --project myproject
DISPLAY NAME                        EMAIL                                                       DISABLED
firebase-adminsdk                   firebase-adminsdk-u2x33@myproject.iam.gserviceaccount.com   False
Default compute service account     930445666575-compute@developer.gserviceaccount.com          False
backend-dev                         backend-dev@myproject.iam.gserviceaccount.com               False
App Engine default service account  myproject@appspot.gserviceaccount.com                       False
I don't know how to move forward from here so I hope someone can help.
*** EDIT ***
I added the role to the listed service accounts in the GCP console (IAM > Permissions > View By Principal), using the Edit Principal button to assign the additional Pub/Sub Publisher role. Note that I added the role to all of my listed service accounts, since I'm not 100% sure which one GCP uses for cloud deployment.
Since you are already using the gcloud CLI, I suggest you follow step 2 from the docs linked in the error, which says:
PROJECT_ID=$(gcloud config get-value project)
PROJECT_NUMBER=$(gcloud projects list --filter="project_id:$PROJECT_ID" --format='value(project_number)')
SERVICE_ACCOUNT=$(gsutil kms serviceaccount -p $PROJECT_NUMBER)
gcloud projects add-iam-policy-binding $PROJECT_ID \
--member serviceAccount:$SERVICE_ACCOUNT \
--role roles/pubsub.publisher
After these four commands, you should have no problems. I don't use the GCP interface for IAM purposes, since all my IAM policies are uploaded by terraform/terragrunt.
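To double-check that the grant took effect, a sketch along the lines of the commands above (only your active project is assumed):

```shell
# Resolve the Cloud Storage service agent for the active project and
# confirm it now holds roles/pubsub.publisher.
PROJECT_ID=$(gcloud config get-value project)
GCS_SA=$(gsutil kms serviceaccount -p "$PROJECT_ID")
gcloud projects get-iam-policy "$PROJECT_ID" \
  --flatten='bindings[].members' \
  --filter="bindings.members:serviceAccount:${GCS_SA} AND bindings.role:roles/pubsub.publisher" \
  --format='value(bindings.role)'
# A successful grant prints: roles/pubsub.publisher
```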

(gcloud.dataflow.flex-template.build) PERMISSION_DENIED: The caller does not have permission

I'm trying to build a flex-template image using a service account:
gcloud dataflow flex-template build "$TEMPLATE_PATH" \
--image-gcr-path "$TEMPLATE_IMAGE" \
--sdk-language "JAVA" \
--flex-template-base-image JAVA11 \
--metadata-file "metadata.json" \
--jar "target/XXX.jar" \
--env FLEX_TEMPLATE_JAVA_MAIN_CLASS="XXX"
The service account has the following roles:
"roles/appengine.appAdmin",
"roles/bigquery.admin",
"roles/cloudfunctions.admin",
"roles/cloudtasks.admin",
"roles/compute.viewer",
"roles/container.admin",
"roles/dataproc.admin",
"roles/iam.securityAdmin",
"roles/iam.serviceAccountAdmin",
"roles/iam.serviceAccountUser",
"roles/iam.roleAdmin",
"roles/resourcemanager.projectIamAdmin",
"roles/pubsub.admin",
"roles/serviceusage.serviceUsageAdmin",
"roles/servicemanagement.admin",
"roles/spanner.admin",
"roles/storage.admin",
"roles/storage.objectAdmin",
"roles/firebase.admin",
"roles/cloudconfig.admin",
"roles/vpcaccess.admin",
"roles/compute.instanceAdmin.v1",
"roles/dataflow.admin",
"roles/dataflow.serviceAgent"
However, even with the dataflow.admin and dataflow.serviceAgent roles, my service account is still unable to perform this task.
The documentation https://cloud.google.com/dataflow/docs/guides/templates/using-flex-templates advises to grant the roles/owner role to the service account, but I'm hesitant to do that as this is meant to be part of a CI/CD pipeline and giving a service account an owner role doesn't really make sense to me unless I'm completely wrong.
Is there any way to circumvent this issue without granting the owner role to the service account?
I just ran into the exact same issue and spent a few hours figuring it out. We use a terraform service account as well. As you mentioned, there are two main issues: service account access and build-logs access.
By default, Cloud Build uses a default service account of the form [project_number]@cloudbuild.gserviceaccount.com, so you need to grant this service account permission to write to the GCS bucket backing the GCR container registry. I granted roles/storage.admin to my service account.
Like you mentioned, again by default, Cloud Build saves the logs at gs://[project_number].cloudbuild-logs.googleusercontent.com. This seems to be a hidden bucket in the project; at least I could not see it. In addition, you can't configure google_storage_bucket_iam_member for it; instead, the recommendation per this doc is to give roles/viewer at the project level to the service account running the gcloud dataflow ... command.
I was able to run the command successfully after the above changes.
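For reference, the two grants described above can be sketched as follows (YOUR_CI_SERVICE_ACCOUNT is a placeholder for the account running the flex-template build):

```shell
PROJECT_ID=$(gcloud config get-value project)
PROJECT_NUMBER=$(gcloud projects describe "$PROJECT_ID" --format='value(projectNumber)')

# 1. Let the default Cloud Build service account write to the GCS bucket
#    backing the GCR container registry.
gcloud projects add-iam-policy-binding "$PROJECT_ID" \
  --member="serviceAccount:${PROJECT_NUMBER}@cloudbuild.gserviceaccount.com" \
  --role=roles/storage.admin

# 2. Let the CI service account read the hidden Cloud Build logs bucket
#    (project-level viewer, as the doc recommends).
gcloud projects add-iam-policy-binding "$PROJECT_ID" \
  --member="serviceAccount:YOUR_CI_SERVICE_ACCOUNT" \
  --role=roles/viewer
```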

Google Cloud Platform Service Account is Unable to Access Project

I encounter the following warning:
WARNING: You do not appear to have access to project [$PROJECT] or it does not exist.
after running the following commands locally:
Activate and set a service account:
gcloud auth activate-service-account \
$SERVICE_ACCOUNT \
--key-file=key.json
#=>
Activated service account credentials for: [$SERVICE_ACCOUNT]
Select $PROJECT as the above service account:
gcloud config set project $PROJECT
#=>
Updated property [core/project].
WARNING: You do not appear to have access to project [$PROJECT] or it does not exist.
My own GCP account is associated with the following roles:
App Engine Admin
Cloud Build Editor
Cloud Scheduler Admin
Storage Object Creator
Storage Object Viewer
Why is this service account unable to set $PROJECT? Is there a role or permission I am missing?
The solution to this issue might be to enable the Cloud Resource Manager API in your Google Cloud Console here by clicking enable.
I believe this is an erroneous warning message. I see the same warning message on my service account despite the fact that the account has permissions on my GCP project and can successfully perform necessary actions.
You might be seeing this error due to an unrelated problem. In my case, I was trying to deploy to AppEngine from a continuous integration environment (Circle CI), but I hadn't enabled the App Engine Admin API. Once I enabled the API, I was able to deploy successfully.
I encountered this error when I started out with Google Cloud Platform.
The issue was that I had configured/set a non-existent project (my-kube-project) as my default project using the command below:
gcloud config set project my-kube-project
Here's how I solved it:
I had to list my existing projects first:
gcloud projects list
And then I copied the ID of the project that I wanted and ran the command again, this time with:
gcloud config set project gold-magpie-258213
And it worked fine.
Note: you cannot change a project's ID or Number; you can only change its Name.
That's all. I hope this helps.
I was encountering the same error when trying to deploy an app to Google App Engine via a service account configured in CircleCI, and resolved it by attaching the following roles (permissions) to my service account:
App Engine Deployer
App Engine Service Admin
Cloud Build Editor
Storage Object Creator
Storage Object Viewer
I also had the App Engine Admin API enabled, but not the Cloud Resource Manager API.
The
WARNING: You do not appear to have access to project [$PROJECT_ID] or it does not exist.
warning will appear if there isn't at least one role granted to the service account that contains the resourcemanager.projects.get permission.
In other words, the warning will appear if the result of the following commands is blank:
Gather all roles for a given $SERVICE_ACCOUNT (this works for any account, not just service accounts):
gcloud projects get-iam-policy $PROJECT_ID \
--flatten='bindings[].members' \
--format='table(bindings.role)' \
--filter="bindings.members:${SERVICE_ACCOUNT}"
#=>
ROLE
. . .
For each $ROLE gathered above, either:
gcloud iam roles describe $ROLE \
--flatten='includedPermissions' \
--format='value(includedPermissions)' \
--project=$PROJECT_ID | grep \
--regexp '^resourcemanager.projects.get$'
if the $ROLE is a custom role (projects/$PROJECT_ID/roles/$ROLE), or:
gcloud iam roles describe roles/$ROLE \
--flatten='includedPermissions' \
--format='value(includedPermissions)' | grep \
--regexp '^resourcemanager.projects.get$'
if the $ROLE is a curated role (roles/$ROLE).
Note: the difference between gcloud command formatting for custom and curated roles is what makes listing all permissions associated with all roles associated with a single account difficult.
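If you do want to script it anyway, here is one hedged sketch of such a loop, following the two describe forms above (assumes $PROJECT_ID and $SERVICE_ACCOUNT are already set):

```shell
# For every role bound to the account, check whether it includes
# resourcemanager.projects.get. Custom roles are described by their
# short ID with --project; curated roles (roles/...) directly.
for ROLE in $(gcloud projects get-iam-policy "$PROJECT_ID" \
    --flatten='bindings[].members' \
    --filter="bindings.members:${SERVICE_ACCOUNT}" \
    --format='value(bindings.role)'); do
  if [ "${ROLE#projects/}" != "$ROLE" ]; then
    # Custom role: strip the resource prefix, pass the project explicitly.
    PERMS=$(gcloud iam roles describe "${ROLE##*/}" \
      --project="$PROJECT_ID" --format='value(includedPermissions)')
  else
    PERMS=$(gcloud iam roles describe "$ROLE" \
      --format='value(includedPermissions)')
  fi
  case "$PERMS" in
    *resourcemanager.projects.get*)
      echo "$ROLE grants resourcemanager.projects.get" ;;
  esac
done
```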
If you have confirmed that none of the roles associated with a service account contain the resourcemanager.projects.get permission, then either:
Update at least one of the custom roles associated with the service account with the resourcemanager.projects.get permission:
gcloud iam roles update $ROLE \
--add-permissions=resourcemanager.projects.get \
--project=$PROJECT_ID
#=>
description: $ROLE_DESCRIPTION
etag: . . .
includedPermissions:
. . .
- resourcemanager.projects.get
. . .
name: projects/$PROJECT_ID/roles/$ROLE
stage: . . .
title: $ROLE_TITLE
Warning: make sure to use the --add-permissions flag here when updating, as the --permissions flag will remove any other permissions the custom role used to have.
Create a custom role:
gcloud iam roles create $ROLE \
--description="$ROLE_DESCRIPTION" \
--permissions=resourcemanager.projects.get \
--project=$PROJECT_ID \
--title='$ROLE_TITLE'
#=>
Created role [$ROLE].
description: $ROLE_DESCRIPTION
etag: . . .
includedPermissions:
- resourcemanager.projects.get
name: projects/$PROJECT_ID/roles/$ROLE
stage: . . .
title: $ROLE_TITLE
and associate it with the service account:
gcloud projects add-iam-policy-binding $PROJECT_ID \
--member=serviceAccount:$SERVICE_ACCOUNT \
--role=projects/$PROJECT_ID/roles/$ROLE
#=>
Updated IAM policy for project [$PROJECT_ID].
auditConfigs:
. . .
Associate the service account with a curated role that already contains the resourcemanager.projects.get permission, which has been discussed above.
If you want to know which curated roles already contain the resourcemanager.projects.get permission and don't want to craft a complex shell loop, it might be easier to go here and filter all roles by Permission:resourcemanager.projects.get.
Note: if you are running into issues, be sure to read the requirements for granting access to resources here.

gcloud: The user does not have access to service account "default"

I am attempting to use an activated service account, scoped to create and delete gcloud container clusters (k8s clusters), using the following commands:
gcloud config configurations create my-svc-account \
--no-activate \
--project myProject
gcloud auth activate-service-account my-svc-account@my-project.iam.gserviceaccount.com \
--key-file=/path/to/keyfile.json \
--configuration my-svc-account
gcloud container clusters create a-new-cluster \
    --configuration my-svc-account \
    --project my-project \
    --zone "my-zone"
I always receive the error:
...ERROR: (gcloud.container.clusters.create) ResponseError: code=400, message=The user does not have access to service account "default".
How do I grant my-svc-account access to the default service account for GKE?
After talking to Google Support, the issue was that the service account did not have the "Service Account User" role. Adding "Service Account User" resolves this error.
Add the following role to the service account who makes the operation:
Service Account User
Also see:
https://cloud.google.com/kubernetes-engine/docs/how-to/iam#service_account_user
https://cloud.google.com/iam/docs/service-accounts#the_service_account_user_role
https://cloud.google.com/iam/docs/understanding-roles
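In gcloud terms, the fix the answers describe can be sketched like this (the default compute service account email and the account names are placeholders):

```shell
# Grant the acting service account the Service Account User role on the
# "default" (compute) service account it needs to act as.
gcloud iam service-accounts add-iam-policy-binding \
  "PROJECT_NUMBER-compute@developer.gserviceaccount.com" \
  --member="serviceAccount:my-svc-account@my-project.iam.gserviceaccount.com" \
  --role=roles/iam.serviceAccountUser
```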
For those that ended up here trying to do an Import of Firebase Firestore documents with a command such as:
gcloud beta firestore import --collection-ids='collectionA','collectionB' gs://YOUR_BUCKET
I got around the issue by doing the following:
From the Google Cloud Console Storage Bucket Browser, add the service account completing the operation to the list of members with a role of Storage Admin.
Re-attempt the operation.
For security, I revoked the role after the operation completed, but that's optional.
iam.serviceAccounts.actAs is the exact permission you need from Service Account User
I was getting the The user does not have access to service account... error even though I added the Service Account User role as others have suggested. What I was missing was the organization policy that prevented service account impersonation across projects. This is explained in the docs: https://cloud.google.com/iam/docs/impersonating-service-accounts#enabling-cross-project
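To see whether that org policy is what's blocking you, you can inspect the effective policy on the project (a sketch; the constraint name comes from the linked doc, and my-project is a placeholder):

```shell
# Show the effective org policy governing cross-project service
# account usage for the project.
gcloud resource-manager org-policies describe \
  iam.disableCrossProjectServiceAccountUsage \
  --project=my-project --effective
```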
Adding the Service Account User role to the service account worked for me as well.