Google Cloud VM Instance is not authorized with service account - google-cloud-platform

I have dist-upgraded my Debian VM instance to the current release, after which I re-installed the Google Cloud guest environment. This is the state of the Google-related services:
$ systemctl list-unit-files | grep google
google-cloud-ops-agent-diagnostics.service enabled enabled
google-cloud-ops-agent-fluent-bit.service static -
google-cloud-ops-agent-opentelemetry-collector.service static -
google-cloud-ops-agent.service enabled enabled
google-guest-agent.service enabled enabled
google-osconfig-agent.service enabled enabled
google-oslogin-cache.service static -
google-shutdown-scripts.service enabled enabled
google-startup-scripts.service enabled enabled
google-oslogin-cache.timer enabled enabled
I see no errors from them in the system logs. My gcloud config list looks like this:
[core]
account = my-project-id-compute@developer.gserviceaccount.com
disable_usage_reporting = True
project = my-project
Your active configuration is: [default]
Everything seems fine, and yet I cannot access any gcloud resources, e.g.:
# gcloud compute instances list
ERROR: (gcloud.compute.instances.list) Some requests did not succeed:
- Request had insufficient authentication scopes.
I have another instance, which was not upgraded. There, the commands above give the same output, yet gcloud compute instances list works correctly. GOOGLE_APPLICATION_CREDENTIALS is undefined on both instances, and neither instance has a $HOME/.config/gcloud/application_default_credentials.json file.
How can I authorize the service account so that it is usable on the broken instance?
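For reference, the scopes actually granted to the instance's credentials can be read from the metadata server. This is only a diagnostic sketch (the endpoint is the standard GCE metadata path), but if the returned list lacks https://www.googleapis.com/auth/compute or https://www.googleapis.com/auth/cloud-platform, the error above is expected:
$ curl -H "Metadata-Flavor: Google" \
  "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/scopes"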

Related

What are access scopes in gcloud auth, and why do they differ in Cloud Shell vs. my local machine?

I'm seeing a permissions error when using docker push as described in the Google Artifact Registry Quickstart. As noted in that question, the problem seems to come down to missing scopes on the access token. In my local shell, the scopes are these (as indicated by https://www.googleapis.com/oauth2/v1/tokeninfo?access_token=<token>):
openid https://www.googleapis.com/auth/userinfo.email https://www.googleapis.com/auth/cloud-platform https://www.googleapis.com/auth/appengine.admin https://www.googleapis.com/auth/compute https://www.googleapis.com/auth/accounts.reauth
When I run the same sequence of steps in Cloud Shell, I have many more scopes on the access token:
https://www.googleapis.com/auth/userinfo.email https://www.googleapis.com/auth/appengine.admin https://www.googleapis.com/auth/bigquery https://www.googleapis.com/auth/compute https://www.googleapis.com/auth/devstorage.full_control https://www.googleapis.com/auth/devstorage.read_only https://www.googleapis.com/auth/drive https://www.googleapis.com/auth/ndev.cloudman https://www.googleapis.com/auth/cloud-platform https://www.googleapis.com/auth/sqlservice.admin https://www.googleapis.com/auth/prediction https://www.googleapis.com/auth/projecthosting https://www.googleapis.com/auth/source.full_control https://www.googleapis.com/auth/source.read_only https://www.googleapis.com/auth/source.read_write openid
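Both scope lists come from roughly the same check (assuming the tokeninfo endpoint quoted above and gcloud auth print-access-token to obtain the token):
$ curl "https://www.googleapis.com/oauth2/v1/tokeninfo?access_token=$(gcloud auth print-access-token)"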
I'm not able to pinpoint what differences between my Cloud Shell configuration and my local one might cause this difference in scopes. These commands all have the same output on both:
$ gcloud auth list
Credentialed Accounts
ACTIVE: *
ACCOUNT: <my email address>
$ cat ~/.docker/config.json
{
  "credHelpers": {
    "gcr.io": "gcloud",
    "us.gcr.io": "gcloud",
    "eu.gcr.io": "gcloud",
    "asia.gcr.io": "gcloud",
    "staging-k8s.gcr.io": "gcloud",
    "marketplace.gcr.io": "gcloud",
    "us-central1-docker.pkg.dev": "gcloud"
  }
}
gcloud config list shows these differences:
// in Cloud Shell
[accessibility]
screen_reader = True
[component_manager]
disable_update_check = True
[compute]
gce_metadata_read_timeout_sec = 30
[core]
account = <my email address>
disable_usage_reporting = True
project = <my project>
[metrics]
environment = devshell
// on my local machine
[core]
account = <my email address>
disable_usage_reporting = True
pass_credentials_to_gsutil = false
project = <my project>
Questions:
What are scopes here anyway? What is their relationship to the roles assigned to the project principal (example@stackoverflow.com)?
What could be causing my scopes to differ in Cloud Shell vs on my local machine? How do I fix it so I can correctly access the Artifact Registry locally?
EDIT:
To clarify, here are the commands I'm running and the error I'm seeing, which exactly duplicates the SO question referenced above. Commands are taken directly from the Artifact Registry Quickstart (https://cloud.google.com/artifact-registry/docs/docker/quickstart#gcloud). This question was intended to be about scopes, but it seems those may not be my issue.
$ gcloud auth configure-docker us-central1-docker.pkg.dev
WARNING: Your config file at [~/.docker/config.json] contains these credential helper entries:
{
  "credHelpers": {
    "gcr.io": "gcloud",
    "us.gcr.io": "gcloud",
    "eu.gcr.io": "gcloud",
    "asia.gcr.io": "gcloud",
    "staging-k8s.gcr.io": "gcloud",
    "marketplace.gcr.io": "gcloud",
    "us-central1-docker.pkg.dev": "gcloud"
  }
}
Adding credentials for: us-central1-docker.pkg.dev
gcloud credential helpers already registered correctly.
$ sudo docker tag us-docker.pkg.dev/google-samples/containers/gke/hello-app:1.0 \
us-central1-docker.pkg.dev/<my project>/quickstart-docker-repo/quickstart-image:tag1
$ sudo docker push us-central1-docker.pkg.dev/<my project>/quickstart-docker-repo/quickstart-image:tag1
The push refers to repository [us-central1-docker.pkg.dev/<my project>/quickstart-docker-repo/quickstart-image]
260c3e3f1e70: Preparing
e2eb06d8af82: Preparing
denied: Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/qwanto/locations/us-central1/repositories/quickstart-docker-repo" (or it may not exist)
What are scopes here anyway? What is their relationship to the roles assigned to the project principal (example@stackoverflow.com)?
In Google Cloud, the permissions of an identity are determined by Google Cloud IAM roles. This is an important point to understand.
OAuth scopes are used when requesting authorization. Scopes can limit the granted permissions to a subset of the permissions granted by an IAM role. Scopes cannot grant permissions that exceed, or are not included in, an IAM role.
Think of the resulting permissions as the intersection of IAM roles and OAuth scopes.
Note: You have the scope https://www.googleapis.com/auth/cloud-platform which is sufficient. The other scopes are just extras. Ignore scopes and make sure your IAM roles are correct.
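To verify the IAM side, one way to list the roles bound to an identity is a query like this (PROJECT_ID and the email are placeholders; the flags are standard gcloud output filters):
$ gcloud projects get-iam-policy PROJECT_ID \
    --flatten="bindings[].members" \
    --filter="bindings.members:user:you@example.com" \
    --format="table(bindings.role)"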
What could be causing my scopes to differ in Cloud Shell vs on my
local machine? How do I fix it so I can correctly access the Artifact
Registry locally?
You are chasing the wrong details (scopes) in solving your problem. Provided that you have the correct IAM roles granted to your identity, you can push to Container Registry and Artifact Registry.
Presumably running into the same issue, I found the solution somewhat hidden in the docs (at the end of the linked section):
Note: If you normally run Docker commands on Linux with sudo, Docker
looks for Artifact Registry credentials in /root/.docker/config.json
instead of $HOME/.docker/config.json. If you want to use sudo with
docker commands instead of using the Docker security group, configure
credentials with sudo gcloud auth configure-docker instead.
So basically, the quickstart works only if you don't use sudo to run docker.
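In other words, either of the following should work; the first mirrors the note in the docs, the second drops sudo entirely by using the docker group (log out and back in after adding yourself):
$ sudo gcloud auth configure-docker us-central1-docker.pkg.dev
or
$ sudo usermod -aG docker $USER
$ docker push us-central1-docker.pkg.dev/<my project>/quickstart-docker-repo/quickstart-image:tag1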

How to install monitoring agent on GCP Compute VM that is set to a Service Account?

I have a GCP VM set to use a service account, so the VM instance details in the console show:
Service account
blarg@MYPROJECT.iam.gserviceaccount.com
When I ran the command to install the monitoring agent, I saw this:
Updating project ssh metadata...failed.
Updating instance ssh metadata...failed.
ERROR: (gcloud.beta.compute.ssh) Could not add SSH key to instance metadata:
Required 'compute.instances.setMetadata' permission for 'projects/MYPROJECT/zones/us-central1-a/instances/MYVM'
I gave the service account the Compute Admin role on the instance (not the whole project) and re-ran the command. The result is even more confusing:
Updating project ssh metadata...failed.
Updating instance ssh metadata...failed.
ERROR: (gcloud.beta.compute.ssh) Could not add SSH key to instance metadata:
The user does not have access to service account 'blarg@MYPROJECT.iam.gserviceaccount.com'. User: 'blarg@MYPROJECT.iam.gserviceaccount.com'. Ask a project owner to grant you the iam.serviceAccountUser role on the service account
Do I really have to grant the iam.serviceAccountUser role on the service account so it can use itself? Is there another way I can run the script as me rather than as the service account, given that I am a project admin/owner?
That's correct, per the official documentation of the compute admin role:
Full control of all Compute Engine resources.
If the user will be managing virtual machine instances that are
configured to run as a service account, you must also grant the
roles/iam.serviceAccountUser role.
Link: https://cloud.google.com/compute/docs/access/iam
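In gcloud terms, that grant looks roughly like this (using the names from the question; the account is granted iam.serviceAccountUser on itself):
gcloud iam service-accounts add-iam-policy-binding \
  blarg@MYPROJECT.iam.gserviceaccount.com \
  --member="serviceAccount:blarg@MYPROJECT.iam.gserviceaccount.com" \
  --role="roles/iam.serviceAccountUser"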

How to view external ip on a google cloud platform compute engine?

I am logged into a compute engine instance via ssh, using my personal ssh keys; i.e., I ran gcloud compute ssh --project someproj --zone somezone someserver to log in. Then I tried gcloud compute instances list to view the external IP. It says I have insufficient privileges. My understanding is that although I ssh in as myself, I am actually using the service account. So I edited the service account to have the role Compute Viewer, but I still get an error. What am I doing wrong?
Please be advised that I know I can view the external IP from the console or from my PC using the CLI. I'm more interested in why the compute instance cannot see it, given the IAM settings.
Here is the actual error:
$ gcloud compute instances list
ERROR: (gcloud.compute.instances.list) Some requests did not succeed:
- Insufficient Permission: Request had insufficient authentication scopes.
Here is my gcloud sdk config:
$ gcloud config list
[core]
account = some-service-account@developer.gserviceaccount.com
disable_usage_reporting = True
project = some-project
Your active configuration is: [default]
When you create a Compute Engine instance, you have the opportunity to specify "scopes". Scopes are an older mechanism that constrains which APIs the instance's default credentials may call. The default is "Allow default access", which allows some GCP services and not others. The other two options are "Allow full access" and "Set access for each API". If you specify "Allow full access", access to GCP services is controlled exclusively by IAM. With either the default or per-API access, you are governed by BOTH scopes and IAM permissions.
It is likely that you are using default access, which prevents the gcloud command you want to run. Either set "Allow full access" or change the specific scopes to allow the Compute scope.
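To change the scopes on an existing instance it must be stopped first; a sketch using the names from the question (the cloud-platform scope corresponds to "Allow full access"):
$ gcloud compute instances stop someserver --zone somezone
$ gcloud compute instances set-service-account someserver --zone somezone \
    --service-account=some-service-account@developer.gserviceaccount.com \
    --scopes=cloud-platform
$ gcloud compute instances start someserver --zone somezone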

What predefined IAM roles does a service account need to complete the Google Cloud Run Quickstart: Build and Deploy?

I want to compare Google Cloud Run to both Google App Engine and Google Cloud Functions. The Cloud Run Quickstart: Build and Deploy seems like a good starting point.
My Application Default Credentials are too broad to use during development. I'd like to use a service account, but I struggle to configure one that can complete the quickstart without error.
The question:
What is the least privileged set of predefined roles I can assign to a service account that must execute these commands without errors:
gcloud builds submit --tag gcr.io/{PROJECT-ID}/helloworld
gcloud beta run deploy --image gcr.io/{PROJECT-ID}/helloworld
The first command fails with a (seemingly spurious) error when run via a service account with two roles: Cloud Build Service Account and Cloud Run Admin. I haven't run the second command.
Edit: the error is not spurious. The command builds the image and copies it to the project's container registry, then fails to print the build log to the console (insufficient permissions).
Edit: I ran the second command. It fails with Permission 'iam.serviceaccounts.actAs' denied on {service-account}. I could resolve this by assigning the Service Account User role. But that allows the deploy command to act as the project's runtime service account, which has the Editor role by default. Creating a service account with (effectively) both Viewer and Editor roles isn't much better than using my Application Default Credentials.
So I should change the runtime service account permissions. The Cloud Run Service Identity docs have this to say about least privileged access configuration:
This changes the permissions for all services in a project, as well
as Compute Engine and Google Kubernetes Engine instances. Therefore,
the minimum set of permissions must contain the permissions required
for Cloud Run, Compute Engine, and Google Kubernetes Engine in a
project.
Unfortunately, the docs don't say what those permissions are or which set of predefined roles covers them.
What I've done so far:
Use the dev console to create a new GCP project
Use the dev console to create a new service account with the Cloud Run Admin role
Use the dev console to create (and download) a key for the service account
Create (and activate) a gcloud configuration for the project
$ gcloud config list
[core]
account = {service-account-name}@{project-id}.iam.gserviceaccount.com
disable_usage_reporting = True
project = {project-id}
[run]
region = us-central1
Activate the service account using the downloaded key
Use the dev console to enable the Cloud Run API
Use the dev console to enable Container Registry→Settings→Container Analysis API
Create a sample application and Dockerfile as instructed by the quickstart documentation
Run gcloud builds submit --tag gcr.io/[PROJECT-ID]/helloworld
...fails due to missing cloud build permissions
Add the Cloud Build Editor role to the service account and resubmit the build
...fails due to missing storage permissions. I didn't pay careful attention to what was missing.
Add the Storage Object Admin role to the service account and resubmit the build
...fails due to missing storage bucket permissions
Replace the service account's Storage Object Admin role with the Storage Admin role and resubmit the build
...fails with
Error: (gcloud.builds.submit) HTTPError 403:
<?xml version='1.0' encoding='UTF-8'?>
<Error>
<Code>AccessDenied</Code>
<Message>Access denied.</Message>
<Details>
{service-account-name} does not have storage.objects.get access to
{number}.cloudbuild-logs.googleusercontent.com/log-{uuid}.txt.</Details>
</Error>
Examine the set of available roles and the project's automatically created service accounts. Realize that the Cloud Build Service Account role has many more permissions than the Cloud Build Editor role. This surprised me; the legacy Editor role has "Edit access to all resources".
Remove the Cloud Build Editor and Storage Admin roles from the service account
Add the Cloud Build Service Account role to the service account and resubmit the build
...fails with the same HTTP 403 error (missing get access for a log file)
Check Cloud Build→History in the dev console; find successful builds!
Check Container Registry→Images in the dev console; find images!
At this point I think I could finish Google Cloud Run Quickstart: Build and Deploy. But I don't want to proceed with (seemingly spurious) error messages in my build process.
Cloud Run PM here:
We can break this down into the two sets of permissions needed:
# build a container image
gcloud builds submit --tag gcr.io/{PROJECT_ID}/helloworld
You'll need:
Cloud Build Editor and Cloud Build Viewer (as per @wlhee)
# deploy a container image
gcloud beta run deploy --image gcr.io/{PROJECT_ID}/helloworld
You need to do two things:
Grant your service account the Cloud Run Deployer role (if you want to change the IAM policy, say to deploy the service publicly, you'll need Cloud Run Admin).
Follow the Additional Deployment Instructions to grant that service account the ability to act as the runtime service account
#1
gcloud projects add-iam-policy-binding PROJECT_ID \
--member="serviceAccount:{service-account-name}#{project-id}.iam.gserviceaccount.com" \
--role="roles/run.developer"
#2
gcloud iam service-accounts add-iam-policy-binding \
PROJECT_NUMBER-compute@developer.gserviceaccount.com \
--member="serviceAccount:{service-account-name}@{project-id}.iam.gserviceaccount.com" \
--role="roles/iam.serviceAccountUser"
EDIT: As noted, the latter grants your service account the ability to actAs the runtime service account. What role this service account has is dependent on what it needs to access: if the only thing Run/GKE/GCE accesses is GCS, then give it something like Storage Object Viewer instead of Editor. We are also working on per-service identities, so you can create a service account and "override" the default with something that has least-privilege.
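That override now exists as the --service-account flag on gcloud run deploy; a sketch, with a hypothetical least-privilege runtime account:
gcloud run deploy helloworld \
  --image gcr.io/{PROJECT_ID}/helloworld \
  --service-account=runtime-sa@{project-id}.iam.gserviceaccount.com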
According to https://cloud.google.com/cloud-build/docs/securing-builds/set-service-account-permissions
"Cloud Build Service Account" - Cloud Build executes your builds using a service account, a special Google account that executes builds on your behalf.
In order to call
gcloud builds submit --tag gcr.io/path
Edit:
please grant "Cloud Build Editor" and "Viewer" to the service account that starts the build; this is due to the current Cloud Build authorization model.
Sorry for the inconvenience.
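In role-ID terms that grant would look roughly like this (the role IDs are my reading of "Cloud Build Editor" and "Viewer"; the service-account email is a placeholder):
gcloud projects add-iam-policy-binding PROJECT_ID \
  --member="serviceAccount:builder@PROJECT_ID.iam.gserviceaccount.com" \
  --role="roles/cloudbuild.builds.editor"
gcloud projects add-iam-policy-binding PROJECT_ID \
  --member="serviceAccount:builder@PROJECT_ID.iam.gserviceaccount.com" \
  --role="roles/viewer"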

gcloud - no permissions for any API even though I am owner and works fine through web UI

I am the owner of my newly created organization. I created a project under this organization and linked it to the organization's billing account, where I have $1000 in credits. Through the web UI, I am able to spin up clusters, VMs, networks... But when I try to do so through gcloud, I get permission denied errors, e.g.:
$ gcloud compute networks list
API [compute.googleapis.com] not enabled on project [XXX].
Would you like to enable and retry (this will take a few minutes)?
(y/N)? y
ERROR: (gcloud.compute.networks.create) PERMISSION_DENIED: The caller does not have permission
but I can see in the GCP web UI that the API is clearly enabled (and can be used); it's just gcloud that won't let me work with it. The account in gcloud is exactly the same one I use in the web console, as validated by gcloud auth list and:
$ gcloud config configurations describe myproject
is_active: true
name: myproject
properties:
compute:
region: europe-west1
zone: europe-west1-b
core:
account: <my-email>
project: <the-project-I-want>
or
$ gcloud services list
ERROR: (gcloud.services.list) User [<myusername>] does not have permission to access project [myproject] (or it may not exist): The caller does not have permission
It works totally fine with a different account (and a different organization/projects), but I didn't set that one up myself. What should I do? Thanks a lot!
UPDATE:
After gcloud init, at least gcloud services list started to work, but the rest did not:
$ gcloud services list
NAME TITLE
bigquery-json.googleapis.com BigQuery API
cloudapis.googleapis.com Google Cloud APIs
clouddebugger.googleapis.com Stackdriver Debugger API
cloudtrace.googleapis.com Stackdriver Trace API
compute.googleapis.com Compute Engine API
container.googleapis.com Kubernetes Engine API
containerregistry.googleapis.com Container Registry API
datastore.googleapis.com Cloud Datastore API
logging.googleapis.com Stackdriver Logging API
monitoring.googleapis.com Stackdriver Monitoring API
oslogin.googleapis.com Cloud OS Login API
pubsub.googleapis.com Cloud Pub/Sub API
servicemanagement.googleapis.com Service Management API
serviceusage.googleapis.com Service Usage API
sql-component.googleapis.com Cloud SQL
storage-api.googleapis.com Google Cloud Storage JSON API
storage-component.googleapis.com Google Cloud Storage
$ gcloud compute networks create testing-net --subnet-mode=custom '--description=Network to host testing kubernetes cluster'
API [compute.googleapis.com] not enabled on project [{PROJECT_ID}].
Would you like to enable and retry (this will take a few minutes)?
(y/N)? y
ERROR: (gcloud.compute.networks.create) PERMISSION_DENIED: The caller does not have permission
^ the PROJECT_ID above shows my organization's ID, not the actual project under this org.
So the problem was that I had set the wrong project ID with gcloud config set project, and gcloud defaulted to the organization for some reason.
I had to find the correct project ID using gcloud projects list and then run gcloud config set project {PROJECT-ID} (the project ID, not the project name!).
Alternatively, gcloud init walks you through switching gcloud between projects and configures its settings to point at the right project.
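Putting that together, a short sketch of the fix (project IDs are placeholders):
$ gcloud projects list --format="table(projectId, name)"
$ gcloud config set project {PROJECT-ID}
$ gcloud compute networks list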