I'm trying to upload a file from a Google Cloud VM into a cloud storage bucket.
As expected, it fails because the service account associated with the VM doesn't have the required permissions:
$ gsutil cp file.png gs://bucket/
Copying file://file.png [Content-Type=image/png]...
AccessDeniedException: 403 Insufficient Permission
From what I understand there are two ways to fix this:
modify the scopes from the VM web admin panel
change the permissions of the bucket and add the service account with write access (I'd prefer this because the other option seems to give access to all buckets in the same project)
However, it seems that both solutions require the VM to be stopped, which is problematic as it is a production server.
Is there any way to fix this without stopping the VM?
There are two methods of controlling permissions granted to a Compute Engine VM.
Access Scopes
Service Account assigned to the instance.
Both of these methods work together. The total permissions available to a Compute Engine instance are controlled by the IAM roles granted to the service account; Access Scopes then limit the permissions actually available to the VM.
You must shut down a VM to change its Access Scopes. Changing the service account's IAM roles does not require rebooting the VM.
For this question regarding Cloud Storage Access:
If the service account has a Cloud Storage role granting access to Cloud Storage, but the Access Scope for Storage is set to None, then the VM will not have access to Cloud Storage even though the service account has the required role. In this case you must shut down the VM to change the Access Scope to enable access to Cloud Storage.
If the VM Access Scope has Storage enabled, but the service account does not have a Cloud Storage role, the VM will not be able to access Cloud Storage. In this case, adding a Cloud Storage role to the service account will grant access to Cloud Storage without requiring a VM reboot.
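As a sketch of that second case (the bucket name and service account email below are placeholders), a bucket-level grant that takes effect without any reboot could look like this:
# Grant the service account object-admin rights on one bucket only; no VM restart needed.
gsutil iam ch serviceAccount:my-sa@my-project.iam.gserviceaccount.com:objectAdmin gs://bucket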
Access Scopes (OAuth Scopes) are a legacy mechanism that existed prior to Google Cloud IAM. Given that you are using this VM in a production environment and shutting down the instance is not desired, I recommend the following:
Set the VM Access Scopes to "Allow full access to all Cloud APIs".
Create a new service account with the required roles and assign that service account to the Compute Engine VM instance.
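A sketch of both steps with the gcloud CLI (the project, instance, zone, and account names are placeholders):
# Create a dedicated service account and grant it only the roles it needs.
gcloud iam service-accounts create vm-storage-sa
gcloud projects add-iam-policy-binding my-project \
    --member="serviceAccount:vm-storage-sa@my-project.iam.gserviceaccount.com" \
    --role="roles/storage.objectAdmin"
# Attach it with the full-access scope; the instance must be stopped for this step.
gcloud compute instances set-service-account my-vm --zone=us-central1-a \
    --service-account=vm-storage-sa@my-project.iam.gserviceaccount.com \
    --scopes=cloud-platform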
I am using the spring-cloud-gcp library, running in Google Kubernetes Engine, to access Google Cloud Storage buckets. It should be using the default credentials of the Kubernetes Engine service account.
Creating buckets, or creating files inside existing buckets, fails with 403 Access denied / forbidden.
Storage access works fine when the application runs locally with a different user account, using credentials pointed to by spring.cloud.gcp.credentials.location. Both the Compute Engine service account and the user account have Editor permissions.
As the documentation explains, the Spring Cloud GCP starter should auto-configure the credentials; by default, the Compute Engine service account should be used.
Enabling Workload Identity on the cluster does not change anything.
So how could I debug this?
In the process, I would also like to verify which account is used for the storage access.
Although I've given the Owner role to that specific service account, I can't use its permissions from the instances I connect to with SSH from my local machine.
I also can't upload my files to the Storage bucket which I've created in the cloud platform.
Here are the screenshots of the problem:
The problem might be caused by the access token not having the appropriate permission scopes to conduct the required activity. To make sure you're using the auth scope of this service account appropriately, I recommend doing the following:
Run the command in the Google documentation inside the VM to create a new key for the service account. This will create a .json file inside the current directory containing the private authentication key for the service account.
Run the command in the Google documentation to activate the service account.
Run the command $ gcloud auth list to check if this worked. In the output you should see an asterisk before the service account's name, indicating that this is the service account you are currently using.
Now refer to the Google documentation and run $env:GOOGLE_APPLICATION_CREDENTIALS="KEY_PATH" to point your application at the key.
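Put together, the sequence might look like this (the service account email and key path are placeholders; the last line uses PowerShell syntax, as in the step above):
# Create a key for the service account; writes key.json to the current directory.
gcloud iam service-accounts keys create key.json \
    --iam-account=my-sa@my-project.iam.gserviceaccount.com
# Activate the service account using that key.
gcloud auth activate-service-account my-sa@my-project.iam.gserviceaccount.com \
    --key-file=key.json
# Verify: the active account is marked with an asterisk.
gcloud auth list
# Point Application Default Credentials at the key (PowerShell).
$env:GOOGLE_APPLICATION_CREDENTIALS="key.json"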
Google Cloud Compute VMs have a setting for Access Scopes. This feature can limit the permissions that a service account has when attached to a virtual machine.
Go to the Google Cloud Console GUI, select your VM, stop the VM and then edit Access Scopes to grant the permissions you require.
Access scopes
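To check which scopes the instance currently has before editing them, something like this should work (the instance name and zone are placeholders):
# Print the attached service account and its access scopes.
gcloud compute instances describe my-vm --zone=us-central1-a \
    --format="yaml(serviceAccounts)"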
I am using Ubuntu in a VM on Google cloud.
I have a .sh script which backs up files to a bucket. When I attempt to run the script, it throws me an error:
AccessDeniedException: 403 blank#blank.iam.gserviceaccount.com does not have storage.objects.list access to the Google Cloud Storage bucket.
I gave the service account admin permissions for storage. The account is activated and everything. How do I fix this?
Since you are executing the script from the Ubuntu VM, and Compute Engine VMs usually have a read-only access scope for Storage by default, this might be what is blocking the upload of the backup file to the GCS bucket.
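You can confirm this from inside the VM by asking the metadata server which scopes the instance actually has:
# List the access scopes of the VM's attached service account.
curl -H "Metadata-Flavor: Google" \
    "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/scopes"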
To change an instance's service account and access scopes, the instance must be temporarily stopped. To stop your instance, read the documentation for Stopping an instance. After changing the service account or access scopes, remember to restart the instance. Use one of the following methods to change the service account or access scopes of the stopped instance.
You can also change the access scope using the gcloud command:
gcloud compute instances set-service-account [INSTANCE_NAME] \
[--service-account [SERVICE_ACCOUNT_EMAIL] | --no-service-account] \
[--no-scopes | --scopes [SCOPES,...]]
Once your instance is turned off, you can set the access scope for Storage to Full. This should work for you, since you have already assigned the Storage Admin role to the service account.
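For example (the instance name, zone, and service account email are placeholders), widening only the Storage scope while the instance is stopped:
# storage-full is the gcloud alias for the full-control Cloud Storage scope.
gcloud compute instances set-service-account my-vm --zone=us-central1-a \
    --service-account=my-sa@my-project.iam.gserviceaccount.com \
    --scopes=storage-full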
I'm using Google Secrets to store API keys and other application specific "secrets". When I deploy my application to a VM instance using a docker container, I want it to access the Google Secrets using the VM's associated service account.
I have gotten this working using the following:
Assigning the "Secret Manager Secret Accessor" permission to the Service Account.
Giving my VM access to all APIs.
From a security perspective and recommended best practice, I don't want to give access to all APIs. The default access option doesn't work and I can't figure out from the list which options to enable to allow access to Google Secrets from my VM.
TLDR - Which Cloud API(s) do I need to give my Google Compute VM instance access to so that it can access Google Secrets without giving it access to all of the Cloud APIs?
According to the Secret Manager documentation, you need the cloud-platform OAuth scope.
Note: To access the Secret Manager API from within a Compute Engine instance or a Google Kubernetes Engine node (which is also a Compute Engine instance), the instance must have the cloud-platform OAuth scope. For more information about access scopes in Compute Engine, see Service account permissions in the Compute Engine documentation.
I'm not sure you can set this scope in the web UI, though you can set it through the command line.
Possible Alternative
What I do, rather than setting scopes on VMs, is create a service account specifically for the VMs (instead of using the default service account) and then give this service account access to specific resources (such as the specific Secrets it should have access to, rather than all of them). When you do this in the web UI, the access scopes disappear and you are instructed to use IAM roles to control VM access.
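A sketch of granting per-secret access (the secret name and service account email are placeholders):
# Grant the VM's service account access to one specific secret only.
gcloud secrets add-iam-policy-binding my-api-key \
    --member="serviceAccount:vm-sa@my-project.iam.gserviceaccount.com" \
    --role="roles/secretmanager.secretAccessor"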
The default service account does not have access to Cloud SQL and has only read-only access to Storage.
I tried adding the Cloud SQL Admin and Storage Admin roles to the default service account, but that does not seem to work.
I know it can be solved by using another service account that has these permissions and using that when creating the compute instance.
I am just curious to know why updating the permissions of the default Compute Engine service account does not work.
It seems that updating the permissions on the Compute Engine default service account is not enough to set the correct level of access you are trying to give to your Compute Engine instance, since, as described here:
When you set up an instance to run as a service account, the level of access the service account has is determined by the combination of access scopes granted to the instance and IAM roles granted to the service account.
From my understanding you are only granting IAM roles to the service account, so, in order to give the desired access level, you should also update the Access scopes for your Compute Engine instance.
When you create a new Compute Engine instance, "Allow default access" is selected by default under Access scopes. This default access has Cloud SQL access disabled and Cloud Storage access set to read-only.
You can refer to this documentation which explains how to change the access scopes for a Compute Engine instance:
To change an instance's service account and access scopes, the instance must be temporarily stopped. To stop your instance, read the documentation for Stopping an instance. After changing the service account or access scopes, remember to restart the instance.
Once you stop your instance, you can change the Access scopes to either "Set access for each API" or to "Allow full access to all Cloud APIs".
If you choose to set access for each API, you will have to search for "Cloud SQL" and select "Enabled", and also for "Storage" and select the desired option (Read Only, Write Only, Read Write, Full).
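The CLI equivalent of those steps might look like this (the instance name, zone, and service account email are placeholders; sql-admin and storage-rw are the gcloud aliases for the Cloud SQL and Storage read-write scopes):
gcloud compute instances stop my-vm --zone=us-central1-a
gcloud compute instances set-service-account my-vm --zone=us-central1-a \
    --service-account=123456789-compute@developer.gserviceaccount.com \
    --scopes=sql-admin,storage-rw
gcloud compute instances start my-vm --zone=us-central1-a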
For more information on Access Scopes please refer to this doc and for more information on running Compute Engine instances as service account (including the default service account) please see this doc.
In the Cloud IAM admin page you have to select your default service account by hitting the pen icon to the right; a sidebar will then pop up where you can assign the following roles: Cloud SQL Admin, Cloud SQL Client, Cloud SQL Editor, Cloud SQL Viewer. Its default role is Editor.
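The same role assignment from the command line might look like this (the project and service account are placeholders):
# Grant Cloud SQL Client to the default Compute Engine service account.
gcloud projects add-iam-policy-binding my-project \
    --member="serviceAccount:123456789-compute@developer.gserviceaccount.com" \
    --role="roles/cloudsql.client"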