Enable storage permission on Google Cloud VM instance

I have a Google Cloud VM instance, but I overlooked setting the Storage permission to read-write when creating it.
Now further down the line, I'm looking to experiment with cloud storage, but my instance is read-only.
How can this be changed? I understand it may be possible by relaxing the storage bucket policies, but I'd prefer that my instance have write access to all future project buckets.
I presume there is an option in gcloud to change the devstorage parameter?

So, you can't change that option to grant the VM the permission, but I did find that you can just run
gcloud auth login
and sign in with your management account from the VM; you'll then be able to run storage commands under those credentials.
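A minimal sketch of that workaround (the bucket name is hypothetical); after logging in, storage commands run with your user's credentials rather than the instance's read-only scope:

# Authenticate as your own user account on the VM; this prints a
# login URL and stores the resulting credential locally.
gcloud auth login

# Storage commands now use the user credentials instead of the
# instance's access scopes.
gsutil cp myfile.txt gs://my-project-bucket/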

Unfortunately you can't change the scopes of an existing VM; you will have to create a new one to change them.
When you create the new one you can reuse the disk of the old VM, which helps avoid most of the pain.
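If it helps, one way that recreation can look, assuming the boot disk carries the instance's name (all names and the zone are hypothetical):

# Delete the old instance but keep its boot disk.
gcloud compute instances delete old-vm --zone=us-central1-a --keep-disks=boot

# Recreate the VM on the same disk, this time with the read-write
# storage scope (storage-rw is an alias for devstorage.read_write).
gcloud compute instances create new-vm \
    --zone=us-central1-a \
    --disk=name=old-vm,boot=yes \
    --scopes=storage-rw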

You can now update the permissions of your VM instance, but only while it is shut down. Check the following documentation:
Changing the service account and access scopes for an instance
If you want to update the API permissions for the VM instances of a Kubernetes cluster, you cannot do that without creating a new cluster and granting API access to the nodes associated with it.
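Roughly, the flow from that documentation looks like this (instance name and zone are hypothetical):

# Access scopes can only be changed while the instance is stopped.
gcloud compute instances stop my-instance --zone=us-central1-a

# Grant read-write access to Cloud Storage (pass --service-account
# too if you need to pin a specific service account).
gcloud compute instances set-service-account my-instance \
    --zone=us-central1-a \
    --scopes=storage-rw

gcloud compute instances start my-instance --zone=us-central1-a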

I believe they have since added the option to change this without creating another VM.
Once you have stopped the instance, click on the instance you want to change. At the top there is an Edit button; click it and you can change the permissions.
Hope the image helps:
(screenshot: the Edit button on the instance details page)
If you have changed the permission to read_write and it still says Access Denied, go into your instance's SSH browser window, enter gcloud auth login, and follow the steps; hopefully that works!
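A quick write test to confirm the new scope took effect (bucket name hypothetical):

# Upload a throwaway object; success means write access is working.
echo probe > /tmp/probe.txt
gsutil cp /tmp/probe.txt gs://my-project-bucket/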

You need to stop your VM first, then click on Edit and change the Cloud API access scopes for Storage.
You may find more information here: https://ismailyenigul.medium.com/setting-access-scope-of-google-cloud-vm-instances-c8637718f453

Related

How do I protect access to the data on my Google Compute Engine VM?

I want to work with sensitive data on a specific VM instance in a GCP project that my organization has contracted, and the data should not be seen by other members.
Usually, if I just set up a VM instance, other members of my organization are free to create a user, connect to the VM with SSH, and get sudo privileges.
So I'm wondering whether I should keep sensitive data inside the VM at all.
Q. Is there a way to prevent other users from accessing the data on my VM instance?
Q. Is OS Login suitable for the above purpose? If there is a simpler, more typical method, I would like to adopt it.
I currently have the role of "editor" on the GCP project.
Thanks.
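For reference, OS Login can be enabled per instance via a metadata key, and SSH access can then be granted per user through instance-level IAM; a minimal sketch (instance, zone, and user are hypothetical). Note that members holding broad project roles such as editor can generally still reach the instance, so narrowing those roles matters as much as OS Login does.

# Turn on OS Login for one instance (it can also be set project-wide).
gcloud compute instances add-metadata my-instance \
    --zone=us-central1-a \
    --metadata=enable-oslogin=TRUE

# Grant SSH without sudo to a single user; roles/compute.osAdminLogin
# would include sudo, so withhold it from anyone who must not see the data.
gcloud compute instances add-iam-policy-binding my-instance \
    --zone=us-central1-a \
    --member="user:colleague@example.com" \
    --role="roles/compute.osLogin"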

GCP IAM: set of permissions for GKE cluster node resizing

We are in the process of improving the IAM roles in our project, and we need to enable the dev team to resize only their cluster, to save cost.
We are struggling to find the exact set of permissions needed to let a user only scale cluster nodes up and down (i.e. resizing). We consulted the GCP IAM documentation below, but it didn't help us find this information either.
https://cloud.google.com/iam/docs/permissions-reference
Currently we have granted the set of permissions below (some of them may not be required), but we are still not able to resize the cluster. One more issue: GKE does not raise any permission error; we see the "Node Pool Resized Successfully" notification, but the node pool size doesn't change.
Is there any documentation or link that maps sets of GCP IAM permissions to types of user activity?
The GKE cluster is created with the permissions set in the 'Access scopes' section of the 'Advanced edit' tab, so only the APIs with access enabled in that section will show as enabled. These permissions denote the type and level of API access granted to the VMs in the node pool: scopes determine the level of access your cluster nodes have to specific GCP services as a whole. Please see this link for more information about access scopes.
In the 'Create a Kubernetes cluster' tab, click 'Advanced edit'. Another tab called 'Edit node pool' then pops up with more options; if you click 'Set access for each API', you will see the option to set these permissions.
Permissions are defined when the cluster is created; you cannot edit them directly on the cluster afterwards. You may want to create a new cluster with the appropriate permissions, or create a new node pool with the new scopes you need and then delete your old 'default' node pool, as specified in this link.
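A rough sketch of that node pool swap (cluster name, pool names, zone, and scopes are hypothetical):

# Create a replacement pool with the scopes you actually need.
gcloud container node-pools create new-pool \
    --cluster=my-cluster \
    --zone=us-central1-a \
    --num-nodes=3 \
    --scopes=gke-default,storage-rw

# Once workloads have drained over, remove the old pool.
gcloud container node-pools delete default-pool \
    --cluster=my-cluster \
    --zone=us-central1-a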
When you add or remove nodes in your cluster, Google Kubernetes Engine (GKE) adds or removes the associated virtual machine (VM) instances from the underlying Compute Engine Managed Instance Groups (MIGs) provisioned for your node pools.
Please see this link for more information about resizing the cluster.
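For the resize itself, the command is shown below. As a hedged guess at the IAM side, resizing usually comes down to container.clusters.update (plus container.clusters.get to read the cluster); a custom role carrying just those could be tried, but verify the permission list against the IAM reference before relying on it. All names are hypothetical.

# Resize a node pool.
gcloud container clusters resize my-cluster \
    --node-pool=default-pool \
    --num-nodes=5 \
    --zone=us-central1-a

# A guess at a minimal custom role for resize-only access.
gcloud iam roles create clusterResizer \
    --project=my-project \
    --title="GKE cluster resizer" \
    --permissions=container.clusters.get,container.clusters.update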

Terraform Google Cloud: Executing a Remote Script on a VM

I'm trying to execute a script on a Google VM through Terraform.
First I tried Google startup scripts. But since startup scripts count as metadata and metadata is visible in the Google Console, anybody with read access could see the script, which is not acceptable.
So I tried to fetch the script from a Cloud Storage bucket. But for that I need to attach a service account to the VM so the VM has the rights to access the bucket. Then anyone with access to the VM also has access to my script for as long as the service account is attached. To "detach" the service account I would have to stop the VM, and if I don't want to keep the attachment permanently, I would have to attach the service account via another script, which requires yet another stop and start of the VM. This is probably not possible and is also really ugly.
I also don't understand how the remote-exec provisioner works on GCP VMs, because I have to specify a user and password to connect to the VM and then execute the script, but the Windows password needs to be set manually via the Google Console, so I can't supply those at this point in time.
So, does anybody know how I can execute a script via Terraform without everybody having access to it?
Greetings :) and thanks in advance
I ended up just running a gcloud script that removed the metadata from the VM after the Terraform apply finished. In my GitLab pipeline I simply call the script in the after_script section. Unfortunately the credentials are still visible for approximately 3 minutes.
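The cleanup step was essentially the following (instance name and zone are hypothetical):

# Strip the startup script from the instance metadata once
# provisioning is done, so it is no longer readable in the console.
gcloud compute instances remove-metadata my-vm \
    --zone=us-central1-a \
    --keys=startup-script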

Compute instances got deleted after hours of 100% CPU usage

We noticed that multiple compute instances got deleted at the same time after hours of 100% CPU usage. Because of this deletion, the hours of computation were lost.
Can anyone tell us why they got deleted?
I have created a gist with the only log we could find in Stackdriver logging around the time of deletion.
The log files show the following pieces of information:
The deleter's source IP address 34.89.101.139. Check if this matches the public IP address of the instance that was deleted. This IP address is within Google Cloud.
The User-Agent specifies that the Google Cloud SDK CLI gcloud is the program that deleted the instance.
The Compute Engine Default Service Account provided the permissions to delete the instance.
In summary, a person or script ran the CLI and deleted the instance using your project's Compute Engine Default Service Account key from a Google Cloud Compute service.
Future Suggestions:
Remove the permission to delete instances from the Compute Engine Default Service Account, or (better) create a new service account that has only the permissions this instance requires (see the sketch after this list).
Do not share service accounts in different Compute Engine instances.
Create separate SSH keys for each user that can SSH into the instance.
Enable Stackdriver logging of the SSH Server auth.log file. You will then know who logged into the instance.
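A sketch of the dedicated-service-account suggestion (all names are hypothetical; pick roles to match what the instance actually does):

# Create a dedicated service account for this instance.
gcloud iam service-accounts create vm-minimal \
    --display-name="Minimal instance service account"

# Grant a narrow role; deliberately omit anything that allows
# compute.instances.delete.
gcloud projects add-iam-policy-binding my-project \
    --member="serviceAccount:vm-minimal@my-project.iam.gserviceaccount.com" \
    --role="roles/logging.logWriter"

# Attach it to the instance (requires a stop/start cycle).
gcloud compute instances stop my-instance --zone=us-central1-a
gcloud compute instances set-service-account my-instance \
    --zone=us-central1-a \
    --service-account=vm-minimal@my-project.iam.gserviceaccount.com
gcloud compute instances start my-instance --zone=us-central1-a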

How to choose permissions for a Google Container Engine cluster?

I'm trying to set up a GKE cluster and I want to enable all permissions to the other services (since apparently you can't change the permissions after the cluster has been created). This ought to be straightforward, but either I'm doing something wrong or something is broken. I select the following for my project access:
(screenshot: every access scope enabled)
But when the cluster is created I see this:
(screenshot: every scope shown as disabled)
I.e. everything is disabled. Why is this? How do I set the permissions?
There was a bug in the UI that showed all scopes as disabled. I just created a new cluster, and the UI now shows the correct scopes.
If this happens again, you can also see the scopes that are enabled on your VMs using the command line by running gcloud container clusters describe NAME --zone=europe-west1-c and looking at the scopes under oauthScopes:.
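To print just the scopes rather than the full cluster description, a format filter can help (the field path is an assumption based on the describe output):

# Show only the node OAuth scopes.
gcloud container clusters describe NAME \
    --zone=europe-west1-c \
    --format="value(nodeConfig.oauthScopes)"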