How do I know what service account is tied to which server? - google-cloud-platform

I see notifications about actions being performed by 123456-compute@service-account, but how do I know which servers performed these actions? When I search for 123456 in GCE, no servers come up.

You can use the gcloud tool to list the service accounts assigned to compute instances:
gcloud --format='table(name,serviceAccounts.email)' compute instances list
The output will look something like this:
NAME EMAIL
instance-1 [u'123456-compute@developer.gserviceaccount.com']
instance-2 [u'234567-compute@developer.gserviceaccount.com']
It may be that all instances run with the same service account. For example, Compute Engine instances created using the GCP Console or the gcloud tool have the default Compute Engine service account assigned.
In that case the service account won't help you identify which server performed an action.
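If your instances do run as distinct service accounts, you can narrow the instance list down to the account from the notification. A sketch using gcloud's --filter flag; the project number 123456 is a placeholder for the one in your notification:

```shell
# List only the instances that run as the given service account.
# Replace 123456 with the project number from your notification.
gcloud compute instances list \
  --filter="serviceAccounts.email=123456-compute@developer.gserviceaccount.com" \
  --format="table(name,zone,serviceAccounts.email)"
```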

Related

Deleted default Compute Engine service account prevents creation of GKE Autopilot Cluster

For some reason it seems my project no longer has a default Compute Engine service account. I might have deleted it some time ago and forgotten.
That's fine, as I usually assign specific service accounts when needed and rarely depend on the default one.
However, I am now trying to create an Autopilot GKE cluster, and I continue to get the annoying error:
Service account "1673******-compute@developer.gserviceaccount.com" does not exist.
In the advanced options there is no possibility to select another service account.
I have seen other answers on Stack Overflow about recreating the default account. I have tried those answers, as well as attempting to undelete the account. So far none of them have worked.
How can I do one of the following:
Create a new default Compute Engine service account
Tell GKE which service account to use when creating an Autopilot cluster
When creating your cluster, you just need to add this flag to specify your own service account:
--service-account=XXXXXXXX
For example:
gcloud beta container clusters create-auto "autopilot-cluster-1" \
  --project "xxxxxx" \
  --region "us-central1" \
  --release-channel "regular" \
  --network "projects/xxxxxxx/global/networks/default" \
  --subnetwork "projects/xxxxxx/regions/us-central1/subnetworks/default" \
  --cluster-ipv4-cidr "/17" \
  --services-ipv4-cidr "/22" \
  --service-account=xxxxxxxxxxxxx.iam.gserviceaccount.com

Dataproc cluster underlying VMs using default service account

I created a Dataproc cluster using a service account via a Terraform script. The cluster has 1 master and 2 workers, so three Compute Engine instances were created as part of the cluster creation. My question is:
Why do these VMs have the default service account? Shouldn't they use the same service account that I used to create the Dataproc cluster?
Edit: removed one question as suggested in a comment (the topic was becoming too broad).
The service account used by the cluster VMs is specified separately from the credentials used to create the cluster, so it must be set explicitly. If you are sure the VMs still use the default service account, it might be a mistake in the Terraform script; you can test with gcloud, without Terraform, to confirm.
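As a quick check outside Terraform, the gcloud equivalent sets the VM service account with the --service-account flag at cluster creation time. A sketch; the cluster name, region, project, and service account email below are placeholders:

```shell
# Create a Dataproc cluster whose VMs run as a specific service account.
# The VMs authenticate as this account, not as the identity that
# created the cluster.
gcloud dataproc clusters create my-cluster \
  --region=us-central1 \
  --service-account=my-dataproc-sa@my-project.iam.gserviceaccount.com

# Verify which service account the cluster VMs actually use:
gcloud dataproc clusters describe my-cluster \
  --region=us-central1 \
  --format="value(config.gceClusterConfig.serviceAccount)"
```

If the describe command prints the default compute service account instead of the one you passed, the problem is in how the account is being wired up, not in Dataproc itself.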

Billing in GCP: finding the SKU ID in the VM instance

I have checked the billing in GCP and I only find a service ID and a SKU ID as references. I can't even click on the SKU ID. Is there a more "direct" way to get to the product I used, if it's still running? I know there is a Billing API for this, but it does not lead me to the actual product.
If there is no link in the billing report, where can I see the SKU ID in, for example, my VM instance?
You can try to use labels to break down your billing charges per resource (per VM instance, for example). Have a look at the documentation Creating and managing labels:
A label is a key-value pair that helps you organize your Google Cloud instances. You can attach a label to each resource, then filter the resources based on their labels. Information about labels is forwarded to the billing system, so you can break down your billing charges by label.
You can create labels using the Resource Manager API, the Cloud Console, or the gcloud command. For GCE resources, follow the documentation Labeling resources, for example:
$ gcloud compute instances create example-instance --image-family=rhel-8 --image-project=rhel-cloud --zone=us-central1-a --labels=k0=value1,k1=value2
$ gcloud compute instances add-labels example-instance --labels=k0=value1,k1=value2
$ gcloud compute instances update example-instance --zone=us-central1-a --update-labels=k0=value1,k1=value2 --remove-labels=k3
$ gcloud compute instances remove-labels example-instance --labels=k0,k1
In addition, have a look at the documentation View your billing reports and cost trends and Export Cloud Billing data to BigQuery.
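Once billing data is exported to BigQuery, the labels can be used directly in a query to break down cost. A sketch, assuming an export already exists; the dataset name billing_export and table name gcp_billing_export_v1_XXXXXX are placeholders for whatever your own export created:

```shell
# Sum costs per value of the "k0" label using the BigQuery billing export.
# Replace the project, dataset, and table with the names from your export.
bq query --use_legacy_sql=false '
SELECT
  labels.value AS label_value,
  SUM(cost) AS total_cost
FROM
  `my-project.billing_export.gcp_billing_export_v1_XXXXXX`,
  UNNEST(labels) AS labels
WHERE labels.key = "k0"
GROUP BY label_value
ORDER BY total_cost DESC'
```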

Compute instances got deleted after hours of 100% CPU usage

We noticed that multiple compute instances were deleted at the same time after hours of 100% CPU usage. Because of this deletion, the hours of computation were lost.
Can anyone tell us why they were deleted?
I have created a gist with the only log we could find in Stackdriver logging around the time of deletion.
The log files show the following pieces of information:
The deleter's source IP address 34.89.101.139. Check if this matches the public IP address of the instance that was deleted. This IP address is within Google Cloud.
The User-Agent specifies that the Google Cloud SDK CLI gcloud is the program that deleted the instance.
The Compute Engine Default Service Account provided the permissions to delete the instance.
In summary, a person or script ran the CLI and deleted the instance using your project's Compute Engine Default Service Account key from a Google Cloud Compute service.
Future Suggestions:
Remove the permission to delete instances from the Compute Engine Default Service Account or (better) create a new service account that only has the required permissions for this instance.
Do not share service accounts in different Compute Engine instances.
Create separate SSH keys for each user that can SSH into the instance.
Enable Stackdriver logging of the SSH Server auth.log file. You will then know who logged into the instance.
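The first two suggestions can be sketched with gcloud as follows; the project, instance, zone, and account names are placeholders, and the granted role is just an example of a minimal permission, to be adjusted to what the workload actually needs:

```shell
# Create a dedicated service account for this one instance.
gcloud iam service-accounts create instance-1-sa \
  --display-name="instance-1 minimal service account"

# Grant only what the workload needs (here: writing logs).
# Note there is no instances.delete permission in this role.
gcloud projects add-iam-policy-binding my-project \
  --member="serviceAccount:instance-1-sa@my-project.iam.gserviceaccount.com" \
  --role="roles/logging.logWriter"

# Attach it to the instance in place of the default service account
# (the instance must be stopped to change its service account).
gcloud compute instances stop instance-1 --zone=us-central1-a
gcloud compute instances set-service-account instance-1 \
  --zone=us-central1-a \
  --service-account=instance-1-sa@my-project.iam.gserviceaccount.com
gcloud compute instances start instance-1 --zone=us-central1-a
```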

Service Account does not exists on GCP

While trying Google's Kubernetes solution for the first time, following the tutorial, I am trying to create a new cluster.
But after pressing Create I receive
The request contains invalid arguments: "EXTERNAL: service account "****@developer.gserviceaccount.com" does not exist.". Error code: "7"
in a red circle near the Kubernetes cluster name.
After some investigation, it looks like the problem is the default service account that Google generated for my account.
I've looked over the create-cluster options, but there isn't any option to change the service account.
Do I need to recreate the Google Compute Engine default service account? How can I do that?
How can I overcome this issue?
Thank you
The default Compute Engine service account is essential for functions related to Compute Engine and is generated automatically. Kubernetes Engine uses Compute Engine VM instances as the nodes of the cluster, and GKE uses the Compute Engine service account to authorize the creation of those nodes.
In order to regenerate the default service account, there are two options:
1. Regenerate it by disabling and re-enabling the Google Compute Engine API in the "APIs & Services" dashboard. If you encounter errors when disabling the API, try option 2.
2. Run the command gcloud services enable compute.googleapis.com in the Cloud SDK, or in Cloud Shell (opened from the header of the page).
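Option 1 can also be done entirely from the command line. A sketch; be aware that disabling the Compute Engine API affects all Compute Engine resources in the project, so only do this in a project where that is safe:

```shell
# Disabling and re-enabling the Compute Engine API recreates the
# default Compute Engine service account.
# WARNING: disabling the API disrupts Compute Engine resources
# in the project.
gcloud services disable compute.googleapis.com --force
gcloud services enable compute.googleapis.com

# Confirm the default service account exists again:
gcloud iam service-accounts list \
  --filter="email ~ -compute@developer.gserviceaccount.com"
```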
It looks like you either do not have any default service account or have more than one.
Go to the "Service Accounts" section under "IAM & Admin", find the App Engine default service account, and pass it as an argument while creating the cluster from gcloud or Cloud Shell, as below:
gcloud container clusters create my-cluster --zone=us-west1-b --machine-type=n1-standard-1 --disk-size=100 --service-account=abc@appspot.gserviceaccount.com
To initialize GKE, open the Kubernetes Engine page in the GCP Console and wait for the "Kubernetes Engine is getting ready. This may take a minute or more" message to disappear.