I'm trying to find an easy way to periodically check for unattached disks across all my projects.
At the moment I just open the Cloud Shell terminal in GCP and run the following command:
gcloud compute disks list --filter="-users:*"
Is there a way to run this across all projects and then output to csv file?
Stack Overflow encourages questions where an attempt to solve the problem yourself is documented.
If you Google gcloud csv, the results include several explanations of how to do this.
Google documents gcloud topic formats and this includes csv.
To understand the shape of Google's resources, check out APIs Explorer; it documents every Google service. For Compute Engine, you can find the API method equivalent to gcloud compute disks list by appending --log-http and observing which API calls are made: it's disks.list. Its response body describes the output of the gcloud compute disks list command, i.e. the fields you can select with --format.
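For example, a minimal sketch (the extra columns are fields taken from the disks.list response body and may need adjusting to your needs):
# Unattached disks with a few extra fields, as CSV
gcloud compute disks list \
  --filter="-users:*" \
  --format="csv(name,sizeGb,zone.basename(),lastDetachTimestamp)"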
To run the command for all Google Projects accessible to your credentials, you must enumerate the Projects. Unfortunately, if the Compute Engine service is not enabled in a project, gcloud prompts to enable it. To avoid that, I've added a check.
# List Projects accessible to these credentials
PROJECTS=$(\
  gcloud projects list \
  --format="value(projectId)")

# Iterate over each Project
for PROJECT in ${PROJECTS}
do
  echo "Project: ${PROJECT}"
  # Check Compute Engine service
  ENABLED="$(\
    gcloud services list \
    --project=${PROJECT} \
    --filter=config.name=compute.googleapis.com \
    --format='value(state)')"
  # Is it enabled?
  if [ "${ENABLED}" = "ENABLED" ]
  then
    # Enumerate Disks that have no `users` (i.e. unattached) and output `name`
    gcloud compute disks list \
      --project=${PROJECT} \
      --filter="-users:*" \
      --format="csv(name)"
  fi
done
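To cover the CSV-to-file part of the question, here is a minimal sketch that writes one combined CSV with a project column. The column choice, the csv[no-heading] projection, and the sed prefixing are my own additions, and the Compute Engine check is omitted for brevity:
# Write a single CSV with a leading project column
echo "project,name,sizeGb,zone" > unattached-disks.csv
for PROJECT in ${PROJECTS}
do
  gcloud compute disks list \
    --project=${PROJECT} \
    --filter="-users:*" \
    --format="csv[no-heading](name,sizeGb,zone.basename())" \
    | sed "s/^/${PROJECT},/" >> unattached-disks.csv
done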
When I do something in GCP console (by clicking in GUI), I imagine some gcloud command is executed underneath. Is it possible to view this command?
(I created a notebooks instance on Vertex AI and wanted to know what exactly I should put after gcloud notebooks instances create... to get the same result)
I think it's not possible to view the gcloud command that corresponds to a GUI action.
Instead, you can test your gcloud command by creating another instance alongside the current one with all the needed parameters.
When the two instances are the same, you know that your gcloud command is ready.
The documentation seems to be clear and complete for this:
https://cloud.google.com/vertex-ai/docs/workbench/user-managed/create-new#gcloud
If it's possible for you, you can also think about Terraform to automate this creation for you, with state management.
Try this for a Python-based User-Managed Notebook (the GUI version of the Python instance uses the base image as the boot disk, which does not contain Python; the Python suite is installed explicitly via Metadata parameters):
export NETWORK_URI="NETWORK URI"
export SUBNET_URI="SUBNET URI"
export INSTANCE_NAME="instance-name-of-your-liking"
export VM_IMAGE_PROJECT="deeplearning-platform-release"
export VM_IMAGE_FAMILY="common-cpu-notebooks-debian-10"
export MACHINE_TYPE="n1-standard-4"
export LOCATION="europe-west3-b"
gcloud notebooks instances create $INSTANCE_NAME \
--no-public-ip \
--vm-image-project=$VM_IMAGE_PROJECT \
--vm-image-family=$VM_IMAGE_FAMILY \
--machine-type=$MACHINE_TYPE \
--location=$LOCATION \
--network=$NETWORK_URI \
--subnet=$SUBNET_URI \
--metadata=framework=NumPy/SciPy/scikit-learn,report-system-health=true,proxy-mode=service_account,shutdown-script=/opt/deeplearning/bin/shutdown_script.sh,notebooks-api=PROD,enable-guest-attributes=TRUE
To get a list of Network URIs in your project:
gcloud compute networks list --uri
To get a list of Subnet URIs in your project:
gcloud compute networks subnets list --uri
Put the corresponding URIs in between the quotation marks in the first two variables:
export NETWORK_URI="NETWORK URI"
export SUBNET_URI="SUBNET URI"
Name the instance (keep the quotation marks):
export INSTANCE_NAME="instance-name-of-your-liking"
When done, copy-paste the complete block into your Google Cloud Shell (assuming you are in the correct project).
To additionally enable Secure Boot (which is a tick box in the GUI setup):
gcloud compute instances stop $INSTANCE_NAME
gcloud compute instances update $INSTANCE_NAME --shielded-secure-boot
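As a small addition of my own (not part of the original answer), you can restart the instance and verify the flag on the underlying VM. The explicit --zone is an assumption in case no default zone is configured:
# Assumption: the notebook VM lives in the zone $LOCATION
gcloud compute instances start $INSTANCE_NAME --zone=$LOCATION
# Verify Secure Boot is now enabled on the underlying VM (prints True)
gcloud compute instances describe $INSTANCE_NAME \
  --zone=$LOCATION \
  --format="value(shieldedInstanceConfig.enableSecureBoot)"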
Hope it works for you, as it does for me.
We have 600+ projects which were created manually, and I want to get a list of cross-project service accounts in those projects. The command
gcloud iam service-accounts list --project=myproject
just shows all the service accounts in a single project, and I believe some service accounts have permissions in several projects. I have checked gcloud alpha and gcloud beta; it seems there is no such functionality.
Any help will be appreciated!
There are better ways to do this!
You want to use an account that has sufficient access (all the projects and their IAM policies).
Enumerate all the projects. For each project, enumerate its IAM policy. Identify Service Accounts (members: serviceAccount) that have an email address of the form {anything}@{project}.iam.gserviceaccount.com. List the values of {project} for this project.
Because of the inherent complexity, I think this would benefit from being written in a language other than the shell. But, for convenience, here's a (hacky) Bash script:
PROJECTS=$(gcloud projects list --format="value(projectId)")

for PROJECT in ${PROJECTS}
do
  printf "\nProject: %s\n" "${PROJECT}"
  gcloud projects get-iam-policy ${PROJECT} \
    --flatten="bindings[].members[]" \
    --filter="bindings.members:serviceAccount*" \
    --format="value(bindings.members)" \
    | grep -E -o "[a-z][-a-z0-9]{4,28}[a-z0-9]@[a-z][-a-z0-9]{4,28}[a-z0-9].iam.gserviceaccount.com" \
    | grep -E -wv "service-[0-9]{12}@[a-z0-9][a-z0-9-]{4,28}[a-z0-9].iam.gserviceaccount.com" \
    | grep -E -wv "@${PROJECT}.iam.gserviceaccount.com"
done
This:
Gets the policy of each accessible (!) project
Flattens the results so they can be filtered by serviceAccount
Outputs only the (Service Account) members
Filters (grep) those that are {foo}@{bar}.iam.gserviceaccount.com
Filters out (grep -wv) those that are Google-managed (service-[0-9]{12}@...)
Filters out (grep -wv) those that are owned by the current project
NOTE [a-z][-a-z0-9]{4,28}[a-z0-9] matches Google Project and Service Account IDs (I think!)
The result may still include Google-managed accounts (accounts in other *.gserviceaccount.com domains). One way to exclude these would be to only include Service Accounts that have one of your projects in the domain ({project}.iam.gserviceaccount.com).
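A hedged sketch of that alternative, building an allow-list of your own project IDs first (the PATTERN construction via tr and the sort -u de-duplication are my additions):
PROJECTS=$(gcloud projects list --format="value(projectId)")
# Build an alternation pattern of all accessible project IDs
PATTERN=$(echo ${PROJECTS} | tr ' ' '|')
for PROJECT in ${PROJECTS}
do
  echo "Project: ${PROJECT}"
  # Keep only Service Accounts owned by one of your projects,
  # excluding those owned by the current project
  gcloud projects get-iam-policy ${PROJECT} \
    --flatten="bindings[].members[]" \
    --filter="bindings.members:serviceAccount*" \
    --format="value(bindings.members)" \
    | grep -E "@(${PATTERN})\.iam\.gserviceaccount\.com" \
    | grep -v "@${PROJECT}\.iam\.gserviceaccount\.com" \
    | sort -u
done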
Is it possible to list, through the Google Cloud Platform (GCP) SDK CLI (gcloud), all active resources under a given GCP project?
You can use search-all-resources to search all the resources across services (or APIs) and projects for a given organization, folder, or project.
To search all the resources in a project with number 123:
$ gcloud asset search-all-resources --scope=projects/123
See the other post for more details:
How to find, list, or search resources across services (APIs) and projects in Google Cloud Platform?
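If you only care about certain services, here is a hedged sketch of narrowing the search; the asset type names follow Cloud Asset Inventory conventions, so verify them with gcloud asset search-all-resources --help:
# Restrict the search to Compute Engine instances and disks
gcloud asset search-all-resources \
  --scope=projects/123 \
  --asset-types="compute.googleapis.com/Instance,compute.googleapis.com/Disk" \
  --format="table(name,assetType,location)"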
IIUC there's no general-purpose type for "things that live in projects" so you'd need to enumerate all the types (that interest you) specifically.
Also, some resources (e.g. keys) are owned by service accounts that are owned by projects.
for PROJECT in $(\
  gcloud projects list \
  --format="value(projectId)")
do
  echo "Project: ${PROJECT}"

  echo "Services"
  gcloud services list --project=${PROJECT}

  echo "Kubernetes Clusters"
  gcloud container clusters list --project=${PROJECT}

  echo "Compute Engine instances"
  gcloud compute instances list --project=${PROJECT}

  echo "Service Accounts"
  for ACCOUNT in $(\
    gcloud iam service-accounts list \
    --project=${PROJECT} \
    --format="value(email)")
  do
    echo "Service Account keys: ${ACCOUNT}"
    gcloud iam service-accounts keys list \
      --iam-account=${ACCOUNT} \
      --project=${PROJECT}
  done
done
Various challenges with this approach though:
Some enumerations may require more details (e.g. regions|zones)
You'd need to be exhaustive (it won't list what you don't request)
It gets nested|messy quickly
Some services prompt if they're not enabled (e.g. Compute Engine)
NB
You can apply --filter=... to each of the above commands (see the example below)
You could wrap the entire loop into one that enumerates gcloud auth list accounts
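For example, a minimal sketch of the --filter note (the status value is just one plausible choice):
# Only RUNNING Compute Engine instances, per project in the loop above
gcloud compute instances list \
  --project=${PROJECT} \
  --filter="status=RUNNING"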
If you want to list resources on the basis of their state, you can use the --filter= option; this will list only the resources in the given state.
Use case: if you want to list all the projects in the pending-deletion state, you can use:
gcloud projects list --filter='lifecycleState:DELETE_REQUESTED'
I am new to Google Cloud and am facing a challenge while adding ssh-keys to the Google metadata (project-wide) with the gcloud command line.
When I try to add an ssh-key to the metadata (with the command gcloud compute project-info add-metadata --metadata-from-file ssh-keys=[LIST_PATH]), I have to specify all the existing ssh-keys in the source file along with the new key (the source file is the file where we store the ssh-key values). The command adds exactly the keys present in the source file, so if I keep only the single new key in the file, only that key will end up in the metadata and the rest of the existing keys will be removed.
So what I am trying to achieve is to add a single ssh-key to the metadata without affecting the existing keys, because this will be a repeated process for many of the machines in my environment and I cannot track the existing keys every time.
I've had the same question.
According to the official doc (https://cloud.google.com/compute/docs/instances/adding-removing-ssh-keys), it is not possible to manipulate individual keys with the gcloud tool.
Here is an example shell to add a key:
gcloud compute project-info add-metadata \
  --metadata-from-file ssh-keys=<(\
    gcloud compute project-info describe --format json \
    | jq -r '.commonInstanceMetadata.items[]
      | if .key == "ssh-keys" then .value else empty end';
    echo "john:ssh-rsa mykey john")
It:
grabs the existing values (gcloud describe | jq).
adds a key (echo "john...").
feeds it as a pseudo-file to gcloud add-metadata.
Up to you to separate the steps, keep a local list of your keys, or whatever suits your needs.
This example lacks a few features, like key de-duplication. It's just an experiment at the moment; I'll have to create a more robust script for real use.
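As a follow-up, here is a minimal sketch of the same approach with de-duplication via sort -u (the NEW_KEY value is a placeholder; note that sort -u also reorders the keys, which the metadata does not care about):
NEW_KEY="john:ssh-rsa mykey john"
gcloud compute project-info add-metadata \
  --metadata-from-file ssh-keys=<(
    {
      # Existing keys, as before
      gcloud compute project-info describe --format json \
        | jq -r '.commonInstanceMetadata.items[]
          | if .key == "ssh-keys" then .value else empty end'
      # The new key to append
      echo "${NEW_KEY}"
    } | sort -u)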
I'm new to Google Cloud and I'm trying to experiment with it.
I can see that preparing scripts is vital if I want to create and delete clusters every day.
For Dataproc clusters, it's easy:
gcloud dataproc clusters create spark-6-m \
--async \
--project=my-project-id \
--region=us-east1 \
--zone=us-east1-b \
--bucket=my-project-bucket \
--image-version=1.2 \
--num-masters=1 \
--master-boot-disk-size=10GB \
--master-machine-type=n1-standard-1 \
--worker-boot-disk-size=10GB \
--worker-machine-type=n1-standard-1 \
--num-workers=6 \
--initialization-actions=gs://dataproc-initialization-actions/jupyter2/jupyter2.sh
Now, I'd like to create a Cassandra cluster. I see that Cloud Launcher allows doing that easily too, but I can't find a gcloud command to automate it.
Is there a way to create Cloud Launcher product clusters via gcloud?
Thanks
Cloud Launcher deployments can be replicated from the Cloud Shell using Custom Deployments [1].
Once the Cloud Launcher deployment (in this case a Cassandra cluster) is finished, the details of the deployment can be seen in the Deployment Manager [2].
The deployment details have an Overview section with the configuration and the imported files used for the deployment process. Download the "Expanded Config" file; this will be the .yaml file for the custom deployment [3]. Download the import files to the same directory as the .yaml file to be able to deploy correctly [4].
These files and this configuration will create a deployment equivalent to the Cloud Launcher one.
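For reference, a minimal sketch of redeploying from the downloaded files with Deployment Manager; the deployment name and the config file name are placeholders of my own:
# Assumes the "Expanded Config" was saved as expanded-config.yaml and the
# imported files sit in the same directory
gcloud deployment-manager deployments create my-cassandra-cluster \
  --config=expanded-config.yaml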