I am new to Google Cloud and am facing a challenge while adding ssh-keys to the project-wide metadata with the gcloud command line.
When I try to add an ssh-key to the project metadata (with the command: gcloud compute project-info add-metadata --metadata-from-file ssh-keys=[LIST_PATH]), I also have to include all the existing ssh-keys in the source file (the file where the ssh-key values are stored), because the command adds exactly the keys present in that file. If I keep only the new key in the source file, only that single key gets added to the metadata and all the other existing keys are removed.
What I am trying to achieve is to add a single ssh-key to the metadata without affecting the existing keys. This will be a repeated process for many machines in my environment, and I cannot track the existing keys every time.
I've had the same question.
According to the official doc (https://cloud.google.com/compute/docs/instances/adding-removing-ssh-keys), it is not possible to manipulate individual keys from the gcloud tool.
Here is an example shell snippet to add a key:
gcloud compute project-info add-metadata \
  --metadata-from-file ssh-keys=<(\
    gcloud compute project-info describe --format json \
    | jq -r '.commonInstanceMetadata.items[]
             | if .key == "ssh-keys" then .value else empty end';
    echo "john:ssh-rsa mykey john")
It:
- grabs the existing values (gcloud describe | jq),
- adds a key (echo "john..."),
- feeds the result as a pseudo-file to gcloud add-metadata.
It's up to you to separate the steps, keep a local list of your keys, or whatever suits your needs.
This example lacks a few features, like key de-duplication. It's just an experiment at the moment; I'll have to create a more robust script for real use.
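As a rough sketch of that de-duplication idea (NEW_KEY is a placeholder and the snippet is untested, so adapt it before real use):
NEW_KEY='john:ssh-rsa mykey john'
gcloud compute project-info add-metadata \
  --metadata-from-file ssh-keys=<(\
    { gcloud compute project-info describe --format json \
        | jq -r '.commonInstanceMetadata.items[]
                 | if .key == "ssh-keys" then .value else empty end'
      echo "${NEW_KEY}"; } \
    | awk 'NF' | sort -u)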
Related
What is the gcloud CLI command to remove (detach) a Data Catalog tag from a BigQuery dataset, and what is the CLI command to update a tag?
I am able to do it manually; how do I do it using the gcloud command in Cloud Shell?
You can use gcloud data-catalog tags update and gcloud data-catalog tags delete commands.
The tricky part here is obtaining values for the --entry-group and --entry parameters: BigQuery entries are automatically ingested into Data Catalog and have automatically assigned entry group and entry IDs. To get these values, use the gcloud data-catalog entries lookup command.
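A sketch of how that might look for a BigQuery dataset (the project, dataset, location, and IDs below are placeholders; double-check the flag names against your gcloud version):
# 1. Look up the Data Catalog entry for the BigQuery dataset
gcloud data-catalog entries lookup \
  '//bigquery.googleapis.com/projects/MY_PROJECT/datasets/MY_DATASET' \
  --format="value(name)"
# -> projects/MY_PROJECT/locations/LOCATION/entryGroups/ENTRY_GROUP_ID/entries/ENTRY_ID
# 2. List the tags on that entry to find the tag ID
gcloud data-catalog tags list \
  --location=LOCATION --entry-group=ENTRY_GROUP_ID --entry=ENTRY_ID
# 3. Delete (detach) the tag
gcloud data-catalog tags delete TAG_ID \
  --location=LOCATION --entry-group=ENTRY_GROUP_ID --entry=ENTRY_ID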
When I do something in GCP console (by clicking in GUI), I imagine some gcloud command is executed underneath. Is it possible to view this command?
(I created a notebooks instance on Vertex AI and wanted to know what exactly I should put after gcloud notebooks instances create... to get the same result)
I don't think it's possible to view the underlying gcloud command from the GUI.
You can test your gcloud command by creating another instance alongside the current one with all the needed parameters. When the two instances are the same, you know that your gcloud command is ready.
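For example, one way to compare the two is to diff their describe output (a sketch; the instance names and location are placeholders, and fields such as name, createTime, and proxyUri will always differ):
diff \
  <(gcloud notebooks instances describe gui-instance --location=us-central1-a --format=yaml) \
  <(gcloud notebooks instances describe cli-instance --location=us-central1-a --format=yaml)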
The documentation seems to be clear and complete for this:
https://cloud.google.com/vertex-ai/docs/workbench/user-managed/create-new#gcloud
If it's possible for you, you can also think about Terraform to automate this creation for you with a state management.
Try this for a Python-based user-managed notebook (the GUI version of the Python instance uses the base image as the boot disk, which does not contain Python; the Python suite is installed explicitly via metadata parameters):
export NETWORK_URI="NETWORK URI"
export SUBNET_URI="SUBNET URI"
export INSTANCE_NAME="instance-name-of-your-liking"
export VM_IMAGE_PROJECT="deeplearning-platform-release"
export VM_IMAGE_FAMILY="common-cpu-notebooks-debian-10"
export MACHINE_TYPE="n1-standard-4"
export LOCATION="europe-west3-b"
gcloud notebooks instances create $INSTANCE_NAME \
--no-public-ip \
--vm-image-project=$VM_IMAGE_PROJECT \
--vm-image-family=$VM_IMAGE_FAMILY \
--machine-type=$MACHINE_TYPE \
--location=$LOCATION \
--network=$NETWORK_URI \
--subnet=$SUBNET_URI \
--metadata=framework=NumPy/SciPy/scikit-learn,report-system-health=true,proxy-mode=service_account,shutdown-script=/opt/deeplearning/bin/shutdown_script.sh,notebooks-api=PROD,enable-guest-attributes=TRUE
To get a list of Network URIs in your project:
gcloud compute networks list --uri
To get a list of Subnet URIs in your project:
gcloud compute networks subnets list --uri
Put the corresponding URIs in between the quotation marks in the first two variables:
export NETWORK_URI="NETWORK URI"
export SUBNET_URI="SUBNET URI"
Name the instance (keep the quotation marks):
export INSTANCE_NAME="instance-name-of-your-liking"
When done, copy-paste the complete block into your Google Cloud Shell (assuming you are in the correct project).
To additionally enable secure boot (which is a tick box in the GUI setup):
gcloud compute instances stop $INSTANCE_NAME
gcloud compute instances update $INSTANCE_NAME --shielded-secure-boot
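To verify the change afterwards, something like this should work (a sketch; the field path is the one exposed by the Compute Engine API):
gcloud compute instances describe $INSTANCE_NAME \
  --zone=$LOCATION \
  --format="value(shieldedInstanceConfig.enableSecureBoot)"
# Expected output: True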
Hope it works for you, as it does for me.
I'm trying to find an easy way to periodically run a check on unattached disks across all my projects.
All I am doing is just open the cloud shell terminal in GCP and use the following CL:
gcloud compute disks list --filter="-users:*"
Is there a way to run this across all projects and then output to csv file?
Stack Overflow encourages questions that document an attempt to solve the problem yourself.
If you Google "gcloud csv", the results include several explanations of how to do this. Google documents gcloud topic formats, and this includes csv.
To understand the shape of Google's resources, check out APIs Explorer; it documents every Google service. For Compute Engine, you can find the API call behind gcloud compute disks list by appending --log-http and observing which calls are made: it's disks.list, and its response body describes the output you can select from with --format.
To run the command for all projects accessible to your credentials, you must enumerate the projects. Unfortunately, if the Compute Engine service is not enabled in a project, gcloud prompts you to enable it. To avoid that, I've added a check.
# List Projects accessible to these credentials
PROJECTS=$(\
  gcloud projects list \
  --format="value(projectId)")

# Iterate over each Project
for PROJECT in ${PROJECTS}
do
  echo "Project: ${PROJECT}"
  # Check Compute Engine service
  ENABLED="$(\
    gcloud services list \
    --project=${PROJECT} \
    --filter=config.name=compute.googleapis.com \
    --format='value(state)')"
  # Is it enabled?
  if [ "${ENABLED}" = "ENABLED" ]
  then
    # Enumerate Disks that have no `users` (i.e. unattached) and output `name`
    gcloud compute disks list \
      --project=${PROJECT} \
      --filter="-users:*" \
      --format="csv(name)"
  fi
done
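If you also want everything in one CSV file with a project column, a variant like this should work (a sketch reusing the PROJECTS variable from above; the column choice is just an example):
OUT="unattached-disks.csv"
echo "project,name,sizeGb,zone" > "${OUT}"
for PROJECT in ${PROJECTS}
do
  ENABLED="$(\
    gcloud services list \
    --project=${PROJECT} \
    --filter=config.name=compute.googleapis.com \
    --format='value(state)')"
  if [ "${ENABLED}" = "ENABLED" ]
  then
    # csv[no-heading] suppresses the per-project header; sed prefixes the project ID
    gcloud compute disks list \
      --project=${PROJECT} \
      --filter="-users:*" \
      --format="csv[no-heading](name,sizeGb,zone.basename())" \
      | sed "s/^/${PROJECT},/" >> "${OUT}"
  fi
done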
If I create a new GKE cluster called cluster-1, the VMs in the cluster will all have an auto-generated network tag, e.g. gke-cluster-1-d4732bcc-node.
Is it possible, using gcloud CLI or something else, to programmatically retrieve this network tag using the cluster name?
I achieved this by getting one of the auto-generated firewall rules for the GKE cluster and pulling out the target tag:
CLUSTER_NAME=<cluster-name>
PROJECT_NAME=<project-name>
NODE_NETWORK_TAG=$(gcloud compute firewall-rules list --project $PROJECT_NAME --filter="name~gke-$CLUSTER_NAME-[0-9a-z]*-master" --format="value(targetTags[0])")
echo "$NODE_NETWORK_TAG"
With gcloud, you can only get a VM's network tags by describing the instance:
gcloud compute instances describe INSTANCE-NAME --project=PROJECT-ID --zone=INSTANCE-ZONE
The network tag information will be at the bottom and the output will be similar to:
tags:
  fingerprint: xxxx
  items:
  - tag1
  - tag2
  - tag3
All the VMs created by the cluster will have the same prefix: gke-CLUSTER_NAME-NODE_POOL_NAME-RANDOM_STRING.
For example, I created the cluster "test-cluster" and I'm using only "default-pool"; one of my instances is gke-test-cluster-default-pool-xxxxxxx-xxxxxxx.
You can get the names of all the instances created by your clusters and put them in a variable, similar to:
name=`gcloud compute instances list --project=PROJECT-ID | grep gke | awk '{print $1}'`
Now you can use a FOR loop to run the command
for tags in $name; do gcloud compute instances describe $tags --project=PROJECT-ID --zone=ZONE; done
You can add a grep at the end of the command to fetch just the network tag information, and store the output in a file or parse it any way you need.
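A tighter variant of the same idea, sketched below (CLUSTER-NAME and PROJECT-ID are placeholders): pick one node of the cluster and print its network tags.
# Grab the name and zone of one gke- node of the cluster
read -r NODE ZONE < <(gcloud compute instances list \
  --project=PROJECT-ID \
  --filter="name~^gke-CLUSTER-NAME-" \
  --limit=1 \
  --format="value(name,zone.basename())")
# Print its network tags
gcloud compute instances describe "${NODE}" \
  --project=PROJECT-ID \
  --zone="${ZONE}" \
  --format="value(tags.items)"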
I have two projects in Google Cloud and I need to copy files from an instance in one project to an instance in another project. I tried to using the 'gcloud compute copy-files' command but I'm getting this error:
gcloud compute copy-files test.tgz --project stack-complete-343 instance-IP:/home/ubuntu --zone us-central1-a
ERROR: (gcloud.compute.copy-files) Could not fetch instance: - Insufficient Permission
I was able to replicate your issue with a brand new VM instance, getting the same error. Here are a few steps that I took to correct the problem:
Make sure you are authenticated and have rights to both projects with the same account!
$ gcloud config list (if you see a service account ending in @developer.gserviceaccount.com, you need to switch to the account that is enabled on both projects; you can check that from the Developers Console > Permissions)
$ gcloud auth login (copy the link to a new window, log in, copy the code, and paste it back in the prompt)
$ gcloud compute scp test.tgz --project stack-complete-343 instance-IP:/home/ubuntu --zone us-central1-a (I would also use the instance name instead of the IP)
This last command should also generate your ssh keys. You should see something like this (you can leave the passphrase empty):
WARNING: [/usr/bin/ssh-keygen] will be executed to generate a key.
Generating public/private rsa key pair
Enter passphrase (empty for no passphrase):
Go to the permissions tab on the remote instance (i.e. the instance you WON'T be running gcloud compute copy-files on). Then go to service accounts and create a new one; give it a name, check the box to get a key file for it, and leave JSON selected.
Upload that key file from your personal machine to the local instance (i.e. the machine you're SSHing into and running the gcloud compute copy-files command on) using gcloud compute copy-files and your personal account.
Then run this from the local instance via SSH: gcloud auth activate-service-account ACCOUNT --key-file KEY-FILE, replacing ACCOUNT with the email-like address that was generated and KEY-FILE with the path to the key file you uploaded from your personal machine earlier.
You should then be able to access the instance that set up the account. These steps have to be repeated on every instance you want to copy files between. If these instructions weren't clear, let me know and I'll try to help out.
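A sketch of that last step on the local instance (the account name, key path, instance name, and zone are placeholders):
gcloud auth activate-service-account my-sa@PROJECT-ID.iam.gserviceaccount.com \
  --key-file=/home/ubuntu/key.json
gcloud compute copy-files remote-instance:/remote/path/file.tgz /local/dest \
  --project=PROJECT-ID --zone=us-central1-a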
It's not recommended to auth your account on Compute Engine instances because that can expose your credentials to anybody with access to the machine.
Instead, you can let your service account use the Compute Engine API. First, stop the instance. Once it is stopped, you can edit the Cloud API access scopes from the console: change the Compute Engine scope from Disabled to Read Only.
You should be able to just use the copy-files command now. This lets your service account access the Compute Engine API.
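If you prefer doing the scope change from the CLI, something like this should work (a sketch; note that gcloud compute instances set-service-account may also reset the attached service account to the project default if you omit --service-account, so pass that flag explicitly if you need to keep a specific one):
gcloud compute instances stop INSTANCE-NAME --zone=ZONE
gcloud compute instances set-service-account INSTANCE-NAME --zone=ZONE \
  --scopes=compute-ro
gcloud compute instances start INSTANCE-NAME --zone=ZONE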
The simplest way to do this is to use the scp command with a .pem file. Here's an example:
sudo scp -r -i /your/path_to/key.pem your_username@ip_address_of_instance:/path/to/copy/file /local/destination
If both of them are in the same project, this is the simplest way:
gcloud compute copy-files yourFileName --project yourProjectName instance-name:~/folderInInstance --zone europe-west1-b
Obviously you should edit the zone according to your instances.
One approach to getting the permissions is to adjust the Cloud API access scopes; you may set them to Allow full access to all Cloud APIs.
In the console, click on the instance and use the EDIT button at the top. Scroll to the bottom and change the Cloud API access scopes. See also this answer.