When I do something in GCP console (by clicking in GUI), I imagine some gcloud command is executed underneath. Is it possible to view this command?
(I created a notebooks instance on Vertex AI and wanted to know what exactly I should put after gcloud notebooks instances create... to get the same result)
I don't think it's possible to view the gcloud command behind the GUI.
You should test your gcloud command by creating another instance alongside the current one with all the needed parameters.
When the two instances are the same, you know that your gcloud command is ready.
The documentation seems to be clear and complete for this:
https://cloud.google.com/vertex-ai/docs/workbench/user-managed/create-new#gcloud
If it's an option for you, you can also consider Terraform to automate this creation, with state management.
Try this for a Python-based user-managed notebook (the GUI version of the Python instance uses the base image as the boot disk, which does not contain Python; the Python suite is installed explicitly via metadata parameters):
export NETWORK_URI="NETWORK URI"
export SUBNET_URI="SUBNET URI"
export INSTANCE_NAME="instance-name-of-your-liking"
export VM_IMAGE_PROJECT="deeplearning-platform-release"
export VM_IMAGE_FAMILY="common-cpu-notebooks-debian-10"
export MACHINE_TYPE="n1-standard-4"
export LOCATION="europe-west3-b"
gcloud notebooks instances create $INSTANCE_NAME \
--no-public-ip \
--vm-image-project=$VM_IMAGE_PROJECT \
--vm-image-family=$VM_IMAGE_FAMILY \
--machine-type=$MACHINE_TYPE \
--location=$LOCATION \
--network=$NETWORK_URI \
--subnet=$SUBNET_URI \
--metadata=framework=NumPy/SciPy/scikit-learn,report-system-health=true,proxy-mode=service_account,shutdown-script=/opt/deeplearning/bin/shutdown_script.sh,notebooks-api=PROD,enable-guest-attributes=TRUE
To get a list of Network URIs in your project:
gcloud compute networks list --uri
To get a list of Subnet URIs in your project:
gcloud compute networks subnets list --uri
Put the corresponding URIs in between the quotation marks in the first two variables:
export NETWORK_URI="NETWORK URI"
export SUBNET_URI="SUBNET URI"
Name the instance (keep the quotation marks):
export INSTANCE_NAME="instance-name-of-your-liking"
When done, copy-paste the complete block into your Google Cloud Shell (assuming you are in the correct project).
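Once the instance is created, you can compare it with the GUI-created one; a quick check using the same variables as above:
gcloud notebooks instances describe $INSTANCE_NAME --location=$LOCATION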
To additionally enable Secure Boot (which is a tick box in the GUI setup):
gcloud compute instances stop $INSTANCE_NAME --zone=$LOCATION
gcloud compute instances update $INSTANCE_NAME --zone=$LOCATION --shielded-secure-boot
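Then start the instance again (reusing the zone variable from above):
gcloud compute instances start $INSTANCE_NAME --zone=$LOCATION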
Hope it works for you, as it does for me.
I'm trying to find an easy way to periodically run a check on unattached disks across all my projects.
All I am doing is opening the Cloud Shell terminal in GCP and using the following command line:
gcloud compute disks list --filter="-users:*"
Is there a way to run this across all projects and then output to csv file?
Stack Overflow encourages questions where an attempt to solve the problem yourself is documented.
If you Google gcloud csv, the results include several explanations of how to do this.
Google documents gcloud topic formats and this includes csv.
To understand the shape of Google's resources, check out APIs Explorer. It documents every Google service. For Compute Engine, you can find the API method behind gcloud compute disks list by appending --log-http and observing which API calls are made: it's disks.list, and its response body describes the fields you can reference in the output of gcloud compute disks list when you apply --format.
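For example, a minimal sketch (the field names name, sizeGb, and zone come from the disks.list response body; --log-http is a global gcloud flag):
# Show the underlying API calls gcloud makes
gcloud compute disks list --log-http
# Format selected disk fields as CSV
gcloud compute disks list --filter="-users:*" --format="csv(name,zone.basename(),sizeGb)"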
To run the command for all Google projects accessible to your credentials, you must enumerate the projects. Unfortunately, if the Compute Engine service is not enabled in a project, gcloud prompts you to enable it. To avoid that, I've added a check.
# List Projects accessible to these credentials
PROJECTS=$(\
gcloud projects list \
--format="value(projectId)")
# Iterate over each Project
for PROJECT in ${PROJECTS}
do
echo "Project: ${PROJECT}"
# Check Compute Engine service
ENABLED="$(\
gcloud services list \
--project=${PROJECT} \
--filter=config.name=compute.googleapis.com \
--format='value(state)')"
# Is it enabled?
if [ "${ENABLED}" = "ENABLED" ]
then
# Enumerate Disks that have no `users` (i.e. unattached) and output `name`
gcloud compute disks list \
--project=${PROJECT} \
--filter="-users:*" \
--format="csv(name)"
fi
done
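To write everything to a single CSV file, you can redirect the loop's output. A sketch (the file name and column choice are mine; csv[no-heading] avoids repeating the header row per project, sed prepends the project ID, and the service-enabled check above still applies):
echo "project,name,zone,sizeGb" > unattached-disks.csv
for PROJECT in ${PROJECTS}
do
gcloud compute disks list \
--project=${PROJECT} \
--filter="-users:*" \
--format="csv[no-heading](name,zone.basename(),sizeGb)" \
| sed "s/^/${PROJECT},/" >> unattached-disks.csv
done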
I'm trying to create a Compute Engine VM instance sample in Google Cloud that has an associated startup script startup_script.sh. On startup, I would like to have access to files that I have stored in a Cloud Source Repository. As such, in this script, I clone a repository using
gcloud source repos clone <repo name> --project=<project name>
Additionally, startup_script.sh also runs commands such as
gcloud iam service-accounts keys create key.json --iam-account <account>
which creates .json credentials, and
EXTERNAL_IP=$(gcloud compute instances describe sample --format='get(networkInterfaces[0].accessConfigs[0].natIP)' --zone=us-central1-a)
to get the external IP of the VM within the VM. To run these commands without any errors, I found that I need partial or full access to multiple Cloud API access scopes.
If I manually edit the scopes of the VM after I've already created it to allow for this and restart it, startup_script.sh runs fine, i.e. I can see the results of each command completing successfully. However, I would like to assign these scopes upon creation of the VM and not have to manually edit scopes after the fact. I found in the documentation that in order to do this, I can run
gcloud compute instances create sample --image-family=ubuntu-1804-lts --image-project=ubuntu-os-cloud --metadata-from-file=startup-script=startup_script.sh --zone=us-central1-a --scopes=[cloud-platform, cloud-source-repos, default]
When I run this command in the Cloud Shell, however, I can only add one scope at a time, i.e. --scopes=cloud-platform; if I try to enter multiple scopes as shown in the command above, I get
ERROR: (gcloud.compute.instances.create) unrecognized arguments:
cloud-source-repos,
default]
Adding multiple scopes as the documentation suggests doesn't seem to work. I get a similar error when I use the scope's URI instead of its alias.
Any obvious reasons as to why this may be happening? I feel this may have to do with the service account (or lack thereof) associated with the sample VM, but I'm not entirely familiar with this.
BONUS: Ideally I would like to run the VM creation cloud shell command in a cloudbuild.yaml file, which I have as
steps:
- name: 'gcr.io/cloud-builders/gcloud'
entrypoint: gcloud
args: ['compute', 'instances', 'create', 'sample', '--image-family=ubuntu-1804-lts', '--image-project=ubuntu-os-cloud', '--metadata-from-file=startup-script=startup_sample.sh', '--zone=us-central1-a', '--scopes=[cloud-platform, cloud-source-repos, default]']
I can submit the build using
gcloud builds submit --config cloudbuild.yaml .
Are there any issues with the way I've setup this cloudbuild.yaml?
Adding multiple scopes as the documentation suggests doesn't seem to work
Please use this command with --scopes=cloud-platform,cloud-source-repos and not --scopes=[cloud-platform, cloud-source-repos, default]:
gcloud compute instances create sample --image-family=ubuntu-1804-lts --image-project=ubuntu-os-cloud --zone=us-central1-a --scopes=cloud-platform,cloud-source-repos
Created [https://www.googleapis.com/compute/v1/projects/wave25-vladoi/zones/us-central1-a/instances/sample].
NAME ZONE MACHINE_TYPE PREEMPTIBLE INTERNAL_IP EXTERNAL_IP STATUS
sample us-central1-a n1-standard-1 10.128.0.17 35.238.166.75 RUNNING
Also consider @John Hanley's comment.
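The same fix applies to the BONUS cloudbuild.yaml: pass the scopes as a single comma-separated string, without brackets or spaces. A sketch of the corrected step (otherwise identical to yours):
steps:
- name: 'gcr.io/cloud-builders/gcloud'
  entrypoint: gcloud
  args: ['compute', 'instances', 'create', 'sample', '--image-family=ubuntu-1804-lts', '--image-project=ubuntu-os-cloud', '--metadata-from-file=startup-script=startup_sample.sh', '--zone=us-central1-a', '--scopes=cloud-platform,cloud-source-repos,default']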
In AWS I would create an AMI, copy it to other regions, and make them all public. My customers can then choose from Community AMIs.
I have been trying to replicate this workflow in GCP and found that GCP does not have an option for community images, and you cannot make an image 'public' either. But you can use the gcloud compute images export command to export an image to an external file and upload it to a bucket.
But how do I use this to create an instance? I checked the console's 'Create VM instance' page, but it does not have an option to upload an image or choose one from a drive; only public and custom images already in your account are offered.
To share a custom image publicly:
Make the custom image public using the command below:
gcloud compute images add-iam-policy-binding [image-name] \
--member='allAuthenticatedUsers' \
--role='roles/compute.imageUser'
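To verify the binding, you can read the image's IAM policy back:
gcloud compute images get-iam-policy [image-name]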
Get the public URL of the custom image:
gcloud compute images list --uri | grep [image-name]
It will be in https://www.googleapis.com/compute/v1/projects/[project-name]/global/images/[image-name] format
Create a VM using the public image URL:
gcloud beta compute instances create instance-1 --zone=us-central1-a \
--machine-type=n1-standard-1 --subnet=default \
--image=https://www.googleapis.com/compute/v1/projects/[project-name]/global/images/[image-name] \
--boot-disk-size=10GB --boot-disk-device-name=instance-1
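Alternatively, instead of the full URL, a user in another project can reference the image by name with --image-project (an equivalent form of the same command):
gcloud beta compute instances create instance-1 --zone=us-central1-a \
--machine-type=n1-standard-1 --subnet=default \
--image=[image-name] --image-project=[project-name] \
--boot-disk-size=10GB --boot-disk-device-name=instance-1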
For details, see the GCP documentation on managing images: https://cloud.google.com/compute/docs/images
I want to use Cloud Build with a trigger on commit to automatically fetch updated repo and run sudo supervisorctl restart on a Compute Engine instance.
On the Cloud Build settings page, there is an option to connect Compute Engine, but so far I only found examples including Kubernetes Engine and App Engine here.
Is it possible to accomplish this? Is it the right way to make updates, or should I instead restart the instance(s) with a startup script?
There's a repo on GitHub from the cloud-builders-community that may be what you are looking for.
As specified in the aforementioned link, it connects Cloud Build to Compute Engine with the following steps:
A temporary SSH key will be created in your Container Builder workspace
An instance will be launched with your configured flags
The workspace will be copied to the remote instance
Your command will be run inside that instance's workspace
The workspace will be copied back to your Container Builder workspace
You will need to grant the Cloud Build service account appropriate IAM roles, with permissions to create and destroy Compute Engine instances:
export PROJECT=$(gcloud info --format='value(config.project)')
export PROJECT_NUMBER=$(gcloud projects describe $PROJECT --format 'value(projectNumber)')
export CB_SA_EMAIL=$PROJECT_NUMBER@cloudbuild.gserviceaccount.com
gcloud services enable cloudbuild.googleapis.com
gcloud services enable compute.googleapis.com
# Bind each role separately (add-iam-policy-binding takes a single --role per invocation)
gcloud projects add-iam-policy-binding $PROJECT --member=serviceAccount:$CB_SA_EMAIL --role='roles/iam.serviceAccountUser'
gcloud projects add-iam-policy-binding $PROJECT --member=serviceAccount:$CB_SA_EMAIL --role='roles/compute.instanceAdmin.v1'
gcloud projects add-iam-policy-binding $PROJECT --member=serviceAccount:$CB_SA_EMAIL --role='roles/iam.serviceAccountActor'
And then you can configure your build step with something similar to this:
steps:
- name: gcr.io/$PROJECT_ID/remote-builder
env:
- COMMAND=sudo supervisorctl restart
You can also find more information in the examples section of the GitHub repo.
I have an application running in Compute Engine on Google Cloud Platform which reads system environmental variables.
I wonder how to set them on my instance so that the application will be able to read them at runtime.
Here is how I create an instance:
gcloud compute instances create ${PROJECT_ID} \
--image-family debian-9 \
--image-project debian-cloud \
--machine-type g1-small \
--scopes "userinfo-email,cloud-platform" \
--metadata-from-file startup-script=${SCRIPT} \
--metadata release-url=${BUCKET_URL} \
--zone ${ZONE} \
--tags http-server
I have some security credentials, e.g. API keys, passwords, etc. which I want to upload to my instance and expose them as env vars to be read by my application.
Is there any console option, flag, or command available to automate this?
You can do it by connecting over SSH once you have created the instance.
It is explained in the documentation on setting default values in environment variables.
For example, use the export command to set the zone and region variables like:
$ export CLOUDSDK_COMPUTE_ZONE="us-central1-a"
$ export CLOUDSDK_COMPUTE_REGION="us-central1"
To make these environment variables permanent:
Alternative 1: Using the ~/.bashrc file
Include these export commands in your ~/.bashrc file.
You can use nano or vim to add the variables (no sudo is needed for your own file):
nano ~/.bashrc
Then restart your terminal and check:
$ env
Alternative 2: Using a startup script
You can also use the export command within a startup script to turn your instance metadata into environment variables.
Upon creating your instance, you may pass the script directly or via a file, like this:
gcloud compute instances create vm-1 \
--metadata-from-file startup-script=$HOME/startup.sh \
--zone=us-west1-a
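A minimal $HOME/startup.sh along these lines (a sketch: release-url is the metadata key from your create command, and appending to /etc/environment is one common way to expose the value system-wide):
#!/bin/bash
# Query the metadata server for the custom attribute
RELEASE_URL=$(curl -s -H "Metadata-Flavor: Google" \
"http://metadata.google.internal/computeMetadata/v1/instance/attributes/release-url")
# Make it available system-wide as an environment variable
echo "RELEASE_URL=${RELEASE_URL}" >> /etc/environment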
If the instance is already running, follow the instructions to set a startup script on a running instance.
Please remember that if you use this startup-script method, the script only runs at boot, so you will need to re-run it manually (or restart the instance) each time you set new variables.
Whatever method you choose, make sure your env output shows the variables correctly.
Better check it again after restarting your instance, either from the shell or using the stop and start buttons in your console.