How to change the scope of existing VM in GCP? - google-cloud-platform

I created a Windows VM where I have the BERT master code, SQuAD, and the BERT-large model. I tried to run the SQuAD script using this:
python run_squad.py \
--vocab_file=$BERT_LARGE_DIR/vocab.txt \
--bert_config_file=$BERT_LARGE_DIR/bert_config.json \
--init_checkpoint=$BERT_LARGE_DIR/bert_model.ckpt \
--do_train=True \
--train_file=$SQUAD_DIR/train-v2.0.json \
--do_predict=True \
--predict_file=$SQUAD_DIR/dev-v2.0.json \
--train_batch_size=24 \
--learning_rate=3e-5 \
--num_train_epochs=2.0 \
--max_seq_length=384 \
--doc_stride=128 \
--output_dir=gs://some_bucket/squad_large/ \
--use_tpu=True \
--tpu_name=$TPU_NAME \
--version_2_with_negative=True
It threw an error: googleapiclient.errors.HttpError: <HttpError 403 when requesting https://tpu.googleapis.com/v1alpha1/projects/projectname/locations/us-central1-a/nodes/testnode?alt=json returned "Request had insufficient authentication scopes.">
Is there a way to change the scope of an existing VM to cloud-platform after the VM is created?

Is there a way to change the scope of an existing VM to cloud-platform after the VM is created?
Yes, you can. Go to the Google Cloud Console, select your instance, and stop it. Then edit the instance and change its access scopes. Finally, restart the instance.
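If you prefer the command line, the same change can be made with gcloud once the instance is stopped. A minimal sketch, assuming placeholder instance, zone, and service account names:
# Inspect the scopes the VM currently has.
gcloud compute instances describe my-vm \
--zone=us-central1-a \
--format="yaml(serviceAccounts)"
# The instance must be stopped before its scopes can be changed.
gcloud compute instances stop my-vm --zone=us-central1-a
# Re-attach the service account with the broader cloud-platform scope.
gcloud compute instances set-service-account my-vm \
--zone=us-central1-a \
--service-account=my-sa@my-project.iam.gserviceaccount.com \
--scopes=cloud-platform
gcloud compute instances start my-vm --zone=us-central1-a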

Related

Google cloud compute VM startup script got interrupted and does not finish

I followed this guide in order to create a virtual machine that deletes itself after 60 seconds, with the following startup script, which I trigger from a Python script. Below you can find the startup script:
#!/bin/bash
echo Start the startup script
sleep 60s
echo BEFORE Deleting the VMs after max running time
export NAME="$(curl -X GET http://metadata.google.internal/computeMetadata/v1/instance/name -H 'Metadata-Flavor: Google')"
export ZONE="$(curl -X GET http://metadata.google.internal/computeMetadata/v1/instance/zone -H 'Metadata-Flavor: Google')"
echo AFTER Deleting the VMs after max running time
gcloud --quiet compute instances delete $NAME --zone=$ZONE
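Note that the instance/zone metadata endpoint returns a full resource path (projects/PROJECT_NUMBER/zones/ZONE) rather than just the zone name. gcloud usually accepts it, but a common defensive tweak, shown here only as a sketch, is to trim it to the short zone name:
export ZONE="$(curl -s -X GET http://metadata.google.internal/computeMetadata/v1/instance/zone -H 'Metadata-Flavor: Google')"
# Keep only the final path segment, e.g. "us-central1-c".
ZONE="${ZONE##*/}"
gcloud --quiet compute instances delete "$NAME" --zone="$ZONE"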
Here is how it was triggered from the Python code:
cmd = """gcloud compute instances create-with-container \
{0} \
--project={1} \
--zone=us-central1-c \
--container-image=gcr.io/project/image \
--machine-type={2} \
--scopes "bigquery","gke-default","storage-full","compute-rw" \
--boot-disk-size {3} \
--boot-disk-type "pd-ssd" \
--container-env YAML={4},DATE={5},BUCKET={6} \
--service-account "{7}" \
--metadata-from-file=startup-script=startup.sh \
--description="{8}"
""".format(vm,
gcp,
machine,
disk_size,
yamlup,
self.partition,
bucket_name,
serviceaccount,
description
)
On Google Compute Engine, I can see the first echo appear in the logs ("Start the startup script"), but after the sleep nothing happens. I am not even sure whether the sleep command works. Is there anything missing?

How to pause a Google Cloud Composer Environment?

We spun up a Google Cloud Composer environment, but we only need it for testing purposes. Is there a way to pause the environment and only use it when needed?
I am unable to find a way to do it.
Please suggest any solutions for pausing or disabling it, rather than deleting it.
Thanks!
I tried to find a way to disable/pause the environment but could not find any.
You can't do that, but if you are using Cloud Composer 2, it runs on a GKE cluster in Autopilot mode.
Autopilot mode is optimized when there are no DAG executions in the cluster.
If the environment is used for testing purposes, I recommend using the small environment size with a cheap, small configuration for the workers and web server (CPU, memory, and storage), for example:
gcloud composer environments create example-environment \
--location us-central1 \
--image-version composer-2.0.31-airflow-2.2.5 \
--environment-size small \
--scheduler-count 1 \
--scheduler-cpu 0.5 \
--scheduler-memory 2.5 \
--scheduler-storage 2 \
--web-server-cpu 1 \
--web-server-memory 2.5 \
--web-server-storage 2 \
--worker-cpu 1 \
--worker-memory 2 \
--worker-storage 2 \
--min-workers 1 \
--max-workers 2
Check the documentation for the best sizing in your case.
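If the environment mostly sits idle between tests, another lever, assuming Cloud Composer 2 with the Airflow 2 CLI and a hypothetical DAG id, is to pause your DAGs so the Autopilot cluster has nothing to schedule:
# "my_dag_id" is a placeholder for one of your DAG ids.
gcloud composer environments run example-environment \
--location us-central1 \
dags pause -- my_dag_id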

Why is Rsync failing to sync a specific folder?

We are using rsync to sync to our dev server on GCP. Recently I've noticed that it does not sync some of our files in a specific folder; the others work fine, as before.
This is what our trigger looks like.
The first one is working:
rsync -aqe ssh \
--no-g \
--no-p \
--delete \
--force \
--dirs \
--perms \
--no-owner \
--no-group \
--exclude-from="/opt/projects/puppet/devtools/sync_script/support_files/sync_web.exclude" \
--log-file="/tmp/rsync.log" \
/opt/projects/web/src/ \
"${DEV_IP}":/var/www/src/
Not working example:
rsync -aqe ssh \
--no-g \
--no-p \
--delete \
--force \
--dirs \
--perms \
--no-owner \
--no-group \
--exclude-from="/opt/projects/puppet/devtools/sync_script/support_files/sync_web.exclude" \
--log-file="/tmp/rsync.log" \
/opt/projects/web/ \
"${DEV_IP}":/var/www/
So when we specify that folder directly, it works, but if we sync from one level up, those files are not synced.
In my opinion, it is best to turn off the -q option, which will show whether there is an error. If there is no error, you can always try to debug with:
-v, --verbose increase verbosity
--info=FLAGS fine-grained informational verbosity
--debug=FLAGS fine-grained debug verbosity
You can also check the rsync.log log file.
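A dry run with itemized output is often the quickest way to see what rsync thinks it should transfer and why files are skipped. One thing worth ruling out (not confirmed from the question) is an exclude pattern that matches differently once the transfer root moves up a level, since anchored patterns are matched relative to the transfer root. A sketch:
# -n is a dry run, -i itemizes what would change; nothing is copied.
rsync -a -i -n -e ssh \
--exclude-from="/opt/projects/puppet/devtools/sync_script/support_files/sync_web.exclude" \
/opt/projects/web/ \
"${DEV_IP}":/var/www/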

Using gcloud beta builds triggers create cloud-source-repositories doesn't work with --dockerfile-image

I'm working on an auto-devops workflow based only on the Dockerfile, using Cloud Build on GCP. When I try the following command, it seems that the --dockerfile-image flag is not being used:
gcloud beta builds triggers create cloud-source-repositories \
--name="test-trigger-2" \
--repo="projects/nodrize-dev/repos/b722166a-56e0-46af-bd0d-42af8d37c570/bf11672f-34d5-4d8c-80cb-31120f39251a/quirino-backend" \
--branch-pattern="^master$" \
--dockerfile="Dockerfile" \
--dockerfile-dir="" \
--dockerfile-image="gcr.io/nodrize-dev/test-backend"
Created [https://cloudbuild.googleapis.com/v1/projects/nodrize-dev/triggers/896f8ac8-397c-464a-84f7-43e69f1bc6cb].
NAME CREATE_TIME STATUS
test-trigger-2 2021-06-02T21:06:54+00:00
I want to create the trigger now and run it later, but the last flag isn't working. I assume it is falling back to the default, because as you can see the image name is:
gcr.io/nodrize-dev/b722166a-56e0-46af-bd0d-42af8d37c570/bf11672f-34d5-4d8c-80cb-31120f39251a/quirino-backend:$COMMIT_SHA:
Docker image name in the GCP console:
I hope someone can help me or at least knows what is happening.
This works for me.
I suspect that the trigger is incorrect or is not being triggered, and/or the image you are looking at is not the one generated by the trigger.
PROJECT=...
REPO=...
gcloud source repos create ${REPO} \
--project=${PROJECT}
gcloud beta builds triggers create cloud-source-repositories \
--name="trigger" \
--project=${PROJECT} \
--repo=${REPO} \
--branch-pattern="^master$" \
--dockerfile="Dockerfile" \
--dockerfile-dir="." \
--dockerfile-image="gcr.io/${PROJECT}/freddie-01"
NAME CREATE_TIME STATUS
trigger 2021-06-03T15:24:27+00:00
git push google master
gcloud builds list \
--project=${PROJECT} \
--format="value(images)"
gcr.io/${PROJECT}/freddie-01:7dcf74e126af711d24bb2b652d86f0d28bbe3bd9
gcloud container images list \
--project=${PROJECT}
NAME
gcr.io/${PROJECT}/freddie-01
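To double-check what the trigger actually recorded for the image name, describing the trigger can help. A sketch, assuming the trigger name used above; the exact output fields may vary by gcloud version:
# Look for the image name in the generated build configuration.
gcloud beta builds triggers describe trigger \
--project=${PROJECT}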

Google Cloud Genomics Pipeline Zone and Region Specification Error

I am new to Google Cloud and was told to use Variant Transforms in order to get .vcf files into BigQuery. I did everything specified in the Variant Transforms README and copied and pasted the first block of code into a bash file:
#!/bin/bash
# Parameters to replace:
GOOGLE_CLOUD_PROJECT=GOOGLE_CLOUD_PROJECT
INPUT_PATTERN=gs://BUCKET/*.vcf
OUTPUT_TABLE=GOOGLE_CLOUD_PROJECT:BIGQUERY_DATASET.BIGQUERY_TABLE
TEMP_LOCATION=gs://BUCKET/temp
COMMAND="/opt/gcp_variant_transforms/bin/vcf_to_bq \
--project ${GOOGLE_CLOUD_PROJECT} \
--input_pattern ${INPUT_PATTERN} \
--output_table ${OUTPUT_TABLE} \
--temp_location ${TEMP_LOCATION} \
--job_name vcf-to-bigquery \
--runner DataflowRunner"
gcloud alpha genomics pipelines run \
--project "${GOOGLE_CLOUD_PROJECT}" \
--logging "${TEMP_LOCATION}/runner_logs_$(date +%Y%m%d_%H%M%S).log" \
--zones us-west1-b \
--service-account-scopes https://www.googleapis.com/auth/cloud-platform \
--docker-image gcr.io/gcp-variant-transforms/gcp-variant-transforms \
--command-line "${COMMAND}"
I tried to run this, after replacing the parameters appropriately, and got this error:
ERROR: (gcloud.alpha.genomics.pipelines.run) INVALID_ARGUMENT: Error: validating pipeline: zones and regions cannot be specified together
I have since tried to specify the region and zone on separate lines and have even changed the default region and zone. I have even tried example pipelines from Google themselves, and they still result in the same error. Am I doing something wrong, or is there just something more I need to install for this to work?
You need to use the --regions flag first and the --zones flag at the end. As a workaround, you can set the default zone and region in your local gcloud client. Also keep in mind that the region is "us-west1" and the zone suffix is "b".
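Setting those defaults in the local client can be done with gcloud config, for example:
gcloud config set compute/region us-west1
gcloud config set compute/zone us-west1-b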