Google Cloud Compute Engine VM startup script gets interrupted and does not finish - google-cloud-platform

I followed this guide to create a self-deleting virtual machine that removes itself after 60 seconds, triggered from a Python script. Below is the startup script:
#!/bin/bash
echo Start the startup script
sleep 60s
echo BEFORE Deleting the VMs after max running time
export NAME="$(curl -X GET http://metadata.google.internal/computeMetadata/v1/instance/name -H 'Metadata-Flavor: Google')"
export ZONE="$(curl -X GET http://metadata.google.internal/computeMetadata/v1/instance/zone -H 'Metadata-Flavor: Google')"
echo AFTER Deleting the VMs after max running time
gcloud --quiet compute instances delete $NAME --zone=$ZONE
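For reference, a slightly more defensive variant of the same script (only a sketch, assuming gcloud is installed and on PATH for the startup-script user) traces each command and trims the zone metadata down to its short name:
#!/bin/bash
# Sketch only: same logic, with shell tracing and safer quoting.
set -x
NAME="$(curl -s -H 'Metadata-Flavor: Google' http://metadata.google.internal/computeMetadata/v1/instance/name)"
ZONE="$(curl -s -H 'Metadata-Flavor: Google' http://metadata.google.internal/computeMetadata/v1/instance/zone)"
ZONE="${ZONE##*/}"   # metadata returns projects/<number>/zones/<zone>; keep only <zone>
sleep 60
# Assumes gcloud exists on the boot image; adjust if the image does not ship it.
gcloud --quiet compute instances delete "$NAME" --zone="$ZONE"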
Here is how it was triggered from the Python code:
cmd = """gcloud compute instances create-with-container \
{0} \
--project={1} \
--zone=us-central1-c \
--container-image=gcr.io/project/image \
--machine-type={2} \
--scopes "bigquery","gke-default","storage-full","compute-rw" \
--boot-disk-size {3} \
--boot-disk-type "pd-ssd" \
--container-env YAML={4},DATE={5},BUCKET={6} \
--service-account "{7}" \
--metadata-from-file=startup-script=startup.sh \
--description="{8}"
""".format(vm,
gcp,
machine,
disk_size,
yamlup,
self.partition,
bucket_name,
serviceaccount,
description
)
On the Google Cloud Compute Engine console, I can see the first echo ("Start the startup script") appear in the logs, but after the sleep nothing happens. I am not even sure whether the sleep command runs. Is there anything missing?

Related

Why is Dataflow Runner v2 experiments flag not appearing in pipeline options?

I've added a flag to this command so that Dataflow Runner v2 is used:
mvn -Pdataflow-runner compile exec:java \
-Dexec.mainClass=org.apache.beam.examples.WordCount \
-Dexec.args="--project=PROJECT_ID \
--gcpTempLocation=gs://BUCKET_NAME/temp/ \
--output=gs://BUCKET_NAME/output \
--runner=DataflowRunner \
--region=REGION \
--experiments=use_runner_v2"
Note that --experiments=use_runner_v2 is added at the end, but the experiments flag does not show up in the pipeline options in the GCP UI.
The userAgent is Apache_Beam_SDK_for_Java/2.39.0(JRE_17_environment) and the region is europe-west1, which might be relevant to why my setup isn't working.
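One way to double-check what the service actually received is to inspect the submitted job's environment (a sketch only; JOB_ID is a placeholder and the --full flag should be verified against gcloud dataflow jobs describe --help):
# Sketch: look for use_runner_v2 in the job's experiments list.
gcloud dataflow jobs describe JOB_ID \
  --region=europe-west1 \
  --full \
  --format="yaml(environment.experiments)"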

Using gcloud beta builds triggers create cloud-source-repositories doesn't work with --dockerfile-image

I'm working on an auto DevOps workflow based only on the Dockerfile, using Cloud Build on GCP. When I run the following command, it seems the --dockerfile-image flag is not being used:
gcloud beta builds triggers create cloud-source-repositories \
--name="test-trigger-2" \
--repo="projects/nodrize-dev/repos/b722166a-56e0-46af-bd0d-42af8d37c570/bf11672f-34d5-4d8c-80cb-31120f39251a/quirino-backend" \
--branch-pattern="^master$" \
--dockerfile="Dockerfile" \
--dockerfile-dir="" \
--dockerfile-image="gcr.io/nodrize-dev/test-backend"
Created [https://cloudbuild.googleapis.com/v1/projects/nodrize-dev/triggers/896f8ac8-397c-464a-84f7-43e69f1bc6cb].
NAME CREATE_TIME STATUS
test-trigger-2 2021-06-02T21:06:54+00:00
I want to create the trigger now and run it later, but the last flag isn't working. I assume it is falling back to the default, because as you can see the image name is:
gcr.io/nodrize-dev/b722166a-56e0-46af-bd0d-42af8d37c570/bf11672f-34d5-4d8c-80cb-31120f39251a/quirino-backend:$COMMIT_SHA:
Docker image name shown in the GCP console:
I hope someone can help me or at least explain what is happening.
This works for me.
I suspect that either the trigger configuration is incorrect, the trigger is not firing, or the image you are looking at is not the one generated by this trigger.
PROJECT=...
REPO=...
gcloud source repos create ${REPO} \
--project=${PROJECT}
gcloud beta builds triggers create cloud-source-repositories \
--name="trigger" \
--project=${PROJECT} \
--repo=${REPO} \
--branch-pattern="^master$" \
--dockerfile="Dockerfile" \
--dockerfile-dir="." \
--dockerfile-image="gcr.io/${PROJECT}/freddie-01"
NAME CREATE_TIME STATUS
trigger 2021-06-03T15:24:27+00:00
git push google master
gcloud builds list \
--project=${PROJECT} \
--format="value(images)"
gcr.io/${PROJECT}/freddie-01:7dcf74e126af711d24bb2b652d86f0d28bbe3bd9
gcloud container images list \
--project=${PROJECT}
NAME
gcr.io/${PROJECT}/freddie-01
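To confirm that the --dockerfile-image value was actually stored on the trigger, it should also appear when you describe the trigger (a sketch; the exact output fields may vary with the gcloud version):
# Sketch: the trigger's generated build config should reference the configured image.
gcloud beta builds triggers describe trigger \
  --project=${PROJECT}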

Running a dataflow batch using flexRSGoal

I found this article about running a Dataflow batch job on preemptible machines.
I tried to use this feature with the following script:
gcloud beta dataflow jobs run $JOB_NAME \
--gcs-location gs://.../Datastore_to_Datastore_Delete \
--flexRSGoal=COST_OPTIMIZED \
--region ...1 \
--staging-location gs://.../temp \
--network XXX \
--subnetwork regions/...1/subnetworks/... \
--max-workers 1 \
--parameters \
datastoreReadGqlQuery="$QUERY",\
datastoreReadProjectId=$PROJECTID,\
datastoreDeleteProjectId=$PROJECTID
But this is the result:
ERROR: (gcloud.beta.dataflow.jobs.run) unrecognized arguments:
--flexRSGoal=COST_OPTIMIZED
To search the help text of gcloud commands, run: gcloud help -- SEARCH_TERMS
I ran the command gcloud beta dataflow jobs run help and it seems this flexRSGoal option is not there...
# gcloud version
Google Cloud SDK 319.0.0
alpha 2020.11.13
beta 2020.11.13
bq 2.0.62
core 2020.11.13
gsutil 4.55
kubectl 1.16.13
What am I missing?
Have you followed this? It seems that the correct flag should be:
--flexrs_goal=COST_OPTIMIZED
It seems the --flexrs_goal flag [1] is not intended for the gcloud beta dataflow jobs run command, but for the Java/Python command-line tools, for example the python3 -m ... commands like the ones in [2] (reading that doc in full is recommended).
So instead of using:
gcloud beta dataflow jobs run <job_name>
--flexRSGoal=COST_OPTIMIZED ...
Run:
python3 <my-pipeline-script.py> \
--flexrs_goal=COST_OPTIMIZED ...
If you prefer Java, just switch the --flexrs_goal flag to --flexRSGoal and follow [3] instead of [2].
[1] https://cloud.google.com/dataflow/docs/guides/flexrs#python
[2] https://cloud.google.com/dataflow/docs/quickstarts/quickstart-python#run-wordcount-on-the-dataflow-service
[3] https://cloud.google.com/dataflow/docs/quickstarts/quickstart-java-maven
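For the Java path, the pipeline option is passed like any other Beam option; here is a sketch based on the WordCount quickstart in [3] (project, bucket and region are placeholders):
# Sketch: FlexRS goal passed as a Beam pipeline option in a Java/Maven run.
mvn -Pdataflow-runner compile exec:java \
  -Dexec.mainClass=org.apache.beam.examples.WordCount \
  -Dexec.args="--project=PROJECT_ID \
  --gcpTempLocation=gs://BUCKET_NAME/temp/ \
  --output=gs://BUCKET_NAME/output \
  --runner=DataflowRunner \
  --region=REGION \
  --flexRSGoal=COST_OPTIMIZED"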

Adding a custom dependency won't work in ML-Engine submit training

I have a .sh script that launches a training job as follows:
now=$(date +"%Y%m%d_%H%M%S")
JOB_NAME="campign_retention_model__$now"
JOB_DIR="gs://machine_learning_datasets/campaign_retention"
REGION="us-east1"
PYTHON_VERSION='3.5'
RUNTIME_VERSION='1.12'
TRAINER_PACKAGE_PATH="./trainer/"
PACKAGE_STAGING_PATH="gs://machine_learning_datasets/campaign_retention"
CLOUDSDK_PYTHON="/usr/bin/python"
MAIN_TRAINER_MODULE="trainer.task"
gcloud ml-engine jobs submit training $JOB_NAME \
--job-dir $JOB_DIR \
--package-path $TRAINER_PACKAGE_PATH \
--module-name $MAIN_TRAINER_MODULE \
--region $REGION \
--runtime-version=$RUNTIME_VERSION \
--python-version=$PYTHON_VERSION
This works great (notice that the .sh file is located next to the trainer dir).
Due to external infra requirements, I was forced to save the contents of my project in a bucket named:
"gs://campign_retention_code/camp_ret"
and to hand out a standalone .sh, so I just changed the script as follows (only the TRAINER_PACKAGE_PATH changed):
now=$(date +"%Y%m%d_%H%M%S")
JOB_NAME="campign_retention_model__$now"
JOB_DIR="gs://machine_learning_datasets/campaign_retention"
REGION="us-east1"
PYTHON_VERSION='3.5'
RUNTIME_VERSION='1.12'
TRAINER_PACKAGE_PATH="gs://campign_retention_code/camp_ret/trainer"
PACKAGE_STAGING_PATH="gs://machine_learning_datasets/campaign_retention"
CLOUDSDK_PYTHON="/usr/bin/python"
MAIN_TRAINER_MODULE="trainer.task"
gcloud ml-engine jobs submit training $JOB_NAME \
--job-dir $JOB_DIR \
--package-path $TRAINER_PACKAGE_PATH \
--module-name $MAIN_TRAINER_MODULE \
--region $REGION \
--runtime-version=$RUNTIME_VERSION \
--python-version=$PYTHON_VERSION
Now when I run it (I moved it to a different location, /Users/yehoshaphatschellekens/Desktop, to make sure it's not close to my project), I get the following error:
ERROR: (gcloud.ml-engine.jobs.submit.training) Source directory [/Users/yehoshaphatschellekens/Desktop/camp_ret] is not a valid directory.
Looking at the packaging-trainer docs, I noticed there are two examples: one that works like my original script (which, as I said, works perfectly) and another that uses a packaged dependency.
Why won't the submit job recognize my dependencies on GCS? Can't I just point --package-path at a directory on GCS instead of my local dir?
Thanks in advance!
I believe what you are trying to do requires using
--packages gs://path/to/packages
INSTEAD of --package-path
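A minimal sketch of that flow, assuming the trainer directory has a setup.py (the tarball name below is illustrative):
# Sketch: build a source distribution, upload it to GCS, and pass the tarball
# to --packages instead of pointing --package-path at a GCS directory.
python setup.py sdist --formats=gztar
gsutil cp dist/trainer-0.1.tar.gz gs://campign_retention_code/camp_ret/
gcloud ml-engine jobs submit training $JOB_NAME \
  --job-dir $JOB_DIR \
  --packages gs://campign_retention_code/camp_ret/trainer-0.1.tar.gz \
  --module-name $MAIN_TRAINER_MODULE \
  --region $REGION \
  --runtime-version=$RUNTIME_VERSION \
  --python-version=$PYTHON_VERSION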

ERROR: gcloud crashed (ArgumentError): argument USER_ARGS: unrecognized args: --runtime_version=1.0

The script below was running fine until yesterday morning.
gcloud ml-engine jobs submit training "$JOB_ID" \
--module-name trainer.task \
--package-path trainer \
--staging-bucket "$BUCKET" \
--region us-central1 \
--runtime_version=1.0 \
-- \
--output_path "${GCS_PATH}/training" \
--eval_data_paths "${GCS_PATH}/preproc/eval*" \
--train_data_paths "${GCS_PATH}/preproc/train*" \
--classification_type "multilabel" \
Now it fails with the error below:
ERROR: gcloud crashed (ArgumentError): argument USER_ARGS: unrecognized args: --runtime_version=1.0
The '--' argument must be specified between gcloud specific args on the left and USER_ARGS on the right.
Below are the gcloud component versions:
$ gcloud version
Google Cloud SDK 147.0.0
alpha 2016.01.12
app-engine-go
app-engine-go-linux-x86_64 1.9.50
app-engine-java 1.9.50
app-engine-php " "
app-engine-python 1.9.50
beta 2016.01.12
bq 2.0.24
bq-nix 2.0.24
cloud-datastore-emulator 1.2.1
core 2017.03.13
alpha 2016.01.12
core-nix 2016.11.07
datalab 20170309
datalab-nix 20170105
gcd-emulator v1beta3-1.0.0
gcloud
gcloud-deps 2017.03.13
gcloud-deps-linux-x86_64 2017.02.21
gsutil 4.22
gsutil-nix 4.18
kubectl
kubectl-linux-x86_64 1.5.3
pubsub-emulator 2017.02.07
I'm not sure whether something changed on the Cloud side, or whether I need to check some config on my end that may be causing this error.
You might need to use --runtime-version as the name of the argument (hyphen instead of underscore).
Without that, gcloud assumes it's some custom user-defined argument, which it expects to appear in the list after the '--', hence the confusing error message.
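For completeness, the corrected invocation would look something like this (the same command, with the hyphenated flag kept on the gcloud side of the --):
# gcloud-level flags use hyphens and go before the bare `--`;
# everything after `--` is passed through to the trainer as user args.
gcloud ml-engine jobs submit training "$JOB_ID" \
  --module-name trainer.task \
  --package-path trainer \
  --staging-bucket "$BUCKET" \
  --region us-central1 \
  --runtime-version 1.0 \
  -- \
  --output_path "${GCS_PATH}/training" \
  --eval_data_paths "${GCS_PATH}/preproc/eval*" \
  --train_data_paths "${GCS_PATH}/preproc/train*" \
  --classification_type "multilabel"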