GCP Composer creation failed with bad request

I'm trying to create a GCP Composer environment with the gcloud CLI:
gcloud composer environments create "jakub" \
--project "projectX" \
--location "us-central1" \
--zone "us-central1-a" \
--disk-size 50GB \
--node-count 3 \
--image-version composer-1.7.1-airflow-1.10.2 \
--machine-type n1-standard-2 \
--python-version 3 \
--labels env="test"
After about an hour, it fails with this error:
f7b3f4-6b95-4fb0-85e3-f39a2b11cec9] failed: Http error status code: 400
Http error message: BAD REQUEST
Additional errors:
{"ResourceType":"appengine.v1.version","ResourceErrorCode":"400","ResourceErrorMessage":{"code":400,"message":"Legacy health checks are no longer supported for the App Engine Flexible environment. Please remove the 'health_check' section from your app.yaml and configure updated health checks. For instructions on migrating to split health checks see https://cloud.google.com/appengine/docs/flexible/java/migrating-to-split-health-checks","status":"INVALID_ARGUMENT","details":[],"statusMessage":"Bad Request","requestPath":"https://appengine.googleapis.com/v1/apps/vd41e6cad4ccb2a7b-tp/services/default/versions","httpMethod":"POST"}}
The command is based on https://cloud.google.com/sdk/gcloud/reference/composer/environments/create.

This happens because you are trying to use an old image version: Composer 1.7.1 environments deploy an App Engine Flexible component that still configures the legacy health checks the error complains about. Retry the environment creation with a newer image version from the list of supported Cloud Composer versions.
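For example, a sketch of the same command with a newer image (composer-1.17.9-airflow-1.10.15 is only an illustration here; pick a version that actually appears on the supported-versions page when you run it):
gcloud composer environments create "jakub" \
--project "projectX" \
--location "us-central1" \
--zone "us-central1-a" \
--disk-size 50GB \
--node-count 3 \
--image-version composer-1.17.9-airflow-1.10.15 \
--machine-type n1-standard-2 \
--python-version 3 \
--labels env="test"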

Related

GCP Cloud Logging cost increasing with Dataproc image version 2.0.39-ubuntu18

I have a Dataproc cluster with image version 2.0.39-ubuntu18 that appears to be sending all of its logs to Cloud Logging, which is increasing our costs a lot.
To stop using Cloud Logging, I added spark:spark.eventLog.dir=gs://dataproc-spark-logs/joblogs and spark:spark.history.fs.logDirectory=gs://dataproc-spark-logs/joblogs to the cluster properties, but that is not working: logs are still being redirected to Cloud Logging as well.
Here is the command used to create the Dataproc cluster:
REGION=us-east1
ZONE=us-east1-b
IMG_VERSION=2.0-ubuntu18
NUM_WORKER=3
# in versa-sml-googl
gcloud beta dataproc clusters create $CNAME \
--enable-component-gateway \
--bucket $BUCKET \
--region $REGION \
--zone $ZONE \
--no-address --master-machine-type $TYPE \
--master-boot-disk-size 100 \
--master-boot-disk-type pd-ssd \
--num-workers $NUM_WORKER \
--worker-machine-type $TYPE \
--worker-boot-disk-type pd-ssd \
--worker-boot-disk-size 500 \
--image-version $IMG_VERSION \
--autoscaling-policy versa-dataproc-autoscaling \
--scopes 'https://www.googleapis.com/auth/cloud-platform' \
--project $PROJECT \
--initialization-actions 'gs://dataproc-spark-configs/pip_install.sh','gs://dataproc-spark-configs/connectors-feb1.sh' \
--metadata 'gcs-connector-version=2.0.0' \
--metadata 'bigquery-connector-version=1.2.0' \
--properties 'dataproc:dataproc.logging.stackdriver.job.driver.enable=true,dataproc:job.history.to-gcs.enabled=true,spark:spark.dynamicAllocation.enabled=false,spark:spark.executor.instances=6,spark:spark.executor.cores=2,spark:spark.eventLog.dir=gs://dataproc-spark-logs/joblogs,spark:spark.history.fs.logDirectory=gs://dataproc-spark-logs/joblogs,spark:spark.jars.packages=org.apache.spark:spark-sql-kafka-0-10_2.12:3.1.2'
We have another Dataproc cluster (image version 1.4.37-ubuntu18) with a similar configuration, and it does not seem to use Cloud Logging nearly as much.
Attached are screenshots of the properties of both clusters.
What do I need to change to ensure the Dataproc (PySpark) jobs do not use Cloud Logging?
Thanks in advance!
I saw dataproc:dataproc.logging.stackdriver.job.driver.enable is set to true. By default the value is false, which means driver logs are saved to GCS and streamed back to the client for viewing, but they won't be saved to Cloud Logging. You can try disabling it. BTW, when it is enabled, the job driver logs will be available in Cloud Logging under the job resource (instead of the cluster resource).
If you want to disable Cloud Logging completely for a cluster, you can either add dataproc:dataproc.logging.stackdriver.enable=false when creating the cluster or write an init action that runs systemctl stop google-fluentd.service. Both will stop Cloud Logging on the cluster's side, but using the property is recommended.
See Dataproc cluster properties for the property.
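For example, a minimal sketch of the property-based approach, reusing the variables from the create command above (every other flag stays as it was, following the same "..." elision style as the commands in this thread):
gcloud beta dataproc clusters create $CNAME \
--region $REGION \
--properties 'dataproc:dataproc.logging.stackdriver.enable=false,dataproc:dataproc.logging.stackdriver.job.driver.enable=false,spark:spark.eventLog.dir=gs://dataproc-spark-logs/joblogs,spark:spark.history.fs.logDirectory=gs://dataproc-spark-logs/joblogs' \
... # remaining flags as in the original create command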
Here is the update on this (based on discussions with GCP Support):
In Cloud Logging, we need to create a log routing sink with an inclusion filter; this writes the logs to BigQuery or Cloud Storage, depending on the target you specify.
Additionally, the _Default sink needs to be modified to add exclusion filters so that those specific logs are NOT redirected to Cloud Logging.
Attached are screenshots of the _Default log sink and the inclusion sink for Dataproc.
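For reference, a rough sketch of those two steps with gcloud (the sink name and filter are placeholders; adjust the filter to match the logs you want to capture):
# Route Dataproc cluster logs to a Cloud Storage bucket instead of Cloud Logging.
# (gcloud prints a sink service account that must be granted write access to the bucket.)
gcloud logging sinks create dataproc-logs-sink \
storage.googleapis.com/dataproc-spark-logs \
--log-filter='resource.type="cloud_dataproc_cluster"'
# Keep the same logs out of the _Default sink.
gcloud logging sinks update _Default \
--add-exclusion=name=exclude-dataproc,filter='resource.type="cloud_dataproc_cluster"'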

Submit Presto job on Dataproc

I am trying to submit a Dataproc job on a cluster running Presto with the PostgreSQL connector.
The cluster is initialized as follows:
gcloud beta dataproc clusters create ${CLUSTER_NAME} \
--project=${PROJECT} \
--region=${REGION} \
--zone=${ZONE} \
--bucket=${BUCKET_NAME} \
--num-workers=${WORKERS} \
--scopes=cloud-platform \
--initialization-actions=${INIT_ACTION}
${INIT_ACTION} points to a bash file with the initialization actions for starting a Presto cluster with PostgreSQL.
I do not use --optional-components=PRESTO, since I need --initialization-actions to perform non-default operations, and having both --optional-components and --initialization-actions does not work.
When I try to run a simple job:
gcloud beta dataproc jobs submit presto \
--cluster ${CLUSTER_NAME} \
--region ${REGION} \
-e "SHOW TABLES"
I get the following error:
ERROR: (gcloud.beta.dataproc.jobs.submit.presto) FAILED_PRECONDITION: Cluster
'<cluster-name>' requires optional component PRESTO to run PRESTO jobs
Is there some other way to define the optional component on the cluster?
UPDATE:
Using both --optional-components and --initialization-actions, as in:
gcloud beta dataproc clusters create ${CLUSTER_NAME} \
...
--scopes=cloud-platform \
--optional-components=PRESTO \
--image-version=1.3 \
--initialization-actions=${INIT_ACTION} \
--metadata ...
The ${INIT_ACTION} is copied from this repo, with a slight modification to the configure_connectors function to create a PostgreSQL connector.
When running the create cluster the following error is given:
ERROR: (gcloud.beta.dataproc.clusters.create) Operation [projects/...] failed: Initialization action failed. Failed action 'gs://.../presto_config.sh', see output in: gs://.../dataproc-initialization-script-0_output.
The error output is logged as:
+ presto '--execute=select * from system.runtime.nodes;'
Error running command: java.net.ConnectException: Failed to connect to localhost/0:0:0:0:0:0:0:1:8080
This leads me to believe I have to rewrite the initialization script.
It would be nice to know which initialization script runs when I specify --optional-components=PRESTO.
If all you want to do is set up the optional component to work with a Postgres endpoint, writing an init action to do it is pretty easy: you just have to add the catalog file and restart Presto.
https://gist.github.com/KoopaKing/8e653e0c8d095323904946045c5fa4c2
is an example init action. I have tested it successfully with the Presto optional component, and it is pretty simple. Feel free to fork the example and stage it in your GCS bucket.
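For what it's worth, here is a minimal sketch of such an init action (the connection values are placeholders, and the catalog directory and service name assume the Presto optional component's defaults, so verify them on your image version):
#!/bin/bash
# Runs on every node: Presto workers need the catalog file too.
set -euxo pipefail
cat >/etc/presto/conf/catalog/postgresql.properties <<EOF
connector.name=postgresql
connection-url=jdbc:postgresql://your-host:5432/your-db
connection-user=your-user
connection-password=your-password
EOF
systemctl restart presto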

Can't add CloudRun as add-on when creating a GKE cluster

I'm trying out Cloud Run with GKE and was wondering about this error:
ERROR: (gcloud.beta.container.clusters.create) argument --addons:
CloudRun must be one of [HttpLoadBalancing, HorizontalPodAutoscaling,
KubernetesDashboard, Istio, NetworkPolicy]
It seems it won't let me use CloudRun as an add-on when creating a cluster.
Whole command is:
gcloud beta container clusters create testcloudrun \
--addons=HorizontalPodAutoscaling,HttpLoadBalancing,Istio,CloudRun \
--machine-type=n1-standard-4 \
--cluster-version=1.12.6-gke.16 --zone=us-central1-a \
--enable-stackdriver-kubernetes --enable-ip-alias \
--scopes cloud-platform
I'm just following the quick-start from the docs:
https://cloud.google.com/run/docs/quickstarts/prebuilt-deploy-gke
Update:
I've tried creating a cluster via Cloud Console, and I'm getting an error:
Horizontal pod autoscaling must be enabled in order to enable the Cloud Run addon.
Which is a known issue as well:
https://cloud.google.com/run/docs/issues
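One more thing worth trying (an assumption on my part, not something confirmed in this thread): the list of allowed --addons values in the first error is validated by the locally installed SDK, so an outdated gcloud may simply not know about CloudRun yet. Updating it is cheap:
gcloud components update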

How to keep Google Dataproc master running?

I created a cluster on Dataproc and it works great. However, after the cluster is idle for a while (~90 minutes), the master node automatically stops. This happens with every cluster I create. I see there is a similar question here: Keep running Dataproc Master node
It looks like it's an initialization action problem; however, that post does not give me enough info to fix the issue. Below is the command I used to create the cluster:
gcloud dataproc clusters create $CLUSTER_NAME \
--project $PROJECT \
--bucket $BUCKET \
--region $REGION \
--zone $ZONE \
--master-machine-type $MASTER_MACHINE_TYPE \
--master-boot-disk-size $MASTER_DISK_SIZE \
--worker-boot-disk-size $WORKER_DISK_SIZE \
--num-workers=$NUM_WORKERS \
--initialization-actions gs://dataproc-initialization-actions/connectors/connectors.sh,gs://dataproc-initialization-actions/datalab/datalab.sh \
--metadata gcs-connector-version=$GCS_CONNECTOR_VERSION \
--metadata bigquery-connector-version=$BQ_CONNECTOR_VERSION \
--scopes cloud-platform \
--metadata JUPYTER_CONDA_PACKAGES=numpy:scipy:pandas:scikit-learn \
--optional-components=ANACONDA,JUPYTER \
--image-version=1.3
I need the BigQuery connector, GCS connector, Jupyter and DataLab for my cluster.
How can I keep my master node running? Thank you.
As summarized in the comment thread, this is indeed caused by Datalab's auto-shutdown feature. There are a couple ways to change this behavior:
Upon first creating the Datalab-enabled Dataproc cluster, log in to Datalab and click on the "Idle timeout in about ..." text to disable it (see https://cloud.google.com/datalab/docs/concepts/auto-shutdown#disabling_the_auto_shutdown_timer). The text will change to "Idle timeout is disabled".
Edit the initialization action to set the environment variable as suggested by yelsayed:
function run_datalab(){
  if docker run -d --restart always --net=host \
      -e "DATALAB_DISABLE_IDLE_TIMEOUT_PROCESS=true" \
      -v "${DATALAB_DIR}:/content/datalab" ${VOLUME_FLAGS} datalab-pyspark; then
    echo 'Cloud Datalab Jupyter server successfully deployed.'
  else
    err 'Failed to run Cloud Datalab'
  fi
}
Then use your custom initialization action instead of the stock gs://dataproc-initialization-actions one. It could also be worth filing a tracking issue in the GitHub repo for Dataproc initialization actions, suggesting that the timeout be disabled by default or exposed via an easy metadata-based option. The auto-shutdown behavior is arguably not what you'd expect in default usage on a Dataproc cluster, since the master also performs roles other than running the Datalab service.
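As a sketch of that second option (the bucket name is a placeholder): copy the stock datalab.sh, apply the change above, stage it in your own bucket, and point the cluster at your copy:
gsutil cp gs://dataproc-initialization-actions/datalab/datalab.sh .
# edit run_datalab as shown above, then:
gsutil cp datalab.sh gs://my-bucket/datalab-no-timeout.sh
gcloud dataproc clusters create $CLUSTER_NAME \
--initialization-actions gs://dataproc-initialization-actions/connectors/connectors.sh,gs://my-bucket/datalab-no-timeout.sh \
... # remaining flags as in the original create command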

GCP Dataproc create cluster with GPUs error

The GCP Dataproc service now supports creating a cluster with GPUs as a beta feature. The problem I ran into is that when I try to specify the GPU type, the gcloud command line does not recognize the type I specified.
The gcloud command I use is shown below.
gcloud beta dataproc clusters create gpu-cluster \
--zone us-east1-b \
--master-machine-type n1-standard-4 \
--master-boot-disk-size 100 \
--num-workers 2 \
--worker-machine-type n1-standard-1 \
--worker-boot-disk-size 50 \
--initialization-actions gs://15418-initial-script/initialize_cluster.sh \
--worker-accelerator type=nvidia-tesla-p100,count=1
It returned:
ERROR: (gcloud.beta.dataproc.clusters.create) INVALID_ARGUMENT: Insufficient 'NVIDIA_K80_GPUS' quota. Requested 2.0, available 0.0.
Does anyone know what happened? Am I using the wrong command, or is there something wrong with the gcloud command line?
You may need to request quota for the GPUs:
Check the quotas page for your project to ensure that you have sufficient GPU quota (NVIDIA_K80_GPUS or NVIDIA_P100_GPUS) available in your project. If GPUs are not listed on the quotas page or you require additional GPU quota, request a quota increase.
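A quick way to check those quotas from the CLI (a sketch; the grep just narrows the region's JSON description to the GPU entries):
gcloud compute regions describe us-east1 --format=json | grep -B1 -A1 GPUS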