I'm trying out Cloud Run on GKE and was wondering about this error:
ERROR: (gcloud.beta.container.clusters.create) argument --addons:
CloudRun must be one of [HttpLoadBalancing, HorizontalPodAutoscaling,
KubernetesDashboard, Istio, NetworkPolicy]
It seems it won't let me use CloudRun as an addon, even though I'm just creating a cluster.
The whole command is:
gcloud beta container clusters create testcloudrun \
--addons=HorizontalPodAutoscaling,HttpLoadBalancing,Istio,CloudRun \
--machine-type=n1-standard-4 \
--cluster-version=1.12.6-gke.16 --zone=us-central1-a \
--enable-stackdriver-kubernetes --enable-ip-alias \
--scopes cloud-platform
I'm just following the quick-start from the docs:
https://cloud.google.com/run/docs/quickstarts/prebuilt-deploy-gke
~~
Update:
I've tried creating a cluster via Cloud Console, and I'm getting an error:
Horizontal pod autoscaling must be enabled in order to enable the Cloud Run addon.
This is a known issue as well:
https://cloud.google.com/run/docs/issues
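For what it's worth, one hedged first check (my assumption, not something from the quickstart): if the local gcloud SDK predates the Cloud Run on GKE beta, the --addons parser may simply not list CloudRun as a valid value yet, so updating the components before retrying is cheap:
gcloud version
gcloud components update
# then re-run the clusters create command above, keeping HorizontalPodAutoscaling
# in --addons, since the Cloud Run addon requires it (see the known issue above)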
I have a Dataproc cluster with image version 2.0.39-ubuntu18, and it seems to be putting all logs into Cloud Logging, which is increasing our costs a lot.
To stop using Cloud Logging I added the following properties - spark:spark.eventLog.dir=gs://dataproc-spark-logs/joblogs,spark:spark.history.fs.logDirectory=gs://dataproc-spark-logs/joblogs - however that is not working; logs are still being redirected to Cloud Logging as well.
Here is the command used to create the Dataproc cluster:
REGION=us-east1
ZONE=us-east1-b
IMG_VERSION=2.0-ubuntu18
NUM_WORKER=3
# in versa-sml-googl
gcloud beta dataproc clusters create $CNAME \
--enable-component-gateway \
--bucket $BUCKET \
--region $REGION \
--zone $ZONE \
--no-address --master-machine-type $TYPE \
--master-boot-disk-size 100 \
--master-boot-disk-type pd-ssd \
--num-workers $NUM_WORKER \
--worker-machine-type $TYPE \
--worker-boot-disk-type pd-ssd \
--worker-boot-disk-size 500 \
--image-version $IMG_VERSION \
--autoscaling-policy versa-dataproc-autoscaling \
--scopes 'https://www.googleapis.com/auth/cloud-platform' \
--project $PROJECT \
--initialization-actions 'gs://dataproc-spark-configs/pip_install.sh','gs://dataproc-spark-configs/connectors-feb1.sh' \
--metadata 'gcs-connector-version=2.0.0' \
--metadata 'bigquery-connector-version=1.2.0' \
--properties 'dataproc:dataproc.logging.stackdriver.job.driver.enable=true,dataproc:job.history.to-gcs.enabled=true,spark:spark.dynamicAllocation.enabled=false,spark:spark.executor.instances=6,spark:spark.executor.cores=2,spark:spark.eventLog.dir=gs://dataproc-spark-logs/joblogs,spark:spark.history.fs.logDirectory=gs://dataproc-spark-logs/joblogs,spark:spark.jars.packages=org.apache.spark:spark-sql-kafka-0-10_2.12:3.1.2'
We have another Dataproc cluster (image version 1.4.37-ubuntu18, with a configuration similar to the 2.0-ubuntu18 cluster), which does not seem to use Cloud Logging nearly as much.
Attached is a screenshot of the properties of both clusters.
What do I need to change to ensure the Dataproc (PySpark) jobs do not use Cloud Logging?
Thanks in advance!
I saw dataproc:dataproc.logging.stackdriver.job.driver.enable is set to true. By default, the value is false, which means driver logs will be saved to GCS and streamed back to the client for viewing, but they won't be saved to Cloud Logging. You can try disabling it. By the way, when it is enabled, the job driver logs will be available in Cloud Logging under the job resource (instead of the cluster resource).
If you want to disable Cloud Logging completely for a cluster, you can either add dataproc:dataproc.logging.stackdriver.enable=false when creating the cluster or write an init action with systemctl stop google-fluentd.service. Both will stop Cloud Logging on the cluster's side, but using the property is recommended.
See Dataproc cluster properties for the property.
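As a minimal sketch (reusing the variable names from the create command above, and assuming the rest of the flags stay the same), the property-based approach just adds the two logging properties, while the init-action alternative is essentially a one-line script:
gcloud dataproc clusters create $CNAME \
  --region $REGION \
  --properties 'dataproc:dataproc.logging.stackdriver.enable=false,dataproc:dataproc.logging.stackdriver.job.driver.enable=false' \
  ... other flags
# or, as an init action script staged in GCS, the equivalent of:
systemctl stop google-fluentd.service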
Here is the update on this (based on discussions with GCP Support):
In GCP Logging, we need to create a log routing sink with an inclusion filter - this will write the logs to BigQuery or Cloud Storage, depending upon the target you specify.
Additionally, the _Default sink needs to be modified to add exclusion filters so those specific logs will NOT be redirected to GCP Logging.
Attached are screenshots of the _Default log sink and the inclusion sink for Dataproc.
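For anyone scripting this, a hedged sketch of the two steps with gcloud (the sink name, bucket, and filter below are placeholders, not the exact ones from the support discussion):
# route Dataproc logs to a Cloud Storage bucket instead of keeping them in Logging
gcloud logging sinks create dataproc-job-logs \
  storage.googleapis.com/my-dataproc-log-archive \
  --log-filter='resource.type="cloud_dataproc_cluster"'
# stop the _Default sink from also ingesting the same logs
gcloud logging sinks update _Default \
  --add-exclusion=name=exclude-dataproc,filter='resource.type="cloud_dataproc_cluster"'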
Is it possible to deploy the HBase component on Google Cloud Platform? If so, how do I manage ACLs?
You can deploy it with the help of Dataproc.
Here is a complete guide; you can deploy it with the following gcloud command:
gcloud beta dataproc clusters create cluster-name \
--optional-components=HBASE,ZOOKEEPER \
--region=region \
--image-version=1.5 \
--enable-component-gateway \
... other flags
Please keep in mind that the HBase component requires the ZooKeeper component, so you should keep it in the --optional-components flag.
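If it helps, one way to double-check which optional components actually landed on the cluster (a hedged sketch; cluster-name and region are the same placeholders as in the command above):
gcloud dataproc clusters describe cluster-name \
  --region=region \
  --format='value(config.softwareConfig.optionalComponents)'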
I'm trying to connect a Cloud SQL instance to a Cloud Run service. Using fully managed Cloud Run works fine, but when I try to connect the service via Anthos (which is required, as we need to use websockets on the services) I just get ENOENT (no entry). I've updated IAM for GKE with the correct permissions and recreated the cluster with all services enabled.
Here's the deploy command I am using:
gcloud run deploy \
--project ${GOOGLE_PROJECT_ID} \
--platform gke \
--cluster dev \
--cluster-location ${GOOGLE_COMPUTE_ZONE} \
--image gcr.io/${GOOGLE_PROJECT_ID}/${PROJECT_NAME} \
--set-cloudsql-instances "${GOOGLE_PROJECT_ID}:europe-west1:dev" \
--set-env-vars "$(tr '\n' ',' < "${ENV_KEY_PRODUCTION}")" \
--set-env-vars "SERVICE=${1}" \
--set-env-vars "DB_HOST=/cloudsql/${GOOGLE_PROJECT_ID}:europe-west1:dev" \
"${1}"
If I use the private IP of the SQL instance, remove --set-cloudsql-instances, and set DB_HOST to that private IP, it works.
But shouldn't adding --set-cloudsql-instances create a sidecar for the service in the GKE cluster and allow it to connect to Cloud SQL?
The documentation isn't clear: the parameter --set-cloudsql-instances is only available for the Cloud Run managed version. The first sentence of the section is important, and the limitation is not obvious in the doc:
Only applicable if connecting to Cloud Run (fully managed). Specify --platform=managed to use:
--[no-]allow-unauthenticated
Whether to enable allowing unauthenticated access to the service. This may take a few moments to take effect. Use --allow-unauthenticated to enable and --no-allow-unauthenticated to disable.
--clear-vpc-connector
Remove the VPC connector for this Service.
--revision-suffix=REVISION_SUFFIX
Specify the suffix of the revision name. Revision names always start with the service name automatically. For example, specifying [--revision-suffix=v1] for a service named 'helloworld', would lead to a revision named 'helloworld-v1'.
--vpc-connector=VPC_CONNECTOR
Set a VPC connector for this Service.
These flags modify the Cloud SQL instances this Service connects to. You can specify a name of a Cloud SQL instance if it's in the same project and region as your Cloud Run service; otherwise specify <project>:<region>:<instance> for the instance. At most one of these may be specified:
--add-cloudsql-instances=[CLOUDSQL-INSTANCES,…]
Append the given values to the current Cloud SQL instances.
--clear-cloudsql-instances
Empty the current Cloud SQL instances.
--remove-cloudsql-instances=[CLOUDSQL-INSTANCES,…]
Remove the given values from the current Cloud SQL instances.
--set-cloudsql-instances=[CLOUDSQL-INSTANCES,…]
Completely replace the current Cloud SQL instances with the given values.
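So on the GKE/Anthos platform the workaround described in the question is the practical route for now: drop the Cloud SQL flags and point the service at the instance's private IP. A hedged sketch (the IP is a placeholder):
gcloud run deploy \
  --project ${GOOGLE_PROJECT_ID} \
  --platform gke \
  --cluster dev \
  --cluster-location ${GOOGLE_COMPUTE_ZONE} \
  --image gcr.io/${GOOGLE_PROJECT_ID}/${PROJECT_NAME} \
  --set-env-vars "DB_HOST=10.0.0.3" \
  "${1}"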
I am trying to submit a dataproc job on a cluster running Presto with the postgresql connector.
The cluster is initialized as follows:
gcloud beta dataproc clusters create ${CLUSTER_NAME} \
--project=${PROJECT} \
--region=${REGION} \
--zone=${ZONE} \
--bucket=${BUCKET_NAME} \
--num-workers=${WORKERS} \
--scopes=cloud-platform \
--initialization-actions=${INIT_ACTION}
${INIT_ACTION} points to a bash file with the initialization actions for starting a Presto cluster with PostgreSQL.
I do not use --optional-components=PRESTO since I need --initialization-actions to perform non-default operations, and having both --optional-components and --initialization-actions does not work.
When I try to run a simple job:
gcloud beta dataproc jobs submit presto \
--cluster ${CLUSTER_NAME} \
--region ${REGION} \
-e "SHOW TABLES"
I get the following error:
ERROR: (gcloud.beta.dataproc.jobs.submit.presto) FAILED_PRECONDITION: Cluster
'<cluster-name>' requires optional component PRESTO to run PRESTO jobs
Is there some other way to define the optional component on the cluster?
UPDATE:
Using both --optional-components and --initialization-actions, as in:
gcloud beta dataproc clusters create ${CLUSTER_NAME} \
...
--scopes=cloud-platform \
--optional-components=PRESTO \
--image-version=1.3 \
--initialization-actions=${INIT_ACTION} \
--metadata ...
The ${INIT_ACTION} is copied from this repo, with a slight modification to the function configure_connectors to create a postgresql connector.
When running the create cluster the following error is given:
ERROR: (gcloud.beta.dataproc.clusters.create) Operation [projects/...] failed: Initialization action failed. Failed action 'gs://.../presto_config.sh', see output in: gs://.../dataproc-initialization-script-0_output.
The error output is logged as:
+ presto '--execute=select * from system.runtime.nodes;'
Error running command: java.net.ConnectException: Failed to connect to localhost/0:0:0:0:0:0:0:1:8080
This leads me to believe I have to re-write the initialization script.
It would be nice to know which initialization script is running when I specify --optional-components=PRESTO.
If all you want to do is set up the optional component to work with a PostgreSQL endpoint, writing an init action for it is pretty easy: you just have to add the catalog file and restart Presto.
https://gist.github.com/KoopaKing/8e653e0c8d095323904946045c5fa4c2 is an example init action. I have tested it successfully with the Presto optional component, and it is pretty simple. Feel free to fork the example and stage it in your GCS bucket.
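A minimal sketch of that idea (the catalog directory, service name, host, and credentials below are assumptions, not taken from the gist): drop a PostgreSQL catalog file into Presto's catalog directory and restart the service.
#!/bin/bash
# init action: register a PostgreSQL catalog with the Presto optional component
# (catalog path and service name are assumed for the Dataproc image used here)
cat >/etc/presto/conf/catalog/postgresql.properties <<'EOF'
connector.name=postgresql
connection-url=jdbc:postgresql://10.0.0.5:5432/mydb
connection-user=presto
connection-password=changeme
EOF
systemctl restart presto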
I'm new to Google Cloud and I'm trying to experiment with it.
I can see that preparing scripts is pretty much vital if I want to create and delete clusters every day.
For Dataproc clusters, it's easy:
gcloud dataproc clusters create spark-6-m \
--async \
--project=my-project-id \
--region=us-east1 \
--zone=us-east1-b \
--bucket=my-project-bucket \
--image-version=1.2 \
--num-masters=1 \
--master-boot-disk-size=10GB \
--master-machine-type=n1-standard-1 \
--worker-boot-disk-size=10GB \
--worker-machine-type=n1-standard-1 \
--num-workers=6 \
--initialization-actions=gs://dataproc-initialization-actions/jupyter2/jupyter2.sh
Now I'd like to create a Cassandra cluster. I see that Cloud Launcher allows me to do that easily too, but I can't find a gcloud command to automate it.
Is there a way to create Cloud Launcher product clusters via gcloud?
Thanks
Cloud Launcher deployments can be replicated from the Cloud Shell using Custom Deployments [1].
Once the Cloud Launcher deployment (in this case a Cassandra cluster) is finished the details of the deployment can be seen in the Deployment Manager [2].
The deployment details have an Overview section with the configuration and the imported files used for the deployment process. Download the “Expanded Config” file; this will be the .yaml file for the custom deployment [3]. Download the imports files to the same directory as the .yaml file to be able to deploy correctly [4].
These files and this configuration will create a deployment equivalent to the Cloud Launcher one.
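As a hedged sketch of the final step (the deployment name and file name are placeholders), re-creating the deployment from the downloaded files would look something like this, run from the directory containing the expanded config and its imports:
gcloud deployment-manager deployments create my-cassandra-cluster \
  --config expanded-config.yaml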