I have been trying to increase the memory allocation for some of my Cloud Functions for the past few hours. If I change it and deploy, the memory allocation stays at 512 MiB. It was working when I tried it a few days ago.
This is what I am doing:
Click Edit on the function
Change Memory allocated to 2 GiB and click Next and Deploy
The memory allocated remains 512 MiB after deploying
What am I doing wrong? Can someone help me out on this, please?
I'm unable to reproduce your experience using the Cloud Console and gcloud.
gcloud functions describe ${NAME} \
--region=${REGION} \
--project=${PROJECT} \
--format="value(availableMemoryMb)"
256
Then revise it in the Console to 512MB and:
gcloud functions describe ${NAME} \
--region=${REGION} \
--project=${PROJECT} \
--format="value(availableMemoryMb)"
512
Then revise it to 1024MB using gcloud functions deploy:
gcloud functions deploy ${NAME} \
--trigger-http \
--entry-point=${FUNCTION} \
--region=${REGION} \
--project=${PROJECT} \
--memory=1024MB
gcloud functions describe ${NAME} \
--region=${REGION} \
--project=${PROJECT} \
--format="value(availableMemoryMb)"
1024
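If the Console keeps reverting, you could also set the 2 GiB from the question directly with gcloud; this is just the same deploy command as above with a different --memory value:
gcloud functions deploy ${NAME} \
--trigger-http \
--entry-point=${FUNCTION} \
--region=${REGION} \
--project=${PROJECT} \
--memory=2048MB
Re-running the describe command above should then report 2048.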
I am trying to create an instance template where an instance created with this template automatically gets a public IPv4 address assigned.
Currently I am using something like the following gcloud command:
gcloud compute instance-templates create TEMPLATENAME \
--project=PROJECT \
--machine-type=e2-small \
--network-interface=network=default,network-tier=PREMIUM \
--maintenance-policy=MIGRATE --provisioning-model=STANDARD \
--service-account=SERVICE_ACCOUNT \
--scopes=https://www.googleapis.com/auth/cloud-platform \
--tags=http-server,https-server \
--create-disk=CREATE_DISK \
--no-shielded-secure-boot \
--shielded-vtpm \
--shielded-integrity-monitoring \
--reservation-affinity=any
This command was generated by the Google Cloud Console, but I have to use gcloud since I need to use an image family to create the disk (which, to my knowledge, is not supported in the GUI).
When I run this command, instances created from the template do not get an external IP. The result I want is an instance that automatically gets an ephemeral public IPv4 address. What am I missing?
In order to get an ephemeral IP, the address has to be set as an empty string in the network-interface flag:
--network-interface=network=default,network-tier=PREMIUM,address=''
See https://cloud.google.com/sdk/gcloud/reference/compute/instance-templates/create?hl=de#--network-interface
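Putting it together, here is a minimal sketch of the corrected command, trimmed to the relevant flags, plus a quick check that the template carries an external access config (the describe call and format expression are my addition, not from the original post):
gcloud compute instance-templates create TEMPLATENAME \
--project=PROJECT \
--machine-type=e2-small \
--network-interface=network=default,network-tier=PREMIUM,address='' \
--tags=http-server,https-server
# Instances from this template should now receive an ephemeral external IP
gcloud compute instance-templates describe TEMPLATENAME \
--project=PROJECT \
--format="yaml(properties.networkInterfaces)"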
How do I run a gcloud command that will create a Dataflow job from a default template, e.g. Pub/Sub Topic to BigQuery? I can do this via the Console, but I am looking to get this done via the command line if possible.
gcloud dataflow jobs run mydataflowjob \
--gcs-location ... \
--parameters ...
To answer my own question, I downloaded the template from GCP's GitHub and copied it to a storage bucket:
wget https://raw.githubusercontent.com/GoogleCloudPlatform/DataflowTemplates/master/src/main/java/com/google/cloud/teleport/templates/PubSubToBigQuery.java
export BUCKET_URI=gs://mybucketname && \
export TEMPLATE_NAME=PubSubToBigQuery.java && \
gsutil cp $TEMPLATE_NAME $BUCKET_URI
Then I passed the bucket file path to --gcs-location:
gcloud dataflow jobs run $DATAFLOW_NAME \
--gcs-location $BUCKET_URI/$TEMPLATE_NAME \
--parameters \
topic=projects/$PROJECT_ID/topics/$BQ_DATASET_NAME-$BQ_TABLE_NAME,\
table=$PROJECT_ID:$BQ_DATASET_NAME.$BQ_TABLE_NAME
I still need to figure out how to pass a temp location (perhaps something to do with service account permissions? That's for another thread, though...).
Edit
In fact, the default templates are located here: gs://dataflow-templates-us-central1/latest/PubSub_to_BigQuery
So the command to run the job would be:
gcloud dataflow jobs run $DATAFLOW_NAME \
--gcs-location gs://dataflow-templates-us-central1/latest/PubSub_to_BigQuery \
--region us-central1 \
--staging-location $BUCKET_URI/temp \
--parameters \
inputTopic=projects/pubsub-public-data/topics/taxirides-realtime,\
outputTableSpec=$PROJECT_ID:$BQ_DATASET_NAME.$BQ_TABLE_NAME
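As a quick sanity check (the list command below is a generic gcloud call, not part of the original answer), you can confirm the job is running:
gcloud dataflow jobs list \
--region us-central1 \
--status active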
I know how to do it when I create an instance:
gcloud compute instances create ${INSTANCE_NAME} \
--machine-type=n1-standard-8 \
--scopes=https://www.googleapis.com/auth/cloud-platform,https://www.googleapis.com/auth/userinfo.email \
--min-cpu-platform="Intel Skylake" \
--image=${IMAGE} \
--image-project=deeplearning-platform-release \
--boot-disk-size=100GB \
--boot-disk-type=pd-ssd \
--accelerator=type=nvidia-tesla-p100,count=1 \
--boot-disk-device-name=${INSTANCE_NAME} \
--maintenance-policy=TERMINATE --restart-on-failure \
--metadata="proxy-user-mail=${GCP_LOGIN_NAME},install-nvidia-driver=True,startup-script=${STARTUP_SCRIPT}"
But what if I already have an instance? How do I update or create the startup script?
To add or update the metadata, you can use the add-metadata command like this:
gcloud compute instances add-metadata ${INSTANCE_NAME} \
--metadata startup-script=${NEW_STARTUP_SCRIPT}
The other metadata entries are kept.
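If the script lives in a local file, a variant using --metadata-from-file avoids shell quoting issues (startup.sh is just an example file name):
gcloud compute instances add-metadata ${INSTANCE_NAME} \
--metadata-from-file startup-script=startup.sh
Keep in mind the updated startup script only runs on the next boot, so reset or restart the instance for it to take effect.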
I can't figure out how to specify a preemptible GPU Deep Learning VM on GCP.
This is what I used:
export IMAGE_FAMILY="tf-latest-gpu"
export ZONE="europe-west4-a"
export INSTANCE_NAME="deeplearning"
gcloud compute instances create $INSTANCE_NAME \
--zone=$ZONE \
--image-family=$IMAGE_FAMILY \
--image-project=deeplearning-platform-release \
--maintenance-policy=TERMINATE \
--accelerator='type=nvidia-tesla-v100,count=2' \
--metadata='install-nvidia-driver=True'
Thank you!
You can create a preemptible Compute Engine instance with a GPU by adding the --preemptible option to the gcloud command. As per your example, that would be:
export IMAGE_FAMILY="tf-latest-gpu"
export ZONE="europe-west4-a"
export INSTANCE_NAME="deeplearning"
gcloud compute instances create $INSTANCE_NAME \
--zone=$ZONE \
--image-family=$IMAGE_FAMILY \
--image-project=deeplearning-platform-release \
--maintenance-policy=TERMINATE \
--accelerator='type=nvidia-tesla-v100,count=2' \
--metadata='install-nvidia-driver=True' \
--preemptible
See the Compute Engine documentation on preemptible VM instances and on GPUs for more details on available options.
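To confirm the flag took effect, here is a small verification sketch (not part of the original answer):
gcloud compute instances describe $INSTANCE_NAME \
--zone=$ZONE \
--format="value(scheduling.preemptible)"
This should print True for a preemptible instance.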
The GCP Dataproc service now supports creating a cluster with GPUs as a beta feature. The problem I ran into is that when I try to specify the GPU type, the gcloud command line does not seem to recognize the type I specified.
The gcloud command I use is shown below.
gcloud beta dataproc clusters create gpu-cluster \
--zone us-east1-b \
--master-machine-type n1-standard-4 \
--master-boot-disk-size 100 \
--num-workers 2 \
--worker-machine-type n1-standard-1 \
--worker-boot-disk-size 50 \
--initialization-actions gs://15418-initial-script/initialize_cluster.sh \
--worker-accelerator type=nvidia-tesla-p100,count=1
It returned:
ERROR: (gcloud.beta.dataproc.clusters.create) INVALID_ARGUMENT: Insufficient 'NVIDIA_K80_GPUS' quota. Requested 2.0, available 0.0.
Does anyone know what happened? Am I using the wrong command, or is there something wrong with the gcloud command line?
You may need to request quota for the GPUs:
Check the quotas page for your project to ensure that you have sufficient GPU quota (NVIDIA_K80_GPUS or NVIDIA_P100_GPUS) available in your project. If GPUs are not listed on the quotas page or you require additional GPU quota, request a quota increase.
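As a rough way to inspect the current GPU quotas from the CLI (the region and grep pattern here are illustrative assumptions), you can look at the per-region quota list:
# List GPU-related quota metrics, limits, and usage for the cluster's region
gcloud compute regions describe us-east1 \
--format="yaml(quotas)" | grep -B1 -A1 GPUS
If the limit for the GPU type you need is 0, request an increase from the Quotas page.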