I'm trying to update a Cloud Function that has been working for over a week now. But when I try to update the function today, I get a BUILD FAILED: BUILD HAS TIMED OUT error.
Build fail error
I am using the Google Cloud console to deploy the Python function, not Cloud Shell. I even tried to make a new copy of the function, and that fails too.
Looking at the logs, it says INVALID_ARGUMENT. But I'm only using the console, and compared to the previous build that I successfully deployed last week, I haven't changed anything apart from the Python code.
Error logs
{
  insertId: "fjw53vd2r9o"
  logName: " my log name "
  operation: {…}
  protoPayload: {
    #type: "type.googleapis.com/google.cloud.audit.AuditLog"
    authenticationInfo: {…}
    methodName: "google.cloud.functions.v1.CloudFunctionsService.UpdateFunction"
    requestMetadata: {…}
    resourceName: " my function name"
    serviceName: "cloudfunctions.googleapis.com"
    status: {
      code: 3
      message: "INVALID_ARGUMENT"
    }
  }
  receiveTimestamp: "2020-02-05T18:04:18.269557510Z"
  resource: {…}
  severity: "ERROR"
  timestamp: "2020-02-05T18:04:18.241Z"
}
I even tried to increase the timeout parameter to 540 seconds and I still get the build error.
Timeout parameter setting
Can someone help, please?
In future, please copy and paste the text of errors and logs rather than referencing screenshots; text is easier to parse and more permanent.
It's possible that there's an intermittent issue with the service (in your region) that is causing you problems. Does this issue continue?
You can check the status dashboard for service issues (though there is no dedicated entry for Cloud Functions):
https://status.cloud.google.com/
I just deployed and updated a Golang Function in us-central1 without issues.
Which language/runtime are you using?
Which region?
Are you confident that your updates to the Function are correct?
A more effective albeit dramatic way to test this would be to create a new (temporary) project and try to deploy the function there (possibly to a different region too).
NB The timeout setting applies to the Function's invocations, not to the deployment.
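Cloud Functions deployments are executed by Cloud Build behind the scenes, so the underlying build log is usually more informative than the BUILD HAS TIMED OUT summary. A sketch of how you might inspect it (assumptions: the Cloud Build API is enabled, ${PROJECT} holds the affected project's ID, and BUILD_ID is a placeholder taken from the list output):

```shell
# List the most recent builds; the failed Function deployment should appear here
gcloud builds list --limit=5 --project=${PROJECT}

# Fetch the full log of the failing build (replace BUILD_ID with the ID from above)
gcloud builds log BUILD_ID --project=${PROJECT}
```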
Example (using gcloud)
PROJECT=[[YOUR-PROJECT]]
BILLING=[[YOUR-BILLING]]
gcloud projects create ${PROJECT}
gcloud beta billing projects link ${PROJECT} --billing-account=${BILLING}
gcloud services enable cloudfunctions.googleapis.com --project=${PROJECT}
# Create the source files (populate function.go with the HelloFreddie handler
# and go.mod with a module definition before deploying)
touch function.go go.mod
# Deploy
gcloud functions deploy fred \
--region=us-central1 \
--allow-unauthenticated \
--entry-point=HelloFreddie \
--trigger-http \
--source=${PWD} \
--project=${PROJECT} \
--runtime=go113
# Update
gcloud functions deploy fred \
--region=us-central1 \
--allow-unauthenticated \
--entry-point=HelloFreddie \
--trigger-http \
--source=${PWD} \
--project=${PROJECT} \
--runtime=go113
# Test
curl \
--request GET \
$(\
gcloud functions describe fred \
--region=us-central1 \
--project=${PROJECT} \
--format="value(httpsTrigger.url)")
Hello Freddie
Logs:
gcloud logging read "resource.type=\"cloud_function\" resource.labels.function_name=\"fred\" resource.labels.region=\"us-central1\" protoPayload.methodName=(\"google.cloud.functions.v1.CloudFunctionsService.CreateFunction\" OR \"google.cloud.functions.v1.CloudFunctionsService.UpdateFunction\")" \
--project=${PROJECT} \
--format="json(protoPayload.methodName,protoPayload.status)"
[
{
"protoPayload": {
"methodName": "google.cloud.functions.v1.CloudFunctionsService.CreateFunction"
}
},
{
"protoPayload": {
"methodName": "google.cloud.functions.v1.CloudFunctionsService.CreateFunction",
"status": {}
}
},
{
"protoPayload": {
"methodName": "google.cloud.functions.v1.CloudFunctionsService.UpdateFunction"
}
},
{
"protoPayload": {
"methodName": "google.cloud.functions.v1.CloudFunctionsService.UpdateFunction",
"status": {}
}
}
]
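If you do create a temporary project for this test, it's easy to clean up afterwards; a sketch (--quiet suppresses the confirmation prompts):

```shell
# Delete the test Function, then the temporary project itself
gcloud functions delete fred --region=us-central1 --project=${PROJECT} --quiet
gcloud projects delete ${PROJECT} --quiet
```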
Related
I receive an error when triggering a Cloud Function using the gcloud command from the terminal:
gcloud functions call function_name
On the Cloud Function log page no error is shown and the task finishes with no problem; however, after the task finishes, this error shows up in the terminal:
gcloud crashed (ReadTimeout): HTTPSConnectionPool(host='cloudfunctions.googleapis.com', port=443): Read timed out. (read timeout=300)
Note: my function's timeout is set to 540 seconds and it takes ~320 seconds to finish the job.
I think the issue is that gcloud functions call itself times out after 300 seconds, and this timeout is not configurable to match a longer Cloud Function timeout.
I created a simple Golang Cloud Function:
package function

import ("fmt"; "log"; "net/http"; "time")

func HelloFreddie(w http.ResponseWriter, r *http.Request) {
    log.Println("Sleeping")
    time.Sleep(400 * time.Second)
    log.Println("Resuming")
    fmt.Fprint(w, "Hello Freddie")
}
And deployed it:
gcloud functions deploy ${NAME} \
--region=${REGION} \
--allow-unauthenticated \
--entry-point="HelloFreddie" \
--runtime=go113 \
--source=${PWD} \
--timeout=520 \
--max-instances=1 \
--trigger-http \
--project=${PROJECT}
Then I timed it using gcloud functions call ${NAME}:
time \
gcloud functions call ${NAME} \
--region=${REGION} \
--project=${PROJECT}
And this timed out:
ERROR: gcloud crashed (ReadTimeout): HTTPSConnectionPool(host='cloudfunctions.googleapis.com', port=443): Read timed out. (read timeout=300)
real 5m1.079s
user 0m0.589s
sys 0m0.107s
NOTE 5m1s ~== 300s
But, using curl:
time \
curl \
--request GET \
--header "Authorization: Bearer $(gcloud auth print-access-token)" \
$(\
gcloud functions describe ${NAME} \
--region=${REGION} \
--project=${PROJECT} \
--format="value(httpsTrigger.url)")
Yields:
Hello Freddie
real 6m43.048s
user 0m1.210s
sys 0m0.167s
NOTE 6m43s ~== 400s
So, gcloud functions call times out after 300 seconds and this is non-configurable.
I submitted an issue to Google's Issue Tracker.
I have data in a DocumentDB database that I would like to export to an S3 bucket. However, when I try to run the mongoexport command:
mongoexport --uri="my_cluster_address/database_to_use" --collection=my_collection --out=some_file.json
I get this error:
could not connect to server: server selection error: server selection timeout, current topology:
{ Type: Single, Servers: [{ Addr: docdb_cluster_address, Type: Unknown, State: Connected, Average RTT: 0, Last error:
connection() : connection(docdb_cluster_address[-13]) incomplete read of message header: read tcp port_numbers->port_numbers: i/o timeout }, ] }
I am able to ssh into the cluster and do all sorts of transformations and really anything else related to database work, but when I exit the mongo shell and try to run the mongoexport command, it does not work. I have already downloaded the mongoexport tools to the EC2 instance and added them to the .bash_profile path. I do not think it is a networking issue, because if that were the case I wouldn't be able to ssh into the cluster, so I think I am good on that part. I am not sure what I could be missing here. Any ideas?
When working with DocumentDB, mongoexport does not take the same parameters as it normally would when exporting/importing/restoring/dumping from/to MongoDB.
Below is the command that worked for me, along with a link to the documentation:
https://docs.aws.amazon.com/documentdb/latest/developerguide/backup_restore-dump_restore_import_export_data.html
mongoexport --ssl \
--host="tutorialCluster.node.us-east-1.docdb.amazonaws.com:27017" \
--collection=restaurants \
--db=business \
--out=restaurant2.json \
--username=<yourUsername> \
--password=<yourPassword> \
--sslCAFile rds-combined-ca-bundle.pem
And below is the documentation for how it would normally work if you were working with MongoDB:
https://docs.mongodb.com/database-tools/mongoexport/
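Since the original goal was to land the export in an S3 bucket, the final step might be a copy from the EC2 instance; a sketch (assumptions: the AWS CLI is installed and the instance role has write access to the bucket — the bucket name is a placeholder):

```shell
# Copy the exported JSON from the EC2 instance to S3
aws s3 cp restaurant2.json s3://your-bucket-name/restaurant2.json
```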
I am trying to upload my custom plugin to Data Fusion using the CDAP REST API reference. I followed the steps as per the documentation, but I still haven't found a way to add the plugin JSON file using the REST API.
curl -X POST -H "Authorization: Bearer $(gcloud auth print-access-token)" \
"${CDAP_ENDPOINT}/api/v3/namespaces/vega_demo/artifacts/example" \
-H "Artifact-Extends: system:cdap-data-pipeline[6.0.0,10.0.0-SNAPSHOT)/system:cdap-data-streams[6.0.0,10.0.0-SNAPSHOT)" \
--data-binary @/path/to/example-1.0.0-SNAPSHOT.jar @/path/to/example-1.0.0-SNAPSHOT.json
Artifact added successfullycurl: (6) Could not resolve host:
The plugin is loaded, but the config JSON file is not, which causes errors in the plugin.
Based on the command used, I suggest verifying that you are setting the endpoint correctly:
export INSTANCE_ID=your-instance-id
export CDAP_ENDPOINT=$(gcloud beta data-fusion instances describe \
--location=us-central1 \
--format="value(apiEndpoint)" \
${INSTANCE_ID})
Per the official CDAP documentation, it seems that the endpoint should not include the api part before v3.
Also, if your instance belongs to the Basic edition, the namespace is default; otherwise, when using the Enterprise edition, you can create namespaces.
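To check which namespaces your instance actually has before uploading, you can query the CDAP REST API directly; a sketch (assuming CDAP_ENDPOINT is set as shown above):

```shell
# List the namespaces available on the Data Fusion instance
curl -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  "${CDAP_ENDPOINT}/v3/namespaces"
```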
When using the curl method, you need to add the config information within the HTTP headers, because this method does not upload the JSON file.
On the other hand, if you are having issues using curl, I would suggest using the UI.
Taking as an example uploading the plugin mysql-connector-java-5.1.35.jar to Data Fusion with curl, the configuration file should look like this:
{
"parents": [ "system:cdap-data-pipeline[6.1.1,6.1.1]", "system:cdap-data-streams[6.1.1,6.1.1]" ],
"plugins": [
{
"name": "mysql",
"type": "jdbc",
"className": "com.mysql.jdbc.Driver"
}
]
}
Because with curl you can only upload the JAR file, the information from the configuration file has to be included via HTTP headers, like this:
curl -X POST -H "Authorization: Bearer $(gcloud auth print-access-token)" \
"${CDAP_ENDPOINT}/v3/namespaces/default/artifacts/example" \
-H 'Artifact-Plugins: [ { "name": "mysql", "type": "jdbc", "className": "com.mysql.jdbc.Driver" } ]' \
-H "Artifact-Version: 5.1.35" \
-H "Artifact-Extends: system:cdap-data-pipeline[6.1.1, 6.1.1]/system:cdap-data-streams[6.1.1, 6.1.1]" \
--data-binary @mysql-connector-java-5.1.35.jar
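After uploading, you can verify that the artifact and its plugin metadata were registered; a sketch (the artifact name example and version 5.1.35 match the command above):

```shell
# List all artifacts in the default namespace
curl -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  "${CDAP_ENDPOINT}/v3/namespaces/default/artifacts"

# Inspect the uploaded artifact version
curl -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  "${CDAP_ENDPOINT}/v3/namespaces/default/artifacts/example/versions/5.1.35"
```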
Recently, I have experienced occasional errors while attempting to create dataproc clusters in GCP. The creation command is similar to:
gcloud dataproc clusters create ${CLUSTER_NAME} \
--zone "us-east1-b" \
--master-machine-type "n1-standard-16" \
--master-boot-disk-size 150 \
--num-workers ${WORKER_NODE_COUNT:-9} \
--worker-machine-type "n1-standard-16" \
--worker-boot-disk-size 25 \
--project ${PROJECT_NAME} \
--properties 'yarn:yarn.log-aggregation-enable=true'
Very intermittently, the error I receive is:
ERROR: (gcloud.dataproc.clusters.create) Operation [projects/PROJECT/regions/global/operations/UUID] failed: Multiple Errors:
- Failed to initialize node random-name-m. See output in: gs://dataproc-UUID-us/google-cloud-dataproc-metainfo/UUID/random-name-m/dataproc-startup-script_output
- Failed to initialize node random-name-w-0. See output in: gs://dataproc-UUID-us/google-cloud-dataproc-metainfo/UUID/random-name-w-0/dataproc-startup-script_output
- Failed to initialize node random-name-w-1. See output in: gs://dataproc-UUID-us/google-cloud-dataproc-metainfo/UUID/random-name-w-1/dataproc-startup-script_output
- Worker random-name-w-8 unable to register with master random-name-m. This could be because it is offline, or network is misconfigured..
And the last lines of the Google Storage bucket output file (dataproc-startup-script_output) are:
+ debconf-set-selections
debconf: DbDriver "config": /var/cache/debconf/config.dat is locked by another process: Resource temporarily unavailable
++ logstacktrace
++ local err=1
++ local code=1
++ set +o xtrace
ERROR: 'debconf-set-selections' exited with status 1
Call tree:
0: /usr/local/share/google/dataproc/startup-script-cloud_datarefinery_image_20180803_nightly-RC04.sh:490 main
Exiting with status 1
This one is really starting to annoy me! Any ideas/thoughts/resolutions are much appreciated!
A fix for this issue will be rolling out over the course of next week's release.
You can check the release notes to see when the fix has rolled out here:
https://cloud.google.com/dataproc/docs/release-notes
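In the meantime, the per-node startup output referenced in the error message is an ordinary Cloud Storage object, so it can be read directly; a sketch (substitute the exact gs:// path printed in your own error):

```shell
# Dump the startup-script output for a failed node
gsutil cat gs://dataproc-UUID-us/google-cloud-dataproc-metainfo/UUID/random-name-m/dataproc-startup-script_output
```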
I was trying to run some whole-genome sequencing samples on Google Cloud using dsub. The dsub commands work fine for some samples, but not others. I have tried reducing the number of parallel threads and increasing the memory and disk, but it still fails. Since each run takes about two days, the trial-and-error approach is pretty expensive! Any help/tips would be highly appreciated!
My command is:
dsub \
--project "${MY_PROJECT}" \
--zones "us-central1-a" \
--logging "${LOGGING}" \
--vars-include-wildcards \
--disk-size 800 \
--min-ram 60 \
--image "us.gcr.io/xxx-yyy-zzz/data" \
--tasks "${SCRIPT_DIR}"/tBOWTIE2.tsv \
--command 'bismark --bowtie2 --bam --parallel 2 "${GENOME_REFERENCE}" -1 "${INPUT_FORWARD}" -2 "${INPUT_REVERSE}" -o "${OUTPUT_DIR}"' \
--wait
The dstat command with the '--full' option shows the error as:
status: FAILURE
status-detail: "11: Docker run failed"
The last line in the log file, on google cloud, just states "(exit status 141)".
Many thanks!
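A side note on that last log line: shells report a process killed by a signal as 128 plus the signal number, and SIGPIPE is signal 13, so "(exit status 141)" typically means the process died writing to a pipe whose reader had already exited (for example, a tool piping into a consumer that crashed or finished early):

```shell
# Signal-death exit codes are 128 + signal number; SIGPIPE is signal 13
echo $((128 + 13))    # 141, the exit status seen in the dsub log
```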