Google Firebase Functions deployment fails - what can I do? - google-cloud-platform

Error message:
Error: There was an error deploying functions
firebase-debug.log holds this:
[debug] [2021-11-16T12:12:16.165Z] Error: Failed to upsert schedule function lab2 in region europe-west3
function code:
exports.lab2 = functions
    .region('europe-west3')
    .pubsub.schedule('*/10 * * * *')
    .onRun(lab);
What can I do? Google support leads to Stack Overflow, so I'm posting it here. Are there better ways to deal with Google Cloud problems?

When you use scheduled functions in Firebase Functions, an App Engine instance is created, which Cloud Scheduler needs in order to work. You can read about it here. During its setup you're prompted to select your project's default Google Cloud Platform (GCP) resource location (if it wasn't already selected when setting up another service).
You are getting that error because there is a mismatch between the default GCP resource location you specified and the region of your scheduled Cloud Function. If you click on the cogwheel next to project-overview in Firebase you can see where your resources are located. Setting the default GCP resource location to match the scheduled function's region solves the issue.
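For example, a minimal sketch of that fix, assuming the project's default GCP resource location turned out to be europe-west1 and that lab is the existing handler from the question:
const functions = require('firebase-functions');

exports.lab2 = functions
    // Must match the project's default GCP resource location
    // (shown under the cogwheel next to project-overview).
    .region('europe-west1')
    .pubsub.schedule('*/10 * * * *')
    .onRun(lab);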

Our scheduled function deployment failed because of the cron expression. For some reason Firebase couldn't understand 0 0 3 1/1 * ? *, but could understand every 5 minutes.
It's a shame that Firebase doesn't provide a better error message; Error: Failed to upsert schedule function xxx in region xxx is far too generic.
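A hedged sketch of the two schedule formats that are accepted (the export names and handlers below are placeholders); the 7-field Quartz-style expression above is not one of them:
const functions = require('firebase-functions');

// unix-cron, 5 fields: every day at 03:00
exports.dailyJob = functions
    .pubsub.schedule('0 3 * * *')
    .onRun(async (context) => { /* work goes here */ });

// App Engine-style syntax is also accepted
exports.frequentJob = functions
    .pubsub.schedule('every 5 minutes')
    .onRun(async (context) => { /* work goes here */ });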

I got the same problem deploying my scheduled functions, and I solved it with the few steps below:
Step 1: Rename the scheduled function (for example: newScheduleABC), then deploy only that function:
$ firebase deploy --only functions:newScheduleABC
NOTE: as @Nicholas mentioned, the unix-cron format GCP accepts has only 5 fields: schedule('* * * * *')
Step 2: Delete the old (scheduled) functions by going to your Firebase Console: https://console.firebase.google.com/.../functions , where you will see all of your functions. Click the vertical ellipsis at the end of a function and click delete.
That's it. There should be no problem from now on.
You can read more in Manage functions.
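As a hedged alternative to Step 2, the old function can also be removed from the CLI (the old function name and region below are placeholders):
$ firebase deploy --only functions:newScheduleABC
$ firebase functions:delete oldScheduleABC --region europe-west3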

Related

How to schedule an AWS AppFlow flow using a CloudFormation template

I have created an AWS AppFlow flow using a CloudFormation template and I want to schedule it using TriggerConfig.
How can we pass the ScheduleStartTime date using CloudFormation?
The error I'm getting:
AWS::AppFlow::FlowCreate Flow request failed: Schedule start time cannot be in the past. Please update the schedule start time to a value in future.
The snippet I'm using in CloudFormation:
"TriggerConfig": {
  "TriggerType": "Scheduled",
  "TriggerProperties": {
    "DataPullMode": "Incremental",
    "ScheduleExpression": "rate(5minutes)",
    "TimeZone": "America/New_York",
    "ScheduleStartTime": 4.05
  }
}
TriggerConfig:
  TriggerType: Scheduled
  TriggerProperties:
    DataPullMode: Incremental
    ScheduleExpression: rate(1days)
    ScheduleStartTime: 1652970600
Use a Unix timestamp for the start time.
Please refer to the link below for conversion:
https://www.epochconverter.com/
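As a hedged illustration, a future Unix timestamp can also be generated from the command line (GNU date; the date shown is just an example):
$ date -u -d "2022-05-19 14:30:00" +%s
1652970600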
Try the ScheduleInterval parameter instead of ScheduleExpression.

Druid cannot see/read GOOGLE_APPLICATION_CREDENTIALS defined in the environment

I installed apache-druid-0.22.1 as a cluster (master, data and query nodes) and enabled “druid-google-extensions” by adding it to the array druid.extensions.loadList in common.runtime.properties.
Finally, I defined GOOGLE_APPLICATION_CREDENTIALS (which has the value of the service account JSON, as defined in https://cloud.google.com/docs/authentication/production) as an environment variable of the user that runs the Druid services.
However, I get the following error when I try to ingest data from GCS buckets:
Error: Cannot construct instance of org.apache.druid.data.input.google.GoogleCloudStorageInputSource, problem: Unable to provision, see the following errors:
1) Error in custom provider, java.io.IOException: The Application Default Credentials are not available. They are available if running on Google App Engine, Google Compute Engine, or Google Cloud Shell. Otherwise, the environment variable GOOGLE_APPLICATION_CREDENTIALS must be defined pointing to a file defining the credentials. See https://developers.google.com/accounts/docs/application-default-credentials for more information.
  at org.apache.druid.common.gcp.GcpModule.getHttpRequestInitializer(GcpModule.java:60) (via modules: com.google.inject.util.Modules$OverrideModule -> org.apache.druid.common.gcp.GcpModule)
  at org.apache.druid.common.gcp.GcpModule.getHttpRequestInitializer(GcpModule.java:60) (via modules: com.google.inject.util.Modules$OverrideModule -> org.apache.druid.common.gcp.GcpModule)
  while locating com.google.api.client.http.HttpRequestInitializer for the 3rd parameter of org.apache.druid.storage.google.GoogleStorageDruidModule.getGoogleStorage(GoogleStorageDruidModule.java:114)
  at org.apache.druid.storage.google.GoogleStorageDruidModule.getGoogleStorage(GoogleStorageDruidModule.java:114) (via modules: com.google.inject.util.Modules$OverrideModule -> org.apache.druid.storage.google.GoogleStorageDruidModule)
  while locating org.apache.druid.storage.google.GoogleStorage
1 error
  at [Source: (org.eclipse.jetty.server.HttpInputOverHTTP); line: 1, column: 180] (through reference chain: org.apache.druid.indexing.overlord.sampler.IndexTaskSamplerSpec["spec"]->org.apache.druid.indexing.common.task.IndexTask$IndexIngestionSpec["ioConfig"]->org.apache.druid.indexing.common.task.IndexTask$IndexIOConfig["inputSource"])
A case reported on this matter caught my attention, but I cannot see any verified solution to that case. Please help me.
We want to pull data from GCP into an on-prem Druid cluster; we don't want to run the cluster in GCP. That is why we want to solve this problem.
For future visitors:
If you run Druid via systemd, you need to add the required environment variables to the systemd service file, so they are always delivered to Druid regardless of user or environment changes.
You must define GOOGLE_APPLICATION_CREDENTIALS so that it points to a file path, not so that it contains the file's content.
In a cluster (like Kubernetes), it's usual to mount a volume with the file in it and to set the env var to point to that file.
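As a hedged sketch of the systemd approach (the unit name and file paths here are assumptions), a drop-in override file can carry the variable:
# /etc/systemd/system/druid.service.d/credentials.conf
[Service]
# Point to the key file itself, not its JSON content
Environment="GOOGLE_APPLICATION_CREDENTIALS=/opt/druid/conf/gcp-service-account.json"
Then run systemctl daemon-reload and restart the Druid services so the variable is picked up.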

Schedule a Google Cloud Function (no triggers)

So I have a very simple Python script that writes a txt file to my Google Storage bucket.
I just want to set this job to run each hour, i.e. not based on a trigger. It seems that when using the SDK it needs to have a --trigger-* flag, but I only want it to be "triggered" by the scheduler.
Is that possible?
You can create a Cloud Function with a Pub/Sub trigger and then create a Cloud Scheduler job targeting the topic that triggers the function.
I did it by following these steps:
Create a Cloud Function with Pub/Sub trigger
Select your topic or create a new one
This is the default code I am using:
exports.helloPubSub = (event, context) => {
  const message = event.data
    ? Buffer.from(event.data, 'base64').toString()
    : 'Hello, World';
  console.log(message);
};
Create a Cloud Scheduler job targeting the same Pub/Sub topic
Check that it is working.
I tried it with the frequency * * * * * (every minute) and it works for me; I can see the logs from the Cloud Function.
Currently, in order to execute a Cloud Function it needs to be triggered, because once it stops executing the only way to run it again is through the trigger.
You can also follow the same steps I indicated in this page, where you can find some images for further help.
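For reference, a hedged sketch of that wiring from the command line (the topic, function and job names are placeholders, and the runtime is an assumption):
$ gcloud pubsub topics create hourly-topic
$ gcloud functions deploy helloPubSub --runtime nodejs16 --trigger-topic hourly-topic
$ gcloud scheduler jobs create pubsub hourly-job \
    --schedule="0 * * * *" \
    --topic=hourly-topic \
    --message-body="run"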

Is it possible to deploy a background function "myBgFunctionInProjectB" in "project-b" triggered by my topic "my-topic-project-a" from "project-a"?

It's possible to create a topic "my-topic-project-a" in project "project-a" so that it is publicly visible (this is done by granting the "Pub/Sub Subscriber" role to "allUsers" on it).
Then from project "project-b" I can create a subscription to "my-topic-project-a" and read the events from it. This is done using the following gcloud commands (executed on project "project-b"):
gcloud pubsub subscriptions create subscription-to-my-topic-project-a --topic projects/project-a/topics/my-topic-project-a
gcloud pubsub subscriptions pull subscription-to-my-topic-project-a --auto-ack
So, OK, this is possible when creating a subscription in "project-b" linked to "my-topic-project-a" in "project-a".
In my use case I would like to be able to deploy a background function "myBgFunctionInProjectB" in "project-b" that is triggered by my topic "my-topic-project-a" from "project-a".
But... this doesn't seem to be possible, since the gcloud CLI is not happy when you provide the full topic name while deploying the cloud function:
gcloud beta functions deploy myBgFunctionInProjectB --runtime nodejs8 --trigger-topic projects/project-a/topics/my-topic-project-a --trigger-event google.pubsub.topic.publish
ERROR: (gcloud.beta.functions.deploy) argument --trigger-topic: Invalid value 'projects/project-a/topics/my-topic-project-a': Topic must contain only Latin letters (lower- or upper-case), digits and the characters - + . _ ~ %. It must start with a letter and be from 3 to 255 characters long.
Is there a way to achieve that, or is this actually not possible?
Thanks
So, it seems that this is not actually possible. I found this out by checking it in two different ways:
If you try to create a function through the API Explorer, you need to fill in the location where you want to run it, for example projects/PROJECT_FOR_FUNCTION/locations/PREFERRED-LOCATION, and then provide a request body like this one:
{
  "eventTrigger": {
    "resource": "projects/PROJECT_FOR_TOPIC/topics/YOUR_TOPIC",
    "eventType": "google.pubsub.topic.publish"
  },
  "name": "projects/PROJECT_FOR_FUNCTION/locations/PREFERRED-LOCATION/functions/NAME_FOR_FUNCTION"
}
This will result in a 400 error code, with a message saying:
{
  "field": "event_trigger.resource",
  "description": "Topic must be in the same project as function."
}
It will also say that you missed the source code, but nonetheless the API already shows that this is not possible.
There is an open issue for this in the Public Issue Tracker. Bear in mind that there is no ETA for it.
I also tried to do this from gcloud, as you did, and obviously got the same result. I then tried to remove the projects/project-a/topics/ part from my command, but that creates a new topic in the same project where you create the function, so it's not what you want.
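As a hedged illustration of that last point, --trigger-topic only takes a short topic name, which is resolved inside the function's own project:
$ gcloud functions deploy myBgFunctionInProjectB \
    --project project-b \
    --runtime nodejs8 \
    --trigger-topic my-topic-project-a
# Uses (or creates) a topic named 'my-topic-project-a' in project-b,
# not the existing one in project-a.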

AWS Elastic Beanstalk cron.yaml worker issue

I have an application deployed to Elastic Beanstalk running as a worker, and I wanted to add a periodic task to run each hour, so I created a cron.yaml with this configuration:
version: 1
cron:
  - name: "task1"
    url: "/task"
    schedule: "00 * * * *"
But during the deploy I always get this error:
[Instance: i-a072e41d] Command failed on instance. Return code: 1 Output: missing required parameter params[:table_name] - (ArgumentError). Hook /opt/elasticbeanstalk/addons/sqsd/hooks/start/02-start-sqsd.sh failed. For more detail, check /var/log/eb-activity.log using console or EB CLI.
I added the right permissions to the Elastic Beanstalk role, and I checked whether the cron.yaml was formatted for Windows (CR/LF), but I always get the same error.
missing required parameter params[:table_name] looks like a DynamoDB table name is missing. Where can I define it?
Any idea how I can fix that?
Thanks!
Well, I didn't figure out a solution to this issue, so I moved to another approach: using CloudWatch Events to create a rule of type schedule with an SQS queue target (the one configured with the worker).
Works perfectly!
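As a hedged sketch of that approach with the AWS CLI (the rule name, region and queue ARN are placeholders; the queue also needs a policy that allows events.amazonaws.com to send messages):
$ aws events put-rule --name hourly-worker-task --schedule-expression "rate(1 hour)"
$ aws events put-targets --rule hourly-worker-task \
    --targets "Id"="1","Arn"="arn:aws:sqs:us-east-1:123456789012:my-worker-queue"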
I encountered this same error when I was dynamically generating the cron.yaml file in a container command instead of already having it in my application root.
The DynamoDB table for the cron is created in the PreInit stage, which occurs before any of your custom code executes, so if there is no cron.yaml file at that point, no DynamoDB table is created. When the file later appears and the cron jobs are being scheduled, it fails because the table was never created.
I solved this problem by having a skeleton cron.yaml in my application root. It must contain a valid cron job (I just hit my health check URL once a month), but that job doesn't have to actually get scheduled, since job registration happens after your custom commands, which can rewrite the file with only the jobs you need.
This might not be your exact problem, but hopefully it helps you find yours, as it appears the error happens when the DynamoDB table does not get created.
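For example, a hedged sketch of such a skeleton cron.yaml (the job name, URL and schedule are placeholders; it only has to be valid so the table gets created early):
version: 1
cron:
  - name: "placeholder-healthcheck"
    url: "/health"
    schedule: "0 0 1 * *"   # once a month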
It looks like your YAML formatting is off. That might be the issue here.
version: 1
cron:
  - name: "task1"
    url: "/task"
    schedule: "00 * * * *"
Formatting is critical in YAML. Give this a try at the very least.