Azure WebJob: calling a SQL Server stored procedure from a WebJob - azure-webjobs

I have to run a stored procedure from an Azure WebJob in continuous mode.
I have written the code in C# and deployed it to my development environment.
After monitoring it for 3 or 4 days, I found that the WebJob aborts if the stored procedure runs for a long time. My procedure takes approximately 50 seconds to return output, and the job aborts before then.
It works fine if the stored procedure returns data quickly, whereas if there is more data and the procedure takes longer, the job gets aborted. But in my case I need to keep the job running until the procedure returns data.
I am not able to figure it out.
I have tried the options below (a settings.job sketch for the second one follows the list):
Turning Always On to ON
Setting stopping_wait_time to 300
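For reference, stopping_wait_time belongs in a settings.job file at the root of the WebJob's directory; a minimal sketch with the value above (it is the number of seconds to wait for the job to shut down gracefully):
{
  "stopping_wait_time": 300
}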
Is there any suggestion?

You can use the code below to set a timeout period to work around your problem. For more details, see this post about asynchronously waiting for a Task to complete with a timeout. You can also implement it with other code.
int timeout = 60000; // milliseconds; the procedure takes ~50 seconds, so allow more than that
var task = SomeOperationAsync();
if (await Task.WhenAny(task, Task.Delay(timeout)) == task) {
    // task completed within timeout
} else {
    // timeout logic
}
Because your Azure WebJob runs in continuous mode, you can't use the WEBJOBS_IDLE_TIMEOUT setting. But your WebJob will stay always on, and you can also set WEBJOBS_RESTART_TIME to control how quickly it is re-launched after it stops.
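Separately, if the job is being aborted inside the SQL call itself, check the ADO.NET command timeout: SqlCommand.CommandTimeout defaults to 30 seconds, which is shorter than the ~50 seconds your procedure needs. A minimal sketch, assuming System.Data.SqlClient and a hypothetical procedure name:
using var conn = new SqlConnection(connectionString);
await conn.OpenAsync();
using var cmd = new SqlCommand("dbo.MyLongRunningProc", conn) // hypothetical name
{
    CommandType = System.Data.CommandType.StoredProcedure,
    CommandTimeout = 120 // seconds; the default of 30 would cut off a ~50 s procedure
};
using var reader = await cmd.ExecuteReaderAsync();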

Related

Running background processes in Google Cloud Run

I have a lightweight server that runs cron jobs at given times. As I understand it, Google Cloud Run only processes incoming requests and becomes idle shortly afterwards if there are no other requests to process. Hence, it is not advisable to deploy that cron service to Cloud Run.
Out of curiosity, I deployed the following server that starts up and then prints a log every hour.
const express = require('express');
const app = express();

// Log a line every hour, independent of any incoming request
setInterval(() => console.log('ping!'), 1000 * 60 * 60);

app.listen(process.env.PORT, () => {
  console.log('server listening');
});
I deployed it with a minimum and maximum instance count of 1. It has not received any requests, and when I checked back the next day it was printing the log precisely every hour. Was this a coincidence, or can I use this setup in production?
If you set the min instances to 1 and the CPU always allocated option to true, then yes, you can perform background compute-intensive processing without CPU throttling (in your hello-world case, you can get by on the few CPU % allowed to an idle instance even without the CPU always allocated option).
BUT, and the but is very important, you will pay for 1 Cloud Run instance that is always up. In addition, if you receive requests, the service can scale up to more than 1 running instance. Does it make sense to have several instances running the same cron schedule? (Unless you set the max instances to 1.)
In the end, the best pattern is to host the scheduling outside, on Cloud Scheduler, and have it call your instance to perform the task. It's serverless, it can handle several tasks in parallel, and it's scalable.
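A minimal sketch of that pattern, assuming a Cloud Scheduler job configured to send an HTTP POST to a /tasks/ping route on the service (the route name is hypothetical):
const express = require('express');
const app = express();

// Cloud Scheduler calls this route on whatever cron schedule you configure
app.post('/tasks/ping', (req, res) => {
  console.log('ping!'); // do the periodic work here
  res.status(200).send('ok');
});

app.listen(process.env.PORT, () => console.log('server listening'));
With the work tied to a request, the instance can scale to zero between runs and you only pay while the task is executing.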
From my understanding, no.
From the documentation here, Google indicates that the CPU of idle instances is throttled to nearly zero. I suppose this means that very simple operations can still be performed (e.g. logging a string every hour). I guess you could test it more extensively by doing some more complex operations and evaluating their processing time.
Either way, I would not count on it in a production environment. There is no guarantee that a CPU "throttled to nearly zero" will be able to complete the operations you need within a reasonable delay.

Why is my Google Cloud Tasks queue so slow?

I have a Google Cloud Tasks queue that processes thousands of HTTP requests (Cloud Functions). I've set up the task queue with the default settings except for updating "Max attempts" = 2.
Each task is dispatched from Python using the "from google.cloud import tasks_v2" package.
The issue I'm facing is that it just takes too long to finish processing all the tasks in the queue. I would have expected that with "max concurrent" = 1000 I would see more tasks running at once; watching the "running tasks" indicator while refreshing, I've only ever seen it reach a maximum of 15.
Have I missed something, or are there other settings I can play with to get these tasks processed more quickly?
It turns out that the issue had to do with my Cloud Function. I had try-catch statements that would return a status of 500 when an error occurred.
It seems that Cloud Tasks backs off when it sees an increase in error responses. I ended up changing my catch statements to return a 200, and my task queue now finishes substantially quicker.
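A minimal sketch of that change, assuming an HTTP-triggered Cloud Function written in Python with the Functions Framework (the handler name is hypothetical):
import functions_framework

@functions_framework.http
def handle_task(request):
    try:
        payload = request.get_json(silent=True)
        # ... do the real work with `payload` here ...
        return ("ok", 200)
    except Exception as exc:
        # Log the failure but still return 200, so Cloud Tasks does not
        # see an error response and back off the queue's dispatch rate.
        print(f"task failed: {exc}")
        return ("handled", 200)
The trade-off: a 200 tells Cloud Tasks the task succeeded, so failed tasks are not retried; log or re-enqueue them yourself if you still need retries.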
Hope this helps someone else in the future.

Design issue for a web service callout from Salesforce

For the scenario:
As a user, whenever I try to generate or fetch codes:
If, while generating codes via a PUT callout, the request fails, then the system should recognize that the PUT callout has failed and should not make the subsequent GET callout for codes that were never created in the first place.
If, while generating codes via a PUT callout, the request is successful, the system should wait for a while (30 seconds to 1 minute) and should not poll the service API too frequently.
I have written code that makes the PUT callout and then, after the PUT succeeds, makes the GET callout later to retrieve the codes.
The expected result is:
When the PUT callout succeeds, the system should wait 30 seconds to 1 minute before making the GET callout, then retrieve all the data and store it in Salesforce using a scheduler and batch.
You can't schedule in Salesforce on a second-level cadence. The smallest allowable increment for a Schedulable job is fifteen minutes. Salesforce asynchronous jobs are always executed based on server load and are in a queue; you cannot control the time of their execution to the second.
While some approximation of this pattern could potentially be achieved using a Queueable chain, this pattern is not at all suited to the Salesforce architecture and really should be delegated to a middleware platform.
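For completeness, a rough sketch of that Queueable approximation, assuming the PUT callout has already succeeded; the class name and endpoint are hypothetical, and note that System.enqueueJob's delay parameter is whole minutes (0-10), not seconds:
public class FetchCodesQueueable implements Queueable, Database.AllowsCallouts {
    public void execute(QueueableContext ctx) {
        HttpRequest req = new HttpRequest();
        req.setEndpoint('callout:Code_Service/codes'); // hypothetical named credential
        req.setMethod('GET');
        HttpResponse res = new Http().send(req);
        // ... parse res.getBody() and store the codes in Salesforce ...
    }
}
// After the PUT succeeds, enqueue the GET with a 1-minute delay:
// System.enqueueJob(new FetchCodesQueueable(), 1);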

Do ColdFusion Scheduled Tasks have a built-in request timeout?

I have several scheduled tasks that essentially perform the same type of functionality:
Request JSON data from an external API
Parse the data
Save the data to a database
The "Timeout (in seconds)" field in the Scheduled Task form is empty for each task.
Each CFM template has the following line of code at the top of the page:
<cfscript>
setting requesttimeout=299;
</cfscript>
However, I consistently see the following entries in the scheduled.log file:
"Information","DefaultQuartzScheduler_Worker-8","04/24/19","12:23:00",,"Task
default - Data - Import triggered."
"Error","DefaultQuartzScheduler_Worker-8","04/24/19","12:24:00",,"The
request has exceeded the allowable time limit Tag: cfhttp "
Notice that there is only a 1-minute difference between the start of the task and its timing out.
I know that, according to Charlie Arehart, the timeout error messages that get logged are usually not indicative of the actual cause/point of the timeout, and, in fact, I have run tests and confirmed that the CFHTTP calls generally complete in 1-10 seconds.
Lastly, when I make the same request in a browser, it runs until the requesttimeout set in the CFM page is reached.
This leads me to believe that there is some "forced"/"built-in"/"unalterable" request timeout for scheduled tasks, or that they use the default timeout value for the server and/or application (which is set to 60 seconds for this server/application), yet I cannot find this documented anywhere.
If this is the case, is it possible to schedule a task in ColdFusion that runs longer than the forced request timeout?

How can I force ColdFusion to stop rendering a page until a process invoked with <cfexecute> completes?

I'm working on a script that creates a MySQL dump via <cfexecute> and then FTPs the SQL script to another server. I've resorted to checking once per second to see whether the file size has changed, and if it has not changed within the past five seconds, I assume the dump has completed.
This is fine for the current application, but eventually I would like to be able to import the SQL script on the second server and provide some sort of notification that it has completed.
Is there some way to track the status of a running process?
If not, is there a way to accomplish a full DB export and import via ColdFusion alone?
Actually, you may not realize it, but when you call <cfexecute> without passing a timeout attribute, it defaults to a timeout of 0. And if you read the docs on <cfexecute> you'd see:
If the value is 0: ColdFusion starts a process and returns immediately. ColdFusion may return control to the calling page before any program output displays. To ensure that program output displays, set the value to 2 or higher.
So I would suggest passing a higher timeout value, which will cause ColdFusion to wait for mysqldump to complete before moving on.
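A minimal sketch (the binary path, credentials, database name, and 300-second timeout are placeholders):
<cfexecute name="/usr/bin/mysqldump"
           arguments="--user=dbuser --password=secret mydatabase"
           outputFile="#expandPath('./dump.sql')#"
           timeout="300" />
<!--- Execution only continues past this tag once mysqldump exits or the timeout elapses --->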
Reference
Check out Event Gateways[1] for one way to deal with asynchronous operations. There's a Directory Watcher gateway that comes with CF as an example.[2]
Barring that, create some sort of batch processing facility using CF Scheduled Tasks. Add the job to a database table and have a scheduled task periodically pull jobs out of the table and execute them, reporting on the result. A second scheduled task can detect that the first completed and carry out the next step of the process.
[1] http://help.adobe.com/en_US/ColdFusion/9.0/CFMLRef/WSc3ff6d0ea77859461172e0811cbec214e3-7fa7.html
[2] http://help.adobe.com/en_US/ColdFusion/9.0/Developing/WSc3ff6d0ea77859461172e0811cbec22c24-77f7.html