Azure WebJob is always "Running"

I just created an Azure WebJob and scheduled it to run every minute:
0 */1 * * * *
This is the code:
var host = new JobHost();
Console.WriteLine("Starting program...");
var unityContainer = new UnityContainer();
unityContainer.RegisterType<ProgramStarter, ProgramStarter>();
unityContainer.RegisterType<IOutgoingEmailRepository, OutgoingEmailRepository>();
unityContainer.RegisterType<IOutgoingEmailService, OutgoingEmailService>();
unityContainer.RegisterType<IDapperHelper, DapperHelper>();
//var game = unityContainer.Resolve<IOutgoingEmailRepository>();
var program = unityContainer.Resolve<ProgramStarter>();
program.Run().Wait();
Console.WriteLine("All done....");
host.RunAndBlock();
The problem is that the status never changes to "Success". Am I doing something wrong? Below are the app settings I use; should I change them? I also noticed that it only runs the first time, which I believe is because it never ends.

You could check your WebJob logs in Kudu.
If you use the above job in a RunAndBlock scenario, your job has to be continuous; that means the process runs all the time.
But you're using a triggered WebJob here, not a continuous one, so the RunAndBlock method cannot be used.
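As a minimal sketch (reusing the question's own types; one way to do it, not the only way), a triggered WebJob's Main can just do its work and return, so the process exits and the run can be marked Success:
// Sketch of a triggered WebJob entry point: no JobHost and no RunAndBlock(),
// so the process exits when the work is done and the run is marked Success.
static void Main()
{
    Console.WriteLine("Starting program...");
    var unityContainer = new UnityContainer();
    unityContainer.RegisterType<ProgramStarter, ProgramStarter>();
    unityContainer.RegisterType<IOutgoingEmailRepository, OutgoingEmailRepository>();
    unityContainer.RegisterType<IOutgoingEmailService, OutgoingEmailService>();
    unityContainer.RegisterType<IDapperHelper, DapperHelper>();
    var program = unityContainer.Resolve<ProgramStarter>();
    program.Run().Wait();
    Console.WriteLine("All done....");
    // The scheduler starts the process again on the next CRON tick.
}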
WEBJOBS_IDLE_TIMEOUT - Time in seconds after which we'll abort a running triggered job's process if it's idle, has no CPU time or output (only for triggered jobs).
In addition, I notice that you set WEBJOBS_IDLE_TIMEOUT to 100000. That value is so large that an idle WebJob will not be stopped for a very long time.
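For example (value illustrative), an app setting of
WEBJOBS_IDLE_TIMEOUT = 600
would abort an idle triggered job after 10 minutes.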
You could also change the grace period of a job by specifying it (in seconds) in the settings.job file, where the name of the setting is stopping_wait_time, like so:
{ "stopping_wait_time": 60 }
For more details, please refer to this doc.
Hope it helps you.

Related

Lambda SQL Server RDS Connection Leak

Problem
I'm using mssql v6.2.0 in a Lambda that is invoked frequently (consistently ~25 concurrent invocations under standard load).
I seem to be having trouble with connection pooling or something, because I keep having tons of open DB connections which overwhelm my database (SQL Server on RDS), causing the Lambdas to just time out waiting for query results.
I have read the docs, various similar questions, Github issues, etc. but nothing has worked for this particular issue.
Things I've Learned Already
I did learn that pooling is possible across invocations, because variables declared outside the handler function are shared across invocations in the same container. This makes me think I should see just a few connections per container running my Lambda, but I don't know how many containers that is, so it's hard to verify. The bottom line is that pooling should keep me from having tons and tons of open connections, so something isn't working right. (A sketch of this pattern follows this list.)
There are several different ways to use mssql, and I have tried several of them. Notably, I've tried specifying a max pool size with both large and small values, but got the same results.
AWS recommends checking whether a pool already exists before trying to create a new one. I tried that to no avail; it was something like pool = pool || await createPool()
I know that RDS Proxy exists to help with situations like this, but it appears it isn't offered (at this time) for SQL Server instances.
I do have the ability to slow down my data a bit, but this has a slight impact on the performance of the product as a whole, so I don't want to do that just to avoid solving a DB connection issue.
Left unchecked, I saw as many as 700 connections to the DB at once, leading me to think there's a leak of some kind and it's maybe not just a reasonable result of high usage.
I didn't find a way to shorten the TTL for connections on the SQL Server side as recommended by this re:Invent slide. Perhaps that is part of the answer?
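For reference, the container-level reuse pattern I understand from the above would look roughly like this sketch (illustrative only, not my exact code; it caches the pool promise at module scope):
// Rough sketch of container-level pool reuse.
const sql = require('mssql');

let poolPromise; // module scope: survives across invocations in the same container

function getPool() {
    // Create the pool once per container and hand back the same promise afterwards.
    if (!poolPromise) {
        poolPromise = sql.connect(process.env['connectionString']);
    }
    return poolPromise;
}

exports.handler = async function (event) {
    const pool = await getPool();
    const result = await pool.request().query('SELECT 1 AS ok'); // placeholder query
    console.log(result.recordset[0].ok);
};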
Code
'use strict';

/* Dependencies */
const sql = require('mssql');
const fs = require('fs').promises;
const path = require('path');
const AWS = require('aws-sdk');
const GeoJSON = require('geojson');

AWS.config.update({ region: 'us-east-1' });
var iotdata = new AWS.IotData({ endpoint: process.env['IotEndpoint'] });

/* Export */
exports.handler = async function (event) {
    let myVal = event.Records[0].Sns.Message;

    // Gather prerequisites in parallel
    let [
        query1,
        query2,
        pool
    ] = await Promise.all([
        fs.readFile(path.join(__dirname, 'query1.sql'), 'utf8'),
        fs.readFile(path.join(__dirname, 'query2.sql'), 'utf8'),
        sql.connect(process.env['connectionString'])
    ]);

    // Query DB for updated data
    let results = await pool.request()
        .input('MyCol', sql.TYPES.VarChar, myVal)
        .query(query1);

    // Prepare IoT Core message
    let params = {
        topic: `${process.env['MyTopic']}/${results.recordset[0].TopicName}`,
        payload: convertToGeoJsonString(results.recordset),
        qos: 0
    };

    // Publish results to MQTT topic
    try {
        await iotdata.publish(params).promise();
        console.log(`Successfully published update for ${myVal}`);

        // Query 2
        await pool.request()
            .input('MyCol1', sql.TYPES.Float, results.recordset[0]['Foo'])
            .input('MyCol2', sql.TYPES.Float, results.recordset[0]['Bar'])
            .input('MyCol3', sql.TYPES.VarChar, results.recordset[0]['Baz'])
            .query(query2);
    } catch (err) {
        console.log(err);
    }
};

/**
 * Convert query results to GeoJSON for API response
 * @param {Array|Object} data - The query results
 */
function convertToGeoJsonString(data) {
    let result = GeoJSON.parse(data, { Point: ['Latitude', 'Longitude'] });
    return JSON.stringify(result);
}
Question
Please help me understand why I'm getting runaway connections and how to fix it. For bonus points: what's the ideal strategy for handling high DB concurrency on Lambda?
Ultimately this service needs to handle several times the current load -- I realize this becomes quite an intense load. I'm open to options like read replicas or other read-performance-boosting measures, as long as they're compatible with SQL Server and they're not just a cop-out for writing proper DB access code.
Please let me know if I can improve the question. I know there are similar ones out there, but I have read/tried a lot of them and didn't find them helpful. Thanks in advance!
Related Material
https://forums.aws.amazon.com/thread.jspa?messageID=678029 (old, but similar)
https://www.slideshare.net/AmazonWebServices/best-practices-for-using-aws-lambda-with-rdsrdbms-solutions-srv320 re:Invent slide deck
https://www.jeremydaly.com/reuse-database-connections-aws-lambda/ Relevant info but for MySQL instead of SQL Server
Answer
I finally found the answer after 4 days of effort. All I needed to do was scale up the DB. The code is actually fine as-is.
I went from db.t2.micro to db.t3.small (or 1 vCPU, 1GB RAM to 2 vCPU and 2GB RAM) at a net cost of roughly $15/mo.
Theory
In my case, the DB probably couldn't handle the processing (which involves several geographic calculations) for all my invocations at once. I did see CPU go up, but I assumed that was a result of the high number of open connections. When the queries slowed down, concurrent invocations piled up as Lambdas waited for results, finally causing them to time out without closing their connections properly.
Comparisons:
db.t2.micro:
- 200+ DB connections (climbing continuously if left running)
- 50+ concurrent invocations
- 5000+ ms Lambda duration when things slow down, ~300 ms under no load
db.t3.small:
- 25-35 DB connections (holding steady)
- ~5 concurrent invocations
- ~33 ms Lambda duration <-- ten times faster!
(CloudWatch dashboard screenshot)
Summary
I think this issue was confusing to me because it didn't smell like a capacity issue. Almost every time I've dealt with high DB connections in the past, it has been a code error. Having tried options there, I thought it was "some magical gotcha of serverless" that I needed to understand. In the end it was as simple as changing DB tiers. My takeaway is that DB capacity issues can manifest themselves in ways other than high CPU and memory usage, and that high connections may be a result of something besides a code bug.
Update (4 months in)
This continues to work very well. I'm impressed that doubling the DB resources seems to have given more than 2x the performance. Now, when the DB connections get really high due to load (or a temporary bug during development), even over 1k, the DB handles it. I'm not seeing any issues at all with DB connections timing out or the database getting bogged down under load. Since the original time of writing I've added several CPU-intensive queries to support reporting workloads, and it continues to handle all these loads simultaneously.
We've also deployed this setup to production for one customer since the time of writing and it handles that workload without issue.
So a connection pool doesn't help much on Lambda at all; what you can do is reuse connections.
The trouble is that every Lambda execution opens its own pool, which floods the DB just like you're seeing. What you want is one connection per Lambda container. You can use a DB class like the one below (this is rough, but let me know if you've got questions):
// Assumes the promise-based mysql2 client; the original used a generic
// mysql import, so treat this wiring as illustrative.
import mysql from 'mysql2/promise'

export default class MySQL {
  constructor() {
    this.connection = null
  }

  async getConnection() {
    // Reuse the container's connection if it's still usable. The `state`
    // property comes from the callback-style mysql client; adjust the
    // liveness check for the client you actually use.
    if (this.connection === null || this.connection.state === 'disconnected') {
      return this.createConnection()
    }
    return this.connection
  }

  async createConnection() {
    this.connection = await mysql.createConnection({
      host: process.env.dbHost,
      user: process.env.dbUser,
      password: process.env.dbPassword,
      database: process.env.database,
    })
    return this.connection
  }

  async query(sql, params) {
    await this.getConnection()
    const [err, result] = await to(this.connection.query(sql, params))
    if (err) {
      console.log(err)
      return false
    }
    return result[0] // mysql2/promise resolves to [rows, fields]
  }
}

// Tiny helper: resolves to [err] on failure or [null, data] on success,
// so callers can destructure instead of using try/catch.
function to(promise) {
  return promise.then((data) => {
    return [null, data]
  }).catch(err => [err])
}
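Usage would be something like this sketch (assuming the class above lives in db.js; the table and field names are made up):
// One shared instance per container, declared at module scope.
import MySQL from './db'

const db = new MySQL() // reused across invocations in the same container

export const handler = async (event) => {
  // Each invocation reuses the container's single connection.
  const rows = await db.query('SELECT * FROM things WHERE id = ?', [event.id])
  return rows
}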
What you need to understand is that a Lambda execution is a little virtual machine that does a task and then stops. It does sit there for a while, and if anyone else needs it, it gets reused along with the container and its connection; for a single task there are never multiple connections to a single Lambda.
Hope this helps; let me know if you need any more detail! Oh, and welcome to Stack Overflow; that's a well-constructed question.

Update Dataflow Streaming job with Session and Sliding window embedded in DF

In my use case, I'm performing a Session window as well as a Sliding window inside a Dataflow job. Basically, my sliding window is 10 hours long with a 4-minute slide. Since I'm applying grouping and a max function on top of that, on every 4-minute interval the window fires a pane, which then goes into a Session window with triggering logic on it. Below is the code for the same.
Window<Map<String, String>> windowMap = Window.<Map<String, String>>into(
        SlidingWindows.of(Duration.standardHours(10)).every(Duration.standardMinutes(4)));

Window<Map<String, String>> windowSession = Window
        .<Map<String, String>>into(Sessions.withGapDuration(Duration.standardHours(10)))
        .discardingFiredPanes()
        .triggering(Repeatedly.forever(
                AfterProcessingTime.pastFirstElementInPane().plusDelayOf(Duration.standardSeconds(5))))
        .withAllowedLateness(Duration.standardSeconds(10));
I would like to add a logger on some steps for debugging (there's a sketch of the kind of step I mean at the end of this question), so I'm trying to update the current streaming job using the code below:
options.setRegion("asia-east1");
options.setUpdate(true);
options.setStreaming(true);
Previously I had around 10k elements. I updated the existing pipeline using the above config, and now I'm not able to see that much data in the steps of the updated Dataflow job. Please help me understand whether an update preserves the previous job's data or not, as I'm not seeing the previous step counts in the updated job.
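For context, the logging step I'd like to add would be a simple pass-through ParDo, roughly like this sketch (step and variable names are illustrative; LogFn is nested inside the pipeline class):
import java.util.Map;
import org.apache.beam.sdk.transforms.DoFn;
import org.apache.beam.sdk.transforms.ParDo;
import org.apache.beam.sdk.values.PCollection;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Pass-through logging step; "windowed" stands in for the windowed PCollection.
static class LogFn extends DoFn<Map<String, String>, Map<String, String>> {
    private static final Logger LOG = LoggerFactory.getLogger(LogFn.class);

    @ProcessElement
    public void processElement(ProcessContext c) {
        LOG.info("Element: {}", c.element());
        c.output(c.element()); // emit the element unchanged
    }
}

PCollection<Map<String, String>> logged =
        windowed.apply("LogElements", ParDo.of(new LogFn()));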

Single actor executing on a scheduled interval, how to shut down in between executions?

I have a single actor spawned by a call to scheduler.schedule, like so:
val yelpActor = actorSystem.actorOf(YelpActor.props, "yelpActor")
actorSystem.scheduler.schedule(0.seconds, 3.hour, yelpActor, Start(browser, mailActor, mailer, reviewsTable))
I want it to execute its code and shut down after it runs, then start up again when the 3-hour mark occurs (then shut down again, etc.).
The actor is defined like this:
class YelpActor extends Actor {
  import YelpActor._

  def receive = {
    case Start(browser, mailActor, mailer, reviewsTable) =>
      .....
  }
}
What do I call within the Actor to make sure this happens correctly?
EDIT:
The reason I'm asking this question is that the memory in my JVM is slowly filling up and causing OOMs. From metrics, it appears to take a step up in memory consumption every time this scheduler runs (every 3 hours). See below; the red circles are when I re-deployed.

task queue in Appengine (using NDB) stopping another function from updating data

cred_query = credits_tbl.query(ancestor=user_key).fetch(1)
for q in cred_query:
    q.total_credits = q.total_credits + credits_bought
    q.put()
I have a task running which constantly updates a user's total_credits in the credits table.
While that task runs, the user can also buy additional credits at any point (as shown in the code above) to add to the total. However, when they try to do so, it does not update total_credits in the credits table.
I guess I don't understand the 'strongly consistent' model of App Engine (using NDB) as well as I thought.
Do you know why this happens?
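For what it's worth, the transactional read-modify-write pattern I've been reading about looks like this sketch (reusing credits_tbl and user_key from above; the retry count is arbitrary):
from google.appengine.ext import ndb

# Sketch: do the read-modify-write inside one transaction so a concurrent
# writer can't silently overwrite this update (a lost update).
@ndb.transactional(retries=3)
def add_credits(user_key, credits_bought):
    for q in credits_tbl.query(ancestor=user_key).fetch(1):
        q.total_credits += credits_bought
        q.put()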

how to get the next scheduled trigger fire time in Quartz.net

This is my first Quartz.net project. I have done my basic homework, and all my cron triggers fire correctly and life is good. However, I am having a hard time finding a property in the API doc; I know it's there, I just cannot find it. How do I get the exact time a trigger is scheduled to fire? If I have a trigger, say at 8:00 AM every day, where in the trigger class is this 8:00 AM stored?
_quartzScheduler.ScheduleJob(job, trigger);
Program.Log.InfoFormat(
    "Job {0} will trigger next time at: {1}", job.FullName, trigger.WhatShouldIPutHere?);
So far I have tried GetNextFireTimeUtc(), StartTimeUTC, and the return value of _quartzScheduler.ScheduleJob() shown above, and found nothing else on http://quartznet.sourceforge.net/apidoc/topic645.html
The triggers fire at their scheduled times correctly; this is just cosmetics. Thank you.
As jhouse said, ScheduleJob returns the next schedule.
I am using Quartz.net 1.0.3 and everything works fine.
Remember that Quartz.net uses the UTC date/time format.
I've used this cron expression: "0 0 8 1/1 * ? *".
DateTime ft = sched.ScheduleJob(job, trigger);
If I print ft.ToString("dd/MM/yyyy HH:mm") I get 09/07/2011 07.00, which is not right, because I've scheduled my trigger to fire every day at 8 AM (I am in London).
If I print ft.ToLocalTime().ToString("dd/MM/yyyy HH:mm") I get what I expect: 09/07/2011 08.00.
You should get what you're after from GetNextFireTimeUtc() (the value from that method should be accurate after ScheduleJob() has been called). Also, the ScheduleJob() method should return the date of the first fire time.
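Putting both answers together, a minimal sketch (Quartz.net 1.x-era API assumed; job, trigger, and Program.Log are the question's own names):
// Sketch: read the next fire time after scheduling. Quartz.net stores times in UTC.
DateTime firstFireUtc = _quartzScheduler.ScheduleJob(job, trigger); // first fire time, UTC

Program.Log.InfoFormat(
    "Job {0} will trigger next time at: {1}",
    job.FullName,
    trigger.GetNextFireTimeUtc()); // accurate once the job has been scheduled

// Convert to local time for display:
Program.Log.InfoFormat("First fire (local): {0}", firstFireUtc.ToLocalTime());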