How to get the next scheduled trigger fire time in Quartz.net

This is my first Quartz.net project. I have done my basic homework, all my cron triggers fire correctly, and life is good. However, I am having a hard time finding a property in the API doc. I know it's there, I just cannot find it. How do I get the exact time a trigger is scheduled to fire? If I have a trigger that fires at, say, 8:00 AM every day, where in the trigger class is this 8:00 AM stored?
_quartzScheduler.ScheduleJob(job, trigger);
Program.Log.InfoFormat("Job {0} will trigger next time at: {1}",
    job.FullName, trigger.WhatShouldIPutHere?);
So far I have tried GetNextFireTimeUtc(), StartTimeUtc, and the return value of _quartzScheduler.ScheduleJob() shown above. Nothing else on http://quartznet.sourceforge.net/apidoc/topic645.html
The triggers fire at their scheduled times correctly; this is just cosmetic. Thank you.

As jhouse said, ScheduleJob returns the next scheduled fire time.
I am using Quartz.net 1.0.3 and everything works fine.
Remember that Quartz.net uses UTC date/time format.
I've used this cron expression: "0 0 8 1/1 * ? *".
DateTime ft = sched.ScheduleJob(job, trigger);
If I print ft.ToString("dd/MM/yyyy HH:mm") I get 09/07/2011 07:00,
which is not right, because I've scheduled my trigger to fire every day at 8 AM (I am in London).
If I print ft.ToLocalTime().ToString("dd/MM/yyyy HH:mm") I get what I expect: 09/07/2011 08:00.

You should get what you're after from GetNextFireTimeUtc() (the value from that method should be accurate after having called ScheduleJob()). Also, the ScheduleJob() method should return the date of the first fire time.

Is there any Callback option in GCP cloud function?

I am looking for a way to "wake up" a cloud function when a related process is done.
To explain in depth, these are the functions I have:
1. A cloud function that gets called every X time; its purpose is to call another function (function #2).
2. An external provider function that requests information (I can't edit its code; I only control the request body). The information is not received in real time; once the process ends, it sends a callback. Note that the process can take many minutes or even hours.
I want to create a process where every X time function 1 calls function 2, and as soon as the second one finishes it returns the information to function 1, which stores it in the DB.
Example code for func1:
import requests

def entry_point():  # func1
    response = requests.get('https://outsourceapi.com/get_info')  # calls func2
    save_response_in_DB(response.json())  # this happens only after getting the response
Because I cannot keep function 1 awake for that long, is there a way to "wake it up" again?
Or alternatively another solution?
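One pattern that avoids keeping function 1 awake, assuming the provider's request body lets you pass a callback URL (that field name, the URLs, and the function names below are all hypothetical), is to split the work into a scheduled "starter" function and a second, HTTP-triggered "receiver" function that the provider calls when it finishes:
import requests

def start_job(request):  # function 1, invoked every X time
    # Hand the provider the URL of the receiver function so it can
    # "wake us up" when the long-running work completes.
    requests.post(
        'https://outsourceapi.com/get_info',
        json={'callback_url': 'https://REGION-PROJECT.cloudfunctions.net/on_done'},
    )
    return 'started'

def on_done(request):  # deployed separately with an HTTP trigger
    # The provider calls this endpoint once it has finished; only now
    # is the result written to the DB.
    save_response_in_DB(request.get_json())
    return 'ok'
This way neither function has to run while the provider is working; the receiver only executes for the moment it stores the result.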

Update Dataflow Streaming job with Session and Sliding window embedded in DF

In my use case, I'm using a Session window as well as a Sliding window inside a Dataflow job. My sliding window duration is 10 hours with a sliding period of 4 minutes. Since I'm applying grouping and a max function on top of that, at every 4-minute interval the window fires its pane, which then goes into a Session window with triggering logic on it. Below is the code for the same.
Window<Map<String, String>> windowMap = Window.<Map<String, String>>into(
        SlidingWindows.of(Duration.standardHours(10)).every(Duration.standardMinutes(4)));

Window<Map<String, String>> windowSession = Window
        .<Map<String, String>>into(Sessions.withGapDuration(Duration.standardHours(10)))
        .discardingFiredPanes()
        .triggering(Repeatedly.forever(
                AfterProcessingTime.pastFirstElementInPane().plusDelayOf(Duration.standardSeconds(5))))
        .withAllowedLateness(Duration.standardSeconds(10));
I would like to add a logger on some steps for debugging, so I'm trying to update the current streaming job using the code below:
options.setRegion("asia-east1");
options.setUpdate(true);
options.setStreaming(true);
Previously I had around 10k elements, and after updating the existing pipeline with the above config I'm not able to see that much data in the steps of the updated Dataflow job. So help me understand whether an update preserves the previous job's data or not, as I'm not seeing the previous step counts in the updated job.

Azure webjob is always "Running"

I just created an Azure webjob. I scheduled it to run every minute:
0 */1 * * * *
This is the code:
var host = new JobHost();
Console.WriteLine("Starting program...");
var unityContainer = new UnityContainer();
unityContainer.RegisterType<ProgramStarter, ProgramStarter>();
unityContainer.RegisterType<IOutgoingEmailRepository, OutgoingEmailRepository>();
unityContainer.RegisterType<IOutgoingEmailService, OutgoingEmailService>();
unityContainer.RegisterType<IDapperHelper, DapperHelper>();
//var game = unityContainer.Resolve<IOutgoingEmailRepository>();
var program = unityContainer.Resolve<ProgramStarter>();
program.Run().Wait();
Console.WriteLine("All done....");
host.RunAndBlock();
The problem is that the status never changes to "success". Am I doing something wrong? Below are the app settings I use; should I change them? I also noticed that it runs only the first time; I believe that is because it never ends.
You could check your webjob logs on Kudu.
If you use the above job in a RunAndBlock scenario, your job has to be continuous; that means the process will run all the time.
But you're using a triggered webjob here, not a continuous one, so the RunAndBlock method cannot be used.
WEBJOBS_IDLE_TIMEOUT - Time in seconds after which we'll abort a running triggered job's process if it's in idle, has no cpu time or output (only for triggered jobs).
In addition, I notice that you set WEBJOBS_IDLE_TIMEOUT to 100000. That value is so large that your webjob will not be stopped for a very long time while it is idle.
You could also change the grace period of a job by specifying it (in seconds) in the settings.job file, where the name of the setting is stopping_wait_time, like so:
{ "stopping_wait_time": 60 }
For more details, please refer to this doc.
Hope it helps you.

Task queue in App Engine (using NDB) stopping another function from updating data

cred_query = credits_tbl.query(ancestor=user_key).fetch(1)
for q in cred_query:
    q.total_credits = q.total_credits + credits_bought
    q.put()
I have a task running which is constantly updating a user's total_credits in the credits table.
While that task runs, the user can also buy additional credits at any point (as shown in the code above) to add to the total. However, when they try to do so, it does not update total_credits in the credits table.
I guess I don't understand the 'strongly consistent' model of App Engine (using ndb) as well as I thought.
Do you know why this happens?
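For what it's worth, the usual way to avoid lost updates like this in ndb is to do the read-modify-write inside a transaction, so concurrent writers to the same entity group are retried instead of silently overwriting each other. Here is a minimal sketch reusing the names from the question (an illustration of the idea, not a verified fix for this exact job):
from google.appengine.ext import ndb

@ndb.transactional
def add_credits(user_key, credits_bought):
    # Read, modify and write atomically; ndb retries the whole
    # function if another writer commits to this entity group first.
    cred_query = credits_tbl.query(ancestor=user_key).fetch(1)
    for q in cred_query:
        q.total_credits += credits_bought
        q.put()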

Copying rather than modifying a job (APScheduler)

I'm writing a database-driven application with APScheduler (v3.0.0). Especially during development, I find myself frequently wanting to command a scheduled job to start running now without affecting its subsequent schedule.
It's possible to do this at job creation time, of course:
def dummy_job(arg):
    pass

sched.add_job(dummy_job, trigger='interval', hours=3, args=(None,))
sched.add_job(dummy_job, trigger=None, args=(None,))
However, if I already have a job scheduled with an interval or date trigger...
>>> sched.print_jobs()
Jobstore default:
job1 (trigger: interval[3:00:00], next run at: 2014-08-19 18:56:48 PDT)
... there doesn't seem to be a good way to tell the scheduler "make a copy of this job which will start right now." I've tried sched.reschedule_job(trigger=None), which schedules the job to start right now, but removes its existing trigger.
There's also no obvious, simple way to duplicate a job object while preserving its args and any other stateful properties. The interface I'm imagining is something like this:
sched.dup_job(id='job1', new_id='job2')
sched.reschedule_job('job2', trigger=None)
Clearly, APScheduler already contains an internal mechanism to copy job objects since repeated calls to get_job don't return the same object (that is, (sched.get_job(id) is sched.get_job(id))==False).
Has anyone else come up with a solution here? I'm thinking of posting a suggestion on the developers' site if not.
As you've probably figured out by now, that phenomenon is caused by the job stores instantiating jobs on the fly based on data loaded from the back end. To run a copy of a job immediately, this should do the trick:
job = sched.get_job(id)
# A job added without a trigger runs once, immediately, and the
# original job's schedule is left untouched.
sched.add_job(job.func, args=job.args, kwargs=job.kwargs)
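For convenience, this can be wrapped into a small helper along the lines of the dup_job interface imagined above (the helper name is mine, not part of APScheduler's API):
def run_copy_now(sched, job_id):
    # Clone an existing job's callable and arguments into a one-off
    # job that fires immediately; the original schedule is untouched.
    job = sched.get_job(job_id)
    return sched.add_job(job.func, args=job.args, kwargs=job.kwargs)

run_copy_now(sched, 'job1')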