Jobs optimization in Laravel 5.5

I have the below Supervisor configuration, which keeps polling the jobs table:
[program:laravel-queue-listener]
command=php /var/www/laravel/artisan queue:work --sleep=120 --tries=2 --daemon
Question: Right now, it goes to the database to check for pending jobs every 2 minutes. Is there any way to process queues on demand? I mean, when the code below executes, it could process the queue, first checking whether the queue is already being processed.
Is there any function in the framework to process queues manually and to check whether a worker is currently polling or processing a job?
$User->notify(new RegisterNotification($token, $User));

I understand your question as asking how to process queues on demand in Laravel. There is already a detailed answer here, but the command you are looking for is:
php artisan queue:work --once
However, if what you are trying to do is run the queue worker when an event happens, you can do that by invoking the queue worker from code. For example, in a model:
public static function boot()
{
    parent::boot();

    static::creating(function ($user) {
        // Requires the Illuminate\Support\Facades\Artisan facade import.
        // In Laravel 5.5, pass options as an array, not in the command string.
        Artisan::call('queue:work', ['--once' => true]);
    });
}

Camunda external task messages are being deprioritised

We use the Node camunda-external-task-client-js library to handle Camunda external tasks.
The following is the client configuration:
"topic_name": "app-ext-task",
"maxTasks": 5,
"maxParallelExecutions": 5,
"interval": 500,
"usePriority": true,
"lockDuration":2100000,
"workerId": "app-ext-task-worker"
We fetch the external task details and are able to process them, but sometimes we see tasks getting deprioritised.
We are not setting any priority on the external tasks; by default all tasks are assigned priority 0.
We expect all tasks to execute sequentially. We accept that some tasks may take longer than the ones that follow, e.g. task-1 may take more time than task-2.
Example: say a queue contains 10 tasks [task-1, task-2, task-3, task-4, task-5, ... task-10].
Normally, all the tasks execute sequentially, as they all have the same priority:
1st: task-1
2nd: task-2
3rd: task-3
Problem:
We see some tasks getting deprioritised, meaning newer messages take priority over earlier ones:
1st: task-1
2nd: task-2
3rd: task-4
4th: task-5
5th: task-6
6th: task-7
7th: task-8
8th: task-3
I see the problem potentially arising in two places:
While producing the message, Camunda might not have posted the message to the queue.
While reading the queue, the external tasks might not be processed in order.
I didn't find much documentation on this, so I don't know how to debug it.
For me this is an intermittent issue, and I have not been able to find the root cause.
I am not sure how to debug this either.
Is my expectation of Camunda queues wrong?
The external tasks do not form a "queue". They are instances in a pool of possible tasks; your worker fetches "some" tasks, which might be in order or not. You could prioritise the tasks, but still, if you have 10 highest-priority tasks in the pool and the worker fetches 5, you cannot determine which ones are chosen.
But you have a process engine at hand: if keeping the sequence is essential for your process, why start all tasks at once and rely on the external worker to keep the order? Why not create one task at a time and continue when it is finished?
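To see why completion order is not preserved, here is a minimal worker sketch with camunda-external-task-client-js using the configuration from the question; the baseUrl is an assumed local engine endpoint, not something from the question. With maxParallelExecutions of 5, up to five handlers run at once, so task-3 can easily finish after task-8; dropping maxTasks and maxParallelExecutions to 1 approximates sequential processing at the cost of throughput.
const { Client } = require("camunda-external-task-client-js");

const client = new Client({
  baseUrl: "http://localhost:8080/engine-rest", // assumed endpoint
  workerId: "app-ext-task-worker",
  maxTasks: 5,
  maxParallelExecutions: 5,
  interval: 500,
  lockDuration: 2100000,
  usePriority: true,
});

client.subscribe("app-ext-task", async ({ task, taskService }) => {
  // Several of these handlers run concurrently (maxParallelExecutions: 5),
  // so there is no guarantee that earlier tasks complete first.
  await taskService.complete(task);
});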

How to periodically schedule the same activity when the previous one is still executing

My goal is to have a workflow which periodically (every 30 seconds) adds the same activity (doing nothing but sleeping for 1 minute) to the task list. I also have multiple machines hosting activity workers that poll the task list simultaneously. When the activity gets scheduled, one of the workers can poll it and execute it.
I tried to use a cron decorator to create a DynamicActivityClient and use DynamicActivityClient.scheduleActivity() to schedule the activity periodically. However, it seems that the activity will not be scheduled until the last activity has finished. In my case, the activity gets scheduled every 1 minute rather than every 30 seconds as I set in the cron pattern.
The package structure is almost the same as the AWS SDK sample code: cron
Is there any other structure recommended to achieve this? I am very new to SWF. Any suggestion is highly appreciated.
You may do so by writing much simpler workflow code that uses the workflow clock and a timer. Refer to the example at the link below.
http://docs.aws.amazon.com/amazonswf/latest/awsflowguide/executioncontext.html
Also remember one thing: the maximum number of events allowed in a workflow execution is 25,000, so the cron job will not run forever; you will have to write code to start a new workflow execution after some time. Refer to the continuous workflow example at the link below.
http://docs.aws.amazon.com/amazonswf/latest/awsflowguide/continuous.html
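For illustration, here is a rough sketch of the clock-and-timer approach in the AWS Flow Framework for Java. The workflow interface and the activity client names (PeriodicWorkflow, MyActivitiesClient, doNothingForAMinute) are assumptions for the sketch, not code from the question.
public class PeriodicWorkflowImpl implements PeriodicWorkflow {
    private final DecisionContextProvider provider = new DecisionContextProviderImpl();
    private final WorkflowClock clock = provider.getDecisionContext().getWorkflowClock();
    private final MyActivitiesClient activities = new MyActivitiesClientImpl();

    @Override
    public void startPeriodic() {
        scheduleEvery30Seconds();
    }

    @Asynchronous
    void scheduleEvery30Seconds(Promise<?>... waitFor) {
        // Schedule the activity but do not wait for it to complete.
        activities.doNothingForAMinute();
        // createTimer fires after 30 seconds; recursing on the timer promise
        // schedules the next activity even while the previous one still runs.
        Promise<Void> timer = clock.createTimer(30);
        scheduleEvery30Seconds(timer);
    }
}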
The cron decorator internally relies on AsyncScheduledExecutor, which is by design written to wait for all asynchronous code in the invoked method to complete before calling the cron again. So the behavior you are witnessing is expected. The workaround is to invoke the activity not from the code under the cron, but from code in a different scope. Something like:
// This is a field
Settable<Void> invokeNextActivity = new Settable<>();

void executeCron() {
    scheduledExecutor.execute(new AsyncRunnable() {
        @Override
        public void run() throws Throwable {
            // Instead of executing the activity here, just unblock
            // its execution in a different scope.
            invokeNextActivity.set(null);
        }
    });
    // Recursive loop with each activity invocation
    // gated on invokeNextActivity
    executeActivityLoop(invokeNextActivity);
}

@Asynchronous
void executeActivityLoop(Promise<Void> waitFor) {
    activityClient.executeMyActivityOnce();
    invokeNextActivity = new Settable<>();
    executeActivityLoop(invokeNextActivity);
}
I recommend reading the TryCatchFinally documentation to get an understanding of error handling and scopes.
Another option is to rewrite AsyncScheduledExecutor to invoke invoked.set(lastInvocationTime) not from doFinally but immediately after calling command.run().

WinRT C++ task queue

I need to make a task queue in C++/CX, but due to my limited experience I don't know how.
The purpose is:
- creating a task in some thread with a lambda ("task1 = [] () {}")
- then adding this task to the task queue; the queue executes in another thread
- while a task is waiting in the queue, it does not execute
- a task executes only after the previously queued task has finished
As I understand it, when you use auto a = concurrency::create_task(lambda), the task starts immediately. Delaying the start of such a task requires a pointer to the previous task, but I can't get one because my tasks are generated in separate threads.
So could anybody help me solve this problem?
It seems that proper use of concurrency::task_group can solve my problem.
Also, a concurrency::task_handle does not execute on creation, so using it may solve my problem too, but it needs its own queue.
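Another common pattern, sketched below under the assumption that PPL's ppltasks.h is available: keep the tail of a task chain as shared state and append each new work item with .then(). Whichever thread enqueues, the work items run strictly one after another.
#include <ppltasks.h>
#include <functional>
#include <mutex>

class SerialTaskQueue {
public:
    // Append work; it runs only after everything queued before it has finished.
    void enqueue(std::function<void()> work) {
        std::lock_guard<std::mutex> lock(m_mutex);
        m_tail = m_tail.then([work]() { work(); });
    }
private:
    std::mutex m_mutex;
    // Start with an already-completed task so the first .then() runs at once.
    concurrency::task<void> m_tail = concurrency::task_from_result();
};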

What happens to running processes on a continuous Azure WebJob when website is redeployed?

I've read about graceful shutdowns here using the WEBJOBS_SHUTDOWN_FILE and here using Cancellation Tokens, so I understand the premise of graceful shutdowns; however, I'm not sure how they affect WebJobs that are in the middle of processing a queue message.
So here's the scenario:
I have a WebJob with functions listening to queues.
A message is added to a queue and the job begins processing it.
While it is processing, someone pushes to develop, triggering a redeploy.
Assuming I have my WebJobs hooked up to deploy on git pushes, this deploy will also trigger the WebJobs to be updated, which (as far as I understand) will kick off some sort of shutdown workflow in the jobs. So I have a few questions stemming from that.
Will jobs in the middle of processing a queue message finish processing the message before the job quits? Or is any shutdown notification essentially treated as "this bitch is about to shut down. If you don't have anything to handle it, you're SOL."
If we are SOL, is our best option for handling shutdowns essentially to wrap anything we're doing in the equivalent of DB transactions and implement our shutdown handler in such a way that all changes are rolled back on shutdown?
If a queue message is in the middle of being processed and the WebJob shuts down, will that message be requeued? If not, does that mean my shutdown handler needs to handle requeuing that message?
Is it possible for functions listening to queues to grab more queue messages after the job has been notified that it needs to shut down?
Any guidance here is greatly appreciated! Also, if anyone has any other useful links on how to handle job shutdowns besides the ones I mentioned, it would be great if you could share them.
After no small amount of testing, I think I've found the answers to my questions, and I hope someone else can gain some insight from my experience.
NOTE: All of these scenarios were tested using .NET console apps and Azure queues, so I'm not sure how blobs, table storage, or different job file types would handle these scenarios.
1. After a job has been marked to exit, the triggered functions that are running have a configured grace period (5 seconds by default, but I think that is configurable via a settings.job file) to finish before they are exited. If they do not finish within the grace period, the function quits. Main() (or wherever you declared host.RunAndBlock()) will, however, keep running any code after host.RunAndBlock() for up to the time remaining in the grace period (I'm not sure how that would work if you used an infinite loop instead of RunAndBlock). To handle the quit in your functions, you can "listen" to the CancellationToken that you can pass into your triggered functions by checking IsCancellationRequested and reacting accordingly (see the sketch after this list). Also, you are not SOL if you don't handle the quits yourself. Huzzah! See point 3.
2. While you are not SOL if you don't handle the quit (see point 3), I do think it is a good idea to wrap all of your jobs in transactions that you don't commit until you're absolutely sure the job has run its course. That way, if your function exits mid-process, you're less likely to have to worry about corrupted data. I can think of a couple of scenarios where you might want to commit transactions as you go (batch jobs, for instance), but then you need to structure your data or logic so that previously processed entities aren't reprocessed after the job restarts.
3. You are not in trouble if you don't handle job quits yourself. My understanding of what's going on under the covers is virtually non-existent; however, I am quite sure of the results. If a function is in the middle of processing a queue message and is forced to quit before it can finish, HAVE NO FEAR! When the job grabs the message to process, it essentially hides it on the queue for a certain amount of time. If your function quits while processing the message, that message "becomes visible" again after x amount of time, and it is re-grabbed and run against the potentially updated code that was just deployed.
4. I have about 90% confidence in my findings here, because testing this involved quick-switching between windows while not being totally sure what was going on with certain pieces. But here's what I found: on the off chance that a queue has a new message added to it in the grace period before the job quits, I THINK one of two things can happen. If the function doesn't poll that queue before the job quits, the message stays on the queue and is grabbed when the job restarts. If the function DOES grab the message, it is treated the same as any other interrupted message: it "becomes visible" on the queue again and is rerun when the job restarts.
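Here is the kind of cancellation-aware function point 1 describes: a minimal sketch of a WebJobs SDK queue trigger, where the "orders" queue name and the GetSteps/ProcessStepAsync helpers are hypothetical placeholders, not SDK APIs.
public static async Task ProcessQueueMessage(
    [QueueTrigger("orders")] string message,
    TextWriter log,
    CancellationToken cancellationToken)
{
    foreach (var step in GetSteps(message))
    {
        // Stop cleanly when the host signals shutdown; the half-processed
        // message becomes visible on the queue again and is retried later.
        if (cancellationToken.IsCancellationRequested)
        {
            log.WriteLine("Shutdown requested; abandoning message.");
            return;
        }
        await ProcessStepAsync(step);
    }
}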
That pretty much sums it up. I hope other people will find this useful. Let me know if you want any of this expounded on and I'll be happy to try. Or if I'm full of it and you have lots of corrections, those are probably more welcome!

Django-celery - How to execute tasks serially?

I'm using Celery 3.1. I need to execute the next task only when the last one has finished. How can I ensure that no two tasks are working at the same time? I've read the documentation but it is not clear to me.
I have the following scheme:
Task Main
- Subtask 1
- Subtask 2
I need that when I call "Task Main", the process runs to the end (Subtask 2) without any new "Task Main" starting.
How can I ensure this?
One strategy is the use of locks. The Celery Task Cookbook has an example at http://docs.celeryproject.org/en/latest/tutorials/task-cookbook.html.
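Condensed, the cookbook's cache-lock pattern looks roughly like this; the lock key, timeout, and the subtask_1/subtask_2 calls are illustrative assumptions, not code from the question.
from celery import shared_task
from django.core.cache import cache

LOCK_EXPIRE = 60 * 5  # safety net: the lock is dropped after 5 minutes

@shared_task
def task_main():
    lock_id = "task-main-lock"
    # cache.add is atomic with backends like memcached: it succeeds only
    # if the key does not exist yet, so only one task_main holds the lock.
    if not cache.add(lock_id, "locked", LOCK_EXPIRE):
        return "task_main is already running, skipping"
    try:
        subtask_1()
        subtask_2()
    finally:
        cache.delete(lock_id)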
If I understand correctly, you want to execute only one MainTask at a time, and you want to call subtasks from your MainTask. Without creating separate queues and at least 2 separate workers this is impossible, because if everything is stored in the same queue, all tasks look the same to Celery.
So the solution is:
route MainTask to main_queue
start a separate worker for this queue:
celeryd --concurrency=1 --queues=main_queue
route subtasks to sub_queue
start a separate worker for this queue:
celeryd --queues=sub_queue
Should work! A sketch of the routing configuration follows below.
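Roughly, with Celery 3.1-style setting names and assumed module paths for the tasks:
CELERY_ROUTES = {
    "myapp.tasks.main_task": {"queue": "main_queue"},
    "myapp.tasks.subtask_1": {"queue": "sub_queue"},
    "myapp.tasks.subtask_2": {"queue": "sub_queue"},
}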
But I think this is a complicated architecture; maybe you can make it much simpler if you redesign your process.
You may also find this useful (it works for your case, but it could run MainTask in parallel):
You should try using chains; here is an example in Celery's docs: http://docs.celeryproject.org/en/latest/userguide/tasks.html#avoid-launching-synchronous-subtasks.