I need to make a task queue in C++/CX, but due to my limited experience I don't know how.
The purpose is:
- creating a task in some thread with a lambda ("task1 = [] () {}")
- then adding the task to a task queue that executes in another thread
- while a task is waiting in the queue, it does not execute
- each task executes only after the previous task has finished
As I understand it, when you use auto a = concurrency::create_task(lambda), the task starts immediately. Delaying the start of such a task requires a reference to the previous task, but I can't get one because my tasks are generated in separate threads.
So could anybody help me solve this problem?
It seems that proper use of concurrency::task_group can solve my problem.
Also, a concurrency::task_handle does not execute on creation, so using it may solve my problem too, but it needs its own queue.
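For reference, the pattern the question describes - producer threads enqueue callables, a single consumer thread drains them strictly in order - is language-agnostic. A minimal sketch (in Python for brevity; the same shape maps onto a std::queue of std::function guarded by std::mutex/std::condition_variable, or onto a task_group fed from one thread):

```python
import queue
import threading

class SerialTaskQueue:
    """Runs submitted callables one at a time, in FIFO order, on one worker thread."""

    def __init__(self):
        self._tasks = queue.Queue()
        self._worker = threading.Thread(target=self._run, daemon=True)
        self._worker.start()

    def submit(self, task):
        # Called from any producer thread; the task does NOT run yet.
        self._tasks.put(task)

    def _run(self):
        while True:
            task = self._tasks.get()   # blocks while the queue is empty
            if task is None:           # sentinel value: shut down
                return
            task()                     # next task starts only after this returns

    def close(self):
        self._tasks.put(None)
        self._worker.join()

results = []
q = SerialTaskQueue()
for i in range(3):
    q.submit(lambda i=i: results.append(i))
q.close()
print(results)  # prints [0, 1, 2] - submission order
```

The key point is that submitting and executing are decoupled: a task sits inert in the queue until the single consumer thread reaches it, which is exactly the "doesn't execute while waiting" behaviour asked for.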
We use the node client camunda-external-task-client-js to handle Camunda external tasks.
Following is the client configuration:
"topic_name": "app-ext-task",
"maxTasks": 5,
"maxParallelExecutions": 5,
"interval": 500,
"usePriority": true,
"lockDuration":2100000,
"workerId": "app-ext-task-worker"
We fetch the external task details and are able to process them, but sometimes we see some tasks being deprioritised.
We are not setting any priority to any external task, by default all tasks are assigned priority 0.
We expect all tasks to execute sequentially; we accept that a task may take longer than the subsequent one, e.g. task-1 may take more time than task-2.
Ex: If a queue contains 10 tasks [task-1, task-2, task-3, task-4, task-5, ... task-10],
all the tasks should execute sequentially, as all the tasks have the same priority:
1st:task-1,
2nd:task-2
3rd: task-3
Problem:
We see some tasks being deprioritised, i.e. later messages take priority over earlier ones:
1st:task-1,
2nd:task-2
3rd: task-4
4th: task-5
5th: task-6
6th: task-7
7th: task-8
8th: task-3
I see two places where the problem could occur:
- While producing the message, Camunda may not have posted the message to the queue in order.
- While reading the queue, the external tasks may not be processed in order.
I didn't find much documentation on this, and I don't know how to debug it.
For me this is an intermittent issue, and I have not been able to find the root cause.
Is my expectation of Camunda queues wrong?
The external tasks do not form a "queue". They are instances in a pool of available tasks; your worker fetches "some" tasks, which may or may not be in order. You could prioritise the tasks, but even then, if there are 10 "highest"-priority tasks in the pool and the worker fetches 5, you cannot determine which ones are chosen.
But you have a process engine at hand: if keeping the sequence is essential for your process, why start all tasks at once and rely on the external worker to keep the order? Why not create one task at a time and continue when it is finished?
CompletableFuture is very powerful when it comes to joining futures. Among other advantages (execute something when the task finishes, execute something on an exception, etc.) it has the option to run tasks in the background using runAsync.
What it lacks, though, is a way to run a task periodically, similar to ScheduledExecutorService.scheduleAtFixedRate.
Does anyone know how to run a task periodically using a CompletableFuture? I tried an endless loop in the task itself, but then one loses the ability to cancel the task via the future's cancel method.
I was wondering: is there a straightforward way to wait for all tasks to finish before exiting, without keeping track of all the ObjectIDs (and calling get() on them)? The use case is when I launch remote tasks for saving output, for example, where no return result is needed. The futures are just extra state to keep track of.
Currently there is no standard way to block until all tasks have finished.
There are some workarounds that can be used.
- Keep track of all of the object IDs in a list object_ids and then call ray.get(object_ids) or ray.wait(object_ids, num_returns=len(object_ids)).
- Loop as long as some resources are being used:
import ray
import time

while (ray.global_state.cluster_resources() !=
       ray.global_state.available_resources()):
    time.sleep(1)
The above code loops until it detects that no tasks are currently being executed. However, this is not a foolproof approach: there could be a moment when no tasks are running but the scheduler is just about to start one.
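For the first workaround, the bookkeeping is minimal: collect the futures as you launch and block once, right before exiting. A sketch of that pattern using stdlib concurrent.futures as a stand-in for Ray (replace executor.submit with the .remote() call and wait with ray.wait/ray.get):

```python
from concurrent.futures import ThreadPoolExecutor, wait

sink = []

def save_output(i):
    # Stand-in for a fire-and-forget side effect (e.g. writing a file).
    sink.append(i)

with ThreadPoolExecutor(max_workers=4) as executor:
    # Launch the side-effect-only tasks, keeping every future in one list...
    futures = [executor.submit(save_output, i) for i in range(10)]
    # ...then block once until every task has finished.
    wait(futures)

print(sorted(sink))  # prints [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
```

The single list of futures is the "extra stuff to keep track of" from the question, but it is one variable and one blocking call, which is why it remains the recommended approach.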
I have the below configuration in Supervisor, which keeps polling the jobs table:
[program:laravel-queue-listener]
command=php /var/www/laravel/artisan queue:work --sleep=120 --tries=2 --daemon
Question: Right now it checks the database for pending jobs every 2 minutes. Is there any way to process queues on demand? I mean, when the code below executes, it should process the queue, after first checking whether the queue is already being processed.
Is there any such function in the framework to process queues manually and to check whether a queue is currently polling or processing a job?
$User->notify(new RegisterNotification($token, $User));
I understand your question as how to process queues on demand in Laravel. There is already a detailed answer here, but the command you are looking for is:
php artisan queue:work --once
However, if what you are trying to do is to run the queue worker when an event happens, you can still do that by invoking the queue worker from code. Example:
public static function boot()
{
    parent::boot();

    static::creating(function ($user) {
        Artisan::call('queue:work --once');
    });
}
I'm using Celery 3.1. I need the next task to execute only when the previous one has finished. How can I make sure that two tasks are not working at the same time? I've read the documentation but it is not clear to me.
I've the following scheme:
Task Main
- Subtask 1
- Subtask 2
I need that when I call "Task Main", the process runs to the end (Subtask 2) without any new "Task Main" starting.
How can I ensure this?
One strategy is through the use of locks. The Celery Task Cookbook has an example at http://docs.celeryproject.org/en/latest/tutorials/task-cookbook.html.
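The cookbook's approach boils down to: atomically acquire a named lock before the task body, skip if it is already held, and release in a finally block. A minimal sketch of that pattern, with an in-process dict standing in for the shared, atomic cache backend the cookbook uses (in production this must be a store visible to all workers, e.g. Redis or the Django cache):

```python
import threading

# Stand-in for a shared cache backend; the cookbook relies on cache.add being atomic.
_locks = {}
_guard = threading.Lock()

def acquire(lock_id):
    with _guard:
        if _locks.get(lock_id):
            return False          # someone else holds the lock
        _locks[lock_id] = True
        return True

def release(lock_id):
    with _guard:
        _locks[lock_id] = False

def main_task(run_log, lock_id="main-task-lock"):
    # Body runs only if we won the lock; overlapping invocations are skipped.
    if not acquire(lock_id):
        run_log.append("skipped: already running")
        return
    try:
        run_log.append("subtask 1")
        run_log.append("subtask 2")
    finally:
        release(lock_id)          # always release, even if a subtask raises

log = []
acquire("main-task-lock")          # simulate another worker holding the lock
main_task(log)                     # skipped
release("main-task-lock")
main_task(log)                     # runs both subtasks
print(log)  # prints ['skipped: already running', 'subtask 1', 'subtask 2']
```

The function names and lock key here are illustrative, not part of the Celery API; the cookbook implements the same acquire/release pair with cache.add and cache.delete.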
If I understand correctly, you want to execute MainTask strictly one at a time, and you want to call subtasks from your MainTask. Without creating separate queues and at least two separate workers this is impossible, because if you store all tasks in the same queue, Celery treats them all alike.
So the solution is:
- route MainTask to main_queue
- start a separate worker for this queue, with concurrency 1:
celeryd --concurrency=1 --queue=main_queue
- route subtasks to sub_queue
- start a separate worker for that queue:
celeryd --queue=sub_queue
That should work!
But I think this is a complicated architecture; you may be able to make it much simpler if you redesign your process.
Also, you may find this useful (it works for your case, but it could run MainTask in parallel):
You should try using chains; here is an example in Celery's docs: http://docs.celeryproject.org/en/latest/userguide/tasks.html#avoid-launching-synchronous-subtasks.
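For intuition, a chain simply links tasks so that each one starts only after the previous one has finished, receiving its return value. The behaviour can be sketched without a broker; note this chain helper is a toy stand-in for illustration, not the celery.chain API:

```python
from functools import reduce

def chain(*tasks):
    """Toy stand-in for celery.chain: run tasks left to right,
    feeding each task's result into the next one."""
    def run(initial):
        return reduce(lambda acc, task: task(acc), tasks, initial)
    return run

def subtask_1(x):
    return x + 1

def subtask_2(x):
    return x * 10

workflow = chain(subtask_1, subtask_2)
print(workflow(4))  # prints 50, i.e. (4 + 1) * 10
```

With real Celery this would be chain(subtask_1.s(4), subtask_2.s())(), and the guarantee is the same: subtask_2 cannot start before subtask_1 completes, though separate chain invocations may still run in parallel.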