I want to create complex tasks that let me control when a task starts, when it finishes, and when it repeats.
Example: I want to create a task whose description is "call 10 clients", assign it to user X with a quantity of 10, have it start on 1/1/2018, finish on 30/12/2018, and repeat weekly.
Hint: by "repeat" I mean that at the end of each week I take the real quantity the user completed, divide it by the task's quantity, and then reset the user's quantity to zero for the next week, and so on.
In detail: user X starts work on Saturday with a target of calling 10 clients by the end of the week. At the end of the week I calculate the number of clients he actually called, then start again from scratch the next week with a new task, and so on.
What is the best way to do this in Django? Is there a way to do it with Celery? And if so, can it be managed through the Django admin?
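The weekly cycle I have in mind could be sketched as a plain function that a periodic job (Celery beat with a weekly crontab entry, or a cron-run management command) would call every Saturday while inside the task's start/end window; all names here are illustrative, not an existing API:

```python
from datetime import date

def close_week(target_qty: int, done_qty: int) -> tuple[float, int]:
    """End-of-week bookkeeping: compute the completion ratio
    (real quantity done / task quantity) and reset the user's
    counter to zero for the next week."""
    ratio = done_qty / target_qty if target_qty else 0.0
    return ratio, 0  # 0 is the user's quantity for the coming week

def is_active(today: date, start: date, end: date) -> bool:
    """The task only cycles while inside its start/end window."""
    return start <= today <= end
```

A Celery beat entry scheduled for Saturday would call `close_week` for each assigned task, store the ratio, and write the reset counter back.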
Thank you
I have a question about django-q that I could not find answered in its documentation.
Question: Is it possible to recalculate next_run at the end of every run?
The reason behind it: the q cluster does not handle local times with DST (daylight saving time).
As an example:
A schedule that should run at 6 a.m. German time.
During summer time, the schedule should be executed at 4 a.m. (UTC).
During winter time, the schedule should be executed at 5 a.m. (UTC).
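The intended behaviour can be reproduced with the standard-library zoneinfo: compute the next 6 a.m. Europe/Berlin and convert it to UTC, so the DST switch is handled automatically. A sketch (the result would be what gets written into next_run):

```python
from datetime import datetime, time, timedelta, timezone
from zoneinfo import ZoneInfo

BERLIN = ZoneInfo("Europe/Berlin")

def next_run_utc(now_utc: datetime) -> datetime:
    """Next 6:00 German local time, expressed in UTC."""
    local_now = now_utc.astimezone(BERLIN)
    candidate = datetime.combine(local_now.date(), time(6, 0), tzinfo=BERLIN)
    if candidate <= local_now:
        # 6:00 has already passed today -> schedule for tomorrow
        candidate = datetime.combine(local_now.date() + timedelta(days=1),
                                     time(6, 0), tzinfo=BERLIN)
    return candidate.astimezone(timezone.utc)

# Summer: 6:00 CEST == 4:00 UTC; winter: 6:00 CET == 5:00 UTC
```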
To fix that I wrote custom logic for the next run. This logic lives in the schedule's custom function. I tried to retrieve the Schedule object inside that function and set next_run there.
The problem: if I put the next_run logic before the second block of "other calculations" it works, but if I place it after that block it does not. The other calculations are unrelated to the schedule object.
from django_q.models import Schedule

def custom_function(**kwargs):
    # 1. some calculations not related to the schedule object
    # placing the next_run logic here works:
    related_schedule = Schedule.objects.get(id=kwargs["schedule_id"])
    related_schedule.next_run = ...
    related_schedule.save()
    # 2. some other calculations not related to the schedule object
    # placing the next_run logic here does not work
This behaviour seems random, and I cannot explain it.
... or, more specifically, I want to know:
for each process,
for each process step,
how many process instances are on that step
at the moment, and even better, which have been there for more than x minutes.
The REST interface https://docs.camunda.org/manual/7.5/reference/rest/execution/get-query-count/ gives me the count only for one specific step, not for all of them, and for processes with many steps I don't want to issue what feels like a thousand queries to get this information.
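For reference, the per-step loop I am trying to avoid would look roughly like this against the execution count resource (the base URL is an assumption, and the `fetch` hook is only there so the sketch is testable; parameter names are as documented for the 7.5 execution query):

```python
BASE = "http://localhost:8080/engine-rest"  # assumed Camunda REST base URL

def counts_per_activity(process_definition_key, activity_ids, fetch=None):
    """One GET /execution/count call per activity id -- exactly the
    repetition that becomes painful for processes with many steps."""
    if fetch is None:                      # default: real HTTP via requests
        import requests
        def fetch(params):
            resp = requests.get(BASE + "/execution/count", params=params)
            resp.raise_for_status()
            return resp.json()
    return {
        act_id: fetch({"processDefinitionKey": process_definition_key,
                       "activityId": act_id})["count"]
        for act_id in activity_ids
    }
```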
In the database I tried the query below, but it gives me redundant counts rather than the number of instances actually active on each step. On the plus side, I would not need to rework the query when the process definition changes.
select job.proc_def_key_, job.act_id_, count(ex.id_)
from camunda.act_ru_jobdef job
join camunda.act_ru_execution ex
  on job.proc_def_id_ = ex.proc_def_id_
where ex.business_key_ is not null
group by job.proc_def_key_, job.act_id_
order by job.proc_def_key_, job.act_id_;
I have created a schedule in AnyLogic within the agent population "customer", where customers create orders and send them to "terminals". Every day, the number of orders to be sent to the terminals differs per customer. I want to create multiple orders at once (every day; that is the start column of the schedule), and the number of orders to create is the value column of the schedule. How do I do this?
As you can see below, right now just one order is created every day (with the amount as a parameter), but I want that many orders created at that one day/moment. Thank you for the help!
The schedule data looks like:
You could do something like this:
You will have to set the parameters of your agent in the Source block, and in the Exit block call send(agent, main.terminals(0)).
If you have missing data instead of 0 in your value column, use this expression in the Source's "agents per arrival" field:

selectFrom(db_table)
    .where(db_table.name.eq(name))
    .where(db_table.start.eq(getDayOfWeek() - 1))
    .count() > 0
  ? selectFrom(db_table)
        .where(db_table.name.eq(name))
        .where(db_table.start.eq(getDayOfWeek() - 1))
        .uniqueResult(db_table.value, int.class)
  : 0
I would add dates to my schedule data, such as 28-12-2021 15:28, and then type something large into the "Repeat every" field. This is how I do it (my unit is always 1, but you can use any number instead):
I am trying to schedule a monthly Airflow job. I set the start date to
'start_date': datetime(2020, 9, 23),
which is one month before today's date, because of the "start_date + schedule_interval" rule. I set my schedule interval to:
schedule_interval="20 9 23 * *"
By this logic the job should run on 2020-10-23 at 09:20 UTC. But it is not running or even creating a run instance. As far as I can tell I did everything right: I kept the start date one month back and even tried catchup=True, but it doesn't help.
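The "start_date + schedule_interval" rule means the run stamped with execution date 2020-09-23 09:20 is only triggered once the whole following interval has elapsed. A plain-datetime sketch of that arithmetic, with the cron 20 9 23 * * hard-coded as "the 23rd at 09:20" (no Airflow imports, just to illustrate the rule):

```python
from datetime import datetime

def trigger_time(execution_date: datetime) -> datetime:
    """For schedule '20 9 23 * *' (09:20 on the 23rd of every month),
    the DagRun stamped `execution_date` actually starts one interval
    later, i.e. at the *next* 23rd-at-09:20."""
    year, month = execution_date.year, execution_date.month + 1
    if month > 12:
        year, month = year + 1, 1
    return datetime(year, month, 23, 9, 20)

print(trigger_time(datetime(2020, 9, 23, 9, 20)))  # 2020-10-23 09:20:00
```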
The job does run if I use a daily schedule, e.g.:
'start_date': airflow.utils.dates.days_ago(1)
with the schedule interval:
schedule_interval="20 9 * * *"
and it works fine; it ran today at 09:20 UTC.
Note: I have run this job manually before, so it has a last execution date set. Could that be the problem? If so, how can I resolve it, or will I have to create a new DAG?
Changing the schedule_interval can cause problems, and it's recommended to create a new DAG; see "Common Pitfalls" on the Apache Airflow Confluence:
When needing to change your start_date and schedule interval, change the name of the dag (a.k.a. dag_id) - I follow the convention: my_dag_v1, my_dag_v2, my_dag_v3, my_dag_v4, etc...
Changing schedule interval always requires changing the dag_id, because previously run TaskInstances will not align with the new schedule interval.
Changing start_date without changing schedule_interval is safe, but changing to an earlier start_date will not create any new DagRuns for the time between the new start_date and the old one, so tasks will not automatically backfill to the new dates. If you manually create DagRuns, tasks will be scheduled, as long as the DagRun date is after both the task start_date and the dag start_date.
I'm writing a Django app that features a timer, like in a game.
Let's say the game is a basketball game with 4 quarters of 10 minutes each. At the end of each 10-minute quarter the DB has to be updated.
Setting a fixed timer that changes the DB won't work for me, because a quarter won't always be 10 minutes, and the length can change while the app is in production; i.e., I store the quarter length in the DB so I can change it whenever I want.
I thought of using signals, but I just couldn't find a way to make it work.
Any help would be appreciated. Thanks.
One way to think about it: it doesn't matter what state the DB is in while nobody is looking at it. In other words, you don't have to update the DB after exactly 10 minutes.
Instead, as each request comes in, first check whether you are past the timer's limit; if so, update the DB before continuing with the usual view code.
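A framework-free sketch of that lazy check (field names are hypothetical; in Django this would run at the top of each view, reading the quarter length from the DB and saving the rolled-forward state back):

```python
from datetime import datetime, timedelta

def advance_if_expired(quarter_started_at: datetime,
                       quarter_length_min: int,
                       current_quarter: int,
                       now: datetime):
    """Return the (quarter, started_at) the DB *should* contain at `now`.
    If one or more quarters have fully elapsed since the stored start
    time, roll the state forward before running the normal view code."""
    length = timedelta(minutes=quarter_length_min)
    while current_quarter < 4 and now - quarter_started_at >= length:
        current_quarter += 1
        quarter_started_at += length
    return current_quarter, quarter_started_at
```

Because the quarter length is read from the DB on every call, changing it in production takes effect on the very next request, which is exactly what the question needs.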