Django Signals for task planning - django

I'm trying to find the best way to handle this issue with Django signals:
I created a data model with an ExecutionPlan table.
In this table I simply store tasks to execute at a "startDate" value.
I would like to be "signaled" when an "execution line" has a startDate < now.
I read the Django documentation on signals, but I didn't find any case where a signal is sent for a "data value check" - in my case, when a startDate has been passed.
So my question is more about method than technique:
Do you think Django signals are designed for that case?
Should I design my own event loop?
Thanks in advance.
Cyril

You can create a cron job that runs every X minutes (using django-cron, for example) which checks whether now > startDate and starts the task if needed.
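A minimal sketch of that approach with django-cron (the ExecutionPlan model and its startDate field come from the question; the executed flag and run_task helper are assumptions, added so an execution line only fires once):

from django_cron import CronJobBase, Schedule
from django.utils import timezone
from myapp.models import ExecutionPlan  # app path assumed

class ExecutionPlanCronJob(CronJobBase):
    RUN_EVERY_MINS = 5
    schedule = Schedule(run_every_mins=RUN_EVERY_MINS)
    code = 'myapp.execution_plan_cron'  # unique identifier required by django-cron

    def do(self):
        # Pick up every execution line whose startDate has passed
        due = ExecutionPlan.objects.filter(
            startDate__lte=timezone.now(), executed=False)
        for plan in due:
            run_task(plan)        # assumed task runner
            plan.executed = True  # assumed flag marking the line as done
            plan.save()

You would register the class in the CRON_CLASSES setting and trigger python manage.py runcrons from the system crontab.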

Related

MismatchingMessageCorrelationException: Cannot correlate message 'onEventReceiver': No process definition or execution matches the parameters

We are facing a MismatchingMessageCorrelationException for the receive task in some cases (less than 5%).
The callback that notifies the receive task is done by:
protected void respondToCallWorker(
        @NonNull final String correlationId,
        final CallWorkerResultKeys result,
        @Nullable final Map<String, Object> variables
) {
    try {
        runtimeService.createMessageCorrelation("callWorkerConsumer")
            .processInstanceId(correlationId)
            .setVariables(variables)
            .setVariable("callStatus", result.toString())
            .correlateWithResult();
    } catch (Exception e) {
        e.printStackTrace();
    }
}
When I check the logs, I find that the query executed is this one:
select distinct RES.* from ACT_RU_EXECUTION RES
inner join ACT_RE_PROCDEF P on RES.PROC_DEF_ID_ = P.ID_
WHERE RES.PROC_INST_ID_ = 'b2362197-3bea-11eb-a150-9e4bf0efd6d0' and RES.SUSPENSION_STATE_ = '1'
and exists (select ID_ from ACT_RU_EVENT_SUBSCR EVT
where EVT.EXECUTION_ID_ = RES.ID_ and EVT.EVENT_TYPE_ = 'message'
and EVT.EVENT_NAME_ = 'callWorkerConsumer' )
Sometimes, when I look for the process instance in the database, I find it waiting at the receive task:
SELECT DISTINCT * FROM ACT_RU_EXECUTION RES
WHERE id_ = 'b2362197-3bea-11eb-a150-9e4bf0efd6d0'
However, when I check for the event subscription, it has not yet been created in the database:
select ID_ from ACT_RU_EVENT_SUBSCR EVT
where EVT.EXECUTION_ID_ = 'b2362197-3bea-11eb-a150-9e4bf0efd6d0'
and EVT.EVENT_TYPE_ = 'message'
and EVT.EVENT_NAME_ = 'callWorkerConsumer'
I think the solution is to persist the "receive task" before the response arrives at respondToCallWorker, but sadly I can't figure out how.
I tried "async before" on callWorker and on the message consumer, but it did not work.
I also tried camunda.bpm.database.jdbc-batch-processing=false and got the same results.
I also tried parallel branches, but I get an OptimisticLockingException and a MismatchingMessageCorrelationException.
Maybe I am doing it wrong.
Thanks for your help
This is an interesting problem. As you already found out, the error happens when you try to correlate the result from the "worker" before the main process has committed its transaction, so there is no message subscription registered at the time you correlate.
This problem in process orchestration is described and analyzed in this blog post, which is definitely worth reading.
Taken from that post, here is a design that should solve the issue:
You make message send and receive parallel and put an async before the send task.
By doing so, the async continuation job for the send task and the message subscription are written in the same transaction, so when the async message send executes, the subscription is already waiting.
Although this should work and solves the issue at the BPMN model level, it might be worth considering options that do not require remodeling the process.
First, instead of calling the worker directly from your delegate, you could (assuming you are on Spring Boot) publish a "CallWorkerCommand" (a simple POJO) and use a TransactionalEventListener on a Spring bean to execute the actual call. By doing so, you first finish the BPMN transaction by subscribing to the message, and afterwards Spring executes your worker call.
Second, you could use a retry mechanism like resilience4j around your correlate-message call, so in the rare cases where the result comes back too quickly, you fail and retry a second later.
Another solution I could think of, since you seem to be using an "external worker" pattern here, is to use an external-task service task directly, so the send/receive synchronization is handled by the Camunda external worker API (see the sketch below).
So many options to choose from. I would probably prefer the external task, followed by the TransactionalEventListener, but that is a matter of personal preference.
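For illustration, here is a minimal external-task worker loop against Camunda 7's REST API (it assumes the default engine-rest endpoint; the topic name callWorker and the do_work function are placeholders, not taken from the question):

import time
import requests

ENGINE = "http://localhost:8080/engine-rest"  # default REST endpoint, adjust as needed
WORKER_ID = "python-worker-1"

def worker_loop():
    while True:
        # Lock up to one task published on the assumed 'callWorker' topic
        tasks = requests.post(
            f"{ENGINE}/external-task/fetchAndLock",
            json={"workerId": WORKER_ID, "maxTasks": 1,
                  "topics": [{"topicName": "callWorker", "lockDuration": 60000}]},
        ).json()
        for task in tasks:
            result = do_work(task)  # placeholder for the actual worker call
            # Completing the task resumes the process; no message correlation needed
            requests.post(
                f"{ENGINE}/external-task/{task['id']}/complete",
                json={"workerId": WORKER_ID,
                      "variables": {"callStatus": {"value": result, "type": "String"}}},
            )
        time.sleep(1)  # naive fixed polling interval

Because the engine only hands a task to the worker after the surrounding transaction has committed, the race between subscription and correlation disappears by construction.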

Is it possible to pause the django background tasks?

For example, I have a django background task like this:
notify_user(user.id, repeat=3600, repeat_until=datetime(2020, 12, 12))
It repeats every hour (3600 seconds) until the given datetime.
My question is:
Is it possible to pause/resume this task? (If resuming is not possible, restarting the task again would be fine too.)
Is there someone who is experienced with django background tasks?
There doesn't appear to be a documented way of achieving this, but you can always delete the task from the DB.
For example:
from datetime import datetime
from background_task.models import Task

task = notify_user(user.id, repeat=3600, repeat_until=datetime(2020, 12, 12))
instance = Task.objects.get(id=task.pk)
instance.delete()
Now just call the task again to restart it:
task = notify_user(user.id, repeat=3600, repeat_until=datetime(2020, 12, 12))
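Wrapped into helpers, a "pause"/"restart" pair could look like this (a sketch; django-background-tasks has no native pause, so restarting simply schedules a fresh task):

from datetime import datetime
from background_task.models import Task

def pause_task(task_handle):
    # Deleting the row stops all future repeats of this task
    Task.objects.filter(id=task_handle.pk).delete()

def restart_task(user_id, until):
    # Assumes the notify_user task above is importable; scheduling again
    # creates a brand-new task row, so the old run history is not restored
    return notify_user(user_id, repeat=3600, repeat_until=until)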

Camunda set Execution Variable

I'm trying to set a process variable in a Task Listener of a human task, using a Groovy script on the 'create' event type, in a Camunda BPMN workflow.
execution.setVariable('newUserType',"RMAOFF1");
But it is giving me an error saying "The task does not exist or the corresponding process instance could not be resumed successfully."
Any help most appreciated.
In a Task Listener you don't have an execution (DelegateExecution),
but you do have a task (DelegateTask delegateTask).
So your example becomes: task.setVariable('newUserType', "RMAOFF1")
By the way, this would also work in an expression: ${task.setVariable('newUserType', "RMAOFF1")}
For more info see https://docs.camunda.org/manual/7.18/user-guide/process-applications/process-application-event-listeners/
The signature of a TaskListener is void notify(DelegateTask task). I guess you are accidentally using an ExecutionListener, but that does not have a "create" event.

Celery: dynamic queues at object level

I'm writing a django app for polls which uses celery to keep the voting system under control. Right now I have two queues, default and polls; the first one has concurrency set to 8 and the second one to 1.
$ celery multi start -A myproject.celery default polls -Q:default default -Q:polls polls -c:default 8 -c:polls 1
Celery routes:
CELERY_ROUTES = {
    'polls.tasks.option_add_vote': {
        'queue': 'polls',
    },
    'polls.tasks.option_subtract_vote': {
        'queue': 'polls',
    },
}
Task:
@app.task(bind=True)
def option_add_vote(self, pk):
    """
    Updates the given option and its poll, increasing the vote count by 1.
    """
    option = Option.objects.get(pk=pk)
    try:
        with transaction.atomic():
            option.vote_quantity += 1
            option.save()
            option.poll.total_votes += 1
            option.poll.save()
    except IntegrityError as exc:
        raise self.retry(exc=exc)
The option_add_vote task updates the vote count of the given option and its poll, adding 1 to the previous value. So, to avoid concurrency problems, I set the polls queue concurrency to 1. This allows the system to handle thousands of vote requests and complete them successfully.
The problem, as far as I can imagine, will be a bottleneck when the system grows.
So I was thinking about some kind of dynamic queues, where all vote requests for the options of a given poll are routed to a dedicated queue. I think this would make the system more reliable and faster.
What do you think? How can I do it?
EDIT1:
I got a new idea thanks to Paul and Plahcinski. I'm storing the votes as objects in their own model (a user-option relationship). When someone votes for an option, an object of this model is created, which lets me count how many votes an option has. This frees the system from the voting-concurrency problem, so tasks can be executed in parallel.
I'm thinking about using CELERYBEAT_SCHEDULE to cron a task that updates the poll options based on something like Vote.objects.filter(option=option).count(). Maybe I could execute it every hour, or do partial updates for the options that are getting new votes...
But how do I give clients updated options in real time?
As Plahcinski says, I can keep a cached value for my options in Redis (or any other memcached-like system?) and use it to store these values temporarily, serving the cached value to any new request.
How can I combine this with my standard Django model values? Could anyone give me some code references or hints?
Am I on the right track, or am I making mistakes?
What I would do is remove the incrementing from the database and move it to Redis, treating the database value as the cached copy of the Redis counter. Then have a celery beat task that writes recently incremented Redis keys back to your database (see the sketch below).
http://redis.io/commands/INCR
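A sketch of that idea, assuming a local Redis instance and the Option model from the question (the key naming and the flush task are illustrative):

import redis
from celery import shared_task
from django.db.models import F
from polls.models import Option

r = redis.Redis()  # assumes Redis on localhost

def add_vote(option_pk):
    # INCR is atomic in Redis, so there is no database row contention
    r.incr(f"poll:option:{option_pk}:votes")

@shared_task
def flush_votes():
    # Run periodically from celery beat, e.g.:
    # CELERYBEAT_SCHEDULE = {'flush-votes': {'task': 'polls.tasks.flush_votes', 'schedule': 60.0}}
    for key in r.scan_iter("poll:option:*:votes"):
        pk = int(key.split(b":")[2])
        delta = int(r.getset(key, 0))  # read and reset in one atomic step
        if delta:
            Option.objects.filter(pk=pk).update(
                vote_quantity=F("vote_quantity") + delta)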
What about just having a simple model that stores -1/+1 vote integers, plus a celery task that reconciles those rows with the FK'd object in an atomic transaction and updates it? A sketch follows.
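Here VoteDelta is a hypothetical model holding an option foreign key and a delta of +1/-1:

from celery import shared_task
from django.db import transaction
from django.db.models import F, Sum
from polls.models import Option, VoteDelta  # VoteDelta is hypothetical

@shared_task
def reconcile_votes():
    # Snapshot the current rows so votes cast while we run are kept for the next pass
    ids = list(VoteDelta.objects.values_list("id", flat=True))
    totals = (VoteDelta.objects.filter(id__in=ids)
              .values("option").annotate(total=Sum("delta")))
    with transaction.atomic():
        for row in totals:
            Option.objects.filter(pk=row["option"]).update(
                vote_quantity=F("vote_quantity") + row["total"])
        VoteDelta.objects.filter(id__in=ids).delete()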

Is there any Qt signal for an SQL database change?

I wrote a C++ program using Qt. Some variables inside my algorithm are changed outside of my program, in a web page. Every time the user changes the variable values in the web page, I modify a pre-created SQL database.
Now I want my code to pick up the new variable values at run time without stopping the program. There are two options:
Every n seconds, check the database and retrieve the variable values -> this is not good, since I would have to check whether the database content has changed every n seconds (it might stay unchanged for years, and I don't want to poll for changes).
Every time the database is changed, my Qt program emits a signal, so by catching this signal I can refresh the variable values. This seems like the optimal solution, and I want to write the code for this part.
The C++ part of my code is:
void UpdateDatabase()
{
    QSqlDatabase db = QSqlDatabase::addDatabase("QMYSQL");
    db.setHostName("localhost");
    db.setDatabaseName("Mydataset");
    db.setUserName("user");
    db.setPassword("pass");
    if (!db.open())
    {
        qDebug() << "Error is: " << db.lastError();
        qFatal("Failed To Connect");
    }
    QSqlQuery qry;
    qry.exec("SELECT * from tblsystemoptions");
    QSqlRecord rec = qry.record();
    int cols = rec.count();
    qry.next();
    MCH = qry.value(0).toString(); // some global variables used in other functions
    MCh = qry.value(1).toString();
    // ... this goes on ...
}
QSqlDriver supports notifications, which emit a signal when a specific event has occurred. To subscribe to an event, just use QSqlDriver::subscribeToNotification(const QString &name). When an event that you're subscribing to is posted by the database, the driver will emit the notification() signal and your application can take appropriate action.
db.driver()->subscribeToNotification("someEventId");
The message can be posted automatically from a trigger or a stored procedure. The message is very lightweight: nothing more than a string containing the name of the event that occurred.
You can connect the notification(const QString &) signal to your slot like this:
QObject::connect(db.driver(), SIGNAL(notification(const QString&)), this, SLOT(refreshView()));
I should note that this feature is not supported by MySQL as it does not have an event posting mechanism.
There is no such thing. The Qt event loop and the database are not connected in any way. You only fetch/alter/delete/insert data, and that's it. Option 1 is the one you have to go with. There are ways to use a TRIGGER on the server side to launch an external script, but this would not help you very much.