Camunda process versioning using "Process Instance Modification": migrating call activities

In our project we have a problem with Camunda process versioning.
We have read some guides and decided to use Process Instance Modification over Process Instance Migration, due to limitations of the latter approach.
As we see it, Process Instance Migration does not allow us to change existing variables (based on their previous values and the wait state the instance is at). Sometimes we only want to change variables because we changed delegate execution code and we know the business model (BPMN) hasn't changed.
So currently I am trying to develop a migration framework based on Process Instance Modification.
And the first issue I encountered is:
How do I properly migrate a process instance that is currently waiting at a wait state inside a Call Activity?
For example, I have process:
I start it. One execution stays at the wait state before the Message 1 event. Another gets into the Call Activity:
And waits there before Message 3 and Message 4.
Using Process Instance Modification, I stop the executions in the Call Activity and then start them again (changing the variables and the BPMN model to the latest version). How can I attach them to the parent process instance which called the Call Activity in the first place, so that control returns to that parent process instance and processing proceeds (executing Task 6)? And what if I want to migrate the parent process as well?
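For reference, the operation such a framework would be built on is Camunda 7's process instance modification API. A minimal REST sketch of stopping a token at a wait state and restarting it with changed variables (the instance id, activity id, and variable name below are placeholders, and the default /engine-rest base path is assumed):

```python
import requests

CAMUNDA = "http://localhost:8080/engine-rest"  # assumed engine location
instance_id = "sub-instance-id"                # placeholder: the called (sub) instance

# Cancel the token waiting before Message 3 and restart it at the same
# activity with updated variables. Activity and variable ids are made up.
requests.post(
    f"{CAMUNDA}/process-instance/{instance_id}/modification",
    json={
        "instructions": [
            {"type": "cancel", "activityId": "message3CatchEvent"},
            {
                "type": "startBeforeActivity",
                "activityId": "message3CatchEvent",
                "variables": {
                    "someVar": {"value": "newValue", "type": "String"},
                },
            },
        ]
    },
).raise_for_status()
```

One caveat worth noting: modification only rearranges tokens within the instance's current definition version, so moving the instance onto the latest BPMN model is exactly the part a custom framework has to add on top (e.g. by cancelling and re-starting in the new definition).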

Related

How to update MultiInstance User Task to add/delete Tasks?

We have a business scenario where we would like to have the ability to INCREASE or DELETE tasks within a multi-instance context.
I’ve managed to successfully create a multi-instance User Task based on a collection workPartnerList.
If a process is working on a multi-instance stage of the workflow, how can I increase or decrease the multi-instance state based on the count/values of workPartnerList, which can grow or shrink based on updates from the API call? (We need to do this prior to the overall task completion.)
I assume you are referring to a parallel multi-instance task.
https://docs.camunda.org/manual/latest/reference/bpmn20/tasks/task-markers/
Another way to define the number of instances is to specify the name of a process variable which is a collection, using the loopDataInputRef child element. For each item in the collection, an instance will be created.
The creation of the instances happens at the point in time when the execution reaches the parallel multi-instance activity. The number of instances created is determined by the size of the collection at this specific point in time. (A BPMN2 process engine will not automatically keep the task instances in sync with the collection.)
To "delete" task instance you can complete or cancel them (e.g. via an attached boundary event) or us the completion condition.
A multi-instance activity ends when all instances are finished. However, it is possible to specify an expression that is evaluated every time one instance ends. When this expression evaluates to true, all remaining instances are destroyed and the multi-instance activity ends, continuing the process. Such an expression must be defined in the completionCondition child element.
To dynamically add task instances to a running process instance you can, for instance, use event subprocesses or attach a boundary event to the task.
https://docs.camunda.org/manual/7.13/reference/bpmn20/events/message-events/#message-boundary-event
Boundary events are catching events that are attached to an activity. This means that while the activity is running, the message boundary event is listening for a named message. When this is caught, two things might happen, depending on the configuration of the boundary event:
Interrupting boundary event: The activity is interrupted and the sequence flow going out of the event is followed.
Non-interrupting boundary event: One token stays in the activity and an additional token is created which follows the sequence flow going out of the event.
If you are willing to approach this on the API level, then the TaskService allows you to create a new task (with a user-defined task id).
Example:
https://github.com/rob2universe/cam-multi-instance/blob/25f524be6a112deb1b4ae3bb4f28a35422e428e0/src/test/java/org/camunda/bpm/example/ProcessJUnitTest.java#L79
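Over REST, the equivalent is the create-task endpoint; a small sketch (id, name, and assignee are placeholder values):

```python
import requests

CAMUNDA = "http://localhost:8080/engine-rest"  # assumed engine location

# Create a standalone task with a user-defined id (placeholder values).
requests.post(
    f"{CAMUNDA}/task/create",
    json={
        "id": "work-partner-42",
        "name": "Review work partner",
        "assignee": "demo",
    },
).raise_for_status()
```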
The modification API would even allow you to add additional instances to the already created set of task instances - see: https://docs.camunda.org/manual/latest/user-guide/process-engine/process-instance-modification/#modify-multi-instance-activity-instances
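A sketch of what that could look like over REST, assuming a parallel multi-instance user task whose inner activity id is miUserTask (all ids and the variable name are placeholders):

```python
import requests

CAMUNDA = "http://localhost:8080/engine-rest"  # assumed engine location

def add_mi_instance(process_instance_id: str, partner: str) -> None:
    """Start one more instance of the multi-instance body by starting
    before its inner activity; the new instance gets its own local variable."""
    requests.post(
        f"{CAMUNDA}/process-instance/{process_instance_id}/modification",
        json={
            "instructions": [
                {
                    "type": "startBeforeActivity",
                    "activityId": "miUserTask",  # placeholder inner activity id
                    "variables": {
                        "workPartner": {"value": partner, "type": "String", "local": True},
                    },
                }
            ]
        },
    ).raise_for_status()
```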

Deleting a Google Cloud Task does not stop task running

I have a task queue which users can push tasks onto; only one task can run at a time, enforced by the concurrency setting for the queue. In some cases (e.g. a long-running task) they may wish to cancel a running task in order to free up the queue to process the next task.
To achieve this I have been running the task queue as a Flask application; should a user wish to cancel a task, I call the delete_task method of the Python client library for the given queue and task.
However, I am seeing that the underlying task continues to be processed even after it has been deleted. I have been trying to find documentation of how Cloud Tasks handles a task being deleted, but haven't found anything concrete.
I was hoping I'd be able to listen for a signal of some sort in order to gracefully shut down the process if a deletion is received, or that the underlying process would be killed if the parent task is deleted.
Has anyone worked with the Cloud Tasks API before? Is it correct to assume that a deleted task will clean up any processes that are running?
I don't see how a worker would be able to find out that the task it is working on has been deleted.
In the eyes of the worker, a task is an incoming HTTP request. I don't know how the queue could tell that specific process to stop. I'm fairly certain that "deleting" a task merely removes it from the queue.
You'd have to build a custom 'cancel' function that would be able to reach out to this worker.
Or this worker would have to periodically check with the Queue to see if its task still exists.
https://cloud.google.com/tasks/docs/reference/rest/v2/projects.locations.queues.tasks/get
https://googleapis.dev/python/cloudtasks/latest/gapic/v2/api.html#google.cloud.tasks_v2.CloudTasksClient.get_task
I'm not actually sure what the queue will return if you call 'get task' on a deleted task, since I don't see a 'status' property for tasks. Maybe it will return an error like 'task does not exist'.
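Assuming the usual NOT_FOUND behavior for a missing resource, a worker-side existence check might look like this (a sketch; assembling task_name from the queue path and request headers is omitted):

```python
from google.api_core.exceptions import NotFound
from google.cloud import tasks_v2

client = tasks_v2.CloudTasksClient()

def task_still_exists(task_name: str) -> bool:
    """Poll the queue; get_task raises NotFound once the task is deleted."""
    try:
        client.get_task(name=task_name)
        return True
    except NotFound:
        return False

# Inside the worker's processing loop, check periodically and shut down
# gracefully when the task has been deleted:
#   if not task_still_exists(task_name):
#       cleanup_and_exit()
```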

How to send TaskSuccess to the right activity with AWS Step Functions?

So, I'm working on a state machine. It can have 20 or 30 executions running at the same time, with different parameters.
One of its states is an activity worker (it needs to wait for input from another Step Function execution, started from one of its states through a Lambda function, since you can't directly start a new execution from within a state machine).
I know how to send a "Task Success" for an activity. But how can I make sure it's sent to the right execution?
Using a pub/sub service such as MQTT would be useful here.
Generate a UUID in the lambda that spawns the new execution.
Pass the UUID to the new execution and return it to the activity worker.
The new execution writes the UUID and result to the queue once it's done.
The activity worker reads from the queue and uses the UUID to find the right message.
Depending on the design of your state machine, you may also be able to pass the current activity's taskToken as an input parameter when your activity creates a new StepFunction execution. Then the last state in the sub-execution can call Task Success for the state in the parent execution using the taskToken passed in, returning any result data as the results for that state. (Don't forget the last state would also have to call Task Success for itself as well.)
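As a sketch of that taskToken approach with boto3 (the input field parentTaskToken and the Lambda handler are hypothetical; the token itself comes from GetActivityTask, or from $$.Task.Token in a .waitForTaskToken state, and is forwarded in the StartExecution input):

```python
import json

import boto3

sfn = boto3.client("stepfunctions")

# Final state of the sub-execution: complete the parent's waiting state
# using the token that was forwarded in the execution input.
def handler(event, context):  # hypothetical Lambda handler
    sfn.send_task_success(
        taskToken=event["parentTaskToken"],  # token of the waiting parent state
        output=json.dumps({"result": event.get("result")}),
    )
```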

How can I force ColdFusion to stop rendering a page until a process invoked with <cfexecute> completes?

I'm working on a script that creates a MySQL dump via <cfexecute> and then FTPs the SQL script to another server. I've resorted to checking once per second to see if the filesize has changed, and if it has not changed within the past five seconds I assume it has completed.
This is fine for the current application, but eventually I would like to be able to import the SQL script on the second server and provide some sort of notification that it has completed.
Is there some way to track the status of a running process?
If not, is there a way to accomplish a full DB export and import via ColdFusion alone?
Actually, you may not realize it, but when you call <cfexecute> without passing a timeout attribute, it defaults to a timeout of 0. And if you read the docs on <cfexecute> you'd see:
If the value is 0: ColdFusion starts a process and returns immediately. ColdFusion may return control to the calling page before any program output displays. To ensure that program output displays, set the value to 2 or higher.
So I would suggest passing a higher value for timeout which will cause ColdFusion to wait for mysqldump to complete before moving on.
Reference
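For illustration, a sketch of the suggested fix in CFML (paths, credentials, and the 300-second limit are placeholders):

```
<!--- Block until mysqldump exits, or for at most 300 seconds. --->
<cfexecute name="C:\mysql\bin\mysqldump.exe"
           arguments="-u dbuser -pdbpass mydb"
           outputFile="C:\dumps\mydb.sql"
           timeout="300" />
```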
Check out Event Gateways[1] for one way to deal with asynchronous operations. There's a Directory Watcher gateway that comes with CF as an example.[2]
Barring that, create some sort of batch processing facility using CF Scheduled Tasks. Add the job to a database table and have a scheduled task periodically pull jobs out of the table and execute them, reporting on the result. A second scheduled task can detect that the first completed and carry out the next step of the process.
[1] http://help.adobe.com/en_US/ColdFusion/9.0/CFMLRef/WSc3ff6d0ea77859461172e0811cbec214e3-7fa7.html
[2] http://help.adobe.com/en_US/ColdFusion/9.0/Developing/WSc3ff6d0ea77859461172e0811cbec22c24-77f7.html

implementing a timer in a django app

In my Django app, I need to implement this "timer-based" functionality:
User creates some jobs and for each one defines when it will take place (in the same unit the timer works in, probably seconds).
User starts the timer.
User may pause and resume the timer whenever he wants.
A job is executed when its time is due.
This does not fit a typical cron scenario as time of execution is tied to a timer that the user can start, pause and resume.
What is the preferred way of doing this?
This isn't a Django question; it is a system architecture problem. HTTP is stateless, so there is no built-in notion of a timer.
My suggestion is to use a message queue such as RabbitMQ and use Carrot to interface with it. You can put the jobs on the queue, then create a separate consumer daemon which processes jobs from the queue; the consumer holds the logic about when to process. A rough sketch of that shape follows below.
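Carrot has since been superseded, so purely for illustration here is the same queue-plus-consumer shape sketched with pika (the queue name and job payload are made up):

```python
import json
import time

import pika

conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = conn.channel()
channel.queue_declare(queue="jobs", durable=True)

# Web view side: enqueue a job together with its delay.
channel.basic_publish(
    exchange="",
    routing_key="jobs",
    body=json.dumps({"job_id": 1, "run_in_seconds": 30}),
)

# Consumer daemon side: the scheduling logic lives here, not in the web app.
def on_message(ch, method, properties, body):
    job = json.loads(body)
    time.sleep(job["run_in_seconds"])  # stand-in for real pause/resume logic
    print(f"executing job {job['job_id']}")
    ch.basic_ack(delivery_tag=method.delivery_tag)

channel.basic_consume(queue="jobs", on_message_callback=on_message)
channel.start_consuming()
```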
If that is too complex a system, perhaps look at implementing the timer in JS and having it call a URL mapped to a view that processes a unit of work. The JS would be the timer.
Have a look at Pinax, especially the notifications app.
Once created, notifications are pushed to the DB (the queue) and processed by a cron-jobbed email sender (the consumer).
In this scenario you won't be able to stop a job once it is fired.
That could be managed by some (Ajax) views that call a system process...
Edit: instead of cron jobs you could use a Twisted-based consumer:
write jobs with their time information to the DB
send a request for consuming (or resuming, pausing, ...) to the Twisted server via a socket
do the rest in Twisted
You're going to end up with separate (from the web server) processes to monitor the queue and execute jobs. Consider how you would build that without Django, using command-line tools to drive it. Use Django models to access the database.
When you have that working, layer a web-based interface (using full Django) on top to manipulate the queue and report on job status.
I think that if you approach it this way the problem becomes much easier.
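A rough sketch of that split, assuming a hypothetical Job model with due_at and done fields and an execute() method:

```python
# jobs/management/commands/run_jobs.py -- hypothetical management command
import time

from django.core.management.base import BaseCommand
from django.utils import timezone

from jobs.models import Job  # assumed model: due_at, done, execute()

class Command(BaseCommand):
    help = "Run outside the web server; poll the queue and execute due jobs."

    def handle(self, *args, **options):
        while True:
            for job in Job.objects.filter(done=False, due_at__lte=timezone.now()):
                job.execute()      # assumed per-job action
                job.done = True
                job.save()
            time.sleep(1)          # polling interval
```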
I used probably the simplest (crudest is more appropriate, I'm afraid) approach possible: 1. wrote a model holding the current position and state of the counter (active, paused, etc.); 2. a Django job that increments the counter if its state is active; 3. a cron entry that executes the job every minute.
Thanks everyone for the answers.
You can always use a client-side jQuery timer, but remember to initialize the timer with a value passed from your backend application, and also make sure that the end user can't edit the time (e.g. by inspecting the page).
So store the timer start time (the timer's initial value) and the timer end time or pause time in the backend (the DB itself).
Monitor the duration in the backend and trigger the job (in your case).
Hope this is clear.