How to update MultiInstance User Task to add/delete Tasks? - camunda

We have a business scenario where we would like to have the ability to INCREASE or DELETE tasks within a multi-instance context.
I’ve managed to successfully create a multi-instance User Task based on a collection workPartnerList.
If a process instance is at a multi-instance stage of the workflow, how can I increase or decrease the number of instances based on the count/values of workPartnerList, which can grow or shrink based on updates from the API call? (We need to do this prior to the overall task completion.)

I assume you are referring to a parallel multi-instance task.
https://docs.camunda.org/manual/latest/reference/bpmn20/tasks/task-markers/
Another way to define the number of instances is to specify the name of a process variable which is a collection using the loopDataInputRef child element. For each item in the collection, an instance will be created.
The creation of the instances happens at the point in time when the execution reaches the parallel multi-instance activity. The number of instances created is determined by the size of the collection at this specific point in time. (A BPMN2 process engine will not automatically keep the task instances in sync with the collection.)
To "delete" task instance you can complete or cancel them (e.g. via an attached boundary event) or us the completion condition.
A multi-instance activity ends when all instances are finished. However, it is possible to specify an expression that is evaluated every time one instance ends. When this expression evaluates to true, all remaining instances are destroyed and the multi-instance activity ends, continuing the process. Such an expression must be defined in the completionCondition child element.
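For illustration, here is a minimal Java sketch of both options, assuming the engine's taskService and runtimeService (org.camunda.bpm.engine.TaskService / RuntimeService) are available and that the multi-instance user task has the made-up activity id "workPartnerTask":

    // Complete one of the multi-instance user tasks via the TaskService
    Task task = taskService.createTaskQuery()
        .processInstanceId(processInstanceId)
        .taskDefinitionKey("workPartnerTask")    // assumed activity id
        .listPage(0, 1).get(0);
    taskService.complete(task.getId());

    // Or cancel the remaining instances explicitly via process instance modification
    runtimeService.createProcessInstanceModification(processInstanceId)
        .cancelAllForActivity("workPartnerTask")
        .execute();

After each completed instance the engine evaluates the completionCondition, so completing instances one by one can also end the whole multi-instance body early.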
To add additional task instances to a running process instance dynamically you can, for instance, use event subprocesses or attach a boundary event to the task (a correlation sketch follows after the quoted docs below).
https://docs.camunda.org/manual/7.13/reference/bpmn20/events/message-events/#message-boundary-event
Boundary events are catching events that are attached to an activity. This means that while the activity is running, the message boundary event is listening for a named message. When this is caught, two things might happen, depending on the configuration of the boundary event:
Interrupting boundary event: The activity is interrupted and the sequence flow going out of the event is followed.
Non-interrupting boundary event: One token stays in the activity and an additional token is created which follows the sequence flow going out of the event.
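As a rough Java sketch of the boundary-event route, assuming you model a non-interrupting message boundary event with the made-up message name addWorkPartner:

    // Correlate a message to the running instance to trigger the
    // non-interrupting boundary event and spawn additional work
    runtimeService.createMessageCorrelation("addWorkPartner")   // assumed message name
        .processInstanceBusinessKey(businessKey)
        .setVariable("newWorkPartner", "partner-42")             // assumed payload
        .correlate();

The flow leaving that boundary event can then lead to a separate user task for the new entry, or back into the multi-instance activity, depending on how you model it.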
If you are willing to approach this at the API level then the TaskService allows you to create a new task (with a user-defined task id).
Example:
https://github.com/rob2universe/cam-multi-instance/blob/25f524be6a112deb1b4ae3bb4f28a35422e428e0/src/test/java/org/camunda/bpm/example/ProcessJUnitTest.java#L79
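A minimal sketch of that API-level approach (the task id, name and assignee below are made up for illustration):

    // Create a standalone task with a user-defined id and save it
    Task extraTask = taskService.newTask("workPartnerTask-4");   // assumed id
    extraTask.setName("Review work partner 4");
    extraTask.setAssignee("demo");
    taskService.saveTask(extraTask);

    // Later, complete it like any other task
    taskService.complete(extraTask.getId());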
The process instance modification API would even allow you to add additional instances to the already created set of task instances - see: https://docs.camunda.org/manual/latest/user-guide/process-engine/process-instance-modification/#modify-multi-instance-activity-instances
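For example, a hedged sketch of adding one more instance to a running multi-instance user task via process instance modification (the activity id "workPartnerTask" and the variable name are assumptions about your model):

    // Add one more instance of the multi-instance activity
    runtimeService.createProcessInstanceModification(processInstanceId)
        .startBeforeActivity("workPartnerTask")        // assumed multi-instance activity id
        .setVariable("workPartner", "partner-42")      // assumed element variable
        .execute();

Whether you start before the inner activity or before the multi-instance body affects the loop counters, so check the linked documentation for the exact target to use.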

Related

How to limit number of concurrent workflows running?

The title is pretty much the question. Is there some way to limit the number of concurrent workflows running at any given time?
Some background:
I'm using Eventarc to dispatch a workflow once a message has been sent to a Pub/Sub topic. The workflow will be used to start some long-running operation (LRO) but, for reasons I won't go into, I don't want more than 3 instances of this workflow running at a given time.
Is there some way to do this, primarily through some type of configuration rather than by using another compute resource?
There is no configuration setting that limits running processes specifically for sessions executed by a Workflow enabled for concurrent execution. The existing process limit applies to all sessions without differentiating between those from non-concurrent and concurrency-enabled Workflows.
Synchronization enables users to limit the parallel execution of certain workflows or templates within a workflow without having to restrict others.
Users can create multiple synchronization configurations in the ConfigMap that can be referred to from a workflow or template within a workflow. Alternatively, users can configure a mutex to prevent concurrent execution of templates or workflows using the same mutex.
Refer to this link for more information.
Summarizing your requirements:
Trigger workflow executions with Pub/Sub messages
Execute at most 3 workflow executions concurrently
Queue up waiting Pub/Sub messages
(Unspecified) Do you need messages processed in the order delivered?
There is no out-of-the-box capability to achieve this. For fun, below is a solution that doesn't need secondary compute (and is therefore still fully managed).
The key to making this work is to start a new execution for every message, but have that execution wait if needed. Workflows does not provide a global concurrency construct, so you'll need to use some external storage, such as Firestore. An algorithm like this could work:
Create a callback
Push the callback into a FIFO queue
Atomically increment a counter (which returns the new value)
If the returned value is <= 3, pop the last callback and call it
Wait on the callback
-- MAIN WORKFLOW HERE --
Atomically decrement the counter
If the returned value is < 3, pop the last callback and call it
To keep things cleaner, you could put the above steps in the triggered workflow and the main logic in a separate workflow that is called as needed.

Camunda process versioning using "Process Instance Modification" migrate call activities

In our project we have a problem with Camunda process versioning.
We have read some guides and decided to use Process Instance Modification over Process Instance Migration due to limitations of the latter approach.
As we understand it, Process Instance Migration does not allow us to change current variables (based on their previous value and the current wait point we are at); sometimes we only want to change variables because we changed delegate execution code and we know the business model (BPMN) hasn't changed.
So currently I am trying to develop a migration framework based on Process Instance Modification.
The first issue I encountered is:
How do I properly migrate a process instance which is currently waiting at a wait point inside a Call Activity?
For example, I have a process:
I start it. One execution stays at a wait point before the Message 1 event. Another gets into the Call Activity:
And stays there before Message 3 and Message 4.
Using Process Instance Modification, I stop the processes inside the Call Activity and then start them again (changing the variables and the BPMN model to the latest version). How can I attach them to the parent process instance which called the Call Activity in the first place, so that execution returns to the parent process instance and proceeds with processing (executing Task 6)? And what if I want to migrate the parent process as well?

What happens when we trigger the SWF Flows #Execute method multiple times?

We have a use case where we start a workflow (by invoking the #Execute method) and then schedule a timer for a subsequent activity. Now, this triggering of the workflow is based on an API call which can be invoked multiple times by a client.
I wanted to know how SWF Flow handles multiple invocations of the #Execute method.
Does it create multiple executions?
Or would there be multiple timer clocks scheduled for the same workflow execution?
SWF allows only one open workflow execution per ID. So if the workflow is still running, calling the Execute method again is going to return WorkflowExecutionAlreadyStartedFault.
Note that if a workflow has completed, a new workflow is going to start even for the same ID.
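As an illustration only (the client factory, client type and workflow ID below are hypothetical, and it is assumed here that the underlying StartWorkflowExecution fault surfaces as WorkflowExecutionAlreadyStartedException from the generated client):

    MyWorkflowClientExternal client =
        clientFactory.getClient("order-12345");    // hypothetical workflow ID
    try {
        client.execute(input);                      // the @Execute method
    } catch (WorkflowExecutionAlreadyStartedException e) {
        // An open execution with this ID already exists; the second call is
        // rejected instead of scheduling additional timer clocks.
    }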
Temporal.io, which is an open-source version of SWF, has an additional WorkflowIdReusePolicy which specifies what should be done if there are already completed workflows.

Routing an activity task to a specific worker in the SWF fleet

I have a fleet of multiple worker hosts polling for the following tasks of my SWF:
Activity 1: Perform some business logic to create a large file.
Activity 2: Wait for some time (a human approval, timer, etc.)
Activity 3: Transmit the file using some protocol (governed by input parameters of the SWF).
Activity 4: Clean-up the local-generated file.
The file generated in Step-1 needs to be used again in Step-3, and then eventually discarded at the end of the workflow.
The system would work fine if there is only 1 host polling for all tasks. However, when I have multiple workers, I cannot seem to ensure that task-1 and task-3 would end up on the same host.
I would like to avoid doing the following:
Uploading the file to a central repository (say S3) on step-1 and download it in step-3; or
Having a single activity for the task-1 and task-3.
I have the following questions:
Is it possible to control that subsequent activities be run on the same host as opposed to going to any random host in my fleet?
What are specific guidelines/best practices on re-using resources generated in different activities in a workflow?
Is it possible to control that subsequent activities be run on the same host as opposed to going to any random host in my fleet?
Yes, absolutely. The basic idea is that SWF task lists (the queues used to deliver activity tasks) are dynamic. So each host can have its own task list, and the workflow can specify a specific task list name when calling an activity. See the fileprocessing sample, which executes the download activity on any host from the pool, then converts the file and uploads the result on the same host as the first one.
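A rough Flow Framework sketch of the idea, from workflow code (the activity client, method name, file name and hostTaskList variable are assumptions, and this relies on the generated activity clients accepting ActivitySchedulingOptions as an extra argument; in practice the first activity would return the name of the host-specific task list it ran on):

    // Pin the follow-up activity to the task list of the host that created the file
    ActivitySchedulingOptions options = new ActivitySchedulingOptions();
    options.setTaskList(hostTaskList);                    // e.g. returned by the create-file activity
    activitiesClient.transmitFile(fileName, options);     // generated client overload taking options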
What are specific guidelines/best practices on re-using resources generated in different activities in a workflow?
The approach of caching the result in the worker process memory or on the local disk is considered the best practice. Sometimes using an external data store and fetching it each time also makes sense.

How to send TaskSuccess to the right activity with AWS Step Functions?

So, I'm working on a state machine. It can have up to 20 or 30 executions running at the same time, with different parameters.
One of its states is an activity worker (it needs to wait for some input from another Step Functions execution started from one of its states through a Lambda function, since you can't directly start a new execution from a state machine).
I know how to send a "Task Success" for an activity. But how can I make sure it's sent to the right execution?
Using a pub/sub service such as MQTT would be useful here.
Generate a UUID in the lambda that spawns the new execution.
Pass the UUID to the new execution and return it to the activity worker.
The new execution writes the UUID and result to the queue once it's done.
The activity worker reads from the queue and uses the UUID to find the right message.
Depending on the design of your state machine, you may also be able to pass the current activity's taskToken as an input parameter when your activity creates a new Step Functions execution. Then the last state in the sub-execution can call Task Success for the state in the parent execution using the taskToken passed in, returning any result data as the results for that state. (Don't forget the last state would also have to call Task Success for itself as well.)
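For reference, a minimal sketch of the sub-execution (or a worker acting on its behalf) reporting back with the AWS SDK for Java v2; the token variable and the output JSON here are placeholders:

    import software.amazon.awssdk.services.sfn.SfnClient;
    import software.amazon.awssdk.services.sfn.model.SendTaskSuccessRequest;

    try (SfnClient sfn = SfnClient.create()) {
        sfn.sendTaskSuccess(SendTaskSuccessRequest.builder()
                .taskToken(parentTaskToken)          // the token handed down from the parent execution
                .output("{\"status\":\"DONE\"}")     // placeholder result JSON for the waiting state
                .build());
    }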