Move comments from task to following task - Camunda

Camunda has the option to write a comment on a task. The problem is that the comments are lost once the task has been completed; they do not show up on the following task. Is there any way to carry the comments from task to task over a complete process instance?
Maybe it can be done via variables, if anybody knows a way to access them in the Camunda Modeler?

The follow-up task doesn't exist yet, so you cannot store comments on it.
What should work:
Register a taskListener on Task#complete, read the comments, and store them in a global process variable.
On the next task, use a taskListener on create to read the comment variable and write it to the (new) task's comments.
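A minimal Camunda 7 sketch of the two listeners (the class names and the carriedComments variable are made up; wire the listeners to the tasks in the modeler, one class per file in a real project):

import java.util.List;

import org.camunda.bpm.engine.TaskService;
import org.camunda.bpm.engine.delegate.DelegateTask;
import org.camunda.bpm.engine.delegate.TaskListener;
import org.camunda.bpm.engine.task.Comment;

// Attach on the "complete" event of the first task:
// copies the task's comments into a process variable.
class CarryCommentsOnComplete implements TaskListener {
  public void notify(DelegateTask delegateTask) {
    TaskService taskService =
        delegateTask.getProcessEngineServices().getTaskService();
    StringBuilder text = new StringBuilder();
    for (Comment comment : taskService.getTaskComments(delegateTask.getId())) {
      text.append(comment.getFullMessage()).append('\n');
    }
    delegateTask.setVariable("carriedComments", text.toString());
  }
}

// Attach on the "create" event of the following task:
// writes the variable back as a comment on the new task.
class CarryCommentsOnCreate implements TaskListener {
  public void notify(DelegateTask delegateTask) {
    Object carried = delegateTask.getVariable("carriedComments");
    if (carried != null) {
      delegateTask.getProcessEngineServices().getTaskService()
          .createComment(delegateTask.getId(),
              delegateTask.getProcessInstanceId(), carried.toString());
    }
  }
}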

Not sure how you are accessing the task information.
The task service has a method: getProcessInstanceComments
https://docs.camunda.org/javadoc/camunda-bpm-platform/7.15/org/camunda/bpm/engine/TaskService.html#getProcessInstanceComments-java.lang.String-
In an expression it could look like:
${taskService.getProcessInstanceComments(execution.processInstanceId).toString()}

Related

How can I pass a value through a process variable from the main flow to a subflow in Camunda

Colleagues,
Can you please advise me a bit on the following.
I cannot figure out how to pass a value through a process variable from the main flow to its subflow in Camunda. I am putting the value into a process variable in one task of the main flow via execution.setVariable("toolId", toolId);
where execution is an instance of DelegateExecution. I am trying to retrieve it in another task, in the subflow, via
Long toolId = (Long) execution.getVariable("toolId");
However, I am getting null.
By subflow I assume you mean a call activity (otherwise the data would be available).
A call activity references a technically independent process instance with its own data. Therefore you have to explicitly map the "in" data, which should be copied from the source (parent) to the target (sub process), and likewise the "out" data in the other direction.
Please see: https://docs.camunda.io/docs/components/modeler/bpmn/call-activities/#variable-mappings and https://docs.camunda.io/docs/components/concepts/variables/#inputoutput-variable-mappings
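Since the question uses DelegateExecution (Camunda 7), here is a minimal sketch of such a mapping with the Camunda 7 fluent model builder; the process, activity, and variable names are made up, and the same mapping can be declared as camunda:in/camunda:out elements directly in the BPMN XML:

import org.camunda.bpm.model.bpmn.Bpmn;
import org.camunda.bpm.model.bpmn.BpmnModelInstance;

public class CallActivityMapping {
  public static void main(String[] args) {
    // Parent process: "toolId" is copied into the called instance on start,
    // "result" is copied back to the parent when the sub process completes.
    BpmnModelInstance parent = Bpmn.createExecutableProcess("parentProcess")
        .startEvent()
        .callActivity("callSub")
          .calledElement("subProcess")
          .camundaIn("toolId", "toolId")   // parent -> sub process
          .camundaOut("result", "result")  // sub process -> parent
        .endEvent()
        .done();
    System.out.println(Bpmn.convertToString(parent));
  }
}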

Global variable alternative in an AWS Step Function execution

I'm running a workflow using a Step Function (with SAM). When I needed to send information between Lambdas I used events and everything was perfect! But now I need almost every Lambda in my workflow to have access to a constant received in the invocation input of the Step Function (it changes on every execution), like a global variable.
I know that I can solve it by returning it in every Lambda's output, but I think that is a very ugly solution :(
Is there any way to access the context of the execution and add data to it from a Lambda in the Step Function? Any other solution would be cool too.
Yes, see https://docs.aws.amazon.com/step-functions/latest/dg/input-output-resultpath.html#input-output-resultpath-append
You can keep the input of the state machine execution and combine it with the result of the state.
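For example (a sketch; the state name, ARN, and result path are placeholders), a ResultPath such as $.taskResult stores the Lambda's result under that key while the rest of the execution input passes through unchanged:

"CheckSensor": {
  "Type": "Task",
  "Resource": "arn:aws:lambda:REGION:ACCOUNT_ID:function:MyFunction",
  "ResultPath": "$.taskResult",
  "Next": "NextState"
}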
Going through the docs, I see that you can access the context object from each state in the state machine.
You can pass the information that you need to be global as the input to your state machine and then access the state machine input from the context object.
You can refer to the linked doc to see how to access the context object.
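For example (again a sketch with placeholder names), a state can copy the original execution input from the context object into its Lambda payload via Parameters; keys ending in .$ are evaluated as paths, and $$ refers to the context object:

"CheckSensor": {
  "Type": "Task",
  "Resource": "arn:aws:lambda:REGION:ACCOUNT_ID:function:MyFunction",
  "Parameters": {
    "globals.$": "$$.Execution.Input",
    "state.$": "$"
  },
  "Next": "NextState"
}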

Unassign an assigned process from a job object

I'm using AssignProcessToJobObject to kill all child processes when the parent dies.
However, under some circumstances I don't wish to kill some of them.
So I thought I could just unassign a process, however the documentation mentions nothing like that...
Any idea on how to do this ?
The documentation is quite clear, see Job Objects:
After a process is associated with a job, the association cannot be broken.

WinRT C++ task queue

I need to make a task queue in C++/CX but due to my limited experience I don't know how.
The purpose is:
- create the task in some thread with a lambda ("task1 = [] () {}")
- then add this task to the task queue; the task queue executes in another thread
- while a task is waiting in the queue, it does not execute
- each task executes only after the previous task has finished
As I understand it, when you use auto a = concurrency::create_task(lambda) it starts immediately. A delayed start of such a task needs a pointer to the previous task, but I can't get it, as my tasks are generated in separate threads.
So could anybody help me to solve this problem?
It seems like proper use of concurrency::task_group can solve my problem.
Also, a concurrency::task_handle doesn't execute on creation, so using it may solve my problem too, but it needs its own queue.

Celery tasks per Model Object. Cleanest way to track progress

I have distributed hardware sensor nodes that will be interrogated by Celery tasks. Each sensor node has an associated object holding recent readings and config data.
I never want more than one Celery task interrogating a single sensor node. But requests might come in to interrogate the node while it is still being worked on from a previous request.
I didn't see any example of this sort of task tracking in any of the Celery docs, but I assume it's a fairly common requirement.
My first thought was to just mark the model object at the beginning and end of the task with a task_in_progress-like flag.
Is there anything in the task instantiation that I can use to better realize my task tracking?
What you want is to lock a task on a given resource; there is a very nice example in the Celery documentation.
To summarize, the example suggests using a cache key to hold the lock: a task checks the lock key (you can generate an instance-specific cache key like "sensor-%(id)s") before starting, and executes only if the cache key is not set.
Example (made runnable with Django's cache, as in the linked recipe; cache.add is atomic, so the check and the lock happen in one step):

from celery import shared_task
from django.core.cache import cache

@shared_task
def check_sensor(sensor_id):
    lock_key = "sensor-%s" % sensor_id
    # cache.add only sets the key if it is not set yet (atomic check-and-set)
    if not cache.add(lock_key, "locked", 60 * 5):  # TTL, in case the worker dies
        return  # ... handle the lock: the sensor is already being interrogated ...
    try:
        pass  # ... use the sensor ...
    finally:
        cache.delete(lock_key)

You probably want to be really sure to do the unlock properly (hence the try/finally).
Here's the Celery example: http://ask.github.com/celery/cookbook/tasks.html#ensuring-a-task-is-only-executed-one-at-a-time