Externally cancelled subprocess in Camunda

The following subprocess in a BPMN flow in Camunda appears as a 'Cancelled activity instance'. It seems to have been externally cancelled, but the flow itself continues after the subprocess closes, even passing through the "External payment check finished" catch event, although that event has never been thrown.
My questions are:
What could be the reasons for an externally cancelled subprocess in Camunda?
Why, after the cancellation, does the flow continue through the catch event if the referenced signal has never been thrown?
Could this be related to the use of signals instead of messages? Maybe I'm completely wrong, but... can this flow be receiving signals from different instances?

The problem is the use of signals. A signal is a broadcast: it is received by all active instances. In my case, every time "external payment check" was thrown, every instance waiting for it received the event and closed the subtask.
I changed the signals to escalation events and everything works fine!
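For reference, a minimal sketch of the two event definitions in BPMN 2.0 XML (the ids, names, and refs here are illustrative, not taken from the original diagram): a signal is broadcast to every subscribed instance, while an escalation propagates only within the throwing process instance.

```xml
<!-- Broadcast: every active instance waiting on this signal receives it -->
<signal id="externalPaymentCheckFinished" name="External payment check finished"/>
<intermediateCatchEvent id="catchPayment">
  <signalEventDefinition signalRef="externalPaymentCheckFinished"/>
</intermediateCatchEvent>

<!-- Scoped: an escalation is handled within the throwing process instance -->
<escalation id="paymentCheckDone" escalationCode="PAYMENT_DONE"/>
<boundaryEvent id="catchEscalation" attachedToRef="paymentSubProcess">
  <escalationEventDefinition escalationRef="paymentCheckDone"/>
</boundaryEvent>
```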

Related

Camunda Parallel Gateway with compensating actions throws exceptions when an error occurs

I'm trying to create a simple Camunda BPM workflow with a parallel gateway and compensating actions like this:
All the Service Tasks are configured as external tasks that are executed by a C# program. This program calls the fetchAndLock method on the API to get a list of tasks to execute and then executes these tasks in parallel in the background. I'm experiencing some problems with this approach:
The lock acquired by the fetchAndLock method doesn't seem to do anything, and the workflow engine does not wait until all the fetched tasks are handled when one of the tasks is completed with a `bpmnError`. Instead, it immediately schedules the compensating actions for the tasks for which it has already received a `complete` call, and deletes the instances of all the other planned tasks without waiting for their results.
This results in the following problems:
The C# program continues to execute the unfinished tasks and when they complete it tries to call the complete method on the API, but that fails with a 404 error because the Camunda engine already deleted these task instances.
The compensating actions for these tasks are never called, which leaves the business process in an invalid state.
What am I doing wrong? I am not very familiar with BPMN so maybe there is a better way to design this process. Or is this a major bug in Camunda?
I assume that after the parallel gateway two errors occur, which triggers the event subprocess twice. You can try using a terminate end event in the event subprocess.
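A hedged sketch of that suggestion in BPMN 2.0 XML (ids are illustrative): an interrupting error event subprocess whose end event is a terminate event, so that a second error cannot trigger the handler again while compensation is running.

```xml
<subProcess id="errorHandler" triggeredByEvent="true">
  <startEvent id="catchError" isInterrupting="true">
    <errorEventDefinition/>
  </startEvent>
  <!-- compensation throw events would go here -->
  <sequenceFlow id="toTerminate" sourceRef="catchError" targetRef="stopAll"/>
  <endEvent id="stopAll">
    <terminateEventDefinition/>
  </endEvent>
</subProcess>
```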

How to wait for Akka Persistent Actor to persistAll?

I want to send a reply after I have persisted and updated the state of the actor using persistAll. Unfortunately, I have not found a callback or onSuccess handler for sending back a reply after the last event has been persisted.
This is a shortcoming of the API: there is no built-in way to react to all of the persistAll handlers completing. You will have to keep a counter or a set of completed persists yourself and trigger your logic only when the last persist completes.
As far as I remember this cannot be easily fixed because it would break binary and source compatibility.
In the "next generation" persistent actors (in Akka Typed) this works more as you would expect: the side effect you want to execute on successful persist runs only once, when all the events have been persisted.
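Framework details aside, the counter approach from the answer can be sketched in plain Java (the class and method names are mine, not part of the Akka API): in a real PersistentActor, markPersisted() would be called from the handler passed to persistAll, once per persisted event, and the reply is sent only when it returns true.

```java
import java.util.List;

// Minimal sketch of the "count completed persists" pattern.
class PersistAllTracker {
    private int pending;

    PersistAllTracker(List<?> events) {
        this.pending = events.size();
    }

    /** Call once per persisted event; returns true when the last one is done. */
    boolean markPersisted() {
        pending--;
        return pending == 0;
    }
}

class Demo {
    public static void main(String[] args) {
        PersistAllTracker tracker = new PersistAllTracker(List.of("e1", "e2", "e3"));
        System.out.println(tracker.markPersisted()); // e1 persisted: false, keep waiting
        System.out.println(tracker.markPersisted()); // e2 persisted: false, keep waiting
        System.out.println(tracker.markPersisted()); // e3 persisted: true, send the reply now
    }
}
```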

Event Hub: handling a receiver error for a single event failure

Is there a way for an Event Hub listener to retry only the single failed event, or do I have to fail the full batch?
The listener gets a list of events, and checkpointing moves the pointer forward for the full batch.
There is no good way to replay events using EventProcessorHost. You are expected to handle failures in your own code (the ProcessEvents implementation).
If there is a poison event in the system and the EventProcessorHost cannot proceed and needs to bail out, the only way to achieve this is to checkpoint up to a known good event and then unregister the EventProcessorHost or kill the process.
You can control exactly which event you checkpoint up to using the PartitionContext.Checkpoint(EventData) API.
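A hedged sketch of that idea with the EventProcessorHost plumbing replaced by plain Java (the poison-detection predicate and all names are mine): process the batch in order, stop at the first poison event, and checkpoint only the last good event so a restart resumes right before the poison one.

```java
import java.util.List;
import java.util.Optional;
import java.util.function.Predicate;

// Sketch: checkpoint only up to the last known-good event.
// In real code the checkpoint call would be PartitionContext.Checkpoint(EventData).
class BatchProcessor {
    /**
     * Processes events in order until a poison event is hit.
     * Returns the last successfully processed event to checkpoint,
     * or empty if the very first event is already poison.
     */
    static <E> Optional<E> processUntilPoison(List<E> batch, Predicate<E> isPoison) {
        E lastGood = null;
        for (E event : batch) {
            if (isPoison.test(event)) {
                break; // bail out; the caller unregisters the host or kills the process
            }
            // ... handle the event here ...
            lastGood = event;
        }
        return Optional.ofNullable(lastGood);
    }

    public static void main(String[] args) {
        List<String> batch = List.of("a", "b", "POISON", "c");
        Optional<String> checkpoint = processUntilPoison(batch, "POISON"::equals);
        System.out.println("checkpoint at: " + checkpoint.orElse("<none>")); // prints "checkpoint at: b"
    }
}
```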

How to save data at time an application gets killed via the Task Manager?

To prevent the application's data from being lost when the app is killed via "End Task" in Task Manager, I am trying to save data in the handler for the WM_CLOSE message.
The app saves its data successfully when I close it via Alt+F4 or the "close" button, but when I kill it via Task Manager the save does not complete properly; it seems the saving process is terminated midway.
I tried to debug it in the VS2015 IDE: the debugger hit a breakpoint in the WM_CLOSE handler, but could not go further, and pressing F10 to step over caused my app to close immediately.
Is there any way to delay the termination process until my application has finished saving its data?
I found the two links below, but they didn't help.
How to handle "End Task" from Windows Task Manager on a background process?
How does task manager kill my program?
The Task Manager might decide that your application isn't responding and terminate it. There is nothing you can do against that.
If you want to ensure that your data is always saved, you should save continuously in the background (with some heuristics, such as at least once every minute, preferably after no change has happened for a few seconds). It's more complex, but it has the advantage of working even when you never receive WM_CLOSE at all, for example in the case of power loss.
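The suggested heuristic — save at least once a minute, or sooner once edits have been quiet for a few seconds — can be sketched as a small pure decision function (the thresholds and names are illustrative, not from the original answer):

```java
// Sketch of the autosave heuristic: save when there are unsaved changes and either
//  - it has been at least MAX_INTERVAL_MS since the last save, or
//  - no change has happened for QUIET_MS.
class AutosavePolicy {
    static final long MAX_INTERVAL_MS = 60_000; // at least once a minute
    static final long QUIET_MS = 3_000;         // ...or after a few quiet seconds

    static boolean shouldSave(long nowMs, long lastSaveMs, long lastChangeMs, boolean dirty) {
        if (!dirty) return false;
        boolean overdue = nowMs - lastSaveMs >= MAX_INTERVAL_MS;
        boolean quiet = nowMs - lastChangeMs >= QUIET_MS;
        return overdue || quiet;
    }

    public static void main(String[] args) {
        // Changed 1s ago, saved 10s ago: still typing, don't save yet.
        System.out.println(shouldSave(10_000, 0, 9_000, true));   // false
        // Changed 5s ago (quiet period reached): save now.
        System.out.println(shouldSave(10_000, 0, 5_000, true));   // true
        // Nothing changed since the last save: nothing to do.
        System.out.println(shouldSave(120_000, 0, 0, false));     // false
    }
}
```

A background thread (or timer) would poll this function and write the data out whenever it returns true, independently of WM_CLOSE.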

How to kill / re-start a long-running task

Is there a way to kill / re-start a long-running task in AWS SWF? Sometimes some of our tasks run for a long time, and we would like to manually kill a certain task (either via the UI or programmatically) and re-start it if possible. How can this be achieved?
The console is one option for manually killing a workflow.
You can also set timeouts on the whole workflow execution or on individual activities. These can be set when you register your activity or when you start it (defaultTaskStartToCloseTimeoutSecond).
It's not clear what language you're using.
If you're using Java, you should look into Exponential Retry in the Flow Framework. This makes the SDK restart your activity if it fails.
A long-running activity is expected to heartbeat using RecordActivityTaskHeartbeat. If the activity process hangs or crashes, this leads to a timeout failure after the short heartbeat interval instead of after the long task-execution timeout.
The workflow code (decider) can always request activity cancellation through the RequestCancelActivityTask decision. The cancellation request is returned as output of the RecordActivityTaskHeartbeat call. The activity implementation should then cancel itself and report back to the service using the RespondActivityTaskCanceled API call.
See Error Handling section of AWS Flow Framework Developer Guide for the AWS Flow Framework way of cancelling activities.
Sometimes an activity implementation cannot support heartbeating and self-cancellation. The solution is to execute another "kill" activity that terminates the first activity's execution. For example, on Unix such a kill activity could issue "kill -9" for the process that implements the first one.
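The heartbeat-and-cancel protocol can be sketched with the SWF client hidden behind an interface (the interface and all names here are mine; in the real SDK the heartbeat call is RecordActivityTaskHeartbeat, whose response carries the cancellation flag, and the final report is RespondActivityTaskCanceled or RespondActivityTaskCompleted):

```java
import java.util.Iterator;
import java.util.List;

// Sketch: a long-running activity that heartbeats between work items
// and stops cleanly when the service reports a cancellation request.
interface HeartbeatClient {
    /** Records a heartbeat; returns true if cancellation was requested. */
    boolean heartbeat(String details);
}

class LongActivity {
    /** Processes items, heartbeating after each; returns true if it ran to completion. */
    static boolean run(List<String> workItems, HeartbeatClient client) {
        for (String item : workItems) {
            // ... process the item here ...
            if (client.heartbeat("done: " + item)) {
                // The real SDK would call RespondActivityTaskCanceled here.
                return false;
            }
        }
        return true; // the real SDK would call RespondActivityTaskCompleted here
    }

    public static void main(String[] args) {
        // Fake client that requests cancellation on the second heartbeat.
        Iterator<Boolean> answers = List.of(false, true, false).iterator();
        boolean completed = run(List.of("a", "b", "c"), details -> answers.next());
        System.out.println(completed ? "completed" : "cancelled"); // prints "cancelled"
    }
}
```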