I am using Camunda workflows to automate various processes. I have come across a scenario where the process is not moving past a service task. Usually we call the task/{taskid}/complete endpoint to complete a task, but since the process is stuck on a service task, I am not able to complete it that way. Can anybody help me find a way to complete the service task?
You are using a service task, which basically means "a machine should do something". The "normal" implementation is to provide code (a Java delegate or a connector endpoint) that the process engine calls to execute the task.
The alternative is to use the "external task" pattern. Think of external tasks as "user tasks for computers": the process waits, tells subscribed clients that a job needs to be done, and waits for their completion.
I suppose your process uses the second option? (You can check in the Modeler under "Implementation".) In that case, completion is done through the external task API; see the docs:
POST /external-task/{id}/complete
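For illustration, a minimal Python sketch of that flow against the REST API (the base URL, worker id, topic name, and variables are all assumptions):

```python
import requests

CAMUNDA = "http://localhost:8080/engine-rest"  # assumed engine REST base URL
WORKER_ID = "my-worker"                        # any identifier you choose

# Fetch and lock open external tasks for a topic.
tasks = requests.post(f"{CAMUNDA}/external-task/fetchAndLock", json={
    "workerId": WORKER_ID,
    "maxTasks": 1,
    "topics": [{"topicName": "my-topic", "lockDuration": 60000}],
}).json()

# Complete each locked task; the workerId must match the one used for locking.
for task in tasks:
    requests.post(f"{CAMUNDA}/external-task/{task['id']}/complete", json={
        "workerId": WORKER_ID,
        "variables": {"result": {"value": "ok", "type": "String"}},
    }).raise_for_status()
```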
If it is a connector, then checking the log will likely show that retries occurred and that the transaction was rolled back. After you address the underlying issue, the service task (email) should execute without being triggered explicitly, and the following user task (Approval) should be created.
Suppose we have a web service written in Python that does some time-consuming file processing. It definitely should not run inside the HTTP handler, as it takes up to 10 minutes to complete. Instead, the processing should be done asynchronously by some sort of workers, and it would also be nice to report the progress of the task execution to display to the user.
Would it be a good idea to setup Google Cloud Tasks with some Cloud Run or Cloud Functions service as a HTTP target to do this work?
Is Google Cloud Tasks suitable for handling this type of async task, where the user is sitting and waiting for the result?
If not, are there other options to achieve this with Google Cloud? (Or should I use custom task services for this purpose, for instance Celery and Redis?) It also seems that Cloud Run Jobs offers somewhat similar functionality, but there is no queue system to manage workers.
Google Cloud Tasks is simply a tool for queuing tasks. It is useful for ensuring that tasks do run, as it has built-in retry mechanisms. How you use it in the context of an application depends on many other details of the application itself, but it is definitely possible to use it for background/asynchronous processing of tasks.
We use Google Cloud Tasks to implement long-running processes that report their progress via datastore records. Some of these processes run longer than the standard 10-minute timeout and trigger a new Cloud Task to complete the processing. We then have a simple lightweight handler that retrieves the status record from the datastore and reports it to the user. We poll that handler from the client, but you could also implement something like WebSockets.
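A minimal sketch of that pattern, with Flask handlers and an in-memory dict standing in for the datastore (all names are assumptions):

```python
from flask import Flask, jsonify, request

app = Flask(__name__)
status_store = {}  # stand-in for Datastore/Firestore status records

@app.post("/process")  # HTTP target invoked by Cloud Tasks
def process():
    job_id = request.json["job_id"]
    status_store[job_id] = {"state": "running", "progress": 0}
    # ... long-running file processing, updating progress as it goes ...
    status_store[job_id] = {"state": "done", "progress": 100}
    return ("", 204)

@app.get("/status/<job_id>")  # lightweight handler polled by the client
def status(job_id):
    return jsonify(status_store.get(job_id, {"state": "unknown"}))
```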
GCP can handle asynchronous tasks; asynchronous execution is a well-established way to reduce request latency and make your application more responsive.
You can use Cloud Run or Cloud Functions for this type of task, because the time limit for HTTP task handlers in Cloud Tasks can be increased up to 30 minutes.
For more information refer to this document.
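As a sketch, creating an HTTP task with the dispatch deadline raised to 30 minutes could look like this (project, location, queue, and target URL are assumptions):

```python
from google.cloud import tasks_v2
from google.protobuf import duration_pb2

client = tasks_v2.CloudTasksClient()
parent = client.queue_path("my-project", "us-central1", "my-queue")  # assumed

task = tasks_v2.Task(
    http_request=tasks_v2.HttpRequest(
        http_method=tasks_v2.HttpMethod.POST,
        url="https://my-service.a.run.app/process",  # assumed Cloud Run handler
        headers={"Content-Type": "application/json"},
        body=b'{"job_id": "123"}',
    ),
    # Raise the handler timeout from the 10-minute default to the 30-minute max.
    dispatch_deadline=duration_pb2.Duration(seconds=30 * 60),
)
client.create_task(request={"parent": parent, "task": task})
```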
I'm trying to create a simple Camunda BPM workflow with a parallel gateway and compensating actions like this:
All the Service Tasks are configured as external tasks that are executed by a C# program. This program calls the fetchAndLock method on the API to get a list of tasks to execute and then executes these tasks in parallel in the background. I'm experiencing some problems with this approach:
The lock in the fetchAndLock method doesn't seem to do anything: whenever one of the tasks is completed with a `bpmnError`, the workflow engine does not wait until all the fetched tasks are handled. Instead, it immediately plans the execution of the compensating actions for the tasks for which it has already received a `complete` call and deletes the instances of all the other planned tasks without waiting for their results.
This results in the following problems:
The C# program continues to execute the unfinished tasks, and when they complete it tries to call the complete method on the API, but that fails with a 404 error because the Camunda engine has already deleted these task instances.
The compensating actions for these tasks are never called, which leaves the business process in an invalid state.
What am I doing wrong? I am not very familiar with BPMN so maybe there is a better way to design this process. Or is this a major bug in Camunda?
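For reference, the fetch-and-complete loop described in the question boils down to something like this (a Python sketch against the REST API; the topic name, URLs, and the work itself are placeholders):

```python
import requests

CAMUNDA = "http://localhost:8080/engine-rest"  # assumed
WORKER = "my-worker"

class BusinessError(Exception):
    def __init__(self, code):
        self.code = code

def do_work(task):
    ...  # the actual work happens here

# The lock only stops other workers from fetching the same task; it does not
# keep the execution alive if another branch of the parallel gateway fails.
tasks = requests.post(f"{CAMUNDA}/external-task/fetchAndLock", json={
    "workerId": WORKER,
    "maxTasks": 10,
    "topics": [{"topicName": "my-topic", "lockDuration": 60000}],
}).json()

for task in tasks:
    try:
        do_work(task)
        requests.post(f"{CAMUNDA}/external-task/{task['id']}/complete",
                      json={"workerId": WORKER})
    except BusinessError as e:
        requests.post(f"{CAMUNDA}/external-task/{task['id']}/bpmnError",
                      json={"workerId": WORKER, "errorCode": e.code})
```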
I assume that after the parallel gateway two errors are thrown, which triggers the event subprocess twice. You can try using a terminate end event in the event subprocess.
I need to implement a Camunda BPMN process where one of my tasks is a Java delegate task that calls an API.
The API it calls is asynchronous, which is why the BPMN flow moves to the next task right after calling it. What I want instead is for the flow to stop after calling the API and resume only when a callback arrives through some API at the Camunda server (hosted as a Spring Boot app).
What would be the best way to achieve this scenario?
Options for asynchronous communication are:
A send task/event followed by a receive task/event (a correlation sketch follows the links below)
https://docs.camunda.org/manual/latest/reference/bpmn20/tasks/send-task/
https://docs.camunda.org/manual/latest/reference/bpmn20/tasks/receive-task/
https://docs.camunda.org/manual/latest/reference/bpmn20/events/message-events/
A service task of implementation type "external"
https://docs.camunda.org/manual/latest/user-guide/process-engine/external-tasks/
Advanced: Implement asynchronous service invocation using a Signallable Activity Behavior
https://github.com/camunda/camunda-bpm-examples/tree/master/servicetask/service-invocation-asynchronous
From this blog post, which provides a detailed explanation:
https://blog.camunda.com/post/2013/11/bpmn-service-synchronous-asynchronous/
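For the first option, the callback endpoint only needs to correlate a message to the waiting receive task, which is a single REST call (message name, business key, and variables are assumptions); inside the Spring Boot app itself you could equivalently use RuntimeService#createMessageCorrelation:

```python
import requests

CAMUNDA = "http://localhost:8080/engine-rest"  # assumed

# Called from the external system's callback; wakes up the process instance
# waiting in the receive task for this message.
requests.post(f"{CAMUNDA}/message", json={
    "messageName": "ApiCallbackReceived",  # must match the message in the model
    "businessKey": "order-4711",           # identifies the waiting instance
    "processVariables": {
        "callbackResult": {"value": "ok", "type": "String"},
    },
}).raise_for_status()
```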
You can also halt the execution for a specified amount of time (for example with a timer event) while waiting for the response.
I'm trying to implement a jruby SWF activity worker using AWS SDK v2.
I cannot use the aws-flow-ruby framework since it's not compatible with JRuby (it relies on forking), so I wrote a worker that uses threading.
https://github.com/djpate/jflow if people are interested.
Anyway, the framework implements retries, and it seems that it actually schedules the same activity later if an activity fails.
I have searched everywhere in the AWS docs and cannot find how to send that signal back to SWF using the SDK: http://docs.aws.amazon.com/sdkforruby/api/Aws/SWF/Client.html
Anyone know where I should look?
From the question, I believe you are somewhat confused about what SWF is / how it works.
Activities don't run and are not retried in isolation. Everything happens in the context of a workflow. The workflow definition tells you when to retry and how to behave if activities fail or time out, etc.
The worker that processes the workflow definition and schedules the next thing that needs to happen is referred to as a decider. (You will see decider and workflow worker used interchangeably.) It's called a decider because, based on the current state, it decides what the next activity to be scheduled is. The decider normally takes the workflow history as input when making this decision.
In Flow, for example, the retry is encoded in the workflow logic: basically, if the activity fails, you can just schedule it again.
So, to finally answer your question: if your target is only to implement the activity workers, you don't need to implement any retry logic, as that happens at the decider level. You should make sure that the activities are compatible with the decider (the history and the input/output conventions need to match).
If your target is to implement your own framework on top of SWF you need to actually do the hard work needed to make the decider work.
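To make that division of labor concrete, here is a rough sketch of a decider that re-schedules a failed activity (boto3 used for illustration; domain, task lists, and activity names are assumptions):

```python
import boto3

swf = boto3.client("swf")

# Poll for a decision task; SWF hands the decider the workflow history.
decision_task = swf.poll_for_decision_task(
    domain="my-domain", taskList={"name": "my-decider-tasklist"})

decisions = []
events = decision_task.get("events", [])

# Retry logic lives here, not in the activity worker: if the last event is a
# failed activity, simply schedule that activity again.
if events and events[-1]["eventType"] == "ActivityTaskFailed":
    decisions.append({
        "decisionType": "ScheduleActivityTask",
        "scheduleActivityTaskDecisionAttributes": {
            "activityType": {"name": "ProcessFile", "version": "1"},
            "activityId": "process-file-retry-1",
            "taskList": {"name": "my-worker-tasklist"},
        },
    })

swf.respond_decision_task_completed(
    taskToken=decision_task["taskToken"], decisions=decisions)
```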
I am using Amazon SWF service to automate some recurring tasks.
I am trying to use signals to execute some commands on remote machines. After the commands are finished executing, I'd like to send a signal back to the workflow to indicate success or failure.
The question is: how can I find the workflow execution ID programmatically? The remote machines need it in order to send a signal.
Thanks
Per http://docs.aws.amazon.com/AWSRubySDK/latest/AWS/SimpleWorkflow/WorkflowExecution.html, shouldn't
your_workflow_execution_variable.run_id
get you exactly what you're looking for?
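And once the remote machine has the workflow ID and run ID, sending the signal back is a single call (boto3 used for illustration; all names are assumptions):

```python
import boto3

swf = boto3.client("swf")

# Report the result of the remote commands back to the waiting workflow.
swf.signal_workflow_execution(
    domain="my-domain",
    workflowId="my-workflow-id",  # e.g. workflow_execution.workflow_id
    runId="my-run-id",            # e.g. workflow_execution.run_id
    signalName="CommandsFinished",
    input='{"status": "success"}',
)
```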