How to re-trigger failed Camunda workflows in an effective way, such as in batch - camunda

Here are the scenario details:
I have a Camunda workflow that is triggered by service A via a REST API. The workflow uses an external task to implement the business logic that talks to service B.
If the workflow fails because of transient errors in service B, those workflow instances end up in a failed state. If there are only one or two such requests, they can be re-triggered manually.
But if the failed workflows number in the tens or hundreds, doing this manually becomes cumbersome, so I am looking for options to address this.
Are there any better ways to re-trigger failed Camunda workflows, either using the Camunda UI, the REST API, or any other options?
Details of the Camunda setup
- Community Edition, container image tag: "7.11.0"

Note: these kinds of batch operations are a built-in feature of the Enterprise Edition, so if this is a recurring use case in your business, consider upgrading to Enterprise.
That being said, you can rebuild this feature yourself using either a script against the REST API or the Java API. Basically, you create a job query that looks for failed instances and increase their number of retries.
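As a minimal sketch of the script/REST variant: the Camunda 7 REST API lets you query external tasks whose retries are exhausted (these are the ones surfaced as failed incidents) and set their retries back to a positive value so the engine picks them up again. The base URL and retry count below are assumptions for illustration; adjust them to your deployment.

```python
# Sketch: bulk-retry failed external tasks via the Camunda 7 REST API.
# BASE_URL is an assumption -- point it at your engine-rest endpoint.
import json
import urllib.request

BASE_URL = "http://localhost:8080/engine-rest"

def retry_payload(retries=3):
    """Request body for PUT /external-task/{id}/retries."""
    return {"retries": retries}

def retry_failed_external_tasks(retries=3):
    # External tasks with no retries left are the "failed" ones.
    with urllib.request.urlopen(BASE_URL + "/external-task?noRetriesLeft=true") as resp:
        failed = json.load(resp)
    for task in failed:
        req = urllib.request.Request(
            f"{BASE_URL}/external-task/{task['id']}/retries",
            data=json.dumps(retry_payload(retries)).encode(),
            headers={"Content-Type": "application/json"},
            method="PUT",
        )
        urllib.request.urlopen(req)
    return len(failed)
```

The same idea applies to job-executor jobs (service tasks without external task workers): query `GET /job?withException=true` and set retries via `PUT /job/{id}/retries`.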

Related

Does anyone know how to retrieve the list of tasks in Camunda 8, without using tasklist?

I am currently evaluating Camunda, having previously used version 7, which seems to be significantly more open source than version 8.
I am aware that Tasklist and an official tasklist-api exist; however, without a license they are only permitted for development and testing.
In Bernd Rücker's medium post How Open is Camunda Platform 8?, there is a section:
A path to production with source-available software
...
Additionally, you will need to find solutions to replace the tools you cannot use.
Tasklist
You will need to implement your own task management solution based on using workers subscribing to Zeebe as described in the docs. That also means you have to build your own persistence to allow task queries, as the Tasklist API is part of the Tasklist component and is not free for production use.
I have tried to search the zeebe source for any hints, but the only job/task related APIs I seem to be able to find are:
activateJobs
completeJob
I do not believe these can be the endpoints that Tasklist uses, as jobs have to be manually claimed through user interaction in the UI.
Does anyone know how this is achieved?
Your own Zeebe exporter allows you to export any events the engine produces, such as user task state updates. You could store this information in a data sink of your choice and implement an API on top of it.
See, e.g. https://camunda.com/blog/2019/05/exporter-part-1/
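To make the data-sink side of that idea concrete, here is a small sketch: persist task lifecycle events (whatever your exporter or worker emits) into a store you control, then query open tasks from it. The event shape here is an assumption for illustration, not the real Zeebe record format.

```python
# Sketch: a tiny task store fed by exported task events, queried for open tasks.
# The event dict shape (key/state/assignee) is an assumed simplification.
import sqlite3

def init(conn):
    conn.execute(
        "CREATE TABLE IF NOT EXISTS tasks (key TEXT PRIMARY KEY, state TEXT, assignee TEXT)"
    )

def on_task_event(conn, event):
    # Upsert: later lifecycle events overwrite the stored state for the task.
    conn.execute(
        "INSERT INTO tasks(key, state, assignee) VALUES (:key, :state, :assignee) "
        "ON CONFLICT(key) DO UPDATE SET state=:state, assignee=:assignee",
        event,
    )

def open_tasks(conn):
    """The query your own task-list API would serve."""
    return [row[0] for row in conn.execute("SELECT key FROM tasks WHERE state = 'CREATED'")]
```

A claim/assign operation in your own UI would then be just another update against this store plus a job activation against Zeebe.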

Data flow pipeline got stuck

Workflow failed. Causes: The Dataflow job appears to be stuck because no worker activity has been seen in the last 1h. Please check the worker logs in Stackdriver Logging. You can also get help with Cloud Dataflow at https://cloud.google.com/dataflow/support.
I am using service account with all required IAM roles
Generally, "The Dataflow job appears to be stuck because no worker activity has been seen in the last 1h" can be caused by an overly long worker setup phase. To solve this, you can try increasing worker resources (via the --machine_type parameter).
For example, installing several dependencies that require building wheels (pystan, fbprophet) can take more than an hour on the minimal machine (n1-standard-1, with 1 vCPU and 3.75 GB RAM). Using a more powerful instance (n1-standard-4, which has four times the resources) solves the problem.
You can debug this by looking at the worker startup logs in Cloud Logging. You are likely to see pip issues with installing dependencies.
Do you have any error logs showing that Dataflow Workers are crashing when trying to start?
If not, maybe the worker VMs start but cannot reach the Dataflow service, which is often a network connectivity issue.
Please note that by default Dataflow creates jobs using the network and subnetwork named "default" (check that they exist in your project); you can switch to a specific one with --subnetwork. See https://cloud.google.com/dataflow/docs/guides/specifying-networks for more information.
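Putting both suggestions together, a hedged sketch of the Dataflow-specific flags for a Python Beam pipeline (the machine type and subnetwork values are examples, not recommendations):

```python
# Sketch: build the Dataflow flags addressing slow worker startup and networking.
def dataflow_flags(machine_type="n1-standard-4", subnetwork=None):
    """Return command-line flags to pass to a Python Beam pipeline."""
    flags = [
        "--runner=DataflowRunner",
        f"--machine_type={machine_type}",  # bigger workers => faster dependency setup
    ]
    if subnetwork:
        # Expected form: "regions/<region>/subnetworks/<name>"
        flags.append(f"--subnetwork={subnetwork}")
    return flags
```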

How to start Aws workflow execution from a web application

I have just started learning the AWS Simple Workflow Service (SWF) and wrote a workflow using the AWS Flow Framework for Java. I am able to execute the workflow successfully from Eclipse. But for my requirement, I need to execute it from my web application back-end, which I am planning to write in Node.js. I found the AWS REST API for SWF but am not sure whether it will work with the Flow Framework, so please help me take the right approach.
So basically my question is how can I execute workflow starter and workers from web back-end?
You can use A Node.js library for accessing Amazon Simple Workflow. But this library is pretty low-level compared to the AWS Flow Framework, which makes writing complex workflows non-trivial.
You might consider only starting workflows and implementing activities in JavaScript, while implementing the workflow decider with the Java AWS Flow Framework.
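Starting an execution is the easy part: SWF is a plain AWS API, so any SDK can do it (aws-sdk in Node.js, boto3 in Python, shown here as a sketch); the domain and workflow type names below are illustrative assumptions. Flow Framework deciders and workers keep polling their task lists as before, so an execution started this way is picked up by them normally.

```python
# Sketch: build the StartWorkflowExecution request a web backend would send.
# Domain/type names are placeholders; field names match the SWF API.
import uuid

def start_request(domain, workflow_name, version, input_json="{}"):
    """Build kwargs for swf_client.start_workflow_execution."""
    return {
        "domain": domain,
        "workflowId": str(uuid.uuid4()),  # must be unique among open executions
        "workflowType": {"name": workflow_name, "version": version},
        "input": input_json,
    }

# import boto3
# swf = boto3.client("swf")
# swf.start_workflow_execution(**start_request("MyDomain", "MyWorkflow", "1.0"))
```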

How to build a talend job as a web service with parameters and multiple strings as output

I have a Talend job designed to do an ETL task, using Talend Open Studio for Data Integration. I have gone through the beginner and component manuals (of TOS 5.6), but cannot find a way to design a job that can be exported as a web service, such that it can be called with parameters in the request and return a collection of strings as the response. I found that in version 5.1 there were components like tRESTRequest and tRESTResponse that achieved what I want, but for later versions I have found nothing helpful.
The web service and REST components are now part of Talend ESB. In Talend DI only the consumer components remain. You can download the ESB and build a standalone job, or set up an ESB server and deploy your jobs there.
The ESB is available here: http://www.talend.com/download/talend-open-studio?qt-product_tos_download=2#qt-product_tos_download

Web framework for an application that runs a hourly job and has simple GUI

I have to build an application that does two things:
Gathers data from a remote source and shoves it into a database. Runs every hour.
Provides a simple GUI to view that data.
Questions -
a. Will using an MVC framework like Spring or Django be overkill for this?
b. Do web frameworks support daemon jobs (assuming 1 is run as a daemon job)?
c. I have never used MQ or any messaging system before. Can something like that be used in this scenario?
At present, I plan to accomplish the above by writing a script for (1) and a JS page for (2). But I would like to design an MVC application that can be expanded in the future, possibly adding more functionality/features.
b. Spring provides the @Scheduled annotation for running jobs at specific times or fixed intervals, for example every hour.
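If you stay with a plain script for (1) instead of a framework, the hourly job can be sketched with the standard library alone; fetch_and_store below is a placeholder for your own logic.

```python
# Sketch: run a fetch-and-store task every hour using only the stdlib.
import sched
import time

def make_periodic_runner(task, interval=3600):
    """Return a scheduler that runs `task` immediately, then every `interval` seconds."""
    runner = sched.scheduler(time.time, time.sleep)

    def tick():
        task()
        runner.enter(interval, 1, tick)  # re-schedule the next run

    runner.enter(0, 1, tick)  # first run is due immediately
    return runner

def fetch_and_store():
    pass  # pull data from the remote source and insert it into the database

# make_periodic_runner(fetch_and_store).run()  # blocks forever
```

In practice a cron entry calling the script is often simpler and survives process crashes; the in-process scheduler is mainly useful if the job shares state with the web GUI process.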