What is actually a "topic" in Camunda External Task?

I have tried the External Task Pattern for the Camunda workflow engine.
I understand that external tasks are performed by some other workers, and that the "topic" name is the main link between the BPMN engine and the worker process.
What is the actual implementation/technology behind this "topic" name, which we specify in the External Task configuration and which the worker then uses to subscribe to the topic?

Camunda does not bundle any middleware (as the name "topic" may suggest). The implementation of the external task topics is a simple database table, as you can see in the documentation here: https://docs.camunda.org/manual/latest/user-guide/process-engine/database/database-schema/#engine-bpmn, specifically the ACT_RU_EXT_TASK table here:
https://docs.camunda.org/manual/latest/user-guide/process-engine/img/erd_715_bpmn.svg
The topic name is a column in this table, used to select only those external tasks which can be performed by the worker. The worker tells the engine which type of work it can perform by 'subscribing' to a specific topic. (However, technically no subscription in the sense of a registration with the engine takes place. It is just a configuration setting on the worker, which leads to the correct attribute being set on the REST API call.)
Also see: https://docs.camunda.org/manual/latest/user-guide/process-engine/external-tasks/
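To make this concrete, here is a minimal sketch of a worker loop in Python against the engine's REST API. The engine URL and the topic name are assumptions for illustration; the point is that the "subscription" is nothing more than the topicName attribute in the fetchAndLock request body:

import requests

ENGINE = "http://localhost:8080/engine-rest"  # assumed engine REST endpoint
WORKER_ID = "worker-1"                        # arbitrary identifier for this worker

# "Subscribing" boils down to polling fetchAndLock with the topic name
# as a plain attribute in the request body.
body = {
    "workerId": WORKER_ID,
    "maxTasks": 10,
    "topics": [
        {"topicName": "invoice-processing", "lockDuration": 10000}  # hypothetical topic
    ],
}
tasks = requests.post(f"{ENGINE}/external-task/fetchAndLock", json=body).json()

for task in tasks:
    # ... do the actual work here ...
    requests.post(
        f"{ENGINE}/external-task/{task['id']}/complete",
        json={"workerId": WORKER_ID},
    )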

Related

Azure Web Jobs Pipeline [inject before a function instance is created]

Problem:
I have a Web Job that logs to Application Insights. I have a custom TelemetryInitializer (registered as a singleton, with no options) that adds some contextual information to the request (representing a function invocation). The issue is that Azure Web Jobs (from what I know) doesn't provide a statically available execution context (similar to HttpContext in ASP.NET). I tried to build my own based on AsyncLocal, initializing it via a FunctionInvocationFilter, but the context is not available from the TelemetryInitializers, since they are invoked from a thread created earlier, with a different execution context. For this to work, I need to initialize my context earlier, before the function instance (together with the AppInsights machinery) is created.
I tried to search in the Web Jobs SDK sources, but couldn't find any place where I can inject my context initialization logic.
Question:
Does anybody know how to do that? Or can I perhaps achieve the same thing differently?
This can be achieved with System.Diagnostics.Activity, which can act as a context for a logical operation. It is created very deep in the WebJobs pipeline, and the AppInsights initializers share the operation context.

How to see Task Definition history for ECS Service

Is there a way to see the history of task definitions for an ECS service? It is easy to see the current task definition for the service as well as the current task definition for the tasks. But I can't see a way to see what the previous task definition registered with the service was. I can easily see all the task definitions I have, but I don't know which ones were registered to which service.
We can usually see the version because of the history of jobs in our Jenkins box. But we recently had a situation where Jenkins didn't have the history for the one we wanted to roll back to. We ended up guessing right because incrementing numbers are easy to guess. But I don't like that we had to guess. I couldn't see it in the CloudWatch logs. I could see the auto-scaling events there, but not task definition changes.
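To make the gap concrete, here is a hedged boto3 sketch of what is inspectable (cluster, service, and family names are placeholders): you can read the service's current task definition and list every registered revision of a family, but nothing in these responses ties past revisions to a particular service.

import boto3

ecs = boto3.client("ecs")

# Current task definition registered with the service
# (cluster and service names are placeholders):
svc = ecs.describe_services(cluster="my-cluster", services=["my-service"])
print(svc["services"][0]["taskDefinition"])

# All registered revisions of a family, newest first. This lists the
# candidates for a rollback, but nothing here records which revision
# the service was previously running:
resp = ecs.list_task_definitions(familyPrefix="my-task-definition", sort="DESC")
for arn in resp["taskDefinitionArns"]:
    print(arn)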

Creating Dynamically Cron jobs at particular intervals

I am looking to dynamically create cron jobs that get created and configured using request parameters sent by Cloud Functions or a normal HTTP request.
There is already a manual way via the Google Cloud console, but I want to automate this manual task, configuring and creating jobs according to the request parameters.
I am already aware that we can provide a cron.yaml file that can hold all the configuration, but I need some help or a reference that explains in detail how to achieve this.
I am also a beginner, so please correct me or suggest an alternative solution.
You'll want to use the Cloud Scheduler API. Specifically, this is a REST API that lets you do everything you could do via the console or the gcloud command.
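As a rough sketch of what that REST call could look like from Python (the project, location, target URL, and token are placeholders, not values from the question), creating a job is a single authenticated POST:

import requests

# Placeholders: project, location, token, and target URL are illustrative.
# The token needs Cloud Scheduler permissions (for example, from
# "gcloud auth print-access-token").
PROJECT = "my-project"
LOCATION = "us-central1"
ACCESS_TOKEN = "..."  # OAuth2 access token, placeholder

parent = f"projects/{PROJECT}/locations/{LOCATION}"
job = {
    "name": f"{parent}/jobs/my-dynamic-job",
    "schedule": "*/30 * * * *",  # standard cron syntax
    "timeZone": "Etc/UTC",
    "httpTarget": {"uri": "https://example.com/handler", "httpMethod": "POST"},
}

resp = requests.post(
    f"https://cloudscheduler.googleapis.com/v1/{parent}/jobs",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    json=job,
)
print(resp.status_code, resp.json())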

What is an ECS Task Group and how do I create one?

This is the only doc I have found for Task Group, and it doesn't explain how or where to create one.
I can't find any docs that adequately explain what a Task Group actually is, with an example of how to create and use one. It sounds like it's a way for a service to run multiple different Task Definitions, which would be useful to me.
For example, I added a container to a task definition and the service is balancing multiple instances of it on the cluster. But I have another container I want to deploy along with the first one, but I only want a single instance of it to run. So I can't add it to the same task definition because I'd be creating multiple instances of it and consuming unnecessary resources. Seems like this is what Task Groups are for.
You are indeed correct, there exists no proper documentation on this (I opened a support case with our AWS team to verify!).
However, all is not lost. A solution to your conundrum does indeed exist, and it is a solution we use every day. You don't have to use the task group, whatever that is (we don't actually know yet; an AWS engineer is writing up some docs for me, and I will post them here when I get them).
All you need, though, are placement constraints (your same doc), which are easy enough to set up. If you have a launch configuration, you can add something like this to the Advanced > User Data section so that it gets run during boot (or just add it when launching your instance manually, or, if you're feeling exceptionally hacky, you can log on to your instance and run the commands manually... for science and stuff):
echo ECS_INSTANCE_ATTRIBUTES={\"env\": \"prod\",\"primary\": \"app1\",\"secondary\": \"app2\"} >> /etc/ecs/ecs.config
Everything in quotes is arbitrarily defined by you, so use whatever tags and values make sense for your use case. If you go this route, make sure you add the following line to your docker launch command: --env-file=/etc/ecs/ecs.config
So now that you have an instance that's properly tagged (and make sure it's only the single instance you want, which means you probably need a dedicated launch configuration for this specific type of instance), you can go ahead and create your ECS service like you were wanting to do. However, make sure you set up your Task Placement correctly, to match the roles that are now configured for your instances:
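A rough boto3 sketch of what that Task Placement might look like (cluster, service, and task definition names are placeholders; the attribute expressions match the ecs.config line above):

import boto3

ecs = boto3.client("ecs")

# Sketch only: cluster, service, and task definition names are placeholders.
# The expressions reference the custom instance attributes set through
# ECS_INSTANCE_ATTRIBUTES above.
ecs.create_service(
    cluster="my-cluster",
    serviceName="app2-service",
    taskDefinition="app2-td:1",
    desiredCount=1,
    placementConstraints=[
        {"type": "memberOf", "expression": "attribute:env == prod"},
        {"type": "memberOf", "expression": "attribute:secondary == app2"},
    ],
)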
So for the example above, this service is configured to only launch this task on instances that are configured for both env==prod and secondary==app2 -- since your other two instances aren't configured for secondary==app2, they're not allowed to host this task.
It can be confusing at first, and took us a while to get right, but I hope this helps!
Response from AWS Support
I looked into the procedure for how to use Task Groups, and here were my findings:
- The assumption is that you already have a task group named "databases" if you had existing tasks launched from the RunTask/StartTask API.
- When you launch a task using the RunTask or StartTask action, you can specify the name of the task group for the task. If you don't specify a task group for the task, the default name is the family name of the task definition (for example, family:my-task-definition).
- So to create a Task Group, either you define a Task Group (say, webserver) while creating a Task in the Task Console, or use the following command:
$ aws ecs run-task --cluster <ecs-cluster> --task-definition taskGroup-td:1 --group webserver
Once created, you will notice a Task running with group: webserver.
Now you can use the following placement constraint with the Task Definition to place your tasks only on the container instances that are running tasks with this Task Group.
"placementConstraints":
[
{
"expression": "task:group == webserver", "type": "memberOf"
}
]
If you try to run a task with the above placementConstraint but do not have any task running with taskGroup: webserver, you will receive the following error: Run tasks failed. Reasons: ["memberOf constraint unsatisfied"].
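For completeness, the same flow expressed with boto3 (cluster and task definition names are placeholders):

import boto3

ecs = boto3.client("ecs")

# Launch a task into the "webserver" group, creating the Task Group
# implicitly as described above (names are placeholders):
ecs.run_task(
    cluster="my-ecs-cluster",
    taskDefinition="taskGroup-td:1",
    group="webserver",
)

# A second task constrained to instances already running a task from
# that group; this fails with "memberOf constraint unsatisfied" when
# no such task is running:
ecs.run_task(
    cluster="my-ecs-cluster",
    taskDefinition="other-td:1",
    placementConstraints=[
        {"type": "memberOf", "expression": "task:group == webserver"},
    ],
)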
References: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-placement-constraints.html

Accessing ID inside decider and worker in AWS SWF

I am working with AWS SWF and wanted to create a workflow such that I can pass an ID when starting an execution and be able to access it in my decider and activity worker.
I am not able to find any documentation related to that.
I am implementing the workers and deciders in Python using the boto library.
You actually cannot create a workflow execution without specifying a workflowId, according to the documentation of the start_workflow_execution method.
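Since the question mentions boto, a minimal sketch may help (domain, task list, and names are placeholders): the workflowId you pass at start time comes back with every decision task, and the input parameter is another way to carry your own ID.

from boto.swf.layer1 import Layer1

swf = Layer1()

# Start an execution with an explicit workflowId; domain, names, and
# versions are placeholders. Your own ID can also travel in "input".
swf.start_workflow_execution(
    "my-domain", "order-12345", "MyWorkflow", "1.0",
    task_list="decider-tasks",
    input='{"order_id": "12345"}',
)

# In the decider, the same ID comes back with every decision task
# (poll_for_activity_task responses carry the same workflowExecution field):
task = swf.poll_for_decision_task("my-domain", "decider-tasks")
if task.get("taskToken"):
    workflow_id = task["workflowExecution"]["workflowId"]  # "order-12345"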