How to catch events about the start and finish of a task in Camunda?

I need to get messages when a task is started and finished. I'd like to create one common callback and not modify each task.
Any ideas?

The camunda-bpm-reactor extension and the camunda-spring-boot-starter (version 3.3 and up) both support registering global listeners for every task "hook" without explicitly adding a listener in the model.
The reactor extension is based on an early, meanwhile no longer supported, Project Reactor event bus, so if you are free to choose, I would go for the Spring eventing solution.
You can subscribe to all task create events happening in your engine via:
import org.camunda.bpm.engine.delegate.DelegateTask;
import org.springframework.context.event.EventListener;
import org.springframework.stereotype.Component;

@Component
class MyTaskListener {

    @EventListener(condition = "#taskDelegate.eventName == 'create'")
    public void onTaskEvent(DelegateTask taskDelegate) {
        // do stuff on every task create
    }
}
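Since the question also asks about the finish of a task, the same mechanism can listen for the 'complete' event. A minimal sketch, assuming the starter's task eventing is enabled; the class and method names are only illustrative:

import org.camunda.bpm.engine.delegate.DelegateTask;
import org.springframework.context.event.EventListener;
import org.springframework.stereotype.Component;

@Component
public class TaskLifecycleListener {

    @EventListener(condition = "#taskDelegate.eventName == 'create'")
    public void onTaskStarted(DelegateTask taskDelegate) {
        // task was created ("started")
    }

    @EventListener(condition = "#taskDelegate.eventName == 'complete'")
    public void onTaskFinished(DelegateTask taskDelegate) {
        // task was completed ("finished")
    }
}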

Related

Running gRPC server in a microservice C++

I am a relative newbie to using gRPC. I am creating a microservice that needs to run a long computation and return the results to the client via gRPC. I am trying to work out how to run the gRPC server in one thread (or several) and the computation in another thread.
Usually you would want to have the server instance running and, when a request comes in, do some data retrieval or some computation, then formulate the results and return them to the requester. In many cases the operation performed is non-trivial and might take time to process. Ideally you don't want the server thread to block waiting for the operation to complete. The C++ code examples I have found are rather trivial, and I was looking for more guidance on how to correctly implement this scenario.
The controller would look something like:
void dummyFunction() {
    while (true) {
        // do something
    }
}

void start() {
    thread main_thread = thread{dummyFunction};
    main_thread.join();
    ...
    mainGRPCServer->start(target_str);
}
In the MainServer implementation I have used a synchronous server, as in the greeter example:
void RunServer(string &server_address) {
    NServiceImpl n_service;
    SServiceImpl s_service;
    grpc::EnableDefaultHealthCheckService(true);
    grpc::reflection::InitProtoReflectionServerBuilderPlugin();
    ServerBuilder builder;
    // Listen on the given address without any authentication mechanism.
    builder.AddListeningPort(server_address, grpc::InsecureServerCredentials());
    // Register "service" as the instance through which we'll communicate with
    // clients. In this case it corresponds to a *synchronous* service.
    builder.RegisterService(&n_service);
    builder.RegisterService(&s_service);
    // Finally assemble the server.
    std::unique_ptr<Server> server(builder.BuildAndStart());
    // Wait for the server to shutdown. Note that some other thread must be
    // responsible for shutting down the server for this call to ever return.
    server->Wait();
}

void MainServerImpl::start(string &target_str) {
    worker_ = thread(RunServer, ref(target_str));
    worker_.join();
}
Obviously this implementation is not going to work, as I understand the gRPC server has its own threading model. I have looked at using an async server implementation. Can anyone guide me on how to structure this?
UPDATE: I found this on Google Groups:
The C++ server has two threading models available: sync and async. Most users will want to use the sync model: the server will have an (internal) threadpool that manages multiplexing requests onto some number of threads (reusing threads between requests). The async model allows you to bring your own threading model, but is a little trickier to use - in that mode you request new calls when your server is ready for them, and block in completion queues while there is no work to do. By arranging when you block on the completion queues, and on which completion queues you make requests, you can arrange a wide variety of threading models.
But I still can't seem to find any good implementation examples.
The best intro found so far is https://grpc.io/docs/languages/cpp/async/
Yes, https://grpc.io/docs/languages/cpp/async/ is the best place to start. And the async version of the helloworld example (https://github.com/grpc/grpc/tree/v1.35.0/examples/cpp/helloworld) would be a good reference, too.

Spring Integration Multiple consumers not processing concurrently

I am using Spring Integration with ActiveMQ. I defined a DefaultMessageListenerContainer with maxConcurrentConsumers = 5. It is referenced in a . After an int-xml:validating-filter and an int-xml:unmarshalling-transformer, I defined a queue channel actionInstructionTransformed, and I have a poller for this queue channel. When I start my application, I can see in the ActiveMQ console that a connection is created with five sessions inside it.
Now, I have a @MessageEndpoint with a method annotated
@ServiceActivator(inputChannel = "actionInstructionTransformed", poller = @Poller(value = "customPoller")).
I have a log statement at the method entrance. Processing of each message is long (several minutes). In my logs, I can see that thread-1 starts the processing and then I only see thread-1 outputs. Only when thread-1 has finished processing one message do I see thread-2 start processing the next message, and so on. I do NOT have any synchronized block inside my class annotated @MessageEndpoint. I have not managed to get thread-1, thread-2, etc. to process messages concurrently.
Has anybody experienced something similar?
Look, you say:
After an int-xml:validating-filter and an int-xml:unmarshalling-transformer, I defined a queue channel actionInstructionTransformed.
Now let's go to the QueueChannel and PollingConsumer definitions!
On the other hand, a channel adapter connected to a channel that implements the org.springframework.messaging.PollableChannel interface (e.g. a QueueChannel) will produce an instance of PollingConsumer.
And pay attention that @Poller (PollerMetadata) has a taskExecutor option.
By default the TaskScheduler asks the QueueChannel for data periodically according to the trigger configuration. If that is a PeriodicTrigger with default options like fixedRate = false, the next poll only happens after the previous one has finished. That's why you see only one thread.
So, try to configure a taskExecutor and your messages from that queue will be processed in parallel.
The concurrency on the DefaultMessageListenerContainer has no effect here, because in the end you place all those messages into the QueueChannel, and from there a new threading model takes over based on the @Poller configuration.
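As a sketch of that suggestion: if the customPoller referenced in the question is defined as a PollerMetadata bean, giving it a task executor lets several messages from the queue channel be processed in parallel. Bean names, trigger period, and pool sizes below are illustrative, not prescriptive:

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.integration.scheduling.PollerMetadata;
import org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor;
import org.springframework.scheduling.support.PeriodicTrigger;

@Configuration
public class PollerConfig {

    // Executor so that up to 5 messages from the queue channel are handled in parallel
    @Bean
    public ThreadPoolTaskExecutor pollerTaskExecutor() {
        ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
        executor.setCorePoolSize(5);
        executor.setMaxPoolSize(5);
        return executor;
    }

    // The "customPoller" referenced from @Poller(value = "customPoller")
    @Bean
    public PollerMetadata customPoller() {
        PollerMetadata poller = new PollerMetadata();
        poller.setTrigger(new PeriodicTrigger(100)); // poll every 100 ms
        poller.setMaxMessagesPerPoll(1);
        // Hand each polled message off to the executor instead of processing
        // it on the scheduler thread, so long processing does not serialize polls.
        poller.setTaskExecutor(pollerTaskExecutor());
        return poller;
    }
}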

Difference between ExecutionListener and TaskListener

As I have read:
In general, the task listener event cycle is contained between execution listener events:
ExecutionListener#start
TaskListener#create
TaskListener#{assignment}*
TaskListener#{complete, delete}
ExecutionListener#end
see complete list at Camunda BPMN - Task listener vs Execution listeners
But now I have this question: what is the difference between ExecutionListener#start and TaskListener#create? As I noticed, the create event fires after the start event. Which business logic should I put in the start event and which in the create event? Are there any problems if I put all of my business logic in the start event?
I think the important difference to remember is that the ExecutionListener is available for all elements and gives access to the DelegateExecution, while the TaskListener only applies to tasks (BPMN and CMMN) and gives you access to the DelegateTask.
The DelegateTask is important for all task-lifecycle operations, like setting the due date or assigning candidate groups, ... you just cannot do this with the DelegateExecution.
So in general, we use ExecutionListeners on events and gateways, JavaDelegates on ServiceTasks and TaskListeners on UserTasks.
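To illustrate the difference, here is a minimal sketch of both listener types; the class names, candidate group, and variable name are only examples:

import java.util.Date;

import org.camunda.bpm.engine.delegate.DelegateExecution;
import org.camunda.bpm.engine.delegate.DelegateTask;
import org.camunda.bpm.engine.delegate.ExecutionListener;
import org.camunda.bpm.engine.delegate.TaskListener;

// TaskListener: task-lifecycle work that is only possible with the DelegateTask
public class SetDueDateListener implements TaskListener {
    @Override
    public void notify(DelegateTask delegateTask) {
        delegateTask.setDueDate(new Date());          // task-specific API
        delegateTask.addCandidateGroup("accounting"); // not available on DelegateExecution
    }
}

// ExecutionListener: generic element-level work via the DelegateExecution
class LogElementListener implements ExecutionListener {
    @Override
    public void notify(DelegateExecution execution) throws Exception {
        execution.setVariable("lastVisitedElement", execution.getCurrentActivityId());
    }
}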

How to periodically schedule the same activity when the previous one is still executing

My goal is to have a workflow which periodically (every 30 seconds) adds the same activity (doing nothing but sleeping for 1 minute) to the taskList. I also have multiple machines hosting activity workers that poll the taskList simultaneously. When the activity is scheduled, one of the workers can poll and execute it.
I tried to use the cron decorator to create a DynamicActivityClient and use DynamicActivityClient.scheduleActivity() to schedule the activity periodically. However, it seems the activity will not be scheduled until the last activity has finished. In my case, the activity gets scheduled every minute rather than every 30 seconds as I set in the cron pattern.
The package structure is almost the same as the AWS SDK sample code: cron
Is there any other structure recommended to achieve this? I am very new to SWF. Any suggestion is highly appreciated.
You may do so by writing much simpler workflow code and using the workflow clock and a timer. Refer to the example at the link below.
http://docs.aws.amazon.com/amazonswf/latest/awsflowguide/executioncontext.html
Also remember one thing: the maximum number of events allowed in a workflow execution is 25000, so the cron job will not run forever; you will have to write code to start a new workflow execution after some time. Refer to the continuous workflow example provided at the link below.
http://docs.aws.amazon.com/amazonswf/latest/awsflowguide/continuous.html
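As a rough sketch of that clock-and-timer approach: the workflow can schedule the activity and start a 30-second timer without waiting for the activity's Promise, so a slow activity does not delay the next tick. PeriodicWorkflow, MyActivitiesClient, and sleepOneMinute are placeholders for your own workflow and activity interfaces:

import com.amazonaws.services.simpleworkflow.flow.DecisionContextProviderImpl;
import com.amazonaws.services.simpleworkflow.flow.WorkflowClock;
import com.amazonaws.services.simpleworkflow.flow.annotations.Asynchronous;
import com.amazonaws.services.simpleworkflow.flow.core.Promise;

public class PeriodicWorkflowImpl implements PeriodicWorkflow {

    // Generated client for your activity interface (placeholder names)
    private final MyActivitiesClient activityClient = new MyActivitiesClientImpl();

    private final WorkflowClock clock =
            new DecisionContextProviderImpl().getDecisionContext().getWorkflowClock();

    @Override
    public void startPeriodicWorkflow() {
        scheduleNext();
    }

    @Asynchronous
    void scheduleNext(Promise<?>... waitFor) {
        // Schedule the activity but do not wait on its Promise.
        activityClient.sleepOneMinute();

        // Fire again after 30 seconds, regardless of whether the previous
        // activity has completed. Remember the 25000-event history limit:
        // restart the execution periodically as described above.
        Promise<Void> timerFired = clock.createTimer(30);
        scheduleNext(timerFired);
    }
}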
The cron decorator internally relies on AsyncScheduledExecutor, which is by design written to wait for all asynchronous code in the invoked method to complete before calling the cron again. So the behavior you are witnessing is expected. The workaround is to invoke the activity not from the code under the cron, but from code in a different scope. Something like:
// This is a field
Settable<Void> invokeNextActivity = new Settable<>();

void executeCron() {
    scheduledExecutor.execute(new AsyncRunnable() {
        @Override
        public void run() throws Throwable {
            // Instead of executing the activity here, just unblock
            // its execution in a different scope.
            invokeNextActivity.set(null);
        }
    });
    // Recursive loop with each activity invocation
    // gated on invokeNextActivity
    executeActivityLoop(invokeNextActivity);
}

@Asynchronous
void executeActivityLoop(Promise<Void> waitFor) {
    activityClient.executeMyActivityOnce();
    invokeNextActivity = new Settable<>();
    executeActivityLoop(invokeNextActivity);
}
I recommend reading the TryCatchFinally documentation to get an understanding of error handling and scopes.
Another option is to rewrite AsyncScheduledExecutor to invoke invoked.set(lastInvocationTime) not from doFinally but immediately after calling command.run().

When is Akka's default system ready in Play?

I was writing an application in Play 2.3.7, and when trying to create an actor (using Play's default Akka.system()) inside the overridden beforeStart method of the Global object, the application crashed with an infinite recursive call of beforeStart, ultimately throwing an exception because the Global object was not initialized. If I create this actor inside the onStart method, everything works fine.
My "intuition" was: "ok, this actor must be ready before the application receives the first request, so it must be created on beforeStart, not in onStart".
When is Akka.system() ready to use?
Akka.system returns an ActorSystem held by the AkkaPlugin. Therefore, if you want to use it, you must do so after the AkkaPlugin has been initialized. The AkkaPlugin is given priority 1000, which means it is started after most other internal plugins (database, evolutions, ...). The Global plugin has priority 10000, which means the AkkaPlugin is available there (and in any plugin with priority greater than 1000).
Note the warning in the docs about beforeStart:
Called before the application starts.
Resources managed by plugins, such as database connections, are likely not available at this point.
You have to start this in onStart() because beforeStart() is called too early - way before anything like Akka (which is actually a plugin) or any database connections are created. In fact, the documentation for GlobalSettings states:
Resources managed by plugins, such as database connections, are likely not available at this point.
The general guidance (confirmed by this thread) is that onStart() is the place to create your actors. And in practice, that has worked for me as well.
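For reference, a minimal sketch of creating the actor in onStart() using the Play 2.3 Java API (the Scala Global object works analogously; MyActor is a placeholder for your actor class):

import akka.actor.ActorRef;
import akka.actor.Props;
import play.Application;
import play.GlobalSettings;
import play.libs.Akka;

// Global.java at the root of the classpath
public class Global extends GlobalSettings {

    @Override
    public void onStart(Application app) {
        // Safe here: the Akka plugin (priority 1000) has already been initialized
        // by the time Global (priority 10000) reaches onStart.
        ActorRef myActor = Akka.system().actorOf(Props.create(MyActor.class), "my-actor");
    }
}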