Do we need to increase activity version in SWF for increasing its timeout?

We have an existing workflow where we need to increase the timeout for an activity (start-to-close) to enable some urgent processing. Do we need to bump the activity version?

It depends on how you specify the timeout. If you specify it at registration time, then you have to change the version, because registration is immutable. If you specify it as part of the invocation, no new version is needed.
If you are using the AWS Flow Framework for Java, use ActivitySchedulingOptions to pass the timeout. Here is the relevant documentation.
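For the plain SWF API (outside the Flow Framework), the same per-invocation override can be sketched as a ScheduleActivityTask decision. The field names below follow the SWF decision API in boto3-style dicts, but the activity name, version, id, and task list are hypothetical examples:

```python
# Sketch of overriding an activity's start-to-close timeout per invocation,
# expressed as a raw SWF "ScheduleActivityTask" decision (boto3-style dicts).
# The activity name, version, activityId, and task list are hypothetical.

def schedule_activity_decision(activity_id, timeout_seconds):
    """Build a ScheduleActivityTask decision that overrides the
    registered startToCloseTimeout for this one invocation only."""
    return {
        "decisionType": "ScheduleActivityTask",
        "scheduleActivityTaskDecisionAttributes": {
            "activityType": {"name": "ProcessRecords", "version": "1.0"},
            "activityId": activity_id,
            "taskList": {"name": "default"},
            # Per-invocation override: no activity version bump needed.
            "startToCloseTimeout": str(timeout_seconds),
        },
    }

decision = schedule_activity_decision("urgent-run-42", 3600)
print(decision["scheduleActivityTaskDecisionAttributes"]["startToCloseTimeout"])
```

Because the override travels with the decision rather than the registration, only the decider code changes; the registered activity type stays at the same version.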

Related

How to add delay or Thread.sleep() in script task or how to delay the http task in flowable?

I am running the Flowable Maven dependency as a Spring Boot project (this project contains only the Flowable dependency and the BPMN model).
There is another microservice (a wrapper service) that accesses the Flowable REST APIs to initiate the process and update the tasks.
I am running an HTTP task in a loop, checking a count on each iteration. If the count is satisfied, I end the process; otherwise it loops back to the HTTP task. The problem is that I cannot determine when the count will be met (it might even take days).
I do not have the option of using a Java service task here.
How can I handle this scenario in the BPMN model, or is there another approach to follow? Please advise.
You can let your check complete, then test with an XOR gateway whether the count is reached. If yes, you continue with the regular process. If not, you continue to an intermediate timer event on which you define a wait time. After the specified time the token continues and loops back into the checking service task.
Only use this approach if the number of loops will be small. It is not a good pattern if the loop executes every few seconds, potentially over days, as this creates a large instance tree and a lot of audit data in the database.
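An intermediate timer catch event on the loop-back path could look roughly like this in the BPMN XML (a sketch; the element id is hypothetical, and the ISO-8601 duration PT10M means "wait 10 minutes" before re-entering the check):

```xml
<intermediateCatchEvent id="waitBeforeRetry">
  <timerEventDefinition>
    <timeDuration>PT10M</timeDuration>
  </timerEventDefinition>
</intermediateCatchEvent>
```

The sequence flow from the XOR gateway's "count not reached" branch points at this event, and its outgoing flow points back at the HTTP task.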
In such a case you can work with an external job scheduler such as Quartz and an asynchronous integration pattern.
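The external-scheduler idea can be sketched as polling with a capped exponential backoff, so a check that may take days does not hammer the engine every few seconds. This is a language-agnostic sketch in Python with hypothetical names; in practice a Quartz job would play the role of the sleep/reschedule step:

```python
# Poll with capped exponential backoff instead of looping inside the
# process instance. All names here are hypothetical.
import time

def poll_until(check, base_delay=5.0, max_delay=3600.0, sleep=time.sleep):
    """Call check() until it returns True, doubling the wait between
    attempts up to max_delay. Returns the number of attempts made."""
    delay, attempts = base_delay, 0
    while True:
        attempts += 1
        if check():
            return attempts
        sleep(delay)
        delay = min(delay * 2, max_delay)

# Deterministic demo: the condition becomes true on the 4th check, and
# we record the waits instead of actually sleeping.
waits = []
state = {"n": 0}

def count_reached():
    state["n"] += 1
    return state["n"] >= 4

attempts = poll_until(count_reached, base_delay=1, max_delay=4, sleep=waits.append)
print(attempts, waits)  # 4 [1, 2, 4]
```

The backoff keeps the number of checks (and therefore the audit records, if the check re-enters the engine) logarithmic in the waiting time rather than linear.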
Also see:
https://www.flowable.com/open-source/docs/bpmn/ch07b-BPMN-Constructs/#timer-intermediate-catching-event
or
https://docs.camunda.io/docs/next/components/modeler/bpmn/timer-events/

What's the update behavior (rolling vs blue/green) for AWS Lambda Functions when consuming from Kinesis Stream?

Let's say I have a Kinesis Stream with 4 shards being consumed by a Lambda Function. The stream is continuously receiving events so it would be a high usage scenario. As I have 4 shards I'd have 4 function instances running at the same time (assuming Parallelization Factor=1). Then I publish a new version of the function with some new code. What happens then?
Will the next invocation of the function always pick up the latest version, meaning there won't be interleaved invocations of both old and new versions?
Or is it a "rolling update", where each of the 4 instances is replaced one at a time over an interval, so that some batches are processed by the old version and some by the new?
Something else?
The behavior is most like your first bullet point, but the details can vary.
A number of worker processes in the backend poll the shards for work. Whenever there is work to do, they do a synchronous invoke to the Lambda-API and wait for the response (docs).
The Lambda-API is now responsible for picking an execution context to handle the request. It will do that depending on the function ARN that you specified in the event source mapping. If you use the default, i.e., the unqualified ARN pointing at $LATEST, Lambda will just create an execution context with the latest version of the code (or use an existing one that satisfies the criteria) to run your code.
If you use an alias or a specific version in the function ARN for the event source mapping, the behavior depends on what you pick. If you specify a function version, Lambda will specifically execute that version.
If you specify an alias that does weighted routing to multiple function versions, it will pick any of those versions to handle the request.
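The weighted-routing case can be illustrated with a toy client-side simulation. In reality Lambda makes this per-invocation choice server-side; the version names and weights below are hypothetical:

```python
# Toy simulation of an alias with weighted routing picking a function
# version per invocation. Real Lambda does this server-side; the version
# names and weights here are hypothetical.
import random

def pick_version(weights, rng=random):
    """weights: mapping of version string -> routing weight."""
    versions = list(weights)
    return rng.choices(versions, weights=[weights[v] for v in versions])[0]

# An alias sending 90% of traffic to version "2" and 10% to version "1":
alias = {"2": 0.9, "1": 0.1}
only_configured = {pick_version(alias) for _ in range(1000)} <= {"1", "2"}

# With all weight on a single version, every invocation gets that version:
always_pinned = all(pick_version({"3": 1.0}) == "3" for _ in range(100))
print(only_configured, always_pinned)  # True True
```

The second case is why pointing the event source mapping at an alias with 100% weight on one version behaves like an instant blue-green cutover: the moment you repoint the alias, new invocations get the new version.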
tl;dr The behavior depends on the function ARN in the event source mapping, but usually, Lambda will switch to the new version without doing any rolling update logic. It behaves more like a blue-green deployment.

Google Cloud IoT: a few config update messages are missing when sending config updates frequently from Cloud Functions to the device

I am using config updates and Cloud Functions for communication between a mobile application and an ESP32 device, following the example here, but when I send config update messages frequently some of them are not delivered; say, out of 5, only 3 config update messages get through. I have two questions:
1) How frequently can we send config updates so that none go missing?
2) Is there any alternative way to communicate between Cloud Functions and the IoT device?
According to the docs: [IoT docs]
Configuration updates are limited to 1 update per second, per device. However, for best results, device configuration should be updated much less often — at most, once every 10 seconds. The update rate is calculated as the time between the most recent server acknowledgment and the next update request.
If your operations are mostly configuration updates, I cannot think of an alternative that would perform better.
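Given the documented pacing (at most 1 update per second, ideally one per 10 seconds), one way to avoid silently losing updates is to throttle on the sending side. A minimal sketch, with an injectable clock purely so the demo is deterministic; all names are hypothetical:

```python
# Client-side throttle that enforces a minimum interval between config
# updates, matching the documented pacing. Names are hypothetical.
import time

class ConfigUpdateThrottle:
    def __init__(self, min_interval=10.0, now=time.monotonic):
        self._min_interval = min_interval
        self._now = now
        self._last = None

    def try_update(self, send):
        """Call send() only if min_interval has elapsed since the last
        accepted update; otherwise drop (or queue) the update."""
        t = self._now()
        if self._last is not None and t - self._last < self._min_interval:
            return False
        self._last = t
        send()
        return True

# Simulated clock: attempts arrive at t = 0, 3, and 12 seconds.
ticks = iter([0, 3, 12])
sent = []
throttle = ConfigUpdateThrottle(min_interval=10, now=lambda: next(ticks))
for payload in ["a", "b", "c"]:
    throttle.try_update(lambda p=payload: sent.append(p))
print(sent)  # ['a', 'c'] -- the t=3 attempt is throttled
```

In a real Cloud Function you would typically coalesce throttled updates (keep only the latest pending config) rather than drop them, since only the most recent configuration matters to the device.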

GAE service running on Flexible Env. as target of a task queue

According to the Google docs, a service running in the flexible environment can be the target of a push task:
Outside of the standard environment, you can't add tasks to push queues, but a service running in the flexible environment can be the target of a push task. You can specify this using the target parameter when adding a task to a queue or by specifying the default target for the queue in queue.yaml.
However, when I tried to do it I get 404 errors in the flexible service.
That is expected: the endpoint required for task queues (/_ah/queue/deferred) is not defined in the flexible service.
How do I make a flexible service a valid target for task queues?
Do I have to define that endpoint in my code in some way?
Usually, you'll need to write a handler in your worker service to do the processing after receiving a task. In the case of push tasks, the service will send HTTP requests to whatever URL you specify. If no URL is specified, the default URL /_ah/queue/[QUEUE_NAME] will be used.
Now, from the endpoint you mention, it seems you are using deferred tasks, which are a somewhat special kind. Please see this thread for a workaround that adds the needed URL entry. It mentions Managed VMs, but it should still work.
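The handler contract is simple: the queue service POSTs the task payload to the handler URL, and a 2xx response acknowledges the task (anything else causes a retry). A framework-agnostic sketch; the queue name and route are hypothetical:

```python
# Minimal sketch of a push-queue handler in the flexible environment.
# The queue service POSTs the task payload to the handler URL; a 2xx
# response acknowledges the task. Queue name and route are hypothetical.

processed = []

def process_task(payload):
    processed.append(payload)

def handle_request(path, body):
    """Dispatch POSTs from the push queue to the worker function."""
    if path == "/_ah/queue/my-queue":   # default push-queue URL pattern
        process_task(body)
        return 200                      # 2xx -> task succeeded
    return 404                          # anything else -> task is retried

ok = handle_request("/_ah/queue/my-queue", b"task-1")
missing = handle_request("/other", b"task-2")
print(ok, missing, processed)  # 200 404 [b'task-1']
```

In a real service this routing would live in your web framework (Flask, Express, etc.); the point is only that the flexible service must expose the task URL itself, since nothing is wired up automatically.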

WSO2 BPS human task: can I set a deadline for a task to be completed?

Looking at the human task deadline samples, they set a deadline for when owners should start a task.
After a task is created with a deadline, the system creates a timer based on the task creation time plus the deadline delta.
In my situation, I need to set a deadline by which the task should be completed. It is an absolute time. How can I do it?
Try sample [1]. If the sample doesn't fit your scenario, look at the deadline syntax [2] and change it accordingly. You should be able to do this.
[1] http://tryitnw.blogspot.com/2013/05/escalating-human-task-with-wso2-bps.html
[2] http://docs.oasis-open.org/bpel4people/ws-humantask-1.1-spec-cs-01.html#_Toc135718795
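For an absolute completion time, the WS-HumanTask syntax in [2] distinguishes completion deadlines from the start deadlines used in the samples, and supports an until expression (an absolute time) as an alternative to a for duration. A rough sketch of what this could look like in the task definition, with namespaces, expression language, and escalation details abbreviated; whether the runtime actually enforces it depends on your BPS version:

```xml
<htd:deadlines>
  <htd:completionDeadline>
    <!-- Absolute time by which the task must be completed -->
    <htd:until>'2024-06-30T17:00:00Z'</htd:until>
    <htd:escalation name="notCompletedInTime">
      <!-- notification or reassignment on expiry goes here -->
    </htd:escalation>
  </htd:completionDeadline>
</htd:deadlines>
```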
Maybe this will disappoint you, but I also had a requirement that a task that was not picked up should end automatically after a set time. Link [2] describes the way to define this in the task, but the handling of it is not implemented in BPS (I use this regularly in Oracle SOA).
I ended up defining the timeout in the payload and creating an event listener that used Quartz to track the task timeout. The Quartz job then ends the task as needed. This should be a feature request to WSO2, though.