(AWS SWF) Is there a way to get a list of all activity workers listening on a particular task list?

In our beta stack, we have a single EC2 instance listening to a task list. Sometimes another developer on the team starts his own instance for testing purposes and forgets to turn it off. This creates problems for the next developer who tries to start an activity, only for it to be picked up by the previous developer's machine. Is there a way to get the hostnames of all activity workers listening to a particular task list?

It is not currently possible to get a list of pollers waiting on a task list through the SWF API. A workaround is to look at the identity field on the ActivityTaskStarted event in the workflow history after a task has been picked up by the wrong worker.
One way to avoid this issue is to always use a task list name that is specific to a machine or developer, so workers never collide.
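For illustration, here is a minimal sketch using the plain AWS SDK for Java (the domain and task-list names are placeholders): each worker polls a task list derived from its own hostname and also reports that hostname as its identity, so the offending machine shows up directly in the workflow history.

import com.amazonaws.services.simpleworkflow.AmazonSimpleWorkflow;
import com.amazonaws.services.simpleworkflow.AmazonSimpleWorkflowClientBuilder;
import com.amazonaws.services.simpleworkflow.model.ActivityTask;
import com.amazonaws.services.simpleworkflow.model.PollForActivityTaskRequest;
import com.amazonaws.services.simpleworkflow.model.TaskList;

import java.net.InetAddress;

public class DevWorker {
    public static void main(String[] args) throws Exception {
        AmazonSimpleWorkflow swf = AmazonSimpleWorkflowClientBuilder.defaultClient();
        String hostname = InetAddress.getLocalHost().getHostName();

        // Poll a developer-specific task list and report the hostname as the
        // worker identity; SWF records it in the ActivityTaskStarted event.
        ActivityTask task = swf.pollForActivityTask(new PollForActivityTaskRequest()
                .withDomain("beta")                                  // hypothetical domain name
                .withTaskList(new TaskList().withName("dev-" + hostname))
                .withIdentity(hostname));
    }
}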

Related

Amazon Connect Stop Call Recording

Is it possible to stop call recordings in Amazon Connect so the customer and agent can discuss sensitive material without being recorded?
I am aware of the Set recording behavior blocks, but they don't seem to affect a call that has already started with an agent with call recording enabled. Transferring to another contact flow with the recording type set to none doesn't seem to make a difference, and the call carries on being recorded.
I am aware of the sample flow Sample secure input with agent, as outlined in this AWS article: https://aws.amazon.com/premiumsupport/knowledge-center/disable-recording-amazon-connect. This does work, however it relies on the customer entering payment details whilst the agent is on hold, preventing the agent and customer from having a sensitive conversation.
It seems the only way to stop recording once it has been enabled is to put the agent on hold?
I don't know whether you have solved your issue yet, but Amazon has updated the Amazon Connect API to allow you to suspend the recording.
Boto3 implementation:

import boto3

client = boto3.client('connect')

# Suspends recording for an ongoing contact; the call itself continues.
response = client.suspend_contact_recording(
    InstanceId='string',
    ContactId='string',
    InitialContactId='string'
)
https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/connect.html#Connect.Client.suspend_contact_recording
The API also allows you to start, stop, pause, and resume recording.
We have just started to review this for a POC: turn recording off by default for a group of queues, and allow agents to start, stop, and pause recording as needed.
There is also an Amazon blog post that should help you fully implement the solution:
https://aws.amazon.com/blogs/contact-center/pausing-and-resuming-call-recordings-with-a-new-api-in-amazon-connect/
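If you are on the JVM, the same operations are available in the AWS SDK; here is a rough sketch using the SDK for Java v2 (all IDs below are placeholders), pausing recording around the sensitive part of the conversation:

import software.amazon.awssdk.services.connect.ConnectClient;
import software.amazon.awssdk.services.connect.model.ResumeContactRecordingRequest;
import software.amazon.awssdk.services.connect.model.SuspendContactRecordingRequest;

public class RecordingControl {
    public static void main(String[] args) {
        ConnectClient connect = ConnectClient.create();
        String instanceId = "your-instance-id";         // placeholder
        String contactId = "current-contact-id";        // placeholder
        String initialContactId = "initial-contact-id"; // placeholder

        // Pause recording while sensitive details are discussed.
        connect.suspendContactRecording(SuspendContactRecordingRequest.builder()
                .instanceId(instanceId)
                .contactId(contactId)
                .initialContactId(initialContactId)
                .build());

        // ... the sensitive part of the conversation happens here ...

        // Resume recording afterwards.
        connect.resumeContactRecording(ResumeContactRecordingRequest.builder()
                .instanceId(instanceId)
                .contactId(contactId)
                .initialContactId(initialContactId)
                .build());
    }
}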
After speaking with architects at AWS: the intended, designed-for solution is to have the customer automatically enter sensitive information with the agent on hold and call recording turned off, in order to remain PCI compliant.
If that is not an option there are workarounds possible that go against the way Amazon Connect has been designed. In order to turn off call recording once it has been enabled on a call, a new contact ID must be established. To do this you would need to transfer the user to your external phone number again or transfer to a queue and disable call recording in that new flow.
This brings in extra issues around how to get the customer back to the original agent once the sensitive information has been discussed. It also means you would potentially have 3+ contact IDs for the same transaction, with call recording spread across them.

How to get the currently running activity instance of a process definition in Camunda

I am new to Camunda.
I want to cancel the currently running activity instance and start a new activity instance in order to move the token state.
But I am having a hard time figuring out how to get the currently running activity instance ID via the Camunda Java API.
Any thoughts? Thank you all.
Actually the question is "How to get the running activity instances", and I already found the answer elsewhere.
Here is the answer.
Just use the Java API like below:
// Fetch the activity instance tree for the process instance.
ActivityInstance activityInstance = runtimeService.getActivityInstance(instance.getProcessInstanceId());
// Its children are the currently running activity instances.
ActivityInstance[] activityInstances = activityInstance.getChildActivityInstances();
The activityInstances array contains the running activity instances; you can use their IDs to cancel a running activity instance.
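To then cancel one of those instances and move the token, here is a minimal sketch using Camunda's process instance modification API ("UserTask_2" is a hypothetical BPMN element ID for the target activity):

// Fetch the activity instance tree for the running process instance.
ActivityInstance tree = runtimeService.getActivityInstance(processInstanceId);

// Cancel the first running child instance and move the token to another activity.
runtimeService.createProcessInstanceModification(processInstanceId)
    .cancelActivityInstance(tree.getChildActivityInstances()[0].getId())
    .startBeforeActivity("UserTask_2")
    .execute();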
Had the same trouble. The call below returns a list of active activity IDs (whatever they are: user task, service task, etc.). If you don't have parallel active tasks, the list will contain a single activity ID.
List<String> activeActivityIds = processEngine.getRuntimeService()
    .getActiveActivityIds(processInstance.getProcessInstanceId());

Routing an activity task to a specific worker in the SWF fleet

I have a fleet of multiple worker hosts polling for the following tasks of my SWF:
Activity 1: Perform some business logic to create a large file.
Activity 2: Wait for some time (a human approval, timer, etc.)
Activity 3: Transmit the file using some protocol (governed by input parameters of the SWF).
Activity 4: Clean up the locally generated file.
The file generated in Step-1 needs to be used again in Step-3, and then eventually discarded at the end of the workflow.
The system would work fine if there is only 1 host polling for all tasks. However, when I have multiple workers, I cannot seem to ensure that task-1 and task-3 would end up on the same host.
I would like to avoid doing the following:
Uploading the file to a central repository (say S3) on step-1 and download it in step-3; or
Having a single activity that performs both task-1 and task-3.
I have the following questions:
Is it possible to control that subsequent activities be run on the same host as opposed to going to any random host in my fleet?
What are specific guidelines/best practices on re-using resources generated in different activities in a workflow?
Is it possible to control that subsequent activities be run on the same host as opposed to going to any random host in my fleet?
Yes, absolutely. The basic idea is that SWF task lists (the queues used to deliver activity tasks) are dynamic, so each host can have its own task list, and the workflow can specify a specific task list name when scheduling an activity. See the fileprocessing sample, which executes the download activity on any host from the pool, then converts the file and uploads the result on the same host as the first activity.
What are specific guidelines/best practices on re-using resources generated in different activities in a workflow?
Caching the result in the worker process memory or on the local disk is considered the best practice. Sometimes using an external data store and fetching the data each time also makes sense.
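For illustration, a rough sketch of the routing idea with the plain Java SDK (the Flow Framework used by the fileprocessing sample hides these details; the activity type, activity ID, and variables below are hypothetical). The worker that ran activity 1 reports its host-specific task list and the file location in its result, and the decider then pins activity 3 to that task list:

import com.amazonaws.services.simpleworkflow.model.*;

// hostTaskList and fileLocation come from activity 1's result.
Decision scheduleOnSameHost = new Decision()
    .withDecisionType(DecisionType.ScheduleActivityTask)
    .withScheduleActivityTaskDecisionAttributes(
        new ScheduleActivityTaskDecisionAttributes()
            .withActivityType(new ActivityType().withName("TransmitFile").withVersion("1.0"))
            .withActivityId("transmit-1")
            .withTaskList(new TaskList().withName(hostTaskList))
            .withInput(fileLocation));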

Akka design: How to add/remove routee from cluster aware router dynamically

I have the following use case and I am not sure if the akka toolkit provide this out of the box:
I have a number of nodes (instances/machines) that can each run a finite number of long-running tasks in the background and cannot accept more work while at max capacity.
Each instance can only process 50 tasks.
All instances are behind a load balancer.
Each task can respond to messages from the client who initiated the task; since the client sends the messages via the load balancer, the instances need to route them to the correct instance that handles the task.
I initially tried cluster sharding, but there doesn't seem to be a way to cap the maximum number of shard regions/actors per node (= the number of tasks).
Then I tried a cluster-aware router, which acts as a guard for accepting or rejecting work. This seems to work reasonably well; one problem is that once a node reaches capacity, I need to remove it as a routee and add it back once it has capacity again.
Is there something out of the box that supports this use case or should I carry on with the routing option and if so how can I achieve this?
I'll update the description if you have further questions or something is unclear.
Your scenario sounds like a good fit for the work pulling pattern. The gist of this pattern is:
A master actor coordinates units of work among a number of worker actors.
Workers register themselves to the master, meaning that workers can be added or removed dynamically.
When the master receives work to be done, the master notifies the workers that work is available. Workers pull units of work when they're ready, do what needs to be done with their respective units of work, then ask the master for more work when they're finished.
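For concreteness, here is a minimal sketch of the master side using Akka's classic Java API; the nested message classes are hypothetical, not part of Akka itself:

import akka.actor.AbstractActor;
import akka.actor.ActorRef;
import akka.actor.Terminated;
import java.util.ArrayDeque;
import java.util.HashSet;
import java.util.Queue;
import java.util.Set;

// A master that coordinates work among dynamically registered workers.
public class Master extends AbstractActor {
    public static final class RegisterWorker {}  // hypothetical protocol messages
    public static final class WorkAvailable {}
    public static final class GiveMeWork {}

    private final Set<ActorRef> workers = new HashSet<>();
    private final Queue<Object> pendingWork = new ArrayDeque<>();

    @Override
    public Receive createReceive() {
        return receiveBuilder()
            .match(RegisterWorker.class, msg -> {
                workers.add(getSender());         // workers join dynamically...
                getContext().watch(getSender());  // ...and are removed when they die
            })
            .match(Terminated.class, t -> workers.remove(t.getActor()))
            .match(GiveMeWork.class, msg -> {
                Object work = pendingWork.poll(); // a ready worker pulls a unit of work
                if (work != null) getSender().tell(work, getSelf());
            })
            .matchAny(work -> {                   // everything else is incoming work
                pendingWork.add(work);
                workers.forEach(w -> w.tell(new WorkAvailable(), getSelf()));
            })
            .build();
    }
}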
To learn more about this pattern, read the following (the first two links are listed in the Akka documentation):
The original post (by Derek Wyatt): http://letitcrash.com/post/29044669086/balancing-workload-across-nodes-with-akka-2
A follow-on post (by Michael Pollmeier): http://www.michaelpollmeier.com/akka-work-pulling-pattern
An application of the pattern in a clustered environment with a cluster-aware router (by Ryan Tanner): https://www.conspire.com/blog/2013/10/akka-at-conspire-part-5-the-importance-of/

Can Akka Cluster Client Send Messages to Cluster Nodes Not in Initial Contacts?

Using Akka 2.3.14, I'm trying to create an Akka cluster of various services. Until now, I have had all my "services" in one artifact that was clustered across multiple nodes, but now I am trying to break this artifact into multiple services that all exist on the same cluster.
So in breaking this up, we've designed it so that any node on the cluster will first try to connect to the seed nodes. If there are no seed nodes, it will check whether it is a candidate to run as a seed node (i.e., whether it's on a host that a seed node can be on), in which case it will grab an open seed-node port and become a seed node. So in this sense, any service in the cluster can become a seed node.
At least, that was the idea. Our API into this system, which runs as a separate service, implements a ClusterClient. The initialContacts are set to be the same as the seed nodes. The problem is that the only receptionist actors I can send a message to through the ClusterClient are the actors on the seed nodes.
Here is an example if it helps. Let's say I have a String Service and a Double Service, and the receptionist for each service is a StringActor and a DoubleActor respectively. Now let's say I have a Client Service which sends StringMessages and DoubleMessages to the StringActor and DoubleActor.
So for simplicity, let's say I have two nodes, server1 and server2. Then:
seed-nodes = ["akka.tcp://system#server1:2773", "akka.tcp://system#server2:2773"]
My ClusterClient would be initialized like so:
system.actorOf(
  ClusterClient.props(
    Set(
      system.actorSelection("akka.tcp://system#server1:2773/user/receptionist"),
      system.actorSelection("akka.tcp://system#server2:2773/user/receptionist")
    )
  ),
  "clusterClient"
)
Here are the scenarios that are happening for me:
If the StringServices start up on both servers first, then DoubleMessages from the Client Service just disappear into the ether.
If the DoubleServices start up on both servers first, then StringMessages from the Client Service just disappear into the ether.
If the StringService starts up first on serverX and the DoubleService starts up first on serverY, then all StringMessages will be sent to serverX and all DoubleMessages will be sent to serverY, which is not as bad as the above case, but it means it's not really scaling.
This isn't what I expected; it's possible it's just a defect in my code, so I would like to know whether this is expected behavior or not. And if not, is there another Akka concept that could help me with this?
Arguably, I could just make one service type my entry point, like a RoutingService that could accept StringMessages or DoubleMessages and then send each to the correct service. But if the Client Service can only send messages to the RoutingService instances listed in the initial contacts, then I can't dynamically scale the RoutingService: no matter how many nodes I add, the Client Service only ever talks to the initial contacts.
I'm also thinking about subscribing to ClusterEvents in my Client Service and seeing if I can add and remove initial contacts from my cluster client as nodes are started up in the cluster, but I'm not sure if this is possible, and it feels like there should be a better solution.
This is what I found out upon more troubleshooting, in case it helps anyone else:
The ClusterClient will attempt to connect to the initial contacts in order, and then only sends its messages across that connection. If you are deploying different services on each node, you will have problems, as the messages sent from the ClusterClient will only be sent to the node that it made its connection to. In this way, you can think of the ClusterClient as a legitimate client: it will connect to a URL that you give it, and then continue to communicate with the server through that URL.
Reading the Distributed Workers example, I realized that my Frontend, or in this case my routing service, should actually be part of the cluster rather than acting as a client. For this I used the DistributedPubSub approach instead.
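For anyone following the same path, here is a minimal sketch of that approach using Akka's classic Java API (note that in Akka 2.3 the extension lived in akka.contrib.pattern; later versions use akka.cluster.pubsub as shown here). The actor paths and the StringMessage type are hypothetical:

import akka.actor.AbstractActor;
import akka.actor.ActorRef;
import akka.cluster.pubsub.DistributedPubSub;
import akka.cluster.pubsub.DistributedPubSubMediator;

// A routing service that is itself a cluster member. It registers with the
// local mediator so any node can reach it, then forwards messages to one
// instance of the right backend service somewhere in the cluster.
public class RoutingService extends AbstractActor {
    public static final class StringMessage {}  // hypothetical message type

    private final ActorRef mediator =
            DistributedPubSub.get(getContext().getSystem()).mediator();

    @Override
    public void preStart() {
        // Register this actor under its own path, e.g. /user/routingService.
        mediator.tell(new DistributedPubSubMediator.Put(getSelf()), getSelf());
    }

    @Override
    public Receive createReceive() {
        return receiveBuilder()
                .match(StringMessage.class, msg ->
                        // Deliver to a single stringService registered on any node.
                        mediator.tell(new DistributedPubSubMediator.Send(
                                "/user/stringService", msg, false), getSender()))
                .build();
    }
}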