How to get RingCentral Call Queue Members Availability? - ringcentral

How can I get the Call Queue availability for queue members per queue which is shown in the Online Account Portal? I specifically want the Available, Busy and Unavailable statuses shown in the UI under "Members Availability" per queue.
I've found a few Call Queue APIs that can list queues and queue members, but they do not provide member availability the way the UI does.
Call Queue APIs:
Get Call Queues API
Get Call Queue Members API
The image below is from the article on Call Queue - User Availability and Call Handling

The above is on the right track. Once the list of queue members is available, you can query each user for their queue availability.
Note: a user's queue availability, as presented below, is the same for all queues they belong to, so to present availability per queue, this information needs to be combined with their queue membership list. This can be retrieved from either the queue or the user perspective:
Call Queue Members
User's Queue List
To manage individual queue availability, add or remove the user from the queues of interest, which can be done using the Edit Call Queue Members API.
To retrieve a user's availability, query the Get User Status API. An example is provided below.
Get User Status API:
https://developer.ringcentral.com/api-reference#Presence-getPresenceStatus
An example request and response looks like the following:
Request:
GET /restapi/v1.0/account/{accountId}/extension/{extensionId}/presence
Response:
HTTP 200 OK
{
"uri": "https://platform.ringcentral.com/restapi/v1.0/account/403228676008/extension/403228676008/presence",
"extension": {
"uri": "https://platform.ringcentral.com/restapi/v1.0/account/403228676008/extension/403228676008",
"id": 403228676008,
"extensionNumber": "101"
},
"presenceStatus": "Available",
"telephonyStatus": "NoCall",
"userStatus": "Available",
"dndStatus": "TakeAllCalls",
"allowSeeMyPresence": true,
"ringOnMonitoredCall": false,
"pickUpCallsOnHold": true
}
Use the following to get the user's queue availability:
1) User Queue Setting
The user's Do Not Disturb dndStatus property indicates whether the user is accepting calls, including call queue calls. The user can set their dndStatus to one of the following four values, where "Department" is another name for Call Queue:
DoNotAcceptAnyCalls
DoNotAcceptDepartmentCalls
TakeAllCalls
TakeDepartmentCallsOnly
This can roughly be mapped to:
Unavailable for Queue Calls: DoNotAcceptAnyCalls or DoNotAcceptDepartmentCalls
Available for Queue Calls: TakeAllCalls or TakeDepartmentCallsOnly
2) User Overall Availability
The next step is to check the presenceStatus property which is an enumerated string with the following values: Offline, Busy, Available. Offline maps to Unavailable in the UI. This is an overall availability for both personal calls and queue calls.
3) Queue Member Availability
To create the queue member availability, combine the two properties above like the following pseudocode.
I added an extra "Available" condition below which is strictly not needed, but useful for explanation:
member_availability =
user.dndStatus == "DoNotAcceptAnyCalls" ? "Unavailable" :
user.dndStatus == "DoNotAcceptDepartmentCalls" ? "Unavailable" :
user.presenceStatus == "Offline" ? "Unavailable" :
user.presenceStatus == "Busy" ? "Busy" :
user.presenceStatus == "Available" ? "Available" : "Available"
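This mapping can be sketched as a small function (Python here for illustration; the field names follow the presence response shown above):

```python
def member_availability(dnd_status: str, presence_status: str) -> str:
    """Map a user's dndStatus and presenceStatus to the queue-member
    availability label shown in the Online Account Portal."""
    # Either DND value that rejects queue calls makes the member Unavailable
    if dnd_status in ("DoNotAcceptAnyCalls", "DoNotAcceptDepartmentCalls"):
        return "Unavailable"
    # Otherwise fall back to overall presence
    if presence_status == "Offline":
        return "Unavailable"
    if presence_status == "Busy":
        return "Busy"
    return "Available"
```

Apply this per user, then group the results by queue using the membership lists described below.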
This gives the user's availability for all queues they are on, so it needs to be mapped to either a queue's member list or the user's list of queues:
List Call Queue Members API
User Info API's departments property
Example Code
Here's some Ruby wrapper code I wrote to make it easier to update the user's queue status:
RingCentral Ruby SDK extension_presence.rb

Related

Handling concurrent requests

I am building a recommendation service that recommends items based on a use case. For this, clients need to call our API.
Functionality of API:
Clients call with the list of items required and the use case.
Based on that we will return the exact items.
Stack:
AWS Lambda
Amazon DynamoDB
Problem: How do we handle concurrent fetch requests for the same use case?
Solutions:
Flow using pessimistic locking:
Acquire a DB lock on the list of available items for the use case.
Remove the items from the original DB.
Release the lock.
Downside: this will increase the latency of the API.
Flow using Optimistic locking:
Fetch the available items.
Remove from the list and return.
If another thread has already deleted those items from the available list, return an error telling the client to retry the API call.
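As a sketch of the optimistic flow, here is an in-memory stand-in for the items table using a version check; with DynamoDB the equivalent would be a conditional write (for example, a ConditionExpression on a version attribute). The ItemStore class and its method names are illustrative, not part of any AWS SDK:

```python
class ConflictError(Exception):
    """Another request modified the items first; the client should retry."""

class ItemStore:
    """In-memory stand-in for the available-items table, keyed by use case."""

    def __init__(self, items_by_use_case):
        # Each use case maps to (version, list of available items)
        self._data = {k: (0, list(v)) for k, v in items_by_use_case.items()}

    def fetch(self, use_case):
        """Return the current version and a copy of the available items."""
        version, items = self._data[use_case]
        return version, list(items)

    def claim(self, use_case, wanted, expected_version):
        """Compare-and-set: remove `wanted` only if nothing changed since fetch."""
        version, items = self._data[use_case]
        if version != expected_version:
            raise ConflictError("items changed, retry")
        remaining = [i for i in items if i not in wanted]
        self._data[use_case] = (version + 1, remaining)
        return wanted
```

When two requests fetch the same version, the first claim succeeds and bumps the version, so the second claim raises ConflictError and the client retries, which matches the optimistic flow above without holding any lock.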
Is there any other more efficient way of handling the concurrent requests?

How to self throttle large volume of calls hitting a queue using contact flow for AWS connect

If you are using Amazon Connect as a contact center solution, a problem that happens often is that a toll-free number receives a higher call volume than your currently staffed agents are able to handle.
There are ways to handle this by setting queue limits, which can overflow to other queues or branch the call flow in other directions.
The solution I propose is to slow the calls into the queue using the tools available in Connect, without depending on external services or Lambda.
The aim of this solution is to control the call flow into the agent queue based on two metrics: Agents staffed and Agents available.
Below is the contact flow structure, and I am going to explain how each block works.
The first block Set working queue sets the Queue where the calls should be going to.
The second block Get queue metrics retrieves the real-time queue metrics
Retrieve metrics from a queue so you can make routing decisions. You can route contacts based on queue status, such as number of contacts in queue or agents available. Queue metrics are aggregated across all channels by default and are returned as attributes. The current queue is used by default.
The third block Check contact attributes is where I am checking the number of Agents Staffed
This is how the block looks on the inside
I am retrieving the Queue metrics to get the number of Agents Staffed
Now let's assume there are more than 100 agents staffed, which means more than 100 agents are logged in, in any status on their client.
The Check contact attributes block would follow the first branch and go to the second Check contact attributes block.
This is how the second Check contact attributes looks on the inside.
Here also I am retrieving the Queue metrics to get the number of Agents available.
In this section I am checking how many agents are in Available status to answer the call.
If the number of Available agents is less than or equal to 20, the call flows back to the Get queue metrics block to follow the logic again.
Here it is important to use a Play prompt block to play audio for 5 seconds before looping back. Without this delay/prompt, the calls move too fast for the contact flow to handle and it stops working for any call hitting the flow.
Now if the number of Available agents is greater than 20, the call flows into the Transfer to queue block after the Check staffing block.
The same logic applies to the other branches, with different thresholds based on how many agents are staffed, as seen in the picture above.
With this solution I do not have to worry about how many agents are actually working at any time of day. The contact flow branches the calls based on Agents staffed and only sends calls in when enough agents are Available to answer them.
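The branching above can be sketched as a decision function. Only the 100-staffed/20-available pair comes from the example; the other brackets and thresholds are illustrative assumptions to show the shape of the logic:

```python
def should_transfer(agents_staffed: int, agents_available: int) -> bool:
    """Decide whether to send the call to the queue, mirroring the
    Check contact attributes branches. Returning False means the flow
    loops back through the delay prompt and re-checks the metrics."""
    if agents_staffed > 100:
        # From the example: with 100+ agents staffed, require > 20 available
        return agents_available > 20
    if agents_staffed > 50:
        # Illustrative bracket: tune per queue
        return agents_available > 10
    # Illustrative floor for small teams
    return agents_available > 5
```

In the contact flow itself, each branch of the first Check contact attributes block plays the role of one `if` arm, and the second block implements the available-agents comparison.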
Hope this solution helps anyone looking for a simple approach.

I want to know when a batch of messages has completed in a AWS SQS Queue

I think this is more of a 'architecture design' question.
I have a lambda producer that will put ~600 messages on an SQS queue as a batch (there are multiple producers), so not 1 message with a body of ~600 messages. A consumer lambda will take individual messages and deal with them (at scale). What I want to do is run another lambda when each batch is complete.
My initial idea was to create a 'unique batch number', a 'total batch count' and a 'batch position number', add them to the message attributes of every message, and then check these in the consumer lambda to decide whether the batch is complete.
But does that mean I would need to use a FIFO queue, partition on the batch number, and have only one lambda consumer per batch? Or do I run some sort of state management in DynamoDB (is there a pattern out there for this? Please guide me).
Regards, J
It seems like the goal is to achieve Fork-Join capabilities in a distributed system. One way to handle this in AWS is using Step Functions. Assuming a queue service needs to be used, state of the overall operation will need to be tracked. Some ways to do this are:
Store state of the overall operation in a DB.
Put a 'termination' message in the queue after all others and process FIFO.
Create a metadata service which receives 'start' and 'stop' messages for each service and handles them accordingly.
Reference: Fork and Join with Amazon Lambda
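The first option (store state in a DB) can be sketched with an in-memory tracker; with DynamoDB, mark_done would be an atomic UpdateItem decrementing a 'remaining' counter. The class and method names here are illustrative:

```python
class BatchTracker:
    """In-memory sketch of DB-backed batch state for the fork-join pattern.
    With DynamoDB, mark_done would be an atomic counter decrement and the
    consumer would check whether it reached zero."""

    def __init__(self):
        self._remaining = {}

    def start_batch(self, batch_id: str, total: int) -> None:
        """Called by the producer before sending the batch."""
        self._remaining[batch_id] = total

    def mark_done(self, batch_id: str) -> bool:
        """Called by the consumer per message; True when the batch completes."""
        self._remaining[batch_id] -= 1
        return self._remaining[batch_id] == 0
```

The consumer invokes the follow-up lambda when mark_done returns True. Note this only works if each message is counted exactly once, so redeliveries must be handled idempotently (for example, by recording processed message IDs).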

How to add and remove a user from multiple Call Queues at a time using RingCentral?

I have multiple RingCentral Call Queues and I want to build an app that allows users to add and remove themselves from a set of pre-configured queues. This is a mobile app that users will use to set their queue availability based on their physical location in a store; each queue corresponds to a department, so users can change queues themselves as they move between departments.
Given a list of Call Queues, I can update each queue at a time using the following API:
Assign Multiple Call Queue Members API
POST /restapi/v1.0/account/{accountId}/call-queues/{groupId}/bulk-assign
However, this can be a bit inefficient, as updating each user may require one API call per queue.
Is there a way to add/remove a user from multiple queues with one API call?
The following API can be used. This adds and removes the user as a queue member.
Join/Leave Call Queue API
The following request sets the user's full queue membership across all queues: the user will be a member of every queue listed and removed from any queue not listed.
PUT /restapi/v1.0/account/{accountId}/extension/{extensionId}/call-queues
{
"records": [
{"id":"11111111"},
{"id":"22222222"}
]
}
The response will look something like the following:
{
"records": [ {
"id": "12345678",
"name": "Bakery"
}, {
"id": "87654321",
"name": "Cafe"
} ]
}
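A small helper to build the PUT body from a list of queue IDs might look like this (the body shape follows the example above; authentication and sending the request are left out):

```python
import json

def build_queue_membership_body(queue_ids):
    """Build the JSON body for the bulk queue-membership PUT shown above.
    The user becomes a member of exactly these queues and is removed
    from any queue not listed."""
    return json.dumps({"records": [{"id": str(q)} for q in queue_ids]})
```

The app would send this body to PUT /restapi/v1.0/account/{accountId}/extension/{extensionId}/call-queues with the user's OAuth token, replacing the whole membership in one call.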

Using Amazon SQS with multiple consumers

I have a service-based application that uses Amazon SQS with multiple queues and multiple consumers. I am doing this so that I can implement an event-based architecture and decouple all the services, where the different services react to changes in state of other systems. For example:
Registration Service:
Emits event 'registration-new' when a new user registers.
User Service:
Emits event 'user-updated' when user is updated.
Search Service:
Reads from queue 'registration-new' and indexes user in search.
Reads from queue 'user-updated' and updates user in search.
Metrics Service:
Reads from 'registration-new' queue and sends to Mixpanel.
Reads from queue 'user-updated' and sends to Mixpanel.
I'm having a number of issues:
A message can be received multiple times when doing polling. I can design a lot of the systems to be idempotent, but for some services (such as the metrics service) that would be much more difficult.
A message needs to be manually deleted from the queue in SQS. I have thought of implementing a "message-handling-service" that handles the deletion of messages when all the services have received them (each service would emit a 'message-acknowledged' event after handling a message).
I guess my question is this: what patterns should I use to ensure that I can have multiple consumers for a single queue in SQS, while ensuring that the messages also get delivered and deleted reliably. Thank you for your help.
I think you are doing it wrong.
It looks to me like you are using the same queue for multiple different purposes. You are better off using a single queue for a single purpose.
Instead of putting an event into the 'registration-new' queue and having two different services poll that queue, both needing to read that message and each doing something different with it (and then needing a third process to delete the message after the other two have processed it), use one queue for one purpose:
Create an 'index-user-search' queue and a 'send-to-mixpanel' queue. The search service reads from the search queue, indexes the user, and immediately deletes the message. The Mixpanel service reads from the Mixpanel queue, processes the message, and deletes it.
The registration service, instead of emitting a 'registration-new' event to a single queue, now emits it to both queues.
To take it one step further, add SNS into the mix: have the registration service publish an SNS message to a 'registration-new' topic (not a queue), and then subscribe both of the queues mentioned above to that topic in a 'fan-out' pattern.
https://aws.amazon.com/blogs/aws/queues-and-notifications-now-best-friends/
Both queues will receive the message, but you only publish it to SNS once. If down the road a third, unrelated service also needs to process 'registration-new' events, you create another queue and subscribe it to the topic as well; it can run with no dependencies on, or knowledge of, what the other services are doing. That is the goal.
The primary use-case for multiple consumers of a queue is scaling-out.
The mechanism that allows for multiple consumers is the Visibility Timeout, which gives a consumer time to process and delete a message without it being consumed concurrently by another consumer.
To address the "At-Least-Once Delivery" property of Standard Queues, the consuming service should be idempotent.
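An idempotent consumer can be sketched by deduplicating on the SQS message ID. The wrapper below keeps the seen-IDs set in memory for illustration; a production version would persist it (for example in DynamoDB with a TTL):

```python
def make_idempotent(handler):
    """Wrap a message handler so redelivered messages (same message ID)
    are processed at most once."""
    seen = set()

    def wrapped(message_id, body):
        if message_id in seen:
            return False  # duplicate delivery, skipped
        seen.add(message_id)
        handler(body)
        return True

    return wrapped
```

With this in place, an at-least-once redelivery of the same message becomes a harmless no-op instead of, say, a double-counted metric in Mixpanel.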
If that isn't possible, one possible solution is to use FIFO queues, but this mode has a limited message delivery rate and is not compatible with SNS subscription.
They even have a tutorial on how to create a fanout scenario using the combo SNS+SQS.
https://aws.amazon.com/getting-started/tutorials/send-fanout-event-notifications/
Too bad it does not support FIFO queues, so you have to be careful to handle out-of-order messages.
It would be nice if they had a consistent hashing solution to have multiple competing consumers while respecting the message order.