I am having a real headache figuring out how to handle this one. The application has members that are projected to reach one million by the end of the year. It relies heavily on USSD but also has email. For now, though, I would prefer to send the SMS first.
The issue is this: members are placed in groups based on their activities, and a single member can belong to multiple groups. Currently, the largest group has 17,000 members, and the group can broadcast an SMS to all of them. The group leaders specify parameters ("All Members", "Females", "Age 24-28", etc.) and send the SMS, and a copy must be saved in the database. There are currently 5 active groups, but they will certainly increase in the future, and they can all request to broadcast SMS to members at once.
The phone numbers of members are kept in:
from django.contrib.auth.models import User
from django.db import models

class Profile(models.Model):
    user = models.ForeignKey(User, on_delete=models.CASCADE)
    phone = models.CharField(max_length=13)
Similarly, the app needs to scan the member profiles to send them periodic notifications. For now, I follow these steps:
1. Select the phone numbers of all members that satisfy the criteria.
2. Create an ID for the broadcast and wait for previously queued SMS requests to finish, then add the selected phone numbers to a secondary table referencing the broadcast.
3. Loop through each phone number and send the SMS one by one. Once finished, mark the broadcast as finished.
class BroadCast(models.Model):
    code = models.CharField(max_length=50)  # randomly generated
    group = models.ForeignKey(Corporate, on_delete=models.CASCADE)
    finished = models.IntegerField(default=0)
    message = models.CharField(max_length=200)

class Phone(models.Model):
    broadcast = models.ForeignKey(BroadCast, on_delete=models.CASCADE)
    tele = models.CharField(max_length=13)

But I am disappointed by its performance, especially when several broadcasts are requested at once. What can I do to improve it?
I am using Twilio's paid SMS.
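To make that concrete, the sending part currently looks roughly like this (a simplified sketch; make_code() is a placeholder for the random-code generator, criteria stands in for the group leader's filters, and the Twilio setting names are illustrative):

from django.conf import settings
from twilio.rest import Client

def run_broadcast(group, message, criteria):
    # criteria stands in for whatever filters the group leader picked
    profiles = Profile.objects.filter(**criteria).exclude(phone="")
    broadcast = BroadCast.objects.create(code=make_code(), group=group, message=message)
    Phone.objects.bulk_create(
        [Phone(broadcast=broadcast, tele=p.phone) for p in profiles]
    )
    client = Client(settings.TWILIO_ACCOUNT_SID, settings.TWILIO_AUTH_TOKEN)  # assumed setting names
    # Send one by one, blocking until the whole broadcast is done
    for phone in broadcast.phone_set.all():
        client.messages.create(to=phone.tele, from_=settings.TWILIO_NUMBER, body=message)
    broadcast.finished = 1
    broadcast.save()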
I had an SMS application before that scaled up and slowed down when the user base got large. To fix that, we used a queue for the SMS-sending task. It worked wonders and should also work well in your case.
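For example, with Celery each SMS becomes a small queued task, so a 17,000-recipient broadcast no longer blocks the web request (a minimal sketch built on the models above; the rate limit and setting names are placeholders to adjust):

# tasks.py
from celery import shared_task
from django.conf import settings
from twilio.rest import Client

@shared_task(rate_limit="30/s")  # throttle so workers stay within Twilio/carrier limits
def send_broadcast_sms(phone_id):
    phone = Phone.objects.select_related("broadcast").get(pk=phone_id)
    client = Client(settings.TWILIO_ACCOUNT_SID, settings.TWILIO_AUTH_TOKEN)
    client.messages.create(
        to=phone.tele,
        from_=settings.TWILIO_NUMBER,
        body=phone.broadcast.message,
    )

# In the view, enqueue instead of sending inline:
# for phone in broadcast.phone_set.all():
#     send_broadcast_sms.delay(phone.pk)

A separate worker process drains the queue, so several groups can kick off broadcasts at the same time without tying up the application.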
If you are using Amazon Connect as a contact center solution, one problem that happens often is that a toll-free number receives a call volume higher than your currently staffed agents can handle.
There are ways to handle this by setting queue limits, which can overflow calls to other queues or branch the call flow in other directions.
The solution I propose is to slow the flow of calls into the queue using the tools available in Connect, without depending on external services or Lambda.
The aim of this solution is to control the call flow into the agent queue based on two metrics: Agents staffed and Agents available.
Below is the contact flow structure, and I am going to explain how each block works.
The first block, Set working queue, sets the queue the calls should go to.
The second block, Get queue metrics, retrieves the real-time queue metrics:
Retrieve metrics from a queue so you can make routing decisions. You can route contacts based on queue status, such as number of contacts in queue or agents available. Queue metrics are aggregated across all channels by default and are returned as attributes. The current queue is used by default.
The third block, Check contact attributes, is where I check the number of Agents staffed.
This is how the block looks on the inside.
I am retrieving the queue metrics to get the number of Agents staffed.
Now let's assume there are more than 100 agents staffed, which means more than 100 agents are logged in, in whatever status on their client.
The Check contact attributes block would follow the first branch and go to the second Check contact attributes block.
This is how the second Check contact attributes block looks on the inside.
Here, too, I am retrieving the queue metrics, this time to get the number of Agents available.
In this section I am checking how many agents are in the Available status to answer the call.
If the number of available agents is less than or equal to 20, the call flows back to the Get queue metrics block and follows the logic again.
Here it is important to use a Play prompt block to play audio for 5 seconds before looping back. If I do not insert this delay/prompt, the calls move too fast for the contact flow to handle and it stops working for any call hitting the flow.
If the number of available agents is greater than 20, the call flows into the Transfer to queue block after the Check staffing block.
The same logic applies to the other branches for different numbers of agents staffed, as seen in the picture above.
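For anyone who wants to replicate or monitor this threshold logic outside the flow, the same two metrics the Get queue metrics block reads are exposed by the GetCurrentMetricData API. A rough boto3 sketch follows (the instance ID, queue ID and threshold are placeholders, and the actual solution stays entirely inside Connect):

import time
import boto3

connect = boto3.client("connect")

def wait_for_capacity(instance_id, queue_id, min_available=20):
    # Mirrors the contact flow loop: keep checking until enough agents are Available.
    while True:
        resp = connect.get_current_metric_data(
            InstanceId=instance_id,
            Filters={"Queues": [queue_id], "Channels": ["VOICE"]},
            CurrentMetrics=[
                {"Name": "AGENTS_STAFFED", "Unit": "COUNT"},
                {"Name": "AGENTS_AVAILABLE", "Unit": "COUNT"},
            ],
        )
        metrics = {
            c["Metric"]["Name"]: c["Value"]
            for result in resp["MetricResults"]
            for c in result["Collections"]
        }
        if metrics.get("AGENTS_AVAILABLE", 0) > min_available:
            return metrics
        time.sleep(5)  # same idea as the 5-second Play prompt before looping back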
With this solution I do not have to worry about how many agents are actually working at any time of the day. The contact flow branches the calls based on agents staffed and only sends calls in when there are at least X agents available to answer.
Hope this helps anyone looking for a simple solution.
We're doing everything we can think of to limit the number of complaints we receive, and we immediately remove anyone who marks us as junk and does not need to receive our emails. However, the last handful of complaints we've received have come from transactional emails sent to people who are receiving our company's services and NEED to receive everything we send transactionally as a critical part of our service. (For example, we are booking their travel on their behalf and need to send them verification emails to confirm their booking details.)
We're assuming that most of these complaints are either false positives or are happening by accident. One customer confirmed that they did not click the junk mail button, but the message ended up in their junk folder and they moved it to their inbox. Some questions:
Can a TiS complaint be triggered by any means other than the user manually marking an email as junk in their email client? (Can automatic spam filters trigger this complaint? AWS documentation specifies only clicking the junk button.)
Besides contacting each individual personally, what would you suggest we do? Our complaint rate is continuing to rise even though we are taking action on every one.
I am noticing that AWS SES stats are not updated in real time. After sending an email, it takes time for the sent count to increase on the SES dashboard. Sometimes it takes a few minutes and sometimes it takes longer.
Has anyone else experienced this? Any thoughts?
On the assumption that the console is simply making a call to a standard API action (rather than using some kind of console-only backend service that is not documented or user-accessible -- such things are not unheard of, but they are rare enough in AWS that this is a reasonably safe assumption), it looks like this is not really designed to be real-time. The stats are reported in 15-minute windows.
From the SES API reference:
GetSendStatistics
Returns the user's sending statistics. The result is a list of data points, representing the last two weeks of sending activity.
Each data point in the list contains statistics for a 15-minute interval.
— http://docs.aws.amazon.com/ses/latest/APIReference/API_GetSendStatistics.html
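You can see the same 15-minute granularity by calling the API directly, for example with boto3:

import boto3

ses = boto3.client("ses")
stats = ses.get_send_statistics()

# Each data point covers a 15-minute interval, which is why the dashboard lags.
for point in sorted(stats["SendDataPoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], point["DeliveryAttempts"], point["Bounces"], point["Complaints"])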
The AWS SES dashboard stats are a rough performance hint only, not something to rely on. If you want real-time notifications of sent emails, you will need to set up SNS notifications. Keep in mind that spam-complaint notifications can take up to a couple of days, as they are based on information provided by the ISPs to Amazon. And complaints from within the Gmail evil-system will NEVER get to you.
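Setting that up is a one-off call per verified identity (it can also be done in the console); roughly, with boto3 (the identity and topic ARN below are placeholders):

import boto3

ses = boto3.client("ses")

# Route delivery, bounce and complaint notifications for an identity to an SNS topic.
for notification_type in ("Delivery", "Bounce", "Complaint"):
    ses.set_identity_notification_topic(
        Identity="mail.example.com",
        NotificationType=notification_type,
        SnsTopic="arn:aws:sns:us-east-1:123456789012:ses-events",
    )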
Please help me select an MQ app/system/approach for the following use case:
Check for incoming messages for a specific user -> read the message if available -> delete it from the queue, ideally staying within AWS.
Context:
It's a social networking app where users receive messages, i.e.:
I need to identify incoming messages by recipient ID.
The app is doing long-polls for new messages every 30 seconds.
Message size is <1Kb.
As per current estimates, I'll need 100M+ message checks per month in total (however, far fewer actual messages; these are just checks).
Users acknowledge messages by choosing OK or Ignore; however, I'm not sure whether ACK support from the MQ system is required for that.
I'm in AWS. Initially I thought of SQS, but the more I read the less it looks like a good match - I cannot set a message recipient ID in a way that lets me filter by recipient, etc. - though maybe I'm wrong.
One of the options I also thought about is to just use a DynamoDB "messages" table, with the partition key being userId and the sort key being a messageId, so I'd be able to easily query by user; however, I'm concerned about costs.
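For concreteness, that DynamoDB option would look roughly like this with boto3 (the table name is a placeholder matching the keys described above):

import boto3
from boto3.dynamodb.conditions import Key

table = boto3.resource("dynamodb").Table("messages")

def check_messages(user_id):
    # Partition key userId, sort key messageId, as described above.
    return table.query(KeyConditionExpression=Key("userId").eq(user_id))["Items"]

def delete_message(user_id, message_id):
    # Remove the message once the user has read/acknowledged it.
    table.delete_item(Key={"userId": user_id, "messageId": message_id})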
If possible, I would much prefer to stay within AWS, or at least use SaaS like SQS, as being a 1-person startup I really want to avoid the headaches of supporting a self-hosted system.
Thank you!
D
You are right on both these counts:
SQS won't work, because of the limitation you pointed out.
DynamoDB would work, but would cost a lot.
I can suggest the following:
Create a Redis cluster, possibly on Amazon ElastiCache.
In it, make one List per user.
Whenever a new message comes in, append it to the concerned user's list.
To deliver the message, just read from the User's list. Also, flush the queue if needed.
What I am suggesting is very similar to how Twitter manages each User's news-feed and home-feed.
It should also be cheap.
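A minimal sketch of that with redis-py (the endpoint and key naming are placeholders):

import json
import redis

r = redis.Redis(host="localhost", port=6379)  # point this at your ElastiCache endpoint

def push_message(user_id, message):
    # Append the new message to the recipient's list.
    r.rpush(f"inbox:{user_id}", json.dumps(message))

def poll_messages(user_id, flush=True):
    key = f"inbox:{user_id}"
    messages = [json.loads(m) for m in r.lrange(key, 0, -1)]
    if flush and messages:
        r.delete(key)  # flush the list once the messages are delivered
    return messages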
I'm creating a web app for handling various surveys. An admin can create his own survey and ask users to fill it in. Users are defined by target groups assigned to the survey (so only users in the survey's target groups can fill it in).
One of methods to define a target group is a "Token target group". An admin can decide to generate e.g. 25 tokens. After that, the survey can be accessed by anyone who uses a special link (containing the token of course).
So now to the main question:
Every token might have an e-mail address associated with it. How can I safely send e-mails containing the access link for the survey? I might need to send a few thousand e-mails (max. 10,000, I believe). This is an extreme example, and such huge mailings would be needed only occasionally.
I would also like to be able to keep track of each e-mail message's status (was it sent, or was there an error?). I would also like to make sure that the SMTP server doesn't block this mailing. It would also be nice if the application remained responsive :) (the task should run in the background).
What is the best way to handle that problem?
As far as I can tell, the standard Django mailing feature won't be much help here. People report that setting up a connection and looping through messages calling send() on each takes forever. It wouldn't run in the background, so I believe this could have a negative impact on the application's responsiveness, right?
I read about django-mailer, but as far as I understood the docs, it doesn't let you keep track of the message status. Or does it?
What are my other options?
Not sure about the rest, but regardless, for backgrounding the task (no matter how you eventually do it) you'll want to look at Celery.
The key here is to reuse the connection rather than opening a new one for each email. Here is the documentation on the subject.
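A minimal sketch of that, assuming you already have the (token, e-mail) pairs and a URL scheme for the survey links (both are placeholders here):

from django.core.mail import EmailMessage, get_connection

def send_survey_invites(token_email_pairs):
    # One SMTP connection is opened for the whole batch instead of one per message.
    connection = get_connection()  # uses the EMAIL_BACKEND configured in settings
    messages = [
        EmailMessage(
            subject="Survey invitation",
            body=f"Fill in the survey here: https://example.com/survey/?token={token}",  # placeholder URL
            to=[email],
            connection=connection,
        )
        for token, email in token_email_pairs
    ]
    return connection.send_messages(messages)  # returns the number of messages sent

Run this inside a background task (e.g. the Celery task mentioned above) and record the return value, or any exception, per batch to get a basic send status.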