Is it possible to have multiple sessions running in parallel against a single AWS Lex bot? I have a chatbot application with two intents, Order Pizza and Book Ticket, in a single bot. One user submits a query to book a ticket while, at the same time, a different user on a different machine asks about ordering pizza. How can I track both requests as separate sessions in Lex?
Thanks in advance.
In the AWS docs you can see the "userId" attribute in the Input Event Format.
The userId is allocated per session, which is what you are asking about.
I heard it can be a Slack ID or some other property of the user's account, depending on the channel.
So you can track each request by its userId.
Parallel processing is supported by AWS out of the box; you just need to key your Lambda (or other fulfillment code) on the userId, as in the sketch below.
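For illustration, here is a minimal sketch of a fulfillment Lambda keyed on userId, assuming the Lex V1 input event format (field names per the Input Event Format documentation; the reply text is just a placeholder):

```python
# Minimal sketch of a Lex (V1) fulfillment Lambda that keys on userId.
# Field names follow the "Input Event Format" in the Lex docs.
def lambda_handler(event, context):
    user_id = event["userId"]                    # unique per user/session
    intent = event["currentIntent"]["name"]      # e.g. BookTicket, OrderPizza
    session_attrs = event.get("sessionAttributes") or {}

    # Each conversation is isolated by userId, so two users can hit
    # BookTicket and OrderPizza at the same time without clashing.
    session_attrs["lastIntent"] = intent

    return {
        "sessionAttributes": session_attrs,
        "dialogAction": {
            "type": "Close",
            "fulfillmentState": "Fulfilled",
            "message": {
                "contentType": "PlainText",
                "content": f"Handled {intent} for user {user_id}",
            },
        },
    }
```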
Consider a sample chat application where users purchase monthly/annual subscriptions (like Amazon Prime, etc.).
As soon as the subscription expires, the user should no longer be able to send messages in the app.
A user can also end their subscription before the original end date.
One solution in my mind (frontend): cache the end date in the app and, before every "send message" operation, compare the end date with the current date.
But the problem is: if the user ends the subscription early, they will still be able to send messages.
How can I push the new subscription end date to the cache?
Another solution (backend): I have a table in the backend storing subscription details such as subscription_id, user_id, subscription_enddate. Before any "send message" operation, query the subscription table, compare the dates, and then continue or cancel the operation.
Q1. Should I go with the backend solution, or can you share some improvements to the frontend method or any best practice for this scenario?
Q2. Also, is storing subscription details in a separate table best practice, or is there a better design?
PS: The sample chat app is based on AWS Amplify DataStore.
Let me try to break down the answer and give my opinion. I would also like to mention that solutions to such problems are determined by scale and various tradeoffs.
Q1-
If sending messages has adverse effects, you should never rely on the frontend check alone, as it is easy to bypass. You can use a mixture of both so that the load on the backend stays low.
Adding a frontend cache for the subscription ensures you can filter most messages on the frontend, as long as the cache is not tampered with.
Adding a service in front of the queue that validates whether the user's subscription has expired adds one more layer of security: if the subscription is valid, it pushes the message to the queue; otherwise it throws an error. This way a bad actor cannot misuse the system either. A minimal sketch of such a check follows.
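As a rough sketch of that gate (the table name subscriptions, its attributes, and the queue URL are all assumptions, not from the original post):

```python
# Sketch: validate the subscription before enqueueing a chat message.
# Table name "subscriptions", its attributes, and QUEUE_URL are
# illustrative placeholders only.
import datetime
import boto3

dynamodb = boto3.resource("dynamodb")
subscriptions = dynamodb.Table("subscriptions")
sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/chat-messages"

def send_message(user_id: str, body: str) -> None:
    item = subscriptions.get_item(Key={"user_id": user_id}).get("Item")
    if not item:
        raise PermissionError("no subscription on record")

    end_date = datetime.date.fromisoformat(item["subscription_enddate"])
    if end_date < datetime.date.today():
        # Expired (or ended early): reject before the message hits the queue.
        raise PermissionError("subscription expired")

    sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=body)
```

Because the check happens server-side, an early cancellation takes effect immediately even if the frontend cache is stale.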
Q2-
Depending on the use cases and load, you can have either a separate table or a separate micro-service for the subscription itself.
When to have a separate micro-service?
When the subscription data is required by multiple applications in your system and needs to scale independently of the others, it can be beneficial to have a separate micro-service.
When to have a separate table?
In other cases, where you feel adding a service would be overkill, you can keep the data in a separate table/DB, which gives you the flexibility to change the subscription model and even extract it into its own service later.
I'm using AWS Cognito for user storage. Now I need to get user names and avatars for a list of user posts. But as far as I know, the ListUsers API does not accept a list of subs as a filter condition. So how can I achieve this?
I have some other ideas, like syncing user information to DynamoDB with a Lambda trigger, or storing the user name and avatar in the posts database (but then it's hard to keep user info up to date).
Is there a better way to get user information for a list of posts?
Until filtering by a list of subs is possible, I would make multiple https://docs.aws.amazon.com/cognito-user-identity-pools/latest/APIReference/API_ListUsers.html calls, each with a single sub, in parallel (a fork-join approach) and store the results in a cache (Redis would do nicely). This way, data is fetched from Cognito only once per user, making it very efficient after the first lookup.
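A rough sketch of that approach with boto3 (the user pool ID is a placeholder, and a plain dict stands in for Redis):

```python
# Sketch: fetch Cognito users by sub in parallel and cache the results.
# USER_POOL_ID is a placeholder; a dict stands in for Redis here.
from concurrent.futures import ThreadPoolExecutor
import boto3

cognito = boto3.client("cognito-idp")
USER_POOL_ID = "us-east-1_EXAMPLE"
_cache: dict = {}

def get_user(sub: str):
    if sub in _cache:
        return _cache[sub]                     # cache hit: no Cognito call
    resp = cognito.list_users(
        UserPoolId=USER_POOL_ID,
        Filter=f'sub = "{sub}"',               # one exact-match filter per call
        Limit=1,
    )
    users = resp.get("Users")
    if users:
        _cache[sub] = users[0]
        return users[0]
    return None

def get_users(subs: list):
    # Fork-join: issue the single-sub lookups concurrently.
    with ThreadPoolExecutor(max_workers=10) as pool:
        return list(pool.map(get_user, subs))
```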
I am trying to do something that would be relatively simple in a relational database, but I don't know how to do it in a nonrelational one.
I am trying to make a simple task web app on AWS where people can post their tasks.
I have a table called tasks which uses the userid from the auth token provisioned by AWS Cognito. I am wondering how I can return the user information along with the tasks. I do not want to rely on Cognito by simply calling it every time a user sends a request. So my thought was to create another table to store all of the user information. That, however, is not a very nonrelational way of doing things, since JOINs are so bad.
So, I was wondering if I should do any of the following:
a) Using RDS instead
b) Not using Cognito and setting up my own auth system
c) Just doing the JOIN with a table containing all of the user info
d) Making the request to Cognito each time
Although I personally like the idea of Cognito, at this time it has some major drawbacks...
You cannot back up / restore a user pool without losing the passwords, so you have to implement your own backup/restore.
A way around this is to save the user password in a Cognito custom attribute.
I expected that by using an API Gateway / Lambda authorizer I would have all the user data in the Lambda context, but it's not there. Or am I doing something wrong with the API Gateway template mapping 😬
A good thing: the API Gateway / Lambda authorizer result can be cached for up to an hour, so the authorizer function won't be called again, which seems like a top feature.
It does not work well with CloudFormation: with every attribute update it recreates the user pool without restoring the users, thus losing them.
I used it in only one implementation and ended up duplicating the users in DynamoDB as well.
I've been avoiding it ever since. I wish they would solve these issues, as it looks like a service that could be included in every project, saving a lot of time.
Reading your post I asked myself the same questions and I'm not sure of the answer either 😄
Pricing seems fair.
The default limit of 5 requests/second for getting user info seems strange, as it could be consumed by a single page load making multiple AJAX API requests.
For this in DynamoDB, there is no need for another table. If the access patterns dictate that you store the information in another table, then so be it, but more than likely it should live in the same table. It sounds like you need two different item types in the same table.
For tasks, use a PK of the userid and an SK of task::your-task-id. This allows you to get all of a user's tasks easily, or a specific task if you know its ID. You might also add a timestamp attribute and a GSI with the userID as the PK and the timestamp as the SK; then you could use the begins_with operator on the SK and paginate through all of the user's tasks in, say, the month 2019-04.
For the user information, have the userID be the PK and the SK be user_info, with the user's details as attributes.
The one challenge with this is if you go to extremes and a single user is doing thousands of ops per second, e.g. "all tweets by a very popular celebrity". If you have such a use case, there are ways around that as well, e.g. write sharding. These are just examples for you to play with; without knowing all your access patterns, I cannot model everything you might want to do. I highly recommend you watch this presentation from re:Invent 2018.
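A hedged sketch of that single-table layout with boto3 (the table name app_table and all attribute values are illustrative, not from the original answer):

```python
# Sketch of the single-table layout described above.
# Table name "app_table" and all attribute values are illustrative.
import boto3
from boto3.dynamodb.conditions import Key

table = boto3.resource("dynamodb").Table("app_table")

# User-info item: PK = userid, SK = "user_info"
table.put_item(Item={
    "PK": "user-123",
    "SK": "user_info",
    "name": "Alice",
    "email": "alice@example.com",
})

# Task item: PK = userid, SK = "task::<task-id>"
table.put_item(Item={
    "PK": "user-123",
    "SK": "task::42",
    "title": "Write report",
    "created": "2019-04-17T09:30:00Z",
})

# All of a user's tasks in one query, no JOIN needed:
tasks = table.query(
    KeyConditionExpression=Key("PK").eq("user-123")
    & Key("SK").begins_with("task::")
)["Items"]

# The user's profile comes from the same table:
profile = table.get_item(Key={"PK": "user-123", "SK": "user_info"})["Item"]
```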
I have the following use case for which I am considering AWS services, to see how I can come up with a scalable solution. Thanks in advance for your help!
Scenario:
Users can sign up to an application (named, say, 'Let's Remind') using their email and phone.
The app does one thing: send email and SMS alerts to users.
A user can create n tasks for which they want to be reminded. For instance, they can set up a monthly reminder for paying card dues. Currently n is 5 to 10 per user.
The notifications are flexible: they can be daily, weekly, bi-weekly, or monthly. The user can also specify the start date of a notification. The end date is the date when the event is due (for instance, the day the card payment is due). Once this date has passed, the notification is rendered inactive for the current month.
For daily, weekly, and bi-weekly notifications, the notifications are deleted once the event date has passed. These are not recurring in nature.
For monthly recurring events, such as payment of apartment rent, the notification itself is not deleted but is rendered inactive after the event due date. Once the next event cycle starts (typically the next month's billing cycle in the payments use case), the notification comes back to life and starts all over again.
A user can delete any event at any time. If an event is deleted, the notifications for that event are deleted as well.
First of all, I hope the use case is clear. Now here are my thoughts so far about solving it:
1) Use SNS, since I need to send both email and SMS. SES only supports email.
2) When a user registers for the app, create two subscriptions (one for their email and one for their SMS endpoint) and also create a topic for the user (maybe keyed by a dynamically generated random userid).
3) Once the user creates an event (e.g. a reminder for the monthly apartment rent), store the event data such as userid, startdate, duedate, frequency, and isactive in a DynamoDB table.
4) Create a Lambda function that wakes up when an entry is written to the DynamoDB table (step 3); it will do the following:
i) Read the event data from the DynamoDB table.
ii) Determine the next date on which the notification should be sent, based on the current date and the event data.
iii) For active events (check the isActive column of the DynamoDB record), create a scheduled cron expression rule in CloudWatch Events based on ii) above, and add the user's topic (created in step 2) as the target, as sketched below. For now, the notification message is static.
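For reference, dynamically creating such a rule is possible through the CloudWatch Events API; a minimal sketch with boto3 (all names and ARNs are placeholders, and the SNS topic's policy would also need to allow events.amazonaws.com to publish):

```python
# Sketch: dynamically create a per-event CloudWatch Events cron rule
# targeting the user's SNS topic. All names/ARNs are placeholders; the
# topic policy must also allow events.amazonaws.com to publish to it.
import boto3

events = boto3.client("events")

def schedule_notification(event_id: str, topic_arn: str, cron: str) -> None:
    rule_name = f"reminder-{event_id}"
    events.put_rule(
        Name=rule_name,
        ScheduleExpression=f"cron({cron})",   # e.g. "0 9 15 * ? *"
        State="ENABLED",
    )
    events.put_targets(
        Rule=rule_name,
        Targets=[{"Id": "1", "Arn": topic_arn}],
    )
```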
I have some doubts/queries about step iii):
Is it possible to create a CloudWatch Events cron rule dynamically and add the user's topic as the target, as I described? Or is it better to trigger a second Lambda function dedicated to sending messages to the user's topic via SNS? Which approach is better for this use case?
If the user base grows large, is it recommended to create one topic per user?
Am I on the right track with my approach above in general, from a scalability point of view?
If not, can anyone suggest a better way to implement this use case?
Thanks in advance!
This will not work.
For an SNS email subscriber to receive notifications sent via SNS, it must first confirm the subscription. You cannot simply create subscriptions on the fly and start sending email notifications to them.
I don't think SNS fits your use case. You would be better off sending email notifications using SES.
You can write your scheduling logic in Lambda, though; see the sketch below.
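As a rough sketch of that combination (the reminders table, its attributes, and the sender address are assumptions; the Lambda is assumed to run on a CloudWatch Events / EventBridge schedule):

```python
# Sketch: a scheduled Lambda (e.g. a CloudWatch Events / EventBridge
# rate or cron rule) that finds due reminders and emails them via SES.
# Table name "reminders", its attributes, and the sender address are
# illustrative placeholders.
import datetime
import boto3

table = boto3.resource("dynamodb").Table("reminders")
ses = boto3.client("ses")

def lambda_handler(event, context):
    today = datetime.date.today().isoformat()
    # A scan is fine at small scale; a GSI on next_send_date would be
    # the more scalable variant.
    due = table.scan(
        FilterExpression="next_send_date = :d AND is_active = :t",
        ExpressionAttributeValues={":d": today, ":t": True},
    )["Items"]

    for item in due:
        ses.send_email(
            Source="reminders@example.com",   # must be an SES-verified address
            Destination={"ToAddresses": [item["email"]]},
            Message={
                "Subject": {"Data": "Reminder: " + item["title"]},
                "Body": {"Text": {"Data": "Your payment is due soon."}},
            },
        )
```

Unlike SNS email subscriptions, SES needs no per-recipient confirmation, only a verified sender (and production access out of the SES sandbox).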
I have a cronjob that runs every hour and parses 150,000+ records. Each record is summarized individually in a MySQL table. I use two web services to retrieve user information:
User demographics (IP, country, city, etc.)
Phone information (whether it is a landline or cell phone, and if a cell phone, the carrier)
For every record I check whether I already have the information, and if not I call these web services. After tracing my code I found that each of these calls takes 2 to 4 seconds, which makes my cronjob very slow and means I can't compile statistics on time.
Is there a way to make these web service calls faster?
Thanks
Simple:
Get the data locally and use Melissa Data:
for ip: http://w10.melissadata.com/dqt/websmart/ip-locator.htm
for phone: http://www.melissadata.com/fonedata.html
You can also cache the results using Memcached or APC, which makes things faster since the job does not have to request the data from the API or database every time. A sketch of this idea is below.
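A minimal sketch of that caching idea (in Python with pymemcache for illustration, even though the original setup may well be PHP given the APC mention; lookup_phone_info stands in for the slow web-service call):

```python
# Sketch: memoize the slow web-service lookups in Memcached so each
# phone number is fetched at most once. lookup_phone_info() stands in
# for the slow (2-4 s) web-service call.
import json
from pymemcache.client.base import Client

cache = Client(("localhost", 11211))

def lookup_phone_info(phone: str) -> dict: ...   # placeholder web service

def get_phone_info(phone: str) -> dict:
    cached = cache.get(phone)
    if cached is not None:
        return json.loads(cached)                # cache hit: no web call

    info = lookup_phone_info(phone)              # the slow call
    cache.set(phone, json.dumps(info), expire=24 * 3600)
    return info
```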
A couple of ideas... if the same users keep returning, caching the data in another table would be very helpful: you would only look each user up once and then have the data on hand for returning users. On re-reading the question, it looks like you are already doing that.
Another option would be to spawn new threads for the look-ups. This could be a new thread for each request, or, if that is not feasible, you could have n worker threads ready to do the look-ups and update the results; see the sketch below.
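A rough sketch of the worker-thread idea (get_demographics and get_phone_info are placeholders for the two slow web-service calls):

```python
# Sketch: enrich a batch of records with n worker threads instead of
# serial calls. get_demographics/get_phone_info are placeholders for
# the two slow (2-4 s) web-service calls.
from concurrent.futures import ThreadPoolExecutor

def get_demographics(ip: str) -> dict: ...       # placeholder web service
def get_phone_info(phone: str) -> dict: ...      # placeholder web service

def enrich(record: dict) -> dict:
    record["demographics"] = get_demographics(record["ip"])
    record["phone_info"] = get_phone_info(record["phone"])
    return record

def enrich_all(records: list) -> list:
    # With 20 workers, 150k lookups at ~3 s each drop from roughly
    # five days of serial waiting to a few hours.
    with ThreadPoolExecutor(max_workers=20) as pool:
        return list(pool.map(enrich, records))
```

Since these calls are I/O-bound (waiting on remote services), threads help even in runtimes with a global interpreter lock.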