I need some guidance about the best way to store a lot of information for a single user using AWS.
The problem is that every user, after signing up to my website, needs to pick abilities from a bank of about 40 abilities (properties that any user can choose), and I need to find a good way to store them per user.
I am currently using Cognito for the user table, and DynamoDB to store user information.
Theoretically, I could just have a column in DynamoDB for every ability, with '1' if the user chose it and '0' if not. But this will lead to about 40 extra columns, and I wanted to know if there is a better way of handling this.
Thank you for your time!
You're using a NoSQL database (DynamoDB) but thinking relationally (columns for everything). Why not have a table with, for example, an optional attribute for each ability? It's conceptually similar to the relational approach, but the attributes don't have to exist for every user and, if an ability is added, there isn't a table-upgrade issue. Something like:
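(A sketch of the idea, assuming a Users table keyed on userId; the ability attribute names are made up.)

```python
# Sketch: abilities stored as optional attributes, present only when chosen.
# Table name, key, and ability names are placeholders.
import boto3

users = boto3.resource("dynamodb").Table("Users")

# Write only the abilities the user picked; unchosen ones simply don't exist:
users.put_item(Item={
    "userId": "user-123",
    "ability_fly": True,
    "ability_invisibility": True,
})

# Read the user and see which ability attributes are present:
item = users.get_item(Key={"userId": "user-123"})["Item"]
chosen = [name for name in item if name.startswith("ability_")]
```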
That would allow you to read a user and determine which abilities they have. Yes, you'd still have to loop through all possible abilities, but not every user needs to have an explicitly set ability.
I am trying to do something that would be relatively simple for a relational database but I don't know how to do it for a nonrelational one.
I am trying to make a simple task web app on AWS where people can post their tasks.
I have a table called tasks which uses the userid from the auth token provisioned by AWS Cognito. I am wondering how I can return the user information along with the tasks. I do not want to rely on Cognito by simply calling it every time a user sends a request. So my thought was to create another table to store all of the user information. That, however, is not a very nonrelational way of doing things, since JOINs are so costly.
So, I was wondering if I should do any of the following:
a) Using RDS instead
b) Not use Cognito and set up my own Auth system
c) Just doing the JOIN with a table containing all of the user info
d) Doing the request to Cognito each time
Although I personally like the idea of Cognito, at this time it has some major drawbacks...
You cannot backup/restore a user pool without losing the users' passwords; you have to implement your own backup/restore as well.
One way around this is to save the user password in a Cognito custom attribute.
I expected that, by using an API Gateway / Lambda authorizer, I would have all the user data in the Lambda context, but it's not there. Or am I doing something wrong with the API Gateway template mapping?
On the plus side, the API Gateway / Lambda authorizer result can be cached for up to an hour, so the authorizer function won't be called again, which seems like a top feature.
It does not work well with CloudFormation: with every attribute update it recreates the user pool without restoring the users, thus losing them.
I used it in only one implementation and ended up duplicating the users in DynamoDB as well.
I've been avoiding it ever since. I wish they would solve these issues, as it looks like a service that could be included with every project, saving a lot of time.
Reading your post I asked myself the same questions, and I'm not sure of the answer either.
Pricing seems fair.
The default limit of 5 requests/second to get user info seems strange, as it would be consumed by one page load making multiple AJAX API requests.
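For reference, one way to get user data downstream is to have the authorizer itself return it in the context field of its response; API Gateway then exposes those values as $context.authorizer.* in mapping templates. A minimal sketch, with the JWT verification stubbed out:

```python
# Sketch of a Lambda TOKEN authorizer that forwards user attributes via
# the "context" field of its response. The token check below is a stub.

def verify_jwt(token):
    # Placeholder: a real implementation would validate the Cognito JWT
    # against the user pool's JWKS with a JWT library.
    return {"sub": "user-123", "email": "user@example.com", "cognito:username": "alice"}

def handler(event, context):
    claims = verify_jwt(event["authorizationToken"])
    return {
        "principalId": claims["sub"],
        "policyDocument": {
            "Version": "2012-10-17",
            "Statement": [{
                "Action": "execute-api:Invoke",
                "Effect": "Allow",
                "Resource": event["methodArn"],
            }],
        },
        # These surface as $context.authorizer.email etc. (string values only).
        "context": {
            "email": claims.get("email", ""),
            "username": claims.get("cognito:username", ""),
        },
    }
```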
For this in DynamoDB, there is no need for another table. If the access patterns dictate that you store the information in another table, then so be it, but more than likely it should be in the same table. It sounds like you need two different item types in the same table.
For tasks, use a PK of userID and an SK of task::your-task-id. This would allow you to get all of a user's tasks easily, or even a specific task very easily if you know the task ID. You might even add a timestamp attribute and then a GSI with the userID as the PK and the timestamp as the SK; then you could use the begins_with operator on the SK and paginate through all of the user's tasks that are in the month of 2019-04.
For the user information, have the userID be the PK, the SK be user_info, and the attributes be the user's information.
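A rough sketch of those two item types and the month query (table, index, and attribute names are placeholders):

```python
# Sketch: two item types in one table, PK = userID, SK distinguishes them.
# Table name, GSI name, and attributes are assumptions.
import boto3
from boto3.dynamodb.conditions import Key

table = boto3.resource("dynamodb").Table("AppTable")

# The user's profile item:
table.put_item(Item={"PK": "user-123", "SK": "user_info", "username": "alice"})

# One of the user's tasks, carrying a timestamp attribute:
table.put_item(Item={
    "PK": "user-123",
    "SK": "task::task-001",
    "createdAt": "2019-04-17T09:30:00Z",
    "title": "Write report",
})

# All of the user's tasks:
tasks = table.query(
    KeyConditionExpression=Key("PK").eq("user-123") & Key("SK").begins_with("task::")
)["Items"]

# Assuming a GSI ("ByTimestamp") with PK as hash key and createdAt as range
# key, "all tasks in April 2019" becomes:
april = table.query(
    IndexName="ByTimestamp",
    KeyConditionExpression=Key("PK").eq("user-123") & Key("createdAt").begins_with("2019-04"),
)["Items"]
```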
The one challenge with this is if you go to extremes and one single user is doing thousands of ops per second, e.g. "all tweets by a very popular celebrity". If you have such a use case, there are ways around that as well, e.g. write sharding. These are just examples for you to play with; without knowing all your access patterns, I cannot model everything you might want to do. I highly recommend you go watch this presentation from re:Invent 2018.
I'm looking at adding row-level permissions to a DynamoDB table using dynamodb:LeadingKeys to restrict access per Provider ID. Currently I only have one provider ID, but I know I will have more. However, the providers will vary in size, and those sizes will be very unbalanced.
If I use Provider ID as my partition key, it seems to me like my DB will end up with very hot partitions for the large providers and mostly unused ones for the smaller providers. Before adding the row-level access control I was using DeviceId as the partition key, since it is effectively random and therefore partitions well, but now I think I have to move it to the sort key.
Current partitioning that works well:
HASHKEY: DeviceId
With permissions I think I need to go to:
HASHKEY: ProviderID (only a handful of them)
RangeKey: DeviceId
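The restriction I have in mind is along these lines (a sketch expressed as a Python dict; the account ID, region, table name, and provider value are placeholders):

```python
# Sketch of the per-provider row-level restriction via dynamodb:LeadingKeys.
# Account ID, region, table name, and the provider value are placeholders.
import json

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["dynamodb:GetItem", "dynamodb:Query", "dynamodb:PutItem"],
        "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/Devices",
        "Condition": {
            "ForAllValues:StringEquals": {
                # Only items whose partition key equals the caller's provider ID
                "dynamodb:LeadingKeys": ["provider-42"]
            }
        },
    }],
}

print(json.dumps(policy, indent=2))
```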
Any suggestions as to a better way to set this up?
In general, you no longer need to worry about hot partitions in DynamoDB, especially if the partition keys which are being requested the most remain relatively constant.
More Info: https://aws.amazon.com/blogs/database/how-amazon-dynamodb-adaptive-capacity-accommodates-uneven-data-access-patterns-or-why-what-you-know-about-dynamodb-might-be-outdated/
Expanding on Michael's comment...
If you don't need a range key now...why add one?
The only reason to have a range key is if you need to Query DDB and return multiple records.
If all you ever need is a single record using GetItem, then you don't need a range key.
Simply concatenate ${ProviderId}.${DeviceId} together to make up your hash key.
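A sketch of that lookup, assuming a table keyed on the concatenated value (names are placeholders):

```python
# Sketch: single-record lookup using a concatenated hash key.
import boto3

table = boto3.resource("dynamodb").Table("Devices")  # placeholder table name

def get_device(provider_id: str, device_id: str):
    # Hash key is "${ProviderId}.${DeviceId}" as described above.
    return table.get_item(Key={"pk": f"{provider_id}.{device_id}"}).get("Item")
```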
Edit
Since you want to be able to list device IDs for a single provider, you do need providerID as the partition key and deviceID as the range key.
As Icehorn's answer mentions, "hot partitions" aren't as big a deal as they used to be. Unless you expect the data for a single providerID to exceed 10 GB, I'd start with the simple implementation of hashKey(providerID).
If you expect more than 10 GB of data, or you end up with a hot partition anyway, then consider concatenating an integer (1..n) to the providerID.
This will mean that you'd have to query multiple partitions to get all the deviceIDs.
This approach is detailed in Multi-Tenant SaaS Storage Strategies.
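A sketch of that sharding scheme, assuming four shards, a key schema of (pk, deviceId), and placeholder names; note that listing now fans out across every shard:

```python
# Sketch: spreading one provider's devices across N sharded partition keys.
# Table name, key names, and the shard count are assumptions.
import zlib
import boto3
from boto3.dynamodb.conditions import Key

table = boto3.resource("dynamodb").Table("Devices")
NUM_SHARDS = 4

def shard_for(device_id: str) -> int:
    # crc32 is stable across processes, unlike Python's built-in hash().
    return zlib.crc32(device_id.encode()) % NUM_SHARDS

def put_device(provider_id: str, device_id: str) -> None:
    pk = f"{provider_id}#{shard_for(device_id)}"
    table.put_item(Item={"pk": pk, "deviceId": device_id})

def list_devices(provider_id: str) -> list:
    # All shards must be queried to list every device for one provider.
    items = []
    for shard in range(NUM_SHARDS):
        resp = table.query(KeyConditionExpression=Key("pk").eq(f"{provider_id}#{shard}"))
        items.extend(resp["Items"])
    return items
```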
I'm quite new to NoSQL and DynamoDB, and I'm used to RDBMS. I'm designing the database for a game, and we're using DynamoDB and AWS Lambda for our backend. I created a table named "Users" for player profiles that contains the user information and resources. Because the game has an inventory system, I also created a table named "UserItems".
It was all good until I realized DynamoDB doesn't have transactions, so any operation executed on both tables (for example, using an item that increases a resource) has a chance of failing on one table while succeeding on the other, causing missing data, which affects our customers.
So I was thinking maybe my multiple-table design is not good, since it's a habit of mine to design multiple tables when working with an RDBMS. That led me to think of storing the entire "UserItems" as a hash in "Users", but I'm not sure this is good practice, because the size of a single row in the Users table would be really big (we may have 500 unique items per user), and each time I pull or put data from/to "Users" (most of the time not needing the "UserItems" data) the read/write throughput would also be really large.
What should I do: keep the multiple-table design and handle transactions manually, or switch to a single-table design? Or maybe there is a third option?
Updated: more information about my use case
Currently I have 2 tables
Users: UserId (key), Username, Gold
UserItems: UserId (partition key), ItemId (sort key), Name, GoldValue
Scenarios:
User buys an item: Users.Gold is deducted, and a new item is added to the UserItems table.
User sells an item: Users.Gold is increased, and the item is deleted from the UserItems table.
In both scenarios above I have to do two update operations across two tables, and without a transaction there is a chance one of them fails.
To solve that, I'm considering the single-table solution: a single Users table with four columns, UserId (key), Username, Gold, and UserItems. However, there are two things I'm worried about:
The data in UserItems might become too big for a single attribute (DynamoDB items are capped at 400 KB), because one user could have up to 500 items.
To add or delete an item I would have to pull UserItems from DynamoDB, modify it, and put it back into Users. So that's one read and one write operation per action, and because of issue (1) the read/write size could become really big.
FWIW, the AWS documentation on NoSQL Design for DynamoDB suggests to use a single table:
As a general rule, you should maintain as few tables as possible in a DynamoDB application. As emphasized earlier, most well-designed applications require only one table, unless there is a specific reason for using multiple tables.
Exceptions are cases where high-volume time series data are involved, or datasets that have very different access patterns, but these are exceptions. A single table with inverted indexes can usually enable simple queries to create and retrieve the complex hierarchical data structures required by your application.
A NoSQL database is best suited for non-transactional data. If you bring normalization (splitting your data into multiple tables) into NoSQL, you are defeating the whole purpose of it. If performance is what matters most, you should consider having only a single table for your use case. DynamoDB supports range keys and also secondary indexes. For your use case, it would be better to redesign your table to use range keys.
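For example, a minimal sketch of that redesign, assuming a single table keyed on UserId with a range key SK (names are placeholders):

```python
# Sketch: single-table layout with UserId as hash key and SK as range key.
# Table and attribute names are placeholders.
import boto3
from boto3.dynamodb.conditions import Key

table = boto3.resource("dynamodb").Table("Game")

# Profile item: one fixed range-key value per user.
table.put_item(Item={"UserId": "user-1", "SK": "profile", "Username": "alice", "Gold": 100})

# Inventory: one item per inventory entry, with a prefixed range key.
table.put_item(Item={"UserId": "user-1", "SK": "item#sword-7", "Name": "Sword", "GoldValue": 10})

# Fetch the whole user (profile + inventory) in one Query:
everything = table.query(KeyConditionExpression=Key("UserId").eq("user-1"))["Items"]

# Or just the inventory, without pulling the profile:
inventory = table.query(
    KeyConditionExpression=Key("UserId").eq("user-1") & Key("SK").begins_with("item#")
)["Items"]
```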
If you can share more details about your current table, maybe I can help you with more input.
This looks like it should be easy but I just can't find it.
I'm creating an application where I want to give admin-site access to people from different departments. Those people will read and write the same tables, BUT they must only access rows belonging to their department! I.e. they must not see any records produced by the other departments and should be able to modify only the records from their own department. If they create a record, it should automatically "belong" to the department of the user who created it (they will create records only from the admin site).
I've found django-guardian, but it looks like overkill - I don't really want arbitrary per-record permissions.
Also, the number of records will potentially be large, so any kind of front-end permission checking on a per-record basis is not suitable - it must be done by DB-side filtering. Other than that, I'm not really particular about how it's done. E.g. I'm perfectly fine with mapping departments to auth groups.
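To make the requirement concrete, the shape I'm imagining is something like this (a sketch, assuming a department foreign key on the records and a profile on the user; all names are placeholders):

```python
# Sketch: DB-side, per-department filtering in the Django admin.
# "department" on the model and request.user.profile.department are
# assumed names, not something from an actual project.
from django.contrib import admin

class DepartmentScopedAdmin(admin.ModelAdmin):
    def get_queryset(self, request):
        qs = super().get_queryset(request)
        # Filtering happens in SQL via the ORM, not per-record in Python.
        return qs.filter(department=request.user.profile.department)

    def save_model(self, request, obj, form, change):
        if not change:
            # New records automatically belong to the creator's department.
            obj.department = request.user.profile.department
        super().save_model(request, obj, form, change)
```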
I am looking for a generic method of filtering a series of Sitecore items based on the user's current profile. I found one promising example:
How do I trigger a profile in Sitecore DMS?
However, a few critical references are missing, which is a shame, as it looks to be a suitably generic function.
Resources.Settings.AnalyticsUserProfileEnableSwitch I assume is simply a boolean switch.
The killer is ApplyUserProfile(filter).
Please keep in mind that user profiles are NOT the same thing as profiles in DMS. In DMS, profiles refer to Analytics profiles related not to the specific user but to visit behavior, i.e. marketing personas.
If you want to filter items based on user profiles, you simply get Sitecore.Context.User.Profile, read whatever the property is, and implement your logic for how you want to filter.
If you want to filter items based on DMS profiles, that's going to be difficult, because personas are not entered into the Analytics database in real time. They really aren't something you'll be aware of at run time, and therefore it's going to be difficult to categorize the persona at run time. You could, however, use the rules system to do some filtering based on other criteria (such as Engagement Plans or something else), but without more information, that's about as much as can be said.