I'm somewhat confused as to what the proper secondary index would look like for DynamoDB.
I have Name, Date, Period, Data attributes and want an index that lets me efficiently lookup by Name, Date, and Period.
I also want to efficiently lookup all Names for a given Date.
I tried setting my secondary index partition key to Name since I want those to be grouped together on nodes, and added attribute projections for Date and Period. Is this the way to go?
Every single access pattern needs to be enumerated and you need to think about its corresponding retrieval mechanism. Your base table provides one access mechanism. You can use GSIs for the additional mechanisms.
The base table and each GSI provide a PK and SK for you to use. The PK must be a single value (sometimes composed of several values concatenated together with a separator such as '#'). The SK can be a sortable value, used either as an exact value or as a range. Those are the tools at your disposal.
"All names for a given date" might use a GSI where the date is the PK and the names are the SK.
At reasonable scale you don't have to think too much about hot partitions. At high scale (more than 1,000 write units needed per second) you'll have to think harder before putting everything under a single date PK for the GSI.
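For illustration, a minimal boto3 sketch of the "all names for a given date" query; the table name "Metrics" and index name "date-name-index" are assumptions, not from the question:

import boto3
from boto3.dynamodb.conditions import Key

table = boto3.resource("dynamodb").Table("Metrics")

# Query the GSI whose PK is Date and whose SK is Name.
resp = table.query(
    IndexName="date-name-index",
    KeyConditionExpression=Key("Date").eq("2024-01-15"),
)
names = [item["Name"] for item in resp["Items"]]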
Theoretical table with billions of entries.
Partition key is a unique uuid representing a given deviceId. There will be around 10k unique uuids.
Sort Key is a dateString for when the data was collected.
Each item has some data fields. There are dozens of fields such that making a GSI for each wouldn't be reasonable. For our example, let's say we are looking for the "dataOfInterest" field.
I'd like to search the DB for "all items where the dataOfInterest = 'foobar'" - and ideally do it within a date range. As far as I know, a scan operation is the only option. With billions of entries... that's not going to be a fast process (though I understand I could split it out to run multiple operations at a time - it's still going to eat RCUs like crazy)
Of note, I only care about a given uuid for each search. In other words, what I REALLY care about is "all items within a given partition where the dataOfInterest = 'foobar'". And further, it'd be great to use the sort key to get "all items within a given partition where the dataOfInterest = 'foobar' that are between Jan 1 and Feb 28"
The scan operation allows you to limit the results with a filter expression such that I could get the results of just a single partition ... but it still reads the entire table and the filtering is done before returning the data to you. https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_Scan.html
Is there an AWS API that does a scan-like operation that reads only a given partition? Are there other ways to achieve this (perhaps re-architecting the DB?)
As @jarmod says, you can use a Query and specify the PK of the UUID. You can then either put the timestamp into the SK and filter on the dataOfInterest value (unindexed), or, for more efficiency and to make everything indexed, you can construct a composite SK of dataOfInterest#timestamp and then do a range query on the SK from foobar#time1 to foobar#time2. That makes this query perfectly index optimized.
Of course, this makes purely timestamp-based queries less simple. So you either do multiple queries for those or, if you want both queries to be efficient, set up this composite SK in a GSI and use that to resolve this query.
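Here is a minimal boto3 sketch of the composite-SK approach, assuming a key schema of deviceId (PK) and a sk string of the form dataOfInterest#timestamp; the table name and date formats are illustrative:

import boto3
from boto3.dynamodb.conditions import Key

table = boto3.resource("dynamodb").Table("DeviceData")

# "All items in one partition where dataOfInterest = 'foobar' between
# Jan 1 and Feb 28" becomes a pure key-range query -- no filtering,
# no wasted reads.
resp = table.query(
    KeyConditionExpression=Key("deviceId").eq("uuid-1234")
    & Key("sk").between("foobar#2023-01-01", "foobar#2023-02-28"),
)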
I have a requirement to query data but sort by different fields (probably more than 30).
I know I can build a secondary index and use a different field as the sort key in each GSI. However, that would exceed the maximum number of GSIs one table can have.
Is there a pattern to restructure the data to make it sortable via a single GSI or even without GSI?
The data I need to support looks like:
Table: OrderProductUser
# Order Items:
type
createdDate
updatedDate
amount (number)
fee (number)
tax (number)
# Product Items:
type
name
price
...
# User Items:
type
firstName
lastName
dob
gender
...
...
Since DynamoDB recommends using one table, I put all the different records into one. The type field in each row indicates what the row is.
But I'd like to support sorting on all the different fields, including string, date, and number. If I sort them in the application, pagination won't work very well. Is there a pattern to support that?
You only need one GSI per table... as you can overload it: simply concatenate the attribute name into the GSI partition or sort key.
ex.

Partition   Sort
AMOUNT      99.99
FEE         1.50
xxx         AMOUNT:00099.99
xxx         FEE:001.50

(The first two rows put the attribute name in the partition key; the last two fold it into the sort key, zero-padded so that string order matches numeric order.)
But you'll only be able to sort by one column at a time, and you have to write multiple records out to DDB.
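As a sketch of those extra writes in boto3 (the key names gsi_pk/gsi_sk and the padding widths are assumptions, not a fixed convention):

import boto3

table = boto3.resource("dynamodb").Table("OrderProductUser")

order = {"pk": "ORDER#123", "amount": 99.99, "fee": 1.50}

with table.batch_writer() as batch:
    # One extra item per sortable attribute; zero-pad the number so the
    # lexicographic order of the string SK matches numeric order.
    for attr in ("amount", "fee"):
        batch.put_item(Item={
            "pk": f"{order['pk']}#{attr}",      # table PK for the index row
            "gsi_pk": attr.upper(),             # overloaded GSI partition key
            "gsi_sk": f"{order[attr]:012.2f}",  # e.g. "000000099.99"
            "source_pk": order["pk"],           # pointer back to the real item
        })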
Given the limitations of sorting/filtering in DDB, a standard RDS is likely a better choice for a high-functioning UI.
The usual recommendation is to front DDB with Elasticsearch... and if you truly need the kind of scaling DDB+Elasticsearch can provide, then go for it.
But for most users, RDS Aurora, for instance, is much more cost-effective.
Our team has started to use AWS and one of our projects will require storing approval statuses of various recommendations in a table.
There are various things that identify a single recommendation, let's say they're : State, ApplicationDate, LocationID, and Phase. And then a bunch of attributes corresponding to the recommendation (title, volume, etc. etc.)
The use case will often require grabbing all entries for a given State and ApplicationDate (and then we will look at all the LocationId and Phase items that correspond to it) for review from a UI. Items are added to the table one at a time for a given State, ApplicationDate, LocationId, Phase, and updated frequently.
A dev with a little more AWS experience mentioned we should probably use State+ApplicationDate as the partition key and LocationId+Phase as the sort key. These two pieces combined would make the primary key. I generally understand this, but how does that work if we start getting multiple recommendations for the same primary key? I figure we're either OK with just overwriting what was previously there, OR we have to add some other attribute so we can write a recommendation for the same State+ApplicationDate/LocationId+Phase multiple times and get all previous values if we need to... but that would require adding something to the primary key, right? Would that be like adding some kind of unique value to the sort key? Or, for example, if we need to track status and want to record different values at different statuses, would we just need to add status to the sort key?
Does this sound like a reasonable approach, or should I be exploring a different AWS offering for storing this data?
Use a time-based id property, such as a ULID or KSUID. This will provide randomness to avoid overwriting data, while also providing time-based sorting of your data when used as part of a sort key.
Because the id value is random, you will want to add it to your sort key for the table or index where you perform your list operations, and reserve the pk for known values that can be specified exactly.
It sounds like the 'State' is a value that can change. You can't update an item's key attributes on the table, so it is more common to use these attributes in a key for a GSI if they are needed to list data.
Given the above, an alternative design is to use the LocationId as the pk, the random id value as the sk, and a GSI with 'State' as the pk and the random id as the sk. Or, if you want to list the items by State -> Phase -> date, the GSI sk could be a concatenation of the Phase and the id property. This pattern gives you another list mechanism using the LocationId + the timestamp of the recommendation's creation time.
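A minimal hand-rolled sketch of such an id; in practice a library implementing ULID or KSUID gives you this property, and the exact format below is illustrative only:

import os
import time

def time_sortable_id() -> str:
    # Fixed-width millisecond timestamp so lexicographic order matches
    # time order, plus a random suffix to avoid collisions/overwrites.
    millis = int(time.time() * 1000)
    return f"{millis:013d}-{os.urandom(8).hex()}"

# Used as (part of) a sort key, items then sort by creation time for free,
# e.g. "1717171717171-9f86d081884c7d65"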
I want to create a DynamoDB table that allows me to save notes from users.
The attributes I have:
user_id
note_id (uuid)
type
text
The main queries I will need:
Get all notes of a certain user
Get a specific note
Get all notes of a certain type (the less used query)
I know that in terms of performance and DynamoDB partitions, note_id would be the right choice because the values are unique and would be distributed evenly over the partitions, but on the other hand it is much harder to get all notes of a user without scanning all items or using a GSI. And if they are unique, I suppose it doesn't make any sense to have a sort key.
The other option would be to use user_id as the partition key and note_id as the sort key, but if certain users have a much larger number of notes than others, wouldn't that impact performance?
Is it better to have a unique partition key (like note_id) to scale well with DynamoDB partitions and use GSIs for my queries, or to instead use a partition key that serves my main query (user_id)?
Thanks
Possibly the simplest and most cost-effective way would be a single table:
Table Structure
note_id (uuid) / hash key
user_id
type
text
Have two GSIs, one for "Get all notes of a certain user" and one for "Get all notes of a certain type (the less used query)":
GSI for "Get all notes of a certain user"
user_id / hash key
note_id (uuid) / range key
type
text
A little note on this - which of your queries is the most frequent: "Get all notes of a certain user" or "Get a specific note"? If it's the former, then you could swap the GSI keys for the table keys and vice versa (if that makes sense - in essence, have your user_id + note_id as the key for your table and the note_id as the GSI key). This also depends upon how you structure your user_id, which I suspect you've already picked up on: make sure your user_id is not sequential - make it a UUID or similar.
GSI for "Get all notes of a certain type (the less used query)"
type / hash key
note_id (uuid) / range key
user_id
text
Depending upon the cardinality of the type field, you'll need to test whether a GSI will actually be of benefit here or not.
If the GSI is of little benefit and you need more performance, another option would be to store the type with an array of note_id in a separate table altogether. Beware of the 400 KB item size limit with this one, and the fact that you'll need to perform another query to get the text of each note.
With this table structure and GSIs, you're able to make a single query for the information you're after, rather than making two if you have two tables.
Of course, you know your data best - it's best to start with what you think is right and then test it to ensure it meets what you're looking for. DynamoDB is priced by provisioned throughput plus the amount of indexed data stored, so with "fat" indexes that project many attributes, as above, if there is a lot of data it could become more cost-effective to perform two queries and store less indexed data.
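To make the shape concrete, a hedged boto3 sketch of the table plus the two GSIs described above (the names, on-demand billing, and ALL projections are illustrative choices, not recommendations):

import boto3

dynamodb = boto3.client("dynamodb")

dynamodb.create_table(
    TableName="Notes",
    AttributeDefinitions=[
        {"AttributeName": "note_id", "AttributeType": "S"},
        {"AttributeName": "user_id", "AttributeType": "S"},
        {"AttributeName": "type", "AttributeType": "S"},
    ],
    KeySchema=[{"AttributeName": "note_id", "KeyType": "HASH"}],
    GlobalSecondaryIndexes=[
        {   # "Get all notes of a certain user"
            "IndexName": "user-notes-index",
            "KeySchema": [
                {"AttributeName": "user_id", "KeyType": "HASH"},
                {"AttributeName": "note_id", "KeyType": "RANGE"},
            ],
            "Projection": {"ProjectionType": "ALL"},
        },
        {   # "Get all notes of a certain type"
            "IndexName": "type-notes-index",
            "KeySchema": [
                {"AttributeName": "type", "KeyType": "HASH"},
                {"AttributeName": "note_id", "KeyType": "RANGE"},
            ],
            "Projection": {"ProjectionType": "ALL"},
        },
    ],
    BillingMode="PAY_PER_REQUEST",
)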
I would use user_id as your primary partition (hash) key and note_id as your primary range (sort) key.
You have already noted that, in an ideal situation, each partition key is accessed with equal regularity to optimise performance; see Design For Uniform Data Access Across Items In Your Tables. The use of user_id is perfectly fine as long as you have a good spread of users who regularly log in. Indeed, AWS specifically encourages this option (see the 'Choosing a Partition Key' table in the link above).
This approach will also make your application code much simpler than your alternative approach.
You then have a second choice, which is whether to apply a Global Secondary Index for your get-notes-by-type query. A GSI key, unlike a primary key, does not need to be unique (see the AWS GSI guide), therefore I suggest you simply use type as your GSI partition key without a range key.
The obvious plus side to using a GSI is a faster result when you perform the note-type query. However, you should be aware of the downsides too. A GSI has a throughput allowance separate from your table's, so you need to provision this in addition to your table throughput (at extra cost). If you don't provision your GSI with enough read units, it could end up slower than a scan on your table. If you don't provision enough write units, your table writes could be throttled, even if your table has enough write units.
Also, AWS warns that GSIs are updated asynchronously (usually within a fraction of a second, but it can be longer). This means queries on your GSI might return the 'wrong' result if you have table writes and index reads very close together. If this is a problem, you will need to handle it in your application code.
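The three access patterns under this design, sketched with boto3 (the table and index names are assumptions):

import boto3
from boto3.dynamodb.conditions import Key

table = boto3.resource("dynamodb").Table("Notes")

# All notes of a user: a plain partition-key query.
user_notes = table.query(KeyConditionExpression=Key("user_id").eq("u-1"))

# A specific note: a full primary-key lookup.
note = table.get_item(Key={"user_id": "u-1", "note_id": "n-42"})

# All notes of a type: query the GSI (remember it is eventually consistent).
typed = table.query(
    IndexName="type-index",
    KeyConditionExpression=Key("type").eq("standard_note"),
)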
I see this as two tables: users and notes, with a GSI on the notes table. Not sure how else you could do it. Using user_id as the partition key and note_id as the sort key means you can only retrieve a single element when you know both the user_id and the note_id. With DynamoDB, if you're not scanning, you have to satisfy all the elements of the primary key - both the partition key and the sort key if there is one. Below is how I would do this.
Get all notes of a certain user
When a user creates a note, I would add it to the notes attribute of that user's item in the users table. When you want to get all of a user's notes, retrieve the user and read the array/list of note_ids stored there.
{
  "userId": "xxx",
  "notes": ["note_id_1", "note_id_2", "note_id_3"]
}
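A hedged sketch of that append with boto3 (table and attribute names assumed), using list_append so the write is a single atomic update:

import boto3

users = boto3.resource("dynamodb").Table("Users")

users.update_item(
    Key={"userId": "xxx"},
    # if_not_exists seeds an empty list the first time a user adds a note.
    UpdateExpression="SET notes = list_append(if_not_exists(notes, :empty), :new)",
    ExpressionAttributeValues={":empty": [], ":new": ["note_id_4"]},
)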
Get a specific note
A notes table with note_id as the primary key would make that easy.
{
  "noteId": "XXXX",
  "note": "sfsfsfsfsfsf",
  "type": "standard_note"
}
Get all notes of a certain type (the less used query)
I would use a GSI on the notes table for this with the attributes of "note_type" and note_id projected onto it.
Update
You can pull this off with one table and a GSI (see the two answers below for how), but I would not do it. Your data model is so simple - why make it more complicated than users and notes?
Imagine that you need to persist something that can be represented with following schema:
{
  type: String
  createdDate: String (ISO-8601 date)
  userId: Number
  data: {
    reference: Number,
    ...
  }
}
type and createdDate are always defined/required, everything else such as userId, data and whatever fields within data are optional. Combination of type and createdDate does not guarantee any uniqueness. Number of fields within data (when data exists) may differ.
Now imagine that you need to query against this structure like:
Give me items where type is equal to something
Give me items where userId is equal to something
Give me items where type AND userId are equal to something
Give me items where userId AND data.reference are equal to something
Give me items where userId is equal to something, where type IS IN range of values and where data.reference is equal to something
As it seems to me, a HashKey needs to be introduced at the table level to uniquely match an item. The only choice I have is to use something like a UUID generator. Based on that, I can't run any of the queries described above against the table itself. So I need to create several global secondary indexes to cover all five cases above, as follows:
For the first use case I could create a GSI where type is the HashKey and createdDate is the RangeKey. What bothers me from the start here, as I mentioned, is that there is a high chance this composite key will NOT be unique.
For the second use case I could create a GSI where userId is the HashKey and createdDate is the RangeKey. Here this composite key can probably match an item uniquely.
For the third use case, I probably have two solutions. One is to create a third GSI where type is the HashKey and userId is the RangeKey. With that approach I lose the ability to sort the returned data, and again the same worry: this composite key does not guarantee uniqueness. The other approach would be to use one of the two previous GSIs with a FilterExpression, right?
For the fourth use case I have only one option: to use the previous GSI with userId as the HashKey and createdDate as the RangeKey, and to use a FilterExpression against data.reference. An index can't be created on fields of a nested object, right?
For the fifth use case, because the IN operator is only supported via FilterExpression (right?), the only option again is to use the GSI with userId as the HashKey and createdDate as the RangeKey, and to use a FilterExpression for both type and data.reference?
So the only bright side of this problem that I see is using a GSI with userId as the HashKey and createdDate as the RangeKey. But again, userId is not a mandatory field; it can be NULL. A HashKey can't be NULL, right?
Most importantly, if a composite key (HashKey and RangeKey) can't guarantee uniqueness, does that mean that saving an item with a composite key that already exists will silently overwrite the previous item, meaning I will lose data?
The thing about DynamoDB: it is a NoSQL database. On the plus side, it is easy to dump pretty much anything into it, so long as you have a unique index, and it will be stored fairly efficiently for retrieval if you have a good partition key that subdivides your data into chunks. On the downside, any query you run against fields that are not the partition key or an index (primary or secondary) is by definition a slow table scan. DynamoDB is not an SQL database and cannot give SQL-like performance when filtering non-indexed columns. For the performance you see to be reasonable, you need to narrow your query results using pre-calculated index values available before the query, or you need to know that the results you are looking for are confined to a few partition keys.
First let's consider the delimited-partition-keys route. Once you have narrowed the partition keys as much as you can and there are no more indexes to reference, anything else you ask DynamoDB for is not really a query but a table scan. You can ask DynamoDB to do it for you, but you may well be better off taking the full results from a partition-key or index query and doing the filtering yourself in whatever language you are using. I use Java for this purpose because it is simple to query for the keys I need through the Java->DynamoDB API and easy to then filter the results in Java. If this is interesting to you, I can put together some simple examples.
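For instance, a minimal sketch of the query-then-filter-client-side pattern (in Python rather than Java for brevity; the table and attribute names are assumptions):

import boto3
from boto3.dynamodb.conditions import Key

table = boto3.resource("dynamodb").Table("Items")

# Pull everything under one partition key, paginating as needed...
items = []
kwargs = {"KeyConditionExpression": Key("userId").eq(42)}
while True:
    resp = table.query(**kwargs)
    items.extend(resp["Items"])
    if "LastEvaluatedKey" not in resp:
        break
    kwargs["ExclusiveStartKey"] = resp["LastEvaluatedKey"]

# ...then filter in application code instead of a FilterExpression.
matches = [i for i in items if i.get("data", {}).get("reference") == 123]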
If you go the index-and-filter route, understand that the hash key is still a partition key for the index, which determines how much the GSI can be read in parallel. The bigger your DynamoDB table becomes and the more time-sensitive your queries are, the bigger this issue will become.
So yes, you can make the queries you want with indexes, though it will take some careful planning of those indexes.
1. For the first use case I could create a GSI where type is the HashKey and createdDate is the RangeKey. What bothers me from the start here, as I mentioned, is that there is a high chance this composite key will NOT be unique.
GSIs do not have to be unique. You will receive multiple rows from a query, but nothing is broken from DynamoDB's perspective. However, if you use type as your partition key (HashKey), the performance of this query will likely be poor unless you have few records for each of your type values.
2. For the second use case I could create a GSI where userId is the HashKey and createdDate is the RangeKey. Here this composite key can probably match an item uniquely.
No problems here, so long as a given userId appears at most once per createdDate value.
3. For the third use case, I probably have two solutions. One is to create a third GSI where type is the HashKey and userId is the RangeKey. With that approach I lose the ability to sort the returned data, and again the same worry: this composite key does not guarantee uniqueness. The other approach would be to use one of the two previous GSIs with a FilterExpression, right?
The RangeKey is your sort key, at least from DynamoDB's perspective. If you use a GSI and then filter, you are scanning the contents of the GSI-indexed rows. And yes, if you are combining two GSIs, you either generate a third index in advance or you filter/scan; DynamoDB has no provision for doing an INNER JOIN across two indexes. Having type as your partition key and then filtering the results has serious performance issues.
4. For the fourth use case I have only one option: to use the previous GSI with userId as the HashKey and createdDate as the RangeKey, and to use a FilterExpression against data.reference. An index can't be created on fields of a nested object, right?
I am not sure about your nested object question, but yes, using the previous GSI with a filter/scan will work.
5. For the fifth use case, because the IN operator is only supported via FilterExpression (right?), the only option again is to use the GSI with userId as the HashKey and createdDate as the RangeKey, and to use a FilterExpression for both type and data.reference?
Yes, if you want DynamoDB to do the work for you, this is the way to approach your fifth query. But I go back to my original statement: why do this? If you can create a GSI that efficiently gets you to the records you are interested in, use a GSI. As for me, I never use filter expressions: I get the full partition, index, or GSI results back from a query and do the filtering myself in my programming language of choice.
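If you do want DynamoDB to apply the filter for the fifth query, a hedged boto3 sketch (the index and attribute names are assumptions; note the filter runs after the read, so you still pay to read the whole userId partition from the index):

import boto3
from boto3.dynamodb.conditions import Attr, Key

table = boto3.resource("dynamodb").Table("Items")

resp = table.query(
    IndexName="userId-createdDate-index",
    KeyConditionExpression=Key("userId").eq(42),
    # IN over a set of types, plus a condition on the nested field.
    FilterExpression=Attr("type").is_in(["a", "b"])
    & Attr("data.reference").eq(123),
)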
If you need to do everything in DynamoDB, your methods will work, but they may not be very fast depending on how many rows are being filtered. I keep hammering on the performance issue because I have seen lots of work go into a database project, only for the whole thing to go unused because poor performance made it unusable.