We're exploring options for reliably segregating customer data in Spanner. The most obvious solution is a customer per database, but the 100 database/instance limitation renders that impractical. Past experience leads me to be very suspicious of any plan to add a customer-id field to the primary key of each table, because it's far too easy to screw that up in SQL queries, leading to dangerous data cross-talk.
I'm considering weird solutions like using all 2k tables/instance, and taking the ~32 tables we need per customer and prefixing those. E.g., [cust-id]-Table1, [cust-id]-Table2, etc. At least then the customer segregation logic that needs to be iron-clad can be put in one place that's hard to screw up in queries. But is anyone aware of a less weird approach? E.g., "100" is a suspiciously-non-round number in a technical limitation -- is that adjustable somehow?
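For illustration, a minimal sketch (hypothetical names throughout) of the single, centralized segregation point that approach implies; note I've used underscores rather than hyphens, since Spanner table names are limited to letters, digits, and underscores:

# Hypothetical: every query in the codebase builds its table name through this
# one resolver, so the customer-segregation logic lives in exactly one place.
LOGICAL_TABLES = frozenset({"Orders", "Invoices", "Users"})  # ~32 in practice

def physical_table(customer_id: str, logical_table: str) -> str:
    """Map a (customer, logical table) pair to its per-customer physical table."""
    if logical_table not in LOGICAL_TABLES:
        raise ValueError(f"unknown logical table: {logical_table}")
    if not customer_id.isalnum():
        raise ValueError(f"suspicious customer id: {customer_id}")
    return f"cust{customer_id}_{logical_table}"  # e.g. cust42_Orders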
Unfortunately, 100 databases/instance is not an adjustable value.
Though I don't fully understand "very suspicious of any plan to add a customer-id field to the primary key of each table, because it's far too easy to screw that up in SQL queries, leading to dangerous data cross-talk." Are you concerned about query performance, data correctness, code correctness, or the schema?
With this schema, ~32 tables per customer will only allow you to store roughly 6,000 customers. That said, I would suggest benchmarking against the other schema choices Spanner exposes.
Would you be able to provide a high-level schema of these customer tables as well as your query patterns?
I'd also suggest reading the following for more ideas that may fit your use case better:
Spanner Schema
Interleaved Tables
Secondary Indexes
SQL Best Practices
I'm in the process of building a web app that takes user input and stores it for retrieval and data manipulation. There are essentially 100-200 static fields that the user needs to input to create the Company model.
I see how I could break the Company model into multiple 1-to-1 Django models that map back to a Company, such as:
Company General
Company Notes
Company Financials
Company Scores
But why would I not create a single Company model with 200 fields?
Are there noticeable performance tradeoffs when trying to load a Query Set?
In my opinion, it would be wise for your codebase to have multiple models related to each other. This will give you better scalability opportunities and easier navigation to your model fields. Also, when you want to make a custom serializer, or custom views that will deal with some of your fields, but not all, it would be ideal to not have to retrieve 100+ fields every time.
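For example, a minimal sketch (with hypothetical field names) of the kind of split the question describes:

from django.db import models

class Company(models.Model):
    name = models.CharField(max_length=255)

class CompanyFinancials(models.Model):
    # 1-to-1 back to Company; loaded only when financial fields are needed
    company = models.OneToOneField(
        Company, on_delete=models.CASCADE, related_name="financials"
    )
    revenue = models.DecimalField(max_digits=14, decimal_places=2, null=True)

class CompanyNotes(models.Model):
    company = models.OneToOneField(
        Company, on_delete=models.CASCADE, related_name="notes"
    )
    general_notes = models.TextField(blank=True)

A view that only needs the core fields can query Company.objects.all() as usual, while one that also needs financial data can use Company.objects.select_related("financials") to fetch both rows in a single query.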
Turns out I wasn't asking the right question. This is the question I was asking. It's more a database question than a Django question, I believe: Why use a 1-to-1 relationship in database design?
From the logical standpoint, a 1:1 relationship should always be merged into a single table.
On the other hand, there may be physical considerations for such "vertical partitioning" or "row splitting", especially if you know you'll access some columns more frequently or in different pattern than the others, for example:
You might want to cluster or partition the two "endpoint" tables of a 1:1 relationship differently. If your DBMS allows it, you might want to put them on different physical disks (e.g. more performance-critical on an SSD and the other on a cheap HDD).
You have measured the effect on caching and you want to make sure the "hot" columns are kept in cache, without "cold" columns "polluting" it.
You need a concurrency behavior (such as locking) that is "narrower" than the whole row. This is highly DBMS-specific.
You need different security on different columns, but your DBMS does not support column-level permissions.
Triggers are typically table-specific. While you can theoretically have just one table and have the trigger ignore the "wrong half" of the row, some databases may impose additional limits on what a trigger can and cannot do. For example, Oracle doesn't let you modify the so called "mutating" table from a row-level trigger - by having separate tables, only one of them may be mutating so you can still modify the other from your trigger (but there are other ways to work around that).
Databases are very good at manipulating the data, so I wouldn't split the table just for the update performance, unless you have performed the actual benchmarks on representative amounts of data and concluded the performance difference is actually there and significant enough (e.g. to offset the increased need for JOINing).
On the other hand, if you are talking about "1:0 or 1" (and not a true 1:1), this is a different question entirely, deserving a different answer...
I save my order data in a DynamoDB table. The partition key is orderId and the sort key is timestamp. Each order has many other attributes like category, userName, price, items, and status. I am going to build a filter service to let clients query orders based on these attributes. I'd also like to add a limit on the query for pagination. But I've found some limitations in DynamoDB.
In order to support querying different fields, I have two options:
Create a GSI for each attribute. This is very expensive, but it makes querying each individual attribute performant. It doesn't, however, support combining multiple attributes in one filter.
Attach a filter expression to a SCAN to express the attribute conditions. A SCAN is not very performant in the first place. Also, the filter expression is applied after the limit, which means the response is likely to contain fewer items than the client requested.
So what is a good way to achieve this in DynamoDB?
There is unfortunately no magic way to solve your problems. There is no DynamoDB feature which you missed. Indeed, as you said, making each of the attributes available for efficient queries requires a GSI which will cost you additional money - but that's reasonable. Indeed, as you said, there is no efficient way to search for an intersection of requirements on two different attributes. And indeed, the "limit" feature doesn't quite do what you want and you'll need to emulate your page size need in the client code (asking for more pages until your desired amount is received), potentially with unacceptably high latency.
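A minimal sketch of that client-side paging loop, assuming boto3 and hypothetical table, index, and attribute names (a GSI on userName here, with a filter on status):

import boto3
from boto3.dynamodb.conditions import Key, Attr

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("Orders")  # hypothetical table name

def orders_for_user(user_name, status, page_size):
    """Keep fetching pages until page_size matching items (or the end of the data)."""
    items, start_key = [], None
    while len(items) < page_size:
        kwargs = {
            "IndexName": "userName-index",  # hypothetical GSI on userName
            "KeyConditionExpression": Key("userName").eq(user_name),
            # The filter runs after the Limit is applied, hence the loop.
            "FilterExpression": Attr("status").eq(status),
            "Limit": page_size,
        }
        if start_key:
            kwargs["ExclusiveStartKey"] = start_key
        resp = table.query(**kwargs)
        items.extend(resp["Items"])
        start_key = resp.get("LastEvaluatedKey")
        if not start_key:
            break
    return items[:page_size]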
It sounds like what you really need is a search engine. These have exactly the features that you asked for. You'll still be paying for these features (indexing of individual columns still takes up CPU and disk space, intersection of multiple attribute searches still requires significant work at query time), but search engines are designed for exactly these operations, and do them more efficiently and with lower latency (which is important for interactive searches, which are the bread-and-butter of search engines).
You can add the limit for pagination using the Limit parameter in the query. But can you please be more specific about your access patterns: are your clients going to query all the orders, or only the orders belonging to them?
I'm in the process of evaluating some different data stores for a project and I have a strange but inflexible requirement to check the existence of 1500 keys per query... Basically the only query I'll be running is of the form:
SELECT user_id, name, gender
WHERE user_id in (user1, user2, ..., user1500)
I will have around 3.5 billion rows in the table. One data store that has caught my eye is Spanner. I was wondering if querying the data in this way would be feasible, or if I would run into performance issues due to the large number of items in my WHERE clause. I have only been able to test these queries on a small amount of data so far, so I'm leaning more on what the theoretical performance hit might look like instead of having the luxury to just "try and find out".
Also, are there other data stores that might work better for this read pattern? I expect to run no more than 80 queries per second. Also, the data will be bulk loaded on a weekly basis. The data is structured by nature but we don't use it in a relational way (i.e. no joins).
Anyways, sorry if this question is vague in any way. I'm happy to provide more detail if needed.
1500 keys should not be a problem if you use a bound array parameter to specify the keys:
SELECT user_id, name, gender
FROM table
WHERE user_id IN UNNEST(@users)
https://cloud.google.com/spanner/docs/sql-best-practices#write_efficient_queries_for_range_key_lookup
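For reference, a rough sketch of issuing that query through the Python client library, assuming a table named users and the column names from the question:

from google.cloud import spanner

client = spanner.Client()
database = client.instance("my-instance").database("my-db")  # hypothetical names

user_ids = ["user1", "user2", "user1500"]  # up to ~1500 ids per query

with database.snapshot() as snapshot:
    rows = snapshot.execute_sql(
        "SELECT user_id, name, gender FROM users WHERE user_id IN UNNEST(@users)",
        params={"users": user_ids},
        param_types={"users": spanner.param_types.Array(spanner.param_types.STRING)},
    )
    for user_id, name, gender in rows:
        print(user_id, name, gender)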
I've included some links to other answers along with our approaches, which seem to be the most optimal on the web right now.
Our records need to be categorized (eg. "horror", "thriller", "tv"), and randomly accessible both in specific categories and across all/some categories. We generally need to access about 20 - 100 items at a time. We also have a smallish number of categories (less than 100).
We write to the database for uploading/removing content, although this is done in batches and does not need to be real time.
We have tried two different approaches, with two different data structures.
Approach 1
AWS DynamoDB - Pick a record/item randomly?
Help selecting nth record in query.
In short, using the category as a hash key, and a UUID as the sort key. Generate a random UUID, query Dynamo using greater than or less than, and limit to 1. This is even suggested by an AWS employee in the second link. (We've also tried increasing the limit to the number of items we need, but this increases the probability of the query failing the first time around).
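For concreteness, a rough sketch of that query pattern (boto3, hypothetical table and key names), including a retry in the other direction when the first query comes back empty:

import uuid
import boto3
from boto3.dynamodb.conditions import Key

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("Content")  # hypothetical: hash key "category", sort key "id" (a UUID string)

def random_item(category):
    pivot = str(uuid.uuid4())
    # First try: the item with the smallest UUID at or above a random pivot.
    resp = table.query(
        KeyConditionExpression=Key("category").eq(category) & Key("id").gte(pivot),
        Limit=1,
    )
    if not resp["Items"]:
        # The pivot landed above every stored UUID; retry in the other direction.
        resp = table.query(
            KeyConditionExpression=Key("category").eq(category) & Key("id").lte(pivot),
            ScanIndexForward=False,
            Limit=1,
        )
    return resp["Items"][0] if resp["Items"] else None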
Issues with this approach:
The first query can fail if the random UUID is greater than (or less than) all of the UUIDs in the category
Querying on any specific category will cause throttling at scale (small number of partitions)
We've also considered adding a suffix to each category to artificially increase the number of partitions we have, as pointed out in the following link.
AWS Database Blog
Choosing the Right DynamoDB Partition Key
Approach 2
Amazon Web Services: How do we get random item from the dynamoDb's table?
Doing something similar to this, where we concatenate the category with a sequential number, and use this as the hash key. e.g. horror-000001.
By knowing the number of records in each category, we're able to perform random queries across our entire data set, while also avoiding hot partitions/keys.
Issues with this approach
We need a secondary data structure to manage the sequential counts across each category
Writing (especially deleting) is significantly more complex, although this doesn't need to happen in real time.
Conclusion
Both approaches solve our main use case of random queries on a category or categories, but their drawbacks are really deterring us from using them. We're leaning more towards approach #1 using suffixes to solve the hot partitioning issue, although we would need the additional retry logic for failed queries.
Is there a better way of approaching this problem? Specifically looking for solutions capable of scaling well (No scan), without requiring extra resources be implemented. #1 fits the bill, but needing to manage suffixes and failed attempts really deters us from using it, especially when it is being called inside a lambda (billed for time used).
Thanks!
Follow Up
After more research and testing, my team has decided to move towards MySQL hosted on RDS for these tables. We learned that this is one of the few use cases where DynamoDB does not fit, and it would require rewriting our use case to fit the DB (bad).
We felt that the extra complexity required to integrate random sampling on DynamoDB wasn't worth it, and we were unable to come up with any comparable solutions. We are, however, sticking with DynamoDB for our tables that do not need random accessibility due to the price and response times.
For anyone wondering why we chose MySQL, it was largely due to the Node.js library available, great online resources (which DynamoDB definitely lacks), easy integration via RDS with our Lambdas, and the option to migrate to Amazon's Aurora database.
We also looked at PostgreSQL, but we weren't as happy with the client library or admin tools, and we believe that MySQL will suit our needs for these tables.
If anybody has anything else they'd like to add or a specific question please leave a comment or send me a message!
This was too long for a comment, and I guess it's pretty much a full fledged answer now.
Approach 2
I've found that my typical time to get a single item from dynamodb to a host in the same region is <10ms. As long as you're okay with at most 1-2 extra calls, you can quite easily implement approach 2.
If you use a keys only GSI where the category is your hash key and the primary key of the table is your range key, you can quickly find the largest numbered single item within a category.
When you add a new item, find the largest number for that category from the GSI and then write the new item to the table with sequence number n+1.
When you delete, find the item with the largest sequence number for that category from the GSI, overwrite the item you are deleting, and then delete the now duplicated item from its position at the highest sequence number.
To randomly get an item, query the GSI to find the highest numbered item in the category, and then randomly pick a number since you now know the valid range.
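A rough sketch of the read path for this approach, assuming boto3, hypothetical table/index names, and zero-padded sequence numbers so the string range key sorts numerically:

import random
import boto3
from boto3.dynamodb.conditions import Key

# Hypothetical layout: table "Content" with hash key "id" such as "horror-000042",
# plus a keys-only GSI "category-id-index" (hash key "category", range key "id").
dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("Content")

def random_item(category):
    # Highest-numbered item in the category: descending query on the GSI, limit 1.
    resp = table.query(
        IndexName="category-id-index",
        KeyConditionExpression=Key("category").eq(category),
        ScanIndexForward=False,
        Limit=1,
    )
    if not resp["Items"]:
        return None
    max_seq = int(resp["Items"][0]["id"].split("-")[-1])
    # Any sequence number in [1, max_seq] is a valid key, so pick one at random.
    pick = random.randint(1, max_seq)
    got = table.get_item(Key={"id": f"{category}-{pick:06d}"})
    return got.get("Item")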
Approach 1
I'm not sure exactly what you mean when you say "without requiring extra resources to be implemented". If you're okay with using a managed resource (no dev work to implement), you can also make Approach 1 work by putting a DAX cluster in front of your dynamodb table. Then you can query to your heart's content without really worrying about hot partitions. (Though the caching layer means that new/deleted items won't be reflected right away.)
Is there any hint or directive that can be used with EXPLAIN on a query in Azure SQL Data Warehouse that would return recommended statistics that were not available to the optimizer? Alternatively, is there a tool that can analyze a workload and make recommendations?
Today, no. Right now the recommendation is to create statistics on every column, as these are needed to create an optimal parallel query plan (i.e. how to move data around between nodes to return a result, since it's an MPP architecture).
https://learn.microsoft.com/en-us/azure/sql-data-warehouse/sql-data-warehouse-best-practices#maintain-statistics
An example of how to script this out can be found here as well (example H).
https://learn.microsoft.com/en-us/azure/sql-data-warehouse/sql-data-warehouse-tables-statistics#examples-create-statistics
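Not the documented script itself, but a rough sketch of the same idea from Python via pyodbc (hypothetical connection string and schema; it prints the statements rather than running them):

import pyodbc  # assumes the Microsoft ODBC Driver for SQL Server is installed

CONNECTION_STRING = "..."  # fill in your Azure SQL Data Warehouse ODBC connection string

conn = pyodbc.connect(CONNECTION_STRING, autocommit=True)
cur = conn.cursor()

# One CREATE STATISTICS statement per column of every user table.
cur.execute("""
    SELECT t.name AS table_name, c.name AS column_name
    FROM sys.columns c
    JOIN sys.tables t ON t.object_id = c.object_id
""")
for table_name, column_name in cur.fetchall():
    print(f"CREATE STATISTICS stat_{table_name}_{column_name} "
          f"ON dbo.{table_name} ({column_name});")  # assumes the dbo schema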
As you know, statistics should be created (according to this article):
on columns involved in JOINs, GROUP BY, HAVING and WHERE clauses.
There are no tools to do this (yet), but if you have access to the EXPLAIN plans they give you certain information. For example the shuffle_columns element lists all columns involved in a SHUFFLE_MOVE:
<shuffle_columns>col;</shuffle_columns>
as well as myriad other information. Review the annotation I did of an Azure SQL Data Warehouse plan here.
Lastly (and I haven't actually done this, I've only been thinking about it), you could set up a copy of your database on SQL Server 2016, bearing in mind the syntax differences (e.g. distribution, lack of unique indexes, etc.). This would give you access to certain useful resources like execution plans, including index suggestions, and certain trace flags which tell you what stats were used. That said, the database engines and indexing are really different, so I don't know how worthwhile this might be. I'll post back if I progress my thinking on this. I do find the question "Why is this query going slow?" much harder to answer on this platform than on ordinary "box product" SQL Server, because the tools aren't as mature yet.