Date partition or date sharded - google-cloud-platform

I have many tables in BigQuery that are date sharded, including several years of Google Analytics data. I was recently told that this was the old method of optimization and that date partitioning is much faster.
Is this correct? I am always looking for ways to improve query speed over this data; if date partitioning allows for much faster querying, should I rebuild all my date-sharded GA tables as date-partitioned instead? Should I do both? What kind of performance impact could I expect to see, and is it really worth the effort?

This page in Google's documentation answers this relatively thoroughly: https://cloud.google.com/bigquery/docs/partitioned-tables#partitioning_versus_sharding
Most relevant section:
Partitioned tables perform better than tables sharded by date. When you create date-named tables, BigQuery must maintain a copy of the schema and metadata for each date-named table. Also, when date-named tables are used, BigQuery might be required to verify permissions for each queried table. This practice also adds to query overhead and impacts query performance. The recommended best practice is to use partitioned tables instead of date-sharded tables.
Your performance improvements will depend most upon how many previous shards you have and how many of them you consistently access in single queries.
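If you do decide to convert, one approach is a single wildcard query that copies the shards into a partitioned table. This is only a minimal sketch, assuming your GA shards follow the usual ga_sessions_YYYYMMDD naming and live in a dataset here called mydataset (both names are placeholders):

-- Build a date-partitioned copy of the sharded GA tables (placeholder names)
CREATE TABLE mydataset.ga_sessions_partitioned
PARTITION BY session_date
AS
SELECT
  PARSE_DATE('%Y%m%d', _TABLE_SUFFIX) AS session_date,
  *
FROM `mydataset.ga_sessions_*`;

-- Queries can then prune on the partitioning column
SELECT COUNT(*)
FROM mydataset.ga_sessions_partitioned
WHERE session_date >= DATE_SUB(CURRENT_DATE(), INTERVAL 30 DAY);

The one-off conversion query costs roughly a full scan of all the shards, so weigh that against how often you query across many dates.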

Related

Schema design for Google BigTable

In my project, I'm using Google BigQuery, which holds lots of data.
The BigQuery columns are:
account_id, session_id, transaction_id, username, event, timestamp.
In my dashboard, I'm fetching the entire dataset based on timestamp (last 30 days).
Since the data is very large, performance is pretty slow (13 seconds to fetch the last 30 days of data).
Lately, I have been looking at Google Bigtable, and I saw it has an option to get data based on time.
In my tests, Bigtable's performance was slower than BigQuery's.
Is there a suggested schema that can improve performance with Bigtable?
This is an example of my schema in Bigtable:
const row = {
  // Row key: every row shares the "transactions" prefix and sorts by timestamp
  key: `transactions#${timestamp_micros}`,
  data: {
    identifiers: {
      session_id: `session_id-${startCounter}`,
      account_id: `account-${startCounter}`,
      device_id: `device-${startCounter}`,
      transaction_id: `transaction_id-${startCounter}`,
      runtime_id: 'AQW+2Xx5AQAAstvxskK0c8NTk+vP5eBM',
      page_id: `page_id-${startCounter}`,
      start_time: timestamp,
    },
  },
};
Can anyone suggest a better schema that will help me fetch the data (based on a timestamp range) with the best performance?
A good schema results in excellent performance and scalability, and a bad schema can lead to a poorly performing system. However, no single schema design provides the best fit for all use cases, so answers to your question are opinion-based and will vary from person to person. The patterns described on this page provide a starting point for deciding on a Bigtable schema. Your unique dataset and the queries you plan to use are the most important things to consider as you design a schema for your time-series data.
As you've discovered from our docs, the row key format is the biggest decision you make when using Bigtable, as it determines which access patterns can be performed efficiently. Having a row key like transaction_id#reverse_timestamp gets your data sorted from the latest timestamp first. This could avoid hotspotting issues, which is one of the big reasons for slow query results.
However, you're also coming from a SQL architecture, which isn't always a good fit for Bigtable's schema/query model. So here are some questions to get you started:
Are you planning to perform lots of ad hoc queries like "SELECT A FROM Bigtable WHERE B=x"? If so, strongly prefer BigQuery. Bigtable can't support this query without performing a full table scan (hence it is slower than BigQuery).
Will you require multi-row OLTP transactions? Again, use BigQuery, as Bigtable only supports transactions within a single row.
Are you streaming in new events at high QPS? Bigtable is much better for these sorts of high-volume updates.
Do you want to perform any sort of large-scale complex transformations on the data? Again, Bigtable is likely better here, as you can stream data out and back in faster.
You can also combine the two services if you need some combination of these features. For example, say you're receiving high-volume updates all the time, but want to be able to perform complex ad hoc queries. If you're alright working with a slightly delayed version of the data, it could make sense to write the updates to Bigtable, then periodically scan the table using Dataflow and export a post-processed version of the latest events into BigQuery. GCP also allows BigQuery to serve queries directly from Bigtable in some regions: https://cloud.google.com/bigquery/external-data-bigtable
My personal choice for your use case is BigQuery. You can leverage pruning in BigQuery, where BigQuery scans only the partitions that match the filter and skips the remaining partitions. Not only does this make it easier to manage and query your data; by dividing a large table into smaller partitions, you can also improve query performance and control costs by reducing the number of bytes read by a query. You can use time-unit column partitioning or ingestion-time partitioning. When you create a table partitioned by ingestion time, BigQuery automatically assigns rows to partitions based on the time when BigQuery ingests the data. You can choose hourly, daily, monthly, or yearly granularity for the partitions.
So your query for fetching the entire data based on timestamp (last 30 days) should be something like this in BigQuery (when using ingestion-time partitioning):
SELECT
  column
FROM
  dataset.table
WHERE
  _PARTITIONTIME BETWEEN TIMESTAMP('2016-01-01') AND TIMESTAMP('2016-01-02')
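If you instead partition on a time-unit column, a rough sketch could look like the following; the table name and the event_timestamp column are assumptions standing in for the schema in the question:

-- Placeholder table and column names
CREATE TABLE mydataset.transactions (
  account_id STRING,
  session_id STRING,
  transaction_id STRING,
  username STRING,
  event STRING,
  event_timestamp TIMESTAMP
)
PARTITION BY TIMESTAMP_TRUNC(event_timestamp, DAY);

-- Rolling last-30-days fetch, scanning only about 30 daily partitions
SELECT *
FROM mydataset.transactions
WHERE event_timestamp >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 30 DAY);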

Is NoSQL just a marketing buzz on top of RDBMS from Software Design Perspective?

From an architectural perspective: could you please help me understand why NoSQL DynamoDB is so hyped?
DynamoDB supports some of the world's largest scale applications by providing consistent, single-digit millisecond response times at any scale.
I'm trying to critique this, in order to understand the WHY part of the question.
We always have to specify the partition key and key attribute when retrieving items to get millisecond performance.
If I design an RDBMS:
where a primary key or alternate key (indexed) always needs to be specified in the query,
I can use the partition key to find out in which database my data is stored,
and never do JOINs,
isn't that the same as a NoSQL kind of architecture, without any marketing buzz around it?
We're shifting to DynamoDB anyway, but this is my innocent curiosity; there must be a strong reason, something an RDBMS can't do. Let's skip backup and maintenance advantages, etc.
You are conflating two different things.
The definition of NoSQL
There isn't one, at least not one that can apply in all cases.
In most uses, NoSQL databases don't force your data into the fixed-schema "rows and columns" of a relational database. Although modern relational databases, such as Postgres, support data types such as JSONB that would have E. F. Codd spinning in his grave.
DynamoDB is a document database: it is optimized for retrieving and updating single documents based on a unique key, and it does not restrict the fields that those documents contain (other than requiring the ones used for a key).
Distributed Databases
A distributed database stores data on multiple nodes, and is able to perform parallel queries on those nodes and combine the results.
There are distributed SQL databases: Redshift and BigQuery are optimized for queries against large datasets that may include joins, while MySQL (and no doubt others) can run multiple engines and distribute queries between them. It is possible for SQL databases to perform joins, including joins that cross nodes, but such joins generally perform poorly.
DynamoDB distributes items on shards based on their partition key. This makes it very fast for retrieving single items, because the query can be directed to a single shard. It is much less performant when scanning for items that reside on multiple shards.
As you note in your question, you can implement a sharded document DB on top of a relational database (either using native JSON columns or storing everything in a CLOB that is parsed for each access). But enough other people have done this (including DynamoDB) that it doesn't make sense (to me, at least) to re-implement.
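As a rough illustration of that last point, and only as a hypothetical sketch (table and key names invented, using a relational engine's native JSON column type such as Postgres JSONB), the "document store on a relational database" pattern boils down to something like:

-- Hypothetical document-style table on a relational engine
CREATE TABLE documents (
  doc_key text PRIMARY KEY,  -- plays the role of a partition/sort key
  body    jsonb NOT NULL     -- schemaless document payload
);

-- Single-document lookup by key, the access pattern DynamoDB optimizes for
SELECT body FROM documents WHERE doc_key = 'user#123';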

Query pre-created sub-folders in s3 using a single table schema in Athena

I am exploring AWS Athena to query files in s3. We have a separate service that writes data into s3 in the following structure:
data
/log1
/log2
/log3
All the files have the same schema.
Following is the schema of the files:
id (a random string id)
timestamp
value
However, we need to be able to query data in a single folder (log1, log2, etc.) as well as query all the data together.
One option is to create separate tables for these. However, the sub-folders log1, log2, etc. correspond to devices, and there could be hundreds or thousands of them. The names are dynamic and will be entered by the user for querying. Also, there are other query capabilities we need, such as querying data between two timestamps. Such queries will be fired at the /data folder level.
What would be a good way to structure the folders and the corresponding tables? I have read multiple questions that suggest partitioning, but for my use case, I don't really understand how to partition the data. I am extremely new to Athena and still learning. Any advice would be much appreciated.
Thank you in advance.
Partitioning has an impact on how much data is scanned by every query, thereby improving performance and lowering cost; a good explanation can be found in AWS's Partitioning Data documentation:
You can partition your data by any key. A common practice is to partition the data based on time, often leading to a multi-level partitioning scheme. For example, a customer who has data coming in every hour might decide to partition by year, month, date, and hour. Another customer, who has data coming from many different sources but loaded one time per day, may partition by a data source identifier and date.
If you query a partitioned table and specify the partition in the WHERE clause, Athena scans the data only from that partition.
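For example (placeholder table and partition column names), a query like the one below would only scan the files under the dt='2023-05-01' partition:

SELECT id, value
FROM logs
WHERE dt = '2023-05-01';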
There are also some good recommendations regarding the partitions in Top 10 Performance Tuning Tips for AWS Athena:
When deciding the columns on which to partition, consider the following:
Columns that are used as filters are good candidates for partitioning.
Partitioning has a cost. As the number of partitions in your table increases, the higher the overhead of retrieving and processing the partition metadata, and the smaller your files. Partitioning too finely can wipe out the initial benefit.
If your data is heavily skewed to one partition value, and most queries use that value, then the overhead may wipe out the initial benefit.
Athena recently released a feature called Partition Projection, which might be helpful in your case:
In partition projection, partition values and locations are calculated from configuration rather than read from a repository like the AWS Glue Data Catalog. Because in-memory operations are often faster than remote operations, partition projection can reduce the runtime of queries against highly partitioned tables.
Especially the Dynamic ID Partitioning could be interesting in your case.
How to partition ultimately depends on the queries and how they are designed:
Do most queries include a time frame? Then you should consider date as a partition key.
Do most queries filter for a specific device (or a small number of IDs)? Then it might be better to use the device ID as a partition key, or at least try bucketing them. This also depends on the number of rows per device, so the partitions don't become too granular.
You can also partition by both date and device ID.
Since your data is already laid out by device, I would go for that in the beginning and use partition projection to query the data, as sketched below.
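Here is a hedged sketch of what that could look like with partition projection; the bucket, table name, column names, and the Parquet format are all assumptions, so match them to your actual layout and file format:

-- Assumed layout: s3://my-bucket/data/<device>/...
CREATE EXTERNAL TABLE logs (
  id string,
  event_time timestamp,  -- stand-in for the "timestamp" column
  value double
)
PARTITIONED BY (device_id string)
STORED AS PARQUET
LOCATION 's3://my-bucket/data/'
TBLPROPERTIES (
  'projection.enabled' = 'true',
  'projection.device_id.type' = 'injected',
  'storage.location.template' = 's3://my-bucket/data/${device_id}/'
);

-- Queries then name the device(s) and a time range
SELECT id, value
FROM logs
WHERE device_id = 'log1'
  AND event_time BETWEEN TIMESTAMP '2023-01-01 00:00:00'
                     AND TIMESTAMP '2023-01-31 23:59:59';

Note that the injected projection type requires queries to name the device IDs explicitly; for queries across all devices, one workaround is a second, unpartitioned table definition over the same s3://my-bucket/data/ location.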

BigQuery - querying only a subset of keys in a table with key value schema

So I have a table with the following schema:
timestamp: TIMESTAMP
key: STRING
value: FLOAT
There are around 200 unique keys. I am partitioning the dataset by date.
I want to run several (5-6 currently, but I expect to add at least 15 more) queries on a daily basis on this database. Brute forcing these would cost me a lot daily, which I want to avoid.
The issue is that because of this key-value format, and BigQuery being a columnar database, each query scans the whole day's data, despite each query actually using at most 4 keys. What is the best way to optimize this?
I am thinking the best way I can go about it right now is to create separate temp tables for each key as a daily batch process, run my queries on them, and then delete them.
The ideal way I would want to go about it is partitioning by key, but I am not sure there is any such provision?
You can try using the recently introduced clustering of partitioned tables.
When you create a clustered table in BigQuery, the table data is automatically organized based on the contents of one or more columns in the table’s schema. The columns you specify are used to colocate related data. When you cluster a table using multiple columns, the order of columns you specify is important. The order of the specified columns determines the sort order of the data.
Clustering can improve the performance of certain types of queries such as queries that use filter clauses and queries that aggregate data. When data is written to a clustered table by a query job or a load job, BigQuery sorts the data using the values in the clustering columns. These values are used to organize the data into multiple blocks in BigQuery storage. When you submit a query containing a clause that filters data based on the clustering columns, BigQuery uses the sorted blocks to eliminate scans of unnecessary data.
Similarly, when you submit a query that aggregates data based on the values in the clustering columns, performance is improved because the sorted blocks colocate rows with similar values.
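Here is a minimal sketch of what that could look like for the schema in the question; the dataset, table, and column names are placeholders:

-- Daily-partitioned table, clustered on the key column
CREATE TABLE mydataset.kv_metrics (
  ts    TIMESTAMP,
  key   STRING,
  value FLOAT64
)
PARTITION BY DATE(ts)
CLUSTER BY key;

-- A query touching only a few keys reads far fewer blocks
SELECT key, AVG(value) AS avg_value
FROM mydataset.kv_metrics
WHERE DATE(ts) = CURRENT_DATE()
  AND key IN ('key_a', 'key_b', 'key_c')
GROUP BY key;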
Update (moved from comments)
Also keep in mind the comparison below:
Feature            Partitioning    Clustering
-----------------  --------------  --------------
Cardinality        Less than 10k   Unlimited
Dry Run Pricing    Available       Not available
Query Pricing      Exact           Best Effort
Pay special attention to Dry Run Pricing: unfortunately, clustered tables do not support dry-run (validation) estimates based on clustering keys, and instead show only estimates based on partitions. But if you set up your clustering properly, the actual run will end up costing less. You should try this with smaller data to get comfortable with it.
See more at Clustering partitioned tables

Amazon Redshift schema design

We are looking at Amazon Redshift to implement our Data Warehouse and I would like some suggestions on how to properly design Schemas in Redshift, please.
I am completely new to Redshift. In the past when I worked with "traditional" data warehouses, I was used to creating schemas such as "Source", "Stage", "Final", etc. to group all the database objects according to what stage the data was in.
By default, a database in Redshift has a single schema, which is named PUBLIC. So, my question to those who have worked with Redshift, does the approach that I have outlined above apply here? If not, I would love some suggestions.
Thanks.
From my experience working with Redshift, I can assert the following points with confidence:
Multiple schemas: You should create multiple schemas and create tables accordingly. When you scale, it will be easier to pinpoint where exactly a table is supposed to be. Let us say you have three schemas, named production, aggregates, and rough. You then know that the production schema contains the tables that are not supposed to be changed (mostly OLTP data), such as the user, order, and transactions tables. The aggregates schema holds aggregated data built over raw tables, such as the number of orders placed per user per day per category. Finally, rough contains any table that doesn't hold business logic but is required for some temporary work; say you need to check the genre of movies for a list of 100,000 users shared with you in an Excel file. Simply create a table in the rough schema, perform your operations, and drop the table. Now you know very clearly where to find tables based on whether they are raw, aggregated, or simply temporary.
Public schema: Forget it exists. Any table that is not preceded by a schema name gets created there. A lot of clutter: no point in storing any important data there.
Cross-schema joins: There's no stopping you here. You may join as many tables from as many schemas as required. In fact, it is desirable to create dimension tables and join on a PK later, rather than keep all the information in a single table.
Spend some quality time in designing the schema and underlying table structure. When you expand, it'll be easier for you to classify things better in terms of access control. Do let me know if I've missed some obvious points.
You can have multiple databases in a Redshift cluster but I would stick with one. You are correct that schemas (essentially namespaces) are a good way to divide things up. You can query across schemas but not databases.
I would avoid using the public schema as managing certain permissions there can be difficult (easier to deny someone access to public than prevent them from being able to create a table for example).
For best results if you have the time, learn about the permissions system up front. You want to create groups that have access to schemas or tables and add/remove users from groups to control what they can do. Once you have that going it becomes pretty easy to manage.
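A rough sketch of that group-based pattern (schema, group, and user names are made up):

-- Create a group, grant it access to one schema, and manage membership
CREATE GROUP analysts;
GRANT USAGE ON SCHEMA aggregates TO GROUP analysts;
GRANT SELECT ON ALL TABLES IN SCHEMA aggregates TO GROUP analysts;
ALTER GROUP analysts ADD USER jane;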
In addition to the other responses, here are some suggestions for improving schema performance.
First: Automatic compression encodings using COPY command
Improve Amazon Redshift performance by using the COPY command to load data into the database. The COPY command is clever enough to automatically choose the most appropriate encoding settings for the data it uploads, so you don't have to think about it. However, it does so only for the first data load into an empty table.
So, make sure to use a significant data set when uploading data for the first time, which Redshift can assess to set the column encodings in the best way. Uploading only a few lines of test data will prevent Redshift from knowing how best to optimize compression for the real workload.
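A minimal sketch of such a first load; the table, bucket, and IAM role below are placeholders:

-- First load into an empty table; COMPUPDATE ON asks Redshift to pick encodings
COPY fact_orders
FROM 's3://my-bucket/orders/'
IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-copy-role'
FORMAT AS CSV
COMPUPDATE ON;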
Second: Use Best Distribution Style and Key
The distribution style decides how data is distributed across the nodes. Applying a distribution style at the table level tells Redshift how you want to distribute the table and on which key. So, how you specify the distribution style is important for good query performance with Redshift. The style you choose may affect data storage and cluster requirements, and it also affects the time taken by the COPY command to execute.
I recommend setting the distribution style to ALL for smaller dimension tables. For a large dimension, distribute both the dimension and the associated fact table on their join column. To optimize a second large dimension, take the storage hit and distribute it ALL, or even fold the dimension columns into the fact table. A rough DDL sketch of these recommendations follows.
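This is a hedged sketch only; all table and column names are invented for illustration, and the SORTKEY anticipates the next tip:

-- Small dimension: replicate a full copy to every node
CREATE TABLE dim_date (
  date_id  DATE,
  month_nm VARCHAR(16)
)
DISTSTYLE ALL;

-- Large dimension and its fact table distributed on their shared join column
CREATE TABLE dim_customer (
  customer_id   BIGINT,
  customer_name VARCHAR(256)
)
DISTKEY (customer_id);

CREATE TABLE fact_orders (
  order_id    BIGINT,
  customer_id BIGINT,
  order_date  DATE,
  amount      DECIMAL(12,2)
)
DISTKEY (customer_id)
SORTKEY (order_date);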
Third: Use the Best Sort Key
A Redshift database maintains the data in a table in sort-key order if a sort key is specified. Since the data is sorted, each cluster node keeps its partition in a predefined order. (While designing your Redshift schema, also consider the impact on your budget: Redshift is priced by the amount of stored data and by the number of nodes.)
A sort key can improve Amazon Redshift performance significantly, in several ways. First, data filtering: if a WHERE clause filters on a sort-key column, entire data blocks can be skipped. This is because Redshift stores data in blocks, and each block header records the minimum and maximum sort key values; if a filter falls outside that range, the whole block may be skipped.
Second, when joining two tables that are sorted on their join keys, the data is read in matching order and can be merge-joined without separate sort steps. Joining a large dimension to a large fact table is much easier with this method, because neither will fit into a hash table. A small query sketch follows.
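As a small illustration, assuming the fact_orders sketch above is sorted on order_date, a range filter like this lets Redshift skip every block whose recorded min/max order_date falls outside the range:

-- Only blocks overlapping January 2023 are read
SELECT SUM(amount)
FROM fact_orders
WHERE order_date BETWEEN '2023-01-01' AND '2023-01-31';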