Google Cloud Datastore - Delete pattern - google-cloud-platform

I am using Google Datastore to store objects - millions of them. At some point, I no longer want to keep storing old rows in the database. The deletion criterion: delete all rows older than 10 days.
I saw that Google provides two options for this job:
Send delete commands in batches. Of course, you have to GET all the IDs first. That sounds very slow when you have to remove millions of rows, and it's also expensive.
Use the Google Dataflow product, which provides an option to bulk-delete from Datastore. The problem here is again the price - it's high.
The problem with both options above is the pricing. I calculated that deleting 16M rows in a month would cost $480 (Datastore read operations + delete operations), which is too much money for such a small task. On top of that you have to add the Dataflow operation costs.
It seems there is no cheap option to delete data from Datastore - am I wrong?

You don't have to read entities in order to delete them. Deletes are based on keys, so all you need to do is identify the keys. For this you can run a keys-only query, which is much cheaper (just one read operation for the entire projection, although there may be a limit on how many keys can be fetched at a time with a projection query).
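As a rough sketch of that pattern with the Java client library (com.google.cloud.datastore) - the kind name "LogEntry", the "created" timestamp property, and the 10-day cutoff are assumptions for the example:

import com.google.cloud.Timestamp;
import com.google.cloud.datastore.*;
import com.google.cloud.datastore.StructuredQuery.PropertyFilter;
import java.time.Duration;
import java.time.Instant;
import java.util.ArrayList;
import java.util.Date;
import java.util.List;

Datastore datastore = DatastoreOptions.getDefaultInstance().getService();

// Cutoff: everything older than 10 days.
Timestamp cutoff = Timestamp.of(Date.from(Instant.now().minus(Duration.ofDays(10))));

// Keys-only query: fetches keys instead of full entities, which is far cheaper than reading them.
KeyQuery query = Query.newKeyQueryBuilder()
    .setKind("LogEntry")                             // hypothetical kind name
    .setFilter(PropertyFilter.le("created", cutoff)) // hypothetical timestamp property
    .build();
QueryResults<Key> keys = datastore.run(query);

// Delete in batches; a single commit accepts at most 500 mutations.
List<Key> batch = new ArrayList<>();
while (keys.hasNext()) {
  batch.add(keys.next());
  if (batch.size() == 500) {
    datastore.delete(batch.toArray(new Key[0]));
    batch.clear();
  }
}
if (!batch.isEmpty()) {
  datastore.delete(batch.toArray(new Key[0]));
}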
Also, how did you compute $480? As per
https://cloud.google.com/datastore/pricing
for a Multi-region, it costs $0.06 for 100,000 reads and $0.02 for 100,000 deletes. Using these numbers, I get the following for 16M.
16*10^6 * ( (1/1000) * 0.06/10^5 + 0.02 / 10^5) = $3.2096
Here, the 1/1000 factor reflects a single read operation for every 1000 keys fetched with a keys-only query.

Related

How does Cloud Bigtable read rows that are non-contiguous?

Given a large number of known row keys, how does Bigtable read those rows (not a scan operation)? Does it read them one after the other or all at once? If I have a large number of non-contiguous rows that I want to read, is it better to make separate concurrent or parallel calls to read each one, or to hand all the row keys to Bigtable at once, i.e. a "batch read"?
There are three options for a non-contiguous batch read which depend on your latency and CPU requirements. You can do all the reads as get requests in parallel, you can issue a read rows request/scan with multiple ranges that include only one row, or you can do a hybrid.
Reading with multiple parallel get requests
This option can be great if you have a lot of processing power or don't need to read a huge number of rows. This will issue multiple requests to Bigtable, so it's going to have an impact on your CPU utilization. One Bigtable node supports around 10K reads per second, but if you have 1000 rows you need to read individually that might make a dent in your capacity.
Also, if you need all of the requests to resolve before you can process the data, you may run into performance issues: if one request is slow, it slows down the entire result.
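As a minimal sketch of this option (my illustration), assuming the same dataClient, tableId, and printRow() as in the scan example further down, plus a rowKeys list, and using the client's readRowAsync together with ApiFutures from com.google.api.core:

// Issue one asynchronous get per row key, then wait for all of them.
List<ApiFuture<Row>> futures = new ArrayList<>();
for (String rowKey : rowKeys) {
  futures.add(dataClient.readRowAsync(tableId, rowKey));
}
// get() throws InterruptedException/ExecutionException, so call it from code that handles them.
for (Row row : ApiFutures.allAsList(futures).get()) {
  if (row != null) {   // readRowAsync resolves to null if the row does not exist
    printRow(row);
  }
}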
Scan with multiple rows
Bigtable supports scanning with multiple filters. One filter is a row range based on the row key. You can create a row range filter that includes exactly one row and do a scan with a filter for each row.
The Bigtable client libraries support queries like this, so you can just pass the row keys and don't need to create all of those row range filters. However, it's important to know what is happening under the hood for performance. This one query will be performed sequentially on the Bigtable server, so it could take a lot more time than multiple gets.
In Java, to do this kind of query, you just pass multiple row keys to the Query builder like so:
// Each rowKey() call adds one single-row range to the same Query.
Query query = Query.create(tableId)
    .rowKey("phone#4c410523#20190501")
    .rowKey("phone#4c410523#20190502");
ServerStream<Row> rows = dataClient.readRows(query);
for (Row row : rows) {
  printRow(row);
}
Hybrid approach
Depending on the scale of rows you're working with, it may make sense to take your set of row keys, divide them up and issue multiple scans in parallel. You can get the benefit of fewer requests while still potentially getting better latency since the requests are parallelized.
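A sketch of what that could look like (again my illustration, assuming dataClient, tableId, and a rowKeys list; the chunk count is something you would tune):

int chunkCount = 4;                                  // tune to your latency/CPU trade-off
int chunkSize = (rowKeys.size() + chunkCount - 1) / chunkCount;
ExecutorService pool = Executors.newFixedThreadPool(chunkCount);
List<Future<List<Row>>> pending = new ArrayList<>();

for (int i = 0; i < rowKeys.size(); i += chunkSize) {
  List<String> chunk = rowKeys.subList(i, Math.min(i + chunkSize, rowKeys.size()));
  pending.add(pool.submit(() -> {
    // One multi-row-key scan per chunk, executed in parallel with the others.
    Query query = Query.create(tableId);
    for (String key : chunk) {
      query = query.rowKey(key);
    }
    List<Row> rows = new ArrayList<>();
    for (Row row : dataClient.readRows(query)) {
      rows.add(row);
    }
    return rows;
  }));
}

for (Future<List<Row>> future : pending) {
  for (Row row : future.get()) {                     // future.get() throws checked exceptions
    printRow(row);
  }
}
pool.shutdown();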
I would recommend experimenting to see which scenario works best for your use case, or leave a comment with more information on your use case and I can see if there is more information I can offer you.

AWS Athena partition fetch all paths

Recently, I've experienced an issue with AWS Athena when there is quite high number of partitions.
The old version had a database and tables with only one partition level, say id=x. Take one table as an example: we store payment parameters per id (product), and there are not that many IDs - assume around 1000-5000. When querying that table with the id in the where clause, like ".. where id = 10", the queries returned pretty fast. Assume we update the data twice a day.
Lately, we've been thinking of adding another partition level for the day, like "../id=x/dt=yyyy-mm-dd/..". This means the number of partitions grows by the number of IDs every day; with 3000 IDs we'd get approximately 3000x30=90000 new partitions per month. Thus, a rapid growth in the number of partitions.
On, say, 3 months of data (~270k partitions), we'd like a query like the following to return in at most 20 seconds or so.
select count(*) from db.table where id = x and dt = 'yyyy-mm-dd'
This takes like a minute.
The Real Case
It turns out Athena first fetches all the partitions (metadata) and S3 paths (regardless of the where clause) and only then filters down to the S3 paths you actually asked for in the where condition. The first part (fetching all S3 paths by partition) takes time proportional to the number of partitions.
The more partitions you have, the slower the query executes.
Intuitively, I expected Athena to fetch only the S3 paths matching the where clause - I mean, that would be the whole magic of partitioning. Instead it seems to fetch all paths.
Does anybody know a workaround, or are we using Athena the wrong way?
Should Athena be used only with a small number of partitions?
Edit
In order to clarify the statement above, I add a piece from support mail.
from Support
...
You mentioned that your new system has 360000 which is a huge number.
So when you are doing select * from <partitioned table>, Athena first download all partition metadata and searched S3 path mapped with those partitions. This process of fetching data for each partition lead to longer time in query execution.
...
Update
An issue has been opened on the AWS forums; it is linked here.
Thanks.
This is impossible to properly answer without knowing the amount of data, what file formats, and how many files we're talking about.
TL;DR: I suspect you have partitions with thousands of files and that the bottleneck is listing and reading them all.
For any data set that grows over time you should have temporal partitioning, on date or even time, depending on query patterns. Whether you should also partition on other properties depends on a lot of factors, and in the end it often turns out that not partitioning is better. Not always, but often.
Using reasonably sized (~100 MB) Parquet files can in many cases be more effective than partitioning. The reason is that partitioning increases the number of prefixes that have to be listed on S3, and the number of files that have to be read. A single 100 MB Parquet file can be more efficient than ten 10 MB files in many cases.
When Athena executes a query it will first load partitions from Glue. Glue supports limited filtering on partitions, and will help a bit in pruning the list of partitions – so to the best of my knowledge it's not true that Athena reads all partition metadata.
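You can observe that filtering yourself through the Glue API (this is an illustration of the filtering Glue supports, not a claim about Athena's internal code). With the AWS SDK for Java, a partition predicate can be pushed into GetPartitions; the database, table, and predicate below are made up, and string-typed partition keys are assumed:

import com.amazonaws.services.glue.AWSGlue;
import com.amazonaws.services.glue.AWSGlueClientBuilder;
import com.amazonaws.services.glue.model.GetPartitionsRequest;
import com.amazonaws.services.glue.model.GetPartitionsResult;

AWSGlue glue = AWSGlueClientBuilder.defaultClient();

// Only partitions matching the expression are returned, instead of all of them.
GetPartitionsRequest request = new GetPartitionsRequest()
    .withDatabaseName("db")
    .withTableName("table")
    .withExpression("id = '10' AND dt = '2019-05-01'");   // hypothetical partition predicate

GetPartitionsResult result = glue.getPartitions(request);
result.getPartitions().forEach(p ->
    System.out.println(p.getStorageDescriptor().getLocation()));  // the S3 prefix that would be listed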
When it has the partitions it will issue LIST operations to the partition locations to gather the files that are involved in the query – in other words, Athena won't list every partition location, just the ones in partitions selected for the query. This may still be a large number, and these list operations are definitely a bottleneck. It becomes especially bad if there are more than 1000 files in a partition, because that's the page size of S3's list operations and multiple requests will have to be made sequentially.
With all files listed Athena will generate a list of splits, which may or may not equal the list of files – some file formats are splittable, and if files are big enough they are split and processed in parallel.
Only after all of that work is done the actual query processing starts. Depending on the total number of splits and the amount of available capacity in the Athena cluster your query will be allocated resources and start executing.
If your data was in Parquet format, and there was one or a few files per partition, the count query in your question should run in a second or less. Parquet has enough metadata in the files that a count query doesn't have to read the data, just the file footer. It's hard to get any query to run in less than a second due to the multiple steps involved, but a query hitting a single partition should run quickly.
Since your query takes a minute or more, I suspect you have hundreds of files per partition, if not thousands, and that your bottleneck is the time it takes to run all those list and get operations against S3.

Partitioned tables in BigQuery

I was wondering what the point of using a partitioned table in BigQuery is. Most queries seem to take about the same time to finish regardless of size (ignoring extremes, I'm generalizing). Is this mainly a matter of reducing costs on the bytes processed, or what is the main use case of partitioning tables in BQ?
https://cloud.google.com/bigquery/docs/creating-column-partitions
There are multiple benefits, mainly around cost.
By writing a query that reads only, e.g., 7 days of partitions instead of 7 years, you have lower costs (see the sketch below).
Partitions you haven't modified for 90 days are billed at the lower long-term storage rate.
You can reload a single day's data much more easily, without having to work around it.
You are still recommended to use yearly tables, e.g. mytable_2018, but you are no longer required to have daily tables, e.g. mytable_20180101. This leads to simpler queries, and reading more than 1000 tables (which is a hard limit) is no longer a problem.
When you modify a schema, you only need to alter a few tables; you no longer need to script ALTERs on thousands of them.
It also means fewer bytes processed, and the cloud platform can optimize this better and needs fewer resources.
By reorganizing data into partitioned tables, query times will benefit in the future: as customers move their data, the cloud engineering team will optimize the service for this usage.
You see clear cost-wise benefits if your existing data is at least a couple of terabytes.
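To make the first point concrete, here is a sketch using the Java client (com.google.cloud.bigquery); the dataset, table, and the event_date partitioning column are assumptions for the example:

import com.google.cloud.bigquery.BigQuery;
import com.google.cloud.bigquery.BigQueryOptions;
import com.google.cloud.bigquery.FieldValueList;
import com.google.cloud.bigquery.QueryJobConfiguration;

BigQuery bigquery = BigQueryOptions.getDefaultInstance().getService();

// Filtering on the partitioning column prunes the scan to a single partition,
// so you are billed only for the bytes in that day's data.
QueryJobConfiguration config = QueryJobConfiguration.newBuilder(
        "SELECT COUNT(*) AS cnt FROM `mydataset.events` WHERE event_date = '2018-06-01'")
    .build();

for (FieldValueList row : bigquery.query(config).iterateAll()) {  // query() throws InterruptedException
  System.out.println(row.get("cnt").getLongValue());
}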

huge inserts, no updates, moderate reads with billions of rows datastore

Our system needs 10 million inserts per day (structured data) and storage for up to 3 months (which adds up to 300 million records, after which we can purge older ones). No updates are required, and it should support simple queries (e.g. queries on particular columns, sorted by date). Which data storage solution is efficient for this case? We are thinking an RDBMS will be slow for billions of records.
MySQL might work, if you can optimize the way you insert (see the sketch after the link below).
http://dev.mysql.com/doc/refman/5.7/en/optimizing-innodb-bulk-data-loading.html
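As a sketch of what that guide boils down to in plain JDBC (the table, columns, and the Event type here are hypothetical): turn off autocommit, batch the inserts, and commit in large chunks; rewriteBatchedStatements lets Connector/J collapse the batch into multi-row INSERTs.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

try (Connection conn = DriverManager.getConnection(
        "jdbc:mysql://localhost/mydb?rewriteBatchedStatements=true", "user", "pass")) {
  conn.setAutoCommit(false);                       // one transaction per chunk, not per row
  try (PreparedStatement ps = conn.prepareStatement(
          "INSERT INTO events (event_time, payload) VALUES (?, ?)")) {
    int count = 0;
    for (Event e : events) {                       // Event/events: your own data, hypothetical here
      ps.setTimestamp(1, e.time);
      ps.setString(2, e.payload);
      ps.addBatch();
      if (++count % 10_000 == 0) {                 // flush every 10k rows
        ps.executeBatch();
        conn.commit();
      }
    }
    ps.executeBatch();                             // flush the remainder
    conn.commit();
  }
}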
Of course, it depends on whether your ~10M daily inserts are evenly spread out or come in peaks.
If you have peaks of very high TPS, I would recommend Aerospike.
MongoDB can also work; however, you will need to use sharding.

Storing Time Series in AWS DynamoDb

I would like to store 1M+ different time series in Amazon's DynamoDB database. Each time series will have about 50K data points. A data point consists of a timestamp and a value.
The application will add new data points to time series frequently (all the time) and will retrieve time series (usually a whole series) from time to time, for analytics.
How should I structure the database? Should I create a separate table for each timeseries? Or should I put all data points in one table?
Assuming your data is immutable and given the size, you may want to consider Amazon Redshift; it's written for petabyte-sized reporting solutions.
In Dynamo, I can think of a few viable designs. In the first, you could use one table with a compound hash/range key (both strings). The hash key would be the time series name, the range key would be the timestamp as an ISO 8601 string (which has the pleasant property that alphabetical ordering is also chronological ordering), and there would be an extra 'value' attribute on each item. This gives you the ability to select everything from a time series (Query on hash key equality) and a subset of a time series (Query on hash key equality plus a range key BETWEEN clause).
However, your main problem is the "hotspot" problem: internally, Dynamo will partition your data by hash key and will disperse your provisioned read capacity over all your partitions. So you may have 1000 KB of reads a second, but if you have 100 partitions, then you have only 10 KB a second for each partition, and reading all the data from a single time series (a single hash key) will only hit one partition. So you may think your 1000 KB of reads gives you 1 MB a second, but if you have 10 MB stored it might take much longer to read, as your single partition will throttle you much more heavily.
On the upside, DynamoDB has an extremely high but costly upper-bound on scaling; if you wanted you could pay for 100,000 Read Capacity units, and have sub-second response times on all of that data.
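A sketch of querying that first single-table design with the AWS SDK for Java (v1); the table name "TimeSeries" and the attribute names are assumptions for the example:

import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;
import com.amazonaws.services.dynamodbv2.model.AttributeValue;
import com.amazonaws.services.dynamodbv2.model.QueryRequest;
import com.amazonaws.services.dynamodbv2.model.QueryResult;
import java.util.Map;

AmazonDynamoDB client = AmazonDynamoDBClientBuilder.defaultClient();

// Hash key equality plus a range key BETWEEN clause selects one slice of one series.
QueryRequest request = new QueryRequest()
    .withTableName("TimeSeries")
    .withKeyConditionExpression("seriesName = :series AND #ts BETWEEN :from AND :to")
    .withExpressionAttributeNames(Map.of("#ts", "timestamp"))   // "timestamp" is a reserved word
    .withExpressionAttributeValues(Map.of(
        ":series", new AttributeValue("sensor-42"),
        ":from", new AttributeValue("2015-01-01T00:00:00Z"),
        ":to", new AttributeValue("2015-01-02T00:00:00Z")));

// A single Query call returns at most 1 MB, so page through the results.
Map<String, AttributeValue> lastKey = null;
do {
  QueryResult result = client.query(request.withExclusiveStartKey(lastKey));
  result.getItems().forEach(item ->
      System.out.println(item.get("timestamp").getS() + " -> " + item.get("value").getS()));
  lastKey = result.getLastEvaluatedKey();
} while (lastKey != null);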
Another theoretical design would be to store every time series in a separate table, but I don't think DynamoDB is meant to scale to millions of tables, so this is probably a no-go.
You could try to spread out your time series across 10 tables, where "highly read" data goes in table 1, "almost never read" data in table 10, and all other data somewhere in between. This would let you "game" the provisioned throughput / partition throttling rules, but at a high degree of complexity in your design. Overall, it's probably not worth it: where do you put new time series? How do you remember where they all are? How do you move a time series?
From my own experience, I think DynamoDB supports some internal "bursting" on these kinds of reads, so it's possible my numbers are off and you will get adequate performance. However, my verdict is to look into Redshift.
How about dumping each time series into JSON (or similar) and storing it in S3? At most you'd need a lookup from somewhere like Dynamo.
You may still need Redshift to process your inputs.
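A minimal sketch of that idea with the AWS SDK for Java; the bucket, table, and attribute names are made up, and seriesAsJson stands for your serialized data points:

import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;
import com.amazonaws.services.dynamodbv2.model.AttributeValue;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import java.util.Map;

AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();
AmazonDynamoDB dynamo = AmazonDynamoDBClientBuilder.defaultClient();

String seriesName = "sensor-42";
String s3Key = "series/" + seriesName + ".json";

// Store the whole series as one object in S3...
s3.putObject("my-timeseries-bucket", s3Key, seriesAsJson);

// ...and keep a small pointer item in DynamoDB so the application can find it.
dynamo.putItem("SeriesIndex", Map.of(
    "seriesName", new AttributeValue(seriesName),
    "s3Location", new AttributeValue("s3://my-timeseries-bucket/" + s3Key)));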