I am not able to use indexes on compressed chunks, so I need to reconfigure the compression policy so that at least 1 month of data resides in uncompressed ("hot") chunks instead of 7 days.
The easiest way to do this is to simply remove the current compression policy and create a new one with a new interval. Removing the policy won't decompress any chunks.
https://docs.timescale.com/api/latest/compression/remove_compression_policy/
That aside, you can see all policies (compression, continuous aggregates, data retention, etc.) through the Jobs view, including the configuration for each job. Technically you can alter_job and update the configuration (stored as JSONB), but in most cases it's easier to just remove the policy and set up a new one with the appropriate add... API.
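For example, a minimal sketch, assuming the TimescaleDB 2.x policy functions and a hypothetical hypertable named metrics (the connection string is a placeholder too):

import psycopg2

# Placeholder connection string and hypertable name -- adjust to your setup.
conn = psycopg2.connect("postgresql://user:password@localhost:5432/tsdb")
with conn, conn.cursor() as cur:
    # Removing the policy leaves already-compressed chunks untouched.
    cur.execute("SELECT remove_compression_policy('metrics');")
    # New policy: only compress chunks once their data is older than 1 month.
    cur.execute(
        "SELECT add_compression_policy('metrics', compress_after => INTERVAL '1 month');"
    )
conn.close()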
Related
I have the following requirements:
For every deleted record in RDS, we need to archive it somewhere cheaper on AWS.
Reduce storage cost
Not using Glacier
Context-oriented (e.g. a file per table)
Re-import is not a requirement
I'm not an experienced AWS user, so I'm still a bit lost among the number of options it offers, and I'd like to know if you have more ideas to help me sort this out.
Initial thoughts:
The microservice that deletes the record might send it to a broker (RabbitMQ, for example) and another microservice (let's call it the archiver) will listen to it, write the records into a file, zip it, and send it to S3. This approach has some technical challenges though: for big files to make sense, I need to wait for the queue to grow a bit, wrap it into a stream, and zip it into S3. Transaction control is very weak as well, since file writing and message acks are separate signals, i.e. I'll only remove the messages from the broker after the file is created.
Add a new "deleted" (bool) column to the "archivable" tables and run a separate job that fetches only those records and saves them to S3. Discarded because they don't want the new microservice to have access to other services' databases.
Following the same approach as in the first item, but instead of saving to S3, save into a cheaper database. SimpleDB?
Option 1, but instead of RabbitMQ, write it to a Kinesis Data Firehose delivery stream and direct that to an S3 location - it doesn't get much cheaper or easier than that.
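For illustration, a hedged sketch of the producer side; the delivery stream name and region are placeholders, and the stream is assumed to already be configured with an S3 destination (Firehose handles the buffering and compression for you):

import json
import boto3

firehose = boto3.client("firehose", region_name="us-east-1")

def archive_deleted_record(table_name: str, record: dict) -> None:
    # One JSON line per deleted row; Firehose batches them and writes the
    # batches to the configured S3 location.
    payload = json.dumps({"table": table_name, "row": record}) + "\n"
    firehose.put_record(
        DeliveryStreamName="rds-deletes-archive",
        Record={"Data": payload.encode("utf-8")},
    )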
My Athena DB is in the ap-south-1 region and AWS QuickSight isn't available in that region.
How can I connect QuickSight with Athena in that case?
All you need to do is copy the table definitions from one region to another. There are several ways to do that.
With AWS Console
This approach is the simplest one and doesn't require additional setup, as everything is based on Athena DDL statements.
Get the table definition with:
SHOW CREATE TABLE `database`.`table`;
This should output something like:
CREATE EXTERNAL TABLE `database`.`table`(
`col_1` string,
`col_2` bigint,
...
`col_n` string)
ROW FORMAT SERDE
'org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe'
STORED AS INPUTFORMAT
'org.apache.hadoop.hive.ql.io.parquet.MapredParquetInputFormat'
OUTPUTFORMAT
'org.apache.hadoop.hive.ql.io.parquet.MapredParquetOutputFormat'
LOCATION
's3://some/location/on/s3'
TBLPROPERTIES (
'classification'='parquet',
...
'compressionType'='gzip')
Switch to the desired region.
Create a database where you want to store the table definitions, or use the default one.
Execute the statement produced by SHOW CREATE TABLE. Note that you might need to change the database name to match the previous step.
If your table is partitioned, then you need to load all partitions.
If the data on S3 adheres to the Hive partitioning style, i.e.
s3://some/location/on/s3
|
├── day=01
| ├── hour=00
| └── hour=01
...
then you can use
MSCK REPAIR TABLE `database`.`table`
Alternatively, you can load partitions one by one
ALTER TABLE `database`.`table`
ADD PARTITION (day='01', hour='00')
LOCATION 's3://some/location/on/s3/day=01/hour=00';
ALTER TABLE `database`.`table`
ADD PARTITION (day='01', hour='01')
LOCATION 's3://some/location/on/s3/day=01/hour=01';
...
With AWS API
You can use an AWS SDK, e.g. boto3 for Python, which provides an easy-to-use, object-oriented API. Here you have two options:
Use the Athena client. Like in the previous approach, you would need to get the table definition statement from the AWS Console, but all other steps can be done in a scripted manner using the start_query_execution method of the Athena client. There are plenty of resources online, e.g. this one.
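For instance, a minimal boto3 sketch (the region, database, and query-result bucket are placeholders) that executes the captured DDL in the target region:

import boto3

athena = boto3.client("athena", region_name="us-east-1")

# Paste here the full statement returned by SHOW CREATE TABLE in the source region.
ddl = """
CREATE EXTERNAL TABLE `database`.`table`(
  `col_1` string)
ROW FORMAT SERDE 'org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe'
STORED AS INPUTFORMAT 'org.apache.hadoop.hive.ql.io.parquet.MapredParquetInputFormat'
OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.parquet.MapredParquetOutputFormat'
LOCATION 's3://some/location/on/s3'
"""

response = athena.start_query_execution(
    QueryString=ddl,
    QueryExecutionContext={"Database": "database"},
    ResultConfiguration={"OutputLocation": "s3://my-athena-query-results/"},
)
print(response["QueryExecutionId"])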
Use the AWS Glue client. This method operates solely on the AWS Glue Data Catalog, which is used by Athena during query execution. The main idea is to create two Glue clients, one for the source catalog and one for the destination catalog. For example:
import boto3
KEY_ID = "__KEY_ID__"
SECRET = "__SECRET__"
glue_source = boto3.client(
'glue',
region_name="ap-south-1",
aws_access_key_id=KEY_ID,
aws_secret_access_key=SECRET
)
glue_destination = boto3.client(
'glue',
region_name="us-east-1",
aws_access_key_id=KEY_ID,
aws_secret_access_key=SECRET
)
# Alternatively, you can create the clients from sessions with named profiles
glue_source = boto3.session.Session(profile_name="profile_for_ap_south_1").client("glue")
glue_destination = boto3.session.Session(profile_name="profile_for_us_east_1").client("glue")
Then you would need to use the corresponding get and create methods (e.g. get_table and create_table). This also requires parsing the responses you get from the Glue clients.
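As a rough sketch of copying a single table definition, building on the glue_source and glue_destination clients defined above (the database and table names are placeholders, and the exact set of read-only fields to strip can vary by boto3 version):

database_name = "database"   # placeholder
table_name = "table"         # placeholder

response = glue_source.get_table(DatabaseName=database_name, Name=table_name)
table_input = response["Table"]

# get_table returns read-only fields that create_table does not accept as input.
for key in ("DatabaseName", "CreateTime", "UpdateTime", "CreatedBy",
            "IsRegisteredWithLakeFormation", "CatalogId", "VersionId"):
    table_input.pop(key, None)

glue_destination.create_table(DatabaseName=database_name, TableInput=table_input)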
With AWS Glue crawlers
Although you can use AWS Glue crawlers to "rediscover" the data on S3, I wouldn't recommend this approach since you already know the structure of your data.
The answer from Ilya Kisil is correct, but I would like to add some more details and alternative solutions.
There are two different approaches you can take.
As suggested by Ilya, copy the table definitions from one region (source region) to another (destination region). The idea is to reference the data of the other region.
I found the Glue Crawlers much easier and faster. You need to create a Glue Crawler in the source region and specify the S3 bucket of the destination region where the metadata is located. Once you do this, you will see in the Athena source region all the tables of the destination region! Behind the scenes, what the Glue Crawler does is what Ilya explained in the "With AWS Console" section. So, instead of creating the tables one by one and loading the partitions (if any exist), you can just create one Glue Crawler (a rough sketch follows).
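For reference, a hedged boto3 sketch of creating such a crawler; the crawler name, IAM role ARN, Glue database, S3 path, and region are all placeholders and should point at the region you query from and at the bucket that actually holds the data:

import boto3

glue = boto3.client("glue", region_name="us-east-1")

glue.create_crawler(
    Name="cross-region-tables",
    Role="arn:aws:iam::123456789012:role/GlueCrawlerRole",
    DatabaseName="replicated_db",
    Targets={"S3Targets": [{"Path": "s3://bucket-in-the-other-region/some/location/"}]},
)
glue.start_crawler(Name="cross-region-tables")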
Note that this only holds a reference to your destination region tables; it doesn't copy the data. At first glance, that seems great! Why should we copy the data if we can reference it? But when you take a deeper look, you may find that you are probably going to pay more money. When you reference data, you pay for the data each query scans, and if you consume the data a lot and you have TB/PB of data, it might be too expensive. If cost is a consideration for you, I would recommend you consider the second solution.
Also note that although the data is not copied to the source region, only referenced, behind the scenes AWS temporarily stores data in the source region when you execute a query. So, if you need to be GDPR compliant, you might need to be aware of that.
Copy the data from the destination region to the source region and have a process that keeps it synchronized. Then you will not pay for the Athena queries on the referenced data, but rather for the storage, which is usually cheaper. If possible, you can also copy just what you need or aggregate the data, so you have less copied storage and therefore less cost.
A convenient way to do this is by creating a Glue Job that is responsible for copying the data from the destination region's S3 bucket to the source region's S3 bucket. You can then add it to a Glue Workflow that runs this job once a day, or at whatever frequency suits you.
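As a rough illustration (the bucket names and prefix are placeholders), the copy itself can be as simple as a managed S3 copy loop, which could also run inside a Glue Python shell job:

import boto3

s3 = boto3.resource("s3")
src_bucket = s3.Bucket("data-bucket-in-destination-region")
dst_bucket = s3.Bucket("data-bucket-in-source-region")

# Copy every object under the table prefix; S3 performs the copy server-side.
for obj in src_bucket.objects.filter(Prefix="tables/my_table/"):
    dst_bucket.copy({"Bucket": src_bucket.name, "Key": obj.key}, obj.key)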
To summarize:
There are lots of things to consider, and I mentioned some of them. Each approach has advantages and disadvantages, and you can find the right one for your use case.
(Solution 1) Advantages:
Easy. Just some clicks.
Fast.
Referencing the data and no need to have duplicated data.
(Solution 1) Disadvantages:
Might be way more expensive (depends on the data usage).
(Solution 2) Advantages:
Might be much cheaper.
(Solution 2) Disadvantages:
Slower/longer solution.
Need to copy the existing data and then have a process to copy new data.
I have a use case where I have to back up a 200+ TB, 18M-object S3 bucket that changes often (it's used in batch processing of critical data) to another account. I need to add a verification step, but due to the large bucket size, object count, and frequency of change, this is tricky.
My current thought is to pull the eTags from the original bucket and the archive bucket, and then write a streaming diff tool to compare the values. Has anyone here had to approach this problem, and if so, did you come up with a better answer?
Firstly, if you wish to keep two buckets in sync (once you've done the initial sync), you can use Cross-Region Replication (CRR).
To do the initial sync, you could try using the AWS Command-Line Interface (CLI), which has an aws s3 sync command. However, it might have some difficulties with a large number of files -- I suggest you give it a try. It uses keys, dates, and file size to determine which files to sync.
If you do wish to create your own sync app, then the eTag is indeed a definitive way to compare files.
To make things simple, activate Amazon S3 Inventory, which can provide a daily listing of all files in a bucket, including eTag. You could then do a comparison between the Inventory files to discover which remaining files require synchronization.
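As a rough sketch of that comparison (assuming CSV-formatted inventory reports that have already been downloaded and decompressed, and a hypothetical column order of bucket, key, size, last-modified, eTag):

import csv

def load_inventory(path: str) -> dict:
    # Inventory CSV files have no header row; index the columns positionally.
    with open(path, newline="") as f:
        return {row[1]: row[4] for row in csv.reader(f)}

source = load_inventory("source-inventory.csv")
archive = load_inventory("archive-inventory.csv")

# Keys missing from the archive, or present with a different eTag, still need syncing.
out_of_sync = [key for key, etag in source.items() if archive.get(key) != etag]
print(f"{len(out_of_sync)} objects need to be copied")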
For anyone looking for a way to solve this problem in an automated way (as was I),
I created a small python script that leverages S3 Inventories and Athena to do the comparison somewhat efficiently. (This is basically automation of John Rosenstein's suggestion)
You can find it here https://github.com/forter/s3-compare
I have a few GCP projects with log sinks to different storage buckets. I'd like to combine them into a single bucket. But the Stackdriver export doesn't add any distinguishing information to the object names it creates; they all look like cloudaudit.googleapis.com/activity/2017/11/14/00:00:00_00:59:59_S0.json
What will happen if I start pushing them all to a single bucket? Will the different project sinks overwrite each other's objects? Is there any way to distinguish which project created the logs just from the object?
If not, I guess I should switch to pubsub sinks, and then write some code that produces objects with more desirable names. Are there any established patterns or examples for doing this?
Update: I filed https://issuetracker.google.com/issues/69371200 for this issue.
To enable this, just select custom destination on the sink and point to the bucket with this format: storage.googleapis.com/[BUCKET_ID].
I've just enabled this in a couple of my projects, as I'm curious to see the results when exporting to a bucket. However, I have been using a single BQ sink for all my projects, and the tables created have all the logs mixed together, so no logs are lost when using a single BQ sink.
I'm assuming a GCS sink will work in the same way, but I'll let you know in a couple of days.
If a single bucket sink does not work, you can always use a single BQ sink (that will help in analyzing the logs), and when you no longer want to have them in BQ, export them and store the files wherever you want.
Also, since you'll be writing to your sink constantly, you can't use nearline or coldline, so the storage pricing is better in BQ than a regional bucket (0.02 USD/GB in BQ vs somewhere between 0.02 and 0.35 USD/GB for regional storage, depending on the region; BQ has 10GB free monthly, GCS 5GB).
I would generally recommend using a BQ sink, but I'll tell you what happens with my bucket logs.
Update:
A few hours later, and I've verified that shared bucket sinks work pretty much as you would expect. It concatenates logs chronologically regardless of the project origin, and only creates a single file for each time window. Hope this helps! (I still prefer BQ as a log sink...)
Update 2:
For the behavior you seek in the feature request, I would use BQ, but you could just as easily grep the project ID and separate the logs:
grep '"logName":"projects/<your-project-id>/' mixed-log.json > single-project-log.json
Or just get a Cloud Function triggered by bucket updates (so, every time you receive a log file in the sink) to run this for you; a rough sketch follows this list.
Or namespace your buckets and have a Cloud Function move the files to wherever you need as soon as they are written.
The possibilities are endless!
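For reference, a hedged sketch of such a Cloud Function (a 1st-gen background function on the object-finalize trigger of the shared sink bucket; the output prefix is hypothetical, and the export files are assumed to be newline-delimited JSON, as the grep above relies on):

import json
from collections import defaultdict
from google.cloud import storage

def split_log_by_project(event, context):
    client = storage.Client()
    bucket = client.bucket(event["bucket"])
    blob = bucket.blob(event["name"])

    per_project = defaultdict(list)
    for line in blob.download_as_text().splitlines():
        entry = json.loads(line)
        # logName looks like "projects/<project-id>/logs/...".
        project_id = entry["logName"].split("/")[1]
        per_project[project_id].append(line)

    for project_id, lines in per_project.items():
        out = bucket.blob(f"by-project/{project_id}/{event['name']}")
        out.upload_from_string("\n".join(lines), content_type="application/json")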
If you have an organization or folder which includes all the projects that you want to collect logs from, then you can create a sink that collects from all projects in that org/folder.
Unfortunately, you cannot do this from the Cloud Console. Instead you must use gcloud with the --organization or --folder option, or the API.
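For example (the sink name, bucket, and organization ID below are placeholders):

gcloud logging sinks create all-projects-to-gcs \
    storage.googleapis.com/my-central-bucket \
    --organization=123456789012 --include-children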
S3 FAQ mentions that "Amazon S3 buckets in all Regions provide read-after-write consistency for PUTS of new objects and eventual consistency for overwrite PUTS and DELETES." However, I don't know how long it takes to get eventual consistency. I tried to search for this but couldn't find an answer in S3 documentation.
Situation:
We have a website that consists of 7 steps. When the user clicks save in each step, we want to save a JSON document (containing the information of all 7 steps) to Amazon S3. Currently we plan to:
Create a single S3 bucket to store all JSON documents.
When the user saves step 1, we create a new item in S3.
When the user saves steps 2-7, we overwrite the existing item.
After the user saves a step and refreshes the page, they should be able to see the information they just saved, i.e. we want to make sure that we always read what was just written.
The full JSON document (all 7 steps completed) is around 20 KB.
After the user clicks the save button, we can freeze the page for some time so they cannot make other changes until the save is finished.
Question:
How long does it take for AWS S3 to save and load an item? (We can freeze our website when document is being saved to S3)
Is there a function to calculate save/load time based on item size?
Is the save/load time going to be different if I choose another S3 region? If so, which is the best region for Seattle?
I wanted to add to error2007s's answer.
How long does it take for AWS S3 to save and load an item? (We can freeze our website when document is being saved to S3)
It's not only that you will not find the exact time anywhere - there's actually no such thing as an exact time. That's just what "eventual consistency" is all about: consistency will be achieved eventually. You can't know when.
If somebody gave you an upper bound for how long a system would take to achieve consistency, then you wouldn't call it "eventually consistent" anymore. It would be "consistent within X amount of time".
The problem now becomes, "How do I deal with eventual consistency?" (instead of trying to "beat it")
To really find the answer to that question, you need to first understand what kind of consistency you truly need, and how exactly the eventual consistency of S3 could affect your workflow.
Based on your description, I understand that you would write a total of 7 times to S3, once for each step you have. For the first write, as you correctly cited the FAQs, you get strong consistency for any reads after that. For all the subsequent writes (which are really "replacing" the original object), you might observe eventual consistency - that is, if you try to read the overwritten object, you might get the most recent version, or you might get an older version. This is what is referred to as "eventual consistency" on S3 in this scenario.
A few alternatives for you to consider:
don't write to S3 on every single step; instead, keep the data for each step on the client side, and then only write 1 single object to S3 after the 7th step. This way, there's only 1 write, no "overwrites", so no "eventual consistency". This might or might not be possible for your specific scenario; you need to evaluate that.
alternatively, write to S3 objects with different names for each step. E.g., something like: after step 1, save that to bruno-preferences-step-1.json; then, after step 2, save the results to bruno-preferences-step-2.json; and so on, then save the final preferences file to bruno-preferences.json, or maybe even bruno-preferences-step-7.json, giving yourself the flexibility to add more steps in the future. Note that the idea here is to avoid overwrites, which could cause eventual consistency issues. Using this approach, you only write new objects; you never overwrite them.
finally, you might want to consider Amazon DynamoDB. It's a NoSQL database; you can securely connect to it directly from the browser or from your server. It provides you with replication, automatic scaling, and load distribution (just like S3). And you also have the option to tell DynamoDB that you want to perform strongly consistent reads (the default is eventually consistent reads; you have to change a parameter to get strongly consistent reads; a minimal sketch follows). DynamoDB is typically used for "small" records, and 20 KB is definitely within the range -- the maximum size of a record would be 400 KB as of today. You might want to check this out: DynamoDB FAQs: What is the consistency model of Amazon DynamoDB?
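As a minimal sketch of that last option with boto3 (the table name, key schema, and attribute names are hypothetical; the table is assumed to already exist with user_id as its partition key):

import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("user-preferences")

# Write (or overwrite) the preferences document for this user.
table.put_item(Item={"user_id": "bruno", "preferences": {"step": 3, "theme": "dark"}})

# Strongly consistent read: returns the result of the latest committed write.
response = table.get_item(Key={"user_id": "bruno"}, ConsistentRead=True)
item = response.get("Item")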
How long does it take for AWS S3 to save and load an item? (We can freeze our website when document is being saved to S3)
You will not find the exact time anywhere. If you ask AWS, they will give you approximate timings. Your file is 20 KB, so in my experience with S3 the time will be more or less 60-90 seconds.
Is there a function to calculate save/load time based on item size?
No, there is no function with which you can calculate this.
Is the save/load time gonna be different if I choose another S3 region? If so which is the best region for Seattle?
For Seattle, US West (Oregon) will work with no problem.
You can also take a look at this experiment for a comparison: https://github.com/andrewgaul/are-we-consistent-yet