I understand that AWS doesn't support a direct copy of a given table from one cluster to another; we need to UNLOAD from one and then COPY into the other. However, this applies to a single table. Does the same approach apply to a whole schema as well?
say I have a schema that looks like
some_schema
|
-- table1
-- table2
-- table3
another_schema
|
-- table4
-- table5
and I want to copy some_schema to another cluster, but don't need another_schema. Making a snapshot doesn't make sense if there are too many other schemas (say, another_schema2, another_schema3, another_schema4, etc., each with multiple tables in it).
I know I can do UNLOAD some_schema.table1 and then COPY some_schema.table1, but what can I do if I just want to copy the entire some_schema?
I believe unloading an entire schema is not available, but you have a couple of options depending on the size of your cluster and the number of tables you'd like to copy to the new cluster.
Create a script to generate UNLOAD and COPY commands for the schemas you'd like to copy (see the UPDATE below)
Create a snapshot, restore tables selectively. https://docs.aws.amazon.com/redshift/latest/mgmt/working-with-snapshots.html
If the number of tables to be excluded from the copy is not big, you can recreate them with CTAS using the BACKUP NO option, so they will not be included when you create a snapshot.
To me, option 1 looks the easiest; let me know if you need any help with that.
UPDATE:
Here is the SQL to generate the UNLOAD statements:
select 'unload (''select * from '||n.nspname||'.'||c.relname||''') to ''s3_location''
access_key_id ''accesskey''
secret_access_key ''secret_key''
delimiter ''your_delimiter''
PARALLEL ON
GZIP;' as sql
from pg_class c
left join pg_namespace n on c.relnamespace = n.oid
where n.nspname in ('schema1','schema2')
  and c.relkind = 'r';  -- only ordinary tables, so views and other objects are skipped
If you'd like to add an additional filter on table names, use the c.relname column.
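If it helps with option 1, here is a rough, untested sketch of a small script that prints matching UNLOAD/COPY pairs for a fixed list of tables; the table list, S3 location, delimiter and credentials are placeholders you'd replace with your own.

# Sketch only: print paired UNLOAD (source cluster) and COPY (target cluster)
# statements for each table. Table list, S3 path and credentials are placeholders.
TABLES = [("schema1", "table1"), ("schema1", "table2"), ("schema2", "table4")]
S3_BASE = "s3://my-bucket/redshift-copy"
CREDS = "aws_access_key_id=accesskey;aws_secret_access_key=secret_key"

for schema, table in TABLES:
    prefix = f"{S3_BASE}/{schema}/{table}/"
    print(f"unload ('select * from {schema}.{table}') to '{prefix}' "
          f"credentials '{CREDS}' delimiter '|' parallel on gzip;")
    print(f"copy {schema}.{table} from '{prefix}' "
          f"credentials '{CREDS}' delimiter '|' gzip;")

You can then run the printed UNLOAD statements against the source cluster and the COPY statements against the target cluster.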
I agree with the solution provided by @mdem7. I would like to offer a slightly different solution that I feel may be helpful to others.
There are two problems:
Copying the schema and table definitions (i.e. the DDL)
Copying data
Here is my proposed solution,
Copying the schema and table definitions (i.e. the DDL)
I think the pg_dump command suits best here; it will export the full schema definition to a SQL file that can be directly imported into the other cluster.
pg_dump --schema-only -h your-host -U redshift-user -d redshift-database -p port > your-schema-file.sql
Then import the same file into the other cluster:
psql -h your-other-cluster-host -U other-cluster-username -d your-other-cluster-database-name -a -f your-schema-file.sql
Copying data
As suggested in the other answer, UNLOAD to S3 and then COPY from S3 suits best.
Hope it helps.
You really only have two options:
What mdem7 suggested: using UNLOAD/COPY.
I don't recommend using pg_dump to get the schema, as it will miss Redshift-specific table settings like DIST/SORT keys and column ENCODING.
Check out this view instead - Generate Table DDL
The alternative is what you mentioned: restoring from a snapshot (manual or automated). However, the moment the new cluster comes online (while it's still restoring), log in and drop (with CASCADE) all the schemas you do not want. This will stop the restore of the dropped schemas/tables. The only downside to this approach is that the new cluster needs to be the same size as the original, which may or may not matter. If the cluster is going to be relatively long-lived once it's restored, you can resize it downwards after the restore has completed.
I have a lambda job which infrequently dumps a parquet file into an S3 bucket/Glue table using AWS Wrangler.
This Glue table appears to be increasing the table version number every time there is new data, even though the schema is unchanged.
I do not think the problem is with the lambda job/wrangler, since it deposits the parquet files as expected. I have also tested that code separately and it works as expected.
Something is going on with the Glue data catalogue table that makes it increase versions despite no changes to the schema.
I have checked for differences in the underlying parquet files to see if there are some schema, data type etc changes between updates, and there are none.
I have checked for differences between the Glue table versions via the console and AWS CLI (aws glue get-table-versions) and found no differences there either (only the UpdateTime and VersionId changes).
I have tried to recreate my setup with the same code and do not find this issue. I have tried to delete and recreate the Glue table in the same place, but the issue reoccurs.
Question: What could be causing my Glue table version numbers to increase when there are no schema changes?
Note:
The code in question looks like this. It's part of a bigger function (this part really just generates logs of what the main Lambda function is doing). It works fine on its own and doesn't use variables etc. from the rest of the code. I don't see how this could be the issue, but I'm including it here anyway.
# Other functions do some things when triggered by a new file in another S3 bucket.
# This function just logs which files were processed. It's the Glue table built from
# these log files which is having issues with the version number increasing every
# time a new log file is added.
import awswrangler as wr

def log(resource, filename):
    log_df = build_log(resource, filename)  # builds the log df: columns for date, time, file used, etc.
    wr.s3.to_parquet(
        df=log_df,
        path=log_path(),  # S3 prefix where the parquet logs are written
        dataset=True,
        catalog_versioning=False,
        database="MYDB",
        partition_cols=['date'],
        table='log',
        mode='append'
    )
This is, I think, due to partitioning. You are partitioning by date, so for every new day a new partition will be added. The new partitions are the reason the table version is being incremented.
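If you want to double-check that only the partition list (and not the column schema) differs between consecutive versions, a small boto3 sketch like this could help; the database/table names are the ones from the question, and default credentials are assumed.

import boto3

# Compare the column definitions of the two most recent table versions.
# If they match, only partitions/other metadata changed between versions.
glue = boto3.client("glue")
versions = sorted(
    glue.get_table_versions(DatabaseName="MYDB", TableName="log")["TableVersions"],
    key=lambda v: int(v["VersionId"]),
    reverse=True,
)

latest_cols = versions[0]["Table"]["StorageDescriptor"]["Columns"]
previous_cols = versions[1]["Table"]["StorageDescriptor"]["Columns"]
print("schema unchanged:", latest_cols == previous_cols)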
I'd like to UNLOAD data from a Redshift table into an already existing S3 folder, similar to what happens in Spark with the write mode "append" (i.e. creating new files in the target folder if it already exists).
I'm aware of the ALLOWOVERWRITE option but this deletes the already existing folder.
Is it something supported in Redshift? If not, what approach is recommended? (it would be anyway a desired feature I believe...)
One solution that could solve the issue is to add a unique prefix inside the target folder, e.g.:
unload ('select * from my_table')
to 's3://mybucket/first_folder/unique_prefix_'
iam_role 'arn:aws:iam::0123456789012:role/MyRedshiftRole';
If you add unique_prefix_ after the first folder level, all your new files will start with unique_prefix_ during the unload operation, so you don't need ALLOWOVERWRITE.
The only issue with this approach is that if your unloaded data changes over time, you might end up with mixed schemas among the unloaded files.
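For example, here is a small sketch that generates such a unique prefix per run (a timestamp) and submits the UNLOAD through the Redshift Data API; the cluster identifier, database, user, bucket and IAM role are placeholders, and you could just as well run the generated statement through any other Redshift client.

import boto3
from datetime import datetime, timezone

# Each run unloads under a fresh, timestamped prefix inside the same folder,
# so existing files are never touched and ALLOWOVERWRITE is not needed.
run_prefix = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%S_")

sql = (
    "unload ('select * from my_table') "
    f"to 's3://mybucket/first_folder/{run_prefix}' "
    "iam_role 'arn:aws:iam::0123456789012:role/MyRedshiftRole';"
)

redshift_data = boto3.client("redshift-data")
redshift_data.execute_statement(
    ClusterIdentifier="my-cluster",   # placeholder
    Database="my_database",           # placeholder
    DbUser="my_user",                 # placeholder
    Sql=sql,
)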
My Athena DB is in ap-south-1 region and AWS QuickSight doesn't exist in that region.
How can I connect QuickSight with Athena in that case?
All you need to do is to copy table definitions from one region to another. There are several ways to do that
With AWS Console
This approach is the simplest one and doesn't require additional setup, as everything is based on Athena DDL statements.
Get the table definition with:
SHOW CREATE TABLE `database`.`table`;
This should output something like:
CREATE EXTERNAL TABLE `database`.`table`(
`col_1` string,
`col_2` bigint,
...
`col_n` string)
ROW FORMAT SERDE
'org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe'
STORED AS INPUTFORMAT
'org.apache.hadoop.hive.ql.io.parquet.MapredParquetInputFormat'
OUTPUTFORMAT
'org.apache.hadoop.hive.ql.io.parquet.MapredParquetOutputFormat'
LOCATION
's3://some/location/on/s3'
TBLPROPERTIES (
'classification'='parquet',
...
'compressionType'='gzip')
Switch to the desired region.
Create a database where you want to store the table definitions, or use the default one.
Execute the statement produced by SHOW CREATE TABLE. Note that you might need to change the database name to match the one created in the previous step.
If your table is partitioned, then you will need to load all partitions.
If the data on S3 adheres to the Hive partitioning style, i.e.
s3://some/location/on/s3
|
├── day=01
| ├── hour=00
| └── hour=01
...
then you can use
MSCK REPAIR TABLE `database`.`table`
Alternatively, you can load partitions one by one
ALTER TABLE `database`.`table`
ADD PARTITION (day='01', hour='00')
LOCATION 's3://some/location/on/s3/day=01/hour=00/';
ALTER TABLE `database`.`table`
ADD PARTITION (day='01', hour='01')
LOCATION 's3://some/location/on/s3/day=01/hour=01/';
...
With AWS API
You can use an AWS SDK, e.g. boto3 for Python, which provides an easy-to-use, object-oriented API. Here you have two options:
Use the Athena client. As in the previous approach, you would need to get the table definition statement from the AWS Console, but all other steps can be done in a scripted manner with the start_query_execution method of the Athena client. There are plenty of resources online, e.g. this one.
Use the AWS Glue client. This method operates solely on the AWS Glue Data Catalog, which is used by Athena during query execution. The main idea is to create two Glue clients, one for the source and one for the destination catalog. For example:
import boto3
KEY_ID = "__KEY_ID__"
SECRET = "__SECRET__"
glue_source = boto3.client(
    'glue',
    region_name="ap-south-1",
    aws_access_key_id=KEY_ID,
    aws_secret_access_key=SECRET
)
glue_destination = boto3.client(
    'glue',
    region_name="us-east-1",
    aws_access_key_id=KEY_ID,
    aws_secret_access_key=SECRET
)

# Or you can do it by creating sessions
glue_source = boto3.session.Session(profile_name="profile_for_ap_south_1").client("glue")
glue_destination = boto3.session.Session(profile_name="profile_for_us_east_1").client("glue")
Then you would need to use the corresponding get and create methods (e.g. get_table and create_table). This also requires some light parsing of the responses you get back from the Glue clients.
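For instance, here is a minimal sketch of copying a single table definition with those get/create methods; it assumes the destination database already exists, the database/table names are placeholders, and the list of read-only fields to strip may need adjusting for your SDK version.

import boto3

# Copy one table definition from the ap-south-1 catalog to the us-east-1 catalog.
glue_source = boto3.client("glue", region_name="ap-south-1")
glue_destination = boto3.client("glue", region_name="us-east-1")

def copy_table_definition(database, table):
    table_def = glue_source.get_table(DatabaseName=database, Name=table)["Table"]

    # get_table returns read-only fields that create_table rejects,
    # so strip them before creating the table in the destination catalog.
    for key in ("DatabaseName", "CreateTime", "UpdateTime", "CreatedBy",
                "IsRegisteredWithLakeFormation", "CatalogId", "VersionId"):
        table_def.pop(key, None)

    glue_destination.create_table(DatabaseName=database, TableInput=table_def)

copy_table_definition("my_database", "my_table")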
With AWS Glue crawlers
Although you can use AWS Glue crawlers to "rediscover" the data on S3, I wouldn't recommend this approach since you already know the structure of your data.
The answer by @Ilya Kisil is correct, but I would like to add some more details and an alternative solution.
There are two different approaches you can take.
As suggested by Ilya, copy the table definitions from one region (source region) to another (destination region). The idea is to reference the data of the other region.
I found the Glue crawlers much easier and faster. You need to create a Glue crawler in the source region and point it at the S3 bucket in the destination region where the data is located. Once you do that, you will see in the Athena source region all the tables of the destination region! Behind the scenes, what the Glue crawler does is what Ilya explained in the "With AWS Console" section. So, instead of creating the tables one by one and loading the partitions (if they exist), you can just create one Glue crawler.
Note that it holds a reference to your destination region tables, so it doesn't copy the data. At first glance that seems great! Why should we copy the data if we could reference it? But when you take a deeper look, you will probably find that you are going to pay more money. When you reference data, you pay for the data each query reads, and if you consume the data a lot and have TB/PB of data, it might be too expensive. If cost is a consideration for you, I would recommend you consider the second solution.
Also note that although the data is not copied to the source region and is only referenced, behind the scenes, when you execute a query, AWS stores the data temporarily in the source region. So if you need to be GDPR compliant, you might need to be aware of that.
Copy the data from the destination region to the source region and have a process that keeps it synchronized. Then you will not pay for querying the remote data, but rather for the storage, which is usually cheaper. If possible, you can also copy just what you need or aggregate the data, so you have less copied storage and therefore less cost.
A convenient way to do it is by creating a Glue Job that is responsible for copying the data from the destination region S3 bucket to the source region S3 bucket. You can then add it to a Glue Workflow that runs this job once a day or on whatever schedule suits you; a rough sketch of the copy step is shown below.
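As a very rough sketch of that copy step (a real Glue Job would usually be a PySpark script; plain boto3 is used here only to illustrate the idea, and the bucket names and prefix are placeholders):

import boto3

# Copy every object under a prefix from the bucket in the destination region
# into the bucket in the source region, keeping the same keys.
s3 = boto3.client("s3")
READ_BUCKET = "bucket-in-destination-region"   # placeholder
WRITE_BUCKET = "bucket-in-source-region"       # placeholder
PREFIX = "my-dataset/"

paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=READ_BUCKET, Prefix=PREFIX):
    for obj in page.get("Contents", []):
        s3.copy_object(
            Bucket=WRITE_BUCKET,
            Key=obj["Key"],
            CopySource={"Bucket": READ_BUCKET, "Key": obj["Key"]},
        )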
To summarize:
There are lots of things to consider, and I mentioned some of them. Each approach has its advantages and disadvantages, and you can find the right one for your use case.
(Solution 1) Advantages:
Easy. Just some clicks.
Fast.
Referencing the data and no need to have duplicated data.
(Solution 1) Disadvantages:
Might be way more expensive (depends on the data usage).
(Solution 2) Advantages:
Might be much cheaper
(Solution 2) Disadvantages:
Slow/Longer solution
Need to copy existing data and then have a process to copy new data
I am in the process of migrating a database from an external server to Cloud SQL (2nd gen). I have been following the recommended steps; the 2TB mysqldump process completed and replication started. However, I got an error:
'Error ''Access denied for user ''skip-grants user''#''skip-grants host'' (using password: NO)'' on query. Default database: ''mondovo_db''. Query: ''LOAD DATA INFILE ''/mysql/tmp/SQL_LOAD-0a868f6d-8681-11e9-b5d3-42010a8000a8-6498057-322806.data'' IGNORE INTO TABLE seoi_volume_update_tracker FIELDS TERMINATED BY ''^#^'' ENCLOSED BY '''' ESCAPED BY ''\'' LINES TERMINATED BY ''^|^'' (keyword_search_volume_id)'''
Two questions:
1) I'm guessing the error has come about because Cloud SQL requires LOAD DATA LOCAL INFILE instead of LOAD DATA INFILE? However, I am quite sure that on the master we only run LOAD DATA LOCAL INFILE, so I am not sure how the LOCAL gets dropped during replication. Is that possible?
2) I can't stop the slave to skip the error and restart, since SUPER privileges aren't available, so I am not sure how to skip this error and avoid it in the future while the final sync happens. Suggestions?
There was no way to work around the slave replication error in Google Cloud SQL, so I had to come up with another way.
Since replication wasn't going to work, I had to copy all the databases. However, because the aggregate size of all my DBs was 2TB, it was going to take a long time.
The final strategy that took the least amount of time:
1) Prerequisite: you need at least 1.5x the current database size in free disk space on your SQL drive. My 2TB DB was on a 2.7TB SSD, so I had to temporarily move everything to a 6TB SSD before I could proceed with the steps below. DO NOT proceed without sufficient disk space; you'll waste a lot of your time, as I did.
2) Install cloudsql-import on your server. Without this you can't proceed, and it took me a while to discover it. It facilitates the quick transfer of your SQL dumps to Google.
3) I had multiple databases to migrate. If you're in a similar situation, pick one at a time, and for the sites that access that DB, prevent any further insertions/updates. I put a "Website under Maintenance" page on each site while I executed the operations outlined below.
4) Run the commands in the steps below in a separate screen session. I launched a few processes in parallel in different screens.
screen -S DB_NAME_import_process
5) Run a mysqldump using the following command. Note that the output is a SQL file, not a compressed file:
mysqldump {DB_NAME} --hex-blob --default-character-set=utf8mb4 --skip-set-charset --skip-triggers --no-autocommit --single-transaction --set-gtid-purged=off > {DB_NAME}.sql
6) (Optional) For my largest DB of around 1.2TB, I also split the DB backup into individual table SQL files using the script mentioned here: https://stackoverflow.com/a/9949414/1396252
7) For each of the dumped files, I converted the INSERT commands into INSERT IGNORE because I didn't want any duplicate-key errors during the import process.
cat {DB_OR_TABLE_NAME}.sql | sed s/"^INSERT"/"INSERT IGNORE"/g > new_{DB_OR_TABLE_NAME}_ignore.sql
8) Create a database by the same name on Google Cloud SQL that you want to import. Also create a global user that has permission to access all the databases.
9) Now we import the SQL files using cloudsql-import. If you split the larger DB into individual table files in step 6, use the cat command to combine a batch of them into a single file, and make as many batch files as you see appropriate.
Run the following command:
cloudsql-import --dump={DB_OR_TABLE_NAME}.sql --dsn='{DB_USER_ON_GCLOUD}:{DB_PASSWORD}@tcp({GCLOUD_SQL_PUBLIC_IP}:3306)/{DB_NAME_CREATED_ON_GOOGLE}'
10) While the process is running, you can detach from the screen session using Ctrl+a followed by Ctrl+d (or refer here) and then reconnect to the screen later to check on progress. You can create another screen session and repeat the same steps for each of the DBs/batches of tables that you need to import.
Because of the large sizes I had to import, I believe it took a day or two; I don't remember exactly since it's been a few months, but I know it was much faster than any other way. I had tried using Google's copy utility to copy the SQL files to Cloud Storage and then use Cloud SQL's built-in visual import tool, but that was slow and not nearly as fast as cloudsql-import. I would recommend this method until Google adds the ability to skip slave errors.
I have a Spark batch job which is executed hourly. Each run generates and stores new data in S3 with the directory naming pattern DATA/YEAR=?/MONTH=?/DATE=?/datafile.
After uploading the data to S3, I want to investigate it using Athena. Also, I would like to visualize them in QuickSight by connecting to Athena as a data source.
The problem is that after each run of my Spark batch, the newly generated data stored in S3 will not be discovered by Athena, unless I manually run the query MSCK REPAIR TABLE.
Is there a way to make Athena update the data automatically, so that I can create a fully automatic data visualization pipeline?
There are a number of ways to schedule this task. How do you schedule your workflows? Do you use a system like Airflow, Luigi, Azkaban, cron, or AWS Data Pipeline?
From any of these, you should be able to fire off the following CLI command.
$ aws athena start-query-execution --query-string "MSCK REPAIR TABLE some_database.some_table" --result-configuration "OutputLocation=s3://SOMEPLACE"
Another option would be AWS Lambda. You could have a function that calls MSCK REPAIR TABLE some_database.some_table in response to a new upload to S3.
An example Lambda Function could be written as such:
import boto3

def lambda_handler(event, context):
    bucket_name = 'some_bucket'
    client = boto3.client('athena')
    config = {
        'OutputLocation': 's3://' + bucket_name + '/',
        'EncryptionConfiguration': {'EncryptionOption': 'SSE_S3'}
    }

    # Query execution parameters
    sql = 'MSCK REPAIR TABLE some_database.some_table'
    query_context = {'Database': 'some_database'}

    client.start_query_execution(QueryString=sql,
                                 QueryExecutionContext=query_context,
                                 ResultConfiguration=config)
You would then configure a trigger to execute your Lambda function when new data are added under the DATA/ prefix in your bucket.
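For reference, one way to wire up that trigger programmatically is via the bucket's notification configuration. This is just a sketch: it assumes the Lambda's resource policy already allows S3 to invoke it, and the bucket name and function ARN are placeholders.

import boto3

# Invoke the Lambda for every new object created under the DATA/ prefix.
s3 = boto3.client("s3")
s3.put_bucket_notification_configuration(
    Bucket="some_bucket",  # placeholder
    NotificationConfiguration={
        "LambdaFunctionConfigurations": [
            {
                "LambdaFunctionArn": "arn:aws:lambda:REGION:ACCOUNT_ID:function:repair-partitions",
                "Events": ["s3:ObjectCreated:*"],
                "Filter": {
                    "Key": {"FilterRules": [{"Name": "prefix", "Value": "DATA/"}]}
                },
            }
        ]
    },
)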
Ultimately, explicitly rebuilding the partitions after you run your Spark Job using a job scheduler has the advantage of being self documenting. On the other hand, AWS Lambda is convenient for jobs like this one.
You should be running ADD PARTITION instead:
aws athena start-query-execution --query-string "ALTER TABLE ADD PARTITION..."
This adds the newly created partition from your S3 location.
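For example, here is a sketch of adding the partition for the current run via the Athena API, following the DATA/YEAR=/MONTH=/DATE= layout from the question; the database, table, bucket and output location are placeholders.

import boto3
from datetime import datetime, timezone

# Add (idempotently) the partition for the current day's run.
athena = boto3.client("athena")
now = datetime.now(timezone.utc)

sql = (
    "ALTER TABLE some_database.some_table ADD IF NOT EXISTS "
    f"PARTITION (`YEAR`={now.year}, `MONTH`={now.month}, `DATE`={now.day}) "
    f"LOCATION 's3://my-bucket/DATA/YEAR={now.year}/MONTH={now.month}/DATE={now.day}/'"
)

athena.start_query_execution(
    QueryString=sql,
    QueryExecutionContext={"Database": "some_database"},
    ResultConfiguration={"OutputLocation": "s3://my-bucket/athena-query-results/"},
)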
Athena leverages Hive for partitioning data.
To create a table with partitions, you must define it during the CREATE TABLE statement. Use PARTITIONED BY to define the keys by which to partition data.
There are multiple ways to solve the issue and get the table updated:
Call MSCK REPAIR TABLE. This will scan ALL data. It's costly, as every file is read in full (at least that's what AWS charges for). It's also painfully slow. In short: don't do it!
Create the partitions yourself by calling ALTER TABLE ADD PARTITION abc .... This is good in the sense that no data is scanned and costs are low. The query is also fast, so no problems here. It's also a good choice if you have a very cluttered file structure without any common pattern (which doesn't seem to be your case, as yours is a nicely organised S3 key pattern). There are also downsides to this approach: A) it's hard to maintain; B) all partitions have to be stored in the Glue catalog. This can become an issue when you have a lot of partitions, as they all need to be read out and passed to Athena's and EMR's Hadoop infrastructure.
Use partition projection. There are two different styles you might want to evaluate. Here's the variant which creates the partitions for Hadoop at query time. This means no Glue catalog entries are sent over the network, and thus large numbers of partitions can be handled more quickly. The downside is that you might 'hit' some partitions that don't exist. These will of course be ignored, but internally all partitions that COULD match your query will be generated, no matter whether they are on S3 or not (so always add partition filters to your query!). If done correctly, this option is a fire-and-forget approach, as no updates are needed.
CREATE EXTERNAL TABLE `mydb`.`mytable`
(
...
)
PARTITIONED BY (
`YEAR` int,
`MONTH` int,
`DATE` int)
...
LOCATION
's3://DATA/'
TBLPROPERTIES(
"projection.enabled" = "true",
"projection.account.type" = "integer",
"projection.account.range" = "1,50",
"projection.YEAR.type" = "integer",
"projection.YEAR.range" = "2020,2025",
"projection.MONTH.type" = "integer",
"projection.MONTH.range" = "1,12",
"projection.DATE.type" = "integer",
"projection.DATE.range" = "1,31",
"storage.location.template" = "s3://DATA/YEAR=${YEAR}/MONTH=${MONTH}/DATE=${DATE}/"
);
https://docs.aws.amazon.com/athena/latest/ug/partition-projection.html
Just to list all the options: you can also use Glue crawlers. But this doesn't seem to be a favourable approach, as it's not as flexible as advertised.
You get more control over Glue by using the Glue Data Catalog API directly, which might be an alternative to approach #2 if you have a lot of automated scripts that do the preparation work to set up your table.
In short:
If your application is SQL centric, you like the leanest approach with no scripts, use partition projection
If you have many partitions, use partition projection
If you have a few partitions or partitions do not have a generic pattern, use approach #2
If you're script-heavy, your scripts do most of the work anyway, and they're easier for you to handle, consider approach #5 (the Glue Data Catalog API)
If you're confused and have no clue where to start - try partition projection first! It should fit 95% of the use cases.