Migrate and Transform MySQL rows from one table to another - amazon-web-services

I need to migrate all records (roughly 3 billion) from one MySQL Aurora table to 5 different tables in the same cluster.
A transformation of 2 of the columns also has to happen.
When we migrate, we need to convert XML to JSON, and the JSON will then be stored in one of the destination tables.
We are looking for the best way to migrate this data from one MySQL table to the others, and since we are on AWS we have the flexibility to use any service that can help us achieve this.
So far this is what we have planned:
MySQL table ----DMS----> S3 ----> Lambda to convert XML to JSON and create 5 types of files ----> Lambda on file creation runs LOAD DATA LOCAL into 5 different MySQL tables.
But one thing we would like to know: how can we handle the case where LOAD DATA LOCAL fails partway through? The Lambda will submit the LOAD DATA LOCAL query from S3 to MySQL, but how can we track in the Lambda whether LOAD DATA LOCAL succeeded or failed?
We cannot use any direct migration because we need to transform the data in between.
Is there a better approach we can use here?
Can we use Data Pipeline in place of the Lambda function for LOAD DATA LOCAL?
Or can we use DMS to load the files from S3 into MySQL?
Please suggest the best approach that can also handle the failure scenarios.

What you are basically describing is an ETL process. I would advise you to look into either AWS EMR or AWS Glue. Since you don't seem to have that much experience, I would use Glue.
With Glue you can basically read from MySQL, do the transformation, and write back directly to MySQL. Also, since Glue runs Spark in the background, you can leverage its distributed computing, which will speed up your process compared to a single-threaded Lambda function.
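A minimal sketch of what such a Glue (Spark, Scala) job could look like. The endpoint, credentials, column names, and the xmlToJson helper are all assumptions for illustration, not part of the answer:

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.udf

val spark = SparkSession.builder().appName("aurora-xml-to-json-etl").getOrCreate()
import spark.implicits._

// Hypothetical helper that converts a single XML document into its JSON representation.
val xmlToJson = udf((xml: String) => MyXmlConverter.toJson(xml))

// Read the source table over JDBC; partitionColumn/numPartitions let Spark read the 3 billion rows in parallel.
val source = spark.read
  .format("jdbc")
  .option("url", "jdbc:mysql://my-aurora-endpoint:3306/mydb")
  .option("dbtable", "source_table")
  .option("user", "etl_user")
  .option("password", "etl_password")
  .option("partitionColumn", "id")
  .option("lowerBound", "1")
  .option("upperBound", "3000000000")
  .option("numPartitions", "200")
  .load()

// Transform the XML column once, then route one filtered subset to each destination table.
val transformed = source.withColumn("payload_json", xmlToJson($"payload_xml"))

transformed.filter($"record_type" === "A")
  .write
  .format("jdbc")
  .option("url", "jdbc:mysql://my-aurora-endpoint:3306/mydb")
  .option("dbtable", "dest_table_a")
  .option("user", "etl_user")
  .option("password", "etl_password")
  .mode("append")
  .save()
// ...repeat the filter/write for the other four destination tables.

If the JDBC write fails, the Glue job itself fails and can be retried, so failure handling becomes a matter of job retries rather than tracking LOAD DATA LOCAL from a Lambda.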

Related

Truncate load in Redshift daily

Would like some suggestions on loading data to Redshift.
Currently we have an EMR cluster where the raw data is ingested regularly. We have a transformation job which runs daily and creates the final modeled object. However, we are following a truncate-and-load strategy in EMR. Due to business reasons there is no way to figure out which data has changed.
We are planning to store this modeled object in Redshift.
Now my question is: if we follow the same truncate-and-load strategy in Redshift as well, will that work?
I was only able to find articles which say to use COPY if you want to perform a bulk load and to use the INSERT command for small updates, but nothing on whether we can and should be using Redshift where the data is getting overwritten daily.
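For what it's worth, a daily truncate-and-load into Redshift usually boils down to a TRUNCATE followed by a COPY, for example issued over JDBC from the scheduled job. A rough sketch, with a made-up table, S3 prefix, and IAM role:

import java.sql.DriverManager

val con = DriverManager.getConnection(
  "jdbc:redshift://my-cluster.abc123.us-east-1.redshift.amazonaws.com:5439/dev",  // assumed endpoint
  "awsuser",
  "password")
val stmt = con.createStatement()

// Note: TRUNCATE commits immediately in Redshift, so it cannot be rolled back if the COPY fails.
stmt.execute("TRUNCATE TABLE analytics.modeled_object;")
stmt.execute(
  """COPY analytics.modeled_object
    |FROM 's3://my-bucket/modeled-object/'
    |IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-copy-role'
    |FORMAT AS PARQUET;""".stripMargin)
con.close()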

Issues while working with Amazon Aurora Database

My requirements:
I want to store real-time events data coming from e-commerce websites into a database.
In parallel to storing the data, I want to access the events data from the database.
I want to perform some sort of ad-hoc analysis (SQL).
Using some sort of built-in methods (either from Boto3 or the Java SDK), I want to access the events data.
I want to create some sort of custom APIs to access the events data stored in the database.
I recently came across the Amazon Aurora (MySQL) database.
I thought Aurora was a good fit for my requirements. But when I dug into Amazon Aurora (MySQL), I noticed that we can create a database using the AWS CDK,
BUT
1. No equivalent methods to create tables using AWS-CDK/BOTO3
2. No equivalent methods in BOTO3 or JAVA SDK to store/access the database data
Can anyone tell me how I can create a table using IaC in the Aurora DB?
Can anyone tell me how I can store real-time data into Aurora?
Can anyone tell me how I can access the real-time data stored in Aurora?
No equivalent methods to create tables using AWS-CDK/BOTO3
This is because only Aurora Serverless can be accessed using the Data API, not a regular cluster.
You have to use regular MySQL tools (e.g., the mysql CLI, phpMyAdmin, MySQL Workbench, etc.) to create tables and populate them.
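For example, creating the table is just ordinary DDL sent over a MySQL connection; a minimal JDBC sketch, where the endpoint, credentials, and table definition are illustrative assumptions:

import java.sql.DriverManager

val con = DriverManager.getConnection(
  "jdbc:mysql://my-aurora-cluster.cluster-abc123.us-east-1.rds.amazonaws.com:3306/productcatalogueinfo",  // assumed endpoint
  "admin",
  "secret")
con.createStatement().execute(
  """CREATE TABLE IF NOT EXISTS events (
    |  event_id   VARCHAR(64) PRIMARY KEY,
    |  event_type VARCHAR(32),
    |  payload    JSON,
    |  created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
    |)""".stripMargin)
con.close()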
No equivalent methods in BOTO3 or JAVA SDK to store/access the database data
Same reason and solution as for point 1.
Can anyone tell me how I can create a table using IaC in the Aurora DB?
Terraform has a mysql provider, but it is not for tables; it manages users and databases.
Can anyone tell me how I can store real-time data into Aurora?
There is no out-of-the-box solution for that, so you need a custom solution. Maybe stream the data to Kinesis Data Streams or Firehose, then to Lambda, and have the Lambda populate your DB (see the sketch after this answer)? That seems the easiest to implement.
Can anyone tell me how I can access the real-time data stored in Aurora?
If you stream the data to a Kinesis stream first, you can use Kinesis Data Analytics to analyze it in real time.
Since many of the above require custom solutions, other architectures are possible.
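A rough sketch of the Kinesis-to-Lambda-to-Aurora idea in Scala; the handler class, table, endpoint, and credentials are all assumptions, and the payload is assumed to already be a JSON string:

import com.amazonaws.services.lambda.runtime.{Context, RequestHandler}
import com.amazonaws.services.lambda.runtime.events.KinesisEvent
import java.nio.charset.StandardCharsets
import java.sql.DriverManager
import scala.jdk.CollectionConverters._

class EventsToAuroraHandler extends RequestHandler[KinesisEvent, Unit] {
  override def handleRequest(event: KinesisEvent, context: Context): Unit = {
    val con = DriverManager.getConnection(
      "jdbc:mysql://my-aurora-endpoint:3306/productcatalogueinfo", "admin", "secret")  // assumed
    val ps = con.prepareStatement("INSERT INTO events (event_id, payload) VALUES (?, ?)")
    try {
      for (record <- event.getRecords.asScala) {
        // Each Kinesis record carries its payload as a ByteBuffer; decode it as a UTF-8 JSON string.
        val json = StandardCharsets.UTF_8.decode(record.getKinesis.getData).toString
        ps.setString(1, record.getKinesis.getSequenceNumber)
        ps.setString(2, json)
        ps.addBatch()
      }
      ps.executeBatch()
    } finally con.close()
  }
}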
Create a connection manager as
import java.sql.{Connection, DriverManager, Statement}

val dbName = "productcatalogueinfo"
val con: Connection = DriverManager.getConnection(
  s"jdbc:mysql://localhost:3306/$dbName",  // replace with your Aurora endpoint and database name
  "root",
  "admin123"
)
and then
val stmt: Statement = con.createStatement()
stmt.execute("use productcatalogueinfo;")  // USE returns no result set, so use execute rather than executeQuery
Whenever your Lambda is triggered, it performs this connection setup and the DDL operations too.

How data retrieved from metadata created tables in Glue Script

In AWS Glue, although I have read the documentation, there is one thing that is still not clear to me. Below is what I understood.
Regarding crawlers: a crawler will create a metadata table for either S3 or a DynamoDB table. But what I don't understand is: how is the Scala/Python script able to retrieve data from the actual source (say DynamoDB or S3) using the metadata tables the crawler created?
val input = glueContext
.getCatalogSource(database = "my_data_base", tableName = "my_table")
.getDynamicFrame()
Does the above line retrieve data from the actual source via the metadata tables?
I would be glad if someone could explain to me what happens behind the scenes when retrieving data in a Glue script via metadata tables.
When you run a Glue crawler it will fetch metadata from S3 or JDBC (depending on your requirement) and create tables in the AWS Glue Data Catalog.
Now if you want to connect to this data/these tables from a Glue ETL job, you can do it in multiple ways depending on your requirement:
[from_options][1]: if you want to load directly from S3/JDBC without going through the Glue catalog.
[from_catalog][1]: if you want to load data via the Glue catalog, then you need to link to the catalog using the getCatalogSource method, as shown in your code. As the name implies, it will use the Glue Data Catalog as the source and load the particular table that you pass to this method.
Once it looks at your table definition, which points to a location, it will make a connection and load the data present at that source.
Yes, you need to use getCatalogSource if you want to load tables from the Glue catalog.
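In the Scala API the two approaches look roughly like this; this is a sketch of my own, where the S3 path and format are made up and getSourceWithFormat is the rough Scala counterpart of from_options:

import com.amazonaws.services.glue.GlueContext
import com.amazonaws.services.glue.util.JsonOptions
import org.apache.spark.SparkContext

val glueContext = new GlueContext(new SparkContext())

// from_catalog style: the Data Catalog table definition tells Glue where the data actually lives.
val fromCatalog = glueContext
  .getCatalogSource(database = "my_data_base", tableName = "my_table")
  .getDynamicFrame()

// from_options style: point straight at the source, bypassing the catalog.
val fromOptions = glueContext
  .getSourceWithFormat(
    connectionType = "s3",
    options = JsonOptions("""{"paths": ["s3://my-bucket/my-prefix/"]}"""),
    format = "json")
  .getDynamicFrame()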
Does the catalog look into the crawler, refer to the actual source, and load the data?
Check out the diagram in this [link][2]. It will give you an idea of the flow.
What if the crawler is deleted before I run getCatalogSource? Will I still be able to load the data in that case?
The crawler and the table are two different components. It all depends on when the table is deleted. If you delete the table after your job has started executing, there will not be any problem. If you delete it before execution starts, you will encounter an error.
What if my source has many millions of records? Will this load all of the records, and how does that work?
It is good to have large files present in the source, as that avoids most of the small-files problem. Glue is based on Spark, and it will read files that can fit in memory and then do the computations. Check this [answer][3] and [this one][4] for best practices when reading larger files in AWS Glue.
[1]: https://docs.aws.amazon.com/glue/latest/dg/aws-glue-api-crawler-pyspark-extensions-dynamic-frame-reader.html
[2]: https://docs.aws.amazon.com/athena/latest/ug/glue-athena.html
[3]: https://stackoverflow.com/questions/46638901/how-spark-read-a-large-file-petabyte-when-file-can-not-be-fit-in-sparks-main
[4]: https://aws.amazon.com/blogs/big-data/optimize-memory-management-in-aws-glue/#:~:text=Incremental%20processing:%20Processing%20large%20datasets

Filtering data loaded into Redshift

We have raw data stored in S3 as parquet.
I want a subset of that data loaded into Redshift.
To be clear, the Redshift data would be the result of a query (joins, filters, aggregations) of the raw data.
I originally thought that I could build views in Athena and load the results into Redshift, but it seems it's not that simple!
Glue ETL jobs need an S3 or RDS source; they will not accept a view from Athena.
(You cannot crawl a view either.)
Next solution, was to have a play with the Athena CTAS functionality, write the results of the view to S3, and then load into RedShift.
However, there is no 'overwrite' option with CTAS.
So questions ...
Is there an easier way to approach this? (It seems like a simple requirement.)
Is there an easy workaround to execute a CTAS with 'overwrite' behaviour?
Either way, it would have to be a solution that could be bundled up into a scheduled job, and I already think this is leading toward a custom script.
When a simple job becomes so difficult, I cannot help but think I'm missing something simple!?
Thanks
Ol' reliable: use a Lambda! Lambda functions can programmatically connect to both S3 and Redshift to execute SQL statements, and you have many options for what will trigger the Lambda (if it's just a one-time thing, you can just have it be a scheduled Lambda). You will be able to use CloudWatch Logs to examine the process too.
But beware: I noticed that you store your data as Parquet... Redshift does not store data in Parquet format (COPY can load Parquet files), so if you want to keep and query types like structs you will need Redshift Spectrum.
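A minimal sketch of the SQL such a Lambda could run over JDBC, assuming a Spectrum external schema (here called spectrum_raw) already points at the raw Parquet, and a hypothetical reporting.target_table in Redshift:

import java.sql.DriverManager

val con = DriverManager.getConnection(
  "jdbc:redshift://my-cluster.abc123.us-east-1.redshift.amazonaws.com:5439/dev",  // assumed endpoint
  "awsuser",
  "password")
val stmt = con.createStatement()

// Rebuild the filtered/aggregated subset from the raw Parquet exposed through Redshift Spectrum.
stmt.execute("TRUNCATE TABLE reporting.target_table;")
stmt.execute(
  """INSERT INTO reporting.target_table
    |SELECT o.customer_id, o.order_date, SUM(o.amount) AS total_amount
    |FROM spectrum_raw.orders o
    |WHERE o.order_date >= '2020-01-01'
    |GROUP BY o.customer_id, o.order_date;""".stripMargin)
con.close()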

Loading parquet file from S3 to DynamoDB

I have been looking at options to load (basically empty and restore) a Parquet file from S3 into DynamoDB. The Parquet file itself is created via a Spark job that runs on an EMR cluster. Here are a few things to keep in mind:
I cannot use AWS Data Pipeline.
The file is going to contain millions of rows (say 10 million), so I would need an efficient solution. I believe the boto API (even with batch write) might not be that efficient?
Are there any other alternatives?
Can you just refer to the Parquet files in a Spark RDD and have the workers put the entries into DynamoDB? Ignoring the challenge of caching the DynamoDB client in each worker for reuse across rows, some bit of Scala that takes a row, builds an entry for DynamoDB, and PUTs it should be enough.
BTW: use DynamoDB on-demand here, as it handles peak loads well without you having to commit to some SLA.
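A rough sketch of that idea using the AWS SDK v2, with one client per partition; the S3 path, table name, and column names are made up:

import org.apache.spark.sql.SparkSession
import software.amazon.awssdk.services.dynamodb.DynamoDbClient
import software.amazon.awssdk.services.dynamodb.model.{AttributeValue, PutItemRequest}
import scala.jdk.CollectionConverters._

val spark = SparkSession.builder().appName("parquet-to-dynamodb").getOrCreate()
val df = spark.read.parquet("s3://my-bucket/my-parquet-prefix/")

df.rdd.foreachPartition { rows =>
  // One client per partition, reused for every row in that partition.
  val dynamo = DynamoDbClient.create()
  rows.foreach { row =>
    val item = Map(
      "pk"      -> AttributeValue.builder().s(row.getAs[String]("id")).build(),
      "payload" -> AttributeValue.builder().s(row.getAs[String]("payload")).build()
    ).asJava
    dynamo.putItem(PutItemRequest.builder().tableName("my_table").item(item).build())
  }
  dynamo.close()
}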
Look at the answer below:
https://stackoverflow.com/a/59519234/4253760
To explain the process:
Create the desired dataframe.
Use .withColumn to create a new column and use psf.collect_list to convert it to the desired collection/JSON format, in the new column in the same dataframe.
Drop all unnecessary (tabular) columns and keep only the JSON-format columns in the Spark dataframe.
Load the JSON data into DynamoDB as explained in the answer.
My personal suggestion: whatever you do, do NOT use the RDD API. The RDD interface, even in Scala, is 2-3 times slower than the Dataframe API of any language.
The Dataframe API's performance is programming-language agnostic, as long as you don't use UDFs.
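A sketch of steps 1-3 in Scala; here to_json/struct stands in for the collect_list step in the linked answer, and the column names and S3 path are made up:

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{struct, to_json}

val spark = SparkSession.builder().appName("prepare-json-for-dynamodb").getOrCreate()
val df = spark.read.parquet("s3://my-bucket/my-parquet-prefix/")

// Pack the tabular columns into a single JSON column, then keep only the key and the JSON document.
val jsonDf = df
  .withColumn("payload", to_json(struct(df.columns.map(df(_)): _*)))
  .select("id", "payload")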