Federated BigQuery cost and performance optimization - google-cloud-platform

I am writing a scheduled federated query to load my BigQuery tables on a daily basis. The BigQuery table load strategy is Overwrite. My source is a Cloud SQL database (MySQL instance).
I am wondering what the correct approach would be, from a performance and cost-optimization perspective in the long run, to load my BigQuery tables. Should I overwrite my BigQuery tables daily with the source data, or should I build logic in the federated query itself, using joins to detect just the new additions in the source, and then add them to my BigQuery table during the daily scheduled runs?

Your second idea is the way to go.
"build logic in the federated query itself, using joins to detect just the new additions in the source, and then add them to my BigQuery table"
The less data BigQuery needs to read and write, the cheaper it will be.
This approach is generally referred to as an incremental load.
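As a minimal sketch (the connection ID `my-project.us.mysql-conn`, the table names, and the `id` key column below are placeholders), the daily scheduled query could look like this:
-- Pull the source rows through the federated connection and keep only the
-- ones whose id is not yet present in the BigQuery table (anti-join).
INSERT INTO `my_project.my_dataset.orders`
SELECT s.*
FROM EXTERNAL_QUERY(
  'my-project.us.mysql-conn',
  'SELECT * FROM orders'
) AS s
LEFT JOIN `my_project.my_dataset.orders` AS t
  ON t.id = s.id
WHERE t.id IS NULL;
If the source table has a reliable timestamp or auto-increment column, pushing a WHERE filter into the external query string (which runs on the MySQL side) reduces the amount of data pulled through the connection even further.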

Related

Export data from Google Cloud SQL, and append to BigQuery

We have production databases (PostgreSQL and MySQL) on Cloud SQL.
How could I export the data from the production databases and then append it to BigQuery datasets?
I DO NOT want to sync or replicate the data into BigQuery, because we purge (after backing up) the production databases on a regular basis.
The only method I could think of is:
Export to CSV and then drop the files into Google Cloud Storage
Python script to append into BigQuery (sketched below)
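For that second step, instead of a Python script the append could also be expressed as a BigQuery LOAD DATA statement; a rough sketch (the bucket, dataset, and table names are placeholders):
-- Append the exported CSV files from Cloud Storage to an existing table.
LOAD DATA INTO `my_dataset.my_table`
FROM FILES (
  format = 'CSV',
  uris = ['gs://my-bucket/exports/*.csv'],
  skip_leading_rows = 1  -- skip the header row
);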
Are there any other more optimal ways?
BigQuery supports external data sources, specifically federated queries, which allow you to read data directly from a Cloud SQL instance.
You can use this feature to select from all the relevant tables in your Postgres/MySQL instances and copy them into BigQuery without any extra ETL process. You can append the data to your existing tables, create a new table every time, or use some other organization that works for you.
BigQuery also supports scheduled queries so you can automate this.
The actual SQL will depend on your data sources, but once you have created a BigQuery connection to the instance it's not much more than...
INSERT INTO `your_bq_table`
SELECT *
FROM EXTERNAL_QUERY(
  'your-project.your-region.postgres123',  -- the connection to your Cloud SQL instance
  'SELECT * FROM tablename'                -- this part runs on the Postgres/MySQL side
);

What is the simplest way to extract all 26 tables from a single DynamoDB db into AWS Glue Catalog

I am trying to build AWS QuickSight reports using AWS Athena that build the specific views for said reports. However, I seem to be able to select only a single table when creating the Glue job, despite being able to select all the tables I need for the crawler of the entire DB from Dynamo.
What is the simplest route to get a complete extract of all tables that is queryable in Athena?
I don't want to connect the reports directly to DynamoDB, as it's a production database, and I want to create some separation to avoid any performance degradation from a poor query, etc.

Cloud Data Fusion to sync tables from BigQuery to Cloud Spanner

I have a use case where I need to sync Spanner tables with BigQuery tables, i.e. update the Spanner tables based on the updated data in the BigQuery tables. I am planning to use Cloud Data Fusion for this, but I do not see any example available for this scenario. Any pointers on this?

AWS Redshift or RDS for a Data warehouse?

Right now we have an ETL that extracts info from an API, transforms it, and stores it in one big table in our OLTP database. We want to migrate this table to some OLAP solution. This table is only read to do some calculations that we then store in our OLTP database.
We are currently evaluating Redshift but have never used the service before. Also, we thought of some snowflake schema (a fact table with dimensions) in RDS, because it is intended to store 10 GB to 100 GB, but we don't know how well this approach would scale.
Which service fits best here?
IMHO you could do a PoC to see which service is more feasible for you. It really depends on how much data you have, what queries you run, and what load you plan to execute.
AWS Redshift is intended for OLAP at petabyte or exabyte scale, handling heavy parallel workloads. Redshift can also aggregate data from other data sources (JDBC, S3, ...). However, Redshift is not OLTP; it requires more static server overhead and extra skills for managing the deployment.
So without more numbers and use cases one cannot advise anything. The cloud is great in that you can try things and see what fits you.
AWS Redshift is really great when you only want to read data from the database. In the backend, Redshift is basically a column-oriented database, which makes it more suitable for analytics. You can transfer all your existing data to Redshift using AWS DMS, a service that basically reads the binlogs of the existing database and transfers your data automatically, so you don't have to do much yourself. From my personal experience, Redshift is really great.

Load data from BigQuery to a Postgres Cloud SQL database every day

I have some tables to load from BigQuery to a Postgres Cloud SQL database. I need to do this every day and create some stored procedures in Cloud SQL. What is the best way to load tables from BigQuery to Cloud SQL every day? What are the cost implications of transferring the data and keeping Cloud SQL on 24/7? Appreciate your help.
Thanks,
J.
Usually, a Cloud SQL database is up full time to serve requests at any time; it's not a serverless product that can start when a request comes in. You can have a look at the pricing page to calculate the cost (mainly CPU, memory, and storage; size the database according to your usage and expected performance).
About the process, here is what we did at my previous company:
Use Cloud Scheduler to trigger a Cloud Function
Create temporary tables in BigQuery
Export the BigQuery temporary tables to CSV in Cloud Storage
Run a Cloud SQL import of the files from GCS into staging tables
Run a query in the database to merge the imported data into the existing data, and then delete the staging table of imported data (see the sketch below)
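A minimal sketch of steps 2, 3, and 5 in SQL (all project, dataset, bucket, and table names here are placeholders, and the table is assumed to have a primary key `id`; step 4, the CSV import into the staging table, is done with `gcloud sql import csv` or the Cloud SQL Admin API rather than SQL):
-- BigQuery: create a temporary table that expires after 1 hour...
CREATE OR REPLACE TABLE `my_project.my_dataset.tmp_my_table`
OPTIONS (
  expiration_timestamp = TIMESTAMP_ADD(CURRENT_TIMESTAMP(), INTERVAL 1 HOUR)
) AS
SELECT * FROM `my_project.my_dataset.my_table`;

-- ...and export it to CSV files in Cloud Storage.
EXPORT DATA OPTIONS (
  uri = 'gs://my-bucket/exports/my_table_*.csv',
  format = 'CSV',
  overwrite = true,
  header = true
) AS
SELECT * FROM `my_project.my_dataset.tmp_my_table`;

-- Cloud SQL (PostgreSQL): after importing the CSV files into staging_my_table,
-- merge them into the target table and drop the staging table.
INSERT INTO my_table (id, col_a, col_b)
SELECT id, col_a, col_b FROM staging_my_table
ON CONFLICT (id) DO UPDATE
  SET col_a = EXCLUDED.col_a,
      col_b = EXCLUDED.col_b;

DROP TABLE staging_my_table;
The 1-hour expiration on the temporary table and a lifecycle rule on the bucket are what keep the BigQuery and Cloud Storage storage costs mentioned below negligible.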
If it takes too much time to perform all of that in only one function, you can use Cloud Run (60-minute timeout) or a dispatch function. The dispatch function is called by Cloud Scheduler and publishes a message to Pub/Sub for each table to process. On Pub/Sub, you can plug in a Cloud Function (or a Cloud Run service) that performs the previous process only on the table mentioned in the message. That way, you process all the tables concurrently instead of sequentially.
In terms of cost, you will pay for:
BigQuery queries (the volume of data you process to create the temporary tables)
BigQuery storage (very low; you can create temporary tables that expire, i.e. are automatically deleted, after 1 hour)
Cloud Storage storage (very low; you can set a lifecycle rule on the files to delete them after a few days)
File transfer: free if you stay in the same region.
Export and import: free
In summary, only the BigQuery queries and the Cloud SQL instance are major costs.