Stop a BigQuery script using a condition - google-cloud-platform

I am running a BigQuery script to generate a table. The script assumes the existence of another table, performs some transformations, and places the transformed data into an output table. However, I want the script to terminate its execution (and possibly post a message) if the input table does not comply with some conditions. What is the best way of terminating a BigQuery script using a condition?

The way to achieve this without an external app that calls the BigQuery API and performs the requirement checks (which is otherwise the nicer approach, easier to maintain and evolve) is to create a scheduled query. Scheduled queries are well suited to recurring requests; if yours isn't recurring, code the check in your preferred language.
With BigQuery scheduled queries, you can run your query, define the destination table, and define a notification channel.
Set the Pub/Sub topic that you want. However, this message isn't customizable: you only get the status and the reason of the latest execution. To understand exactly what happened during the query, you then have to dig into the logs and write fairly involved code to find the root cause.
If all your check needs is an OK/KO status, this solution is suitable; if not, prefer your own code, where you get finer-grained control over error management.
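For the own-code route, here is a minimal sketch using the google-cloud-bigquery Python client; the table names and the emptiness check are placeholders for your real precondition. The ERROR() call aborts the whole script, so the output table is never written when the check fails.

from google.cloud import bigquery

# The whole check-then-transform flow is submitted as one BigQuery script.
script = """
IF (SELECT COUNT(*) FROM `my_project.my_dataset.input_table`) = 0 THEN
  SELECT ERROR('input_table is empty, aborting the transformation');
END IF;

CREATE OR REPLACE TABLE `my_project.my_dataset.output_table` AS
SELECT * FROM `my_project.my_dataset.input_table`;
"""

client = bigquery.Client()
try:
    client.query(script).result()   # blocks until the script finishes
except Exception as exc:            # the ERROR() message surfaces here
    print(f"Script aborted: {exc}")
    # post a message to Pub/Sub, Slack, etc. from here if needed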

Related

Can you get data from a Big Query table to an outside stream

Because of company policies, a lot of the information we need as input is inserted into a BigQuery table that we have to SELECT from.
My problem is that running a SELECT directly against this table and feeding a process (a virtual machine, etc.) is prone to errors and rework: if my process stops, I need to run the query again and reprocess everything.
Is there a way to export data from BigQuery to a Kinesis-like stream (I'm more familiar with AWS)?
Dataflow + Pub/Sub seems to be the way to go for this kind of issue.
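If a full Dataflow pipeline is more than you need, a lighter-weight sketch of the same idea (BigQuery rows pushed onto a Pub/Sub topic so consumers can acknowledge as they go) using the client libraries directly; the project, dataset, table and topic names are placeholders.

import json
from google.cloud import bigquery, pubsub_v1

bq = bigquery.Client()
publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path("my-project", "my-topic")

rows = bq.query("SELECT * FROM `my-project.my_dataset.input_table`").result()
for row in rows:
    # One Pub/Sub message per row; a crashed consumer only reprocesses
    # the messages it has not acknowledged yet.
    publisher.publish(topic_path, json.dumps(dict(row), default=str).encode("utf-8"))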
Thank you jamiet!

How to monitor if a BigQuery table contains current data and send an alert if not?

I have a BigQuery table and an external data import process that should add entries every day. I need to verify that the table contains current data (with a timestamp of today). Writing the SQL query is not a problem.
My question is how best to set up such monitoring in GCP. Can Stackdriver execute custom BigQuery SQL? Or would a Cloud Function be more suitable? An App Engine application with a cron job? What's the best practice?
Not sure what the best practice is here, but one simple solution is to use a BigQuery scheduled query: schedule the query, make it fail if something is wrong using the ERROR() function, and configure the scheduled query to notify you (it sends an email) if it fails.
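A rough sketch of the freshness check such a scheduled query could run; the table and timestamp column names are placeholders, and running it once by hand like this is an easy way to validate the SQL before pasting it into the scheduled-query configuration.

from google.cloud import bigquery

check_sql = """
SELECT IF(
  (SELECT COUNT(*)
     FROM `my_project.my_dataset.events`
    WHERE DATE(event_timestamp) = CURRENT_DATE()) > 0,
  'ok',
  ERROR('No rows with a timestamp of today, the import probably failed'))
"""

# When no row from today exists, ERROR() makes the job fail, which in turn
# triggers the scheduled query's email notification.
bigquery.Client().query(check_sql).result()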

Alternatives for Athena to query the data on S3

I have around 300 GB of data on S3. Let's say the data looks like:
## S3://Bucket/Country/Month/Day/1.csv
S3://Countries/Germany/06/01/1.csv
S3://Countries/Germany/06/01/2.csv
S3://Countries/Germany/06/01/3.csv
S3://Countries/Germany/06/02/1.csv
S3://Countries/Germany/06/02/2.csv
We are doing some complex aggregation on the data, and because some countries' data is big and some is small, AWS EMR doesn't make sense to use: once the small countries are finished, the resources are wasted while the big countries keep running for a long time. Therefore, we decided to use AWS Batch (Docker containers) with Athena. One job works on one day of data per country.
Now there are roughly 1000 jobs that start together, and when they query Athena to read the data, the containers fail because they hit the Athena query limits.
Therefore, I would like to know what other possible ways there are to tackle this problem. Should I use a Redshift cluster, load all the data there, and have all the containers query the Redshift cluster, since it doesn't have query limitations? But it is expensive and takes a lot of time to ramp up.
The other option would be to read the data on EMR and use Hive or Presto on top of it to query the data, but again it will hit the query limitations.
It would be great if someone could suggest better options to tackle this problem.
As I understand it, you simply send a query to the AWS Athena service, and after all the aggregation steps finish you retrieve the resulting CSV file from the S3 bucket where Athena saves results, so you end up with 1000 files (one for each job). The problem, then, is the number of concurrent Athena queries and not the total execution time.
Have you considered using Apache Airflow for orchestrating and scheduling your queries? I see Airflow as an alternative to a combination of Lambda and Step Functions, but it is totally free. It is easy to set up on both local and remote machines, has a rich CLI and GUI for task monitoring, and abstracts away all the scheduling and retrying logic. Airflow even has hooks to interact with AWS services. Hell, it even has a dedicated operator for sending queries to Athena, so sending a query is as easy as:
from airflow.models import DAG
from airflow.contrib.operators.aws_athena_operator import AWSAthenaOperator
from datetime import datetime

with DAG(dag_id='simple_athena_query',
         schedule_interval=None,
         start_date=datetime(2019, 5, 21)) as dag:

    run_query = AWSAthenaOperator(
        task_id='run_query',
        query='SELECT * FROM UNNEST(SEQUENCE(0, 100))',
        output_location='s3://my-bucket/my-path/',
        database='my_database'
    )
I use it for similar types of daily/weekly tasks (processing data with CTAS statements) which exceed the limit on the number of concurrent queries.
There are plenty of blog posts and documentation pages that can help you get started. For example:
Medium post: Automate executing AWS Athena queries and moving the results around S3 with Airflow.
A complete guide to the installation of Airflow: link 1 and link 2
You can even set up an integration with Slack to send notifications when your queries terminate, either in a success or a failed state.
However, the main drawback I am facing is that only 4-5 queries actually get executed at the same time, while all the others just idle.
One solution would be to not launch all jobs at the same time, but pace them to stay within the concurrency limits. I don't know if this is easy or hard with the tools you're using, but it's never going to work out well if you throw all the queries at Athena at the same time. Edit: it looks like you should be able to throttle jobs in Batch, see AWS batch - how to limit number of concurrent jobs (by default Athena allows 25 concurrent queries, so try 20 concurrent jobs to have a safety margin – but also add retry logic to the code that launches the job).
Another option would be to not do it as separate queries, but to try to bake everything together into fewer queries, or even a single one: either by grouping on country and date, or by generating all the queries and gluing them together with UNION ALL. Whether this is possible is hard to say without knowing more about the data and the query, though. You'll likely have to post-process the result anyway, and if you just sort by something meaningful it wouldn't be very hard to split the result into the necessary pieces after the query has run.
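As a toy sketch of the glue-them-together idea (the table and column names here are made up), each per-country/per-day query simply becomes one branch of a UNION ALL:

# Build one combined Athena query instead of N small ones; my_table,
# country and day are placeholder names.
countries = ["Germany", "France"]
days = ["2019-06-01", "2019-06-02"]

branches = [
    f"SELECT '{c}' AS country, DATE '{d}' AS day, count(*) AS n "
    f"FROM my_table WHERE country = '{c}' AND day = DATE '{d}'"
    for c in countries for d in days
]
combined_query = "\nUNION ALL\n".join(branches)
print(combined_query)  # submit this single query to Athena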
Using Redshift is probably not the solution, since it sounds like you're doing this only once per day, and you wouldn't use the cluster very much. Athena is a much better choice; you just have to handle the limits better.
With my limited understanding of your use case, I think using Lambda and Step Functions would be a better way to go than Batch. With Step Functions you'd have one function that starts N queries (where N equals your concurrency limit, 25 if you haven't asked for it to be raised), and then a poll loop (check the examples for how to do this) that checks for queries that have completed and starts new ones to keep the number of running queries at the max. When all queries have run, a final function can trigger whatever workflow you need after everything is done (or you can run that after each query).
The benefit of Lambda and Step Functions is that you don't pay for idle resources. With Batch, you will pay for resources that do nothing but wait for Athena to complete. Since Athena, in contrast to Redshift for example, has an asynchronous API you can run a Lambda function for 100ms to start queries, then 100ms every few seconds (or minutes) to check if any have completed, and then another 100ms or so to finish up. It's almost guaranteed to be less than the Lambda free tier.
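A rough sketch of that keep-N-queries-in-flight loop, written here as a single boto3 script rather than Lambda + Step Functions; the bucket, database and query list are placeholders.

import time
import boto3

athena = boto3.client("athena", region_name="us-east-1")

# Placeholder queries, one per country/day combination.
queries = [f"SELECT count(*) FROM my_table WHERE country = '{c}'"
           for c in ("Germany", "France", "Italy")]

MAX_IN_FLIGHT = 20          # stay under Athena's default 25-query concurrency limit
running, pending = set(), list(queries)

while pending or running:
    # Start new queries while we are below the concurrency budget.
    while pending and len(running) < MAX_IN_FLIGHT:
        qid = athena.start_query_execution(
            QueryString=pending.pop(),
            QueryExecutionContext={"Database": "my_database"},
            ResultConfiguration={"OutputLocation": "s3://my-bucket/athena-results/"},
        )["QueryExecutionId"]
        running.add(qid)

    # Poll running queries and free up slots as they finish.
    for qid in list(running):
        state = athena.get_query_execution(QueryExecutionId=qid)["QueryExecution"]["Status"]["State"]
        if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
            running.discard(qid)

    time.sleep(5)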
As far as I know, Redshift Spectrum and Athena cost the same. You should not compare Redshift to Athena; they have different purposes. But first of all I would think about addressing your data skew issue. Since you mentioned AWS EMR, I assume you use Spark. To deal with large and small partitions, you need to repartition your dataset by month, or some other equally distributed value. Or you can use month and country for grouping. You get the idea.
You can use Redshift Spectrum for this purpose. Yes, it is a bit costly, but it is scalable and very good for performing complex aggregations.

Is intermediate IO in BigQuery queries charged?

I have a fairly complex BigQuery query and it seems to cost more than I expect. It has 97 intermediate stages... are those charged?
You can find out how much data will be scanned (and therefore charged) by your query using the --dry_run flag of the bq CLI, or by looking at the right-hand end of the UI panel where you write and set up your query.
BigQuery's pricing model is per byte read. To my understanding, at the moment, if you reference a table in multiple CTEs you will only get charged once, but this might depend on how the query is written.
The best practice is always to use the dry-run feature, which is very accurate.
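For reference, a small sketch of a dry run from the Python client; the public-dataset query is just an example.

from google.cloud import bigquery

client = bigquery.Client()
job = client.query(
    "SELECT name FROM `bigquery-public-data.usa_names.usa_1910_2013` WHERE state = 'TX'",
    job_config=bigquery.QueryJobConfig(dry_run=True, use_query_cache=False),
)
# A dry-run job never executes; it only reports the bytes it would scan.
print(f"This query would scan {job.total_bytes_processed:,} bytes")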

Unload status in Redshift

When you load data to your Amazon Redshift tables, you can check the load status using the table STV_LOAD_STATE.
I would like to know if there's a way to achieve the same, but with the unload operation. In other words, I'd like to know if there's a way to find out the current stage of an unload process.
Unlike loading data into Redshift, unloading actually has to run a SELECT statement, so Redshift can't report a status the way it does when it's loading.
For example, if the SELECT statement has to join multiple tables and scan a lot of data to generate the output, the query might take a long time even though the actual unload part is not the slow part.
So I usually check the query execution steps in the AWS console to get a rough idea of where the unload is.
I also check the S3 folder that I am unloading to, to see whether files have started coming in yet. They usually come in batches, so that can give you an idea as well.
2021, and we have a solution: the STL_UNLOAD_LOG system table.
https://docs.aws.amazon.com/redshift/latest/dg/r_STL_UNLOAD_LOG.html
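If it helps, a minimal sketch of querying that table from Python with psycopg2; the connection details and query id are placeholders, and the columns are the documented ones.

import psycopg2

conn = psycopg2.connect(host="my-cluster.example.redshift.amazonaws.com",
                        port=5439, dbname="dev", user="admin", password="***")
with conn.cursor() as cur:
    # One row per unloaded file: path, rows written and bytes transferred.
    cur.execute("""
        SELECT query, path, line_count, transfer_size, start_time, end_time
          FROM stl_unload_log
         WHERE query = %s
         ORDER BY path
    """, (12345,))   # 12345 is a placeholder query id
    for row in cur.fetchall():
        print(row)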