BigQuery data export costs - google-cloud-platform

I'm working on a project that consists of receiving data in BigQuery, doing some aggregations, and exporting the results to Qlik Sense to build dashboards. I'm still in an exploratory phase with only a few GB of data, but I want to estimate how much I will pay to export data from BigQuery to an external platform in the production phase.
When I load data from Cloud Storage buckets (where the data comes from) into BigQuery, the data gets compressed, i.e., the tables in BigQuery occupy less space than the files in the Cloud Storage buckets.
Thus, my question is: when I then export the data from BigQuery to an external platform, what volume of traffic will be counted? The data's original size (before compression), or the compressed size it has in BigQuery?
Will the egress (export) volume depend on the format that Qlik uses to export the data? If so, what format does Qlik use to export the data?

If you follow BigQuery Exporting Data and export BigQuery table data to GCS, the export itself is free of charge.
Although free, exporting is limited to 10 TB of data per day per project. You are advised to use the Storage API if the export quota doesn't work for you.
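For reference, a table export can also be expressed in SQL with the EXPORT DATA statement; this is only a minimal sketch, and the bucket, dataset, and table names below are placeholders. Note that the SELECT wrapped by EXPORT DATA is billed as a normal query, whereas a plain table export (extract) job as described above is free.
EXPORT DATA OPTIONS (
  uri = 'gs://my-export-bucket/qlik/part-*.csv',  -- single wildcard required; placeholder bucket
  format = 'CSV',
  overwrite = true,
  header = true
) AS
SELECT *
FROM `my-project.my_dataset.aggregated_table`;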
In case you are using the Qlik/BigQuery connector: I noticed that the maximum response size is 128 MB compressed, so it might not be able to export a few GB of data for you. From the documentation, the pricing model on the BigQuery side is simple, since the connector just issues a query and reads the response:
Query cost: see BigQuery query pricing, which is based on the amount of data the query scans
Transfer cost: free

Related

BigQuery Pricing Comparison: Loading data into BigQuery vs Using Create External Table

My team is developing a data platform on Google Cloud Platform.
We uploaded our company's data to Google Cloud Storage and are trying to build a data mart on BigQuery.
However, in order to save on GCP usage costs, we are deciding whether to load all data from GCS into BigQuery or to create external tables on BigQuery.
Which way is more cost efficient?
BigQuery and its external table capability blur the border between data lake (files) and data warehouse (structured data), so your question is relevant.
When you use an external table, several features are missing, such as clustering and partitioning, and your files are parsed on the fly (with type casting) -> processing is slower and you can't control or limit the volume of data that you process. In addition, errors in a file can break your query.
When you use a native table, the storage is optimized for BigQuery processing: the data is already clean and parsed, and the table can be partitioned and clustered.
The question of cost has several dimensions. Firstly, data storage: if you have files in GCS and the same data in BigQuery, you pay for storage twice. However, after 90 days without any update, the data moves to "archive" (long-term) storage mode in BigQuery, which is about half the price. In addition, you can move your GCS files to a cold storage class after their integration into BigQuery.
That's it for storage. Then the processing. First of all, processing roughly costs 10 times more than storage, and it's the most important thing to focus on. When you run a BigQuery query, you pay for the volume of data that your query scans. With native BigQuery tables, partitions and clusters let you limit the amount of data that you scan and therefore greatly reduce the cost. With external tables, you can't use the partitioning and clustering features, so you always pay for the full amount of data.
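To make the difference concrete, here is a small sketch with made-up dataset, table, and column names: a native table created with partitioning and clustering lets a filtered query scan (and bill) only the matching partitions, which an external table cannot do.
-- Native table, partitioned by day and clustered: filtered queries scan less data
CREATE TABLE my_dataset.events_native
PARTITION BY DATE(event_ts)
CLUSTER BY customer_id AS
SELECT * FROM my_dataset.events_external;  -- hypothetical external table over the GCS files

-- Only the 2020-01-01 partition is scanned (and billed)
SELECT COUNT(*)
FROM my_dataset.events_native
WHERE DATE(event_ts) = DATE '2020-01-01';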
Therefore, it depends (as always) on your volume of data and the frequency of your queries.
Don't forget one additional point: with external tables, errors in the files can break your queries. In production, that can be dramatic. Think carefully about that.
Finally, querying an external table is slower than querying a native table (no partitioning, therefore more data to process, plus parsing/casting time). Time is money (if you have time-critical queries), and that intangible cost can also influence your choice.
The answer from guillaume blaquiere is okay, but it forgets to mention something important: it is possible to do partitioned queries. You can create partitioned external tables linked to a bucket in Cloud Storage. E.g.:
gs://myBucket/myTable/dt=2019-10-31/lang=en/foo
gs://myBucket/myTable/dt=2018-10-31/lang=fr/bar
Then, you can use "dt" or "lang" filters in SQL queries from BigQuery.
https://cloud.google.com/bigquery/docs/hive-partitioned-queries-gcs
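In DDL this looks roughly as follows; it is only a sketch assuming the bucket layout above, and the dataset name is a placeholder.
CREATE EXTERNAL TABLE my_dataset.myTable
WITH PARTITION COLUMNS (
  dt DATE,
  lang STRING
)
OPTIONS (
  format = 'CSV',
  uris = ['gs://myBucket/myTable/*'],
  hive_partition_uri_prefix = 'gs://myBucket/myTable'
);

-- Only the files under dt=2019-10-31/lang=en/ are read
SELECT *
FROM my_dataset.myTable
WHERE dt = DATE '2019-10-31' AND lang = 'en';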

How to trigger an automatic update of a Google BigQuery dataset every time a CSV is uploaded to Google Cloud Storage

I am trying to automate the entire data loading: whenever I upload a file to Google Cloud Storage, it should automatically trigger a load of the data into the BigQuery dataset. I know there is a scheduled daily update available, but I want something that triggers only whenever the CSV file is (re)uploaded.
You have 2 possibilities:
Either you react to events. I mean you can plug a function into Google Cloud Storage events. In the event message you get the name of the file stored in GCS and you can do what you want with it, for example run a load job from Google Cloud Storage.
Or, do nothing! Leave the file in GCS and create a BigQuery federated (external) table that reads from GCS.
With these 2 solutions, your data is accessible from BigQuery. Your Data Studio charts can query BigQuery; the data is there. However:
The load job is more efficient: you can partition and cluster your data to optimize speed and cost. However, you duplicate your data (from GCS) and you have to write and run your function. Still, the cost is very low and the function very simple. For big data, it's my recommended solution.
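A minimal sketch of the load itself, expressed as a SQL LOAD DATA statement (the function could just as well call the load-job API directly; the bucket, dataset, table, and file names here are placeholders):
-- Append the newly uploaded CSV into a native table
LOAD DATA INTO my_dataset.my_table
FROM FILES (
  format = 'CSV',
  skip_leading_rows = 1,
  uris = ['gs://my-bucket/incoming/my_file.csv']
);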
The federated table is very useful when the quantity of data is low, for occasional access, or for prototyping. You can't cluster or partition your data, and the speed is lower than with data loaded into BigQuery (because the CSV parsing is performed on the fly).
So, big data is a wide area: do you need to transform the data before the load? Can you transform it after the load? How can you chain the queries one after the other? ...
Don't hesitate if you have other questions on this!

How to retrieve BigQuery billing details from the GCP console or UI?

My team is using BigQuery for our product development. A bill of Rs 5159 was generated for one day's transactions.
I checked the transaction details and found:
BigQuery Analysis: 15.912 Tebibytes [Currency conversion: USD to INR using rate 69.155]
Is it possible to somehow find out more details about the transactions, like the table names, the queries that were executed, and the exact time of execution?
BigQuery automatically sends audit logs to Stackdriver Logging, which provides the ability to do aggregated analysis on log data. See the BigQuery schema for exported logs for details.
As a quick example: query cost breakdown by identity.
This query shows estimated query costs by user identity. It estimates costs based on the list price for on-demand queries in the US. This pricing may not be accurate for other locations or for customers leveraging flat-rate billing.
#standardSQL
WITH data AS (
  SELECT
    protopayload_auditlog.authenticationInfo.principalEmail AS principalEmail,
    protopayload_auditlog.servicedata_v1_bigquery.jobCompletedEvent AS jobCompletedEvent
  FROM
    `MYPROJECTID.MYDATASETID.cloudaudit_googleapis_com_data_access_YYYYMMDD`
)
SELECT
  principalEmail,
  FORMAT('%9.2f', 5.0 * (SUM(jobCompletedEvent.job.jobStatistics.totalBilledBytes) / POWER(2, 40))) AS Estimated_USD_Cost
FROM
  data
WHERE
  jobCompletedEvent.eventName = 'query_job_completed'
GROUP BY principalEmail
ORDER BY Estimated_USD_Cost DESC
As of last year, BigQuery provides INFORMATION_SCHEMA tables that also give access to job information via the JOBS_BY_* views. The INFORMATION_SCHEMA.JOBS_BY_USER and INFORMATION_SCHEMA.JOBS_BY_PROJECT views even include the exact query alongside the processed bytes. It might not be 100% accurate (because bytes processed != bytes billed), but it should allow you to get a good overview of your costs, which queries triggered them, and who the initiator was.
Example
SELECT
  creation_time,
  job_id,
  project_id,
  user_email,
  total_bytes_processed,
  query
FROM
  `region-us`.INFORMATION_SCHEMA.JOBS_BY_USER
The most "efficient" way to keep an eye on the cost is using the INFORMATION_SCHEMA.JOBS_BY_ORGANIZATION view as it automatically includes all projects of the organization. You need to be Organization Owner or Organization Administrator to use that view, though.
From there you can figure out which jobs were the most expensive (i.e., get their job IDs) and then drill down via JOBS_BY_PROJECT to get the exact query.
See https://www.pascallandau.com/bigquery-snippets/monitor-query-costs/ for a more comprehensive explanation.
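A minimal sketch of such an organization-wide overview (assuming your jobs run in the US region and you have the required organization-level role; the time window is arbitrary):
SELECT
  project_id,
  user_email,
  job_id,
  total_bytes_billed
FROM
  `region-us`.INFORMATION_SCHEMA.JOBS_BY_ORGANIZATION
WHERE
  job_type = 'QUERY'
  AND creation_time >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 7 DAY)
ORDER BY total_bytes_billed DESC
LIMIT 20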
You need to Export Billing Data to BigQuery
Tools for monitoring, analyzing and optimizing cost have become an important part of managing development. Billing export to BigQuery enables you to export your daily usage and cost estimates automatically throughout the day to a BigQuery dataset you specify.
You can also export the data to a CSV or JSON file. However, if you use regular file export, you should be aware that it captures a smaller dataset than export to BigQuery. For more information about regular file export and the data it captures, see Export Billing Data to a File.
After you enable BigQuery export, it might take a few hours to start seeing your data. Billing data automatically exports your data to BigQuery in regular intervals, but the frequency of updates in BigQuery varies depending on the services you're using. Note that BigQuery loads are ACID compliant, so if you query the BigQuery billing export dataset while data is being loaded into it, you will not encounter partially loaded data.
Follow the step by step guide: How to enable billing export to BigQuery
https://cloud.google.com/billing/docs/how-to/export-data-bigquery
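Once the export is enabled, the BigQuery portion of a bill can be broken down with a query along these lines (only a sketch; the project, dataset, and the billing-account suffix in the table name are placeholders):
SELECT
  service.description AS service,
  sku.description AS sku,
  SUM(cost) AS total_cost
FROM
  `my-project.billing_export.gcp_billing_export_v1_XXXXXX_XXXXXX_XXXXXX`
WHERE
  service.description = 'BigQuery'
  AND usage_start_time >= TIMESTAMP '2019-06-01'
GROUP BY service, sku
ORDER BY total_cost DESC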

What are the pros and cons of loading data directly into Google BigQuery vs going through Cloud Storage first?

Also, is there anything wrong with doing transforms/joins directly within BigQuery? I'd like to minimize the number of components and steps involved for a data warehouse I'm setting up (simple transaction and inventory data for a chain of retail stores.)
Well, if you go through GCS it means you are not streaming your data; loading from a file into BQ is free, and files can be up to 5 TB in size. That is sometimes an advantage: the large-file capability and being free. Also, streaming is real-time, and going through GCS means it's not real-time.
If you want to stream data directly into BQ tables, that has a cost. Currently the price for streaming is $0.01 per 200 MB (June 2018), so around $50 for 1 TB.
On the other hand, transformation can be done with SQL if you can express the task that way. Otherwise you have plenty of options; most of the time people use Dataflow to transform things. See the linked tutorial for an advanced example.
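A trivial sketch of such an in-BigQuery transform, with made-up dataset, table, and column names matching the retail example in the question:
-- Join raw transactions with inventory and aggregate per store and day
CREATE OR REPLACE TABLE warehouse.daily_store_sales AS
SELECT
  t.store_id,
  DATE(t.sold_at) AS sale_date,
  SUM(t.quantity * i.unit_price) AS revenue
FROM staging.transactions AS t
JOIN staging.inventory AS i
  ON t.item_id = i.item_id
GROUP BY t.store_id, sale_date;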
Look also into
Cloud Dataprep - Data Preparation and Data Cleansing and
Google Data Studio: Easily Build Custom Reports and Dashboards
Also an advanced example:
Performing ETL from a Relational Database into BigQuery
Loading data via Cloud Storage is the fastest (and the cheapest) way.
Loading directly can be done from an app (using streaming inserts, which add some additional cost).
As for transformations - if what you plan/need to do can be done in BigQuery, you should do it in BigQuery :) - it is the best and fastest way of doing ETL.
But you should take into account the cost of running queries (if you are not paying Google for slots, it is about $5 per 1 TB scanned).
Another good option for complex ETL is Dataflow - but it can become expensive very quickly, in exchange for more flexibility.

Best storage format to back up a Hive internal table

I have one Hive internal table which has around 500 million records.
My Hive is deployed on top of AWS EMR. I do not want to keep the EMR cluster running all the time, hence I want to back up the Hive internal table data.
One easy way of doing it is to create an external table pointing to an S3 location and then move all records into that external table using an INSERT command.
Whenever I need the internal table back, I can use this external S3 table to get all the data back.
Since this table's only purpose is backup, I want to ask which STORED AS format will be the best choice for me.
Hive currently supports the following formats:
TEXTFILE
SEQUENCEFILE
ORC
PARQUET
AVRO
RCFILE
Also, is there any other way to back up internal tables other than the approach mentioned above?
In Short
I'd think changing the file format (among the list you mentioned) will not make much difference in size. But the file size and the type of access you want on that file play a crucial role in your cloud billing.
So consider the following:
Compression - to reduce the size
Amazon Glacier - a more cost-effective solution than S3 in AWS, as the data is rarely accessed (archival)
Things to consider when choosing a solution: how much time you can afford
to access the file from archival storage,
to convert the data format back to a Hive managed table (if you change it during archival),
to uncompress the data (each compression is a trade-off between time and size).
Extended answer
Each file format differs in decompression speed and space efficiency; pick a compression format that is available to you and balanced (i.e., time vs. space, per the questions above). Published compression benchmarks can help with that comparison.
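As a concrete sketch of the backup approach described in the question (HiveQL with hypothetical table, column, and bucket names), using ORC with Snappy compression as one reasonable balance between size and restore speed:
-- External backup table on S3, columnar format with compression
CREATE EXTERNAL TABLE my_table_backup (
  id BIGINT,
  payload STRING,
  created_at TIMESTAMP
)
STORED AS ORC
LOCATION 's3://my-backup-bucket/hive/my_table/'
TBLPROPERTIES ('orc.compress' = 'SNAPPY');

-- Copy the internal table into the backup
INSERT OVERWRITE TABLE my_table_backup
SELECT id, payload, created_at FROM my_table;

-- Restore later by selecting back into the (re)created managed table, e.g.:
-- INSERT OVERWRITE TABLE my_table SELECT * FROM my_table_backup;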