I want to analyze about 50 GB of data (constantly growing) using Google BigQuery, but I'm wondering about two things regarding BigQuery pricing and analytics.
My data content (each row)
COLUMN | EXAMPLE VALUE
USER_ID --> unique user ID (e.g. zc5zta5h7a6sr)
BUY_COUNT --> INT (e.g. 35)
TOTAL_CURRENCY --> USD (e.g. $500)
etc.
What I want to show in the chart: the number of unique users per TOTAL_CURRENCY bucket, e.g. $1-999 and $1,000-10,000+.
I know that pricing is $5 per 1 TB processed by queries, but:
1-) 1 GB of new data will be added to the BigQuery table every day. I want a live chart that refreshes as new data arrives. Will BigQuery bill only for the 1 GB of data added each day, or will it repeatedly analyze the full 50 GB and bill 50+1 GB with each new batch of data?
2-) Rows with the same ID can be added to my constantly updated data set. Is it possible to combine them automatically? For example;
Can I update the BUY_COUNT column in the existing row when the user with ID zc5zta5h7a6sr makes a new purchase? If so, how will I be billed for it?
Thank you.
BigQuery analysis billing occurs every time you run a query. About your points:
If your query scans the whole table every time, you will be billed for the current size of the table each time the query runs. There are some ways to optimize this, such as materialized views, partitioned tables, building an aggregated table, etc.
If your aggregation is not very complex, a materialized view can help with this point. For example, you can have a raw table with unaggregated data and a materialized view that aggregates BUY_COUNT by user. You will be billed for the bytes scanned during the automatic maintenance plus the bytes scanned every time a query runs against the view.
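To make that concrete, here is a rough sketch, assuming a hypothetical raw table mydataset.purchases_raw with the columns from the question (USER_ID, BUY_COUNT, TOTAL_CURRENCY); the dataset, table and view names are placeholders:

-- Materialized view that keeps per-user totals up to date automatically
CREATE MATERIALIZED VIEW mydataset.user_totals AS
SELECT
  USER_ID,
  SUM(BUY_COUNT) AS total_buys,
  SUM(TOTAL_CURRENCY) AS total_currency
FROM mydataset.purchases_raw
GROUP BY USER_ID;

-- Chart query: unique users per spend bucket, reading the much smaller view
SELECT
  CASE
    WHEN total_currency BETWEEN 1 AND 999 THEN '$1-999'
    ELSE '$1,000-10,000+'
  END AS bucket,
  COUNT(DISTINCT USER_ID) AS unique_users
FROM mydataset.user_totals
GROUP BY bucket;

Queries against the view are billed for the bytes they scan in the view, which is usually far less than re-scanning the whole raw table.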
More info about the pricing: https://cloud.google.com/bigquery/pricing
Related
I have a problem related to storing data on GCP BigQuery. I have a partitioned table that is 10 terabytes in size and growing day by day. How can I store this data with minimum cost and maximum performance?
First option: I can store the last month's data in BigQuery and the rest of the data on GCS.
Second option: deleting everything older than the last month, but this option seems illogical to me.
What do you think about this issue?
BigQuery Table
The best solution is to use a BigQuery table that is partitioned by a useful date column. A large part of this table will be charged at the lower long-term storage rate. If it is possible for your organisation, also consider a lower-cost region for your whole project, because all data that is queried together needs to be in the same region.
For each query, only the needed partitions (time range) and the referenced columns are charged.
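As a rough sketch (table and column names are hypothetical), a date-partitioned table could look like this; note that partitions not modified for 90 days are automatically billed at the long-term storage rate:

-- Partitioned table: each query only scans the partitions its date filter selects
CREATE TABLE mydataset.events_partitioned
(
  event_date DATE,
  user_id STRING,
  payload STRING
)
PARTITION BY event_date
OPTIONS (
  require_partition_filter = TRUE  -- forces every query to filter on event_date
);

-- Billed only for the scanned partitions and the referenced columns
SELECT user_id
FROM mydataset.events_partitioned
WHERE event_date BETWEEN '2023-01-01' AND '2023-01-31';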
GCS files
There is an option to use external tables for files stored in GCS. These have some drawbacks: for each query, the complete data is read and charged. There are some partitioning possibilities using Hive partition keys (https://cloud.google.com/bigquery/docs/hive-partitioned-queries). It is also not possible to precalculate the cost of a query, which is very bad for testing and debugging.
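If you do go the external-table route, a hedged sketch of a Hive-partitioned external table (the bucket path, file format and partition column are all assumptions) might look like this:

-- External table over GCS files laid out as gs://my-bucket/logs/dt=YYYY-MM-DD/...
CREATE EXTERNAL TABLE mydataset.logs_external
WITH PARTITION COLUMNS (dt DATE)
OPTIONS (
  format = 'PARQUET',
  hive_partition_uri_prefix = 'gs://my-bucket/logs',
  uris = ['gs://my-bucket/logs/*']
);

Even with partition keys, the cost and debugging drawbacks described above still apply, so this is mainly interesting for rarely queried archive data.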
use cases
If you only need the last month's data for daily reports, it is enough to store that data in BigQuery and the rest in GCS. If you only need to run a query over a longer time range once a month, you can load the data from GCS into BigQuery and delete the table after your queries.
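A sketch of that last workflow with the bq command-line tool (the dataset, table and bucket names are hypothetical):

# Load the archived GCS files into a temporary BigQuery table
bq load --source_format=CSV --autodetect mydataset.history_tmp "gs://my-bucket/archive/2022-*.csv"
# ... run the long-range monthly queries against mydataset.history_tmp ...
# Drop the temporary table again to stop paying for its storage
bq rm -f -t mydataset.history_tmp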
What I have seen so far is that the AWS Glue crawler creates the table based on the latest changes in the S3 files.
Let's say the crawler creates a table and then I upload a CSV with updated values in one column. The crawler runs again and updates the table's column with the new values. I want to eventually be able to show a comparison of the old and new data in QuickSight. Is this scenario possible?
For example,
right now my CSV file is set up as details of one AWS service: RDS is the CSV file name and the columns are account ID, account name, which region it is in, etc.
There was one column of percentage with a value of 50%, and it gets updated to 70%. Would I somehow be able to get the old value as well to show in QuickSight, to say that previously it was 50% and now it's 70%?
Maybe this scenario is not even valid? I want to be able to show which account has what cost in a given month and how the cost differs in other months. If I make separate tables on each update of the CSV, there would be 1000+ tables at some point.
If I have understood your question correctly, you are aiming to track data over time. Above you suggest creating a table for each snapshot; why not instead maintain a record in a single table for each snapshot? You can then build various analyses over that data, comparing specific months or tracking month-by-month values.
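A hedged sketch of that idea (the table and column names are hypothetical): keep one row per account per snapshot date in a single table, then pivot or filter by snapshot date in the analysis.

-- service_costs columns: snapshot_date, account_id, account_name, region, cost_pct
-- Compare two (hypothetical) snapshot dates side by side:
SELECT
  account_id,
  MAX(CASE WHEN snapshot_date = DATE '2023-01-31' THEN cost_pct END) AS pct_january,
  MAX(CASE WHEN snapshot_date = DATE '2023-02-28' THEN cost_pct END) AS pct_february
FROM service_costs
GROUP BY account_id;

QuickSight can then point at this one table and compare any two snapshots, instead of juggling 1000+ per-upload tables.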
Recently unlinked and re-linked a Firebase project with a different Google Analytics account.
The BigQuery integration configured to export GA data created the new dataset and data started populating into that.
The old dataset corresponding to the unlinked, "default" GA account, which contained ~2 years of data, is still accessible in the BigQuery UI; however, only the 5 most recent event_ tables are visible in the dataset (5 days' worth of event data).
Is it possible to extract historical data from the old, unlinked dataset?
What I could suggest is to run some queries to further validate the data that you have within your BigQuery dataset.
In this case, I would start by getting the dates for each table to see how many days of data are contained in the dataset.
SELECT event_date
FROM `firebase-public-project.analytics_153293282.events_*`
GROUP BY event_date ORDER BY event_date
EDIT
A better way to do this, and to get all the tables within the dataset, is to use the bq command-line tool, see reference here.
bq ls firebase-public-project:analytics_153293282
You'll get a listing of all the tables contained in the dataset.
You could also do a COUNT(event_date), so you can see how many records you have per day and compare this to what you can see in your Firebase project.
SELECT event_date, COUNT(event_date) ...
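For reference, a complete form of that count query (using the same public dataset as the earlier example) might look like this:

SELECT event_date, COUNT(event_date) AS events_per_day
FROM `firebase-public-project.analytics_153293282.events_*`
GROUP BY event_date
ORDER BY event_date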
In case there is data missing, you could use table decorators to try to recover that data, see example here.
About the table expiration date, see this: in short, a default expiration time can be set at the dataset level and it is applied only to new tables (existing tables require a manual update of their expiration time one by one), and an expiration time can also be set when a table is created. To see if the expiration time was changed, you could look in your logs for protoPayload.methodName="tableservice.update" and check whether an expireTime was set, as follows:
tableUpdateRequest: {
  resource: {
    expireTime: "2020-12-31T00:00:00Z"
    ...
  }
}
Besides this, if you have a GCP support plan, you could reach out to them for further assistance on what could have happened with the tables in that dataset. Otherwise, you could open a case in the public issue tracker. Keep in mind that Firebase doesn't delete your data when unlinking a Firebase project from BigQuery, so in theory the data should still be there.
We have a campaign management system. We create and run campaigns on various channels. When a user clicks/accesses any of the ads (as part of a campaign), the system generates a log. Our system is hosted on GCP, and using the ‘Exports’ feature the logs are exported to BigQuery.
In BigQuery the Log Table is partitioned by the ‘timestamp’ field (the time when the log is generated). We understand that BigQuery stores dates in the UTC timezone, so partitions are also based on UTC time.
Using this Log Table, we need to generate reports per day. Reports can be things like the number of impressions per day per campaign. And we need to show these reports in ETC time.
Because the BigQuery table is partitioned by UTC time, a query for one ETC day would potentially need to scan multiple partitions. Has anyone addressed this issue, or do you have suggestions to optimise the storage and queries so that they take full advantage of BigQuery's partitioning feature?
We are planning to use GCP Data studio for Reports.
BigQuery should be smart enough to filter for the correct timezones when dealing with partitions.
For example:
SELECT MIN(datehour) time_start, MAX(datehour) time_end, ANY_VALUE(title) title
FROM `fh-bigquery.wikipedia_v3.pageviews_2018` a
WHERE DATE(datehour) = '2018-01-03'
5.0s elapsed, 4.56 GB processed
For this query we processed the 4.56GB in the 2018-01-03 partition. What if we want to adjust for a day in the US? Let's add this in the WHERE clause:
WHERE DATE(datehour, "America/Los_Angeles") = '2018-01-03'
4.4s elapsed, 9.04 GB processed
Now this query is automatically scanning 2 partitions, as it needs to go across days. For me this is good enough, as BigQuery is able to automatically figure this out.
But what if you wanted to permanently optimize for one timezone? You could create a generated, shifted DATE column and partition the table by that column instead.
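A rough sketch of that approach (the table and column names, and the America/New_York zone, are assumptions; use whichever zone your reports actually need): compute the local date when the row is written and partition by it, so a one-day report prunes to a single partition.

-- Store the local report date next to the UTC timestamp and partition by it
CREATE TABLE mydataset.campaign_logs
(
  log_timestamp TIMESTAMP,
  local_date DATE,  -- e.g. DATE(log_timestamp, 'America/New_York'), filled at insert time
  campaign_id STRING
)
PARTITION BY local_date;

-- A per-day, per-campaign report then scans exactly one partition
SELECT campaign_id, COUNT(*) AS impressions
FROM mydataset.campaign_logs
WHERE local_date = '2023-01-03'
GROUP BY campaign_id;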
I'd like to give my partners the results of simple COUNT(*) ... GROUP BY items.color type queries and perhaps joins over items and orders or some such. I'd like query response time to be sub-second (on the order of a second, at worst), and scale to billions of rows counted.
My current approach is to either back up my GCDatastore data and load it into BigQuery to provide daily analytics, or use GCDataflow to maintain a set of pre-defined counters.
Is this something Spanner has as a use-case for, if I transition my backend from Datastore to Spanner?
Today, running counting queries in Cloud Spanner requires a full table scan. Depending on the size of the table this could take more than a second.
One thing you could do is to track the count in a separate table, and whenever you update the items table, update the count in the same transaction.
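A minimal sketch of that pattern (the item_counts table and its columns are hypothetical), using Spanner DML with both statements in the same read-write transaction:

-- Inside one read-write transaction:
INSERT INTO items (item_id, color) VALUES ('item-123', 'red');

UPDATE item_counts
SET item_count = item_count + 1
WHERE color = 'red';

-- Serving the partner-facing count is then a single-row read instead of a full scan:
SELECT item_count FROM item_counts WHERE color = 'red';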