I am trying to upload a compressed file from my GCS bucket into BigQuery.
In the new UI it is not clear how to specify that the file should be uncompressed.
I get an error, as if gs://bucket/folder/file.7z were a .csv file.
Any help?
Unfortunately, .7z files are not supported by BigQuery; only gzip files are, and decompression happens automatically after you select the data format and create the table.
If you think BigQuery should accept .7z files too, you could file a feature request so the BigQuery engineers have it in mind for future releases.
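As a workaround, assuming you can extract the .7z locally first (e.g. with 7-Zip or the py7zr package), you can re-compress the extracted CSV as gzip, which BigQuery does load. A minimal sketch of the gzip step using only the standard library (file names and contents are hypothetical):

```python
import gzip
import shutil

# Hypothetical sample data standing in for the CSV extracted from the
# .7z archive (the extraction itself needs an external tool such as
# 7-Zip or the py7zr package).
with open("file.csv", "w") as f:
    f.write("id,name\n1,alice\n2,bob\n")

# Re-compress as gzip, a format BigQuery can load directly.
with open("file.csv", "rb") as src, gzip.open("file.csv.gz", "wb") as dst:
    shutil.copyfileobj(src, dst)

# Verify the round-trip locally before uploading to GCS.
with gzip.open("file.csv.gz", "rt") as f:
    print(f.read() == "id,name\n1,alice\n2,bob\n")  # True
```

The resulting file.csv.gz can then be uploaded back to the bucket and loaded like any other gzip CSV.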
I have a folder in ADLS Gen2 that contains multiple part parquet files. I need to read all of these parquet files in one shot with Informatica Developer, and I need to write all of them into another folder in ADLS Gen2.
Do you have any suggestions?
Thank you
Ozge
1- I took only the last day's data from a folder under ADLS Gen2 that has one file per day (using parameterization). Since I run this mapping with Databricks, I end up with multiple part parquet files.
2- As a second step I need to read all of these part parquet files. I thought that reusing the data object I created in step 1 would read all the files, but it does not.
I created a permanent BigQuery table that reads some csv files from a Cloud Storage bucket sharing the same prefix (filename*.csv) and the same schema.
There are some csvs, though, that make BigQuery queries fail with a message like the following: "Error while reading table: xxxx.xxxx.xxx, error message: CSV table references column position 5, but line starting at position:10 contains only 2 columns."
By moving the csvs out of the bucket one by one, I identified the one responsible.
This csv file doesn't even have 10 lines...
I found this ticket, BigQuery error when loading csv file from Google Cloud Storage, so I thought the issue was an empty line at the end. But other csvs in my bucket have one too, so that can't be the reason.
On the other hand, this csv is the only one with content type text/csv; charset=utf-8; all the others are text/csv, application/vnd.ms-excel, or application/octet-stream.
Furthermore, after downloading this csv to my local Windows machine and uploading it again to Cloud Storage, its content type is automatically converted to application/vnd.ms-excel.
Then, even with the trailing empty line, BigQuery can query the permanent table based on filename*.csv.
Is it possible that BigQuery has issues querying csvs with UTF-8 encoding, or is it just a coincidence?
Use Google Cloud Dataprep to load your csv file. Once the file is loaded, analyze the data and clean it if required.
Once all the rows are cleaned, you can then sink that data into BQ.
Dataprep is a GUI-based ETL tool, and it runs a Dataflow job internally.
Do let me know if any more clarification is required.
Just to note the issue: the CSV file had gzip as its content encoding, which was why BigQuery didn't interpret it as a CSV file.
According to the documentation, BigQuery expects CSV data to be UTF-8 encoded:
"encoding": "UTF-8"
In addition, since this issue relates to the metadata of the files in GCS, you can edit the metadata directly from the Console.
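One way to check for this locally is to look at the file's first two bytes: gzip content always starts with the magic number 0x1f 0x8b. A hedged stdlib sketch (the file name is hypothetical, and the gzipped content is simulated here rather than downloaded from GCS):

```python
import gzip

# Simulate the problem: a file named like a CSV that actually
# contains gzip-compressed bytes (hypothetical sample content).
with open("data.csv", "wb") as f:
    f.write(gzip.compress(b"a,b\n1,2\n"))

# Check the gzip magic number before treating the file as plain text.
with open("data.csv", "rb") as f:
    magic = f.read(2)

if magic == b"\x1f\x8b":  # gzip magic number
    with open("data.csv", "rb") as f:
        data = gzip.decompress(f.read())
    print(data.decode("utf-8"))  # the real, uncompressed CSV text
```

If the check matches, decompressing and re-uploading the plain CSV (or clearing the content-encoding metadata in the Console) should make BigQuery read it normally.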
I've configured DMS to read from a MySQL database and migrate its data to S3 with replication. Everything seems to work fine: it creates big CSV files for the full load and then starts creating smaller CSV files with the deltas.
The problem is that when I read these CSV files with AWS Glue crawlers, they don't seem to pick up the deltas, or even worse, they seem to pick up only the deltas, ignoring the big CSV files.
I know that there is a similar post here: Athena can't resolve CSV files from AWS DMS
But it is unanswered and I can't comment there, so I'm opening this one.
Has anyone found a solution to this?
Best regards.
We are able to load uncompressed CSV files and gzipped files completely fine.
However, if we want to load CSV files compressed as .zip, what is the best approach?
Will we need to manually convert the zip to gz, or has BigQuery added support to handle this?
Thanks
BigQuery supports loading gzip files.
The limitation is that if you use gzip compression, BigQuery cannot read the data in parallel, so loading compressed CSV data is slower than loading uncompressed data.
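Since BigQuery has no native .zip support, converting each zip member to gzip before loading is the usual manual approach. A minimal stdlib sketch of that conversion (the archive and its contents are hypothetical stand-ins):

```python
import gzip
import shutil
import zipfile

# Hypothetical: build a sample .zip standing in for the input file.
with zipfile.ZipFile("data.zip", "w") as zf:
    zf.writestr("data.csv", "id,val\n1,10\n2,20\n")

# Convert each member of the zip into its own .gz file,
# a format BigQuery can load directly.
with zipfile.ZipFile("data.zip") as zf:
    for name in zf.namelist():
        with zf.open(name) as src, gzip.open(name + ".gz", "wb") as dst:
            shutil.copyfileobj(src, dst)

# Sanity-check the converted file.
with gzip.open("data.csv.gz", "rt") as f:
    print(f.read() == "id,val\n1,10\n2,20\n")  # True
```

The resulting .gz files can then be uploaded to GCS and loaded like any other gzip CSV.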
You can try 42Layers.io for this. We use it to import zipped CSV files directly from FTP into BQ, and then set it on a schedule to run every day. They also let you do field mapping to your existing tables within BQ. Pretty neat.
For a project we've inherited, we have a largish set of legacy data (600 GB) that we would like to archive, but still have available if need be.
We're looking at using AWS Data Pipeline to move the data from the database to S3, following this tutorial:
https://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/dp-object-copyactivity.html
However, we would also like to be able to retrieve a 'row' of that data if we find the application is actually using a particular row.
Apparently that tutorial puts all of the data from a table into a single massive CSV file.
Is it possible to split the data into separate files, with 100 rows of data in each file, giving each file a predictable name, such as:
foo_data_10200_to_10299.csv
So that if we realize we need to retrieve row 10239, we know which file to fetch, and can download just that file rather than all 600 GB of data.
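Data Pipeline's CopyActivity won't name chunks this way on its own, but a post-processing script can split the export into fixed-size, predictably named files before upload. A hedged sketch keyed on a numeric row id; the chunk size, row shape, and naming scheme are all assumptions matching the example above:

```python
import csv
import itertools

CHUNK = 100  # rows per output file (assumed)

def chunk_rows(rows, size):
    """Yield successive lists of at most `size` rows."""
    it = iter(rows)
    while batch := list(itertools.islice(it, size)):
        yield batch

# Hypothetical data standing in for the exported table:
# rows of (id, value) starting at id 10200.
rows = [[i, f"name_{i}"] for i in range(10200, 10450)]

for batch in chunk_rows(rows, CHUNK):
    # Name each file by the id range it covers, so row 10239
    # is known to live in foo_data_10200_to_10299.csv.
    start = batch[0][0]
    fname = f"foo_data_{start}_to_{start + CHUNK - 1}.csv"
    with open(fname, "w", newline="") as f:
        csv.writer(f).writerows(batch)
```

This only gives predictable names when the ids are dense and sorted; with gaps, you would key the file name on the actual first and last id in each batch instead.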
If your data is stored in CSV format in Amazon S3, there are a couple of ways to easily retrieve selected data:
Amazon Athena is an interactive query service that makes it easy to analyze data in Amazon S3 using standard SQL.
S3 Select (currently in preview) enables applications to retrieve only a subset of data from an object by using simple SQL expressions.
These work on compressed (gzip) files too, to save storage space.
See:
Welcome - Amazon Athena
S3 Select and Glacier Select – Retrieving Subsets of Objects