I am creating an application in AppSheet in which I am trying to bulk upload data to Google BigQuery by uploading a CSV file. The functionality works fine if there is only one row in the CSV file, but if there are more records the bulk upload fails (see the BigQuery and AppSheet error screenshots).
The restriction I have is that I can't use Google Sheets for any workaround.
I connected AppSheet directly to Google BigQuery and tried to bulk upload data into BigQuery by uploading a CSV file in AppSheet, but it failed.
Related
Error Message: "Failed to create table: Error while reading data, error message: CSV table references column position"
I'm having issues loading data from a CSV in Google Cloud Storage into BigQuery and creating an associated table. I start by adding my raw CSV file to Cloud Storage. Then, in BigQuery, I use Create Dataset > Create Table with the CSV in Cloud Storage as the source.
My CSV format is shown in the first screenshot, and the parameters of my BigQuery table in the second.
I can't get the data to load with this format and setup. The original dataset runs to 10k+ rows, but I've reduced the scope to troubleshoot the format error.
Any response or guidance would be greatly appreciated.
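For reference, here is a minimal sketch of the equivalent load job using the google-cloud-bigquery Python client; the project, dataset, bucket and column names are placeholders, and skip_leading_rows and the schema would need to match your actual CSV:

    from google.cloud import bigquery

    client = bigquery.Client()

    # Hypothetical destination table and source file.
    table_id = "my-project.my_dataset.my_table"
    uri = "gs://my-bucket/raw_data.csv"

    job_config = bigquery.LoadJobConfig(
        source_format=bigquery.SourceFormat.CSV,
        skip_leading_rows=1,  # skip the header row
        schema=[              # explicit schema; must match the CSV column order
            bigquery.SchemaField("id", "STRING"),
            bigquery.SchemaField("value", "FLOAT"),
            bigquery.SchemaField("created_at", "TIMESTAMP"),
        ],
    )

    load_job = client.load_table_from_uri(uri, table_id, job_config=job_config)
    load_job.result()  # waits for completion and surfaces any row-level errors
    print(client.get_table(table_id).num_rows, "rows loaded")

The "CSV table references column position" error generally means a row contains fewer columns than the schema expects, so the header row, the delimiter, and any trailing blank lines are the first things worth checking.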
We currently use Apache Sqoop once daily to export an Oracle DB table containing a CLOB column into HDFS. As part of this we first map the CLOB column to a Java String (using --map-column-java) and save the imported data in Parquet format. This is scheduled as an Oozie workflow.
There is a plan to move from Apache Hive to BigQuery. I am not able to find a way to get this table into BigQuery and would like help on the best approach.
If we go with real-time streaming from the Oracle DB into BigQuery using Google Datastream, can you tell me whether the CLOB column will be streamed correctly? It contains some malformed XML data (close to an XML structure, but with possible discrepancies in following it).
Another option I read about was to extract the table as a CSV file, transfer it to GCS, and have the BigQuery table refer to it there. But since the data in the CLOB column is very large and messy, with multiple commas and special characters in between, I think there will be issues with parsing or exporting. Are there options to do it in Parquet or ORC formats?
The preferred approach is a scheduled daily batch upload from Oracle to BigQuery. I would appreciate any input on how to achieve this.
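To make the batch option concrete, here is a minimal sketch of one possible daily export to Parquet followed by a BigQuery load. It assumes the python-oracledb, pandas, pyarrow and google-cloud-bigquery libraries and uses hypothetical connection details, table and column names; it illustrates the approach rather than a production pipeline:

    import oracledb  # python-oracledb driver (assumed available)
    import pandas as pd
    from google.cloud import bigquery

    # Return CLOB columns as plain Python strings instead of LOB handles.
    oracledb.defaults.fetch_lobs = False

    # Hypothetical connection details and object names.
    conn = oracledb.connect(user="app", password="secret", dsn="dbhost/ORCLPDB1")

    # Parquet stores the CLOB as a single string value, so embedded commas,
    # newlines and quotes in the (possibly malformed) XML are not an issue.
    df = pd.read_sql("SELECT id, created_at, clob_payload FROM my_schema.my_table", conn)

    local_file = "/tmp/my_table.parquet"
    df.to_parquet(local_file, index=False)  # requires pyarrow

    # Load the Parquet file into BigQuery (it could also be staged in GCS first
    # and loaded with load_table_from_uri).
    client = bigquery.Client()
    job_config = bigquery.LoadJobConfig(
        source_format=bigquery.SourceFormat.PARQUET,
        write_disposition=bigquery.WriteDisposition.WRITE_TRUNCATE,  # daily full refresh
    )
    with open(local_file, "rb") as f:
        job = client.load_table_from_file(
            f, "my-project.my_dataset.oracle_table", job_config=job_config
        )
    job.result()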
CLOB data from an Oracle DB can be converted to the desired format, such as ORC, Parquet, TSV, or Avro files, through Enterprise Flexter.
Also, you can refer to this on how to ingest on-premises Oracle data with Google Cloud Dataflow via JDBC, using the Hybrid Data Pipeline On-Premises Connector.
For your other query, about moving from Apache Hive to BigQuery:
The fastest way to import to BQ is to use GCP resources. Dataflow is a scalable solution for reading and writing. Dataproc is another, more flexible option that lets you use more of the open-source stack to read from the Hive cluster.
You can also use this Dataflow template, which would require a connection to be established directly between the Dataflow workers and the Apache Hive nodes.
There is also a plugin for moving data from Hive into BigQuery which uses GCS as temporary storage and the BigQuery Storage API to move the data to BigQuery.
You can also use Cloud SQL to migrate your Hive data to BigQuery.
I have a list of CSV files in a GCS bucket. I want to load those files into BQ.
Before loading, all columns need to be converted to STRING type.
I am using the Dataflow template Text Files on Cloud Storage to BigQuery, which requires a JavaScript user-defined function (UDF) and a JSON file defining the BigQuery table schema.
In the JSON schema, every column needs to be declared as a STRING. (It's a tedious task, as each CSV has 50+ columns and I have to manually write the column names in both the UDF and the JSON.)
Is there any approach to automate this?
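One possible approach, sketched below under the assumption that the classic template expects a JSON file keyed by "BigQuery Schema" and a line-transform UDF: read the header row of a local copy of one CSV and generate both files automatically. The file names are placeholders.

    import csv
    import json

    def generate_template_inputs(csv_path, schema_path, udf_path, delimiter=","):
        """Generate the all-STRING JSON schema and the JavaScript UDF used by the
        'Text Files on Cloud Storage to BigQuery' Dataflow template, based on the
        header row of a local copy of the CSV."""
        with open(csv_path, newline="") as f:
            header = [col.strip() for col in next(csv.reader(f, delimiter=delimiter))]

        # Schema file: every column declared as STRING.
        schema = {"BigQuery Schema": [{"name": col, "type": "STRING"} for col in header]}
        with open(schema_path, "w") as f:
            json.dump(schema, f, indent=2)

        # UDF: split each line and map the values onto the header names.
        # Note: a plain split() does not handle quoted fields containing commas.
        udf = (
            "function transform(line) {\n"
            "  var columns = " + json.dumps(header) + ";\n"
            "  var values = line.split('" + delimiter + "');\n"
            "  var obj = {};\n"
            "  for (var i = 0; i < columns.length; i++) {\n"
            "    obj[columns[i]] = values[i];\n"
            "  }\n"
            "  return JSON.stringify(obj);\n"
            "}\n"
        )
        with open(udf_path, "w") as f:
            f.write(udf)

    # Hypothetical file names; upload the two outputs to GCS and pass their
    # gs:// paths to the Dataflow template.
    generate_template_inputs("sample.csv", "schema.json", "transform.js")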
I created a permanent BigQuery table that reads some CSV files from a Cloud Storage bucket sharing the same prefix (filename*.csv) and the same schema.
Some of the CSVs, however, make BigQuery queries fail with a message like the following: "Error while reading table: xxxx.xxxx.xxx, error message: CSV table references column position 5, but line starting at position:10 contains only 2 columns."
By moving the CSVs out of the bucket one by one, I identified the one responsible.
This CSV file doesn't have 10 lines...
I found this ticket, BigQuery error when loading csv file from Google Cloud Storage, so I thought the issue was an empty line at the end. But other CSVs in my bucket have one too, so that can't be the reason.
On the other hand, this CSV is the only one with content type text/csv; charset=utf-8, all the others being text/csv, application/vnd.ms-excel, or application/octet-stream.
Furthermore, after downloading this CSV to my local Windows machine and uploading it again to Cloud Storage, the content type is automatically converted to application/vnd.ms-excel.
Then, even with the missing line, BigQuery can query the permanent table based on filename*.csv.
Is it possible that BigQuery has issues querying CSVs with UTF-8 encoding, or is it just a coincidence?
Use Google Cloud Dataprep to load your CSV file. Once the file is loaded, analyze the data and clean it if required.
Once all the rows are cleaned, you can sink that data into BQ.
Dataprep is a GUI-based ETL tool, and it runs a Dataflow job internally.
Do let me know if any more clarification is required.
Just to note the cause of the issue: the CSV file had gzip set as its content encoding, which was why BigQuery didn't interpret it as a CSV file.
According to the documentation, BigQuery expects CSV data to be UTF-8 encoded:
"encoding": "UTF-8"
In addition, since this issue is related to the metadata of the files in GCS, you can edit the metadata directly from the Console.
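If you would rather do it programmatically than through the Console, a minimal sketch with the google-cloud-storage Python client could look like this (bucket and object names are placeholders, and it assumes the object's bytes are not actually gzip-compressed):

    from google.cloud import storage

    client = storage.Client()
    bucket = client.bucket("my-bucket")          # hypothetical bucket name
    blob = bucket.get_blob("path/to/file.csv")   # hypothetical object name

    # Clear the gzip Content-Encoding and set a plain CSV content type,
    # so BigQuery reads the object as an uncompressed UTF-8 CSV.
    blob.content_encoding = None
    blob.content_type = "text/csv"
    blob.patch()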
I am trying to upload a compressed file from my GCS bucket into BigQuery.
In the new UI it is not clear how I should specify that the file needs to be uncompressed.
I get an error as if gs://bucket/folder/file.7z were being treated as a .csv file.
Any help?
Unfortunately, .7z files are not supported by BigQuery, only gzip files, and the decompression is performed automatically after selecting the data format and creating the table.
If you think BigQuery should accept 7z files too, you could file a feature request so the BigQuery engineers have it in mind for future releases.
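As an illustration of the workaround this implies, here is a minimal sketch that repackages the archive as gzip before loading. It assumes the third-party py7zr library, a single CSV inside the archive, and placeholder bucket, path and table names:

    import gzip
    import shutil

    import py7zr  # third-party 7z library (assumed installed)
    from google.cloud import bigquery, storage

    # 1. Extract the CSV from a local copy of the .7z archive.
    with py7zr.SevenZipFile("file.7z", mode="r") as archive:
        archive.extractall(path="extracted")

    # 2. Re-compress it as gzip, which BigQuery decompresses automatically for CSV loads.
    with open("extracted/data.csv", "rb") as src, gzip.open("data.csv.gz", "wb") as dst:
        shutil.copyfileobj(src, dst)

    # 3. Upload the gzip file back to GCS and load it into BigQuery.
    storage.Client().bucket("my-bucket").blob("folder/data.csv.gz").upload_from_filename("data.csv.gz")

    client = bigquery.Client()
    job_config = bigquery.LoadJobConfig(
        source_format=bigquery.SourceFormat.CSV,
        skip_leading_rows=1,
        autodetect=True,
    )
    job = client.load_table_from_uri(
        "gs://my-bucket/folder/data.csv.gz",
        "my-project.my_dataset.my_table",
        job_config=job_config,
    )
    job.result()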