How to insert records into a BigQuery linked server

I have used the Simba ODBC driver to connect SQL Server to BigQuery as a linked server in SQL Server Management Studio.
I am not able to insert into BigQuery, only to select data from it. I have checked 'AllowInProcess' and 'NonTransactedUpdate' too.
select * from openquery([GoogleBigQuery], 'select * from first.table2' )
The above select query works.
Query:
insert into OPENQUERY([GoogleBigQuery], 'select * from first.table2') values (1,'c')
Error generated:
"The OLE DB provider "MSDASQL" for linked server "GoogleBigQuery"
could not INSERT INTO table "[MSDASQL]" because of column "id". The
user did not have permission to write to the column."
Query:
INSERT INTO [GoogleBigQuery].[midyear-byway-252503].[first].[table2] select * from Learning_SQL.dbo.demo
Error generated:
OLE DB provider "MSDASQL" for linked server "GoogleBigQuery" returned message "Multiple-step OLE DB operation generated errors. Check each OLE DB status value, if available. No work was done.".
The OLE DB provider "MSDASQL" for linked server "GoogleBigQuery" could not INSERT INTO table "[GoogleBigQuery].[midyear-byway-252503].[first].[table2]" because of column "id". The user did not have permission to write to the column.
I was wondering if anyone has tried inserting into a BigQuery dataset through a linked server.

This error is due to a known limitation: Microsoft SQL Server's linked servers feature does not support INSERT, UPDATE, or DELETE calls to the external database being linked to unless the connection supports transactions.
Since BigQuery does not support explicit transactions, MSSQL will not allow INSERT, UPDATE, or DELETE calls to BigQuery.
If you would like to insert data into BigQuery, consider exporting the data into a file and loading that file into BigQuery.
The load file can be in Avro, CSV, JSON (newline delimited only), ORC, or Parquet format.
For more information, refer to the BigQuery documentation on loading data.
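For example, here is a minimal sketch using the google-cloud-bigquery Java client library. The dataset and table names come from the question, but the Cloud Storage bucket and file name are assumptions, and you would first need to export the SQL Server table to that CSV file:

import com.google.cloud.bigquery.*;

public class LoadCsvIntoBigQuery {
    public static void main(String[] args) throws InterruptedException {
        BigQuery bigquery = BigQueryOptions.getDefaultInstance().getService();
        // Destination table from the question: dataset "first", table "table2".
        TableId tableId = TableId.of("first", "table2");
        // CSV previously exported from SQL Server and uploaded to GCS (hypothetical URI).
        LoadJobConfiguration loadConfig =
            LoadJobConfiguration.newBuilder(tableId, "gs://my-bucket/table2_export.csv")
                .setFormatOptions(FormatOptions.csv())
                .build();
        // Run the load job and block until it completes.
        Job job = bigquery.create(JobInfo.of(loadConfig)).waitFor();
        if (job.getStatus().getError() != null) {
            throw new RuntimeException(job.getStatus().getError().toString());
        }
    }
}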

Related

"BigQuery Multi Table has no outputs. Please check that the sink calls addOutput at some point" error from the Multiple Database Tables plugin

I'm trying to ingest data from different tables within the same database into BigQuery tables, using the Data Fusion Multiple Database Tables plugin as the source and the BigQuery Multi Table sink. I wrote 3 different custom SQL statements and added them in the plugin section under "Data Selection Mode" > "Custom SQL Statements".
The problem is that when I preview, or deploy and run, the pipeline, I get the error "BigQuery Multi Table has no outputs. Please check that the sink calls addOutput at some point."
What I tried in order to figure out this problem:
Ran the custom SQL directly on the database; it worked properly.
Created separate pipelines, one per custom SQL, each a single-table ingestion from SQL Server to a BigQuery table sink; they worked properly.
Tried the other data selection mode in the Multiple Database Tables plugin, "Table Allow List"; it works, but it simply inserts all the data, with no option to transform or filter any column. I did that one to confirm the plugin can reach the database and read data, and it can.
(Screenshots: Data Pipeline - Multiple Database Tables plugin configuration, parts 1 and 2)
In conclusion, I would like to ingest data from multiple tables in one database within a single data pipeline, ideally by writing a custom SQL statement for each table.
I'm open to any advice.
Thank you.

BigQuery: DDL statement not executing via client API

I am executing CREATE TABLE IF NOT EXISTS via the client API, using the following JobConfigurationQuery:
queryConfig.setUseLegacySql(false)
queryConfig.setFlattenResults(false)
queryConfig.setQuery(query)
As I am executing CREATE TABLE DDL, I cannot specify a destination table, write disposition, etc. In the Query History section of the web UI, I see the job executing successfully without any exceptions, but no writes happen. Are DDL statements not supported via the client API?
I am using the following client: "com.google.apis" % "google-api-services-bigquery" % "v2-rev397-1.23.0"
From the BigQuery docs, it seems that no error is returned when the table already exists:
The CREATE TABLE IF NOT EXISTS DDL statement creates a table with the specified options only if the table name does not exist in the dataset. If the table name exists in the dataset, no error is returned, and no action is taken.
Answering your question: DDL is supported from the API, which is also stated in the docs. To do this:
Call the jobs.query method and supply the DDL statement in the request body's query property.
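For example, here is a minimal sketch using the same google-api-services-bigquery Java client the question mentions; the project ID, dataset, and table schema are placeholders:

import com.google.api.client.googleapis.auth.oauth2.GoogleCredential;
import com.google.api.client.googleapis.javanet.GoogleNetHttpTransport;
import com.google.api.client.json.jackson2.JacksonFactory;
import com.google.api.services.bigquery.Bigquery;
import com.google.api.services.bigquery.BigqueryScopes;
import com.google.api.services.bigquery.model.QueryRequest;
import com.google.api.services.bigquery.model.QueryResponse;

public class DdlViaJobsQuery {
    public static void main(String[] args) throws Exception {
        GoogleCredential credential =
            GoogleCredential.getApplicationDefault().createScoped(BigqueryScopes.all());
        Bigquery bigquery = new Bigquery.Builder(
                GoogleNetHttpTransport.newTrustedTransport(),
                JacksonFactory.getDefaultInstance(),
                credential)
            .setApplicationName("ddl-example")
            .build();
        QueryRequest request = new QueryRequest()
            .setUseLegacySql(false) // DDL statements require standard SQL
            .setQuery("CREATE TABLE IF NOT EXISTS mydataset.mytable (id INT64, name STRING)");
        // jobs.query carries the DDL statement in the request body's query property.
        QueryResponse response = bigquery.jobs().query("my-project-id", request).execute();
        System.out.println("Job complete: " + response.getJobComplete());
    }
}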

SAS OLE DB connection in Power BI

I need to connect Power BI to SAS using an OLE DB connection (I can't use ODBC or the native connector). Here is the string from the connection string builder:
provider=sas.IOMProvider.9.45;data source="iom-name://SASApp - Logical Workspace Server";mode="ReadWrite|Share Deny None";sas cell cache size=10000;sas port=0;sas protocol=0;sas server type=1;sas metadata user id=dxru984;sas metadata password=XXXXX;sas metadata location=iom-bridge://lhwappb1.xxx-xx.xxx:8561
I also tried with this one:
Provider=sas.IOMProvider.9.45;Data Source=iom-name://SASApp - Logical Workspace Server;SAS Cell Cache Size=10000;SAS Port=0;SAS Protocol=0;SAS Server Type=1;SAS Metadata User ID=dxru984;SAS Metadata Password=xxxxxxx;SAS Metadata Location=iom-bridge://lhwappb1.xxx-xx.xxx:8561
The first string works perfectly with Excel, but in Power BI it fails with this error message:
OLE DB : Format of the initialization string does not conform to the OLE DB specification
Any idea?
I managed to connect to SAS Federation Server data using the following connection string:
provider=sas.IOMProvider.9.45;data source=blablabla2.abc.pt;sas port=1234;sas protocol=2;sas metadata user id=******;sas metadata password=**********;sas metadata location=blablabla1.abc.pt:5678
Hope this helps,
Rita Dias

How to read data from a PostgreSQL database using DAS?

We are working on an ETL process. How can we read data from a PostgreSQL database using streams in Data Analytics Server, perform some operations on those streams, and insert the manipulated data into another PostgreSQL database at a scheduled time? Please share the procedure to follow.
Actually, you don't need to publish data from your PostgreSQL server. Using WSO2 Data Analytics Server (DAS), you can pull data from your database, do the analysis, and finally push the results back to the PostgreSQL server. DAS includes a special connector called "CarbonJDBC", and using that connector you can easily do this.
The current version of the "CarbonJDBC" connector supports the following database management systems:
MySQL
H2
MS SQL
DB2
PostgreSQL
Oracle
You can use the following queries to pull data from your PostgreSQL database and populate a Spark table. Once the Spark table is populated with data, you can start your data analysis tasks.
create temporary table <temp_table> using CarbonJDBC options (dataSource "<datasource name>", tableName "<table name>"); -- map the source table into a Spark table
select * from <temp_table>; -- read and transform through the mapping
insert into / overwrite table <temp_table> <some select statement>; -- write results back to the underlying table
For more information regarding the "CarbonJDBC" connector, please refer to the following blog post [1].
[1]. https://pythagoreanscript.wordpress.com/2015/08/11/using-the-carbon-spark-jdbc-connector-for-wso2-das-part-1/

Anyone using a web service as a data source in Excel 2007?

Can I use a web service as a data source for creating Excel pivot tables?
Currently, the source data for the pivot table is exported from our SQL database to a CSV file. Then the CSV file is loaded into a worksheet, and from there a pivot table is created in the same workbook.
Customers log in to a website, click some links, and an Excel file (with data and pivot table) is generated. This is a public app, so the preference is not to connect directly to the DB.
We control the database and generate the output, and we are looking to streamline this process. The SQL database and pivot tables cannot and will not change.
See http://www.vertex42.com/News/excel-web-query.html
What format does the public-facing website use in making the data available: a data file, or a table on a web page? This will determine how much of a scraping operation you'll need to do.
You'll still need to write the web service and have it run on a server. A possible alternative is to use Yahoo Pipes to do the conversions for you.
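To illustrate the web-service side, here is a minimal sketch of an HTTP endpoint that serves the pivot source data as CSV, using the JDK's built-in com.sun.net.httpserver; the port, path, and rows are hypothetical, and in practice the CSV would be generated from the SQL database:

import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

public class CsvEndpoint {
    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/pivot-data", exchange -> {
            // Hypothetical rows; a real service would query the SQL database here.
            byte[] body = "region,sales\nEast,100\nWest,250\n".getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().set("Content-Type", "text/csv");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
    }
}

An Excel web query (the approach described in the link above) can then point at http://host:8080/pivot-data, so the workbook refreshes its pivot source without a direct DB connection.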