How to add a column with query folding using the Snowflake connector - Power BI

I am trying to add a new column to a Power Query result that is the result of subtracting one column from another. According to the Power BI documentation, basic arithmetic is supported with query folding, but for some reason it is showing a failure to query fold. I also tried simply adding a column populated with the number 1 and it still did not work. Is there some trick to getting query folding of a new column to work on Snowflake?

If the computation is based only on data from the source, it can be computed during table import as a SQL statement:
SELECT col1, col2, col1 + col2 AS computed_total
FROM my_table_name
EDIT:
The problem with this solution is that a native SQL statement for Snowflake is only supported in Power BI Desktop, and I want to have this stored in a dataflow (so the Power BI web client) for reusability and other reasons.
Option 1:
Create a view instead of a table at the source:
CREATE OR REPLACE VIEW my_view
AS
SELECT col1, col2, col1 + col2 AS computed_total
FROM my_table_name;
Option 2:
Add a computed column to the table:
ALTER TABLE my_table_name
ADD COLUMN computed_total NUMBER(38,4) AS (col1 + col2);
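With either option, the arithmetic now happens in Snowflake itself, so the dataflow query reduces to plain navigation and there is nothing left to fold. A minimal M sketch for Option 1 (the account host, warehouse, database, and schema names below are placeholders for your own):
let
    // Connect to Snowflake; account host and warehouse are placeholders
    Source = Snowflake.Databases("myaccount.snowflakecomputing.com", "MY_WAREHOUSE"),
    // Navigate to the database, schema, and the view created in Option 1
    Database = Source{[Name = "MY_DB", Kind = "Database"]}[Data],
    Schema = Database{[Name = "PUBLIC", Kind = "Schema"]}[Data],
    View = Schema{[Name = "MY_VIEW", Kind = "View"]}[Data]
in
    View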

Related

DAX way to get all column names of a table as a list

Is there a way in DAX to get all column names of a table as a list of values? Say we have a table with 3 columns:
col1 | col2 | col3
I want a DAX table (or DAX variable) that retrieves those column names into a single column:
SingleColumn
----
col1
col2
col3
I know that there is a solution in Power Query for that. I am interested in a DAX solution.
If you have deployed a PBI data model to an SSAS server, you can write the following DAX query to retrieve all column names for all tables within that SSAS database:
EVALUATE COLUMNSTATISTICS()
This can only be used as a DAX query.
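If you need the question's single-column shape for one specific table, the output of COLUMNSTATISTICS() can be filtered and projected. A sketch, assuming the function's output uses the [Table Name] and [Column Name] labels, with "MyTable" as a placeholder for your table:
EVALUATE
SELECTCOLUMNS (
    FILTER ( COLUMNSTATISTICS (), [Table Name] = "MyTable" ),
    "SingleColumn", [Column Name]
)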

Informatica - SQ transformation

What will be the expected result of the following?
I have table A with Column1, and I'm trying to map Column1 to an SQ transformation, which has 3 columns: col1, col2, and col3.
I link Column1 to col1, col2, and col3 in the SQ. Now when I try to generate the SQL query for the SQ, what will be the result?
Since the OP is waiting for an answer and doesn't have Informatica to test it out, let me answer it.
If you connect one column to three columns in the SQ and then connect all three of those columns to the next transformation, the generated SQL will contain one source column repeated three times.
Here are some screenshots from a dummy mapping I created.
Mapping screenshot:
And here is the generated SQL:
SELECT
ITEM.ITEM_NUM, ITEM.ITEM_NUM, ITEM.ITEM_NUM
FROM
ITEM

Referencing single value of another column in Power Query Editor

I would like to get a single value from "table2.MappedValue" for every record in table1 in the Power Query Editor.
I have two tables with a many-to-one relationship; table2 is just a mapping table:
table1: ID | Values
table2: ID | MappedValue
When I try Table.Column(#"table2", "MappedValue"), I get a list rather than a single value.
I can do that from Table tools -> New Column, but I was wondering if it is possible in the Power Query Editor.
You can do this by merging queries. In the Query Editor, go to the Home tab, select table1, click Merge, and merge with table2. The next step is to expand the new column by clicking the double arrow in the column header and selecting the column you want.
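The same merge can also be written directly in M if you prefer the Advanced Editor. A sketch, assuming the two queries are literally named table1 and table2:
let
    // Left outer join table1 to table2 on ID; matches land in a nested table column
    Merged = Table.NestedJoin(table1, {"ID"}, table2, {"ID"}, "table2", JoinKind.LeftOuter),
    // Expand only MappedValue, yielding one scalar value per table1 row
    Expanded = Table.ExpandTableColumn(Merged, "table2", {"MappedValue"})
in
    Expanded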

SQLAlchemy - update result set of a raw query

I am new to SQLAlchemy. I have a raw SQL query which I need to execute by passing bind parameters. For the rows resulting from the query, I need to update a particular column value. How do I do this efficiently?
Below are the columns in my metrics table:
TABLE
id, total, pass, fail, category, ref_id
from sqlalchemy import text

query = "Select * from table where id in(select max(id) from table ...)"
sql = text(query)
result = db.engine.execute(sql, CATEGORY=category)
for row in result:
    # update here
So I have this complex query that I need to execute as an inline query. Let's say I get three rows from my query and I need to update ref_id for all 3 rows with a value. How can I achieve this, preferably as a bulk update?
I am using Python 2.7, SQLAlchemy==0.9.9, and SQLAlchemy-Utils==0.29.8.
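One way to get a single bulk UPDATE is to collect the ids from the result set and then issue one Core update against a reflected table. A sketch, assuming the metrics table from above and a placeholder new_ref_id value (all calls used here exist in SQLAlchemy 0.9):
from sqlalchemy import MetaData, Table, text

# Run the raw query with bind parameters, as in the question
result = db.engine.execute(text(query), CATEGORY=category)
ids = [row.id for row in result]

if ids:
    # Reflect the existing metrics table so a Core update can be built
    metadata = MetaData()
    metrics = Table("metrics", metadata, autoload=True, autoload_with=db.engine)

    # Emits a single UPDATE metrics SET ref_id = ... WHERE id IN (...)
    db.engine.execute(
        metrics.update()
        .where(metrics.c.id.in_(ids))
        .values(ref_id=new_ref_id)
    )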

Is it possible to remove a column from a partitioned table in Google BigQuery?

I'm trying to remove a column from a partitioned table in BigQuery using this command
bq query --destination_table [DATASET].[TABLE_NAME] --replace --use_legacy_sql=false 'SELECT * EXCEPT(column) FROM [DATASET].[TABLE_NAME]'
As a result the unwanted column is removed and the schema is changed, but the data is no longer partitioned.
Any suggestions on how to keep the data partitioned after the column is removed? The docs are clear only for non-partitioned tables.
There are two workarounds you can use:
Use a column-partitioned table, which means it is partitioned on the value of a regular column. You can create a new column-partitioned table and copy the data while dropping the column:
bq mk --time_partitioning_field=pt --schema=... [DATASET].[TABLE_NAME2]
bq query --destination_table=[DATASET].[TABLE_NAME2] "SELECT _PARTITIONTIME as pt, * EXCEPT(column) from [DATASET].[TABLE_NAME]"
You can also keep using day-partitioned tables, but copy the data using DML. You can set or copy the _PARTITIONTIME column inside a DML INSERT statement, which is not possible with a regular SELECT. Here is an example:
INSERT INTO dataset1.table1 (_partitiontime, a, b)
SELECT
  TIMESTAMP(DATE "2008-12-25") AS _partitiontime,
  "a" AS a,
  "b" AS b
This requires DML over partitioned tables, which is currently in alpha: https://issuetracker.google.com/issues/36383555
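Applied to the question, the same pattern copies everything except the unwanted column out of the old day-partitioned table while preserving each row's partition. A sketch, where dataset1.old_table is a placeholder for the original table and its remaining columns are assumed to match the target's column list:
INSERT INTO dataset1.table1 (_partitiontime, a, b)
SELECT
  _PARTITIONTIME AS _partitiontime,
  * EXCEPT(column)
FROM dataset1.old_table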
BigQuery now supports DROP COLUMN in partitioned tables:
ALTER TABLE mydataset.mytable
DROP COLUMN column
It's in beta at the time of writing, but it worked for me.