Informatica - SQ transformation

What will be the expected result of the following?
I have table A with a single column, column1.
I'm trying to map column1 to a Source Qualifier (SQ) that has 3 columns - col1, col2 and col3.
I link column1 to col1, col2 and col3 in the SQ. Now when I try to generate the SQL query for the SQ, what will be the result?

Since the OP is waiting for an answer and doesn't have Informatica available to test it out, let me answer it.
If you connect one column to three columns in the SQ and then connect all three of those columns to the next transformation, the generated SQL will contain the same source column repeated three times.
Here are some screenshots from a dummy mapping I created.
Mapping screenshot:
And here is the generated SQL:
SELECT
ITEM.ITEM_NUM, ITEM.ITEM_NUM, ITEM.ITEM_NUM
FROM
ITEM
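If you actually wanted three distinctly named output columns, you would have to override the generated query yourself. A minimal sketch of such a SQL override on the same dummy ITEM source (the aliases are mine, not something Informatica generates):
-- Hypothetical SQL override: alias the repeated column so each SQ port
-- carries a distinct name; Informatica itself generates the un-aliased form above.
SELECT
ITEM.ITEM_NUM AS COL1, ITEM.ITEM_NUM AS COL2, ITEM.ITEM_NUM AS COL3
FROM
ITEM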


How to add column with query folding using snowflake connector

I am trying to add a new column to a Power Query result that is the result of subtracting one column from another. According to the Power BI documentation, basic arithmetic is supported with query folding, but for some reason it is showing a failure to query fold. I also tried simply adding a column populated with the number 1 and it still was not working. Is there some trick to getting query folding of a new column to work on Snowflake?
If the computation is based only on data from the source, then it can be computed during table import as a SQL statement:
SELECT col1, col2, col1 + col2 AS computed_total
FROM my_table_name
EDIT:
The problem with this solution is that a native SQL statement for Snowflake is only supported in PBI Desktop, and I want to have this stored in a dataflow (so the PBI web client) for reusability and other reasons.
Option 1:
Create a view instead of a table at the source:
CREATE OR REPLACE VIEW my_view
AS
SELECT col1, col2, col1 + col2 AS computed_total
FROM my_table_name;
Option 2:
Add computed column to the table:
ALTER TABLE my_table_name
ADD COLUMN computed_total NUMBER(38,4) AS (col1 + col2);

PowerBI concatenate 3 tables

I'm trying to concatenate 3 different tables in PowerBI with Power Query, but I couldn't find the solution for it. The closest I could find is appending the queries; however, that does not concatenate the tables as shown below.
Example:
I'm trying to concatenate table Code1 to Code2 to Code3 (the bottom 3 tables) to look like the top one.
Thank you very much for your help.
Alternate method (a sketch of the equivalent M code follows these steps):
In Table3, add a column (Add Column > Custom Column) with the formula
= Table1
then expand it.
Add another custom column with
= Table2
then expand it.
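A minimal M sketch of those steps, assuming the three queries are named Table1, Table2 and Table3 (note that each expand pairs every row with every row of the added table, i.e. a cross join):
let
    Source = Table3,
    // each row of Table3 gets all of Table1 as a nested table
    AddT1 = Table.AddColumn(Source, "T1", each Table1),
    // expanding the nested table cross-joins Table3 with Table1
    ExpandT1 = Table.ExpandTableColumn(AddT1, "T1", Table.ColumnNames(Table1)),
    // repeat for Table2
    AddT2 = Table.AddColumn(ExpandT1, "T2", each Table2),
    ExpandT2 = Table.ExpandTableColumn(AddT2, "T2", Table.ColumnNames(Table2))
in
    ExpandT2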
Add a custom column to each of your queries, all with the same value. For my test I just used the value 1.
Then merge Query1 to Query2 on your custom column. Next, merge that query to Query3, again on the custom column. Once you rearrange the columns, you will get an output that matches the desired result.
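A minimal M sketch of that merge approach, assuming queries named Query1, Query2 and Query3, each already carrying a constant column "Key" with the value 1:
let
    // join on the constant key, so every row pairs with every row
    J1 = Table.NestedJoin(Query1, {"Key"}, Query2, {"Key"}, "Q2", JoinKind.Inner),
    // expand everything from Query2 except its duplicate "Key" column
    E1 = Table.ExpandTableColumn(J1, "Q2", List.RemoveItems(Table.ColumnNames(Query2), {"Key"})),
    J2 = Table.NestedJoin(E1, {"Key"}, Query3, {"Key"}, "Q3", JoinKind.Inner),
    E2 = Table.ExpandTableColumn(J2, "Q3", List.RemoveItems(Table.ColumnNames(Query3), {"Key"})),
    // the helper key is no longer needed
    Result = Table.RemoveColumns(E2, {"Key"})
in
    Result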

DAX way to get all column names of a table as a list

Is there a way in DAX to get all column names of a table as a list of values? Say we have a table with 3 columns:
col1 | col2 | col3
I want a DAX table (or DAX variable) that would retrieve those column names into a single column:
SingleColumn
----
col1
col2
col3
I know that there is a solution in Power Query for that. I am interested in a DAX solution.
If you have deployed a PBI data model to an SSAS server, you can use the following DAX query to retrieve all column names for all tables within that SSAS database:
EVALUATE COLUMNSTATISTICS()
Note that COLUMNSTATISTICS() can only be used in a DAX query.
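To get the single-column shape the question asks for, one could filter and project that result. A hedged sketch, assuming COLUMNSTATISTICS() exposes [Table Name] and [Column Name] fields and that the table of interest is called "MyTable" (a placeholder name):
EVALUATE
SELECTCOLUMNS (
    // keep only the rows describing the table of interest
    FILTER ( COLUMNSTATISTICS (), [Table Name] = "MyTable" ),
    "SingleColumn", [Column Name]
)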

Query exhausted resources on this scale factor

I am trying to left join a very big table (52 million rows) to a massive table with 11,553,668,111 observations but just two columns.
Simple left join commands error out with "Query exhausted resources at this scale factor."
-- create smaller table to save $$
CREATE TABLE targetsmart_idl_data_mi_pa_maid AS
SELECT t.idl, t.grouping_indicator, t.vb_voterbase_dob, t.vb_voterbase_gender,
    t.ts_tsmart_urbanicity, t.ts_tsmart_high_school_only_score,
    t.ts_tsmart_college_graduate_score, t.ts_tsmart_partisan_score,
    t.ts_tsmart_presidential_general_turnout_score, t.vb_voterbase_marital_status,
    t.vb_tsmart_census_id, t.vb_voterbase_deceased_flag, m.maid
FROM targetsmart_idl_data_pa_mi_pa t
LEFT JOIN idl_maid_base m
    ON t.idl = m.idl
I was able to overcome the issue by making the large table the driving table.
For example:
SELECT col1, col2 FROM table_a a JOIN table_b b ON a.col1 = b.col1
Here table_a is small, with fewer than 1000 records, whereas table_b has millions of records. The above query errors out.
Re-write the query as:
SELECT col1, col2 FROM table_b b JOIN table_a a ON a.col1 = b.col1
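Applied to the original query, that would mean putting the 11.5-billion-row idl_maid_base first. A hedged sketch, assuming Athena's Presto engine builds its join hash table from the right-hand table (the RIGHT JOIN preserves the original LEFT JOIN semantics):
CREATE TABLE targetsmart_idl_data_mi_pa_maid AS
SELECT t.idl, t.grouping_indicator, t.vb_voterbase_dob, t.vb_voterbase_gender,
    t.ts_tsmart_urbanicity, t.ts_tsmart_high_school_only_score,
    t.ts_tsmart_college_graduate_score, t.ts_tsmart_partisan_score,
    t.ts_tsmart_presidential_general_turnout_score, t.vb_voterbase_marital_status,
    t.vb_tsmart_census_id, t.vb_voterbase_deceased_flag, m.maid
FROM idl_maid_base m                        -- massive table drives the scan
RIGHT JOIN targetsmart_idl_data_pa_mi_pa t  -- keep all rows of the 52M-row table
    ON t.idl = m.idl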

Cassandra only returning a subset of my rows when using varchar

I have encountered a problem when using Apache Cassandra: I have 500k rows of entries in a 4-column table. 3 of the columns make up the compound key and the last one is a helper column for indexing, so that I can search between the other ones using greater-than or less-than operators. The 3 components of the compound key are integers, and the helper column is a varchar holding the value 'help' for all 500k entries. Now, when I use:
select count(*) from table where help='help' limit 1000000 allow filtering;
I should have gotten 500k as the result, but I get 36738.
Any ideas as to why this is happening?
If the table has four columns - id, column1, column2, help - my query needs to be something similar to:
select * from table where column1 > 15 and column1 < 1000 and column2 > 200 and column2 < 10000 and help='help' limit 1000000 allow filtering;
Also, when I created the table, I used PRIMARY KEY(id, column1, column2).
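For reference, a hypothetical CQL reconstruction of the schema as described (the column types and names ks/tbl are assumptions; only the key structure is stated above):
-- Hypothetical reconstruction; id is the partition key and
-- column1, column2 are clustering keys, per the stated PRIMARY KEY.
CREATE TABLE ks.tbl (
    id int,
    column1 int,
    column2 int,
    help varchar,
    PRIMARY KEY (id, column1, column2)
);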