CALCULATE - how does AND logic work when multiple FILTERS are used? - powerbi

When there are multiple filters, they're evaluated by using the AND logical operator. That means all conditions must be TRUE at the same time.
I understand this when the filters are like:
AMOUNT>100, CATEGORY='Sales'
However, when one or more of the filters is given by a FILTER expression, I am unable to visualise how the AND logic works (and what "all conditions must be TRUE" means, because such a condition [FILTER] is itself a table). Could you please give an example?

All of the filters end up operating on the expanded table, and ultimately each condition is a set of rules for specific columns of that expanded table; a row contributes to the calculation only if it satisfies every one of those rules at once.
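For illustration, here is a minimal sketch; the Sales table and its Amount and Category columns are hypothetical names:

Big Sales =
CALCULATE (
    SUM ( Sales[Amount] ),
    Sales[Category] = "Sales",
    FILTER ( Sales, Sales[Amount] > 100 )
)

The column predicate Sales[Category] = "Sales" is itself shorthand for a FILTER over ALL ( Sales[Category] ), so every filter argument is really a table of allowed rows or values. CALCULATE applies them together: a row of the (expanded) Sales table contributes to the sum only if it appears in both filter tables, i.e. its Category is "Sales" AND its Amount is greater than 100. That intersection of the filter tables is what "all conditions must be TRUE at the same time" means.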

Related

To filter specific records

I have a requirement to filter (from a flat file) only those records that have colA values of 1, 2, 3, 4, 5, or 6 and also ColB equal to 'N'. The records that satisfy this condition should be passed from the source file to the target.
Earlier the requirement was to check for only one value of colA, so I applied
IIF(COLA='1' AND COLB='N',TRUE)
How do I filter with multiple values for the same column? I am new to Informatica PowerCenter.
There are two ways you can achieve this in an expression: using the OR logical operator or using the IN function.
With OR
IIF((COLA='1' OR COLA='2' OR COLA='3' OR COLA='4' OR COLA='5' OR COLA='6') AND COLB='N',TRUE)
Parentheses are essential to group the conditions on COLA.
With IN
IIF(IN(COLA,'1','2','3','4','5','6') AND COLB='N',TRUE)
I find this one easier to read.

Azure Logic Apps OData filter query with if statement on two columns

Good morning,
I have a requirement where I have to apply a filter on "Get entities" from an Azure table based on a condition; the filters come from an HTTP GET request.
There are two filters - a and b.
If both filters passed to the flow are empty, no filter is applied.
If either one of the filters is not empty, the filter must be applied on that column.
If both filters are not empty, the filters must be applied on both columns.
Is it possible to apply an if statement in an OData filter query?
I can't seem to find a good answer.
For this requirement, we can just use the "Condition" action in the logic app to implement it. It's not an elegant solution, but it works.
First, I use two variables to simulate your two filters coming from the HTTP request.
Then I use another variable, filterResult, to store the resulting filter(s).
Now add a "Condition" to check whether filter1 is not empty.
If true, add another condition, "Condition 2", to check whether filter2 is not empty and set the value of filterResult accordingly.
If false, also add another condition, "Condition 3", to check whether filter2 is not empty and set the value of filterResult. Note: use the expression string(' ') in "Set variable 4", or it will not allow us to save the logic app.
After that, we can use filterResult in "Get entities"; the expression used there is trim(variables('filterResult')).
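For reference, assuming the Azure table columns are named ColA and ColB (adjust to your own schema), the filterResult built in the branches above would end up holding strings such as:
both filters supplied: ColA eq 'a' and ColB eq 'b'
only filter1 supplied: ColA eq 'a'
only filter2 supplied: ColB eq 'b'
both empty: an empty string, so no filter is applied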

How to get row count for large dataset in Informatica?

I am trying to get the row count for a dataset with 280 fields without affecting performance. I am looking for the best possible ways to do this.
The better option to avoid performance issues is to use a Sorter transformation to sort the data and pass the pipeline to an Aggregator transformation. In the Aggregator transformation, check the Sorted Input option.
If your source is a database, index the columns used in the conditions and also partition the table if required.
For your solution, I have two options in mind:
Using an Aggregator (remember to sort the data beforehand to improve performance in the next transformation): SQ > Aggregator > Target. Inside the Aggregator, add new ports with the SUM() and/or COUNT() functions. Remember to select the columns to group by.
Check out this example:
https://www.guru99.com/aggregator-transformation-informatica.html
Using a Source Qualifier query override: use a traditional SELECT COUNT/SUM with GROUP BY against the database, then SQ > Target.
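For illustration, if all you need is the total row count, the override could be as simple as this (SRC_TABLE is a placeholder for your actual source table):

SELECT COUNT(*) AS ROW_CNT
FROM SRC_TABLE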
By the way, Informatica handles performance very well; rather than the number of columns, you need to review how many records you are processing. A best practice is always to push the heavy work to the data source/database rather than the Informatica app.
Regards,
Juan
If all you need is just to count the rows, use the Aggregator. That's what it's for. However, this will create cache; to limit its size, use a single port.
To avoid caching, you can use a variable port in an Expression transformation and just increment it. However, this will give you an extra column with all rows numbered, not just a single value, so you will still need to aggregate it. Here it would be possible to use an Aggregator with no aggregate function to return just the last value.
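As a sketch, the Expression transformation could use a variable port and an output port like this (port names are just examples):

v_ROW_COUNT (variable port) = v_ROW_COUNT + 1
o_ROW_COUNT (output port) = v_ROW_COUNT

Feeding o_ROW_COUNT into an Aggregator with no group-by ports and no aggregate function then passes through only the last row, which carries the total count.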

Select Statement Vs Find in Ax

While writing code, we can use either a select statement, a select with a field list, or the find() method on a table to fetch records.
I wonder which of these gives better performance.
It really depends on what you actually need.
find() methods return the whole table buffer; that means all of the columns are projected into the buffer returned, so you have the complete record selected. But sometimes you only need a single column, or just a few. In such cases it is a waste to select the whole record, since you won't use the extra columns anyway.
So if you're dealing with a table that has lots of columns and you only need a few of them, consider writing a specific select statement for that, listing the columns you need.
Also, keep in mind that select statements that only project a few columns should not be made public. That means that you should NOT extract such statements into a method, because imagine the surprise of someone consuming that method and trying to figure out why column X was empty...
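For illustration, a field-list select versus find() might look like this (CustTable is used only as an example table):

CustTable custTable;

// Field-list select: only AccountNum and CustGroup are fetched into the buffer
select AccountNum, CustGroup from custTable
    where custTable.AccountNum == '1101';

// find() returns the full buffer, with every column populated
custTable = CustTable::find('1101');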
You can look at the find() method on the table and you will find the same kind of select statement there.
It can be the same select statement as your own, and the performance will be the same in that case.
Or it can be a different select statement than your own, and then the performance will depend on the indexes on the table, the select statement itself, the collected statistics, and so on.
But there is no magic here: all of them are just select statements, no matter which method you use.

how to implement row level summary function in informatica

Can someone please explain to me how to implement the following logic in Informatica, but not with the Source Qualifier: with other transformations inside the mapping.
SUM(WIN_30_DUR) OVER(PARTITION BY AGENT_MASTER_ID ORDER BY ROW_DT ROWS BETWEEN 30 PRECEDING AND 1 PRECEDING)
Basically this is an SQL (Oracle) level requirement, but I want it at the Informatica level.
Use the Aggregator to calculate sums grouped by AGENT_MASTER_ID and self-join it on AGENT_MASTER_ID. Make sure the data is sorted and use the 'Sorted Input' property for the Aggregator; it will also be mandatory for the self-join.