Aggregator and sorter - informatica

Is it possible to use a Sorter and an Aggregator with two group-by ports? I have done it with one group-by port and succeeded. However, when I add a second group-by port, the session fails, stating that it was expecting an ascending port. The problem is that my Sorter already has both ports set to ascending.

Yes, you can definitely sort and aggregate on multiple ports. Make sure the order of the ports is the same in both transformations.
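The same constraint exists with Python's itertools.groupby, which, like an Aggregator with Sorted Input, only produces correct groups if the data arrives sorted on the grouping keys in the same order. A minimal analogy with made-up data (not Informatica code):

```python
from itertools import groupby

# Hypothetical sample rows: (dept, region, amount)
rows = [
    ("HR", "EAST", 10), ("HR", "WEST", 5),
    ("IT", "EAST", 7), ("HR", "EAST", 3),
]

# Sort by BOTH keys, in the same order the "aggregator" groups by.
key = lambda r: (r[0], r[1])  # (dept, region) -- order matters
rows.sort(key=key)

# groupby only merges adjacent rows, so a mismatched sort order
# would silently split groups -- the Python analogue of the
# "expecting an ascending port" failure.
totals = {k: sum(r[2] for r in g) for k, g in groupby(rows, key=key)}
print(totals)  # {('HR', 'EAST'): 13, ('HR', 'WEST'): 5, ('IT', 'EAST'): 7}
```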

Related

Unconnected lookup input value not working

In Informatica, I am trying to get the date after a certain number of working days (say 10, 20, 30) based on another condition (say priority 1, 2, 3). I already have a DIM_DATE table where holidays and working days are configured. There is no relation between the priority table and the DIM_DATE table. Here I am using an unconnected lookup with a query override. Below is the query I used:
select day_date as DAY_DATE
  --,rank1
  --,PRIORITY_name
from (
  select day_date as DAY_DATE,
         DENSE_RANK() OVER (ORDER BY day_date) as RANK1,
         PRIORITY_name as PRIORITY_NAME
  from (
    select date_id, day_date
    from dim_date
    where day_date between to_date('10.15.2018','MM.DD.YYYY')
                       and to_date('10.15.2018','MM.DD.YYYY') + interval '250' DAY(3)
      and working_day = 1
  ), DIM_PRIORITY
  where DIM_PRIORITY.PRIORITY_name = '3'
)
where rank1 = 10
order by RANK1
In this example I have hardcoded day_date, priority_name and rank1, but I need to pass all of them as input coming from the mapping.
The hardcoded query works, but when I take the date as an input like ?created? it does not work. Here, created is the date that comes from the mapping flow.
Could you please suggest whether what I am trying is feasible?
With ?created? I get a "missing right parenthesis" error, although the hardcoded query runs fine in SQL.
You match your incoming port against one of the return fields of one of the records in the cache via the lookup condition (not by feeding ports into the override itself).
If that is not possible for you for some reason, you could define three mapping variables and set them equal to each of the input ports you care about (using SETVARIABLE) before feeding the record into the lookup, then use those variables in your lookup override.
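For intuition, the "Nth working day after a date" lookup can be sketched outside Informatica with Python's sqlite3 and bound parameters, which play the role the ?created? placeholder plays at runtime. The calendar data is hypothetical, and OFFSET stands in for the DENSE_RANK filter for brevity:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE dim_date (day_date TEXT, working_day INTEGER)")
# Hypothetical calendar: Jan 1-5, with Jan 3 a holiday (working_day = 0).
con.executemany(
    "INSERT INTO dim_date VALUES (?, ?)",
    [("2018-01-01", 1), ("2018-01-02", 1), ("2018-01-03", 0),
     ("2018-01-04", 1), ("2018-01-05", 1)],
)

# Same idea as the override -- pick the Nth working day after :created --
# but with bound parameters instead of hardcoded literals.
sql = """
SELECT day_date FROM dim_date
WHERE day_date > :created AND working_day = 1
ORDER BY day_date
LIMIT 1 OFFSET :skip
"""
# skip = nth - 1, here the 3rd working day after Jan 1
row = con.execute(sql, {"created": "2018-01-01", "skip": 2}).fetchone()
print(row[0])  # 2018-01-05 (Jan 3 is skipped as a holiday)
```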

Is there a way to use value from CUST_DDA port as input port for lookup?

I'm trying to use a Lookup transformation to extract ACCT_ID from the ACCT table based on the port CUST_DDA, which is an output port from an Expression transformation.
I'm using an SQL override as below. The initial lookup condition:
SUBSTR_ACCT_ID = IN_CUST_DDA
Override:
SELECT
ACCT.ACCT_ID as ACCT_ID,
ACCT.ALT_ACCT_ID as ALT_ACCT_ID,
substr(acct.acct_id,-1*(length(IN_CUST_DDA))) as SUBSTR_ACCT_ID
FROM ACCT
WHERE ACCT.ALT_ACCT_ID LIKE '%'||TO_CHAR(IN_CUST_DDA)
AND ACCT.ACCT_ID LIKE '%'||TO_CHAR(IN_CUST_DDA)
The SQL override above is failing with the error ORA-00904: "IN_CUST_DDA": invalid identifier.
Is there a way to use the value from the CUST_DDA port as an input to the lookup? CUST_DDA is not a field that belongs to the ACCT table.
Thanks.
From the override I can see that you are trying to convert IN_CUST_DDA to CHAR while also using IN_CUST_DDA inside the LENGTH function.
The LENGTH function might be causing the issue, because it can only be used with a string.
To use CUST_DDA (from the source) in your lookup override, you need to join the lookup table with the source on a common field in the override.
You can't use the port the way you mentioned. When you run the workflow, the Informatica Integration Service runs the lookup override query in the database and loads the data into the cache file (that is why you are receiving the error "IN_CUST_DDA": invalid identifier). Once the cache file is ready, it applies the lookup conditions and then returns the output.
Let me know if anything is unclear.
Regards,
Raj
To achieve this you need to configure your lookup as non-cached, so the query will be executed for each input row. Note that this degrades performance a lot.
Next, you need to use a slightly different syntax, enclosing the input port in question marks. In your case it should be something like this (it might need a little adjustment):
SELECT
ACCT.ACCT_ID as ACCT_ID,
ACCT.ALT_ACCT_ID as ALT_ACCT_ID,
substr(acct.acct_id,-1*(length(?IN_CUST_DDA?))) as SUBSTR_ACCT_ID
FROM ACCT
WHERE ACCT.ALT_ACCT_ID LIKE '%'||TO_CHAR(?IN_CUST_DDA?)
AND ACCT.ACCT_ID LIKE '%'||TO_CHAR(?IN_CUST_DDA?)
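What the override's substr/LIKE combination actually does is a suffix match: it keeps the trailing len(CUST_DDA) characters of ACCT_ID and compares them to the incoming value. A plain-Python illustration with made-up account numbers:

```python
def suffix_match(acct_id: str, cust_dda: str) -> bool:
    """Equivalent of substr(acct_id, -length(cust_dda)) = cust_dda
    combined with acct_id LIKE '%' || cust_dda."""
    return acct_id[-len(cust_dda):] == cust_dda

# Hypothetical values: an account ID whose last digits are the DDA number.
print(suffix_match("0001234567", "34567"))  # True
print(suffix_match("0009999999", "34567"))  # False
```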

Aggregator transformation in Informatica

Is it compulsory to select group-by ports while performing a count operation in the Aggregator transformation in Informatica?
To perform a count, you have to mark at least one column as group-by in the Aggregator transformation so that it knows which column to group on.
Even if you don't specify a group-by port, the mapping will not fail, but you won't get the expected result.
When using an Aggregator transformation, you need to check group-by: the transformation performs the aggregation per group and passes each result row down the pipeline. If no group-by port is checked, only the last row is returned, as there is no instruction on how to group the data. To perform a count with respect to a specific column, it is mandatory to check group-by for the required columns.
If you would rather not group at all, you can use an Expression transformation with a variable port that maintains a running count for the required column, without grouping.
Thank you
It's not mandatory to select at least one port as group-by. However, if you don't choose any group-by port, Informatica will return only the last row.
Hope this helps
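The two cases described above can be sketched in plain Python with hypothetical rows: with a group-by key you get one output row per key, and without one you get a single row whose non-aggregated ports carry the last row's values.

```python
from collections import Counter

# Hypothetical input rows: (key, value)
rows = [("A", 1), ("B", 2), ("A", 3)]

# With a group-by port: one output row (key, count) per key.
with_group_by = dict(Counter(k for k, _ in rows))
print(with_group_by)  # {'A': 2, 'B': 1}

# Without any group-by port: the count covers all rows, but the
# non-aggregated port carries the value of the LAST row only.
without_group_by = (rows[-1][0], len(rows))
print(without_group_by)  # ('A', 3)
```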

Informatica update partial data using UPDATE strategy

Is it possible to update partial data (the primary key plus just one or more columns) in a matching target row while leaving the other columns intact, or do I have to do a lookup and port all the columns?
Yes, simply connect only the required ports. Note that you need to define the ID column as the primary key in your target definition.
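Connecting only some ports makes the Integration Service generate an UPDATE that sets only those columns. The effect can be demonstrated with a small sqlite3 sketch (table and values are hypothetical):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tgt (id INTEGER PRIMARY KEY, name TEXT, amount REAL)")
con.execute("INSERT INTO tgt VALUES (1, 'Alice', 100.0)")

# With only ID + AMOUNT connected, the generated statement is roughly:
con.execute("UPDATE tgt SET amount = ? WHERE id = ?", (250.0, 1))

# NAME is untouched; only AMOUNT changed.
print(con.execute("SELECT * FROM tgt").fetchone())  # (1, 'Alice', 250.0)
```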

Powercenter - concurrent target instances

We have a situation with different execution orders for instances of the same target being loaded from a single Source Qualifier.
When we promote a mapping from DEV to TEST, executions in TEST behave differently after promotion.
For instance, we have a Router with three groups for Insert, Update and Delete, followed by the appropriate Update Strategies to set the row type accordingly, followed by three target instances.
RTR ----> UPD_Insert -----> TGT_Insert
   \
    \___> UPD_Update -----> TGT_Update
     \
      \__> UPD_Delete ----> TGT_Delete
When we test this with data that does an insert followed by an update followed by a delete, all on the same primary key, we get a different execution order in TEST compared to the same data in DEV.
Anyone have any thoughts? I would post an image, but I don't have enough reputation yet.
Cheers,
Gil.
You cannot control the load order as long as you have a single source. If you could separate the loads to use separate sources, the Target Load Order setting in the mapping could be used, or you could even create separate mappings for them.
As it is now, you should use a single target instance and let the Update Strategy transformation determine the desired operation for each record passing through. It is then possible to use a Sorter to define the order in which the different operations are applied to the physical table.
You can use a Sorter transformation just before the Update Strategy; based on the update strategy condition you can sort the incoming rows, so data first goes through Insert, then Update, and last through Delete.
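The sorter idea boils down to keying the rows on an operation-precedence value, which can be sketched in Python with hypothetical change records:

```python
# Hypothetical change records for the same primary key: (operation, key)
rows = [("DELETE", 42), ("INSERT", 42), ("UPDATE", 42)]

# A Sorter keyed on an operation-precedence port forces I -> U -> D order.
precedence = {"INSERT": 0, "UPDATE": 1, "DELETE": 2}
rows.sort(key=lambda r: precedence[r[0]])
print([op for op, _ in rows])  # ['INSERT', 'UPDATE', 'DELETE']
```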
A simple workaround is to rename the target definitions in alphabetical order, like INSERT_A, UPDATE_B, DELETE_C, then start loading.
This will load in A, B, C order. Try it and let me know.