Informatica complex scenario

Scenario: on the first run, the first target table must be loaded; on the second run, the second target table; and similarly for the third and fourth runs. How can this scenario be solved?
Thanks in advance

I would create a sequence value that increments each time the job is run. Just keep a list of the target tables (in order) and do a lookup on the current sequence value to choose which target to load.
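A minimal sketch of the idea, using a hypothetical RUN_CONTROL table instead of a Sequence Generator transformation (all table, column, and mapping names here are assumptions):

    -- Hypothetical control table that persists the run number across sessions.
    CREATE TABLE RUN_CONTROL (
        MAPPING_NAME VARCHAR(100) PRIMARY KEY,
        RUN_COUNT    INT NOT NULL
    );

    -- Pre-session SQL: bump the counter once per run.
    UPDATE RUN_CONTROL
    SET    RUN_COUNT = RUN_COUNT + 1
    WHERE  MAPPING_NAME = 'm_load_rotating_targets';

    -- Lookup input: derive a target index 1..4 from the counter
    -- (T-SQL modulo syntax; use MOD() on Oracle).
    SELECT ((RUN_COUNT - 1) % 4) + 1 AS TARGET_INDEX
    FROM   RUN_CONTROL
    WHERE  MAPPING_NAME = 'm_load_rotating_targets';

In the mapping, a Router with four groups (TARGET_INDEX = 1, 2, 3, 4) can then direct every row to the matching target instance.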

Created one session, but it takes too much time with "no more lookup cache to build by additional concurrent pipeline in the current concurrent source set"

What will be the solution for TT_11185 "no more lookup cache to build by additional concurrent pipeline in the current concurrent source set"? It is making the session take too much time to run.
This normally happens when one or more lookup SQLs are taking too long to fetch the data and cache it. You can do the two things below -
Tune the SQL of the lookups. Check the session log carefully and identify which lookup or lookup SQL is taking time. Tune it by adding more filters or an inner join to the source, removing unwanted columns from the lookup, joining on indexed columns, ordering by only the keys, and putting a date filter on it if you think that's appropriate. This will help the overall performance of the session, and your session will take much less time (a sketch of a tuned override follows below).
Now, if it's a flat-file lookup, then try to reduce the number of rows in the file.
You can set the session property Additional Concurrent Pipelines for Lookup Cache Creation to Auto or some numeric value like 5. This will ensure your lookups get cached in parallel, so the whole session takes less time.
You can also increase the DTM Buffer Size, but that's not necessary if the issue is with point #1.
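For illustration, a hypothetical lookup SQL override before and after that kind of tuning (the table, columns, and date filter are all assumptions):

    -- Before: caches every row and column of a large table.
    SELECT * FROM CUSTOMER_DIM;

    -- After: only the columns the mapping uses, filtered down,
    -- joined on an indexed key, ordered by the lookup key only.
    SELECT C.CUSTOMER_ID,
           C.CUSTOMER_NAME
    FROM   CUSTOMER_DIM C
    WHERE  C.ACTIVE_FLAG = 'Y'
      AND  C.LOAD_DATE >= '2015-01-01'
    ORDER  BY C.CUSTOMER_ID;

The smaller and better-indexed the cached result set, the faster each concurrent pipeline can build its cache.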

Can Informatica's Stored Procedure transformation process stored procedures that have multiple result sets?

I have a stored procedure that returns two result sets. I know Informatica has a Stored Procedure transformation, but I cannot find any indication that it can handle a stored procedure that returns more than one result set.
Is this something Informatica is capable of?
It's not possible, I'm afraid. Informatica would not be able to 'guess' what to do with each result set.
In general, whatever it is that you need to do with the results, e.g. if you need to:
join them, or
use just one of them in a particular mapping, or
switch between them with every run,
I'd recommend wrapping this stored procedure in another one that performs the required logic and returns the single appropriate result set.
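A minimal sketch of such a wrapper, assuming SQL Server T-SQL; the procedure name and the join logic it performs are hypothetical:

    -- Hypothetical wrapper around logic that previously produced two result
    -- sets: it performs the required combination (here, a join) itself and
    -- returns the single result set that Informatica can consume.
    CREATE PROCEDURE dbo.usp_GetOrdersForInformatica
    AS
    BEGIN
        SELECT o.ORDER_ID, o.AMOUNT, c.CUSTOMER_NAME
        FROM   dbo.ORDERS    o
        JOIN   dbo.CUSTOMERS c ON c.CUSTOMER_ID = o.CUSTOMER_ID;
    END;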
The Informatica Stored Procedure transformation can produce only a return value, not a result set, as far as I am aware.
A possible solution is to store the result-set data in a table or flat file and use that as a source (either via an SQ override or a flat-file source) in the following mapping (a sketch is below).
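As a sketch of that staging approach (T-SQL flavored; every object name here is an assumption):

    -- Hypothetical staging table, tagged by result-set number.
    CREATE TABLE dbo.STG_ORDER_RESULTS (
        RESULT_SET_NO INT,
        ORDER_ID      INT,
        AMOUNT        DECIMAL(10,2)
    );

    -- Hypothetical staging step: persist the data behind both result sets.
    CREATE PROCEDURE dbo.usp_StageOrderResults
    AS
    BEGIN
        TRUNCATE TABLE dbo.STG_ORDER_RESULTS;
        INSERT INTO dbo.STG_ORDER_RESULTS (RESULT_SET_NO, ORDER_ID, AMOUNT)
        SELECT 1, ORDER_ID, AMOUNT FROM dbo.ORDERS WHERE STATUS = 'OPEN';
        INSERT INTO dbo.STG_ORDER_RESULTS (RESULT_SET_NO, ORDER_ID, AMOUNT)
        SELECT 2, ORDER_ID, AMOUNT FROM dbo.ORDERS WHERE STATUS = 'CLOSED';
    END;

    -- The following mapping's SQ override then picks whichever set it needs:
    -- SELECT ORDER_ID, AMOUNT FROM dbo.STG_ORDER_RESULTS WHERE RESULT_SET_NO = 1;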

Informatica: insert taking a very long time

I have one mapping that includes just one source table and one target table. The source table has 100 columns and around 33xxxx records; I need to use this tool to insert into the target table, and the logic is insert-only. The Informatica version is 9.6.1 and the database is SQL Server 2012.
After I run the workflow, it inserts at about 5x/s. That speed is too slow. I think it may be related to the number of columns.
Can anyone help me increase the speed?
Thanks a lot
I think I know the reason why it happened: there are two ntext fields in this table. That's why it takes a very long time.
You can try the below options:
1) Use the Bulk option for the 'Target load type' attribute in the session if the target table doesn't have any indexes or keys on it.
2) If there is any SQL override in the Source Qualifier, try to tune the query.
3) Search for 'BUSY' in the session log and note down the busy percentages of each thread. Based on the thread percentages you will be able to identify exactly which thread is taking more time (Reader, Transformation, or Writer).
4) Try to use Informatica partitions, through which you can achieve parallel processing.
Thanks and Regards,
Raj
Consider the following points to increase performance:
Increase the "commit interval" size in the session-level properties.
Use "bulk load" in the session-level properties.
You can also use partitioning at the session level; to do this you need the partitioning license.
If your source is a database and you are doing a SQL override in the Source Qualifier transformation, then you can also use hints to increase performance.
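For illustration, a hypothetical Source Qualifier override using a SQL Server table hint (the table, columns, and the choice of NOLOCK are all assumptions, and only appropriate if dirty reads are acceptable for this load):

    -- Hypothetical SQ override: read only the needed columns and avoid
    -- shared locks on the read side with a table hint.
    SELECT SRC.COL1,
           SRC.COL2
    FROM   dbo.SOURCE_TABLE SRC WITH (NOLOCK);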

Can a map side join have reducers?

I want to write a map-side join and want to include reducer code as well. I have a smaller data set which I will send via the distributed cache.
Can I write a map-side join with reducer code?
Yes, why not? A reducer is meant for aggregating the key-value pairs emitted from the map, so you can always have a reducer in your code whenever you want to aggregate your result (say, to count, find an average, or do any other numerical summarization) based on certain criteria that you've set in your code or that follow from the problem statement. The map is just for filtering the data and emitting some useful key-value pairs out of a LOT of data. A map-side join is only needed when one of the datasets is small enough to fit in the memory of a commodity machine. By the way, a reduce-side join would serve your purpose too!

Powercenter - concurrent target instances

We have a situation with a different execution order of instances of the same target being loaded from a single Source Qualifier.
The problem appears when we promote a mapping from DEV to TEST: after promoting, execution in TEST behaves differently.
For instance, we have a Router with three groups for Insert, Update and Delete, followed by the appropriate Update Strategies to set the row type accordingly, followed by three target instances.
RTR ----> UPD_Insert -----> TGT_Insert
   \
    \____> UPD_Update -----> TGT_Update
     \
      \____> UPD_Delete ----> TGT_Delete
When we test this using data that does an insert followed by an update followed by a delete, all based on the same primary key, we get a different execution order in TEST compared to the same data in our DEV environment.
Anyone have any thoughts? I would post an image, but I don't have enough rep yet.
Cheers,
Gil.
You cannot control the load order as long as you have a single source. If you could separate the loads to use separate sources, the Target Load Order setting in the mapping could be used, or you could even create separate mappings for them.
As it is now, you should use a single target instance and utilize the Update Strategy transformation to determine the wanted operation for each record passing through. It is then possible to use a sort to define the order in which the different operations are applied to the physical table.
You can use a Sorter transformation just before the Update Strategy, sorting the incoming rows by operation, so that each key first goes through the insert, then the update, and lastly the delete strategy (a sketch of the sort key is below).
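To illustrate the idea with a hypothetical sort key (here computed in the SQ override rather than in an Expression transformation; all names are assumptions):

    -- Hypothetical override fragment: derive an operation order so that,
    -- per primary key, inserts arrive first, then updates, then deletes.
    SELECT SRC.*,
           CASE SRC.CHANGE_TYPE
                WHEN 'I' THEN 1
                WHEN 'U' THEN 2
                WHEN 'D' THEN 3
           END AS OP_ORDER
    FROM   dbo.SOURCE_CHANGES SRC
    ORDER  BY SRC.PK_ID, OP_ORDER;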
A simple thing to try is renaming the target instances in alphabetical order, e.g. INSERT_A, UPDATE_B, DELETE_C, and then loading.
This should load in A, B, C order. Try it and let me know.