In my mapping I have two filter transformations and two target tables, but data is written to only one target and the other is empty - Informatica

I have one source and two target tables without a primary key; both the source and the targets are Oracle tables. I created a mapping that uses two filter transformations to load data into the two target tables. Both filter transformations have the same condition, Sal > 1500, which 7 records satisfy. When I ran the workflow with the target load type set to 'Bulk', the session run properties showed 7 records loaded into each target table, but when I checked in the Oracle database only the second table had data. When I changed the load type to 'Normal', both tables were loaded.
What makes the difference in the database?

For Oracle, a bulk load succeeds only when the target table has no index in the database.
So when you run in bulk mode, the load to one of the targets fails because that table has an index on it in the database.
Normal mode works with or without an index, which is why both tables load correctly in that mode.
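You can verify this from the database side; a minimal check, assuming the targets live in your own schema (TARGET_TABLE is a placeholder):

```sql
-- List any indexes on the target table; if this returns rows,
-- a bulk (direct path) load from Informatica can fail against it.
SELECT index_name, index_type, uniqueness
FROM   user_indexes
WHERE  table_name = 'TARGET_TABLE';
```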

Related

What does Is Staged mean in Informatica Connection object definition?

I am trying to replicate an Informatica PowerCenter mapping in Informatica Cloud. When looking at the target table properties, I found the attribute "Is Staged" in the target connection object definition.
The property Truncate Target Table is easy to infer: it truncates the table before it is loaded with data. What does the property "Is Staged" mean?
As the name says, Is Staged means Informatica will stage the data into a flat file in a staging area, then read from that file and load it into the target table. If it is unchecked, data is loaded through a direct target-writing pipeline.
This is done to make sure data is extracted from the source as soon as possible, and if the load fails you can restart and reload from the staged file.
Note that this option is available only for certain data sources, and you also need to set up a staging directory.

How to run a simple query using Informatica PowerCenter

I have never used Informatica PowerCenter before and just don't know where to begin. To summarize my goal: I need to run a simple count query against a Teradata database using Informatica PowerCenter. The query needs to be run on a specific day, but I don't need to store or manipulate the data returned. Informatica PowerCenter Designer looks a bit daunting to me, as I'm not sure what to look for.
Any help in understanding how to set up the following (if needed) is greatly appreciated:
Sources
Targets
Transformations
Mappings
Is a transformation the only way to query data using PowerCenter? I've looked at a lot of tutorials, but most seem to be aimed at users already familiar with the tool.
You can run a query against a database using Informatica only if you create a mapping, a session, and a workflow to run it. But you cannot see the result unless you store it somewhere, either in a flat file or a table.
Here are the steps to create it anyway.
1. Import your source table from Teradata in the Source Analyzer.
2. Create a flat file target or import a relational target in the Target Designer.
3. Create a mapping m_xyz and drag and drop your source into it. You will see the source and its Source Qualifier in the mapping.
4. Write your custom query in the Source Qualifier, say select count(*) as cnt from table (see the example after these steps).
5. Remove all ports from the Source Qualifier except one numeric port and name it cnt; the count from your SELECT will be assigned to this port.
6. Drag and drop this port to an Expression transformation.
7. Drag and drop your target into the mapping.
8. Propagate the column from the Expression transformation to the flat file/relational target.
9. Create a workflow and a session for this mapping. In the workflow you can schedule it to run on a specific date.
10. When you execute it, the count will be loaded into the column of that flat file or table.
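For reference, the Source Qualifier override might look like this (MY_TABLE is a placeholder for your actual Teradata table):

```sql
-- Custom SQL override for the Source Qualifier; the qualifier's single
-- output port should be named CNT to match the alias.
SELECT COUNT(*) AS CNT
FROM   MY_TABLE;
```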

Insert static data along with data loading wizard utility in Oracle APEX

I am new to Oracle APEX and trying to explore all of its options (APEX 5.1). My question is about the Data Load wizard in Oracle APEX. I created a table with three columns and set it up as a Data Load Definition.
This is the process I expect from the Data Load wizard:
On the first page of the Data Load Source, I created a radio page item; the value selected there should be assigned to the first column of the table.
I will upload a CSV file with two columns, which will be assigned to the second and third columns.
So, for every record in the CSV file, the static string selected in the page item needs to be inserted along with the file data.
I Googled this but didn't find a proper solution for the requirement. Any help would be appreciated.
My preferred approach for this sort of thing is to use a staging table as the target of the Data Load wizard; then add a process at the end that copies the rows from the staging table to the final table, setting the static column(s) at the same time; then delete the rows from the staging table.
Note: add a SESSION_ID column to the staging table, with a trigger that sets it to v('SESSION'), so that the process only picks up rows belonging to the current user's session.
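A minimal sketch of that setup, assuming a staging table STG_DATA(SESSION_ID, COL2, COL3), a final table FINAL_DATA(COL1, COL2, COL3), and a page item P1_STATIC_VALUE (all names hypothetical):

```sql
-- Trigger that stamps each staged row with the current APEX session.
CREATE OR REPLACE TRIGGER stg_data_bi
BEFORE INSERT ON stg_data
FOR EACH ROW
BEGIN
  :new.session_id := v('SESSION');
END;
/

-- After-submit page process: copy the staged rows to the final table,
-- filling the static column from the page item, then clear the staging rows.
BEGIN
  INSERT INTO final_data (col1, col2, col3)
  SELECT :P1_STATIC_VALUE, col2, col3
  FROM   stg_data
  WHERE  session_id = v('SESSION');

  DELETE FROM stg_data
  WHERE  session_id = v('SESSION');
END;
```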

Dynamic Mapping changes

When there is any change in the DDL of a table, we have to re-import the source and target definitions and change the mapping. Is there a way to dynamically fetch the DDL of the table and do the data copy using an Informatica mapping?
ETL uses an abstraction layer, separated from any physical database. Source and Target definitions describe what the job should expect to find in the database it connects to. Keep in mind that the same mapping can be applied to many different source and/or target systems; it is not bound to any of them, it just defines what data to fetch and what to do with it.
In Informatica this is reflected by the separation between Mappings, which define the data flow, and Sessions, which indicate where that logic should be applied.
Imagine you're transferring data from multiple servers. A change applied on one of them should not break the whole data integration. If changes were reflected dynamically, a column added on one server would make it impossible to read data from the others.
Of course it's perfectly fine to have a requirement like the one you've mentioned. It's just not something Informatica supports with this approach.
The only workaround is to create your own application that fetches the table definitions, generates the workflows, and imports them into Informatica prior to execution.
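As a starting point, such a generator could read the table structure from the data dictionary; a sketch for an Oracle source, with MY_SCHEMA and MY_TABLE as placeholders:

```sql
-- Fetch the current column definitions; a generator script could turn
-- these into source/target definition XML for import (e.g. via pmrep).
SELECT column_name, data_type, data_length, nullable
FROM   all_tab_columns
WHERE  owner = 'MY_SCHEMA'
AND    table_name = 'MY_TABLE'
ORDER  BY column_id;
```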

Issue with Informatica Loading into Partitioned Oracle Target Table

I am facing an issue loading into a partitioned Oracle target table.
We have 2 sessions with the same Oracle table as target:
a. INSERT data into Partition1
b. UPDATE data in Partition2
We are trying to achieve parallelism in the workflow; there are more partitions and sessions to be created for different data, but all into the same table, just different partitions.
Currently, when we run both sessions in parallel, the UPDATE session runs successfully, but the INSERT session fails with a NOWAIT error.
NOTE: both sessions load data into different partitions.
We moved the mapping logic into 2 different stored procedures (one does the INSERT, the other the UPDATE), and they run in parallel without any lock when executed directly in the database.
We also tried mentioning the partition name in the target override, but with the same result.
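For reference, this is the kind of partition-extended DML we are describing (SALES, P1, P2, and the columns are example names):

```sql
-- Each statement touches only the named partition.
INSERT INTO sales PARTITION (p1) (id, amount)
VALUES (101, 250);

UPDATE sales PARTITION (p2)
SET    amount = amount * 1.1
WHERE  id = 202;
```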
Can you advise what alternatives we have in order to achieve parallelism into the same target table from Informatica?
Thanks in advance