How to Implement Insert, Update, and Delete in the Same Mapping in Informatica

We have a requirement to implement insert, update, and delete logic within the same mapping in Informatica. The source and target table structures are identical. There is no primary key on the target table, so we define PERSON_KEY as the primary key in the Informatica mapping in order to update records. I am currently using a lookup on the target table to compare records, and in the Router transformation the following logic is used for inserts and updates. I have also provided the table structure for reference.
NOTE: The source and target are in different databases.
INSERT : ISNULL(LKP_PERSON_KEY)
UPDATE : IIF(PERSON_KEY=LKP_PERSON_KEY,TRUE,FALSE)
PERSON_KEY NUMBER(7, 0) ,
SOURCE_CD VARCHAR2(16 CHAR) ,
SOURCE_INSTANCE NUMBER(2, 0) ,
TYPE_CD VARCHAR2(32 BYTE) ,
TYPE_INSTANCE NUMBER(2, 0) ,
VALUE VARCHAR2(255 CHAR) ,
VALUE_TEXT VARCHAR2(255 CHAR) ,
UPDATE_SOURCE VARCHAR2(32 CHAR) ,
UPDATE_ACCOUNT VARCHAR2(32 CHAR) ,
UPDATE_DATETIME DATE ,
UPDATE_SUNETID VARCHAR2(64 CHAR) ,
UPDATE_COMMENT VARCHAR2(255 CHAR)

Use an Update Strategy transformation.
Make sure PERSON_KEY is defined as the primary key in the Target Designer.
Then in the mapping, right before the target, add an Expression transformation. Pull in all data columns plus the PERSON_KEY and LKP_PERSON_KEY columns, and create an output port with the logic below. I assume that if neither of the conditions is met, the record should be deleted.
out_insert_update_flag =
IIF(ISNULL(LKP_PERSON_KEY), 'INSERT',
    IIF(PERSON_KEY = LKP_PERSON_KEY, 'UPDATE', 'DELETE'))
Then add the Update Strategy transformation between the Expression transformation and the target. Pull in all required columns plus out_insert_update_flag, and create an update strategy expression like this:
IIF(out_insert_update_flag = 'INSERT', DD_INSERT,
    IIF(out_insert_update_flag = 'UPDATE', DD_UPDATE, DD_DELETE))
Then connect all required columns to the target.
In the session, set the target load strategy to Data Driven.
The mapping should look like this:
.... --> EXP --> UPD --> TGT
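For reference only, here is a rough SQL analogue of the insert/update branches as an Oracle MERGE, showing just three of the table's columns; the table names source_person and target_person are made up. It would not work directly in this scenario, since the source and target sit in different databases, which is exactly why the lookup plus Update Strategy mapping is needed:
-- Sketch only: source_person / target_person are hypothetical names.
-- Matching keys are updated; keys missing from the target are inserted.
MERGE INTO target_person t
USING source_person s
   ON (t.PERSON_KEY = s.PERSON_KEY)
WHEN MATCHED THEN
  UPDATE SET t.VALUE = s.VALUE,
             t.UPDATE_DATETIME = s.UPDATE_DATETIME
WHEN NOT MATCHED THEN
  INSERT (PERSON_KEY, VALUE, UPDATE_DATETIME)
  VALUES (s.PERSON_KEY, s.VALUE, s.UPDATE_DATETIME);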

Related

Informatica - SQL query to extract data from JSON format value

My source is an Oracle table 'SAMPLE' that contains a JSON value in the column REF_VALUE; this table is the source of my Informatica mapping. The data looks like this:
PROPERTY | REF_VALUE
CTMappings | {"CTMappings": [
{"CTId":"ABCDEFGHI_T2","EG":"ILTS","Type":"K1J10200"},
{"CTId":"JKLMNOPQR_T1","EG":"LAM","Type":"K1J10200"}
]}
I have a SQL query that expands the JSON data into rows:
select
substr(CTId, 1, 9) as ID,
substr(CTId, 11, 2) as VERSION,
type as INTR_TYP,
eg as ENTY_ID
from
(
select jt.*
from SAMPLE,
JSON_TABLE (ref_value, '$.CTMappings[*]'
columns (
CTId varchar2(50) path '$.CTId',
eg varchar2(50) path '$.EG',
type varchar2(50) path '$.Type'
)) jt
where trim(property) = 'CTMappings')
The result of the query is as below:
ID        VERSION ENTY_ID INTR_TYP
========= ======= ======= ========
ABCDEFGHI T2      ILTS    K1J10200
JKLMNOPQR T1      LAM     K1J10200
which is the required output (JSON expanded into rows).
I'm new to Informatica, so I have used 'SAMPLE' as the source table, and I want to use this query to extract the data into row format in Informatica, but I don't know how to proceed. A quick answer would be a great help.
Declare the fields in the Source Qualifier and use the SQL Override property to place your query there - that should do the trick.
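As a sketch (a general PowerCenter detail, not something stated in the original answer): the SELECT list in the SQL override has to match the Source Qualifier ports in number, order, and type, and the override is usually entered without a trailing semicolon. Assuming the simplified table and JSON from the question, the override would be the cleaned-up query:
-- SQL override sketch: four columns, matching four Source Qualifier
-- ports (ID, VERSION, INTR_TYP, ENTY_ID), in this exact order
select
  substr(jt.CTId, 1, 9)  as ID,       -- first 9 characters form the ID
  substr(jt.CTId, 11, 2) as VERSION,  -- suffix after the underscore
  jt.type                as INTR_TYP,
  jt.eg                  as ENTY_ID
from SAMPLE,
     JSON_TABLE(ref_value, '$.CTMappings[*]'
       columns (
         CTId varchar2(50) path '$.CTId',
         eg   varchar2(50) path '$.EG',
         type varchar2(50) path '$.Type'
       )) jt
where trim(property) = 'CTMappings'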

I want to Assign 'Y' to the Duplicate Records and 'N' to the Unique Records, And Display those 'Y' and 'N' Flags in 'Duplicate' Column

I want to assign 'Y' to the duplicate records and 'N' to the unique records, and display those 'Y' and 'N' flags in a 'Duplicate' column, like below.
Source Table:
Name,Location
Vivek,India
Vivek,UK
Vivek,India
Vivek,USA
Vivek,Japan
Target Table:
=============
Name,Location,Duplicate
Vivek,India,Y
Vivek,India,Y
Vivek,Japan,N
Vivek,UK,N
Vivek,USA,N
How do I create such a mapping in Informatica PowerCenter? Which logic should I use?
[See the Image for More Clarification][1]
[1]: https://i.stack.imgur.com/2F20A.png
You need to calculate the count grouped by the key columns using an Aggregator, and then join back to the original flow based on the key columns.
Use a Sorter to sort the data based on the key columns - Name and Location in your example.
Use an Aggregator to calculate count(*) grouped by the key columns:
out_count = COUNT(*)
in_out - key columns
Use a Joiner to join the Aggregator data and the Sorter data based on the key columns. Drag out_count and the key columns from the Aggregator into the Joiner, and drag all columns from the Sorter. Do an inner join on the key columns.
Use an Expression transformation and create an output port. Use the out_count column to calculate your duplicate flag:
out_Duplicate = IIF(out_count > 1, 'Y', 'N')
The whole mapping should look like this:
SRC --> SRT --> AGG --> \
          |--------------> JNR --> EXP --> TGT
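For intuition only, here is the same Aggregator + Joiner logic in plain SQL, using a window function (the table name src is made up):
-- Count the rows per (Name, Location) group and flag every row of a
-- group that occurs more than once - both copies of a duplicate get 'Y'
SELECT name,
       location,
       CASE WHEN COUNT(*) OVER (PARTITION BY name, location) > 1
            THEN 'Y' ELSE 'N'
       END AS duplicate
FROM src;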
There's one more way to solve it without the Joiner, which is a costly transformation. I'm going to use the Name and Location sample columns from your example.
Use the Sorter on Name and Location.
Add an Expression transformation with a variable port for each key column, called e.g. v_prev_Name and v_prev_Location.
Assign the expressions accordingly:
v_prev_Name = Name
v_prev_Location = Location
Next, create another variable port v_is_duplicate with the following expression:
IIF(v_prev_Name = Name AND v_prev_Location = Location, 1, 0)
Move v_is_duplicate up the list of ports so that it comes before v_prev_Name and v_prev_Location - THIS IS IMPORTANT. Ports are evaluated top to bottom, so v_is_duplicate must read the previous row's values before the variable ports are overwritten with the current row's. The order needs to be:
v_is_duplicate
v_prev_Name
v_prev_Location
Add an output port is_duplicate that simply returns v_is_duplicate (or IIF(v_is_duplicate = 1, 'Y', 'N') if you need the Y/N flags).
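One caveat worth flagging (my note, not part of the original answer): this single-pass approach marks only the second and later occurrences of a duplicate pair; the first occurrence still gets 'N', unlike the Aggregator version above, which flags every copy. In SQL terms it behaves like this sketch (the table name src is again made up):
-- Compare each row with the previous one after sorting by the keys,
-- which is what the variable ports do row by row
SELECT name,
       location,
       CASE WHEN LAG(name)     OVER (ORDER BY name, location) = name
             AND LAG(location) OVER (ORDER BY name, location) = location
            THEN 'Y' ELSE 'N'
       END AS duplicate
FROM src;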

How to update redshift column: simple text replacement

I have a large target table with columns (id, value). I want to update value='old' to value='new'.
The simplest way would be to UPDATE target SET value='new' WHERE value='old';
However, Redshift implements an UPDATE as a delete plus an insert of new rows, so this is supposedly not recommended. So I tried to do a merge column update:
-- staging
CREATE TABLE stage (LIKE target INCLUDING DEFAULTS);
INSERT INTO stage (SELECT id, value FROM target WHERE value = 'old');
UPDATE stage SET value = 'new' WHERE value = 'old'; -- ??? how do you update value?
-- merge
BEGIN TRANSACTION;
UPDATE target
SET value = stage.value FROM stage
WHERE target.id = stage.id AND target.distkey = stage.distkey; -- collocated join?
END TRANSACTION;
DROP TABLE stage;
This can't be the best way of creating the stage table: I still have to do all these UPDATE delete/writes when I update this way. Is there a way to do it in the INSERT?
Is it necessary to force the collocated join when I use CREATE TABLE LIKE?
Are you updating all the rows in the table?
If yes, you can use CTAS (CREATE TABLE AS), which is the recommended method.
Assuming your table looks like this:
table1
id, col1,col2, value
You can use the following SQL to create a new table:
CREATE TABLE tmp_table AS
SELECT id, col1, col2, 'new_value' AS value
FROM table1;
After you verify the data in tmp_table:
DROP TABLE table1;
ALTER TABLE tmp_table RENAME TO table1;
If you are not updating all the rows, you can use a filter in the CTAS and then insert the rest of the rows into the new table (let me know if you need more info if this is the case):
CREATE TABLE tmp_table AS
SELECT id, col1, col2, 'new_value' AS value
FROM table1
WHERE value = 'old';

INSERT INTO tmp_table SELECT * FROM table1 WHERE value != 'old';
The next step would be to DROP table1 and rename tmp_table to table1.
Update: Based on your comment, you can do the following; let me know if this solves your case.
This method basically creates a new table to replace your existing table.
I have used some of your code:
CREATE TABLE stage (LIKE target INCLUDING DEFAULTS);
INSERT INTO stage SELECT id, 'new' FROM target WHERE value = 'old';
The INSERT above inserts the rows to be updated, already carrying the 'new' value, so there is no need to run an UPDATE after this. Then bring over the unchanged rows:
INSERT INTO stage SELECT id, value FROM target WHERE value != 'old';
At this point the target table, i.e. your original data, is still intact, and the stage table has both sets of rows: the updated rows with the 'new' value and the rows you did not want to change.
To replace your target with stage:
DROP TABLE target;
or, to keep it for further verification:
ALTER TABLE target RENAME TO target_old;
and finally:
ALTER TABLE stage RENAME TO target;
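Putting the steps together, the whole swap can run in one transaction - a sketch, assuming target really has just the id and value columns:
BEGIN;
-- staging table with the same structure and defaults as target
CREATE TABLE stage (LIKE target INCLUDING DEFAULTS);
-- rows to change, already rewritten with the new value
INSERT INTO stage SELECT id, 'new' FROM target WHERE value = 'old';
-- rows to keep as-is
INSERT INTO stage SELECT id, value FROM target WHERE value != 'old';
-- swap, keeping the old table around for verification
ALTER TABLE target RENAME TO target_old;
ALTER TABLE stage RENAME TO target;
COMMIT;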
From a Redshift developer:
This case doesn't require an upsert (update + insert); it is fine to just run the update:
UPDATE target SET value='new' WHERE value='old';
Another way would be to INSERT the rows you need and DELETE the other rows, but that's unnecessarily complicated.

Informix: Modify from CLOB to LVARCHAR

I have a table
CREATE TABLE TEST
(
test_column CLOB
)
I want to change the datatype of test_column to LVARCHAR. How can I achieve this? Here is what I have tried so far:
alter table test modify test_column LVARCHAR(2500)
This works, but the content of test_column gets converted from 'test' to '01000000d9c8b7a61400000017000000ae000000fb391956000000000100000000000000000000000000000000000000000000000000000000000000000000000000000000000000'.
alter table test add tmp_column LVARCHAR(2500);
update test set tmp_column = DBMS_LOB.SUBSTR(test_column,2500,1);
This does not work; I get the following exception:
[Error Code: -674, SQL State: IX000] Method (substr) not found.
Do you have any further ideas?
I used a 12.10.xC5DE instance to do some tests.
From what I could find in the manuals, there is no cast from CLOB to other data types:
CLOB data type
No casts exist for CLOB data. Therefore, the database server cannot convert data of the CLOB type to any other data type, except by using these encryption and decryption functions to return a BLOB. Within SQL, you are limited to the equality ( = ) comparison operation for CLOB data. To perform additional operations, you must use one of the application programming interfaces from within your client application.
The encryption/decryption functions mentioned still return CLOB type objects, so they do not do what you want.
Despite the manual saying that there is no cast for CLOB, there is a registered cast in the SYSCASTS table. Using dbaccess, I tried an explicit cast on some test data and got return values similar to the ones you are seeing. The text in the CLOB column is 'teste 01', terminated with a line break.
CREATE TABLE myclob
(
id SERIAL NOT NULL
, doc CLOB
);
INSERT INTO myclob ( id , doc ) VALUES ( 0, FILETOCLOB('file1.txt', 'client'));
SELECT
id
, doc
, doc::LVARCHAR AS conversion
FROM
myclob;
id 1
doc
teste 01
conversion 01000000d9c8b7a6080000000800000007000000a6cdc0550000000001000000000
0000000000000000000000000000000000000000000000000000000000000000000
0000000000
So there is a cast from CLOB, but it does not seem to be useful for what you want.
That brings us back to the SQL Packages Extension. You need to register this datablade in the database. The required files are located in $INFORMIXDIR/extend, and you want the excompat.* module. Using the admin API, you can register the module by executing the following:
EXECUTE FUNCTION sysbldprepare('excompat.*', 'create');
If the return value is 0 (zero), the module is now registered.
SELECT
id
, DBMS_LOB_SUBSTR(doc, DBMS_LOB_GETLENGTH(doc) - 1, 1) as conversion
FROM
myclob;
id 1
conversion teste 01
Another way would be to register your own cast from CLOB to LVARCHAR, but you would have to code a UDR to implement it.
P.S.: I subtract 1 from the CLOB length to remove the trailing line break.
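With the module registered, the original goal - replacing the CLOB column with an LVARCHAR one - could be finished along these lines. This is only a sketch, assuming all values fit in 2500 bytes; subtract 1 from the length, as above, if a trailing line break needs stripping:
-- copy the CLOB contents into a new LVARCHAR column
ALTER TABLE test ADD tmp_column LVARCHAR(2500);
UPDATE test
   SET tmp_column = DBMS_LOB_SUBSTR(test_column,
                                    DBMS_LOB_GETLENGTH(test_column), 1);
-- drop the CLOB column and rename the replacement
ALTER TABLE test DROP (test_column);
RENAME COLUMN test.tmp_column TO test_column;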

Informatica: something like CDC without adding any column in the target table

I have a source table named A in Oracle.
Initially, table A is loaded (copied) into table B.
Then I run DML on table A: insert, delete, update.
How do I reflect those changes in table B, without creating any extra column in the target table?
A timestamp for the rows is not available, so I have to compare the rows in the source and the target.
E.g., if a row is deleted in the source, it should be deleted in the target; if a row is updated, it should be updated in the target; and if a row is not available in the target, it should be inserted into the target.
Please help!
Take both A and B as sources.
Do a full outer join using a Joiner (or, if both tables are in the same database, you can join in the Source Qualifier).
In an Expression transformation, create a flag based on the following scenarios:
A's key fields are null => flag = 'Delete'
B's key fields are null => flag = 'Insert'
Both A's and B's key fields are present => compare the non-key fields of A and B; if any of them differ, set the flag to 'Update', else to 'No Change'
Now you can send the records to the target (B) after applying the appropriate function using an Update Strategy.
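The flag logic, expressed as a SQL sketch (key_col, col1, and col2 are placeholder column names):
-- Full outer join of source A and target B; a NULL key on either side
-- tells us whether the row exists only in B (delete) or only in A (insert)
SELECT COALESCE(a.key_col, b.key_col) AS key_col,
       CASE
         WHEN a.key_col IS NULL THEN 'Delete'
         WHEN b.key_col IS NULL THEN 'Insert'
         WHEN a.col1 <> b.col1 OR a.col2 <> b.col2 THEN 'Update'
         ELSE 'No Change'
       END AS flag
FROM A a
FULL OUTER JOIN B b ON a.key_col = b.key_col;
-- note: wrap the non-key comparisons in NVL/COALESCE if they can be NULL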
If you do not need to retain the individual operations in the target table (as no extra column is allowed), the fastest way would simply be:
1) Truncate B
2) Insert A into B
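In SQL, that full refresh is just (table names from the question):
-- wipe the target and reload it from the source
TRUNCATE TABLE B;
INSERT INTO B SELECT * FROM A;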