How to create target files dynamically: if Deptno = 10, create the target file DEPT10.txt; if Deptno = 20, create DEPT20.txt; if Deptno = 30, create DEPT30.txt.
You can achieve this in Informatica by following the steps below.
Add a column called out_file_name to the target definition using the 'Add FileName column to this table' option.
Use a Sorter to order the data by dept_id.
Then use an Expression transformation. In it, create the ports below and assign the expressions; here v_* are variable ports and o_* are output ports.
v_curr_dept_id = dept_id
v_flag = IIF(v_curr_dept_id = v_prev_dept_id, 0, 1)
v_prev_dept_id = dept_id
o_flag = v_flag
o_file_name = 'DEPT' || TO_CHAR(dept_id) || '.txt'
Use a Transaction Control transformation to create the different files, with this transaction control condition:
IIF(o_flag = 1, TC_COMMIT_BEFORE, TC_CONTINUE_TRANSACTION)
Link o_file_name to the out_file_name column from step 1, and link the other columns accordingly.
The whole mapping should look like this:
SQ --> SRT --> EXP --> TXN --> TGT
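For example, with sorted input the ports evaluate like this (illustrative data); each flag of 1 triggers a commit that starts a new file:

dept_id | o_flag | o_file_name
10      | 1      | DEPT10.txt   <-- commit, new file
10      | 0      | DEPT10.txt
20      | 1      | DEPT20.txt   <-- commit, new file
30      | 1      | DEPT30.txt   <-- commit, new file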
My source is an Oracle table 'SAMPLE' which contains a JSON value in the column REF_VALUE; this table is the source of an Informatica mapping. The data looks as below:
PROPERTY | REF_VALUE
CTMappings | {"CTMappings": [
{"CTId":"ABCDEFGHI_T2","EG":"ILTS","Type":"K1J10200"},
{"CTId":"JKLMNOPQR_T1","EG":"LAM","Type":"K1J10200"}
]}
I have this SQL query to expand the JSON data into rows:
select
  substr(CTId, 1, 9) as ID,
  substr(CTId, 11, 2) as VERSION,
  type as INTR_TYP,
  eg as ENTY_ID
from
(
  select jt.*
  from SAMPLE,
       JSON_TABLE (ref_value, '$.CTMappings[*]'
         columns (
           CTId varchar2(50) path '$.CTId',
           eg   varchar2(50) path '$.EG',
           type varchar2(50) path '$.Type'
         )) jt
  where trim(property) = 'CTMappings'
)
The result of the query is as below, which is the required output (JSON expanded into rows):

ID        VERSION  ENTY_ID  INTR_TYP
========= =======  =======  ========
ABCDEFGHI T2       ILTS     K1J10200
JKLMNOPQR T1       LAM      K1J10200
I'm new to Informatica, so I have used the table 'SAMPLE' as the source, and I now want to use this query to extract the data into row format within Informatica, but I don't know how to proceed. Any quick help would be greatly appreciated.
Declare the fields in the Source Qualifier and use the SQL Query (override) property to place your query - that should do the trick.
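For example (a sketch based on the query above): create the ports on the Source Qualifier in the same order as the SELECT list, then paste the query into the SQL Query property:

ID        string(9)
VERSION   string(2)
INTR_TYP  string(50)
ENTY_ID   string(50)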
I want to assign 'Y' to duplicate records and 'N' to unique records, and display those 'Y'/'N' flags in a 'Duplicate' column, like below:
Source Table:
Name,Location
Vivek,India
Vivek,UK
Vivek,India
Vivek,USA
Vivek,Japan
Target Table:
=============
Name,Location,Duplicate
Vivek,India,Y
Vivek,India,Y
Vivek,Japan,N
Vivek,UK,N
Vivek,USA,N
How do I create a mapping for this in Informatica PowerCenter? Which logic should I use?
[See the Image for More Clarification][1]
[1]: https://i.stack.imgur.com/2F20A.png
You need to calculate the count grouped by the key columns using an Aggregator, and then join back to the original flow on the key columns:
Use a Sorter to sort the data on the key columns (Name and Location in your example).
Use an Aggregator to calculate count(*) grouped by the key columns:
out_count = COUNT(*)
in_out ports: the key columns (Name, Location), marked as group-by
Use a Joiner to join the Aggregator output with the Sorter output on the key columns. Drag out_count and the key columns from the Aggregator into the Joiner, drag all columns from the Sorter, and do an inner join on the key columns.
Use an Expression and create an output port that derives the duplicate flag from out_count:
out_Duplicate = IIF(out_count > 1, 'Y', 'N')
The whole mapping should look like this:
SRC --> SRT --> AGG --\
          \------------> JNR --> EXP --> TGT
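With the sample data, the rows coming out of the Expression would carry these values (illustrative):

Name  | Location | out_count | out_Duplicate
Vivek | India    | 2         | Y
Vivek | India    | 2         | Y
Vivek | Japan    | 1         | N
Vivek | UK       | 1         | N
Vivek | USA      | 1         | N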
There's one more way to solve it without the Joiner, which is a costly transformation. I'm going to use the Name and Location columns from your example.
Use a Sorter on Name and Location.
Add an Expression with a variable port for each key column, e.g. v_prev_Name and v_prev_Location.
Assign the expressions accordingly:
v_prev_Name = Name
v_prev_Location = Location
Next, create another variable port, v_is_duplicate, with the following expression:
IIF(v_prev_Name = Name and v_prev_Location = Location, 1, 0)
Move v_is_duplicate up the list of ports so that it comes before v_prev_Name and v_prev_Location - THIS IS IMPORTANT, because variable ports are evaluated top to bottom and v_is_duplicate must compare against the previous row's values before they are overwritten. The order needs to be:
v_is_duplicate
v_prev_Name
v_prev_Location
Add an output port is_duplicate whose expression simply returns v_is_duplicate (use IIF(v_is_duplicate = 1, 'Y', 'N') if you need the 'Y'/'N' flags).
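Putting it together, the Expression ports might look like this (a sketch; note that with this logic the first row of each group compares against the previous group, so only the second and later occurrences get flagged):

v_is_duplicate  = IIF(v_prev_Name = Name AND v_prev_Location = Location, 1, 0)
v_prev_Name     = Name
v_prev_Location = Location
is_duplicate    = IIF(v_is_duplicate = 1, 'Y', 'N')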
I want to display a link in a data grid, only when a certain condition is met for that record. I also want that link to be dynamic, based on the data in the data grid. Lastly, the data grid is linked to a header record displayed above the data grid region.
Create a hidden field that will be used for the link text: Column Name = HIDDEN_LINK_TEXT, Type = Hidden. This field will have a source type of SQL Expression; Q in the example query below represents the data grid's source select statement. Parentheses are required in the SQL Expression text box for the hidden field.
(SELECT '[Static link text]' FROM TABLE B WHERE B.RECORD_ID =
Q.RECORD_ID AND B.FIELD_1 = Q.FIELD_1 AND B.FIELD_2 = Q.FIELD_2)
Create a displayed field for the link: Column Name = DISPLAYED_LINK, Type = Link.
Link Text should reference the hidden field we created in step 1. Link Text = &"HIDDEN_LINK_TEXT". Include the ampersand and double quotes.
Set the link target to your target page, and include any variables or "Set Items" you want to set when linking to the page.
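For example, the link target might look like this (a hypothetical sketch, assuming an APEX-style f?p URL; page 42 and P42_RECORD_ID are made-up names):

f?p=&APP_ID.:42:&SESSION.::NO::P42_RECORD_ID:&RECORD_ID.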
I have an Oracle table with columns like Document (type BLOB), Extension (VARCHAR2(10), with values like .pdf, .doc) and Document Description (VARCHAR2(100)). I want to export this data and provide it to my customer.
Can this be done in Kettle?
Thanks
I have an MSSQL database that stores images in a BLOB column, and I found a way to export these to disk using a dynamic SQL step.
First, I select only the columns necessary to build a file name and a SQL statement (id, username, record date, etc.). Then I use a Modified Javascript Value step to create both the output filename (minus the file extension):
var outputPath = '/var/output/';
var filename = outputPath + username + '_' + record_date;
// --> '/var/output/joe_20181121'
and the dynamic SQL statement:
var blob_query = "SELECT blob_column FROM dbo.table WHERE id = '" + id + "'";
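Combined, the Modified Javascript Value step would look something like this (a sketch; filename and blob_query must also be added as output fields of the step):

// Build the target file name (the extension is appended later by the
// Text file output step) and the per-row lookup query.
var outputPath = '/var/output/';
var filename = outputPath + username + '_' + record_date;
var blob_query = "SELECT blob_column FROM dbo.table WHERE id = '" + id + "'";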
Then, after using a Select values step to reduce the fields to just filename and blob_query, I use a Dynamic SQL row step (with "Outer Join" selected) to retrieve the blob from the database.
The last step outputs to a file using a Text file output step. It lets you supply the file name from a field and specify a file extension to append. On the Content tab, all boxes are unchecked, the Format is "no new-line term" and the Compression is "None". The only field exported is the blob_column returned from the Dynamic SQL row step, and its type should be Binary.
Obviously, this is MUCH slower than other table/SQL operations due to the dynamic SQL step making individual database connections for each row... but it works.
Good luck!
I have a table with a column called State. My requirement is to read data from the table and write it into multiple files based on the State name. I'm using Informatica PowerCenter as the ETL tool.
Create a mapping as below:
Source --> SQ (sort the data by state name) --> Expression --> Transaction Control --> Target
Expression: create a variable port to store the previous value of state and an output port flag:
flag = IIF(state = var_state,0,1)
var_state = state
In the Transaction Control transformation, use TC_COMMIT_BEFORE when flag = 1 (full condition in the sketch after this list).
Add a FileName port to the target and map the state name to it as the file name.
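A minimal sketch of the Transaction Control condition and the file-name expression (the .txt extension is an assumption; use whatever suits your target):

IIF(flag = 1, TC_COMMIT_BEFORE, TC_CONTINUE_TRANSACTION)
o_file_name = state || '.txt'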
You can also do this using a post-session shell command:

awk -F'|' '{ print > $2 }' outputfile

outputfile --> the name of the Informatica output file.
$2 --> assumes the 2nd field is the state; adjust it to match your file layout.
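For example, with illustrative pipe-delimited data:

$ cat outputfile
1|NY|first row
2|CA|second row
3|NY|third row
$ awk -F'|' '{ print > $2 }' outputfile
$ ls
CA  NY  outputfile

Each state's rows land in a file named after that state.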