Java Transformation error (Message Code: JAVA PLUGIN_1762) - Informatica

I have 2 csv source files.
I am using a Union transformation to consolidate the sources and then a Java transformation to generate rows. Sample input rows:
COLUMN1 COLUMN2 COLUMN3 COLUMN4
abc VK123 DKVGH VP234,VP111
bbb VK345 DGHKD VP999,VM33
The target should be:
COLUMN1 COLUMN2 COLUMN3 COLUMN4
abc VK123 DKVGH VP234
abc VK123 DKVGH VP111
bbb VK345 DGHKD VP999
bbb VK345 DGHKD VM33
Code in the Java transformation:
String str = COLUMN4;
String[] temp;
String delimiter = ",";
temp = str.split(delimiter);
for (int i = 0; i < temp.length; i++) {
    COLUMN4 = temp[i];
    generateRow();
}
I encounter the errors below after running the workflow:
Message Code: JAVA PLUGIN_1762
Message: [ERROR] java.lang.NullPointerException
Message Code: JAVA PLUGIN_1762
Message: [ERROR] at com.informatica.powercenter.server.jtx.JTXPartitionDriverImplGen.execute(JTXPartitionDriverImplGen.java:195)
Please give me some pointers on how to fix this.

Your Java code looks fine. Check whether the value of COLUMN4 is coming in as null; calling split() on a null string throws a NullPointerException. Alternatively, you can include a null check in the Java code:
if (COLUMN4 != null)
    str = COLUMN4;
else
    str = "";


How to emulate the formula Proper on AppScript for a specific column?

I found the following code to emulate the PROPER formula, but it has wrong (maybe outdated) syntax, and as far as I understand, it applies to all columns of a given sheet.
function PROPER_CASE(str) {
  if (typeof str != "string")
    throw `Expected string but got a ${typeof str} value.`;
  str = str.toLowerCase();
  var arr = str.split(/.-:?—/);
  return arr.reduce(function(val, current) {
    return val += (current.charAt(0).toUpperCase() + current.slice(1));
  }, "");
}
Here's an example of the input:

ColumnA                             ColumnB   ColumnC  ColumnD
EXCEL ACTION LIMIMTED (毅添有限公司)    207/2018  n/a      without-proper
Hang Wo Holdings                    205/2015  35/2020  without-proper
central southwood limited           308/2019  n/a      without-proper
This would be the desired output:

ColumnA                             ColumnB   ColumnC  ColumnD
Excel Action Limited (毅添有限公司)     207/2018  n/a      without-proper
Hang Wo Holdings                    205/2015  35/2020  without-proper
Central Southwood Limited           308/2019  n/a      without-proper
And this is the error output of that function:
Error
Expected string but got a undefined value.
PROPER_CASE # macros.gs:115
This is the only way I can see of reproducing your results. I don't see how to avoid capitalizing the first letter of the last two columns other than by skipping them:
function lfunko() {
  const ss = SpreadsheetApp.getActive();
  const sh = ss.getSheetByName("Sheet0");
  if (sh.getLastRow() > 4) {
    sh.getRange(6, 1, sh.getLastRow() - 5, sh.getLastColumn()).clearContent();
    SpreadsheetApp.flush();
  }
  const vs = sh.getDataRange().getDisplayValues().map((r, i) => {
    return r.map((c, j) => {
      if (i > 0 && j < 1) {
        let arr = c.toString().toLowerCase().split(/.-:?-/g);
        return arr.reduce((val, current) => {
          //Logger.log(current)
          return val += current.charAt(0).toUpperCase() + current.slice(1);
        }, '');
      } else {
        return c;
      }
    });
  });
  Logger.log(JSON.stringify(vs))
  sh.getRange(sh.getLastRow() + 2, 1, vs.length, vs[0].length).setValues(vs);
}
Data:

ColumnA                             ColumnB   ColumnC  ColumnD
EXCEL ACTION LIMIMTED (毅添有限公司)    207/2018  n/a      without-proper
Hang Wo Holdings                    205/2015  35/2020  without-proper
central southwood limited           308/2019  n/a      without-proper
Output:

ColumnA                             ColumnB   ColumnC  ColumnD
Excel action limimted (毅添有限公司)    207/2018  n/a      without-proper
Hang wo holdings                    205/2015  35/2020  without-proper
Central southwood limited           308/2019  n/a      without-proper
I have tested your code and it works fine: it does convert the input string into proper case.
However, take note that in Google Sheets, when you get values, your data comes back as a 2D (nested) array.
So to apply this to your spreadsheet, after getting the values you have to target the column you want to replace and loop through each string in the array. You then have to setValues() back to the specified range to replace the data in the spreadsheet.
Solution 1:
With your PROPER_CASE function in place, try adding this script to apply it to your spreadsheet:
function setToColumn() {
  var ss = SpreadsheetApp.getActiveSpreadsheet();
  var sheet = ss.getActiveSheet();
  var dataRange = sheet.getRange(1, 1, sheet.getLastRow()); // 2nd parameter is the column; change it to edit a different column
  var allData = dataRange.getValues().flat();
  var properData = [];
  allData.forEach(function(data) {
    properData.push([PROPER_CASE(data)]);
  });
  dataRange.setValues(properData);
}
(Screenshots of the sheet before and after running the script omitted.)
Solution 2:
If you don't mind using a different script that needs only one function, you may use the script below:
function properCase() {
  var ss = SpreadsheetApp.getActiveSpreadsheet();
  var sheet = ss.getActiveSheet();
  var dataRange = sheet.getRange(1, 1, sheet.getLastRow()); // 2nd parameter is the column; change it to edit a different column (1 = Column A, 2 = Column B)
  var allData = dataRange.getValues().flat();
  var properData = [];
  allData.forEach(function(data) {
    properData.push([data.toLowerCase().replace(/\b[a-z]/ig, function(match) { return match.toUpperCase(); })]);
  });
  dataRange.setValues(properData);
}
Reference for Solution 2:
Apps script how to format a cell to Proper Text (Case)

Combine columns into JSON format in BigQuery

I have columns in BigQuery, and an expected output, as shown in the screenshots (omitted).
I am trying to merge the columns into JSON using BigQuery, taking the letter before the underscore (the common name) as the output column name. I am trying this query:
with selectdata as (
  SELECT a_firstname, a_middlename, a_lastname FROM `account_id.Dataset.Table_name`
)
select TO_JSON_STRING(t) AS json_data FROM selectdata AS t;
How can I join the columns, with a condition or a CASE, to achieve this output in BigQuery?
Consider the approach below:
create temp function extract_keys(input string) returns array<string> language js as """
return Object.keys(JSON.parse(input));
""";
create temp function extract_values(input string) returns array<string> language js as """
return Object.values(JSON.parse(input));
""";
select * except(row_id) from (
  select format('%t', t) row_id,
    split(key, '_')[offset(0)] as col,
    '{' || string_agg(format('"%s":"%s"', split(key, '_')[safe_offset(1)], value)) || '}' as value
  from your_table t, unnest(extract_keys(to_json_string(t))) key with offset
  join unnest(extract_values(to_json_string(t))) value with offset
  using(offset)
  group by row_id, col
)
pivot (any_value(value) for col in ('a','b','c'))
If applied to the sample data in your question, the output matches the expected result (screenshot omitted). The temp functions turn each row into parallel arrays of column names and values; splitting each name at the underscore yields the group ('a', 'b', 'c') and the JSON key, string_agg reassembles each group into a JSON object string, and PIVOT spreads the groups into one column each.
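Since the sample rows are only visible in the screenshots, here is a self-contained sketch with a hypothetical inline table standing in for your_table (column names invented to follow the same prefix_suffix convention):

create temp function extract_keys(input string) returns array<string> language js as """
return Object.keys(JSON.parse(input));
""";
create temp function extract_values(input string) returns array<string> language js as """
return Object.values(JSON.parse(input));
""";
-- hypothetical stand-in for your_table
with your_table as (
  select 'john' as a_firstname, 'm' as a_middlename, 'doe' as a_lastname,
         'jane' as b_firstname, 'k' as b_middlename, 'roe' as b_lastname
)
select * except(row_id) from (
  select format('%t', t) row_id,
    split(key, '_')[offset(0)] as col,
    '{' || string_agg(format('"%s":"%s"', split(key, '_')[safe_offset(1)], value)) || '}' as value
  from your_table t, unnest(extract_keys(to_json_string(t))) key with offset
  join unnest(extract_values(to_json_string(t))) value with offset
  using(offset)
  group by row_id, col
)
pivot (any_value(value) for col in ('a','b'))
-- each output column holds a JSON object, e.g.
-- a = {"firstname":"john","middlename":"m","lastname":"doe"}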

DB2 use C and C++ host variable arrays

I'm attempting to insert multiple rows into a DB2 database using C/C++ code like this:
EXEC SQL BEGIN DECLARE SECTION;
char inputArrayChar1[3][10];
char inputArrayChar2[3][10];
char inputArrayChar3[3][10];
EXEC SQL END DECLARE SECTION;

for (int i = 0; i < 3; i++)
{
    sprintf(inputArrayChar1[i], "column1Data%d", i + 1);
    sprintf(inputArrayChar2[i], "column2Data%d", i + 1);
    sprintf(inputArrayChar3[i], "column3Data%d", i + 1);
}

EXEC SQL INSERT INTO TABLETEST (COLUMN1, COLUMN2, COLUMN3)
    VALUES(:inputArrayChar1, :inputArrayChar2, :inputArrayChar3);
After running, only one row of data ends up in the DB; the other two rows are not inserted.
Can anyone explain what could account for this?
From what I found, one case suggests I need to add the ROWS syntax, like this:
EXEC SQL INSERT INTO TABLETEST (COLUMN1, COLUMN2, COLUMN3)
    3 ROWS VALUES(:inputArrayChar1, :inputArrayChar2, :inputArrayChar3);
But with this syntax, the precompiler raises an SQL0104N message: "An unexpected token '3 ROWS' ....."
Can anyone explain what could account for this? Or do I need to set an environment variable on the database, or in the compiler environment?
Thanks.
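For reference, a minimal sketch of a portable row-by-row fallback (hypothetical variable names and buffer sizes, assuming the same table as above). A single-row INSERT per iteration works on every DB2 platform, whereas multi-row INSERT with host-variable arrays is platform-specific (DB2 for z/OS, for example, uses a FOR n ROWS clause), which may explain the SQL0104N elsewhere:

EXEC SQL INCLUDE SQLCA;

EXEC SQL BEGIN DECLARE SECTION;
char inputChar1[20];
char inputChar2[20];
char inputChar3[20];
EXEC SQL END DECLARE SECTION;

for (int i = 0; i < 3; i++)
{
    /* build one row's worth of values */
    sprintf(inputChar1, "column1Data%d", i + 1);
    sprintf(inputChar2, "column2Data%d", i + 1);
    sprintf(inputChar3, "column3Data%d", i + 1);

    /* one single-row INSERT per iteration */
    EXEC SQL INSERT INTO TABLETEST (COLUMN1, COLUMN2, COLUMN3)
        VALUES(:inputChar1, :inputChar2, :inputChar3);

    if (sqlca.sqlcode != 0)   /* stop on the first error */
        break;
}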

Implementing UPPER, TRIM and REPLACE in Apache Pig

I am quite new to the Pig environment. I have tried to implement my Pig script in two ways.
I.
data = LOAD 'sample2.txt' USING PigStorage(',') as(campaign_id:chararray,date:chararray,time:chararray,display_site:chararray,placement:chararray,was_clicked:int,cpc:int,keyword:chararray);
distinct_data = DISTINCT data;
val = foreach distinct_data generate campaign_id,date,time,UPPER(keyword),display_site,placement,was_clicked,cpc;
val1 = foreach val generate campaign_id,date,time,TRIM(keyword),display_site,placement,was_clicked,cpc;
val2 = foreach val1 generate campaign_id,REPLACE(date, '-', '/'),time,keyword,display_site,placement,was_clicked,cpc;
dump val2;
I get this error:
2016-09-29 02:45:40,826 INFO org.apache.pig.Main: Apache Pig version
0.10.0-cdh4.2.1 (rexported) compiled Apr 22 2013, 12:04:54 2016-09-29 02:45:40,827 INFO org.apache.pig.Main: Logging error messages to:
/home/training/training_materials/analyst/exercises/pig_etl/pig_1475131540824.log
2016-09-29 02:45:42,371 ERROR org.apache.pig.tools.grunt.Grunt: ERROR
1025: Invalid field
projection. Projected field [keyword] does not exist in schema:
campaign_id:chararray,date:chararray,time:chararray,org.apache.pig.builtin.upper_keyword_12:chararray,display_site:chararray,placement:chararray,was_clicked:int,cpc:int.
Details at logfile: /home/hduser/pig_etl/pig_1475131540824.log
But when I integrate UPPER, TRIM and REPLACE in one statement, it works:
II.
data = LOAD 'sample2.txt' USING PigStorage(',') as(campaign_id:chararray,date:chararray,time:chararray,display_site:chararray,placement:chararray,was_clicked:int,cpc:int,keyword:chararray);
distinct_data = DISTINCT data;
val = foreach distinct_data generate campaign_id,REPLACE(date, '-', '/'),time,TRIM(UPPER(keyword)),display_site,placement,was_clicked,cpc;
dump val;
So I just want someone to explain why method I didn't work and what the error message means.
When you apply TRIM in val1, there is no field called keyword in val: UPPER(keyword) was projected without an alias, so Pig generated the name org.apache.pig.builtin.upper_keyword_12, which is exactly what the schema in the error shows.
Whenever you apply a function, project it with an alias (AS) so you can avoid that error.
Before creating a new relation, it is also good practice to run DESCRIBE so the schema is clear to you; see the sketch after the solution.
The solution will be:
data = LOAD 'sample2.txt' USING PigStorage(',') as(campaign_id:chararray,date:chararray,time:chararray,display_site:chararray,placement:chararray,was_clicked:int,cpc:int,keyword:chararray);
distinct_data = DISTINCT data;
val = foreach distinct_data generate campaign_id,date,time,UPPER(keyword) as keyword,display_site,placement,was_clicked,cpc;
val1 = foreach val generate campaign_id,date,time,TRIM(keyword) as keyword,display_site,placement,was_clicked,cpc;
val2 = foreach val1 generate campaign_id,REPLACE(date, '-', '/') as date,time,keyword,display_site,placement,was_clicked,cpc;
dump val2;
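For instance, a quick DESCRIBE on the unaliased val from method I would surface the generated field name (a sketch; the exact counter suffix varies by run):

describe val;
-- without "AS keyword", Pig invents a name for the UPPER(...) column:
-- val: {campaign_id: chararray, date: chararray, time: chararray,
--       org.apache.pig.builtin.upper_keyword_12: chararray,
--       display_site: chararray, placement: chararray,
--       was_clicked: int, cpc: int}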

The result is being cast to double in Pig but is still being ordered as a string

I encountered the following problem:
First, my data is a string that looks like this:
decimals, decimals
example: 1.345, 3.456
I used the following Pig script to split this column, say QQ, into two columns:
result = FOREACH old_table GENERATE FLATTEN(STRSPLIT(QQ, ',')) as (COL1: double, COL2: double);
Then, I want to order it by first field, then second field.
result_ordered = ORDER result BY COL1, COL2;
However, I got the result like the following:
> 59.619198977071434 -151.4586740547339
> 60.52611316847121 -150.8005347076273
> 64.8310014577408 -147.84786488835852
> 7.059652849999997 125.59985130999996
which implies that my data is still being ordered as a string and not as a double. Has anyone encountered this issue and knows how to solve it? Thanks in advance!
I'm not sure why STRSPLIT is returning a chararray though you explicitly state they are doubles. But if you look at http://pig.apache.org/docs/r0.10.0/basic.html#arithmetic, notice that chararrays can't be multiplied by 1.0 to get doubles, but bytearrays can. Therefore you can do something like:
result = FOREACH old_table
    GENERATE FLATTEN(STRSPLIT(QQ, ',')) AS (COL1: bytearray, COL2: bytearray);
B = FOREACH result GENERATE 1.0 * COL1 AS COL1, 1.0 * COL2 AS COL2;
result_ordered = ORDER B BY COL1, COL2;
Which gives me the correct output of:
result_ordered: {COL1: double,COL2: double}
(7.059652849999997,125.59985130999996)
(59.619198977071434,-151.4586740547339)
(60.52611316847121,-150.8005347076273)
(64.8310014577408,-147.84786488835852)
Instead of assigning the output of FLATTEN to a schema with two doubles, try actually casting the fields with (double). It may be that Pig only uses the :double syntax for schema checking, but requires an explicit cast to convert the types during execution.
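A sketch of that explicit-cast variant (untested; it leaves the split fields untyped, since untyped fields default to bytearray and bytearray casts to double):

split_data = FOREACH old_table GENERATE FLATTEN(STRSPLIT(QQ, ',')) AS (COL1, COL2);
-- explicit (double) casts convert the values at execution time
casted = FOREACH split_data GENERATE (double)COL1 AS COL1, (double)COL2 AS COL2;
result_ordered = ORDER casted BY COL1, COL2;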