How to add multiple IF conditions in an Informatica expression

I have a situation where I need to update a flag based on these conditions:
1. If T_skey is present, check C_skey; if both are present, set the flag to 'N'.
2. If T_skey is present but inactive, check C_skey; if that one is active, set the flag to 'Y'.
I tried writing the conditions in separate variable ports and concatenating them in a different port, but the result comes out as NULL. Kindly help.
Thanks in advance.
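A nested IIF in a single output port is usually the simplest way to express this, and it avoids the NULL you get when no variable-port branch fires. A minimal sketch, assuming hypothetical ports T_SKEY, C_SKEY and active-flag ports T_ACTIVE and C_ACTIVE (adjust to your actual column names and flag values):

    -- Output port O_FLAG (string); all port names and flag values here are assumptions
    IIF(NOT ISNULL(T_SKEY) AND T_ACTIVE = 'Y' AND NOT ISNULL(C_SKEY), 'N',
    IIF(NOT ISNULL(T_SKEY) AND T_ACTIVE = 'N' AND C_ACTIVE = 'Y', 'Y',
    NULL))

Note that IIF treats a NULL condition as false, so checking ISNULL explicitly keeps the branches predictable.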

Related

Informatica session refuses to update first then insert in "update then insert" mode

Very basic setup: source-to-target. I wanted to replicate the MERGE behavior.
I removed the Update Strategy and activated the "update then insert" rule on the target within the session. It doesn't work as described: it always attempts an insert on the primary key column, even when the same key arrives, which should have triggered an UPDATE statement. I tried other target methods; it always attempts to insert. Attached is the mapping pic.
[image: basic merge attempt]
Finally figured this out. You have to make edits in three places:
a) mapping: remove the Update Strategy
b) session, target properties: set the "update then insert" method
c) session's own properties: "Treat source rows as"
In the third case you have to switch "Treat source rows as" from Insert to Update.
This will then allow both updates and inserts.
Why it is set up like this is beyond me, but it works.
I'll make an attempt to clarify this a bit.
First of all, using an Update Strategy in the mapping requires the session's Treat source rows as property to be set to Data driven. This is the slowest possible option, as the row type is decided on a row-by-row basis within the mapping - but that's exactly what you need if using the Update Strategy transformation. So in order to mirror MERGE, you need to remove it.
You also need to tell the session not to expect this in the mapping anymore, so the property needs to be set to one of the remaining values. There are two options (a SQL sketch of the emulated MERGE follows them):
Set Treat source rows as to Insert - this means all rows will be inserted each time. If there are no errors (e.g. caused by a unique index), the data will be multiplied. In order to mimic MERGE behavior, you'd need to add a unique index that would prevent duplicate inserts and tell the target connector to Insert else update. This way, if the insert fails, an update attempt will be made.
Set Treat source rows as to Update - this tells PowerCenter to try an update for each and every input row. Using Update else insert means that in case of failure (i.e. no row to update) there will be no error; instead, an insert attempt will be made. Here there's no need for a unique index. That's one difference.
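For reference, both settings emulate what a single SQL statement would do natively. A minimal sketch of that MERGE, with hypothetical table and column names:

    -- Illustration only; tgt_customer/src_customer and their columns are assumptions
    MERGE INTO tgt_customer t
    USING src_customer s
      ON (t.customer_id = s.customer_id)
    WHEN MATCHED THEN
      UPDATE SET t.name = s.name
    WHEN NOT MATCHED THEN
      INSERT (customer_id, name) VALUES (s.customer_id, s.name);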
An additional difference - although both solutions will reflect the MERGE operation - might be observed in performance. In an environment where new data is very rare, the first approach will be slow: each time, an insert attempt will be made just to fail and be followed by an update operation; only a few times will it succeed at the first attempt. The second approach will be faster: updates will succeed most of the time, and only on rare occasions will one fail and result in an insert operation.
Of course, if updates are not often expected, it will be exactly the opposite.
This can be seen as a complex solution for a simple merge, but it also lets the developer influence the performance.
Hope this sheds some light!

Informatica Update Strategy is not flagging records

Hello, I have a simple mapping that basically has a router to decide whether the record has to be inserted or updated, and then uses an Update Strategy to flag the row.
The records were updating and inserting as expected; then I had to make some modifications to the logic and made the required changes.
Now the records are no longer getting flagged as an insert or an update. Settings below:
1) DD_UPDATE and DD_INSERT coded in the Update Strategies (a sketch follows this list).
2) At session level, Treat source rows as set to Data driven.
3) The two targets set to "update as update" and insert, respectively.
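For reference, after a router split like this each Update Strategy expression is typically just the constant for its branch, e.g.:

    -- Update Strategy in the insert branch:
    DD_INSERT

    -- Update Strategy in the update branch:
    DD_UPDATE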
I even ran a debugger to see what is happening; the insert and update records are passing through the Update Strategy, however the row type is set to blank when it's passed to the target instance :( What could be the issue?
I finally found the issue. Both Update Strategy transformations were corrupted; deleting and recreating them resolved the issue :) Thanks for your help!

Scenarios possible in Apache NiFi

I am trying to understand Apache NiFi inside and out, keeping files in HDFS, and I have various scenarios to work on. Please let me know the feasibility of each, with explanations. I am adding a few notes on my understanding to each scenario.
Can we check whether a null value is present within a single column? I have checked different processors and found a notNull property, but I think this works on file names, not on columns present within a file.
Can we drop a column from a file in HDFS using NiFi transformations?
Can we change column values, i.e. replace one text with another? I have checked the ReplaceText processor for this.
Can we delete a row from a file in the file system?
Please suggest the possibilities and how to achieve the goal.
Try with this:
1. Can we check whether a null value is present within a single column?
Yes - using the ReplaceText processor you can check and replace if you want to replace, or use RouteOnAttribute if you want to route based on a null-value condition.
2. Can we drop a column present in HDFS using NiFi transformations?
Yes - using the same ReplaceText processor you can keep only the desired fields, with a delimiter. I used to need just a current-date field and some mandatory fields in my data, comma separated, so I provided the replacement value as
"${'userID'}","${'appID'}","${sitename}","${now():format("yyyy-MM-dd")}"
3. To change a column value, use the ReplaceText processor as well.
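A minimal sketch of that ReplaceText configuration for a literal text swap (the property names are NiFi's; the search and replacement values are assumptions, so adjust to your data):

    Replacement Strategy: Literal Replace
    Search Value: oldValue
    Replacement Value: newValue

With Regex Replace as the strategy, the Search Value can instead be a regular expression that captures only the columns you want to keep, which is how the column-drop trick above works.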

Multiple queries with mysql_query in a C++ project

So, this will not work with mysql_query.
I am strictly working in C++ and I am not using PHP.
I want this double query to be executed so that I will always have a unique ID in a transaction system with concurrent users creating IDs.
mysql_query(connection,
    "INSERT INTO User() VALUES (); SELECT LAST_INSERT_ID();");
It works perfectly in the MySQL database, but I need to add it to Eclipse (I am using Ubuntu 12.04 LTS).
My application is quite big and I would not like to change to mysqli if I can avoid it, but if there is no other way, it will be OK.
Can you help me with this? Thanks in advance.
According to the MySQL C API documentation:
MySQL 5.6 also supports the execution of a string containing multiple
statements separated by semicolon (“;”) characters. This capability is
enabled by special options that are specified either when you connect
to the server with mysql_real_connect() or after connecting by
calling mysql_set_server_option().
And:
CLIENT_MULTI_STATEMENTS enables mysql_query() and mysql_real_query()
to execute statement strings containing multiple statements separated
by semicolons. This option also enables CLIENT_MULTI_RESULTS
implicitly, so a flags argument of CLIENT_MULTI_STATEMENTS to
mysql_real_connect() is equivalent to an argument of
CLIENT_MULTI_STATEMENTS | CLIENT_MULTI_RESULTS. That is,
CLIENT_MULTI_STATEMENTS is sufficient to enable multiple-statement
execution and all multiple-result processing.
So, you can supply several statements in a single mysql_query() call, separated by semicolons, assuming you set up your MySQL connection a bit differently, using mysql_real_connect().
You need to pass the following flag as the last argument: CLIENT_MULTI_STATEMENTS, whose documentation says:
Tell the server that the client may send multiple statements in a
single string (separated by “;”). If this flag is not set,
multiple-statement execution is disabled. See the note following this
table for more information about this flag.
See C API Support for Multiple Statement Execution and 22.8.7.53. mysql_real_connect() for more details.
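Putting it together, a minimal C++ sketch (the connection parameters are placeholders; error handling kept short):

    // Connect with multi-statement support, run both statements,
    // and walk the result sets until we reach the SELECT's.
    #include <mysql/mysql.h>
    #include <cstdio>

    int main() {
        MYSQL *conn = mysql_init(nullptr);
        if (!mysql_real_connect(conn, "localhost", "user", "password", "mydb",
                                0, nullptr, CLIENT_MULTI_STATEMENTS)) {
            std::fprintf(stderr, "connect failed: %s\n", mysql_error(conn));
            return 1;
        }
        if (mysql_query(conn,
                "INSERT INTO User() VALUES (); SELECT LAST_INSERT_ID();")) {
            std::fprintf(stderr, "query failed: %s\n", mysql_error(conn));
            return 1;
        }
        do {
            MYSQL_RES *res = mysql_store_result(conn);
            if (res) {  // the INSERT yields no result set, the SELECT does
                MYSQL_ROW row = mysql_fetch_row(res);
                if (row && row[0])
                    std::printf("new id: %s\n", row[0]);
                mysql_free_result(res);
            }
        } while (mysql_next_result(conn) == 0);
        mysql_close(conn);
        return 0;
    }

Incidentally, for this particular case you could also skip the multi-statement setup entirely: run the INSERT alone and call mysql_insert_id(conn), which returns the AUTO_INCREMENT value generated on that connection.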

PowerCenter - concurrent target instances

We have a situation where we get a different execution order for instances of the same target being loaded from a single Source Qualifier.
The problem shows up when we promote a mapping from DEV to TEST: after promoting, execution in TEST misbehaves.
For instance, we have a router with three groups for Insert, Update and Delete, followed by the appropriate Update Strategies to set the row type accordingly, followed by three target instances.
RTR ----> UPD_Insert ----> TGT_Insert
   \
    \___> UPD_Update ----> TGT_Update
     \
      \__> UPD_Delete ----> TGT_Delete
When we test this out with data that does an insert, followed by an update, followed by a delete, all based on the same primary key, we get a different execution order in TEST compared to the same data in our DEV environment.
Anyone have any thoughts? I would post an image, but I don't have enough cred yet.
Cheers,
Gil.
You cannot control the load order as long as you have a single source. If you could separate the loads to use separate sources, the Target Load Order setting in the mapping could be used, or you could even create separate mappings for them.
As it is now, you should use a single target instance and utilize the Update Strategy transformation to determine the wanted operation for each record passing through. It is then possible to use a sort to define the order in which the different operations are applied to the physical table.
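A sketch of that single Update Strategy, assuming a hypothetical OPERATION port carrying 'INS'/'UPD'/'DEL' from the preceding logic:

    -- Update Strategy expression; the OPERATION port and its values are assumptions
    DECODE(OPERATION, 'INS', DD_INSERT, 'UPD', DD_UPDATE, 'DEL', DD_DELETE, DD_REJECT)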
You can use a Sorter transformation just before the Update Strategy; based on the update strategy condition you can sort the incoming rows, so the data goes first through the insert, then the update, and last through the delete.
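For that ordering, a derived sort-key port works. A sketch, reusing the hypothetical OPERATION port from above (sort ascending on O_LOAD_ORDER in the Sorter):

    -- Expression transformation output port O_LOAD_ORDER (integer); names are assumptions
    DECODE(OPERATION, 'INS', 1, 'UPD', 2, 'DEL', 3, 9)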
A simple solution is to try renaming the target definitions in alphabetical order, like INSERT_A, UPDATE_B, DELETE_C, and then start loading.
This will load them in A, B, C order. Try it and let me know.