How to overcome a unique constraint error in Informatica

Suppose one of my sessions ran successfully for the last 10 days and today it failed because of a unique constraint error. How can I resolve this issue by changing session properties?

I can suggest a non-recommended approach: if you don't care about the rejected data, set Stop on Errors = 0 in the session properties. This will log the rejected rows in the session log and let the session succeed.
But ideally you should handle the error properly in the mapping. For example, use a lookup on the target to set a flag indicating whether the record is to be inserted or updated, then use an Update Strategy transformation that returns DD_INSERT or DD_UPDATE based on that flag before the target, as sketched below.
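A minimal sketch of such an Update Strategy expression, assuming the target lookup returns a port named lkp_KEY that is NULL when no matching row exists in the target (the port name is hypothetical):
IIF(ISNULL(lkp_KEY), DD_INSERT, DD_UPDATE)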

Related

Oracle APEX - how to raise an error in your own custom validation process without showing it on the error page

I am using Oracle Apex 19.2. I need to create a Form Page with many fields and many validations.
Instead of creating one validation routine for every field and check, I want to put them all into one PL/SQL procedure and create a Process routine in APEX. Inside that PL/SQL procedure I would perform all necessary checks and use apex_error.add_error to add all errors to the stack.
If there is at least one error, I want to break procedure execution and show it on the page. The problem is that the procedure always completes successfully, so no error is displayed. I tried raise_application_error, but then my custom error is also visible on the error page.
The same applies when raising apex_application.e_stop_apex_engine: apart from my custom validations there is also an ORA-20876 error from apex_application.stop_apex_engine.
Is there any way I could either break procedure execution without raising an error, or somehow ignore the error I raise to break it?
Problem solved.
I defined my PL/SQL procedure as an APEX Process routine. Instead of that, I should have defined it as an APEX Validation. It looks like APEX carries out all validations and then, even if they all passed, it checks the error stack and does not proceed with processing if there are any errors. Below are the steps to achieve it.
Define a new APEX Validation with Validation Type = PL/SQL Function Body (returning Boolean)
Here comes the tricky part: in the PL/SQL code field you call your PL/SQL procedure and always return true. It looks like this:
your_plsql_package.your_procedure;
return true;
In the mandatory Error Message field you write whatever message you want; it will never be displayed on the error page.
And that is it. Instead of having dozens of validations on the page, you can have one procedure to rule them all :) However, it could be more developer friendly.
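For illustration, a minimal sketch of what such a package procedure might look like, assuming page items P1_NAME and P1_EMAIL (the item names and checks are hypothetical; session state is read with the v() function):
procedure your_procedure is
begin
  -- each failed check just pushes an error onto the APEX error stack
  if v('P1_NAME') is null then
    apex_error.add_error(
      p_message          => 'Name is required.',
      p_display_location => apex_error.c_inline_in_notification);
  end if;
  if v('P1_EMAIL') is null then
    apex_error.add_error(
      p_message          => 'Email is required.',
      p_display_location => apex_error.c_inline_in_notification);
  end if;
end your_procedure;
The procedure itself never raises anything; the validation wrapper shown above still returns true, and APEX stops processing because the error stack is not empty.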

Trouble adding a publishing action to append to an existing BigQuery table?

I have a Dataflow flow that appends to an existing BigQuery table, which has been working for the last few weeks. When I run it now it gives me the error "Cannot run job. Please reload the page and try again." and won't even start the job.
After trying a lot of things, I made a copy of the flow; when the publishing action creates a new CSV file it works, but when I try to add a publishing action to an existing BigQuery table, it keeps giving me another bizarre error: "SyntaxError: Unexpected token C in JSON at position 0".
I have no idea what is happening, since everything used to work perfectly and I made no changes whatsoever.
I am not adding anything new, but I want to give you an answer.
It seems that one of your JSON inputs may be malformed. Try logging it to see what the problem is, and also try skipping malformed JSON strings.
As mentioned in past comments, I suggest checking your logs in Stackdriver to find out why you are getting the:
"Cannot run job. Please reload the page and try again."
Also, if you can retrieve more information about the error from those logs, it would be useful for assisting you further.
Besides the above, maybe we can solve this issue more easily by just checking the JSON format; a third-party JSON validator can show you where the error is in your JSON.

Couldn't find built-in property $PMTargetName#numAffectedRows

Hi, I am a novice to Informatica. I am trying to get the count of rows inserted into the target into a workflow variable, so that I can write a condition on a link to proceed to the next session.
I went through a few web guides and found $PMTargetName#numAppliedRows.
But I was not able to find this property under the built-in properties in the post-session on success variable assignment.
I even tried to assign this property to a variable at the mapping level, but I get an error like "Invalid symbol reference".
$PMTargetName#numAppliedRows is NOT listed among any variables, neither built-in nor anywhere else. Still, you can use it. Just remember to replace TargetName with the name of your target transformation, e.g. if you insert data into the AccountList table, you'd need to use the condition $PMAccountList#numAppliedRows. Hope this helps.
Alternatively, use this pre-defined workflow variable in your link condition:
Name: TgtSuccessRows
Description: Rows successfully loaded
Datatype: Integer
But the issue is that you cannot track the data loaded for multiple targets with it.
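For example, a link condition between the session and the next task could look like this (the session task name s_m_load_target is hypothetical):
$s_m_load_target.TgtSuccessRows > 0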

Profile attribute being magically set in Siebel

We have a very weird issue in our Siebel 7.8 application.
In the Application_Start event we define a bunch of profile attributes, which determine if the logged user will be allowed to perform certain operations or not. The code is something like this:
if (userHasSuperpowers) {
    TheApplication().SetProfileAttr("CanFly", "Y");
} else {
    // CanFly is not set, and GetProfileAttr("CanFly") returns ''
}
Everything works fine, except for one of these profile attributes. The conditions are not met, so we don't set its value. But when we check it using GetProfileAttr... it returns 'Y' instead of ''.
I've checked the code. A lot. I've put traces everywhere, and I'm 100% sure that when the last line of the Application_Start event executes, the attribute is still empty. However, in the first Applet_Load event after the login (in the HLS Salutation Applet (HLS Home) applet), its value has already changed to 'Y'. Why!!? I've looked everywhere, but I can't find anywhere else where we'd be doing a SetProfileAttr. So far, I've ruled out:
Every browser and server script for all our applets, application, BCs and business services.
All the runtime business services (the ones defined directly in the application instead of the SRF).
The Personalization Profile business component fields.
SmartScripts (not that they would matter in this particular scenario, I just mention them to acknowledge that you can set profile attributes there too).
Workflows: every step invoking the SIS OM PMT Service method Set Profile Attribute.
Siebel magically setting its value. The profile attribute name is custom made, in Spanish, and it contains our project name and a row_id. I really don't think Siebel is using the same name for its own profile attributes :).
But wait, there is more, I left the best part for last: the problem only happens in our development environment!
It's not an SRF issue: if we promote the same SRF to our testing or production environments, it works and returns the expected value.
It's not a data problem: still with the same SRF, I can use my local thick client, connecting to our development database with the same login and password, and it works fine too.
It's not a concurrency problem: we are testing with only one user logged in. And even if we had more, they wouldn't share sessions. And even if they did, the value wouldn't be always 'Y'.
It's not a temporary glitch, or something due to a wrong incremental compilation or a corrupted SRF: we have been experiencing this for at least 6 months (obviously, in that time frame, we've had dozens of different SRF files... all of them having the same problem, but only in development, and only if you use the server and not the dedicated client... seriously...).
Where else could I search for the profile attribute being set? I've read that they can be persisted to the DB, but in order to do so, you have to define them as a field in a BC based on an S_PARTY extension table, right?
Is there any way to trace profile attribute changes somehow? Maybe by raising some log level?
How can I find out at least what's being executed after the Application_Start, before loading the first applet?
Any other ideas? I tried checking the SQL spool file too, but didn't find anything suspicious there either (i.e., any of the queries we use to check the conditions being run twice with different parameters).
Update: following Ranjith R's suggestions, I've also checked:
Other vanilla business services which could be also invoked from a workflow to set a profile attr: User Registration > SetProfileAttr, SessionAccessService > SetProfileAttr and ISS Promotion Agreement Manager > SetProfileAttributes.
Runtime events setting profile attributes directly or using a business service (we don't have any runtime events apart from the vanilla ones).
Business services being called from DVMs (we only have vanilla data validation rules, and none of them apply to our buscomps).
Still no luck...
Ok... finally we found what's happening:
We access the URL to our server and get to the login page. This triggers a first Application_Start event, for the SADMIN user.
We set the profile attributes in that session. SADMIN is the Siebel administrator user, so yes, he hasSuperpowers and therefore we do TheApplication().SetProfileAttr("CanFly", "Y");.
The Application_Start event finishes.
We enter our username and password in the login screen to access into Siebel. This triggers a second Application_Start event, this time for our user. This is the one I was monitoring with the trace files.
We set the profile attributes again in the new session. Our user doesn't hasSuperpowers, so we don't set any value for the CanFly attribute.
The Application_Start event finishes, and CanFly is still empty.
Siebel merges both sessions into one before loading the first screen!! Or at least, it transfers over the profile attributes we had set for SADMIN.
I'm sure it happens that way, for two reasons. First, we changed the profile attribute name to include the username too. And second, instead of storing just a "Y", we are now storing the current timestamp:
var time = (new Date()).getTime();
TheApplication().SetProfileAttr("CanFly_" + TheApplication().LoginName(), time);
We end up having CanFly_SADMIN, but no CanFly_USER, and the time value stored is the same we see in the log file for step 2... which is smaller than any of the values for the *_USER attributes.
So that's what happening. I still don't know why Siebel behaves this way, but that would be matter for another question. According to the Siebel bookshelf:
The Start event is called when the client starts and again when the user interface is first displayed.
...but it doesn't say anything about it being called from two different sessions, with different users too, and then merging them together. It must be something misconfigured in our dev environment, considering it doesn't happen in the other ones.
Does Siebel 7.8 have runtime events? I can't recall. Runtime events have action sets, which can set/clear profile attributes.
There are still other vanilla business services which can set profile attributes; try searching in Tools (flat view) under business service methods for *rofile*tt*.
The SIS OM service can also be invoked from DVMs or from runtime events directly, so that's also a possibility.
There is no logging mechanism for seeing the values of profile attributes change; testing is the only way out.

Check for a live Data Source Name before proceeding

Would it be ok to get a CF app to check for a valid database before proceeding to process that request?
This is because there may be instances where the database server is down or being upgraded, hence an error occurs when a db-dependent request is made.
If there is no connection to the db server, the user can be safely redirected to a safe page.
Or can cfcatch work?
How can this check be done?
Thank you.
In your onRequestStart method of your Application.cfc file, or in an Application.cfm file, you can run a simple query to check that the database is available. Wrap the query in cftry/cfcatch. If the query fails, you can redirect the user in the cfcatch; if it succeeds, you can be reasonably sure that your database is "alive". A sketch follows below.
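A minimal sketch of such a check, assuming a datasource named myDSN and a maintenance page at offline.cfm (both names are placeholders; adjust the ping query for your database, e.g. add FROM dual on Oracle):
<cffunction name="onRequestStart" returntype="boolean">
    <cfargument name="targetPage" type="string" required="true">
    <cftry>
        <!--- lightweight ping to confirm the datasource responds --->
        <cfquery name="qPing" datasource="myDSN">
            SELECT 1 AS ok
        </cfquery>
        <cfcatch type="database">
            <!--- database unreachable: send the user to a safe page --->
            <cflocation url="offline.cfm" addtoken="false">
        </cfcatch>
    </cftry>
    <cfreturn true>
</cffunction>
In practice you would exclude the maintenance page itself from the check to avoid a redirect loop.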
I've used such a check in one project. The code may look as follows (not sure if it will work in versions of ColdFusion lower than 8); here it is shown as a small CFScript UDF (the function name is arbitrary):
function isDataSourceAlive(dsn) {
    // service factory object instance
    var factory = CreateObject("java","coldfusion.server.ServiceFactory");
    // the datasource service
    var dsService = factory.DatasourceService;
    // verify the dsn
    return dsService.verifyDataSource(arguments.dsn);
}
Oh, I even found a small note in the code I wrote on my old laptop a couple of years ago:
// [performance note] this server check takes 1-3ms at local PC (Kubuntu 7.10, CF8 + Apache2, Sempron 3500+, 1GB RAM)
While the time looks small, I have found that doing this check on each request is not really useful for my application. Anyway, I have a habit of using try/catch extensively for error handling. But if your datasources change frequently, it may make more sense.
Adding an extra query to every request to make sure that the database is up is a patently bad idea. A better approach would be to build a "maintenance mode" switch into your application, that you would manually enable when you are doing planned maintenance (upgrades, etc).
If you want to have a "friendly" page displayed when an error (like a database issue) occurs, then use the onError() method in Application.cfc and/or the <cferror .../> tag in Application.cfm as a global error handler; a sketch follows below.
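A minimal sketch of such a global handler in Application.cfc (the error.cfm template name is a placeholder):
<cffunction name="onError" returntype="void">
    <cfargument name="exception" required="true">
    <cfargument name="eventName" type="string" required="true">
    <!--- show a friendly page instead of the raw error --->
    <cfinclude template="error.cfm">
</cffunction>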
If you are worried the db could vanish, I would implement a "SELECT 1 AS A" query in your onRequestStart handler that runs only every N minutes. This can be accomplished using the query caching feature. I'd start by performing the query every 30 minutes; see the sketch below.
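A minimal sketch of the cached check, assuming a datasource named myDSN (the name and the 30-minute window are placeholders):
<cfquery name="qDbPing" datasource="myDSN" cachedwithin="#createTimeSpan(0, 0, 30, 0)#">
    SELECT 1 AS A
</cfquery>
Because of cachedwithin, the query only actually hits the database once the cached result has expired; wrap it in cftry/cfcatch as shown earlier if you want to redirect when that periodic check fails.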