I am trying to understand when to use PrevTaskStatus vs TaskStatus when defining link conditions in Informatica. My understanding is that PrevTaskStatus evaluates the status of the last task executed by the Integration Service, ignoring disabled tasks.
But if that is the case, does PrevTaskStatus return the same value for ALL session objects?
i.e. don't $session1.PrevTaskStatus and $session20.PrevTaskStatus both equal SUCCEEDED if session10 has just completed successfully?
Or does $session20.PrevTaskStatus somehow refer to the last task executed PRIOR to $session20, so that $session30.PrevTaskStatus could refer to a different session?
If I have:
session1 --> session2 --> session3 --> session4
and session3 is disabled,
and I want to put a link condition on the link from session3 to session4,
which of these do I use:
$Start.PrevTaskStatus = SUCCEEDED
$session3.PrevTaskStatus = SUCCEEDED
$session4.PrevTaskStatus = SUCCEEDED
Thanks
Related
I have a command task that calls a batch file which returns 1 if File.Ok does not exist in a particular location and 0 if it does. Following this command task I have 2 links:
link 1: $commandtask.Status = SUCCEEDED
link 2: $commandtask.Status = FAILED
After each of these links there are several sessions and other tasks.
PROBLEM: Whenever File.Ok is not found, link 2 is followed and the tasks/sessions of that branch execute (as desired and expected), but after all remaining items have executed, the workflow is marked failed.
Note: I have not checked the 'Fail parent if this task fails' property anywhere.
You might have checked "Fail parent if this task does not run" on some task. If a task with this property checked does not run, it fails the workflow. In your scenario, the tasks on the branch that is not taken never run, so any of them with that property set will fail the workflow even though the taken branch succeeded.
Is there a way I can have a task require the completion of multiple upstream tasks which are still able to finish independently?
download_fcr --> process_fcr --> load_fcr
download_survey --> process_survey --> load_survey
create_dashboard should require load_fcr and load_survey to successfully complete.
I do not want to force anything in the 'survey' task chain to require anything from the 'fcr' task chain to complete. I want them to process in parallel and still complete even if one fails. However, the dashboard task requires both to finish loading to the database before it should start.
fcr    *-->*-->*
                \
                 ---> create_dashboard
                /
survey *-->*-->*
You can pass a list of tasks to set_upstream or set_downstream. In your case, if you specifically want to use set_upstream, you could describe your dependencies as:
create_dashboard.set_upstream([load_fcr, load_survey])
load_fcr.set_upstream(process_fcr)
process_fcr.set_upstream(download_fcr)
load_survey.set_upstream(process_survey)
process_survey.set_upstream(download_survey)
Have a look at Airflow's source code: even when you pass just one task object to set_upstream, it actually wraps it in a list before doing anything. Equivalently, with set_downstream:
download_fcr.set_downstream(process_fcr)
process_fcr.set_downstream(load_fcr)
download_survey.set_downstream(process_survey)
process_survey.set_downstream(load_survey)
load_survey.set_downstream(create_dashboard)
load_fcr.set_downstream(create_dashboard)
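For what it's worth, Airflow (1.8+) also supports the bitshift composition operators, which express the same graph more compactly. A minimal runnable sketch, using DummyOperator placeholders and a hypothetical DAG id of dashboard_example:

from datetime import datetime

from airflow import DAG
from airflow.operators.dummy_operator import DummyOperator

with DAG("dashboard_example", start_date=datetime(2017, 1, 1),
         schedule_interval=None) as dag:
    names = ["download_fcr", "process_fcr", "load_fcr",
             "download_survey", "process_survey", "load_survey",
             "create_dashboard"]
    t = {name: DummyOperator(task_id=name) for name in names}

    # The two chains run in parallel, independently of each other.
    t["download_fcr"] >> t["process_fcr"] >> t["load_fcr"]
    t["download_survey"] >> t["process_survey"] >> t["load_survey"]

    # create_dashboard starts only after BOTH loads succeed
    # (the default trigger_rule is "all_success").
    [t["load_fcr"], t["load_survey"]] >> t["create_dashboard"]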
I am having some concurrency issues in CFWheels.
I have some code in events/onrequeststart.cfm that executes every time a user requests something.
Test case:
User A - request time: 10sec
User B - request time: 2sec
If user B issues a request while user A's request is still running, user B's settings bleed into user A's request, and user A gets results based on user B's request.
I tried using cflock on onrequeststart.cfm but it doesn't seem to work. I don't have much experience with CFWheels, so I may be trying to do something that is logically wrong.
This is the part of the code that gets confused:
<cfquery name="currentUser" datasource="#application.ds#">
select * from clientadmin where clientAdminid ='#session.clientadminid#'
</cfquery>
<cfquery name="currentClient" datasource="#application.ds#">
select * from clientBrands where clientbrandID ='#currentUser.ClientBrandID#'
</cfquery>
<cfset application.clientAdminSurveys = application.generalFunctions.clientSurveys(clientAdminID=session.clientAdminID, clientBrandID = currentUser.clientBrandID)>
<cfset application.AssociatedDoctors = application.generalFunctions.AssociatedDoctors(clientAdminID=session.clientAdminID, clientBrandID = currentUser.clientBrandID)>
So I guess my question is: how do I prevent this from happening?
1) The application scope is application-wide (shared by all users of the site) - you shouldn't ever store per-user settings there, because, as you've discovered, user B overwrites user A. Use the session scope for per-user data. In your last two lines you're setting application-scope variables from session-scope data! (See the sketch after these points.)
2) As a side note, in Wheels you can use application.wheels.dataSourceName to get the datasource name
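A minimal sketch of the per-user version, assuming the same tables and functions as in the question (the cf_sql_integer type for clientAdminID is a guess; adjust to your column type):

<cfquery name="currentUser" datasource="#application.wheels.dataSourceName#">
    <!--- cfqueryparam guards against injection from the session value --->
    select * from clientadmin
    where clientAdminID = <cfqueryparam value="#session.clientAdminID#" cfsqltype="cf_sql_integer">
</cfquery>
<!--- Per-user results belong in the session scope, not the application scope --->
<cfset session.clientAdminSurveys = application.generalFunctions.clientSurveys(clientAdminID=session.clientAdminID, clientBrandID=currentUser.clientBrandID)>
<cfset session.AssociatedDoctors = application.generalFunctions.AssociatedDoctors(clientAdminID=session.clientAdminID, clientBrandID=currentUser.clientBrandID)>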
I would put that code into a function inside the controller (Controller.cfc) and run it using a filter.
see: http://cfwheels.org/docs/1-1/chapter/filters
This has worked for me without issues for similar tasks.
Also, I would drop any reference to the application scope, because that's likely where items are getting mixed up. The correct place to put these functions is events/functions.cfm.
of course this is without seeing more of your code...
As mentioned by neokoenig, you are using a shared scope to store user-specific data; you should store it in the session scope. If you genuinely need the data in the application scope, set it inside a lock, but it looks like this should run once in onSessionStart rather than on every request. If it must run on every request, keep it in onRequestStart but write to the user-specific session scope, not the global application scope.
Just remember:
Application variables show the same data for all users. If user 1 sets application.foo = 1 and user 2 then sets application.foo = 2, user 1 will see user 2's value of 2 when reading application.foo. The session scope does not have this issue: if user 1 sets session.foo = 1 and user 2 sets session.foo = 2, each user reads back only the value they set (user 1 outputs session.foo and sees 1).
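If you ever do need to write genuinely application-wide data, a hedged sketch of the locked write mentioned above (loadSharedLookups is a hypothetical initializer, not from the question):

<!--- Exclusive lock so concurrent requests cannot interleave the write --->
<cflock scope="application" type="exclusive" timeout="10">
    <cfset application.sharedLookupData = application.generalFunctions.loadSharedLookups()>
</cflock>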
Scenario:
User clicks a command on the workflow
Workflow custom action carries out a number of checks
Workflow custom action executes another command on the same workflow depending on the results
The code I have so far is:
Database db = Factory.GetDatabase("master");

if (Request.QueryString["_id"] != null)
{
    var itm = db.GetItem(new ID(Request.QueryString["_id"]));

    // wf was obtained earlier; presumably something along the lines of:
    // IWorkflow wf = db.WorkflowProvider.GetWorkflow(itm);
    WorkflowCommand[] availableCommands = wf.GetCommands(itm.Fields["__Workflow state"].Value);

    // Execute the workflow step.
    wf.Execute(Request.QueryString["command"], itm, "Testing working flow new screens", false, new object[] { });
}
However, I get an 'Object reference not set to an instance of an object' error on the wf.Execute line - with no meaningful stack trace or anything :(
I've put in the wf.GetCommands line just to check that things are actually where I expect them, and availableCommands is populated with a nice list of commands that exist.
I've checked that the command ID is valid and exists.
itm is not null, and is the content item the workflow is associated with (the item I want the workflow to run in context with).
I've checked that the user context etc is valid, and there are no permission issues.
The only difference is that I am running this code within an .aspx page executing inside Sitecore - however, I wouldn't have expected this to cause a problem unless there is a context item that isn't being set properly.
Workflow needs to be run within a SiteContext that has a ContentDatabase and workflow enabled. The easiest way to do this within your site is to use a SiteContextSwitcher to change to the "shell" site.
using (new SiteContextSwitcher(SiteContextFactory.GetSiteContext("shell")))
{
wf.Execute(Request.QueryString["command"], itm, "Testing working flow new screens", false, new object[] { }); // Execute the workflow step.
}
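(The "shell" site works here because, in the default configuration, its site definition points at a content database and sets enableWorkflow="true", which a public website definition typically does not.)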
An example of this can be found within the code for the WeBlog Sitecore module.
http://svn.sitecore.net/WeBlog/Trunk/Website/Pipelines/CreateComment/WorkflowSubmit.cs
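Putting the pieces together, a hedged sketch of the whole handler; the page class name, the WorkflowProvider lookup, and the null checks are assumptions based on the standard Sitecore API rather than code from the question:

using Sitecore.Configuration;
using Sitecore.Data;
using Sitecore.Data.Items;
using Sitecore.Sites;
using Sitecore.Workflows;

public partial class WorkflowCommandPage : System.Web.UI.Page
{
    protected void Page_Load(object sender, System.EventArgs e)
    {
        Database db = Factory.GetDatabase("master");
        string id = Request.QueryString["_id"];
        string command = Request.QueryString["command"];

        if (string.IsNullOrEmpty(id) || string.IsNullOrEmpty(command))
            return;

        // new ID(id) throws on a malformed GUID, so validate upstream if needed.
        Item itm = db.GetItem(new ID(id));
        if (itm == null)
            return;

        IWorkflow wf = db.WorkflowProvider.GetWorkflow(itm);
        if (wf == null)
            return;

        // Switch to the "shell" site so a ContentDatabase is present and
        // workflow is enabled for the duration of the call.
        using (new SiteContextSwitcher(SiteContextFactory.GetSiteContext("shell")))
        {
            wf.Execute(command, itm, "Testing working flow new screens", false, new object[] { });
        }
    }
}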
My EIM job is erroring out when I run it. Below is my IFB file:
"[Siebel Interface Manager]
USER NAME = 'SADMIN'
PASSWORD = 'SADMIN'
PROCESS = "PROCESS UPDATE"
[PROCESS UPDATE]
TYPE = IMPORT
BATCH = 30032012 - 30032015
TABLE = EIM_FN_ASSET5
INSERT ROWS = S_ASSET_CON, FALSE
UPDATE ROWS = S_ASSET_CON, TRUE
ONLY BASE TABLES = S_ASSET_CON
ONLY BASE COLUMNS = S_ASSET_CON.ATTRIB_37,S_ASSET_CON.ATTRIB_38,S_ASSET_CON.ATTRIB_50,S_ASSET_CON.ASSET_ID,S_ASSET_CON.CONTACT_ID,\
S_ASSET_CON.RELATION_TYPE_CD"
In the application, it shows this error:
"SBL-EIM-00426: All batches in run failed."
I have placed the IFB file in the admin folder itself, and below is the log file:
"2021 2012-04-03 05:35:25 2012-04-03 05:35:25 -0500 00000002 001 003f 0001 09 srvrmgr 16187618 1 /004fs02/siebel/siebsrvr/log/srvrmgr.log 8.1.1.4 [21225] ENU
SisnapiLayerLog Error 1 0000000c4f7a00e2:0 2012-04-03 05:35:25 258: [SISNAPI] Async Thread: connection (0x204ec5b0), error (1180682) while reading message"
Kindly help.
Async Thread: connection (0x204ec5b0), error (1180682) while reading message
This happens when an object manager loses its connection to the gateway. There can be many reasons for this: a gateway restart without bouncing the app server, network issues, etc.
But this is the error in your Server Manager session, not in the EIM session (the batch component). For each EIM job that you start via Server Manager you should see a corresponding EIM task. The best approach is to look at the error in the EIMxxxx.log file. You can also debug your EIM task by adjusting the event log levels:
change evtloglvl %=3 for comp EIM     (set detailed logging)
start task ......                     (run your EIM job)
list active tasks for comp EIM        (you should see the job running)
list tasks for comp EIM               (or see the list of jobs)
change evtloglvl %=1 for comp EIM     (set the log levels back to "normal")
This will give you some detailed info on what the EIM component is doing. Note: Make use of a small batch or your log will be too big to manage.
If you have connection errors and you recently lost your DB connection, the best option is to completely restart the Siebel servers and the gateway, in the correct order.
Have you tried re-running the EIM job?
If the scenario continues even after the second run, check that the batch numbers given in the IFB file match the batch numbers in the input data for the EIM component; from the error, it seems the EIM component is not able to fetch the data.
SBL-SVR-01042 is a generic error encountered while attempting to instantiate a new instance of a given component. As to why the error occurred, one needs to review the accompanying error messages, which provide context and more detailed information.
You can ignore the SisnapiLayerLog error. It is a generic error and does not have any significance here.
You should concentrate on SBL-EIM-00426. Before running the task, check whether there are any records in your EIM table; this error occurs when the interface table has zero records for the batch (see the query sketch below). Increase the log level to high and try to trace the error. There is also a fix released by Oracle; refer to Oracle Support for it:
https://support.oracle.com/epmos/faces/BugDisplay?parent=DOCUMENT&sourceId=498041.1&id=10469733
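A quick way to run that zero-record check, as a hedged sketch assuming the standard EIM batch column IF_ROW_BATCH_NUM on EIM_FN_ASSET5:

-- Count the rows staged for the batch range used in the IFB file
SELECT IF_ROW_BATCH_NUM, COUNT(*)
FROM EIM_FN_ASSET5
WHERE IF_ROW_BATCH_NUM BETWEEN 30032012 AND 30032015
GROUP BY IF_ROW_BATCH_NUM;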
I edited the IFB file a little and it worked for me.
Can you please try the code below and let me know?
[Siebel Interface Manager]
USER NAME = 'SADMIN'
PASSWORD = 'SADMIN'
PROCESS = "PROCESS UPDATE"

[PROCESS UPDATE]
TYPE = SHELL
INCLUDE = "Update Records"

[Update Records]
TYPE = IMPORT
BATCH = 30032012 - 30032015
TABLE = EIM_FN_ASSET5
INSERT ROWS = S_ASSET_CON, FALSE
UPDATE ROWS = S_ASSET_CON, TRUE
ONLY BASE TABLES = S_ASSET_CON
ONLY BASE COLUMNS = S_ASSET_CON.ATTRIB_37 \
,S_ASSET_CON.ATTRIB_38 \
,S_ASSET_CON.ATTRIB_50 \
,S_ASSET_CON.ASSET_ID \
,S_ASSET_CON.CONTACT_ID \
,S_ASSET_CON.RELATION_TYPE_CD
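For what it's worth, the visible differences from your original file are that the import now runs as a named sub-process ("Update Records") INCLUDEd from a SHELL process, and the ONLY BASE COLUMNS list is split across continuation lines that each end with a backslash.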
Hope this helps!