Getting an error while running an EIM job - Siebel
My EIM job errors out when I run it. Below is my IFB file:
"[Siebel Interface Manager]
USER NAME = 'SADMIN'
PASSWORD = 'SADMIN'
PROCESS = "PROCESS UPDATE"
[PROCESS UPDATE]
TYPE = IMPORT
BATCH = 30032012 - 30032015
TABLE = EIM_FN_ASSET5
INSERT ROWS = S_ASSET_CON, FALSE
UPDATE ROWS = S_ASSET_CON, TRUE
ONLY BASE TABLES = S_ASSET_CON
ONLY BASE COLUMNS = S_ASSET_CON.ATTRIB_37,S_ASSET_CON.ATTRIB_38,S_ASSET_CON.ATTRIB_50,S_ASSET_CON.ASSET_ID,S_ASSET_CON.CONTACT_ID,\
S_ASSET_CON.RELATION_TYPE_CD
In the application, it shows this error:
"SBL-EIM-00426: All batches in run failed."
I have placed the IFB file in the admin folder itself, and below is the log file:
"2021 2012-04-03 05:35:25 2012-04-03 05:35:25 -0500 00000002 001 003f 0001 09 srvrmgr 16187618 1 /004fs02/siebel/siebsrvr/log/srvrmgr.log 8.1.1.4 [21225] ENU
SisnapiLayerLog Error 1 0000000c4f7a00e2:0 2012-04-03 05:35:25 258: [SISNAPI] Async Thread: connection (0x204ec5b0), error (1180682) while reading message"
Kindly help.
Async Thread: connection (0x204ec5b0), error (1180682) while reading message
This happens when an object manager loses its connection to the gateway. There can be many reasons for this: the gateway was restarted without bouncing the app server, network issues, etc.
But this is an error in your Server Manager session, not in the EIM session (batch component). For each EIM job that you start via Server Manager you should see a corresponding EIM task. The best approach is to look for the error in the EIMxxxx.log file. You can also debug your EIM task by setting event log levels:
change evtloglvl %=3 for comp EIM
(set detailed logging)
start task ......
(run your EIM job)
list active tasks for comp EIM
(you should see the job running)
list tasks for comp EIM
(or see the full list of jobs)
change evtloglvl %=1 for comp EIM
(use this line to set the log levels back to "normal")
This will give you detailed information on what the EIM component is doing. Note: use a small batch, or your log will be too big to manage.
If you see connection errors and recently lost your DB connection, it is best to completely restart the Siebel servers and the gateway, in the correct order.
Have you tried re-running the EIM job?
If the scenario continues even after the second run, check the batch numbers given in the IFB file against the batch numbers in the input data file for the EIM component; from the error, it seems the EIM component is not able to fetch the data.
SBL-SVR-01042 is a generic error encountered while attempting to instantiate a new instance of a given component. To find out why it occurred, review the accompanying error messages, which provide context and more detailed information.
You can ignore the SisnapiLayerLog error. It is generic and has no significance here.
You should concentrate on SBL-EIM-00426. Before running the task, check whether there are any records in your EIM table; this error occurs when there are zero records in the interface table for the batches you specified. You should also increase the log level and try to trace the error. There is also a fix released by Oracle; refer to Oracle Support for the same:
https://support.oracle.com/epmos/faces/BugDisplay?parent=DOCUMENT&sourceId=498041.1&id=10469733
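As a quick way to verify this, you can count the rows per batch in the interface table before starting the task. Below is a minimal JDBC sketch, assuming an Oracle back end and the standard IF_ROW_BATCH_NUM batch column; the connection URL, credentials, and schema are placeholders for your environment, and the JDBC driver must be on the classpath.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class EimBatchCheck {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details - point these at your Siebel database.
        try (Connection con = DriverManager.getConnection(
                "jdbc:oracle:thin:@dbhost:1521:SIEBDB", "SIEBEL", "password");
             PreparedStatement ps = con.prepareStatement(
                     "SELECT IF_ROW_BATCH_NUM, COUNT(*) FROM SIEBEL.EIM_FN_ASSET5 "
                             + "WHERE IF_ROW_BATCH_NUM BETWEEN ? AND ? "
                             + "GROUP BY IF_ROW_BATCH_NUM")) {
            ps.setLong(1, 30032012L); // batch range from the IFB file
            ps.setLong(2, 30032015L);
            try (ResultSet rs = ps.executeQuery()) {
                boolean found = false;
                while (rs.next()) {
                    found = true;
                    System.out.println("Batch " + rs.getLong(1) + ": " + rs.getLong(2) + " rows");
                }
                if (!found) {
                    // Zero rows in the whole range would explain SBL-EIM-00426.
                    System.out.println("No rows found for batches 30032012 - 30032015.");
                }
            }
        }
    }
}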
I edited the IFB file a little, and it worked for me.
Can you please try the code below and let me know?
[Siebel Interface Manager]
USER NAME = 'SADMIN'
PASSWORD = 'SADMIN'
PROCESS = "PROCESS UPDATE"
[PROCESS UPDATE]
TYPE = SHELL
INCLUDE = "Update Records"
[Update Records]
TYPE = IMPORT
BATCH = 30032012 - 30032015
TABLE = EIM_FN_ASSET5
INSERT ROWS = S_ASSET_CON, FALSE
UPDATE ROWS = S_ASSET_CON, TRUE
ONLY BASE TABLES = S_ASSET_CON
ONLY BASE COLUMNS = S_ASSET_CON.ATTRIB_37 \
,S_ASSET_CON.ATTRIB_38 \
,S_ASSET_CON.ATTRIB_50 \
,S_ASSET_CON.ASSET_ID \
,S_ASSET_CON.CONTACT_ID \
,S_ASSET_CON.RELATION_TYPE_CD
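Once the file is saved in the admin folder, the job can then be started from srvrmgr with a command along these lines (the IFB file name here is illustrative):

run task for comp EIM with Config=eim_asset_upd.ifb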
Hope this helps!
Related
Stage-level data is not coming for running BigQuery jobs through the Java BigQuery libraries
I am using the com.google.cloud.bigquery library for fetching job-level details. We have the following code snippets:

Job job = getBigQuery(projectId, location).getJob(JobId.newBuilder().setJob("myJobId").setLocation(location).setProject(projectId).build());

private BigQuery getBigQuery(String projectId, String location) throws IOException {
    // path to your credentials file
    String credentialsPath = "my private key credentials file";
    BigQuery bigQuery = BigQueryOptions.newBuilder()
            .setProjectId(projectId)
            .setLocation(location)
            .setCredentials(GoogleCredentials.fromStream(new FileInputStream(credentialsPath)))
            .build()
            .getService();
    return bigQuery;
}

My dependency:

<dependency>
    <groupId>com.google.cloud</groupId>
    <artifactId>google-cloud-bigquery</artifactId>
    <version>2.10.0</version>
</dependency>

For completed jobs I have no issue, but for jobs which are in a running state (with a duration of more than one minute) we are getting incomplete query plan data, which ultimately gives a NullPointerException. For such a running job, the jobStatistics part carries a warning that it will throw java.lang.NullPointerException. The main issue is that in our processing, when we check the queryPlan field, it is not null and it reports a size; but when I try to process it in any loop, iterator, or stream, it throws the NullPointerException. When I fetch the data for the same running job using the API directly, it gives complete details. So the question is: why does BigQuery give different results for the Java library and the API, and why is the data incomplete on the Java library side (I have tried updating the dependency version as well)? What is the solution; how can I prevent my code from running into the NullPointerException? Ultimately the library is also using the same API, but somehow in its internal processing the query plan data is not getting generated properly while the job is in a running state.
I was able to test the behaviour of the code as well as of the API. While the query is running, most of the API response fields under queryPlan are 0 and therefore not complete. Only when the query has completed its execution does the queryPlan field show the complete information. Also, as per the client library documentation, the queryPlan is available only once the query has completed its execution. So the NullPointerException is the expected behaviour while the query is still running (I tested this as well). To prevent the NullPointerException, you should access the queryPlan only when the state of the query is DONE.
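As a minimal sketch with the same Java client, reusing a bigQuery instance built as in the question: waitFor() blocks until the job completes, so the plan is only read once the state is DONE. Alternatively, you can poll job.reload() and test getStatus().getState() == JobStatus.State.DONE yourself without blocking.

import com.google.cloud.bigquery.BigQuery;
import com.google.cloud.bigquery.Job;
import com.google.cloud.bigquery.JobId;
import com.google.cloud.bigquery.JobStatistics.QueryStatistics;
import com.google.cloud.bigquery.QueryStage;

static void printQueryPlanWhenDone(BigQuery bigQuery, String projectId, String location)
        throws InterruptedException {
    Job job = bigQuery.getJob(JobId.newBuilder()
            .setJob("myJobId").setLocation(location).setProject(projectId).build());
    // waitFor() polls until the job reaches DONE and returns the completed job,
    // or null if the job no longer exists.
    Job completed = job.waitFor();
    if (completed != null && completed.getStatus().getError() == null) {
        QueryStatistics stats = completed.getStatistics();
        // Safe to iterate now: the plan is only fully populated once the job is DONE.
        for (QueryStage stage : stats.getQueryPlan()) {
            System.out.println(stage.getName() + ": " + stage.getRecordsRead() + " records read");
        }
    }
}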
I am getting a timeout error after loading table data using Power Query Editor. How can I proceed further?
I am getting the below error when selecting the "Close and Apply" option in Power Query Editor:

We timed out waiting for page 'https://www.imf.org/en/Publications/WEO/weo-database/2020/April/weo-report?c=512,914,612,614,311,213,911,314,193,122,912,313,419,513,316,913,124,339,638,514,218,963,616,223,516,918,748,618,624,522,622,156,626,628,228,924,233,632,636,634,238,662,960,423,935,128,611,321,243,248,469,253,642,643,939,734,644,819,172,132,646,648,915,134,652,174,328,258,656,654,336,263,268,532,944,176,534,536,429,433,178,436,136,343,158,439,916,664,826,542,967,443,917,544,941,446,666,668,672,946,137,546,674,676,548,556,678,181,867,682,684,273,868,921,948,943,686,688,518,728,836,558,138,196,278,692,694,962,142,449,564,565,283,853,288,293,566,964,182,359,453,968,922,714,862,135,716,456,722,942,718,724,576,936,961,813,726,199,733,184,524,361,362,364,732,366,144,146,463,528,923,738,578,537,742,866,369,744,186,925,869,746,926,466,112,111,298,927,846,299,582,474,754,698,&s=PPPGDP,&sy=2014&ey=2021&ssm=0&scsm=1&scc=0&ssd=1&ssc=0&sic=0&sort=country&ds=.&br=1'.

I modified the code in the "Advanced Editor" tab by adding a timeout argument:

Web.BrowserContents("https://microsoft.com", [WaitFor = [Timeout = #duration(0,0,10,0)]])

But I am still unable to load the table data. Please help resolve this.
Try increasing the Timeout parameter from 10 minutes to something larger, for example:

Web.BrowserContents("https://microsoft.com", [WaitFor = [Timeout = #duration(0,0,30,0)]])

(#duration takes days, hours, minutes, seconds, so this waits up to 30 minutes.)
Data Refinery Job failed with SCAPIException CDICO2060E
I'm building my first project in Watson Studio, and a Data Refinery job fails with the following error:

ERROR: Failed to execute the flow. Error: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 2.0 failed 1 times, most recent failure: Lost task 0.0 in stage 2.0 (TID 2, localhost, executor driver): com.ibm.connect.api.SCAPIException: CDICO2060E: The metadata for the select statement could not be retrieved Sql syntax error: THE DATA TYPE, LENGTH, OR VALUE OF ARGUMENT 1 OF RID IS INVALID. SQLCODE=-171

The SQL it's executing contains this:

FROM \"SCHEMA\".\"VIEW_NAME_A\" WHERE MOD(COALESCE(RID(\"SCHEMA\".\"VIEW_NAME_A\"), 0), 3) = 0

The job was built from a DB2 for z/OS connection --> connected data object --> Data Refinery flow; once the flow looked good, it was saved and a job was created, which then failed on execution. SCHEMA.VIEW_NAME_A is a view built from a complex query joining two or more tables. I have another Data Refinery flow for a simpler view, whose job (created the same way) works successfully; the query for that view uses only one table. I don't quite understand why Watson Studio built the query for the job run with this WHERE clause, and I can't find anything about it. Does someone have an idea how to fix or work around this issue?
Watson Studio extracts the source data using multiple queries that partition the data, and that WHERE clause came from its partitioning algorithm. Apparently its partitioning strategy for z/OS does not work properly when the source is a complex view. I apologize for the inconvenience and cannot think of a suitable workaround. We will fix the issue as soon as possible.
JCO PoolManager: How to confirm that a JCO pool was created by looking at the JCO traces
I am trying to analyze a problem. In the JCO trace file I can see that the JCO pool is added as follows:

SAPEngine_Application_Thread[impl:3]_12 [14:44:41:772]: [JAV-LAYER] JCO.PoolManager.addPool: name = pool name, connection = connection, pool_size = 20, max_wait_time = 30000, pooled_connection_timeout = 600000, timeout_check_period = 60000

but after this, when I try to get a connection from this pool, I get the following error:

SAPEngine_Application_Thread[impl:3]_12 [14:45:00:942]: [JAV-LAYER] JCO.PoolManager.getClient(poolName, true) Error: application tries to get client from removed or non existent pool.

This error occurs just after my XI system is restarted; the rest of the time it works as expected. The XI system has two stacks, Java AS and ABAP AS. I went through the JRFC logs and defaultTrace.log files, but found no clues there. Thanks,
I don't know very much about XI, but from your trace it looks like the added pool is actually named "pool name", while from the error message it seems that your application/XI tries to get a connection from a pool named "poolName". Maybe you should check your configuration?
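For illustration, in the JCo 2.x client API the key passed to getClient must exactly match the key the pool was registered under. A small hedged sketch with placeholder logon properties:

import java.util.Properties;
import com.sap.mw.jco.JCO;

public class PoolNameCheck {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Placeholder logon properties - use your real values.
        props.setProperty("jco.client.client", "100");
        props.setProperty("jco.client.user", "user");
        props.setProperty("jco.client.passwd", "secret");
        props.setProperty("jco.client.ashost", "apphost");
        props.setProperty("jco.client.sysnr", "00");

        // The pool is registered under this exact key...
        JCO.addClientPool("pool name", 20, props);

        // ...so the same key must be used to borrow a connection.
        // JCO.getClient("poolName") would fail with "removed or non existent pool".
        JCO.Client client = JCO.getClient("pool name");
        try {
            // use the connection ...
        } finally {
            JCO.releaseClient(client);
        }
    }
}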
Celery: storing unrecoverable task failures for later resubmission
I'm using the djkombu transport for my local development, but I will probably be using amqp (RabbitMQ) in production. I'd like to be able to iterate over failures of a particular type and resubmit them. This would be in the case of something failing on a server, or some edge-case bug triggered by a new variation in the data. So I could be resubmitting jobs up to 12 hours later, after a bug is fixed or a third-party site is back up. My question is: is there a way to access old failed jobs via the result backend and simply resubmit them with the same params, etc.?
You can probably access old jobs using:

CELERY_RESULT_BACKEND = "database"

and in your code:

from djcelery.models import TaskMeta
task = TaskMeta.objects.filter(task_id='af3185c9-4174-4bca-0101-860ce6621234')[0]

but I'm not sure you can find the arguments that the task was started with... maybe something with TaskState; I've never used it this way. But you might want to consider the task.retry feature. An example from the Celery docs:

@task()
def task(*args):
    try:
        some_work()
    except SomeException as exc:
        # Retry in 24 hours.
        raise task.retry(args=args, countdown=60 * 60 * 24, exc=exc)
From IRC:

<asksol> dpn`: task args and kwargs are not stored with the result
<asksol> dpn`: but you can create your own model and store it there (for example using the task_sent signal)
<asksol> we don't store anything when the task is sent, only send a message. but it's very easy to do yourself

This was what I was expecting, but hoped to avoid. At least I have an answer now :)