Data Refinery Job failed with SCAPIException CDICO2060E

I'm building my first project in Watson Studio and a Data Refinery Job fails with the following error:
ERROR: Failed to execute the flow. Error: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 2.0 failed 1 times, most recent failure: Lost task 0.0 in stage 2.0 (TID 2, localhost, executor driver): com.ibm.connect.api.SCAPIException: CDICO2060E: The metadata for the select statement could not be retrieved Sql syntax error: THE DATA TYPE, LENGTH, OR VALUE OF ARGUMENT 1 OF RID IS INVALID. SQLCODE=-171
The SQL it's executing contains this: FROM "SCHEMA"."VIEW_NAME_A" WHERE MOD(COALESCE(RID("SCHEMA"."VIEW_NAME_A"), 0), 3) = 0
The job was built from a DB2 for z/OS connection --> Connected Data object --> Data Refinery flow; once the flow looked good, it was saved and a job was created, which then failed on execution. SCHEMA.VIEW_NAME_A is a view built from a complex query joining two or more tables together.
I have another Data Refinery flow for a simpler view, whose job (created the same way) runs successfully. The query for that view involves only one table.
I don't quite understand why Watson Studio built the query for the job run with this WHERE clause, and I can't find anything documented about it.
Does anyone have an idea how to fix or work around this issue?

Watson Studio extracts the source data using multiple queries that partition the data, and that WHERE clause came from its partitioning algorithm: rows are evidently split into three partitions by MOD(COALESCE(RID(...), 0), 3) = 0, 1, and 2, each fetched by a separate query. Apparently this partitioning strategy for DB2 for z/OS does not work properly when the source is a complex view, where RID() cannot be applied (hence SQLCODE=-171). I apologize for the inconvenience and cannot think of a suitable workaround. We will fix the issue as soon as possible.

Related

Stage-level data is not coming for BigQuery running jobs through the Java BigQuery libraries

I am using the com.google.cloud.bigquery library to fetch job-level details. We have the following code snippet:
Job job = getBigQuery(projectId, location)
    .getJob(JobId.newBuilder().setJob("myJobId").setLocation(location).setProject(projectId).build());

private BigQuery getBigQuery(String projectId, String location) throws IOException {
    // path to your credentials file
    String credentialsPath = "my private key credentials file";
    BigQuery bigQuery = BigQueryOptions.newBuilder()
        .setProjectId(projectId)
        .setLocation(location)
        .setCredentials(GoogleCredentials.fromStream(new FileInputStream(credentialsPath)))
        .build()
        .getService();
    return bigQuery;
}
My dependency:
<dependency>
    <groupId>com.google.cloud</groupId>
    <artifactId>google-cloud-bigquery</artifactId>
    <version>2.10.0</version>
</dependency>
For completed jobs I have no issue, but for jobs that are still in a running state (for example, with a duration of more than a minute) we get incomplete query plan data, which ultimately leads to a NullPointerException.
Inspecting such a job, the jobStatistics part shows a warning that it will throw java.lang.NullPointerException.
The main issue is that, in our processing, when we check the queryPlan field it is not null and reports a non-zero size, but as soon as I try to process it in a loop, iterator, or stream, it throws the NullPointerException.
When I fetch the data for the same running job using the API directly, it gives complete details.
Ultimately the question is why BigQuery gives different results through the Java library and the API, and why there is incompleteness on the Java library side (I have also tried updating the dependency version). What is the solution for me, and how can I prevent my code from running into the NullPointerException?
The library ultimately uses the same API, but somehow in its internal processing the query plan data is not generated properly while the job is in the running state.
I was able to test the behaviour of the code as well as of the API. While the query is running, most of the API response fields under queryPlan are 0 and therefore not complete. Only when the query has completed its execution does the queryPlan field show the complete information.
Also, as per the client library documentation, the queryPlan is available only once the query has completed its execution. So the NullPointerException is the expected behaviour when the query is still running (I tested this as well).
To prevent the NullPointerException, you might have to access the queryPlan when the state of the query is DONE.
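As a rough sketch (reusing the getBigQuery() helper and variables from your question; "myJobId" is a placeholder), you could read the plan only after the job reports DONE:
import com.google.cloud.bigquery.BigQuery;
import com.google.cloud.bigquery.Job;
import com.google.cloud.bigquery.JobId;
import com.google.cloud.bigquery.JobStatistics;
import com.google.cloud.bigquery.JobStatus;
import com.google.cloud.bigquery.QueryStage;
import java.util.List;

BigQuery bigQuery = getBigQuery(projectId, location);   // helper from the question
Job job = bigQuery.getJob(JobId.newBuilder()
    .setJob("myJobId").setLocation(location).setProject(projectId).build());

// Only read the query plan once the job has finished; while it is still RUNNING
// the plan is not fully populated and iterating over it can trigger the NPE.
if (job != null && job.getStatus().getState() == JobStatus.State.DONE) {
    JobStatistics.QueryStatistics stats = job.getStatistics();
    List<QueryStage> queryPlan = stats.getQueryPlan();
    if (queryPlan != null) {
        queryPlan.forEach(stage -> System.out.println(stage.getName()));
    }
}
Alternatively, job.waitFor() blocks until the job has completed and returns the finished Job, after which the statistics should be fully populated.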

JBeret - retryable exception classes working?

Is there a way to see in the log that the retry is happening? I need to know that this is working in our test environment before implementing it in production.
There are rare instances when we get the following error, due to a portion of the key being a timestamp and data coming into the table from various sources. We need to have the writer retry when we get a DB2 SQL error: SQLCODE=-803, SQLSTATE=23505.
<chunk>
    ...
    <retryable-exception-classes>
        <include class="com.ibm.db2.jcc.am.SqlIntegrityConstraintViolationException"></include>
    </retryable-exception-classes>
</chunk>
JBeret does not log these events, but you can implement some of the listeners defined by the batch spec to act on your own, for example RetryReadListener, RetryWriteListener, or RetryProcessListener.
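As a minimal sketch (the class name and logger are illustrative; the package is javax.batch here, but newer JBeret/Jakarta Batch versions use jakarta.batch instead), a RetryWriteListener that simply logs every retried write could look like this:
import java.util.List;
import java.util.logging.Logger;
import javax.batch.api.chunk.listener.RetryWriteListener;
import javax.inject.Named;

// Hypothetical listener that logs each retried write so the retry shows up in the server log.
// Reference it in the job XML inside the step, e.g.
// <listeners><listener ref="loggingRetryWriteListener"/></listeners>
@Named("loggingRetryWriteListener")
public class LoggingRetryWriteListener implements RetryWriteListener {

    private static final Logger LOG = Logger.getLogger(LoggingRetryWriteListener.class.getName());

    @Override
    public void onRetryWriteException(List<Object> items, Exception ex) throws Exception {
        // Invoked when the writer throws an exception listed in <retryable-exception-classes>;
        // the chunk is then rolled back and retried.
        LOG.warning("Retrying chunk write of " + items.size() + " item(s) after: " + ex);
    }
}
A RetryReadListener or RetryProcessListener works the same way for the reader and processor sides.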

Should I be concerned about datastoreRpcErrors?

When I run Dataflow jobs that write to Google Cloud Datastore, I sometimes see the metrics show that I had one or two datastoreRpcErrors:
Since these Datastore writes usually contain a batch of keys, I am wondering whether, in the case of an RpcError, some retry will happen automatically. If not, what would be a good way to handle these cases?
tl;dr: By default datastoreRpcErrors will use 5 retries automatically.
I dug into the code of datastoreio in the Beam Python SDK. It looks like the final entity mutations are flushed in batches via DatastoreWriteFn().
# Flush the current batch of mutations to Cloud Datastore.
_, latency_ms = helper.write_mutations(
    self._datastore, self._project, self._mutations,
    self._throttler, self._update_rpc_stats,
    throttle_delay=_Mutate._WRITE_BATCH_TARGET_LATENCY_MS/1000)
The RPCError is caught by this block of code in write_mutations in the helper; there is a @retry.with_exponential_backoff decorator on the commit method, the default number of retries is set to 5, and retry_on_rpc_error defines the concrete RPCError and SocketError reasons that trigger a retry.
for mutation in mutations:
    commit_request.mutations.add().CopyFrom(mutation)

@retry.with_exponential_backoff(num_retries=5,
                                retry_filter=retry_on_rpc_error)
def commit(request):
    # Client-side throttling.
    while throttler.throttle_request(time.time() * 1000):
        ...
    try:
        response = datastore.commit(request)
        ...
    except (RPCError, SocketError):
        if rpc_stats_callback:
            rpc_stats_callback(errors=1)
        raise
    ...
I think you should first of all determine which kind of error occurred in order to see what your options are.
In the official Datastore documentation, there is a list of all the possible errors and their error codes. Fortunately, each comes with a recommended action.
My advice is that you implement their recommendations and look for alternatives if they are not effective for you.

Error: This operation is not allowed when there are no records displayed. Please execute a query that returns

I have added the following code in the WebApplet_Load event of the Service Request Applet. It gives me the above error when I try to open the SR screen from the application.
try
{
   var currBC = this.BusComp();
   with (currBC)
   {
      ActivateField("Restrict_drop_down");
      ClearToQuery();
      //BC.SetViewMode(3);;
      TheApplication.SetProfilAttr("SR Type", GetFieldValue("Restrict_drop_down"));
      ExecuteQuery(ForwardBackward);
   }
}
catch (e)
{
   TheApplication().RaiseErrorText(e.errText);
}
Any idea on how to solve the issue?
You cannot do a GetFieldValue when the BC is in query mode. You have just done a ClearToQuery, so you have to execute the query first, check FirstRecord(), and only then do a GetFieldValue().
Also, during the WebApplet load the first BC query has not finished running, so this might not be the best place for this code.
Please check with a Siebel expert on your team; this kind of code needs to be placed carefully.

Getting error while running EIM job

The EIM job is erroring out while running. Below is my IFB file:
"[Siebel Interface Manager]
USER NAME = 'SADMIN'
PASSWORD = 'SADMIN'
PROCESS = "PROCESS UPDATE"
[PROCESS UPDATE]
TYPE = IMPORT
BATCH = 30032012 - 30032015
TABLE = EIM_FN_ASSET5
INSERT ROWS = S_ASSET_CON, FALSE
UPDATE ROWS = S_ASSET_CON, TRUE
ONLY BASE TABLES = S_ASSET_CON
ONLY BASE COLUMNS = S_ASSET_CON.ATTRIB_37,S_ASSET_CON.ATTRIB_38,S_ASSET_CON.ATTRIB_50,S_ASSET_CON.ASSET_ID,S_ASSET_CON.CONTACT_ID,\
S_ASSET_CON.RELATION_TYPE_CD
In the application, it shows the error:
"SBL-EIM-00426: All batches in run failed."
I have placed the IFB in the admin folder itself, and below is the log file:
"2021 2012-04-03 05:35:25 2012-04-03 05:35:25 -0500 00000002 001 003f 0001 09 srvrmgr 16187618 1 /004fs02/siebel/siebsrvr/log/srvrmgr.log 8.1.1.4 [21225] ENU
SisnapiLayerLog Error 1 0000000c4f7a00e2:0 2012-04-03 05:35:25 258: [SISNAPI] Async Thread: connection (0x204ec5b0), error (1180682) while reading message"
Kindly help.
Async Thread: connection (0x204ec5b0), error (1180682) while reading message
This happens when an object manager loses the connection to the gateway. There can be many reasons for this: the gateway was restarted without bouncing the app server, network issues, etc.
However, this is the error in your Server Manager session, not in the EIM session (batch component). For each EIM job that you start (via server manager) you should see a corresponding EIM task. The best approach is to look at the error in the EIMxxxx.log file. You can also debug your EIM task by setting event log levels:
change evtloglvl %=3 for comp EIM
(set detailed logging)
start task ......
(run your EIM job)
list active tasks for comp EIM
(you should see the job running)
list tasks for comp EIM
(or you can see the list of jobs)
change evtloglvl %=1 for comp EIM
(use this line to set the log levels back to "normal")
This will give you some detailed info on what the EIM component is doing. Note: use a small batch, or your log will be too big to manage.
If you have connection errors and recently lost your DB connection, it is best to completely restart the Siebel servers and the gateway in the correct order.
Have you tried re-running the EIM job?
If the scenario continues even after the second run, please check the batch number you have given in the IFB file against the batch numbers given in the input data file for the EIM component, as from the error it seems that the EIM component is not able to fetch the data.
SBL-SVR-01042 is a generic error encountered while attempting to instantiate a new instance of a given component. As to why the error occurred, one needs to review the accompanying error messages, which provide context and more detailed information.
You can ignore the SisnapiLayerLog error. It is a generic error and does not have any significance here.
You should concentrate on SBL-EIM-00426. Before running the task, check whether there are any records in your EIM table; this error comes when you have zero records in the interface table. You should also increase the log level and try to trace the error. There is also a fix released by Oracle; refer to Oracle Support for the same.
https://support.oracle.com/epmos/faces/BugDisplay?parent=DOCUMENT&sourceId=498041.1&id=10469733
I have edited the IFB file a little bit and it worked for me.
Can you please try the code below and let me know?
[Siebel Interface Manager]
USER NAME = 'SADMIN'
PASSWORD = 'SADMIN'
PROCESS = "PROCESS UPDATE"
[PROCESS UPDATE]
TYPE = SHELL
INCLUDE = "Update Records"
[Update Records]
TYPE = IMPORT
BATCH = 30032012 - 30032015
TABLE = EIM_FN_ASSET5
INSERT ROWS = S_ASSET_CON, FALSE
UPDATE ROWS = S_ASSET_CON, TRUE
ONLY BASE TABLES = S_ASSET_CON
ONLY BASE COLUMNS = S_ASSET_CON.ATTRIB_37 \
,S_ASSET_CON.ATTRIB_38 \
,S_ASSET_CON.ATTRIB_50 \
,S_ASSET_CON.ASSET_ID \
,S_ASSET_CON.CONTACT_ID \
,S_ASSET_CON.RELATION_TYPE_CD
Hope this helps!