I am using a Transaction Control transformation for all pipelines in the mapping, but it still throws this error: "The Target Definition has more than one Transaction Control point connected to it". Any idea why this error occurs?
Thank you
Actually, this happens when you have a scenario like the one below:
Src1 - SQ1 - EXP1 - TGT1
Src2 - SQ2 - EXP2 - TC - TGT2
Solution:
You need to add a Transaction Control transformation to the other pipeline as well, with its Transaction Control Condition always evaluating to TC_CONTINUE_TRANSACTION (for example, IIF(FALSE, TC_COMMIT_BEFORE, TC_CONTINUE_TRANSACTION)).
In other words, make it like this:
Src1 - SQ1 - EXP1 - TC (TC_CONTINUE_TRANSACTION) - TGT1
Src2 - SQ2 - EXP2 - TC - TGT2
While migrating my Front Door from the azure package to the azure-native package, I am facing a strange error message that I cannot make sense of:
azure-native:network:FrontDoor (frontDoor):
error: Code="BadRequest" Message="Frontdoor location must be global."
I took the example at https://www.pulumi.com/registry/packages/azure-native/api-docs/network/frontdoor/ almost one-to-one; I only changed the subscription ID and the resource group.
For the record, I am migrating to the azure-native package because 1) it is the recommended approach and 2) I want to add a WAF policy, which I was not able to do with the azure.network package.
Does that ring a bell?
Actually, the location must be set specifically to "global". Something like:
location: "global",
I did not know of this location and it is not one of the values in the location enumeration.
I want to create the following state machine, with Boost MSM:
I would like to be able to prevent the Error event from triggering the AllOk + Error == InError transition if the orthogonal region is in "B". For example, being able to specify the transition in terms of all the orthogonal states would be nice. Something like:
{AllOk, B} + Error == {AllOk, A}
However, I cannot find how to do this with Boost MSM, nor with regular UML nomenclature, which makes me think I am going about it the wrong way.
Is there a classic, UML-idiomatic way to handle this kind of behavior?
I see two possible solutions:
Put a guard on AllOk + Error == InError which checks whether the other region is in B, like this response.
Send a more specific error (in my case CouldNotComputePath, as I am programming a robot), and somehow transform it into Error if it is not handled. I am not really sure how to do that.
OK, I found a solution:
The Error event can be "caught" in the MainStateMachine. If it is not, an internal transition is triggered on the MainStateMachine, which sends the EnterError event to make the other orthogonal region switch to InError.
Is there a way to see in the log that the retry is happening? I need to know that this is working in our test environment before rolling it out to production.
There are rare instances when we get the following error, because a portion of the key is a timestamp and data comes into the table from various sources. We need the writer to retry when we get: DB2 SQL Error: SQLCODE=-803, SQLSTATE=23505
<chunk>
...
<retryable-exception-classes>
<include class="com.ibm.db2.jcc.am.SqlIntegrityConstraintViolationException"></include>
</retryable-exception-classes>
</chunk>
JBeret does not log these events, but you can implement the listeners defined by the batch spec to act on your own, for example RetryReadListener, RetryWriteListener, or RetryProcessListener.
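For example, a RetryWriteListener that just logs each retry could look like the minimal sketch below (assuming the javax.batch API; newer JBeret/Jakarta Batch versions use the jakarta.batch packages instead, and the class and bean names here are made up):

import java.util.List;
import java.util.logging.Logger;
import javax.batch.api.chunk.listener.RetryWriteListener;
import javax.inject.Named;

@Named("retryLoggingListener")
public class RetryLoggingListener implements RetryWriteListener {

    private static final Logger LOG = Logger.getLogger(RetryLoggingListener.class.getName());

    // Called by the batch runtime when a retryable exception is thrown while
    // writing a chunk, so this is where the retry becomes visible in the log.
    @Override
    public void onRetryWriteException(List<Object> items, Exception ex) throws Exception {
        LOG.warning("Retrying write of " + items.size() + " item(s) after: " + ex);
    }
}

Register it on the step in the job XML, next to the chunk element, for example <listeners><listener ref="retryLoggingListener"/></listeners>.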
I'm building my first project in Watson Studio and a Data Refinery Job fails with the following error:
ERROR: Failed to execute the flow. Error: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 2.0 failed 1 times, most recent failure: Lost task 0.0 in stage 2.0 (TID 2, localhost, executor driver): com.ibm.connect.api.SCAPIException: CDICO2060E: The metadata for the select statement could not be retrieved Sql syntax error: THE DATA TYPE, LENGTH, OR VALUE OF ARGUMENT 1 OF RID IS INVALID. SQLCODE=-171
The SQL it's executing contains this: FROM "SCHEMA"."VIEW_NAME_A" WHERE MOD(COALESCE(RID("SCHEMA"."VIEW_NAME_A"), 0), 3) = 0
The job was built from a DB2 for z/OS connection --> Connected Data object --> Data Refinery flow; once the flow looked good, it was saved and a job was created, which then failed on execution. SCHEMA.VIEW_NAME_A is a view built from a complex query joining two or more tables.
I have another Data Refinery flow for a simpler view, whose job (created the same way) runs successfully. The query for that view involves only one table.
I don't quite understand why Watson Studio built the query for the job run with this WHERE clause, and I can't find anything about it.
Does someone have an idea how to fix or work around this issue?
Watson Studio extracts the source data using multiple queries that partition the data, and that WHERE clause came from its partitioning algorithm. Apparently its partitioning strategy for z/OS does not work properly when the source is a complex view. I apologize for the inconvenience and cannot think of a suitable workaround. We will fix the issue as soon as possible.
I have to create a consumer proxy in SAP. The proxy generation is OK (or at least no errors were reported), but when I try to consume the proxy (SE80), I get the following error:
SOAP:1.027 SRT: Serialization / Deserialization failed
System expected a value for the type g.
If I continue, I get the response, but when I try to call the customer service in a report, I get the error and I can't continue.
In a report, when I try to consume the proxy using this code, I get the same error and no response:
CREATE OBJECT proxy
  EXPORTING
    logical_port_name = 'LOGICAL_01'.

CALL METHOD proxy->proccess_check_status_invoice
  EXPORTING
    process_check_status_invoice   = input
  IMPORTING
    process_check_status_invoice_r = output.
How can I solve this error?
Thanks,
Please use transaction SRT_UTIL and check the execution error with a trace of that proxy. The error log will specify which fields and values are not allowed during the transformation.
"SOAP:1.027 SRT: Serialization / Deserialization failed" errors are due to incompatible data types; in my experience it is most often dates, since the ABAP date format and the standard differ and must be transformed.
Type g is usually the type-kind constant for STRING. My guess is that you are binding values that are CHARs instead of the STRING data type.