Informatica MDM: Failing to clean inter-op lock records

I am trying to call the PUT method from my Java service to BES. This works, but it is taking ~10 seconds to get the response. I am seeing the error below in the logs and am wondering whether it could be causing the delay in getting the response.
[default task-28] [ERROR] com.informatica.mdm.api.AbstractSifCall: failing to clean inter-op lock records
java.lang.NullPointerException
at com.informatica.mdm.api.put.Put.releaseLocks(Put.java:303)
This is what my REST call looks like:
HTTP PUT
http://infmdm:8080/cmx/cs/ecm-ECM_ORS/PartyContact/4960299.json?systemName=CRM
JSON body:
{
  "firstName": "James"
}
Please advise.
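For reference, the call above could be issued from a Java service roughly as follows. This is only a minimal sketch using the JDK's built-in java.net.http client; the question does not show the actual client code, and any authentication headers are omitted:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class BesPutExample {
    public static void main(String[] args) throws Exception {
        // BES REST endpoint and body exactly as shown in the question
        String url = "http://infmdm:8080/cmx/cs/ecm-ECM_ORS/PartyContact/4960299.json?systemName=CRM";
        String body = "{\"firstName\": \"James\"}";

        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(URI.create(url))
                .header("Content-Type", "application/json")
                .PUT(HttpRequest.BodyPublishers.ofString(body))
                .build();

        // The ~10 second delay described above would show up here, while waiting for the response
        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}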

Related

Unable to start amplify mock InternalFailure: The request processing has failed because of an unknown error, exception or failure

I have just tried to start my Amplify mock service and received the following error:
InternalFailure: The request processing has failed because of an unknown error, exception or failure.
This previously worked a few hours ago, with no resets or other changes.
To fix this, I did have success with removing Amplify completely and doing amplify init & amplify add api, but this means I lose my local data each time, and the error has recurred randomly multiple times in the last few hours.
Here is the full log while the error is taking place:
hutber#hutber:/var/www/unsal.co.uk$ amplify mock
GraphQL schema compiled successfully.
Edit your schema at /var/www/unsal.co.uk/amplify/backend/api/unsalcouk/schema.graphql or place .graphql files in a directory at /var/www/unsal.co.uk/amplify/backend/api/unsalcouk/schema
Failed to start API Mock endpoint InternalFailure
The problem probably comes from the SQLite file used for the mock (lock issues, I guess). Delete the your_db_file.db file in the mock-data/dynamodb folder and run amplify mock api again. The file recreates itself correctly. This avoids resetting the whole Amplify project.

SourceHandler Writer null when calling informWriterError, fails to work

I deployed an API in EI. The API's main logic transforms a REST request into a SOAP request and then calls a SOAP-based endpoint. Afterwards I use a data mapper to convert the response message into the format I need.
But sometimes the endpoint goes wrong and responds with an error message, which makes the data mapper parse incorrectly and sends the main process to the default fault sequence (I put a Respond mediator in the fault sequence). Then I get the error log "SourceHandler Writer null when calling informWriterError".
After the scenario occurred again and again (about 200 times in 5 minutes), the EI could not handle requests any longer.
How can I deal with it?
I used version 6.1.1 before; there is a bug: https://github.com/wso2/product-ei/issues/650.
After upgrading to the 6.4.0 release, everything is OK now.

Random SSL handshake failure when pulling a file from an Amazon S3 bucket

I have a specific fetch request in my Node app which just pulls a JSON file from my S3 bucket and stores the content in the state of my app.
The fetch request works 99% of the time, but for some reason, about every 4 or 5 days I get a notification saying the app has crashed, and when I investigate, the reason is always this SSL handshake failure.
I am trying to figure out why this is happening, as well as a fix to prevent it in future cases.
The fetch request looks like the following and is called every time someone new visits the site. Once the request has been made and the JSON is in the app's state, the request is no longer called.
function grabPreParsedContentFromS3 (prefix, callback) {
  fetch(`https://s3-ap-southeast-2.amazonaws.com/my-bucket/${prefix}.json`)
    .then(res => res.json())
    .then(res => callback(res[prefix]))
    .catch(e => console.log('Error fetching data from s3: ', e))
}
When this error happens, the .catch method gets triggered and returns the following error message:
Error fetching data from s3: {
FetchError: request to https://s3-ap-southeast-2.amazonaws.com/my-bucket/services.json failed,
reason: write EPROTO 139797093521280:error:1409E0E5:SSL routines:ssl3_write_bytes:ssl handshake failure:../deps/openssl/openssl/ssl/s3_pkt.c:659:
...
}
Has anyone encountered this kind of issue before, or does anyone have any idea why this might be happening? Currently I am wondering if maybe there is a limit to the number of S3 requests I can make at one time, which is causing it to fail? But the site isn't popular enough to be making a huge number of fetches either.

ORA-29270: too many open HTTP requests while calling webservice from pl/sql procedure

I am invoking a web service from a PL/SQL procedure. After executing the procedure 5 times, I get the error below; any help will be appreciated.
ORA-29273: HTTP request failed ORA-06512: at "SYS.UTL_HTTP", line 1367
ORA-29270: too many open HTTP requests
You need to close your requests once you are done with them; it does not happen automatically (unless you disconnect from the DB entirely).
The call used to be utl_http.end_response, but I am not sure if it is the same API any more. You can also catch the exception raised when there are too many open requests and close the response there, as below:
EXCEPTION
  WHEN utl_http.too_many_requests THEN
    -- close the open response so the request slot is freed
    utl_http.end_response(resp);
I faced the same problem and solved it by disconnecting from the database and reconnecting.

Catching timeout errors in AWS Api Gateway

Since the API Gateway time limit for executing any request is 10 seconds, I'm trying to deal with timeout errors, but I haven't found a way to catch them and respond with a custom message.
Context of the problem: I have a function that takes less than 2 seconds to execute, but when the function performs a cold start it sometimes takes more than 10 seconds creating a connection with DynamoDB in Java. I've already optimized my function using threads, but I still cannot stay within the 10-second limit for the initial call.
I need to find a way to deliver a response model like this:
{
"error": "timeout"
}
To find a solution, I created a Lambda function that intentionally responds only after 10 seconds of execution. Testing the integration with API Gateway, I get this response:
Request: /example/lazy
Status:
Latency: ms
Response Body
{
"logref": "********-****-****-****-1d49e75b73de",
"message": "Timeout waiting for endpoint response"
}
In the documentation I found that you can catch these errors using an HTTP status regex in the Integration Response, but I haven't found a way to do so, and it seems that nobody else on the Internet is having this problem, as I haven't found this specific message in any forum.
I have tried these regexes:
.*"message".*
Timeout.*
.*"status":400.*
.*"status":404.*
.*"status":504.*
.*"status":500.*
Does anybody know which regex I should use to capture this "message": "Timeout... ?
You are using the Test Invoke feature from the console, which has a timeout limit of 10 seconds. But the deployed API's timeout is 30 seconds, as mentioned here. So that should be good enough to handle the Lambda cold start case. Please deploy and then test using the API link. If it times out because your endpoint takes more than 30 seconds, the response would be:
{"message": "Endpoint request timed out"}
To clarify: you can configure your method response based on the HTTP status code of the integration response. But in the case of a timeout, there is no integration response, so you cannot use that feature to configure the method response for timeouts.
You can improve the cold start time by allocating more memory to your Lambda function. With the default 512MB, I am seeing cold start times of 8-9 seconds for functions written in Java. This improves to 2-3 seconds with 1536MB of memory.
Amazon says that it is the CPU allocation that is really important, but there is no way to increase it directly; CPU allocation increases proportionally with memory.
And if you want close to zero cold start times, keeping the function warm is the way to go, as described here.
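On that last point, one common approach is to have a scheduled event invoke the function periodically and have the handler return early for those pings, so the container stays initialized. A minimal Java sketch, under the assumption that the scheduled rule sends a payload with a made-up "warmup" marker (the rule itself has to be configured separately):

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import java.util.Map;

// Handler that short-circuits on scheduled "keep warm" pings so the container
// stays initialized without running the real business logic.
public class WarmableHandler implements RequestHandler<Map<String, Object>, String> {

    @Override
    public String handleRequest(Map<String, Object> input, Context context) {
        // Hypothetical marker: the scheduled rule would send {"warmup": true}
        if (input != null && Boolean.TRUE.equals(input.get("warmup"))) {
            return "warmed";
        }
        // ... real request handling (e.g. the DynamoDB work) goes here ...
        return "{\"ok\": true}";
    }
}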