My situation:
I created a custom REST web service via the OTRS admin backend:
added a custom invoker under GenericInterface::Invoker::ModuleRegistration
added a new invoker (REST), event trigger: TicketSlaveLinkAdd
configured OTRS as requester: HTTP::REST, Host: http://myhost.com
controller mapping: /LinkAdd/:TicketID
default command: PATCH
My Problem:
System Log returns:
DebugLog error: Summary: HTTP::REST Error while determine Operation for request URI '/LinkAdd'. Data : No data provided.
DebugLog error: Summary: Request could not be processed Data : HTTP::REST Error while determine Operation for request URI '/LinkAdd'..
DebugLog error: Summary: Returning provider data to remote system (HTTP Code: 500) Data : HTTP::REST Error while determine Operation for request URI '/LinkAdd'..
Maybe somebody knows about my problem and can help :)
The "Error while determine Operation" text indicates that OTRS cannot determine the Operation for the incoming URI of the request; please check your data mapping.
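As a language-neutral illustration (this is not OTRS code), the requester must supply a TicketID in the invoker's outgoing data so that the controller mapping /LinkAdd/:TicketID expands to a concrete URI; when the value is missing, the remote side only sees the bare /LinkAdd from the error above:

```python
import re

def expand_controller_mapping(mapping, data):
    """Substitute ':placeholder' segments of an OTRS-style controller
    mapping with values from the outgoing request data."""
    def repl(match):
        key = match.group(1)
        if key not in data:
            raise KeyError(f"outgoing data is missing '{key}'")
        return str(data[key])
    return re.sub(r":(\w+)", repl, mapping)
```

With data containing TicketID this yields /LinkAdd/123; without it, the mapping cannot be resolved, which matches the '/LinkAdd' in the debug log.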
I am executing a workflow in Informatica which is supposed to insert values into a target file.
Some of the records are getting inserted, but I get an error after a few insertions saying:
[Informatica][ODBC PWX Driver] PWX-00267 DBAPI error for file……… Write error on record 119775 Requested 370 SQLSTATE [08S01]
Is this because of constraints on how the record can be structured, or due to some other reason?
I'm not sure if this is exactly the case, but searching for the error code 08S01 I found this site that lists Data Provider Error Codes. Under SQLCODE 370 (assuming this is what your error message indicates) I found:
Message: There are insufficient resources on the target system to
complete the command. Contact your server administrator.
Reason: The resource limits reached reply message indicates that the
server could not be completed due to insufficient server resources
(e.g. memory, lock, buffer).
Action: Verify the connection and command parameters, and then
re-attempt the connection and command request. Review a client network
trace to determine if the server returned a SQL communications area
reply data (SQLCARD) with an optional reason code or other optional
diagnostic information.
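Since the action text above amounts to "verify the parameters and re-attempt", one generic way to handle a transient resource-limit failure is a retry loop with backoff; a minimal sketch (not Informatica-specific — the exception type and delays are placeholders):

```python
import time

def retry_with_backoff(op, attempts=5, base_delay=1.0):
    """Re-attempt a failing operation with exponential backoff, as the
    'insufficient resources' action text suggests re-trying the command."""
    for attempt in range(attempts):
        try:
            return op()
        except RuntimeError:
            if attempt == attempts - 1:
                raise  # out of attempts: surface the original error
            time.sleep(base_delay * (2 ** attempt))
```

In practice the Informatica session-level retry settings serve the same purpose; the sketch only illustrates the shape of the advice.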
Similar question to: AWS Lambda send image file to Amazon Sagemaker
I am trying to make simple-mnist work (the model was built by following an AWS tutorial).
I am using API Gateway (REST API with proxy integration) to POST image data to Lambda, and would like to send it on to the SageMaker endpoint to run inference.
In the Lambda function, I wrote the code (.py) like this:
import boto3

# client for calling the SageMaker endpoint
runtime = boto3.Session().client('sagemaker-runtime')
endpoint_name = 'tensorflow-training-YYYY-mm-dd-...'
res = runtime.invoke_endpoint(EndpointName=endpoint_name,
                              Body=Image,
                              ContentType='image/jpeg',
                              Accept='image/jpeg')
However, when I send an image to Lambda via API Gateway, this error occurs:
[ERROR] ModelError: An error occurred (ModelError) when calling the
InvokeEndpoint operation: Received client error (415) from model with
message " {
"error": "Unsupported Media Type: image/jpeg" }
I think I need to do something along the lines of Working with binary media types for REST APIs.
But since I am very new, I have no idea what the appropriate thing to do is, where to do it (maybe on the API Gateway page?), or how.
I need some clues to solve this problem. Thank you in advance.
Looking here you can see that only some specific content types are supported by default, and images are not in this list. I think you have to either implement your input_fn function or adapt your data to one of the supported content types.
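To make the two suggestions concrete, here is a minimal sketch (assuming an API Gateway proxy event and a TensorFlow Serving endpoint, which accepts application/json by default; the 28x28 grid and helper names are illustrative, and the actual image decoding is left out):

```python
import base64
import json

def decode_body(event):
    """Recover raw image bytes from an API Gateway proxy event; with
    binary media types enabled, the body arrives base64-encoded and
    the event carries isBase64Encoded=True."""
    body = event["body"]
    if event.get("isBase64Encoded"):
        return base64.b64decode(body)
    return body.encode() if isinstance(body, str) else body

def mnist_payload(pixels):
    """Wrap a 28x28 grid of floats into the JSON body that a
    TensorFlow Serving endpoint on SageMaker accepts by default."""
    return json.dumps({"instances": [pixels]})
```

The JSON string can then be passed to invoke_endpoint with ContentType='application/json' instead of image/jpeg.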
I am facing an issue while invoking a PyTorch model endpoint. Please see the error below for details.
Error Message:
An error occurred (InternalFailure) when calling the InvokeEndpoint operation (reached max retries: 4): An exception occurred while sending request to model. Please contact customer support regarding request 9d4f143b-497f-47ce-9d45-88c697c4b0c4.
The endpoint automatically restarted after this error. There is no specific log in CloudWatch.
There may be a few issues here; let's explore the possible causes and ways to resolve them.
Inference Code Error
Sometimes these errors occur when your payload or what you're feeding your endpoint is not in the appropriate format. When invoking the endpoint you want to make sure your data is in the correct format/encoded properly. For this you can use the serializer SageMaker provides when creating the endpoint. The serializer takes care of encoding for you and sends data in the appropriate format. Look at the following code snippet.
from sagemaker.predictor import csv_serializer  # SageMaker Python SDK v1; in v2 use sagemaker.serializers.CSVSerializer
rf_pred = rf.deploy(1, "ml.m4.xlarge", serializer=csv_serializer)  # deploy with the serializer attached
print(rf_pred.predict(payload).decode('utf-8'))  # payload is encoded as CSV before being sent
For more information about the different serializers, based on the type of data you are feeding in, check the following link:
https://sagemaker.readthedocs.io/en/stable/api/inference/serializers.html
Throttling Limits Reached
Sometimes the payload you are feeding in may be too large, or the API request rate for the endpoint may have been exceeded, so experiment with a more compute-heavy instance or increase the retries in your boto3 configuration. Here is a link with an example of what retries are and how to configure them for your endpoint:
https://aws.amazon.com/premiumsupport/knowledge-center/sagemaker-python-throttlingexception/
I work for AWS & my opinions are my own
I have a workflow that contains a bunch of activities. I store each activity's response in an S3 bucket.
I pass the S3 key as an input to each activity. Inside the activity, a method retrieves the data from S3 and performs some operation. But my last activity failed and threw this error:
Caused by: com.amazonaws.AmazonServiceException: Request entity too large (Service: AmazonSimpleWorkflow; Status Code: 413; Error Code: Request entity too large; Request ID: null)
at com.amazonaws.http.AmazonHttpClient.handleErrorResponse(AmazonHttpClient.java:820)
at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:439)
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:245)
at com.amazonaws.services.simpleworkflow.AmazonSimpleWorkflowClient.invoke(AmazonSimpleWorkflowClient.java:3173)
at com.amazonaws.services.simpleworkflow.AmazonSimpleWorkflowClient.respondActivityTaskFailed(AmazonSimpleWorkflowClient.java:2878)
at com.amazonaws.services.simpleworkflow.flow.worker.SynchronousActivityTaskPoller.respondActivityTaskFailed(SynchronousActivityTaskPoller.java:255)
at com.amazonaws.services.simpleworkflow.flow.worker.SynchronousActivityTaskPoller.respondActivityTaskFailedWithRetry(SynchronousActivityTaskPoller.java:246)
at com.amazonaws.services.simpleworkflow.flow.worker.SynchronousActivityTaskPoller.execute(SynchronousActivityTaskPoller.java:208)
at com.amazonaws.services.simpleworkflow.flow.worker.ActivityTaskPoller$1.run(ActivityTaskPoller.java:97)
... 3 more
I know AWS SWF has some limits on data size, but I am only passing an S3 key to the activity. Inside the activity, it reads from S3 and processes the data. I am not sure why I am getting this error. If anyone knows, please help! Thanks a lot!
Your activity failed, and the respondActivityTaskFailed SWF API call appears in the stack trace. So my guess is that the exception message plus stack trace exceeded the maximum size allowed by the SWF service.
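If that is the case, one option is to clip the failure fields before reporting them. A sketch (the 256/32768 limits are SWF's documented maximums for the reason and details fields of RespondActivityTaskFailed):

```python
# SWF caps RespondActivityTaskFailed's 'reason' at 256 characters and
# 'details' at 32768; an oversized stack trace makes the call itself fail.
MAX_REASON = 256
MAX_DETAILS = 32768

def truncate_failure(reason, details):
    """Clip failure metadata to SWF's documented field limits before
    reporting an activity failure."""
    return reason[:MAX_REASON], details[:MAX_DETAILS]
```

The truncated values can then be passed to the SWF client's respond_activity_task_failed call; with the Flow framework, the same idea applies by trimming the exception before the framework serializes it.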
I am consuming the webservice https://www.uat.p20.experian.nl/WS_SDPGateway/sdpgateway.asmx?wsdl.
I am getting an error:
ERROR
ERROR 2015-09-21 23:08:04,789 [[experian_spd_sandbox].HTTP_8044.worker.01] org.mule.exception.DefaultMessagingExceptionStrategy:
Message : COULD_NOT_READ_XML_STREAM. Failed to route event via endpoint: org.mule.module.cxf.CxfOutboundMessageProcessor. Message payload is of type: byte[]
Type : org.mule.api.transport.DispatchException
Code : MULE_ERROR--2
Payload : [B#df32cd7
JavaDoc :
Exception stack is:
Unexpected character '>' (code 62) expected '='
at [row,col {unknown-source}]: [7,21] (com.ctc.wstx.exc.WstxUnexpectedCharException)
com.ctc.wstx.sr.StreamScanner:647 (null)
COULD_NOT_READ_XML_STREAM (org.apache.cxf.interceptor.Fault)
org.apache.cxf.databinding.stax.StaxDataBinding$XMLStreamDataWriter:151 (null)
COULD_NOT_READ_XML_STREAM. Failed to route event via endpoint: org.mule.module.cxf.CxfOutboundMessageProcessor. Message payload is of type: byte[] (org.mule.api.transport.DispatchException)
org.mule.module.cxf.CxfOutboundMessageProcessor:163
My goal is just to fix the communication and ensure the webservice can be consumed properly from Mule.
"Failed to route event via endpoint" sounds more like a network error than a parse error.
Can you try to curl the WSDL from the box that is executing the Mule flow?
Found it:
After all, it was a mistake in the XML message itself. In addition, I had to remove the metadata I had added during troubleshooting. So in case you run into the same thing, just go back to basics and first validate the XML message ;-)
Thanks all for getting me there.
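For anyone hitting the same Woodstox error, a quick well-formedness check on the outgoing payload catches mistakes like an unclosed tag or a stray character before Mule ever sends the message; a sketch using only the Python standard library (the sample strings in the usage are illustrative):

```python
import xml.etree.ElementTree as ET

def is_well_formed(xml_text):
    """Quick well-formedness check for an outgoing XML payload; errors
    like "Unexpected character '>' ... expected '='" typically mean the
    message is malformed at the reported row/column."""
    try:
        ET.fromstring(xml_text)
        return True
    except ET.ParseError as err:
        print(f"not well-formed: {err}")
        return False
```

For example, is_well_formed("&lt;a&gt;&lt;b/&gt;&lt;/a&gt;") passes, while a broken opening tag such as "&lt;a &lt;b&gt;&lt;/a&gt;" is rejected with the parser's row/column position.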